EP3249648A1 - Method and apparatus for switching speech or audio signals - Google Patents

Method and apparatus for switching speech or audio signals

Info

Publication number
EP3249648A1
EP3249648A1
Authority
EP
European Patent Office
Prior art keywords
frequency band
weight
band signal
speech
high frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP17151713.9A
Other languages
English (en)
French (fr)
Other versions
EP3249648B1 (de)
Inventor
Zexin Liu
Lei Miao
Chen Hu
Wenhai Wu
Yue Lang
Qing Zhang
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP3249648A1
Application granted
Publication of EP3249648B1
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16: Vocoder architecture
    • G10L19/18: Vocoders using multiple modes
    • G10L19/24: Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
    • G10L19/08: Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/12: Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters, the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • G10L21/00: Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/038: Speech enhancement, e.g. noise reduction or echo cancellation, using band spreading techniques

Definitions

  • the present invention relates to communication technologies, and in particular, to a method and an apparatus for switching speech or audio signals.
  • the network may intercept the bit stream of the speech or audio signals transmitted from an encoder to the network with different bit rates, so that the decoder may decode the speech or audio signals with different bandwidths from the intercepted bit stream.
  • bidirectional switching between a narrow frequency band speech or audio signal and a wide frequency band speech or audio signal may occur during the process of transmitting speech or audio signals.
  • the narrow frequency band signal is switched to a wide frequency band signal with only a low frequency band component through up-sampling and low-pass filtering; the wide frequency band speech or audio signal includes both a low frequency band signal component and a high frequency band signal component.
  • the inventor discovers at least the following problems in the prior art: because high frequency band signal information is available in wide frequency band speech or audio signals but is absent in narrow frequency band speech or audio signals, when speech or audio signals with different bandwidths are switched, an energy jump may occur in the speech or audio signals, resulting in an unpleasant listening experience and thus reducing the quality of the audio signals received by the user.
  • Embodiments of the present invention provide a method and an apparatus for switching speech or audio signals to smoothly switch speech or audio signals between different bandwidths, thereby improving the quality of audio signals received by a user.
  • a method for switching speech or audio signals includes:
  • An apparatus for switching speech or audio signals includes:
  • the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal; the processed first high frequency band signal and the first low frequency band signal are synthesized into a wide frequency band signal.
  • these speech or audio signals can be smoothly switched, thus reducing the adverse impact of the energy jump on the subjective audio quality of the speech or audio signals and improving the quality of the speech or audio signals received by the user.
  • FIG. 1 is a flowchart of the first embodiment of a method for switching speech or audio signals. As shown in FIG. 1, by using the method for switching speech or audio signals, when a switching of a speech or audio signal occurs, each frame after the switching frame is processed according to the following steps:
  • the previous M frames of speech or audio signals refer to the M frames of speech or audio signals before the current frame.
  • the L frames of speech or audio signals before the switching refer to the L frames of speech or audio signals before the switching frame when a switching of a speech or audio signal occurs. If the current speech frame is a wide frequency band signal but the previous speech frame is a narrow frequency band signal, or if the current speech frame is a narrow frequency band signal but the previous speech frame is a wide frequency band signal, the speech or audio signal is switched and the current speech frame is the switching frame.
  • the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal.
  • the high frequency band signal of these speech or audio signals can be smoothly switched.
  • the processed first high frequency band signal and the first low frequency band signal are synthesized into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
  • FIG. 2 is a flowchart of the second embodiment of the method for switching speech or audio signals. As shown in FIG. 2 , the method includes the following steps:
  • the first frequency band speech or audio signal in this embodiment may be a wide frequency band speech or audio signal or a narrow frequency band speech or audio signal.
  • the operation may be executed according to the following two cases:
    1. If the first frequency band speech or audio signal is a wide frequency band speech or audio signal, the low frequency band signal and the high frequency band signal of the wide frequency band speech or audio signal are synthesized into a wide frequency band signal.
    2. If the first frequency band speech or audio signal is a narrow frequency band speech or audio signal, the low frequency band signal and the high frequency band signal of the narrow frequency band speech or audio signal are synthesized into a wide frequency band signal. In this case, although the signal is a wide frequency band signal, the high frequency band is null.
  • Step 201 When the speech or audio signal is switched, weight the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frame of speech or audio signals to obtain a processed first high frequency band signal.
  • M is greater than or equal to 1.
  • the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal.
  • the wide frequency band speech or audio signal is switched to the narrow frequency band speech or audio signal, because the high frequency band signal information corresponding to the narrow frequency band speech or audio signal is null, the component of the high frequency band signal corresponding to the narrow frequency band speech or audio signal needs to be restored to enable the wide frequency band speech or audio signal to be smoothly switched to the narrow frequency band speech or audio signal.
  • the narrow frequency band speech or audio signal is switched to the wide frequency band speech or audio signal
  • the high frequency band signal of the wide frequency band speech or audio signal is not null
  • the energy of the high frequency band signals of consecutive multiple-frame wide frequency band speech or audio signals after the switching must be weakened to enable the narrow frequency band speech or audio signal to be smoothly switched to the wide frequency band speech or audio signal, so that the high frequency band signal of the wide frequency band speech or audio signal is gradually switched to a real high frequency band signal.
  • the first high frequency band signal and the second high frequency band signal of the previous M frame of speech or audio signals may be directly weighted. The weighted result is the processed first high frequency band signal.
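The direct weighting described above can be sketched as a cross-fade between the stored high band of the previous frame and the high band of the current frame. This is a minimal illustration only: the frame length, the linear fade, and the function name are assumptions, not the patent's exact weighting scheme.

```python
import numpy as np

def smooth_high_band(cur_high: np.ndarray, prev_high: np.ndarray) -> np.ndarray:
    """Weight the current frame's high band with the previous frame's
    high band so that the high-band energy changes gradually across
    the switch (illustrative linear fade)."""
    n = len(cur_high)
    w_prev = np.linspace(1.0, 0.0, n)  # previous-frame weight fades out
    w_cur = 1.0 - w_prev               # current-frame weight fades in
    return w_prev * prev_high + w_cur * cur_high
```

The weighted result is then used as the processed first high frequency band signal; in the narrow-to-wide direction the same idea weakens the real high band of the frames just after the switch.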
  • Step 202 Synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
  • the second high frequency band signal of the previous M frame of speech or audio signals can be smoothly switched to the processed first high frequency band signal of the current frame; then, in step 202, the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal are synthesized into a wide frequency band signal, so that the speech or audio signals received by the user are always wide frequency band speech or audio signals. In this way, speech or audio signals with different bandwidths are smoothly switched, which helps improve the quality of audio signals received by the user.
  • speech or audio signals with different bandwidths can be switched smoothly, thus reducing the impact of the sudden energy change on the subjective audio quality of the speech or audio signals and improving the quality of audio signals received by the user.
  • the first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal are synthesized into a wide frequency band signal, so that the user can obtain high quality audio signal.
  • when a switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal occurs, step 201 includes the following steps:
  • the speech or audio signal may be divided into fine structure information and envelope information, so that the speech or audio signal can be restored according to the fine structure information and envelope information.
  • the high frequency band signal needed by the current narrow frequency band speech or audio signal needs to be restored so as to implement smooth switching between speech or audio signals.
  • the fine structure information and envelope information corresponding to the first high frequency band signal of the narrow frequency band speech or audio signal are predicted.
  • the first low frequency band signal of the current frame of speech or audio signal may be classified in step 301, and then the predicted fine structure information and envelope information corresponding to the first high frequency band signal are predicted according to the signal type of the first low frequency band signal.
  • the narrow frequency band speech or audio signal of the current frame may be a harmonic signal, or a non-harmonic signal or a transient signal.
  • the fine structure information and envelope information corresponding to the type of the narrow frequency band speech or audio signal can be obtained, so that the fine structure information and envelope information corresponding to the high frequency band signal can be predicted more accurately.
  • the method for switching speech or audio signals in this embodiment does not limit the signal type of the narrow frequency band speech or audio signal.
  • Step 302 Weight the predicted envelope information and the previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals to obtain first envelope information corresponding to the first high frequency band signal.
  • the first envelope information corresponding to the first high frequency band signal may be generated according to the predicted envelope information and the previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals.
  • the process of generating the first envelope information corresponding to the first high frequency band signal in step 302 may be implemented by using the following two modes:
  • the first low frequency band signal of the current frame of speech or audio signal is compared with the low frequency band signal of the previous N frame of speech or audio signals to obtain a correlation coefficient between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous N frame of speech or audio signals.
  • the correlation between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous N frame of speech or audio signals may be determined by judging the difference between a frequency band of the first low frequency band signal of the current frame of speech or audio signal and the same frequency band of the low frequency band signal of the previous N frame of speech or audio signals in terms of the energy size or the information type, so that the desired correlation coefficient can be calculated.
  • the previous N frame of speech or audio signals may be narrow frequency band speech or audio signals, wide frequency band speech or audio signals, or hybrid signals of narrow frequency band speech or audio signals and wide frequency band speech or audio signals.
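One simple way to realise the energy comparison described above is an energy-ratio correlation between the two low bands; the ratio form and the function name below are assumptions for illustration, not the patent's prescribed formula.

```python
import numpy as np

def band_correlation(cur_low: np.ndarray, prev_low: np.ndarray) -> float:
    """Energy-based correlation between the current frame's low band and
    the previous frames' low band: close to 1 when the energies match,
    close to 0 when they differ greatly (one possible definition)."""
    e_cur = float(np.sum(cur_low ** 2))
    e_prev = float(np.sum(prev_low ** 2))
    if e_cur == 0.0 and e_prev == 0.0:
        return 1.0  # both silent: treat as fully correlated
    return min(e_cur, e_prev) / max(e_cur, e_prev)
```

The resulting coefficient is what steps 402 and 502 test against their threshold ranges.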
  • Step 402 Judge whether the correlation coefficient is within a given first threshold range.
  • after the correlation coefficient is calculated in step 401, whether the correlation coefficient is within the given first threshold range is judged.
  • the purpose of calculating the correlation coefficient is to judge whether the current frame of speech or audio signal is gradually switched from the previous N frame of speech or audio signals or suddenly switched from the previous N frame of speech or audio signals. That is, the purpose is to judge whether their characteristics are the same and then determine the weight of the high frequency band signal of the previous frame in the process of predicting the high frequency band signal of the current speech or audio signal.
  • if the first low frequency band signal of the current frame of speech or audio signal has the same energy as the low frequency band signal of the previous frame of speech or audio signal and their signal types are the same, it indicates that the previous frame of speech or audio signal is highly correlated with the current frame of speech or audio signal.
  • the high frequency band envelope information or transitional envelope information corresponding to the previous frame of speech or audio signal occupies a larger weight; otherwise, if there is a huge difference between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous frame of speech or audio signal in terms of energy and their signal types are different, it indicates that the previous speech or audio signal is lowly correlated with the current frame of speech or audio signal. Therefore, to accurately restore the first envelope information corresponding to the current frame of speech or audio signal, the high frequency band envelope information or transitional envelope information corresponding to the previous frame of speech or audio signal occupies a smaller weight.
  • Step 403 If the correlation coefficient is not within the given first threshold range, weight according to a set first weight 1 and a set first weight 2 to calculate the first envelope information.
  • the first weight 1 refers to the weight value of the previous frame envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal
  • the first weight 2 refers to the weight value of the predicted envelope information.
  • when the correlation coefficient is determined not to be within the given first threshold range in step 402, it indicates that the current frame of speech or audio signal is slightly correlated with the previous N frame of speech or audio signals. Therefore, the previous M frame envelope information or transitional envelope information corresponding to the first frequency band speech or audio signal of the previous M frames, or the high frequency band envelope information corresponding to the previous frame of speech or audio signal, has a slight impact on the first envelope information.
  • the previous M frame envelope information or transitional envelope information corresponding to the first frequency band speech or audio signal of the previous M frames or the high frequency band envelope information corresponding to the previous frame of speech or audio signal occupies a smaller weight.
  • the first envelope information of the current frame may be calculated according to the set first weight 1 and the first weight 2.
  • the first weight 1 refers to the weight value of the envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal.
  • the previous frame of speech or audio signal may be a wide frequency band speech or audio signal or a processed narrow frequency band speech or audio signal.
  • the previous frame of speech or audio signal is the wide frequency band speech or audio signal
  • the first weight 2 refers to the weight value of the predicted envelope information.
  • the product of the predicted envelope information and the first weight 2 is added to the product of the previous frame envelope information and the first weight 1, and the weighted sum is the first envelope information of the current frame.
  • subsequently transmitted speech or audio signals are processed according to this method and these weights.
  • the first envelope information corresponding to the speech or audio signal is restored until a speech or audio signal is switched again.
  • Step 404 If the correlation coefficient is within the given first threshold range, weight according to a set second weight 1 and a set second weight 2 to calculate the transitional envelope information.
  • the second weight 1 refers to the weight value of the envelope information before the switching, and the second weight 2 refers to the weight value of the previous M frame envelope information, where M is greater than or equal to 1.
  • the current frame of speech or audio signal has characteristics similar to those of the previous consecutive N frame of speech or audio signals, and the first envelope information corresponding to the current frame of speech or audio signal is greatly affected by the envelope information of the previous consecutive N frame of speech or audio signals.
  • the transitional envelope information corresponding to the current frame of speech or audio signal needs to be calculated according to the previous M frame envelope information and the envelope information before the switching.
  • the first envelope information of the current frame of speech or audio signal is restored, the previous M frame envelope information and the previous L frame envelope information before the switching should occupy a larger weight. Then, the first envelope information is calculated according to the transitional envelope information.
  • the second weight 1 refers to the weight value of the envelope information before the switching
  • the second weight 2 refers to the weight value of the previous M frame envelope information.
  • the product of the envelope information before the switching and the second weight 1 is added to the product of the previous M frame envelope information and the second weight 2, and the weighted value is the transitional envelope information.
  • Step 405 Decrease the second weight 1 as per the first weight step, and increase the second weight 2 as per the first weight step.
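The weighted sum of step 404 and the weight update of step 405 can be sketched together as follows. The scalar envelope values and the `step` argument are simplifications introduced here for illustration: in the patent the envelopes are per-subband quantities and the weight step is a design constant.

```python
def transitional_envelope(w1: float, w2: float,
                          env_before_switch: float, prev_m_env: float,
                          step: float):
    """Step 404: transitional envelope = w1 * (envelope before the switch)
    + w2 * (previous M frame envelope), with w1 + w2 == 1.
    Step 405: fade the pre-switch weight out by one weight step."""
    trans_env = w1 * env_before_switch + w2 * prev_m_env
    w1_next = max(w1 - step, 0.0)  # second weight 1 decreases
    w2_next = min(w2 + step, 1.0)  # second weight 2 increases
    return trans_env, w1_next, w2_next
```

Calling this once per frame makes the pre-switch envelope's influence decay gradually, which is exactly the smoothing the text describes.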
  • Step 406 Judge whether a set third weight 1 is greater than the first weight 1.
  • the third weight 1 refers to the weight value of the transitional envelope information.
  • the impact of the transitional envelope information on the first envelope information of the current frame may be determined by comparing the third weight 1 with the second weight 1.
  • the transitional envelope information is calculated according to the previous M frame envelope information and the envelope information before the switching. Therefore, the third weight 1 actually represents the degree to which the first envelope information is affected by the envelope information before the switching.
  • Step 407 If the third weight 1 is not greater than the first weight 1, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
  • when the third weight 1 is determined to be smaller than or equal to the first weight 1 in step 406, it indicates that the current frame of speech or audio signal is relatively far from the L frame of speech or audio signals before the switching and that the first envelope information is mainly affected by the previous M frame envelope information. Therefore, the first envelope information of the current frame may be calculated according to the set first weight 1 and the first weight 2.
  • Step 408 If the third weight 1 is greater than the first weight 1, weight according to the set third weight 1 and the third weight 2 to calculate the first envelope information.
  • the third weight 1 refers to the weight value of the transitional envelope information
  • the third weight 2 refers to the weight value of the predicted envelope information.
  • when the third weight 1 is determined to be greater than the first weight 1 in step 406, it indicates that the current frame of speech or audio signal is closer to the L frame of speech or audio signals before the switching and that the first envelope information is greatly affected by the envelope information before the switching. Therefore, the first envelope information of the current frame needs to be calculated according to the transitional envelope information.
  • the third weight 1 refers to the weight value of the transitional envelope information
  • the third weight 2 refers to the weight value of the predicted envelope information.
  • the product of the transitional envelope information and the third weight 1 is added to the product of the predicted envelope information and the third weight 2, and the weighted value is the first envelope information.
  • Step 409 Decrease the third weight 1 as per the second weight step, and increase the third weight 2 as per the second weight step until the third weight 1 is equal to 0.
  • the purpose of modifying the third weight 1 and the third weight 2 in step 409 is the same as that of modifying the second weight 1 and the second weight 2 in step 405, that is, to adaptively adjust the third weight 1 and the third weight 2 so that the first envelope information is calculated more accurately as the impact of the L frame of speech or audio signals before the switching on the subsequently transmitted speech or audio signals gradually decreases. Because this impact gradually decreases, the value of the third weight 1 gradually becomes smaller, while the value of the third weight 2 gradually becomes larger, thus weakening the impact of the envelope information before the switching on the first envelope information.
  • the sum of the first weight 1 and the first weight 2 is equal to 1; the sum of the second weight 1 and the second weight 2 is equal to 1; the sum of the third weight 1 and the third weight 2 is equal to 1; the initial value of the third weight 1 is greater than the initial value of the first weight 1; and the first weight 1 and the first weight 2 are fixed constants.
  • the weight 1 and the weight 2 in this embodiment actually represent the percentages of the envelope information before the switching and the previous M frame envelope information in the first envelope information of the current frame. If the current frame of speech or audio signal is close to the L frame of speech or audio signals before the switching and their correlation is high, the percentage of the envelope information before the switching is high, while the percentage of the previous M frame envelope information is low.
  • if the current frame of speech or audio signal is relatively far from the L frame of speech or audio signals before the switching, it indicates that the speech or audio signal is stably transmitted on the network; or if the current frame of speech or audio signal is slightly correlated with the L frame of speech or audio signals before the switching, it indicates that the characteristics of the current frame of speech or audio signal have already changed. Therefore, if the current frame of speech or audio signal is slightly affected by the L frame of speech or audio signals before the switching, the percentage of the envelope information before the switching is low.
  • step 404 may be executed after step 405. That is, the second weight 1 and the second weight 2 may be modified firstly, and then the transitional envelope information is calculated according to the second weight 1 and the second weight 2.
  • step 408 may be executed after step 409. That is, the third weight 1 and the third weight 2 may be modified firstly, and then the first envelope information is calculated according to the third weight 1 and the third weight 2.
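Putting steps 401 through 409 together, the decision flow of this first mode can be sketched for a single frame as below. The scalar envelopes, the threshold bounds `c_lo`/`c_hi`, and the argument names are illustrative stand-ins: in the patent the envelopes are per-subband vectors and the ramped weights persist across frames.

```python
def first_envelope_mode1(corr: float, c_lo: float, c_hi: float,
                         fenv: float, pre_fenv: float,
                         env_before: float, prev_m_env: float,
                         a1: float, b1: float,
                         w2_1: float, w2_2: float,
                         w3_1: float, w3_2: float) -> float:
    """Steps 402-408 for one frame: choose the weighting rule from the
    correlation coefficient and the weight comparison of step 406."""
    if not (c_lo <= corr <= c_hi):
        # Step 403: low correlation, use the fixed first weights.
        return a1 * pre_fenv + b1 * fenv
    # Step 404: transitional envelope from the pre-switch envelope.
    trans_env = w2_1 * env_before + w2_2 * prev_m_env
    if w3_1 <= a1:
        # Step 407: the pre-switch influence has faded out.
        return a1 * pre_fenv + b1 * fenv
    # Step 408: the pre-switch influence still dominates.
    return w3_1 * trans_env + w3_2 * fenv
```

Steps 405 and 409 would then decrement `w2_1`/`w3_1` and increment `w2_2`/`w3_2` by their weight steps before the next frame is processed.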
  • the relationship between a frequency band of the first low frequency band signal of the current frame of speech or audio signal and the same frequency band of the low frequency band signal of the previous frame of speech or audio signal is calculated.
  • "corr" may be used to indicate the correlation coefficient. This correlation coefficient is obtained according to the energy relationship between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous frame of speech or audio signal. If the energy difference is small, the "corr" is large; otherwise, the "corr” is small. For the specific process, see the calculation about the correlation of the previous N frame of speech or audio signals in step 401.
  • Step 502 Judge whether the correlation coefficient is within a given second threshold range.
  • the second threshold range may be represented by c1 to c2 in this embodiment.
  • Step 503 If the correlation coefficient is not within the given second threshold range, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
  • the first weight 1 refers to the weight value of the previous frame envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal
  • the first weight 2 refers to the weight value of the predicted envelope information.
  • the first weight 1 and the first weight 2 are fixed constants.
  • the first envelope information corresponding to the current frame of speech or audio signal is slightly affected by the envelope information of the previous frame of speech or audio signal before the switching. Therefore, the first envelope information of the current frame is calculated according to the set first weight 1 and the first weight 2. The product of the predicted envelope information and the first weight 2 is added to the product of the previous frame envelope information and the first weight 1, and the weighted sum is the first envelope information of the current frame.
  • subsequently transmitted narrowband speech or audio signals are processed according to this method and these weights. The first envelope information corresponding to the narrowband speech or audio signal is restored in this way until the speech or audio signals with different bandwidths are switched again.
  • the first weight 1 in this embodiment may be represented by a1; the first weight 2 may be represented by b1; the previous frame envelope information may be represented by pre_fenv; the predicted envelope information may be represented by fenv; and the first envelope information may be represented by cur_fenv.
  • Step 504 If the correlation coefficient is within the second threshold range, judge whether the set second weight 1 is greater than the first weight 1.
  • the second weight 1 refers to the weight value of the envelope information before the switching that corresponds to the high frequency band signal of the previous frame of speech or audio signal before the switching.
  • the degree of the impact of the envelope information before the switching and the previous frame envelope information on the first envelope information of the current frame may be obtained by comparing the second weight 1 with the first weight 1.
  • Step 505 If the second weight 1 is not greater than the first weight 1, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
  • Step 506 If the second weight 1 is greater than the first weight 1, weight according to the second weight 1 and the set second weight 2 to calculate the first envelope information.
  • the second weight 2 refers to the weight value of the predicted envelope information.
  • the second weight 1 may be represented by a2, and the second weight 2 may be represented by b2.
  • the first envelope information of the current frame may be calculated according to the set second weight 1 and the second weight 2.
  • the product of the predicted envelope information and the second weight 2 is added to the product of the envelope information before the switching and the second weight 1, and the weighted sum is the first envelope information of the current frame.
  • the envelope information before the switching may be represented by con_fenv.
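The branch logic of steps 502 through 506 can be sketched as one function. Reading "within the second threshold range" as c1 <= corr <= c2 is an assumption, as are the sample values used below:

```python
def first_envelope(corr, c1, c2, a1, b1, a2, b2, pre_fenv, con_fenv, fenv):
    """Select the weighting branch of steps 502-506 (sketch).

    Outside the threshold range, or when the second weight 1 is not
    greater than the first weight 1, the previous-frame envelope pre_fenv
    is weighted (steps 503/505); otherwise the pre-switching envelope
    con_fenv is weighted (step 506).
    """
    in_range = c1 <= corr <= c2  # step 502 (interpretation assumed)
    if not in_range or a2 <= a1:
        base, w_base, w_pred = pre_fenv, a1, b1   # steps 503 / 505
    else:
        base, w_base, w_pred = con_fenv, a2, b2   # step 506
    # Weighted sum with the predicted envelope fenv, band by band.
    return [w_base * x + w_pred * y for x, y in zip(base, fenv)]
```

For example, with corr inside the range and a2 > a1, the pre-switching envelope con_fenv dominates the result; once corr falls outside the range, only pre_fenv and fenv contribute.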
  • Step 507 Decrease the second weight 1 as per the second weight step, and increase the second weight 2 as per the second weight step.
  • the impact of a speech or audio signal before the switching on the subsequent frame of speech or audio signal is gradually decreased.
  • adaptive adjustment needs to be performed on the second weight 1 and the second weight 2.
  • the impact of the speech or audio signal before the switching on the subsequent frame of speech or audio signal is gradually decreased, while the impact of the previous frame of speech or audio signal close to the current frame of speech or audio signal turns larger gradually. Therefore, the value of the second weight 1 turns smaller gradually, while the value of the second weight 2 turns larger gradually. In this way, the impact of the envelope information before the switching on the first envelope information is weakened, while the impact of the predicted envelope information on the first envelope information is enhanced.
  • the sum of the first weight 1 and the first weight 2 is equal to 1; the sum of the second weight 1 and the second weight 2 is equal to 1; the initial value of the second weight 1 is greater than the initial value of the first weight 1.
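The adaptive adjustment of step 507, together with the constraint that the two weights always sum to 1, can be sketched as below. The lower bound on the second weight 1 is an assumption; the text only says the weight decreases gradually:

```python
def step_507(a2, b2, weight_step, a2_min=0.0):
    """Decrease the second weight 1 (a2) by the second weight step and
    increase the second weight 2 (b2) correspondingly, keeping
    a2 + b2 == 1. The floor a2_min is an assumed lower bound."""
    a2 = max(a2 - weight_step, a2_min)
    return a2, 1.0 - a2
```

Calling this once per frame makes the pre-switching envelope's influence shrink while the predicted envelope's influence grows, exactly as described above.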
  • Step 303 Generate a processed first high frequency band signal according to the first envelope information and the predicted fine structure information.
  • the processed first high frequency band signal may be generated according to the first envelope information and predicted fine structure information, so that the second high frequency band signal can be smoothly switched to the processed first high frequency band signal.
  • the processed first high frequency band signal of the current frame is obtained according to the predicted fine structure information and the first envelope information.
  • the second high frequency band signal of the wide frequency band speech or audio signal before the switching can be smoothly switched to the processed first high frequency band signal corresponding to the narrow frequency band speech or audio signal, thus improving the quality of audio signals received by the user.
  • step 202 shown in FIG. 6 includes the following steps:
  • the first high frequency band signal of the narrowband speech or audio signal is null.
  • after the number of frames of the wide frequency band signal extended from the narrow frequency band speech or audio signal reaches a given number of frames, the energy of the processed first high frequency band signal is attenuated frame by frame until the attenuation coefficient reaches a given threshold.
  • the interval between the current frame of speech or audio signal and the speech or audio signal of a frame before the switching may be obtained according to the current frame of speech or audio signal and the speech or audio signal of the frame before the switching.
  • the number of frames of the narrow frequency band speech or audio signal may be recorded by using a counter, where the number of frames may be a predetermined value greater than or equal to 0.
  • Step 602 If the processed first high frequency band signal does not need to be attenuated, synthesize the processed first high frequency band signal and the first low frequency band signal into a wide frequency band signal.
  • the processed first high frequency band signal and the first low frequency band signal are directly synthesized into a wide frequency band signal.
  • Step 603 If the processed first high frequency band signal needs to be attenuated, judge whether the attenuation factor corresponding to the processed first high frequency band signal is greater than the threshold.
  • the initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1. If it is determined that the processed first high frequency band signal needs to be attenuated in step 601, whether the attenuation factor corresponding to the processed first high frequency band signal is greater than a given threshold is judged in step 603.
  • Step 604 If the attenuation factor is not greater than the given threshold, multiply the processed first high frequency band signal by the threshold, and synthesize the product and the first low frequency band signal into the wide frequency band signal.
  • if the attenuation factor is determined to be not greater than the given threshold in step 603, it indicates that the energy of the processed first high frequency band signal is already attenuated to a certain degree and that the processed first high frequency band signal may not cause negative impacts. In this case, this attenuation ratio may be kept. Then, the processed first high frequency band signal is multiplied by the threshold, and the product and the first low frequency band signal are synthesized into a wide frequency band signal.
  • Step 605 If the attenuation factor is greater than the given threshold, multiply the processed first high frequency band signal by the attenuation factor, and synthesize the product and the first low frequency band signal into the wide frequency band signal.
  • if the attenuation factor is greater than the given threshold, the processed first high frequency band signal may still cause poor listening quality at the current attenuation factor and needs to be further attenuated until the attenuation factor reaches the given threshold. Then, the processed first high frequency band signal is multiplied by the attenuation factor, and the product and the first low frequency band signal are synthesized into a wide frequency band signal.
  • Step 606 Modify the attenuation factor to decrease the attenuation factor.
  • the impact of the speech or audio signals before the switching on subsequent narrowband speech or audio signals gradually turns smaller, and the attenuation factor also turns smaller gradually.
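Steps 601 through 606 can be sketched as one per-frame routine. The band combination is shown as simple concatenation purely for illustration (the actual synthesis is a filter-bank operation), and the step size by which the attenuation factor decreases is an assumption:

```python
def synthesize_steps_601_606(high, low, needs_attenuation, fac, threshold, fac_step):
    """Sketch of steps 601-606 for one frame.

    fac is the attenuation factor (initial value 1); threshold is the
    given threshold in [0, 1).
    """
    if needs_attenuation:                        # step 601
        if fac > threshold:
            # Step 605: attenuate by the current factor ...
            high = [fac * s for s in high]
            # ... then step 606: decrease the factor (step size assumed).
            fac = max(fac - fac_step, threshold)
        else:
            # Step 604: hold the attenuation ratio at the threshold.
            high = [threshold * s for s in high]
    # Steps 602/604/605: combine the bands (concatenation stands in for
    # the real filter-bank synthesis here).
    return high + low, fac
```

Repeated calls let the high band fade out frame by frame until the attenuation factor settles at the threshold.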
  • an embodiment of obtaining the processed first high frequency band signal through step 201 includes the following steps, as shown in FIG. 7 :
  • the energy of the high frequency band signal of the wide frequency band speech or audio signal needs to be attenuated to ensure that the narrow frequency band speech or audio signal can be smoothly switched to the wide frequency band speech or audio signal.
  • the product of the second high frequency band signal and the fourth weight 1 is added to the product of the first high frequency band signal and the fourth weight 2; the weighted value is the processed first high frequency band signal.
  • Step 702 Decrease the fourth weight 1 as per the third weight step, and increase the fourth weight 2 as per the third weight step until the fourth weight 1 is equal to 0. The sum of the fourth weight 1 and the fourth weight 2 is equal to 1.
  • the fourth weight 1 gradually turns smaller, while the fourth weight 2 gradually turns larger until the fourth weight 1 is equal to 0 and the fourth weight 2 is equal to 1. That is, the transmitted speech or audio signals are always wide frequency band speech or audio signals.
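The cross-fade of steps 701 and 702 can be sketched as follows; the sample weight values are illustrative, and the constraint that the two weights sum to 1 comes from the text:

```python
def crossfade_steps_701_702(second_high, first_high, w4_1, weight_step):
    """Steps 701-702 (sketch): blend the pre-switching high band
    (second_high) into the new wideband high band (first_high).

    w4_1 is the fourth weight 1; the fourth weight 2 is 1 - w4_1.
    """
    w4_2 = 1.0 - w4_1
    # Step 701: weighted sum of the two high frequency band signals.
    out = [w4_1 * a + w4_2 * b for a, b in zip(second_high, first_high)]
    # Step 702: shrink the fourth weight 1 toward 0 per the third weight step.
    w4_1 = max(w4_1 - weight_step, 0.0)
    return out, w4_1, 1.0 - w4_1
```

Once the fourth weight 1 reaches 0, the output is the unmodified wideband high band, i.e. the transmitted signal is the real wide frequency band speech or audio signal.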
  • step 201 may further include the following steps:
  • a fixed parameter may be set to replace the high frequency band signal of the narrow frequency band speech or audio signal, where the fixed parameter is a constant greater than or equal to 0 and smaller than the energy of the first high frequency band signal.
  • the product of the fixed parameter and the fifth weight 1 is added to the product of the first high frequency band signal and the fifth weight 2; the weighted value is the processed first high frequency band signal.
  • Step 802 Decrease the fifth weight 1 as per the fourth weight step, and increase the fifth weight 2 as per the fourth weight step until the fifth weight 1 is equal to 0. The sum of the fifth weight 1 and the fifth weight 2 is equal to 1.
  • the transmitted speech or audio signals are always real wide frequency band speech or audio signals.
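The fixed-parameter variant of steps 801 and 802 differs from the cross-fade above only in that a constant replaces the missing narrowband high frequency band signal. The names and sample values below are illustrative:

```python
def crossfade_fixed_steps_801_802(fixed_param, first_high, w5_1, weight_step):
    """Steps 801-802 (sketch): blend a fixed low-energy constant with the
    real wideband high band until only the real signal remains.

    fixed_param is a constant >= 0 and smaller than the energy of the
    first high frequency band signal; w5_1 is the fifth weight 1.
    """
    w5_2 = 1.0 - w5_1
    # Weighted sum of the fixed parameter and the real high band samples.
    out = [w5_1 * fixed_param + w5_2 * s for s in first_high]
    # Step 802: shrink the fifth weight 1 toward 0 per the fourth weight step.
    w5_1 = max(w5_1 - weight_step, 0.0)
    return out, w5_1, 1.0 - w5_1
```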
  • the high frequency band signal of the wide frequency band speech or audio signal is attenuated to obtain a processed high frequency band signal.
  • the high frequency band signal corresponding to the narrow frequency band speech or audio signal before the switching can be smoothly switched to the processed high frequency band signal corresponding to the wide frequency band speech or audio signal, thus helping to improve the quality of audio signals received by the user.
  • the envelope information may also be replaced by other parameters that can represent the high frequency band signal, for example, a linear predictive coding (LPC) parameter or an amplitude parameter.
  • LPC linear predictive coding
  • the program may be stored in a computer readable storage medium.
  • the storage medium may be a read only memory (ROM), a random access memory (RAM), a magnetic disk, or a compact disk-read only memory (CD-ROM).
  • FIG. 9 shows a structure of the first embodiment of an apparatus for switching speech or audio signals.
  • the apparatus for switching speech or audio signals includes a processing module 91 and a first synthesizing module 92.
  • the processing module 91 is adapted to weight the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frame of speech or audio signals to obtain a processed first high frequency band signal when a switching of a speech or audio signal occurs.
  • M is greater than or equal to 1.
  • the first synthesizing module 92 is adapted to synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
  • the processing module processes the first high frequency band signal of the current frame of speech or audio signal according to the second high frequency band signal of the previous M frame of speech or audio signals, so that the second high frequency band signal can be smoothly switched to the processed first high frequency band signal.
  • the first synthesizing module synthesizes the processed first high frequency band signal and the first low frequency band signal into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
  • FIG. 10 shows a structure of the second embodiment of the apparatus for switching speech or audio signals.
  • the apparatus for switching speech or audio signals in this embodiment is based on the first embodiment, and further includes a second synthesizing module 103.
  • the second synthesizing module 103 is adapted to synthesize the first high frequency band signal and the first low frequency band signal into the wide frequency band signal when a switching of the speech or audio signal does not occur.
  • the second synthesizing module is set to synthesize the first low frequency band signal and the first high frequency band signal of the current frame of speech or audio signal into a wide frequency band signal when a switching between speech or audio signals with different bandwidths does not occur. In this way, the quality of speech or audio signals received by the user is improved.
  • the processing module 101 includes the following modules, as shown in FIG. 10 and FIG. 11 :
  • the apparatus for switching speech or audio signals in this embodiment may include a classifying module 1010 adapted to classify the first low frequency band signal of the current frame of speech or audio signal.
  • the predicting module 1011 is further adapted to predict the fine structure information and envelope information corresponding to the first low frequency band signal of the current frame of speech or audio signal.
  • the predicting module predicts the fine structure information and envelope information corresponding to the first high frequency band signal, so that the processed first high frequency band signal can be accurately generated by the first generating module and the second generating module. In this way, the first high frequency band signal can be smoothly switched to the processed first high frequency band signal, thus improving the quality of speech or audio signals received by the user.
  • the classifying module classifies the first low frequency band signal of the current frame of speech or audio signal; the predicting module obtains the predicted fine structure information and predicted envelope information according to the signal type. In this way, the predicted fine structure information and predicted envelope information are more accurate, thus improving the quality of speech or audio signals received by the user.
  • the first synthesizing module 102 includes the following modules, as shown in FIG. 10 and FIG. 12 :
  • the initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1.
  • the processed first high frequency band signal is attenuated, so that the wide frequency band signal obtained by processing the current frame of speech or audio signal is more accurate, thus improving the quality of audio signals received by the user.
  • the processing module 101 in this embodiment includes the following modules, as shown in FIG. 10 and FIG. 13a :
  • the processing module 101 in this embodiment may further include the following modules, as shown in FIG. 10 and FIG. 13b :
  • with the apparatus for switching speech or audio signals in this embodiment, in the process of switching a speech or audio signal from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal, the high frequency band signal of the wide frequency band speech or audio signal is attenuated to obtain a processed high frequency band signal.
  • the high frequency band signal corresponding to the narrow frequency band speech or audio signal before the switching can be smoothly switched to the processed high frequency band signal corresponding to the wide frequency band speech or audio signal, thus helping to improve the quality of audio signals received by the user.

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Telephone Function (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Circuits Of Receivers In General (AREA)
EP17151713.9A 2010-04-28 2011-04-28 Verfahren und vorrichtung zur schaltung von sprach- oder audiosignalen Active EP3249648B1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2010101634063A CN101964189B (zh) 2010-04-28 2010-04-28 语音频信号切换方法及装置
PCT/CN2011/073479 WO2011134415A1 (zh) 2010-04-28 2011-04-28 语音频信号切换方法及装置
EP11774406.0A EP2485029B1 (de) 2010-04-28 2011-04-28 Verfahren und vorrichtung zur umschaltung von audiosignalen

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
EP11774406.0A Division-Into EP2485029B1 (de) 2010-04-28 2011-04-28 Verfahren und vorrichtung zur umschaltung von audiosignalen
EP11774406.0A Division EP2485029B1 (de) 2010-04-28 2011-04-28 Verfahren und vorrichtung zur umschaltung von audiosignalen

Publications (2)

Publication Number Publication Date
EP3249648A1 true EP3249648A1 (de) 2017-11-29
EP3249648B1 EP3249648B1 (de) 2019-01-09

Family

ID=43517042

Family Applications (2)

Application Number Title Priority Date Filing Date
EP17151713.9A Active EP3249648B1 (de) 2010-04-28 2011-04-28 Verfahren und vorrichtung zur schaltung von sprach- oder audiosignalen
EP11774406.0A Active EP2485029B1 (de) 2010-04-28 2011-04-28 Verfahren und vorrichtung zur umschaltung von audiosignalen

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP11774406.0A Active EP2485029B1 (de) 2010-04-28 2011-04-28 Verfahren und vorrichtung zur umschaltung von audiosignalen

Country Status (8)

Country Link
EP (2) EP3249648B1 (de)
JP (3) JP5667202B2 (de)
KR (1) KR101377547B1 (de)
CN (1) CN101964189B (de)
AU (1) AU2011247719B2 (de)
BR (1) BR112012013306B8 (de)
ES (2) ES2718947T3 (de)
WO (1) WO2011134415A1 (de)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101110800B1 (ko) * 2003-05-28 2012-07-06 도꾸리쯔교세이호진 상교기쥬쯔 소고겡뀨죠 히드록실기 함유 화합물의 제조 방법
US11431312B2 (en) 2004-08-10 2022-08-30 Bongiovi Acoustics Llc System and method for digital signal processing
US8284955B2 (en) 2006-02-07 2012-10-09 Bongiovi Acoustics Llc System and method for digital signal processing
US10848118B2 (en) 2004-08-10 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US10158337B2 (en) 2004-08-10 2018-12-18 Bongiovi Acoustics Llc System and method for digital signal processing
US10848867B2 (en) 2006-02-07 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US10701505B2 (en) 2006-02-07 2020-06-30 Bongiovi Acoustics Llc. System, method, and apparatus for generating and digitally processing a head related audio transfer function
US8000968B1 (en) 2011-04-26 2011-08-16 Huawei Technologies Co., Ltd. Method and apparatus for switching speech or audio signals
CN101964189B (zh) * 2010-04-28 2012-08-08 华为技术有限公司 语音频信号切换方法及装置
CN103295578B (zh) * 2012-03-01 2016-05-18 华为技术有限公司 一种语音频信号处理方法和装置
CN105761724B (zh) * 2012-03-01 2021-02-09 华为技术有限公司 一种语音频信号处理方法和装置
CN103516440B (zh) * 2012-06-29 2015-07-08 华为技术有限公司 语音频信号处理方法和编码装置
CN106847297B (zh) 2013-01-29 2020-07-07 华为技术有限公司 高频带信号的预测方法、编/解码设备
US9883318B2 (en) 2013-06-12 2018-01-30 Bongiovi Acoustics Llc System and method for stereo field enhancement in two-channel audio systems
US9906858B2 (en) 2013-10-22 2018-02-27 Bongiovi Acoustics Llc System and method for digital signal processing
US9397629B2 (en) * 2013-10-22 2016-07-19 Bongiovi Acoustics Llc System and method for digital signal processing
US20150170655A1 (en) * 2013-12-15 2015-06-18 Qualcomm Incorporated Systems and methods of blind bandwidth extension
CN103714822B (zh) * 2013-12-27 2017-01-11 广州华多网络科技有限公司 基于silk编解码器的子带编解码方法及装置
KR101864122B1 (ko) 2014-02-20 2018-06-05 삼성전자주식회사 전자 장치 및 전자 장치의 제어 방법
US10820883B2 (en) 2014-04-16 2020-11-03 Bongiovi Acoustics Llc Noise reduction assembly for auscultation of a body
US10639000B2 (en) 2014-04-16 2020-05-05 Bongiovi Acoustics Llc Device for wide-band auscultation
EP2980794A1 (de) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audiocodierer und -decodierer mit einem Frequenzdomänenprozessor und Zeitdomänenprozessor
BR112017024480A2 (pt) 2016-02-17 2018-07-24 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E. V. pós-processador, pré-processador, codificador de áudio, decodificador de áudio e métodos relacionados para aprimoramento do processamento transiente
CN112236812A (zh) 2018-04-11 2021-01-15 邦吉欧维声学有限公司 音频增强听力保护系统
CN110556116B (zh) 2018-05-31 2021-10-22 华为技术有限公司 计算下混信号和残差信号的方法和装置
WO2020028833A1 (en) 2018-08-02 2020-02-06 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
CN112002333B (zh) * 2019-05-07 2023-07-18 海能达通信股份有限公司 一种语音同步方法、装置及通信终端
CN117373465B (zh) * 2023-12-08 2024-04-09 富迪科技(南京)有限公司 一种语音频信号切换系统

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009056027A1 (fr) * 2007-11-02 2009-05-07 Huawei Technologies Co., Ltd. Procédé et dispositif de décodage audio

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4330689A (en) * 1980-01-28 1982-05-18 The United States Of America As Represented By The Secretary Of The Navy Multirate digital voice communication processor
US4769833A (en) * 1986-03-31 1988-09-06 American Telephone And Telegraph Company Wideband switching system
US5019910A (en) * 1987-01-29 1991-05-28 Norsat International Inc. Apparatus for adapting computer for satellite communications
FI115329B (fi) * 2000-05-08 2005-04-15 Nokia Corp Menetelmä ja järjestely lähdesignaalin kaistanleveyden vaihtamiseksi tietoliikenneyhteydessä, jossa on valmiudet useisiin kaistanleveyksiin
US7113522B2 (en) * 2001-01-24 2006-09-26 Qualcomm, Incorporated Enhanced conversion of wideband signals to narrowband signals
KR100940531B1 (ko) * 2003-07-16 2010-02-10 삼성전자주식회사 광대역 음성 신호 압축 및 복원 장치와 그 방법
JP2005080079A (ja) * 2003-09-02 2005-03-24 Sony Corp 音声再生装置及び音声再生方法
FI119533B (fi) * 2004-04-15 2008-12-15 Nokia Corp Audiosignaalien koodaus
CN1950883A (zh) * 2004-04-30 2007-04-18 松下电器产业株式会社 可伸缩性解码装置及增强层丢失的隐藏方法
WO2006011445A1 (ja) * 2004-07-28 2006-02-02 Matsushita Electric Industrial Co., Ltd. 信号復号化装置
JP4989971B2 (ja) * 2004-09-06 2012-08-01 パナソニック株式会社 スケーラブル復号化装置および信号消失補償方法
WO2006075663A1 (ja) * 2005-01-14 2006-07-20 Matsushita Electric Industrial Co., Ltd. 音声切替装置および音声切替方法
US8249861B2 (en) * 2005-04-20 2012-08-21 Qnx Software Systems Limited High frequency compression integration
CN101213590B (zh) * 2005-06-29 2011-09-21 松下电器产业株式会社 可扩展解码装置及丢失数据插值方法
US8194865B2 (en) * 2007-02-22 2012-06-05 Personics Holdings Inc. Method and device for sound detection and audio control
CN101425292B (zh) * 2007-11-02 2013-01-02 华为技术有限公司 一种音频信号的解码方法及装置
CN100585699C (zh) * 2007-11-02 2010-01-27 华为技术有限公司 一种音频解码的方法和装置
CN101964189B (zh) * 2010-04-28 2012-08-08 华为技术有限公司 语音频信号切换方法及装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009056027A1 (fr) * 2007-11-02 2009-05-07 Huawei Technologies Co., Ltd. Procédé et dispositif de décodage audio
US20100228557A1 (en) * 2007-11-02 2010-09-09 Huawei Technologies Co., Ltd. Method and apparatus for audio decoding

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"G.729-based embedded variable bit-rate coder: An 8-32 kbit/s scalable wideband coder bitstream interoperable with G.729; G.729.1 (05/06)", ITU-T STANDARD, INTERNATIONAL TELECOMMUNICATION UNION, GENEVA ; CH, no. G.729.1 (05/06), 29 May 2006 (2006-05-29), pages 1 - 100, XP017466254 *
BERND GEISER ET AL: "Artificial Bandwidth Extension without Side Information for ITU-T G.729.1", INTERSPEECH 2007, 27 August 2007 (2007-08-27), pages 2493 - 2496, XP055001120, Retrieved from the Internet <URL:http://www.ind.rwth-aachen.de/fileadmin/publications/geiser07b.pdf> [retrieved on 20110621] *

Also Published As

Publication number Publication date
EP2485029B1 (de) 2017-06-14
JP6027081B2 (ja) 2016-11-16
EP3249648B1 (de) 2019-01-09
BR112012013306A2 (pt) 2016-03-01
WO2011134415A1 (zh) 2011-11-03
BR112012013306B8 (pt) 2021-02-17
JP2017033015A (ja) 2017-02-09
AU2011247719A1 (en) 2012-06-07
ES2718947T3 (es) 2019-07-05
ES2635212T3 (es) 2017-10-02
EP2485029A4 (de) 2013-01-02
JP6410777B2 (ja) 2018-10-24
KR20120074303A (ko) 2012-07-05
JP5667202B2 (ja) 2015-02-12
BR112012013306B1 (pt) 2020-11-10
KR101377547B1 (ko) 2014-03-25
AU2011247719B2 (en) 2013-07-11
JP2013512468A (ja) 2013-04-11
CN101964189A (zh) 2011-02-02
JP2015045888A (ja) 2015-03-12
EP2485029A1 (de) 2012-08-08
CN101964189B (zh) 2012-08-08

Similar Documents

Publication Publication Date Title
EP3249648A1 (de) Verfahren und vorrichtung zur schaltung von sprach- oder audiosignalen
US8000968B1 (en) Method and apparatus for switching speech or audio signals
US10559313B2 (en) Speech/audio signal processing method and apparatus
US9514762B2 (en) Audio signal coding method and apparatus
JP2022548299A (ja) オーディオ符号化方法および装置
CN110992965B (zh) 信号分类方法和装置以及使用其的音频编码方法和装置
EP1612773B1 (de) Vorrichtung zur Verarbeitung eines Klangsignals und Verfahren zur Bestimmung des Sprachengrad

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AC Divisional application: reference to earlier application

Ref document number: 2485029

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180529

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20180723

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AC Divisional application: reference to earlier application

Ref document number: 2485029

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 1088306

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190115

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602011055692

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2718947

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20190705

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1088306

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190109

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190509

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190409

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190410

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190509

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190409

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602011055692

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

26N No opposition filed

Effective date: 20191010

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20190430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190428

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190430

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190430

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190428

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20110428

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190109

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230529

P03 Opt-out of the competence of the unified patent court (upc) deleted

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20240315

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240307

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20240313

Year of fee payment: 14

Ref country code: FR

Payment date: 20240308

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240306

Year of fee payment: 14

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20240508

Year of fee payment: 14