AU2011247719A1 - Method and apparatus for switching speech or audio signals - Google Patents


Info

Publication number
AU2011247719A1
Authority
AU
Australia
Prior art keywords
weight
frequency band
band signal
speech
high frequency
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
AU2011247719A
Other versions
AU2011247719B2 (en)
Inventor
Chen Hu
Yue Lang
Zexin Liu
Lei Miao
Wenhai Wu
Qing Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of AU2011247719A1 publication Critical patent/AU2011247719A1/en
Application granted granted Critical
Publication of AU2011247719B2 publication Critical patent/AU2011247719B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/18 Vocoders using multiple modes
    • G10L19/24 Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
    • G10L19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/12 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/038 Speech enhancement, e.g. noise reduction or echo cancellation using band spreading techniques

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Telephone Function (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Circuits Of Receivers In General (AREA)

Abstract

An audio signal switching method and apparatus are provided. The method comprises the following steps: when audio signal switching occurs, performing a weighting process on a first high frequency band signal of the current frame of the audio signal and a second high frequency band signal of the previous M frames of the audio signal to obtain a processed first high frequency band signal (101); and synthesizing the processed first high frequency band signal and a first low frequency band signal of the current frame of the audio signal into a wide band signal (102).

Description

METHOD AND APPARATUS FOR SWITCHING SPEECH OR AUDIO SIGNALS

[0001] This application claims priority to Chinese Patent Application No. 201010163406.3, titled "METHOD AND APPARATUS FOR SWITCHING SPEECH OR AUDIO SIGNALS", filed on April 28, 2010, which is hereby incorporated by reference in its entirety.

FIELD OF THE INVENTION

[0002] The present invention relates to communication technologies, and in particular, to a method and an apparatus for switching speech or audio signals.

BACKGROUND OF THE INVENTION

[0003] Currently, during the transmission of speech or audio signals on a network, because network conditions may vary, the network may intercept the bit stream of the speech or audio signals transmitted from an encoder at different bit rates, so that the decoder may decode speech or audio signals with different bandwidths from the intercepted bit stream.

[0004] In the prior art, because the speech or audio signals transmitted on the network have different bandwidths, bidirectional switching between a narrow frequency band speech or audio signal and a wide frequency band speech or audio signal may occur during transmission. In embodiments of the present invention, a narrow frequency band signal is converted, through up-sampling and low-pass filtering, into a wide frequency band signal that contains only a low frequency band component; a wide frequency band speech or audio signal includes both a low frequency band signal component and a high frequency band signal component.
[0005] During the implementation of the present invention, the inventor discovered at least the following problem in the prior art: because high frequency band signal information is available in wide frequency band speech or audio signals but is absent in narrow frequency band speech or audio signals, when speech or audio signals with different bandwidths are switched, an energy jump may occur in the speech or audio signals, resulting in an unpleasant listening experience and reducing the quality of the audio signals received by a user.

SUMMARY OF THE INVENTION

[0006] Embodiments of the present invention provide a method and an apparatus for switching speech or audio signals to smoothly switch speech or audio signals between different bandwidths, thereby improving the quality of audio signals received by a user.

[0007] A method for switching speech or audio signals includes: when switching of a speech or audio signal occurs, weighting a first high frequency band signal of the current frame of speech or audio signal and a second high frequency band signal of the previous M frames of speech or audio signals to obtain a processed first high frequency band signal, where M is greater than or equal to 1; and synthesizing the processed first high frequency band signal and a first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
[0008] An apparatus for switching speech or audio signals includes: a processing module, configured to: when switching of a speech or audio signal occurs, weight a first high frequency band signal of the current frame of speech or audio signal and a second high frequency band signal of the previous M frames of speech or audio signals to obtain a processed first high frequency band signal, where M is greater than or equal to 1; and a first synthesizing module, configured to synthesize the processed first high frequency band signal and a first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
[0009] By using the method and apparatus for switching speech or audio signals in embodiments of the present invention, the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frames of speech or audio signals, so that the second high frequency band signal of the previous M frames can be smoothly switched to the processed first high frequency band signal; the processed first high frequency band signal and the first low frequency band signal are then synthesized into a wide frequency band signal. In this way, during switching between speech or audio signals with different bandwidths, these speech or audio signals can be smoothly switched, thus reducing the adverse impact of the energy jump on the subjective audio quality of the speech or audio signals and improving the quality of the speech or audio signals received by the user.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] To make the technical solution of the present invention clearer, the accompanying drawings for illustrating the embodiments of the present invention are outlined below. Apparently, the accompanying drawings are exemplary only, and those skilled in the art can derive other drawings from such accompanying drawings without creative efforts.

[0011] FIG. 1 is a flowchart of a first embodiment of a method for switching speech or audio signals;
[0012] FIG. 2 is a flowchart of a second embodiment of the method for switching speech or audio signals;
[0013] FIG. 3 is a flowchart of an embodiment of step 201 shown in FIG. 2;
[0014] FIG. 4 is a flowchart of an embodiment of step 302 shown in FIG. 3;
[0015] FIG. 5 is a flowchart of another embodiment of step 302 shown in FIG. 3;
[0016] FIG. 6 is a flowchart of an embodiment of step 202 shown in FIG. 2;
[0017] FIG. 7 is a flowchart of another embodiment of step 201 shown in FIG. 2;
[0018] FIG. 8 is a flowchart of a further embodiment of step 201 shown in FIG. 2;
[0019] FIG. 9 shows a structure of a first embodiment of an apparatus for switching speech or audio signals;
[0020] FIG. 10 shows a structure of a second embodiment of the apparatus for switching speech or audio signals;
[0021] FIG. 11 is a first schematic diagram illustrating a structure of a processing module in the second embodiment of the apparatus for switching speech or audio signals;
[0022] FIG. 12 is a schematic diagram illustrating a structure of a first module in the second embodiment of the apparatus for switching speech or audio signals;
[0023] FIG. 13a is a second schematic diagram illustrating a structure of the processing module in the second embodiment of the apparatus for switching speech or audio signals; and
[0024] FIG. 13b is a third schematic diagram illustrating a structure of the processing module in the second embodiment of the apparatus for switching speech or audio signals.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0025] To facilitate understanding of the object, technical solution, and merits of the present invention, the following describes the present invention in detail with reference to embodiments and accompanying drawings. Apparently, the embodiments are exemplary only, and the present invention is not limited to such embodiments. Persons of ordinary skill in the art can derive other embodiments from the embodiments given herein without creative effort, and all such embodiments are covered by the scope of the present invention.

[0026] FIG. 1 is a flowchart of the first embodiment of a method for switching speech or audio signals. As shown in FIG. 1, by using the method for switching speech or audio signals, when switching of a speech or audio signal occurs, each frame after the switching frame is processed according to the following steps:

[0027] Step 101: When switching of a speech or audio signal occurs, weight the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frames of speech or audio signals to obtain a processed first high frequency band signal, where M is greater than or equal to 1.
[0028] Step 102: Synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.

[0029] In this embodiment, the previous M frames of speech or audio signals refer to the M frames of speech or audio signals before the current frame, and the L frames of speech or audio signals before the switching refer to the L frames of speech or audio signals before the switching frame. If the current speech frame is a wide frequency band signal but the previous speech frame is a narrow frequency band signal, or if the current speech frame is a narrow frequency band signal but the previous speech frame is a wide frequency band signal, the speech or audio signal is switched and the current speech frame is the switching frame.

[0030] By using the method for switching speech or audio signals in this embodiment, the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frames of speech or audio signals, so that the second high frequency band signal of the previous M frames can be smoothly switched to the processed first high frequency band signal. In this way, during switching between speech or audio signals with different bandwidths, the high frequency band signals of these speech or audio signals can be smoothly switched. Finally, the processed first high frequency band signal and the first low frequency band signal are synthesized into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
By using the method for switching speech or audio signals in this embodiment, speech or audio signals with different bandwidths can be switched smoothly, thus reducing the impact of the sudden energy change on the subjective audio quality of the speech or audio signals and improving the quality of the speech or audio signals received by the user.

[0031] FIG. 2 is a flowchart of the second embodiment of the method for switching speech or audio signals. As shown in FIG. 2, the method includes the following steps:

[0032] Step 200: When switching of the speech or audio signal does not occur, synthesize the first high frequency band signal of the current frame of speech or audio signal and the first low frequency band signal into a wide frequency band signal.
[0033] Specifically, the first frequency band speech or audio signal in this embodiment may be a wide frequency band speech or audio signal or a narrow frequency band speech or audio signal. When the first frequency band speech or audio signal is not switched during transmission, the operation is executed according to one of the following two cases: 1. If the first frequency band speech or audio signal is a wide frequency band speech or audio signal, the low frequency band signal and the high frequency band signal of the wide frequency band speech or audio signal are synthesized into a wide frequency band signal. 2. If the first frequency band speech or audio signal is a narrow frequency band speech or audio signal, the low frequency band signal and the high frequency band signal of the narrow frequency band speech or audio signal are synthesized into a wide frequency band signal; in this case, although the signal is a wide frequency band signal, its high frequency band is null.

[0034] Step 201: When the speech or audio signal is switched, weight the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frames of speech or audio signals to obtain a processed first high frequency band signal, where M is greater than or equal to 1.

[0035] Specifically, when switching between speech or audio signals with different bandwidths occurs, the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frames of speech or audio signals, so that the second high frequency band signal of the previous M frames can be smoothly switched to the processed first high frequency band signal.
For example, when a wide frequency band speech or audio signal is switched to a narrow frequency band speech or audio signal, because the high frequency band signal information corresponding to the narrow frequency band speech or audio signal is null, the high frequency band signal component corresponding to the narrow frequency band speech or audio signal needs to be restored to enable the wide frequency band speech or audio signal to be smoothly switched to the narrow frequency band speech or audio signal. Conversely, when a narrow frequency band speech or audio signal is switched to a wide frequency band speech or audio signal, because the high frequency band signal of the wide frequency band speech or audio signal is not null, the energy of the high frequency band signals of multiple consecutive frames of wide frequency band speech or audio signals after the switching must be weakened to enable the narrow frequency band speech or audio signal to be smoothly switched to the wide frequency band speech or audio signal, so that the high frequency band signal of the wide frequency band speech or audio signal is gradually switched to the real high frequency band signal. By processing the current frame of speech or audio signal in step 201, the high frequency band signals of speech or audio signals with different bandwidths can be smoothly switched, which prevents the uncomfortable listening experience caused by the sudden energy change during switching between wide frequency band and narrow frequency band speech or audio signals, enabling the user to receive high quality audio signals. To simplify obtaining the processed first high frequency band signal, the first high frequency band signal and the second high frequency band signal of the previous M frames of speech or audio signals may be directly weighted; the weighted result is the processed first high frequency band signal.
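The direct weighting and subsequent synthesis described above can be sketched as follows. This is an illustrative sketch only: the function names, the weight values, the even split of the remaining weight across the previous M frames, and the use of simple concatenation in place of a real synthesis filter bank are all assumptions, not details taken from the patent.

```python
def weight_high_band(current_hb, previous_hbs, current_weight=0.5):
    """Sketch of step 201: blend the current frame's high frequency band
    samples with the same-position samples of the previous M frames.

    current_hb     -- list of samples (first high frequency band signal)
    previous_hbs   -- list of M lists (second high frequency band signals)
    current_weight -- assumed weight for the current frame; the remaining
                      weight is split evenly across the previous M frames
    """
    m = len(previous_hbs)  # M >= 1, per the description
    prev_weight = (1.0 - current_weight) / m  # all weights sum to 1
    processed = []
    for i, sample in enumerate(current_hb):
        value = current_weight * sample
        for hb in previous_hbs:
            value += prev_weight * hb[i]
        processed.append(value)
    return processed

def synthesize_wideband(low_band, processed_high_band):
    """Sketch of step 202: a real codec would run a synthesis filter
    bank; here the two band representations are simply concatenated."""
    return low_band + processed_high_band
```

For example, with one previous frame whose high band is silent, an equal split halves the current frame's high-band energy, which is the kind of gradual weakening the description calls for after a narrow-to-wide switch.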
[0036] Step 202: Synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.

[0037] Specifically, after the current frame of speech or audio signal is processed in step 201, the second high frequency band signal of the previous M frames of speech or audio signals can be smoothly switched to the processed first high frequency band signal of the current frame; then, in step 202, the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal are synthesized into a wide frequency band signal, so that the speech or audio signals received by the user are always wide frequency band speech or audio signals. In this way, speech or audio signals with different bandwidths are smoothly switched, which helps improve the quality of the audio signals received by the user.

[0038] By using the method for switching speech or audio signals in this embodiment, the first high frequency band signal of the current frame of speech or audio signal is processed according to the second high frequency band signal of the previous M frames of speech or audio signals, so that the second high frequency band signal of the previous M frames can be smoothly switched to the processed first high frequency band signal. In this way, during switching between speech or audio signals with different bandwidths, the high frequency band signals of these speech or audio signals can be smoothly switched. Finally, the processed first high frequency band signal and the first low frequency band signal are synthesized into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
By using the method for switching speech or audio signals in this embodiment, speech or audio signals with different bandwidths can be switched smoothly, thus reducing the impact of the sudden energy change on the subjective audio quality of the speech or audio signals and improving the quality of the audio signals received by the user. In addition, when speech or audio signals with different bandwidths are not switched, the first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal are synthesized into a wide frequency band signal, so that the user can obtain a high quality audio signal.

[0039] According to the preceding technical solution, optionally, as shown in FIG. 3, when switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal occurs, step 201 includes the following steps:

[0040] Step 301: Predict fine structure information and envelope information corresponding to the first high frequency band signal.

[0041] Specifically, the speech or audio signal may be divided into fine structure information and envelope information, so that the speech or audio signal can be restored according to the fine structure information and envelope information. In the process of switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal, because only a low frequency band signal is available in the narrow frequency band speech or audio signal and the high frequency band signal is null, the high frequency band signal needed by the current narrow frequency band speech or audio signal must be restored to enable the wide frequency band speech or audio signal to be smoothly switched to the narrow frequency band speech or audio signal.
In step 301, the fine structure information and envelope information corresponding to the first high frequency band signal of the narrow frequency band speech or audio signal are predicted.
[0042] To predict the fine structure information and envelope information corresponding to the current frame of speech or audio signal more accurately, the first low frequency band signal of the current frame of speech or audio signal may be classified in step 301, and the fine structure information and envelope information corresponding to the first high frequency band signal are then predicted according to the signal type of the first low frequency band signal. For example, the narrow frequency band speech or audio signal of the current frame may be a harmonic signal, a non-harmonic signal, or a transient signal. In this case, the fine structure information and envelope information corresponding to the type of the narrow frequency band speech or audio signal can be obtained, so that the fine structure information and envelope information corresponding to the high frequency band signal can be predicted more accurately. The method for switching speech or audio signals in this embodiment does not limit the signal type of the narrow frequency band speech or audio signal.

[0043] Step 302: Weight the predicted envelope information and the previous M frames of envelope information corresponding to the second high frequency band signal of the previous M frames of speech or audio signals to obtain first envelope information corresponding to the first high frequency band signal.

[0044] Specifically, after the fine structure information and envelope information corresponding to the first high frequency band signal of the current frame are predicted in step 301, the first envelope information corresponding to the first high frequency band signal may be generated according to the predicted envelope information and the previous M frames of envelope information corresponding to the second high frequency band signal of the previous M frames of speech or audio signals.
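The envelope weighting in step 302 can be sketched as a per-subband weighted combination of the predicted envelope and the previous M frames' envelopes. The function name, the default weight, and the even split across previous frames are illustrative assumptions; the patent leaves the concrete weight values to steps 401-404 below.

```python
def weight_envelope(predicted_env, previous_envs, predicted_weight=0.6):
    """Sketch of step 302: combine the predicted envelope information of
    the current frame with the envelope information of the previous M
    frames, subband by subband.

    predicted_env    -- list of per-subband envelope values for the
                        current frame (predicted in step 301)
    previous_envs    -- list of M lists of per-subband envelope values
    predicted_weight -- assumed weight for the predicted envelope; the
                        remainder is split evenly over the M previous frames
    """
    m = len(previous_envs)  # M >= 1
    prev_weight = (1.0 - predicted_weight) / m
    first_env = []
    for i, e in enumerate(predicted_env):
        value = predicted_weight * e
        for env in previous_envs:
            value += prev_weight * env[i]
        first_env.append(value)
    return first_env
```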
[0045] Specifically, the process of generating the first envelope information corresponding to the first high frequency band signal in step 302 may be implemented in the following two modes:

[0046] 1. As shown in FIG. 4, an embodiment of obtaining the first envelope information through step 302 may include the following steps:

[0047] Step 401: Calculate a correlation coefficient between the first low frequency band signal and the low frequency band signal of the previous N frames of speech or audio signals according to the first low frequency band signal and the low frequency band signal of the previous N frames of speech or audio signals, where N is greater than or equal to 1.

[0048] Specifically, the first low frequency band signal of the current frame of speech or audio signal is compared with the low frequency band signal of the previous N frames of speech or audio signals to obtain a correlation coefficient between them. For example, the correlation may be determined by judging the difference between a frequency band of the first low frequency band signal of the current frame and the same frequency band of the low frequency band signal of the previous N frames in terms of energy or information type, so that the desired correlation coefficient can be calculated. The previous N frames of speech or audio signals may be narrow frequency band speech or audio signals, wide frequency band speech or audio signals, or a mixture of narrow frequency band and wide frequency band speech or audio signals.
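The comparison in step 401 can be sketched as follows. The description leaves the exact measure open (energy difference, information type, and so on), so a normalized cross-correlation against the average of the previous N low-band frames is used here purely as an assumed stand-in, with equal frame lengths also assumed:

```python
import math

def low_band_correlation(current_lb, previous_lbs):
    """Sketch of step 401: a correlation coefficient between the current
    frame's low frequency band signal and the previous N frames' low
    frequency band signals (all lists of equal length, an assumption)."""
    n = len(previous_lbs)  # N >= 1
    # Average the previous N low band frames, sample by sample.
    avg_prev = [sum(frame[i] for frame in previous_lbs) / n
                for i in range(len(current_lb))]
    # Normalized cross-correlation in [-1, 1].
    dot = sum(a * b for a, b in zip(current_lb, avg_prev))
    norm = math.sqrt(sum(a * a for a in current_lb) *
                     sum(b * b for b in avg_prev))
    return dot / norm if norm > 0.0 else 0.0
```

A value near 1 would then fall inside the first threshold range of step 402 (similar characteristics), while a small value would fall outside it.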
[0049] Step 402: Judge whether the correlation coefficient is within a given first threshold range.

[0050] Specifically, after the correlation coefficient is calculated in step 401, whether the correlation coefficient is within the given first threshold range is judged. The purpose of calculating the correlation coefficient is to judge whether the current frame of speech or audio signal is gradually or suddenly switched from the previous N frames of speech or audio signals; that is, the purpose is to judge whether their characteristics are the same, and then to determine the weight of the high frequency band signal of the previous frame in the process of predicting the high frequency band signal of the current speech or audio signal. For example, if the first low frequency band signal of the current frame of speech or audio signal has the same energy as the low frequency band signal of the previous frame of speech or audio signal and their signal types are the same, the previous frame of speech or audio signal is highly correlated with the current frame. Therefore, to accurately restore the first envelope information corresponding to the current frame of speech or audio signal, the high frequency band envelope information or transitional envelope information corresponding to the previous frame of speech or audio signal occupies a larger weight. Otherwise, if there is a large difference in energy between the first low frequency band signal of the current frame and the low frequency band signal of the previous frame and their signal types are different, the previous frame of speech or audio signal is weakly correlated with the current frame of speech or audio signal.
Therefore, to accurately restore the first envelope information corresponding to the current frame of speech or audio signal, the high frequency band envelope information or transitional envelope information corresponding to the previous frame of speech or audio signal occupies a smaller weight.

[0051] Step 403: If the correlation coefficient is not within the given first threshold range, weight according to a set first weight 1 and a set first weight 2 to calculate the first envelope information. The first weight 1 refers to the weight value of the previous frame of envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal, and the first weight 2 refers to the weight value of the predicted envelope information.

[0052] Specifically, if the correlation coefficient is determined to be not within the given first threshold range in step 402, the current frame of speech or audio signal is weakly correlated with the previous N frames of speech or audio signals. Therefore, the previous M frames of envelope information or transitional envelope information corresponding to the first frequency band speech or audio signal of the previous M frames, or the high frequency band envelope information corresponding to the previous frame of speech or audio signal, has only a slight impact on the first envelope information. When the first envelope information corresponding to the current frame of speech or audio signal is restored, this envelope information occupies a smaller weight. The first envelope information of the current frame may therefore be calculated according to the set first weight 1 and first weight 2.
The first weight 1 refers to the weight value of the envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal. The previous frame of speech or audio signal may be a wide frequency band speech or audio signal or a processed narrow frequency band speech or audio signal. In the case of the first switching, the previous frame of speech or audio signal is the wide frequency band speech or audio signal, while the first weight 2 refers to the weight value of the predicted envelope information. The product of the predicted envelope information and the first weight 2 is added to the product of the previous frame envelope information and the first weight 1, and the weighted sum is the first envelope information of the current frame. In addition, subsequently transmitted speech or audio signals are processed according to this method and these weights, and the first envelope information corresponding to each speech or audio signal is restored, until a speech or audio signal is switched again.

[0053] Step 404: If the correlation coefficient is within the given first threshold range, weight according to a set second weight 1 and a set second weight 2 to calculate the transitional envelope information. The second weight 1 refers to the weight value of the envelope information before the switching, and the second weight 2 refers to the weight value of the previous M frame envelope information, where M is greater than or equal to 1.

[0054] Specifically, if the correlation coefficient is determined to be within the given first threshold range in step 402, the current frame of speech or audio signal has characteristics similar to those of the previous consecutive N frames of speech or audio signals, and the first envelope information corresponding to the current frame of speech or audio signal is greatly affected by the envelope information of those previous consecutive N frames.
In view of the authenticity of the previous M frame envelopes, the transitional envelope information corresponding to the current frame of speech or audio signal needs to be calculated according to the previous M frame envelope information and the envelope information before the switching. When the first envelope information of the current frame of speech or audio signal is restored, the previous M frame envelope information and the previous L frame envelope information before the switching should occupy a larger weight. Then, the first envelope information is calculated according to the transitional envelope information. The second weight 1 refers to the weight value of the envelope information before the switching, and the second weight 2 refers to the weight value of the previous M
frame envelope information. In this case, the product of the envelope information before the switching and the second weight 1 is added to the product of the previous M frame envelope information and the second weight 2, and the weighted value is the transitional envelope information.

[0055] Step 405: Decrease the second weight 1 as per the first weight step, and increase the second weight 2 as per the first weight step.

[0056] Specifically, as the speech or audio signals are transmitted, the impact of the wide frequency band speech or audio signals before the switching on the subsequent narrow frequency band speech or audio signals gradually decreases. To calculate the first envelope information more accurately, adaptive adjustment needs to be performed on the second weight 1 and the second weight 2. Because the impact of the L frames of wide frequency band speech or audio signals before the switching on the subsequent speech or audio signals decreases gradually, the value of the second weight 1 gradually turns smaller, while the value of the second weight 2 gradually turns larger, thus weakening the impact of the envelope information before the switching on the first envelope information. In step 405, the second weight 1 and the second weight 2 may be modified according to the following formulas:

New second weight 1 = Old second weight 1 - First weight step;
New second weight 2 = Old second weight 2 + First weight step,

where the first weight step is a set value.

[0057] Step 406: Judge whether a set third weight 1 is greater than the first weight 1.

[0058] Specifically, the third weight 1 refers to the weight value of the transitional envelope information. The impact of the transitional envelope information on the first envelope information of the current frame may be determined by comparing the third weight 1 with the first weight 1.
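Steps 404 and 405 amount to a weighted sum followed by a fixed-step weight update. A minimal sketch (scalar envelope values and the variable names are illustrative only, not taken from the specification):

```python
def transitional_envelope(env_before_switch, prev_m_frame_env, w1, w2):
    # Step 404: envelope information before the switching x second weight 1,
    # plus previous M frame envelope information x second weight 2.
    return env_before_switch * w1 + prev_m_frame_env * w2

def step_weights(w1, step):
    # Step 405: decrease the second weight 1 by the first weight step and
    # increase the second weight 2 by the same step, so their sum stays 1.
    w1 = max(w1 - step, 0.0)
    return w1, 1.0 - w1
```

Iterating `step_weights` frame by frame fades the pre-switch envelope out of the transitional envelope, as paragraph [0056] describes.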
The transitional envelope information is calculated according to the previous M frame envelope information and the envelope information before the switching. Therefore, the third weight 1 actually represents the degree of the impact of the envelope information before the switching on the first envelope information.

[0059] Step 407: If the third weight 1 is not greater than the first weight 1, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
[0060] Specifically, when the third weight 1 is determined to be smaller than or equal to the first weight 1 in step 406, it indicates that the current frame of speech or audio signal is already far from the L frames of speech or audio signals before the switching and that the first envelope information is mainly affected by the previous M frame envelope information. Therefore, the first envelope information of the current frame may be calculated according to the set first weight 1 and the first weight 2.

[0061] Step 408: If the third weight 1 is greater than the first weight 1, weight according to the set third weight 1 and the third weight 2 to calculate the first envelope information. The third weight 1 refers to the weight value of the transitional envelope information, and the third weight 2 refers to the weight value of the predicted envelope information.

[0062] Specifically, if the third weight 1 is determined to be greater than the first weight 1 in step 406, it indicates that the current frame of speech or audio signal is close to the L frames of speech or audio signals before the switching and that the first envelope information is greatly affected by the envelope information before the switching. Therefore, the first envelope information of the current frame needs to be calculated according to the transitional envelope information. In this case, the product of the transitional envelope information and the third weight 1 is added to the product of the predicted envelope information and the third weight 2, and the weighted value is the first envelope information.

[0063] Step 409: Decrease the third weight 1 as per the second weight step, and increase the third weight 2 as per the second weight step until the third weight 1 is equal to 0.
[0064] Specifically, the purpose of modifying the third weight 1 and the third weight 2 in step 409 is the same as that of modifying the second weight 1 and the second weight 2 in step 405: to perform adaptive adjustment on the third weight 1 and the third weight 2 so that the first envelope information is calculated more accurately as the impact of the L frames of speech or audio signals before the switching on subsequently transmitted speech or audio signals gradually decreases. Because this impact decreases gradually, the value of the third weight 1 gradually turns smaller, while the value of the third weight 2 gradually turns larger, thus weakening the impact of the envelope information before the switching on the first envelope information. In step 409, the third weight 1 and the third weight 2 may be modified according to the following formulas:

New third weight 1 = Old third weight 1 - Second weight step;
New third weight 2 = Old third weight 2 + Second weight step,

where the second weight step is a set value.

[0065] The sum of the first weight 1 and the first weight 2 is equal to 1; the sum of the second weight 1 and the second weight 2 is equal to 1; the sum of the third weight 1 and the third weight 2 is equal to 1; the initial value of the third weight 1 is greater than the initial value of the first weight 1; and the first weight 1 and the first weight 2 are fixed constants. Specifically, the weight 1 and the weight 2 in this embodiment actually represent the percentages of the envelope information before the switching and the previous M frame envelope information in the first envelope information of the current frame.
If the current frame of speech or audio signal is close to the L frames of speech or audio signals before the switching and their correlation is high, the percentage of the envelope information before the switching is high, while the percentage of the previous M frame envelope information is low. If the current frame of speech or audio signal is already far from the L frames of speech or audio signals before the switching, it indicates that the speech or audio signal is stably transmitted on the network; if the current frame of speech or audio signal is slightly correlated with the L frames of speech or audio signals before the switching, it indicates that the characteristics of the current frame of speech or audio signal have already changed. Therefore, if the current frame of speech or audio signal is slightly affected by the L frames of speech or audio signals before the switching, the percentage of the envelope information before the switching is low.

[0066] In addition, step 404 may be executed after step 405. That is, the second weight 1 and the second weight 2 may be modified first, and then the transitional envelope information is calculated according to the second weight 1 and the second weight 2. Similarly, step 408 may be executed after step 409. That is, the third weight 1 and the third weight 2 may be modified first, and then the first envelope information is calculated according to the third weight 1 and the third weight 2.
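Taken together, steps 402 through 409 select one of two weighted sums each frame. A hedged sketch of that selection (scalar envelope values, the function and parameter names, and the `in_range` flag standing in for the step 402 judgment are all assumptions made for illustration):

```python
def first_envelope(in_range, prev_frame_env, predicted_env,
                   transitional_env, w1_1, w1_2, w3_1, w3_2):
    if not in_range:
        # Step 403: correlation outside the first threshold range.
        return prev_frame_env * w1_1 + predicted_env * w1_2
    if w3_1 <= w1_1:
        # Step 407: the transitional envelope no longer dominates.
        return prev_frame_env * w1_1 + predicted_env * w1_2
    # Step 408: weight the transitional and predicted envelope information.
    return transitional_env * w3_1 + predicted_env * w3_2
```

Here `w1_1`/`w1_2` mirror "first weight 1"/"first weight 2" (fixed constants summing to 1), and `w3_1`/`w3_2` mirror "third weight 1"/"third weight 2", which step 409 adjusts between frames.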
[0067] 2. As shown in FIG. 5, another embodiment of obtaining the first envelope information through step 302 may further include the following steps:

[0068] Step 501: Calculate a correlation coefficient between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous frame of speech or audio signal.

[0069] Specifically, to obtain more accurate first envelope information, the relationship between a frequency band of the first low frequency band signal of the current frame of speech or audio signal and the same frequency band of the low frequency band signal of the previous frame of speech or audio signal is calculated. In this embodiment, "corr" may be used to indicate the correlation coefficient. This correlation coefficient is obtained according to the energy relationship between the first low frequency band signal of the current frame of speech or audio signal and the low frequency band signal of the previous frame of speech or audio signal. If the energy difference is small, the "corr" is large; otherwise, the "corr" is small. For the specific process, see the calculation of the correlation of the previous N frames of speech or audio signals in step 401.

[0070] Step 502: Judge whether the correlation coefficient is within a given second threshold range.

[0071] Specifically, after the value of the "corr" is calculated in step 501, whether the calculated "corr" value is within the given second threshold range is judged. For example, the second threshold range may be represented by c1 to c2 in this embodiment.

[0072] Step 503: If the correlation coefficient is not within the given second threshold range, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.
The first weight 1 refers to the weight value of the previous frame envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal, and the first weight 2 refers to the weight value of the predicted envelope information. The first weight 1 and the first weight 2 are fixed constants.

[0073] Specifically, when the "corr" value is determined to be smaller than c1 or greater than c2, it is determined that the first envelope information corresponding to the current frame of speech or audio signal is slightly affected by the envelope information of the previous frame of speech or audio signal before the switching. Therefore, the first envelope information of the current frame is calculated according to the set first weight 1 and the first weight 2. The product of the predicted envelope information and the first weight 2 is added to the product of the previous frame envelope information and the first weight 1, and the weighted sum is the first envelope information of the current frame. In addition, subsequently transmitted narrowband speech or audio signals are processed according to this method and these weights, and the first envelope information corresponding to each narrowband speech or audio signal is restored, until speech or audio signals with different bandwidths are switched again. For example, in this embodiment the first weight 1 may be represented by a1; the first weight 2 may be represented by b1; the previous frame envelope information may be represented by pre_fenv; the predicted envelope information may be represented by fenv; and the first envelope information may be represented by cur_fenv. In this case, step 503 may be represented by the following formula: cur_fenv = pre_fenv x a1 + fenv x b1.

[0074] Step 504: If the correlation coefficient is within the second threshold range, judge whether the set second weight 1 is greater than the first weight 1.
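With the symbols introduced for step 503, the calculation is a single weighted sum. A small numeric sketch (the weight values are examples only; the specification requires only that a1 + b1 = 1):

```python
a1, b1 = 0.25, 0.75   # first weight 1 and first weight 2 (example values)
pre_fenv = 1.2        # previous frame envelope information (example value)
fenv = 0.8            # predicted envelope information (example value)

# Step 503: cur_fenv = pre_fenv x a1 + fenv x b1
cur_fenv = pre_fenv * a1 + fenv * b1
```

Because a1 and b1 are fixed constants, every subsequent narrowband frame is restored with the same weighting until the next switching occurs.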
The second weight 1 refers to the weight value of the envelope information before the switching, that is, the envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal before the switching.

[0075] Specifically, if c1 < corr < c2, the degree of the impact of the envelope information before the switching and the previous frame envelope information on the first envelope information of the current frame may be obtained by comparing the second weight 1 with the first weight 1.

[0076] Step 505: If the second weight 1 is not greater than the first weight 1, weight according to the set first weight 1 and the first weight 2 to calculate the first envelope information.

[0077] Specifically, when the second weight 1 is determined to be not greater than the first weight 1 in step 504, it indicates that the current frame of speech or audio signal is already far from the previous frame of speech or audio signal before the switching and that the first envelope information is slightly affected by the previous frame envelope information before the switching. Therefore, the first envelope information of the current frame may be calculated according to the set first weight 1 and the first weight 2. In this case, step 505 may be represented by the following formula: cur_fenv = pre_fenv x a1 + fenv x b1.

[0078] Step 506: If the second weight 1 is greater than the first weight 1, weight according to the second weight 1 and the set second weight 2 to calculate the first envelope information. The second weight 2 refers to the weight value of the predicted envelope information. For example, the second weight 1 may be represented by a2, and the second weight 2 may be represented by b2.
[0079] Specifically, when the second weight 1 is determined to be greater than the first weight 1 in step 504, it indicates that the current frame of speech or audio signal is close to the first frequency band speech or audio signal of the previous frame before the switching and that the first envelope information is greatly affected by the envelope information before the switching that corresponds to the previous frame of speech or audio signal before the switching. Therefore, the first envelope information of the current frame may be calculated according to the set second weight 1 and the second weight 2. In this case, the product of the predicted envelope information and the second weight 2 is added to the product of the envelope information before the switching and the second weight 1, and the weighted sum is the first envelope information of the current frame. The envelope information before the switching may be represented by con_fenv. In this case, step 506 may be represented by the following formula: cur_fenv = con_fenv x a2 + fenv x b2.

[0080] Step 507: Decrease the second weight 1 as per the first weight step, and increase the second weight 2 as per the first weight step.

[0081] Specifically, as the speech or audio signals are transmitted, the impact of a speech or audio signal before the switching on the subsequent frames of speech or audio signals gradually decreases. To calculate the first envelope information more accurately, adaptive adjustment needs to be performed on the second weight 1 and the second weight 2. The impact of the speech or audio signal before the switching on the subsequent frames gradually decreases, while the impact of the previous frame of speech or audio signal close to the current frame gradually turns larger. Therefore, the value of the second weight 1 gradually turns smaller, while the value of the second weight 2 gradually turns larger.
In this way, the impact of the envelope information before the switching on the first envelope information is weakened, while the impact of the predicted envelope information on the first envelope information is enhanced. In step 507, the second weight 1 and the second weight 2 may be modified according to the following formulas:

New second weight 1 = Old second weight 1 - First weight step;
New second weight 2 = Old second weight 2 + First weight step,

where the first weight step is a set value.

[0082] The sum of the first weight 1 and the first weight 2 is equal to 1; the sum of the second weight 1 and the second weight 2 is equal to 1; and the initial value of the second weight 1 is greater than the initial value of the first weight 1.

[0083] Step 303: Generate a processed first high frequency band signal according to the first envelope information and the predicted fine structure information.

[0084] Specifically, after the first envelope information of the current frame is obtained in step 302, the processed first high frequency band signal may be generated according to the first envelope information and the predicted fine structure information, so that the second high frequency band signal can be smoothly switched to the processed first high frequency band signal.

[0085] By using the method for switching speech or audio signals in this embodiment, in the process of switching a speech or audio signal from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal, the processed first high frequency band signal of the current frame is obtained according to the predicted fine structure information and the first envelope information.
In this way, the second high frequency band signal of the wide frequency band speech or audio signal before the switching can be smoothly switched to the processed first high frequency band signal corresponding to the narrow frequency band speech or audio signal, thus improving the quality of audio signals received by the user.

[0086] Based on the preceding technical solution, step 202 shown in FIG. 6 includes the following steps:

[0087] Step 601: Judge whether the processed first high frequency band signal needs to be attenuated according to the current frame of speech or audio signal and the previous frame of speech or audio signal before the switching.

[0088] Specifically, the first high frequency band signal of the narrowband speech or audio signal is null. In the process of switching the wide frequency band speech or audio signal to the narrow frequency band speech or audio signal, to prevent the negative impact of the processed first high frequency band signal corresponding to the restored narrow frequency band speech or audio signal, the energy of the processed first high frequency band signal is attenuated frame by frame until the attenuation coefficient reaches a given threshold, after the number of frames of the wide frequency band signal extended from the narrow frequency band speech or audio signal reaches a given number of frames. The interval between the current frame of speech or audio signal and the speech or audio signal of the frame before the switching may be obtained according to these two signals. For example, the number of frames of the narrow frequency band speech or audio signal may be recorded by using a counter, where the number of frames may be a predetermined value greater than or equal to 0.
[0089] Step 602: If the processed first high frequency band signal does not need to be attenuated, synthesize the processed first high frequency band signal and the first low frequency band signal into a wide frequency band signal.

[0090] Specifically, if it is determined in step 601 that the processed first high frequency band signal does not need to be attenuated, the processed first high frequency band signal and the first low frequency band signal are directly synthesized into a wide frequency band signal.

[0091] Step 603: If the processed first high frequency band signal needs to be attenuated, judge whether the attenuation factor corresponding to the processed first high frequency band signal is greater than the threshold.

[0092] Specifically, the initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1. If it is determined in step 601 that the processed first high frequency band signal needs to be attenuated, whether the attenuation factor corresponding to the processed first high frequency band signal is greater than a given threshold is judged in step 603.

[0093] Step 604: If the attenuation factor is not greater than the given threshold, multiply the processed first high frequency band signal by the threshold, and synthesize the product and the first low frequency band signal into the wide frequency band signal.

[0094] Specifically, if the attenuation factor is determined to be not greater than the given threshold in step 603, it indicates that the energy of the processed first high frequency band signal has already been attenuated to a certain degree and that the processed first high frequency band signal may not cause negative impacts. In this case, this attenuation ratio may be kept. Then, the processed first high frequency band signal is multiplied by the threshold, and the product and the first low frequency band signal are synthesized into a wide frequency band signal.
[0095] Step 605: If the attenuation factor is greater than the given threshold, multiply the processed first high frequency band signal by the attenuation factor, and synthesize the product and the first low frequency band signal into the wide frequency band signal.

[0096] Specifically, if the attenuation factor is greater than the given threshold in step 603, it indicates that the processed first high frequency band signal may cause poor listening at the current attenuation factor and needs to be further attenuated until the factor reaches the given threshold. Then, the processed first high frequency band signal is multiplied by the attenuation factor, and the product and the first low frequency band signal are synthesized into a wide frequency band signal.

[0097] Step 606: Modify the attenuation factor to decrease it.

[0098] Specifically, as the speech or audio signals are transmitted, the impact of the speech or audio signals before the switching on subsequent narrowband speech or audio signals gradually turns smaller, and the attenuation factor also gradually turns smaller.

[0099] Optionally, based on the preceding technical solution, when a switching from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal occurs, an embodiment of obtaining the processed first high frequency band signal through step 201 includes the following steps, as shown in FIG. 7:

[0100] Step 701: Weight according to the set fourth weight 1 and the fourth weight 2 to calculate a processed first high frequency band signal. The fourth weight 1 refers to the weight value of the second high frequency band signal, and the fourth weight 2 refers to the weight value of the first high frequency band signal of the current frame of speech or audio signal.
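The attenuation logic of steps 603 through 606 can be sketched as a per-frame update. The fixed decrement `step` is an assumption (the specification says only that the factor decreases gradually from its initial value of 1), and a scalar stands in for the high band signal:

```python
def attenuate_high_band(high_band, atten_factor, threshold, step):
    if atten_factor > threshold:
        # Step 605: scale by the current attenuation factor.
        scaled = high_band * atten_factor
        # Step 606: decrease the factor for subsequent frames.
        atten_factor = max(atten_factor - step, threshold)
    else:
        # Step 604: the factor has reached the threshold; keep this ratio.
        scaled = high_band * threshold
    return scaled, atten_factor
```

The returned `scaled` signal is then synthesized with the first low frequency band signal into the wide frequency band signal, as steps 604 and 605 describe.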
[0101] Specifically, in the process of switching the narrow frequency band speech or audio signal to the wide frequency band speech or audio signal, because the high frequency band signal of the wide frequency band speech or audio signal is not null while the high frequency band signal corresponding to the narrow frequency band speech or audio signal is null, the energy of the high frequency band signal of the wide frequency band speech or audio signal needs to be attenuated to ensure that the narrow frequency band speech or audio signal can be smoothly switched to the wide frequency band speech or audio signal. The product of the second high frequency band signal and the fourth weight 1 is added to the product of the first high frequency band signal and the fourth weight 2; the weighted value is the processed first high frequency band signal.

[0102] Step 702: Decrease the fourth weight 1 as per the third weight step, and increase the fourth weight 2 as per the third weight step until the fourth weight 1 is equal to 0. The sum of the fourth weight 1 and the fourth weight 2 is equal to 1.

[0103] Specifically, as the speech or audio signals are transmitted, the impact of the narrow frequency band speech or audio signals before the switching on subsequent wide frequency band speech or audio signals gradually turns smaller. Therefore, the fourth weight 1 gradually turns smaller, while the fourth weight 2 gradually turns larger, until the fourth weight 1 is equal to 0 and the fourth weight 2 is equal to 1. That is, the transmitted speech or audio signals are then always wide frequency band speech or audio signals.

[0104] Similarly, as shown in FIG. 8, another embodiment of obtaining the processed first high frequency band signal through step 201 may further include the following steps:

[0105] Step 801: Weight according to the set fifth weight 1 and the fifth weight 2 to calculate a processed first high frequency band signal.
The fifth weight 1 is the weight value of a set fixed parameter, and the fifth weight 2 is the weight value of the first high frequency band signal of the current frame of speech or audio signal.

[0106] Specifically, because the first high frequency band signal of the narrow frequency band speech or audio signal is null, a fixed parameter may be set to replace the high frequency band signal of the narrow frequency band speech or audio signal, where the fixed parameter is a constant greater than or equal to 0 and smaller than the energy of the first high frequency band signal. The product of the fixed parameter and the fifth weight 1 is added to the product of the first high frequency band signal and the fifth weight 2; the weighted value is the processed first high frequency band signal.

[0107] Step 802: Decrease the fifth weight 1 as per the fourth weight step, and increase the fifth weight 2 as per the fourth weight step until the fifth weight 1 is equal to 0. The sum of the fifth weight 1 and the fifth weight 2 is equal to 1.

[0108] Specifically, as the speech or audio signals are transmitted, the impact of the narrow frequency band speech or audio signals before the switching on subsequent wide frequency band speech or audio signals gradually turns smaller. Therefore, the fifth weight 1 gradually turns smaller, while the fifth weight 2 gradually turns larger, until the fifth weight 1 is equal to 0 and the fifth weight 2 is equal to 1. That is, the transmitted speech or audio signals are then always real wide frequency band speech or audio signals.

[0109] By using the method for switching speech or audio signals in this embodiment, in the process of switching a speech or audio signal from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal, the high frequency band signal of the wide frequency band speech or audio signal is attenuated to obtain a processed high frequency band signal.
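Steps 701 and 702 (and, with a set fixed parameter in place of the pre-switch high band, steps 801 and 802) describe a cross-fade. A minimal sketch under that reading (scalar signals, the function name, and the fixed fade step are illustrative assumptions):

```python
def crossfade_high_band(pre_switch_high, cur_high, w_old, fade_step):
    # Step 701/801: weighted sum of the pre-switch contribution (fourth or
    # fifth weight 1) and the first high frequency band signal of the
    # current frame (fourth or fifth weight 2 = 1 - w_old).
    processed = pre_switch_high * w_old + cur_high * (1.0 - w_old)
    # Step 702/802: fade the old weight out; the two weights always sum to 1.
    w_old = max(w_old - fade_step, 0.0)
    return processed, w_old
```

Once `w_old` reaches 0, the output is the current wide frequency band high band only, matching the observation that the transmitted signals are then always wide frequency band speech or audio signals.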
In this way, the high frequency band signal corresponding to the narrow frequency band speech or audio signal before the switching can be smoothly switched to the processed high frequency band signal corresponding to the wide frequency band speech or audio signal, thus helping to improve the quality of audio signals received by the user.

[0110] In this embodiment, the envelope information may also be replaced by other parameters that can represent the high frequency band signal, for example, a linear predictive coding (LPC) parameter or an amplitude parameter.

[0111] Those skilled in the art may understand that all or a part of the steps of the method according to the embodiments of the present invention may be implemented by a program instructing relevant hardware. The program may be stored in a computer readable storage medium. When the program runs, the steps of the method according to the embodiments of the present invention are performed. The storage medium may be a read only memory (ROM), a random access memory (RAM), a magnetic disk, or a compact disk-read only memory
(CD-ROM).
[0112] FIG. 9 shows a structure of the first embodiment of an apparatus for switching speech or audio signals. As shown in FIG. 9, the apparatus for switching speech or audio signals includes a processing module 91 and a first synthesizing module 92.

[0113] The processing module 91 is adapted to weight the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frames of speech or audio signals to obtain a processed first high frequency band signal when a switching of a speech or audio signal occurs, where M is greater than or equal to 1.

[0114] The first synthesizing module 92 is adapted to synthesize the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.

[0115] In the apparatus for switching speech or audio signals in this embodiment, the processing module processes the first high frequency band signal of the current frame of speech or audio signal according to the second high frequency band signal of the previous M frames of speech or audio signals, so that the second high frequency band signal can be smoothly switched to the processed first high frequency band signal. In this way, during the process of switching between speech or audio signals with different bandwidths, the high frequency band signal of these speech or audio signals can be smoothly switched. Finally, the first synthesizing module synthesizes the processed first high frequency band signal and the first low frequency band signal into a wide frequency band signal; the wide frequency band signal is transmitted to a user terminal, so that the user enjoys a high quality speech or audio signal.
By using the method for switching speech or audio signals in this embodiment, speech or audio signals with different bandwidths can be switched smoothly, thus reducing the impact of the sudden energy change on the subjective audio quality of the speech or audio signals and improving the quality of audio signals received by the user. [0116] FIG. 10 shows a structure of the second embodiment of the apparatus for switching speech or audio signals. As shown in FIG. 10, the apparatus for switching speech or audio signals in this embodiment is based on the first embodiment, and further includes a second synthesizing module 103.
[0117] The second synthesizing module 103 is adapted to synthesize the first high frequency band signal and the first low frequency band signal into the wide frequency band signal when a switching of the speech or audio signal does not occur. [0118] In the apparatus for switching speech or audio signals in this embodiment, the second synthesizing module is set to synthesize the first low frequency band signal and the first high frequency band signal of the current frame of speech or audio signal into a wide frequency band signal when a switching between speech or audio signals with different bandwidths does not occur. In this way, the quality of speech or audio signals received by the user is improved. [0119] According to the preceding technical solution, optionally, when a switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal occurs, the processing module 101 includes the following modules, as shown in FIG. 10 and FIG. 11: a predicting module 1011, adapted to predict fine structure information and envelope information corresponding to the first high frequency band signal; a first generating module 1012, adapted to weight the predicted envelope information and the previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals to obtain first envelope information corresponding to the first high frequency band signal; and a second generating module 1013, adapted to generate a processed first high frequency band signal according to the first envelope information and the predicted fine structure information. [0120] Further, the apparatus for switching speech or audio signals in this embodiment may include a classifying module 1010 adapted to classify the first low frequency band signal of the current frame of speech or audio signal.
The predicting module 1011 is further adapted to predict the fine structure information and envelope information according to the signal type of the first low frequency band signal of the current frame of speech or audio signal. [0121] In the apparatus for switching speech or audio signals in this embodiment, the predicting module predicts the fine structure information and envelope information corresponding to the first high frequency band signal, so that the processed first high frequency band signal can be accurately generated by the first generating module and the second generating module. In this way, the first high frequency band signal can be smoothly switched to the processed first high frequency band signal, thus improving the quality of speech or audio signals received by the user. In addition, the classifying module classifies the first low frequency band signal of the current frame of speech or audio signal; the predicting module obtains the predicted fine structure information and predicted envelope information according to the signal type. In this way, the predicted fine structure information and predicted envelope information are more accurate, thus improving the quality of speech or audio signals received by the user. [0122] Based on the preceding technical solution, optionally, the first synthesizing module 102 includes the following modules, as shown in FIG. 10 and FIG.
12: a first judging module 1021, adapted to judge whether the processed first high frequency band signal needs to be attenuated according to the current frame of speech or audio signal and the previous frame of speech or audio signal before the switching; a third synthesizing module 1022, adapted to synthesize the processed first high frequency band signal and the first low frequency band signal into a wide frequency band signal when the first judging module 1021 determines that the processed first high frequency band signal does not need to be attenuated; a second judging module 1023, adapted to judge whether the attenuation factor corresponding to the processed first high frequency band signal is greater than the given threshold when the first judging module 1021 determines that the processed first high frequency band signal needs to be attenuated; a fourth synthesizing module 1024, adapted to: if the second judging module 1023 determines that the attenuation factor is not greater than the given threshold, multiply the processed first high frequency band signal by the threshold, and synthesize the product and the first low frequency band signal into a wide frequency band signal; a fifth synthesizing module 1025, adapted to: if the second judging module 1023 determines that the attenuation factor is greater than the given threshold, multiply the processed first high frequency band signal by the attenuation factor, and synthesize the product and the first low frequency band signal into a wide frequency band signal; and a first modifying module 1026, adapted to modify the attenuation factor to decrease the attenuation factor. [0123] The initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1.
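The attenuation logic carried out by the judging, synthesizing, and modifying modules above ([0122] and [0123]) can be sketched numerically. This is a minimal illustration rather than the patent's implementation: the function name, the concrete threshold and step values, and the external decision of whether attenuation is needed are all assumptions.

```python
THRESHOLD = 0.1   # assumed value; the patent only requires 0 <= threshold < 1
ATTEN_STEP = 0.1  # assumed decrement applied by the first modifying module

def attenuate_high_band(high_band, atten_factor, needs_attenuation):
    """Scale the processed high band signal and update the attenuation factor.

    The initial attenuation factor is 1; it is decreased after every
    attenuated frame. Whether attenuation is needed is assumed to be decided
    elsewhere, from the current frame and the pre-switching frame.
    Returns (scaled_high_band, new_atten_factor).
    """
    if not needs_attenuation:
        return high_band, atten_factor
    if atten_factor > THRESHOLD:
        # Factor still above the threshold: scale by the (decaying) factor.
        scaled = [s * atten_factor for s in high_band]
    else:
        # Factor has decayed to the threshold: clamp scaling at the threshold.
        scaled = [s * THRESHOLD for s in high_band]
    return scaled, max(atten_factor - ATTEN_STEP, 0.0)
```

Per frame, the returned scaled high band would then be synthesized with the first low frequency band signal, and the decayed attenuation factor carried into the next frame.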
[0124] By using the apparatus for switching speech or audio signals, the processed first high frequency band signal is attenuated, so that the wide frequency band signal obtained by processing the current frame of speech or audio signal is more accurate, thus improving the quality of audio signals received by the user. [0125] According to the preceding technical solution, optionally, when a switching from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal occurs, the processing module 101 in this embodiment includes the following modules, as shown in FIG. 10 and FIG. 13a: a first calculating module 1011a, adapted to weight according to a set fourth weight 1 and a fourth weight 2 to calculate the processed first high frequency band signal, where the fourth weight 1 refers to the weight value of the second high frequency band signal and the fourth weight 2 refers to the weight value of the first high frequency band signal; and a second modifying module 1012a, adapted to: decrease the fourth weight 1 as per the third weight step, and increase the fourth weight 2 as per the third weight step until the fourth weight 1 is equal to 0, where the sum of the fourth weight 1 and the fourth weight 2 is equal to 1. [0126] Similarly, when a switching from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal occurs, the processing module 101 in this embodiment may further include the following modules, as shown in FIG. 10 and FIG.
13b: a second calculating module 1011b, adapted to weight according to a set fifth weight 1 and a fifth weight 2 to calculate the processed first high frequency band signal, where the fifth weight 1 refers to the weight value of a set fixed parameter, and the fifth weight 2 refers to the weight value of the first high frequency band signal; and a third modifying module 1012b, adapted to: decrease the fifth weight 1 as per the fourth weight step, and increase the fifth weight 2 as per the fourth weight step until the fifth weight 1 is equal to 0, where the sum of the fifth weight 1 and the fifth weight 2 is equal to 1, where the fixed parameter is a fixed constant greater than or equal to 0 and smaller than the energy value of the first high frequency band signal. [0127] By using the apparatus for switching speech or audio signals in this embodiment, in the process of switching a speech or audio signal from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal, the high frequency band signal of the wide frequency band speech or audio signal is attenuated to obtain a processed high frequency band signal. In this way, the high frequency band signal corresponding to the narrow frequency band speech or audio signal before the switching can be smoothly switched to the processed high frequency band signal corresponding to the wide frequency band speech or audio signal, thus helping to improve the quality of audio signals received by the user. [0128] It should be noted that the above embodiments are merely provided for describing the technical solution of the present invention, but not intended to limit the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, it is apparent that those skilled in the art can make various modifications and variations to the invention without departing from the spirit and scope of the invention.
The invention shall cover the modifications and variations provided that they fall within the scope of protection defined by the following claims or their equivalents.

Claims (16)

1. A method for switching speech or audio signals, comprising: when a switching of a speech or audio signal occurs, weighting a first high frequency band signal of a current frame of speech or audio signal and a second high frequency band signal of previous M frame of speech or audio signals to obtain a processed first high frequency band signal, wherein M is greater than or equal to 1; and synthesizing the processed first high frequency band signal and a first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
2. The method of claim 1, further comprising: when a switching of the speech or audio signal does not occur, synthesizing the first high frequency band signal and the first low frequency band signal into the wide frequency band signal.
3. The method of claim 1 or 2, wherein when a switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal occurs, the step of weighting the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frame of speech or audio signals to obtain the processed first high frequency band signal comprises: predicting the fine structure information and the envelope information corresponding to the first high frequency band signal of the current frame of speech or audio signal; weighting the predicted envelope information and previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals to obtain first envelope information corresponding to the first high frequency band signal; and generating the processed first high frequency band signal according to the first envelope information and the predicted fine structure information.
4. The method of claim 3, wherein the step of predicting the fine structure information and envelope information corresponding to the first high frequency band signal of the current frame of speech or audio signal comprises: classifying the first low frequency band signal of the current frame of speech or audio signal; and predicting the fine structure information and envelope information according to the signal type of the first low frequency band signal.
5. The method of claim 3, wherein the step of weighting the predicted envelope information and the previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals to obtain the first envelope information corresponding to the first high frequency band signal comprises: calculating a correlation coefficient between the first low frequency band signal and a low frequency band signal of previous N frame of speech or audio signals according to the first low frequency band signal and the low frequency band signal of the previous N frame of speech or audio signals, wherein N is greater than or equal to 1; judging whether the correlation coefficient is within a given first threshold range; if the correlation coefficient is not within the first threshold range, weighting according to a set first weight 1 and a set first weight 2 to calculate the first envelope information, wherein the first weight 1 refers to a weight value of previous frame envelope information corresponding to a high frequency band signal of a previous frame of speech or audio signal and the first weight 2 refers to a weight value of the predicted envelope information; if the correlation coefficient is within the first threshold range, weighting according to a set second weight 1 and a set second weight 2 to calculate transitional envelope information, wherein the second weight 1 refers to a weight value of envelope information corresponding to a high frequency band signal of L frame of speech or audio signals before the switching and the second weight 2 refers to the weight value of the previous M frame envelope information, wherein L is greater than or equal to 1; decreasing the second weight 1 as per a first weight step, and increasing the second weight 2 as per the first weight step; judging whether a set third weight 1 is greater than the first weight 1; if the third weight 1 is not greater than the first weight 1, weighting
according to the set first weight 1 and the first weight 2 to calculate the first envelope information; if the third weight 1 is greater than the first weight 1, weighting according to the set third weight 1 and a third weight 2 to calculate the first envelope information, wherein the third weight 1 refers to a weight value of the transitional envelope information and the third weight 2 refers to a weight value of the predicted envelope information; and decreasing the third weight 1 as per a second weight step, and increasing the third weight 2 as per the second weight step until the third weight 1 is equal to 0; wherein: a sum of the first weight 1 and the first weight 2 is equal to 1; a sum of the second weight 1 and the second weight 2 is equal to 1; a sum of the third weight 1 and the third weight 2 is equal to 1; an initial value of the third weight 1 is greater than an initial value of the first weight 1; and the first weight 1 and the first weight 2 are fixed constants.
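The correlation gate of claim 5 above can be sketched numerically. This is a simplified illustration, not the claimed method in full: the threshold range, fixed first weights, and frame values are invented for illustration, and the transitional-envelope path is collapsed so that the ramped weight blends the previous-frame envelope directly with the predicted envelope.

```python
FIRST_W1, FIRST_W2 = 0.5, 0.5    # assumed fixed constants; they must sum to 1
THRESH_LO, THRESH_HI = 0.5, 1.0  # assumed "first threshold range"

def correlation(x, y):
    """Normalized cross-correlation of two equal-length low-band frames."""
    num = sum(a * b for a, b in zip(x, y))
    den = (sum(a * a for a in x) * sum(b * b for b in y)) ** 0.5
    return num / den if den else 0.0

def first_envelope(prev_env, pred_env, corr, third_w1):
    """Pick the envelope mix: fixed weights when the correlation is outside
    the threshold range (or the ramp has decayed), otherwise a ramped mix
    whose weight third_w1 decreases toward 0 across frames."""
    if not (THRESH_LO <= corr <= THRESH_HI) or third_w1 <= FIRST_W1:
        return [FIRST_W1 * p + FIRST_W2 * q for p, q in zip(prev_env, pred_env)]
    return [third_w1 * p + (1.0 - third_w1) * q
            for p, q in zip(prev_env, pred_env)]
```

Across successive frames the caller would decrease `third_w1` by the second weight step; once it falls to the fixed first weight, the mix reduces to the fixed-weight case.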
6. The method of claim 3, wherein the step of weighting the predicted envelope information and the previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals to obtain the first envelope information corresponding to the first high frequency band signal comprises: calculating a correlation coefficient between the first low frequency band signal of a current frame and a low frequency band signal of a previous frame of speech or audio signal according to the first low frequency band signal of the current frame and the low frequency band signal of the previous frame of speech or audio signal; judging whether the correlation coefficient is within a given second threshold range; if the correlation coefficient is not within the second threshold range, weighting according to a set first weight 1 and a set first weight 2 to calculate the first envelope information, wherein the first weight 1 refers to a weight value of previous frame envelope information corresponding to a high frequency band signal of the previous frame of speech or audio signal and the first weight 2 refers to a weight value of the predicted envelope information; and the first weight 1 and the first weight 2 are fixed constants; if the correlation coefficient is within the second threshold range, judging whether a set second weight 1 is greater than the first weight 1, wherein the second weight 1 refers to a weight value of envelope information corresponding to the high frequency band signal of the previous frame of speech or audio signal before the switching; if the second weight 1 is not greater than the first weight 1, weighting according to the set first weight 1 and the first weight 2 to calculate the first envelope information; if the second weight 1 is greater than the first weight 1, weighting according to the second weight 1 and a set second weight 2 to calculate the first envelope information, wherein the
second weight 2 refers to a weight value of the predicted envelope information; and decreasing the second weight 1 as per a second weight step, and increasing the second weight 2 as per the second weight step; wherein: a sum of the first weight 1 and the first weight 2 is equal to 1; a sum of the second weight 1 and the second weight 2 is equal to 1; an initial value of the second weight 1 is greater than an initial value of the first weight 1.
7. The method of claim 3, wherein the step of synthesizing the processed first high frequency band signal and the first low frequency band signal of the current frame of speech or audio signal into the wide frequency band signal comprises: judging whether the processed first high frequency band signal needs to be attenuated according to the current frame of speech or audio signal and a previous frame of speech or audio signal before the switching; if attenuation is not required, synthesizing the processed first high frequency band signal and the first low frequency band signal into the wide frequency band signal; if attenuation is required, judging whether an attenuation factor corresponding to the first high frequency band signal is greater than a given threshold; if the attenuation factor is not greater than the given threshold, multiplying the processed first high frequency band signal by the threshold, and synthesizing the product of the processed first high frequency band signal and the threshold and the first low frequency band signal into the wide frequency band signal; if the attenuation factor is greater than the given threshold, multiplying the processed first high frequency band signal by the attenuation factor, and synthesizing the product of the processed first high frequency band signal and the attenuation factor and the first low frequency band signal into the wide frequency band signal; and modifying the attenuation factor to decrease the attenuation factor; wherein: an initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1.
8. The method of claim 1 or 2, wherein when a switching from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal occurs, the step of weighting the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frame of speech or audio signals to obtain the processed first high frequency band signal comprises: weighting according to a set fourth weight 1 and a set fourth weight 2 to calculate the processed first high frequency band signal, wherein the fourth weight 1 refers to a weight value of the second high frequency band signal and the fourth weight 2 refers to a weight value of the first high frequency band signal; and decreasing the fourth weight 1 as per a third weight step, and increasing the fourth weight 2 as per the third weight step until the fourth weight 1 is equal to 0, wherein a sum of the fourth weight 1 and the fourth weight 2 is equal to 1.
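The ramped weighting of claim 8 amounts to a per-frame cross-fade; the fixed-parameter scheme of claim 9 follows by replacing the second high frequency band signal with a constant. The function name, the weight step value, and the sample values below are illustrative assumptions, not values fixed by the claims.

```python
WEIGHT_STEP = 0.25  # assumed "third weight step"; the claims do not fix a value

def crossfade_high_band(second_high, first_high, w1):
    """Weight the second high band signal (fourth weight 1 = w1) against the
    first high band signal (fourth weight 2 = 1 - w1), then decrease w1 by
    the weight step for the next frame until it reaches 0."""
    mixed = [w1 * s + (1.0 - w1) * f for s, f in zip(second_high, first_high)]
    return mixed, max(w1 - WEIGHT_STEP, 0.0)
```

Starting with a weight near 1 at the switching frame and feeding the decayed weight back in, successive calls fade the output from the pre-switching high band to the current one; once `w1` reaches 0, only the first high frequency band signal remains.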
9. The method of claim 1 or 2, wherein when a switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal occurs, the step of weighting the first high frequency band signal of the current frame of speech or audio signal and the second high frequency band signal of the previous M frame of speech or audio signals to obtain the processed first high frequency band signal comprises: weighting according to a set fifth weight 1 and a set fifth weight 2 to calculate the processed first high frequency band signal, wherein the fifth weight 1 refers to a weight value of a set fixed parameter, and the fifth weight 2 refers to a weight value of the first high frequency band signal; and reducing the fifth weight 1 as per a fourth weight step, and increasing the fifth weight 2 as per the fourth weight step until the fifth weight 1 is equal to 0, wherein a sum of the fifth weight 1 and the fifth weight 2 is equal to 1; wherein: the fixed parameter is a constant greater than or equal to 0 and smaller than an energy value of the first high frequency band signal.
10. An apparatus for switching speech or audio signals, comprising: a processing module, adapted to: when a switching of a speech or audio signal occurs, weight a first high frequency band signal of a current frame of speech or audio signal and a second high frequency band signal of previous M frame of speech or audio signals to obtain a processed first high frequency band signal, wherein M is greater than or equal to 1; and a first synthesizing module, adapted to synthesize the processed first high frequency band signal and a first low frequency band signal of the current frame of speech or audio signal into a wide frequency band signal.
11. The apparatus of claim 10, further comprising: a second synthesizing module, adapted to synthesize the first high frequency band signal and the first low frequency band signal into the wide frequency band signal when a switching of the speech or audio signal does not occur.
12. The apparatus of claim 10 or 11, wherein when a switching from a wide frequency band speech or audio signal to a narrow frequency band speech or audio signal occurs, the processing module comprises: a predicting module, adapted to predict the fine structure information and the envelope information corresponding to the first high frequency band signal of the current frame of speech or audio signal; a first generating module, adapted to weight the predicted envelope information and previous M frame envelope information corresponding to the second high frequency band signal of the previous M frame of speech or audio signals to obtain first envelope information corresponding to the first high frequency band signal; and a second generating module, adapted to generate the processed first high frequency band signal according to the first envelope information and the predicted fine structure information.
13. The apparatus of claim 12, further comprising a classifying module adapted to classify the first low frequency band signal of the current frame of speech or audio signal, wherein: the predicting module is further adapted to predict the fine structure information and the envelope information according to the signal type of the first low frequency band signal.
14. The apparatus of claim 12, wherein the first synthesizing module comprises: a first judging module, adapted to judge whether the processed first high frequency band signal needs to be attenuated according to the current frame of speech or audio signal and a previous frame of speech or audio signal before the switching; a third synthesizing module, adapted to synthesize the processed first high frequency band signal and the first low frequency band signal into the wide frequency band signal when the first judging module determines that the processed first high frequency band signal does not need to be attenuated; a second judging module, adapted to judge whether an attenuation factor corresponding to the processed first high frequency band signal is greater than a given threshold when the first judging module determines that the processed first high frequency band signal needs to be attenuated; a fourth synthesizing module, adapted to: if the second judging module determines that the attenuation factor is not greater than the given threshold, multiply the processed first high frequency band signal by the threshold, and synthesize the product and the first low frequency band signal into the wide frequency band signal; a fifth synthesizing module, adapted to: if the second judging module determines that the attenuation factor is greater than the given threshold, multiply the processed first high frequency band signal by the attenuation factor, and synthesize the product and the first low frequency band signal into the wide frequency band signal; and a first modifying module, adapted to modify the attenuation factor to decrease the attenuation factor; wherein: an initial value of the attenuation factor is 1, and the threshold is greater than or equal to 0 and smaller than 1.
15. The apparatus of claim 10 or 11, wherein, when a switching from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal occurs, the processing module comprises: a first calculating module, adapted to weight according to a set fourth weight 1 and a set fourth weight 2 to calculate the processed first high frequency band signal, wherein the fourth weight 1 refers to a weight value of the second high frequency band signal and the fourth weight 2 refers to a weight value of the first high frequency band signal; and a second modifying module, adapted to: decrease the fourth weight 1 as per a third weight step, and increase the fourth weight 2 as per the third weight step until the fourth weight 1 is equal to 0, wherein a sum of the fourth weight 1 and the fourth weight 2 is equal to 1.
16. The apparatus of claim 13, wherein, when a switching from a narrow frequency band speech or audio signal to a wide frequency band speech or audio signal occurs, the processing module comprises: a second calculating module, adapted to weight according to a set fifth weight 1 and a set fifth weight 2 to calculate the processed first high frequency band signal, wherein the fifth weight 1 refers to a weight value of a set fixed parameter and the fifth weight 2 refers to a weight value of the first high frequency band signal; and a third modifying module, adapted to: decrease the fifth weight 1 as per a fourth weight step, and increase the fifth weight 2 as per the fourth weight step until the fifth weight 1 is equal to 0, wherein a sum of the fifth weight 1 and the fifth weight 2 is equal to 1, wherein the fixed parameter is a constant greater than or equal to 0 and smaller than an energy value of the first high frequency band signal.
AU2011247719A 2010-04-28 2011-04-28 Method and apparatus for switching speech or audio signals Active AU2011247719B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2010101634063A CN101964189B (en) 2010-04-28 2010-04-28 Audio signal switching method and device
CN201010163406.3 2010-04-28
PCT/CN2011/073479 WO2011134415A1 (en) 2010-04-28 2011-04-28 Audio signal switching method and device

Publications (2)

Publication Number Publication Date
AU2011247719A1 true AU2011247719A1 (en) 2012-06-07
AU2011247719B2 AU2011247719B2 (en) 2013-07-11

Family

ID=43517042

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2011247719A Active AU2011247719B2 (en) 2010-04-28 2011-04-28 Method and apparatus for switching speech or audio signals

Country Status (8)

Country Link
EP (2) EP3249648B1 (en)
JP (3) JP5667202B2 (en)
KR (1) KR101377547B1 (en)
CN (1) CN101964189B (en)
AU (1) AU2011247719B2 (en)
BR (1) BR112012013306B8 (en)
ES (2) ES2718947T3 (en)
WO (1) WO2011134415A1 (en)

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101110800B1 (en) * 2003-05-28 2012-07-06 도꾸리쯔교세이호진 상교기쥬쯔 소고겡뀨죠 Process for producing hydroxyl group-containing compound
US8284955B2 (en) 2006-02-07 2012-10-09 Bongiovi Acoustics Llc System and method for digital signal processing
US10848118B2 (en) 2004-08-10 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
US10158337B2 (en) 2004-08-10 2018-12-18 Bongiovi Acoustics Llc System and method for digital signal processing
US11431312B2 (en) 2004-08-10 2022-08-30 Bongiovi Acoustics Llc System and method for digital signal processing
US10701505B2 (en) 2006-02-07 2020-06-30 Bongiovi Acoustics Llc. System, method, and apparatus for generating and digitally processing a head related audio transfer function
US10848867B2 (en) 2006-02-07 2020-11-24 Bongiovi Acoustics Llc System and method for digital signal processing
CN101964189B (en) * 2010-04-28 2012-08-08 华为技术有限公司 Audio signal switching method and device
US8000968B1 (en) 2011-04-26 2011-08-16 Huawei Technologies Co., Ltd. Method and apparatus for switching speech or audio signals
CN105761724B (en) * 2012-03-01 2021-02-09 华为技术有限公司 Voice frequency signal processing method and device
CN103295578B (en) 2012-03-01 2016-05-18 华为技术有限公司 A kind of voice frequency signal processing method and device
CN103516440B (en) 2012-06-29 2015-07-08 华为技术有限公司 Audio signal processing method and encoding device
CN103971693B (en) 2013-01-29 2017-02-22 华为技术有限公司 Forecasting method for high-frequency band signal, encoding device and decoding device
US9883318B2 (en) 2013-06-12 2018-01-30 Bongiovi Acoustics Llc System and method for stereo field enhancement in two-channel audio systems
US9397629B2 (en) * 2013-10-22 2016-07-19 Bongiovi Acoustics Llc System and method for digital signal processing
US9906858B2 (en) 2013-10-22 2018-02-27 Bongiovi Acoustics Llc System and method for digital signal processing
US9524720B2 (en) * 2013-12-15 2016-12-20 Qualcomm Incorporated Systems and methods of blind bandwidth extension
CN103714822B (en) * 2013-12-27 2017-01-11 广州华多网络科技有限公司 Sub-band coding and decoding method and device based on SILK coder decoder
KR101864122B1 (en) * 2014-02-20 2018-06-05 삼성전자주식회사 Electronic apparatus and controlling method thereof
US10820883B2 (en) 2014-04-16 2020-11-03 Bongiovi Acoustics Llc Noise reduction assembly for auscultation of a body
US10639000B2 (en) 2014-04-16 2020-05-05 Bongiovi Acoustics Llc Device for wide-band auscultation
EP2980794A1 (en) 2014-07-28 2016-02-03 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audio encoder and decoder using a frequency domain processor and a time domain processor
AU2017219696B2 (en) 2016-02-17 2018-11-08 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Post-processor, pre-processor, audio encoder, audio decoder and related methods for enhancing transient processing
US11211043B2 (en) 2018-04-11 2021-12-28 Bongiovi Acoustics Llc Audio enhanced hearing protection system
CN110556116B (en) * 2018-05-31 2021-10-22 华为技术有限公司 Method and apparatus for calculating downmix signal and residual signal
WO2020028833A1 (en) 2018-08-02 2020-02-06 Bongiovi Acoustics Llc System, method, and apparatus for generating and digitally processing a head related audio transfer function
CN112002333B (en) * 2019-05-07 2023-07-18 海能达通信股份有限公司 Voice synchronization method and device and communication terminal
CN117373465B (en) * 2023-12-08 2024-04-09 富迪科技(南京)有限公司 Voice frequency signal switching system

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4330689A (en) * 1980-01-28 1982-05-18 The United States Of America As Represented By The Secretary Of The Navy Multirate digital voice communication processor
US4769833A (en) * 1986-03-31 1988-09-06 American Telephone And Telegraph Company Wideband switching system
US5019910A (en) * 1987-01-29 1991-05-28 Norsat International Inc. Apparatus for adapting computer for satellite communications
FI115329B (en) * 2000-05-08 2005-04-15 Nokia Corp Method and arrangement for switching the source signal bandwidth in a communication connection equipped for many bandwidths
US7113522B2 (en) * 2001-01-24 2006-09-26 Qualcomm, Incorporated Enhanced conversion of wideband signals to narrowband signals
KR100940531B1 (en) * 2003-07-16 2010-02-10 삼성전자주식회사 Wide-band speech compression and decompression apparatus and method thereof
JP2005080079A (en) * 2003-09-02 2005-03-24 Sony Corp Sound reproduction device and its method
FI119533B (en) * 2004-04-15 2008-12-15 Nokia Corp Coding of audio signals
CN1950883A (en) * 2004-04-30 2007-04-18 松下电器产业株式会社 Scalable decoder and expanded layer disappearance hiding method
CA2575215A1 (en) * 2004-07-28 2006-02-02 Matsushita Electric Industrial Co., Ltd. Relay device and signal decoding device
WO2006028009A1 (en) * 2004-09-06 2006-03-16 Matsushita Electric Industrial Co., Ltd. Scalable decoding device and signal loss compensation method
WO2006075663A1 (en) * 2005-01-14 2006-07-20 Matsushita Electric Industrial Co., Ltd. Audio switching device and audio switching method
US8249861B2 (en) * 2005-04-20 2012-08-21 Qnx Software Systems Limited High frequency compression integration
WO2007000988A1 (en) * 2005-06-29 2007-01-04 Matsushita Electric Industrial Co., Ltd. Scalable decoder and disappeared data interpolating method
US8194865B2 (en) * 2007-02-22 2012-06-05 Personics Holdings Inc. Method and device for sound detection and audio control
CN101425292B (en) * 2007-11-02 2013-01-02 华为技术有限公司 Decoding method and device for audio signal
WO2009056027A1 (en) * 2007-11-02 2009-05-07 Huawei Technologies Co., Ltd. An audio decoding method and device
CN100585699C (en) * 2007-11-02 2010-01-27 华为技术有限公司 A kind of method and apparatus of audio decoder
CN101964189B (en) * 2010-04-28 2012-08-08 华为技术有限公司 Audio signal switching method and device

Also Published As

Publication number Publication date
EP2485029B1 (en) 2017-06-14
CN101964189B (en) 2012-08-08
JP6027081B2 (en) 2016-11-16
KR20120074303A (en) 2012-07-05
ES2718947T3 (en) 2019-07-05
EP3249648B1 (en) 2019-01-09
BR112012013306B8 (en) 2021-02-17
BR112012013306A2 (en) 2016-03-01
ES2635212T3 (en) 2017-10-02
JP5667202B2 (en) 2015-02-12
EP2485029A1 (en) 2012-08-08
JP6410777B2 (en) 2018-10-24
KR101377547B1 (en) 2014-03-25
CN101964189A (en) 2011-02-02
JP2015045888A (en) 2015-03-12
EP2485029A4 (en) 2013-01-02
WO2011134415A1 (en) 2011-11-03
JP2017033015A (en) 2017-02-09
AU2011247719B2 (en) 2013-07-11
EP3249648A1 (en) 2017-11-29
JP2013512468A (en) 2013-04-11
BR112012013306B1 (en) 2020-11-10

Similar Documents

Publication Publication Date Title
AU2011247719B2 (en) Method and apparatus for switching speech or audio signals
US8214218B2 (en) Method and apparatus for switching speech or audio signals
US10559313B2 (en) Speech/audio signal processing method and apparatus
KR101427863B1 (en) Audio signal coding method and apparatus
JP7387879B2 (en) Audio encoding method and device
EP3113181B1 (en) Decoding device and decoding method
JP2014507681A (en) Method and apparatus for extending bandwidth
CN105761724B (en) Voice frequency signal processing method and device

Legal Events

Date Code Title Description
DA3 Amendments made section 104

Free format text: THE NATURE OF THE AMENDMENT IS: AMEND THE INVENTION TITLE TO READ METHOD AND APPARATUS FOR SWITCHING SPEECH OR AUDIO SIGNALS

FGA Letters patent sealed or granted (standard patent)