EP3147900B1 - Method and apparatus for processing audio signals - Google Patents

Method and apparatus for processing audio signals

Info

Publication number
EP3147900B1
Authority
EP
European Patent Office
Prior art keywords
value
speech
audio signal
sample value
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP15802508.0A
Other languages
English (en)
French (fr)
Other versions
EP3147900A4 (de)
EP3147900A1 (de)
Inventor
Zexin Liu
Lei Miao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to EP23184053.9A priority Critical patent/EP4283614A3/de
Priority to EP19190663.5A priority patent/EP3712890B1/de
Publication of EP3147900A1 publication Critical patent/EP3147900A1/de
Publication of EP3147900A4 publication Critical patent/EP3147900A4/de
Application granted granted Critical
Publication of EP3147900B1 publication Critical patent/EP3147900B1/de
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/028Noise substitution, i.e. substituting non-tonal spectral components by noisy source
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/012Comfort noise or silence coding
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/167Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/26Pre-filtering or post-filtering
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/038Speech enhancement, e.g. noise reduction or echo cancellation using band spreading techniques

Definitions

  • the present invention relates to the communications field, and in particular, to a method for processing a speech/audio signal and an apparatus.
  • In the prior art, an electronic device reconstructs a noise component of a speech/audio signal obtained by means of decoding generally by adding a random noise signal to the speech/audio signal. Specifically, weighted addition is performed on the speech/audio signal and the random noise signal, to obtain a signal after the noise component of the speech/audio signal is reconstructed.
  • the speech/audio signal may be a time-domain signal, a frequency-domain signal, or an excitation signal, or may be a low frequency signal, a high frequency signal, or the like.
  • the present invention provides a method for processing a speech/audio signal and an apparatus, so that for a speech/audio signal having an onset or an offset, when a noise component of the speech/audio signal is reconstructed, a signal obtained after the noise component of the speech/audio signal is reconstructed does not have an echo, thereby improving auditory quality of the signal obtained after the noise component is reconstructed.
  • the present invention provides a method for processing a speech/audio signal according to claim 1.
  • the present invention provides an apparatus for reconstructing a noise component of a speech/audio signal according to claim 7.
  • Preferred embodiments are set forth in the dependent claims.
  • in the process of the invention, only an original signal, that is, the first speech/audio signal, is processed, and no new signal is added to the first speech/audio signal, so that no new energy is added to a second speech/audio signal obtained after a noise component is reconstructed. Therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
  • FIG. 1 is a flowchart of a method for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
  • the method includes: Step 101: Receive a bitstream, and decode the bitstream, to obtain a speech/audio signal.
  • Step 102: Determine a first speech/audio signal according to the speech/audio signal, where the first speech/audio signal is a signal, whose noise component needs to be reconstructed, in the speech/audio signal obtained by means of decoding.
  • the first speech/audio signal may be a low frequency band signal, a high frequency band signal, a fullband signal, or the like in the speech/audio signal obtained by means of decoding.
  • the speech/audio signal obtained by means of decoding may include a low frequency band signal and a high frequency band signal, or may include a fullband signal.
  • Step 103: Determine a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal.
  • implementation manners of the sample value differ with the form of the speech/audio signal: when the speech/audio signal is a frequency-domain signal, the sample value may be a spectrum coefficient, and when the speech/audio signal is a time-domain signal, the sample value may be a sample point value, as illustrated below.
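  • A minimal sketch of this sign/amplitude split, assuming numpy and treating frequency-domain spectrum coefficients as the sample values (the concrete numbers are made up):

```python
import numpy as np

# Hypothetical spectrum coefficients of a decoded first speech/audio signal.
coeffs = np.array([-4.0, 2.5, -0.5, 3.0, -1.0])

signs = np.sign(coeffs)      # sign of each sample value (-1.0 or +1.0 here)
amplitudes = np.abs(coeffs)  # amplitude value of each sample value

# A coefficient of -4 yields the sign "-" (represented as -1.0) and the
# amplitude value 4; sign * amplitude recovers the original coefficient.
```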
  • Step 104: Determine an adaptive normalization length.
  • the adaptive normalization length may be determined according to a related parameter of a low frequency band signal and/or a high frequency band signal of the speech/audio signal obtained by means of decoding.
  • the related parameter may include a signal type, a peak-to-average ratio, and the like.
  • the determining an adaptive normalization length includes: dividing the low frequency band signal in the speech/audio signal into N subbands, where N is a natural number; calculating a peak-to-average ratio of each subband, and determining a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and calculating the adaptive normalization length according to a signal type of the high frequency band signal in the speech/audio signal and the quantity of the subbands.
  • the calculating the adaptive normalization length according to a signal type of the high frequency band signal in the speech/audio signal and the quantity of the subbands may include: calculating the adaptive normalization length according to a formula L = K + α × M, where K is a numerical value corresponding to the signal type of the high frequency band signal, and different signal types of high frequency band signals correspond to different numerical values K.
  • the adaptive normalization length may be calculated according to a signal type of the low frequency band signal in the speech/audio signal and the quantity of the subbands.
  • L = K + α × M, where L is the adaptive normalization length, M is the quantity of the subbands whose peak-to-average ratios are greater than the preset peak-to-average ratio threshold, and α is a constant less than 1.
  • K is a numerical value corresponding to the signal type of the low frequency band signal in the speech/audio signal.
  • Different signal types of low frequency band signals correspond to different numerical values K.
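  • A minimal sketch of this calculation; the K lookup table, the constant α, and the peak-to-average ratio threshold below are illustrative assumptions, since their concrete values are not specified here:

```python
import numpy as np

def peak_to_average_ratio(subband: np.ndarray) -> float:
    """Ratio of the peak power to the average power within one subband."""
    power = subband.astype(float) ** 2
    return float(power.max() / (power.mean() + 1e-12))  # guard against silence

def adaptive_normalization_length(low_band: np.ndarray,
                                  signal_type: str,
                                  n_subbands: int = 8,           # N, assumed
                                  ratio_threshold: float = 4.0,  # assumed
                                  alpha: float = 0.5) -> int:    # constant < 1
    # Divide the low frequency band signal into N subbands.
    subbands = np.array_split(low_band, n_subbands)
    # M: quantity of subbands whose peak-to-average ratio exceeds the threshold.
    m = sum(peak_to_average_ratio(sb) > ratio_threshold for sb in subbands)
    # K: numerical value corresponding to the signal type (illustrative table).
    k_table = {"harmonic": 24, "normal": 12, "transient": 4}
    return int(k_table[signal_type] + alpha * m)  # L = K + alpha * M
```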
  • the determining an adaptive normalization length may include: calculating a peak-to-average ratio of the low frequency band signal in the speech/audio signal and a peak-to-average ratio of the high frequency band signal in the speech/audio signal; and when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is less than a preset difference threshold, determining the adaptive normalization length as a preset first length value, or when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is not less than a preset difference threshold, determining the adaptive normalization length as a preset second length value.
  • the first length value is greater than the second length value.
  • the first length value and the second length value may also be obtained by means of calculation by using a ratio of the peak-to-average ratio of the low frequency band signal to the peak-to-average ratio of the high frequency band signal or a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal.
  • a specific calculation method is not limited.
  • the determining an adaptive normalization length may include: calculating a peak-to-average ratio of the low frequency band signal in the speech/audio signal and a peak-to-average ratio of the high frequency band signal in the speech/audio signal; and when the peak-to-average ratio of the low frequency band signal is less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset first length value, or when the peak-to-average ratio of the low frequency band signal is not less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset second length value.
  • the first length value is greater than the second length value.
  • the first length value and the second length value may also be obtained by means of calculation by using a ratio of the peak-to-average ratio of the low frequency band signal to the peak-to-average ratio of the high frequency band signal or a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal.
  • a specific calculation method is not limited.
  • the determining an adaptive normalization length may include: determining the adaptive normalization length according to a signal type of the high frequency band signal in the speech/audio signal. Different signal types correspond to different adaptive normalization lengths. For example, when the signal type is a harmonic signal, a corresponding adaptive normalization length is 32; when the signal type is a normal signal, a corresponding adaptive normalization length is 16; when the signal type is a transient signal, a corresponding adaptive normalization length is 8.
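  • The three alternative determinations above can be sketched as follows; the difference threshold and the first/second length values are assumed placeholders, while the per-type lengths 32, 16, and 8 are taken from the example above:

```python
def length_from_ratio_difference(low_ratio: float, high_ratio: float,
                                 diff_threshold: float = 2.0,   # assumed
                                 first_len: int = 32,           # assumed
                                 second_len: int = 16) -> int:  # assumed
    # The first length value must be greater than the second length value.
    return first_len if abs(low_ratio - high_ratio) < diff_threshold else second_len

def length_from_ratio_comparison(low_ratio: float, high_ratio: float,
                                 first_len: int = 32, second_len: int = 16) -> int:
    return first_len if low_ratio < high_ratio else second_len

# Signal-type lookup using the lengths given in the text above.
LENGTH_BY_SIGNAL_TYPE = {"harmonic": 32, "normal": 16, "transient": 8}
```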
  • Step 105: Determine an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value.
  • the determining an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value includes:
  • the calculating, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value includes:
  • the determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs may include: performing subband grouping on all sample values in a preset order according to the adaptive normalization length; and for each sample value, determining a subband including the sample value as the subband to which the sample value belongs.
  • the preset order may be, for example, an order from a low frequency to a high frequency or an order from a high frequency to a low frequency, which is not limited herein.
  • for example, when the adaptive normalization length is 5, x1 to x5 may be grouped into one subband
  • x6 to x10 may be grouped into one subband.
  • after the grouping, several subbands are obtained. Therefore, for each sample value in x1 to x5, the subband consisting of x1 to x5 is the subband to which the sample value belongs, and for each sample value in x6 to x10, the subband consisting of x6 to x10 is the subband to which the sample value belongs, as sketched below.
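  • A minimal sketch of this fixed grouping, assuming the sample values are ordered from low frequency to high frequency:

```python
def group_into_subbands(values: list, length: int) -> list:
    """Group consecutive sample values into subbands of `length` samples;
    with length 5, x1..x5 form one subband and x6..x10 the next."""
    return [values[i:i + length] for i in range(0, len(values), length)]

# Example: ten sample values and an adaptive normalization length of 5.
subbands = group_into_subbands(list(range(1, 11)), 5)  # [[1..5], [6..10]]
```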
  • alternatively, the determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs may include: for each sample value, determining a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, where m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0.
  • sample values in ascending order are respectively x1, x2, x3, ..., and xn
  • the adaptive normalization length is 5
  • m is 2
  • n is 2.
  • a subband consisting of x1 to x5 is a subband to which the sample value x3 belongs.
  • a subband consisting of x2 to x6 is a subband to which the sample value x4 belongs. The rest can be deduced by analogy.
  • the subbands to which x1, x2, x(n-1), and xn belong may be autonomously set.
  • the sample value itself may be added to compensate for a lack of a sample value in the subband to which the sample value belongs.
  • for the sample value x1, there is no sample value before x1, and x1, x1, x1, x2, and x3 may be used as the subband to which the sample value x1 belongs, as sketched below.
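  • A sketch of this sliding-window grouping, padding missing neighbours at the edges with the sample value itself:

```python
def sliding_subband(values: list, idx: int, m: int, n: int) -> list:
    """Subband of sample idx: m samples before it, the sample itself, and
    n samples after it; positions outside the signal are compensated by
    repeating the sample itself (e.g. x1 with m = n = 2 gives
    x1, x1, x1, x2, x3)."""
    return [values[j] if 0 <= j < len(values) else values[idx]
            for j in range(idx - m, idx + n + 1)]
```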
  • the average amplitude value corresponding to each sample value may be directly used as the amplitude disturbance value corresponding to each sample value.
  • a preset operation may be performed on the average amplitude value corresponding to each sample value, to obtain the amplitude disturbance value corresponding to each sample value.
  • the preset operation may be, for example, that the average amplitude value is multiplied by a numerical value. The numerical value is generally greater than 0.
  • the calculating the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value includes: subtracting the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and using the obtained difference as the adjusted amplitude value of each sample value.
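  • Putting the preceding paragraphs together, a sketch of computing the adjusted amplitude values; the disturbance scale factor stands in for the "preset operation" and is an assumed placeholder (1.0 uses the average amplitude value directly):

```python
import numpy as np

def adjusted_amplitude_values(amplitudes: np.ndarray, m: int, n: int,
                              disturbance_scale: float = 1.0) -> np.ndarray:
    """For each sample: average the amplitude values of its subband (m samples
    before, the sample, n samples after, self-padded at the edges), scale the
    average into an amplitude disturbance value, and subtract the disturbance
    value from the sample's amplitude value."""
    out = np.empty(len(amplitudes), dtype=float)
    for i in range(len(amplitudes)):
        window = [amplitudes[j] if 0 <= j < len(amplitudes) else amplitudes[i]
                  for j in range(i - m, i + n + 1)]
        disturbance = disturbance_scale * float(np.mean(window))
        out[i] = float(amplitudes[i]) - disturbance
    return out
```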
  • Step 106: Determine a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value, where the second speech/audio signal is a signal obtained after the noise component of the first speech/audio signal is reconstructed.
  • a new value of each sample value may be determined according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal.
  • the determining a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value may include: determining a new value of each sample value according to the sign and the adjusted amplitude value of each sample value; or calculating a modification factor, performing modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor, and determining a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing.
  • the obtained second speech/audio signal may include new values of all the sample values.
  • the modification factor may be calculated according to the adaptive normalization length. Specifically, the modification factor β may be equal to a/L, where a is a constant greater than 1 and L is the adaptive normalization length.
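  • A sketch of the reconstruction in step 106 with the optional modification processing; the constants a and b are illustrative, and the modification formula Y = y × b^β follows the reading of the claim formula used in this document:

```python
import numpy as np

def reconstruct_second_signal(signs: np.ndarray, adjusted: np.ndarray,
                              length: int,
                              a: float = 2.0,    # a > 1, assumed
                              b: float = 0.5) -> np.ndarray:  # 0 < b < 2, assumed
    """beta = a / L; every adjusted amplitude value greater than 0 is modified
    as Y = y * b**beta, and the new value of each sample is its sign times
    its (modified) adjusted amplitude value."""
    beta = a / length
    modified = np.where(adjusted > 0, adjusted * b ** beta, adjusted)
    return signs * modified
```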
  • the step of extracting the sign of each sample value in the first speech/audio signal in step 103 may be performed at any time before step 106. There is no necessary execution order between the step of extracting the sign of each sample value in the first speech/audio signal and step 104 and step 105.
  • An execution order between step 103 and step 104 is not limited.
  • for a speech/audio signal having an onset or an offset, within one frame of the time-domain signal, a part of the speech/audio signal has an extremely large signal sample point value and extremely powerful signal energy, while another part of the speech/audio signal has an extremely small signal sample point value and extremely weak signal energy.
  • a random noise signal is added to the speech/audio signal in a frequency domain, to obtain a signal obtained after a noise component is reconstructed.
  • the newly added random noise signal generally causes signal energy of a part, whose original sample point value is extremely small, in the time-domain signal obtained by means of conversion to increase.
  • a signal sample point value of this part also correspondingly becomes relatively large. Consequently, the signal obtained after a noise component is reconstructed has some echoes, which affects auditory quality of the signal obtained after a noise component is reconstructed.
  • a first speech/audio signal is determined according to a speech/audio signal; a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal are determined; an adaptive normalization length is determined; an adjusted amplitude value of each sample value is determined according to the adaptive normalization length and the amplitude value of each sample value; and a second speech/audio signal is determined according to the sign of each sample value and the adjusted amplitude value of each sample value.
  • only an original signal, that is, the first speech/audio signal, is processed, and no new signal is added to the first speech/audio signal, so that no new energy is added to a second speech/audio signal obtained after a noise component is reconstructed. Therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
  • FIG. 2 is another schematic flowchart of a method for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
  • the method includes: Step 201: Receive a bitstream, and decode the bitstream to obtain a speech/audio signal, where the speech/audio signal obtained by means of decoding includes a low frequency band signal and a high frequency band signal; and determine the high frequency band signal as a first speech/audio signal.
  • Step 202: Determine a sign of each sample value in the high frequency band signal and an amplitude value of each sample value in the high frequency band signal.
  • for example, if a coefficient of a sample value in the high frequency band signal is -4, a sign of the sample value is "-", and an amplitude value of the sample value is 4.
  • Step 203: Determine an adaptive normalization length.
  • For details on how to determine the adaptive normalization length, refer to the related descriptions in step 104. Details are not described herein again.
  • Step 204: Determine, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determine, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value.
  • For how to determine the average amplitude value corresponding to each sample value, refer to the related descriptions in step 105. Details are not described herein again.
  • Step 205: Calculate an adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value.
  • For how to determine the adjusted amplitude value of each sample value, refer to the related descriptions in step 105. Details are not described herein again.
  • Step 206: Determine a second speech/audio signal according to the sign and the adjusted amplitude value of each sample value.
  • the second speech/audio signal is a signal obtained after a noise component of the first speech/audio signal is reconstructed.
  • For specific implementation of this step, refer to the related descriptions in step 106. Details are not described herein again.
  • the step of determining the sign of each sample value in the first speech/audio signal in step 202 may be performed at any time before step 206. There is no necessary execution order between the step of determining the sign of each sample value in the first speech/audio signal and step 203, step 204, and step 205.
  • An execution order between step 202 and step 203 is not limited.
  • Step 207: Combine the second speech/audio signal and the low frequency band signal in the speech/audio signal obtained by means of decoding, to obtain an output signal.
  • the first speech/audio signal is a low frequency band signal in the speech/audio signal obtained by means of decoding
  • the second speech/audio signal and a high frequency band signal in the speech/audio signal obtained by means of decoding may be combined, to obtain an output signal.
  • the first speech/audio signal is a high frequency band signal in the speech/audio signal obtained by means of decoding
  • the second speech/audio signal and a low frequency band signal in the speech/audio signal obtained by means of decoding may be combined, to obtain an output signal.
  • the second speech/audio signal may be directly determined as the output signal.
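  • A minimal sketch of this combination, assuming frequency-domain coefficients ordered from the low band to the high band:

```python
import numpy as np

def combine_bands(low_band: np.ndarray, second_signal: np.ndarray) -> np.ndarray:
    """Append the second speech/audio signal (reconstructed from the high
    frequency band signal) to the decoded low frequency band signal to form
    the output signal."""
    return np.concatenate([low_band, second_signal])
```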
  • the noise component of the high frequency band signal is finally reconstructed, to obtain a second speech/audio signal. Therefore, if the high frequency band signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal and further improving auditory quality of the output signal finally output.
  • FIG. 3 is another schematic flowchart of a method for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
  • the method includes: Step 301 to step 305 are the same as step 201 to step 205, and details are not described herein again.
  • Step 306: Calculate a modification factor; and perform modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor.
  • For specific implementation of this step, refer to the related descriptions in step 106. Details are not described herein again.
  • Step 307: Determine a second speech/audio signal according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing.
  • For specific implementation of this step, refer to the related descriptions in step 106. Details are not described herein again.
  • the step of determining the sign of each sample value in the first speech/audio signal in step 302 may be performed at any time before step 307. There is no necessary execution order between the step of determining the sign of each sample value in the first speech/audio signal and step 303, step 304, step 305, and step 306.
  • An execution order between step 302 and step 303 is not limited.
  • Step 308: Combine the second speech/audio signal and a low frequency band signal in the speech/audio signal obtained by means of decoding, to obtain an output signal.
  • a high frequency band signal in the speech/audio signal obtained by means of decoding is determined as the first speech/audio signal, and a noise component of the first speech/audio signal is reconstructed, to finally obtain the second speech/audio signal.
  • a noise component of a fullband signal of the speech/audio signal obtained by means of decoding may be reconstructed, or a noise component of a low frequency band signal of the speech/audio signal obtained by means of decoding is reconstructed, to finally obtain a second speech/audio signal.
  • For an implementation process thereof, refer to the exemplary methods shown in FIG. 2 and FIG. 3.
  • a difference lies only in that, when a first speech/audio signal is to be determined, a fullband signal or a low frequency band signal is determined as the first speech/audio signal. Examples are not described one by one herein.
  • FIG. 4 is a schematic structural diagram of an apparatus for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
  • the apparatus may be disposed in an electronic device.
  • An apparatus 400 may include: a bitstream processing unit 410, configured to receive a bitstream and decode the bitstream to obtain a speech/audio signal; a signal determining unit 420, configured to determine a first speech/audio signal according to the speech/audio signal; a first determining unit 430, configured to determine a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal; a second determining unit 440, configured to determine an adaptive normalization length; a third determining unit 450, configured to determine an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value; and a fourth determining unit 460, configured to determine a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value.
  • the third determining unit 450 includes: a determining subunit, configured to determine, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determine, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value; and an adjusted amplitude value calculation subunit, configured to calculate the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value.
  • the determining subunit includes: a determining module, configured to determine, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs; and a calculation module, configured to calculate an average value of amplitude values of all sample values in the subband to which the sample value belongs, and use the average value obtained by means of calculation as the average amplitude value corresponding to the sample value.
  • the determining module may be specifically configured to: for each sample value, determine a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, where m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0.
  • the adjusted amplitude value calculation subunit is specifically configured to: subtract the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and use the obtained difference as the adjusted amplitude value of each sample value.
  • the second determining unit 440 includes: a division subunit, configured to divide a low frequency band signal in the speech/audio signal into N subbands, where N is a natural number; a quantity determining subunit, configured to calculate a peak-to-average ratio of each subband, and determine a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and a length calculation subunit, configured to calculate the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands.
  • the length calculation subunit may be specifically configured to: calculate the adaptive normalization length according to the formula L = K + α × M, where L is the adaptive normalization length; K is a numerical value corresponding to the signal type of the high frequency band signal; M is the quantity of the subbands whose peak-to-average ratios are greater than the preset peak-to-average ratio threshold; and α is a constant less than 1.
  • the second determining unit 440 may alternatively be specifically configured to: calculate a peak-to-average ratio of the low frequency band signal and a peak-to-average ratio of the high frequency band signal, and determine the adaptive normalization length as a preset first length value or a preset second length value by comparing the two peak-to-average ratios, or the absolute value of their difference, with a preset threshold; or determine the adaptive normalization length according to the signal type of the high frequency band signal, where different signal types correspond to different adaptive normalization lengths.
  • the fourth determining unit 460 may be specifically configured to: determine a new value of each sample value according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal; or calculate a modification factor, perform modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor, and determine a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
  • a first speech/audio signal is determined according to a speech/audio signal; a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal are determined; an adaptive normalization length is determined; an adjusted amplitude value of each sample value is determined according to the adaptive normalization length and the amplitude value of each sample value; and a second speech/audio signal is determined according to the sign of each sample value and the adjusted amplitude value of each sample value.
  • only an original signal, that is, the first speech/audio signal, is processed, and no new signal is added to the first speech/audio signal, so that no new energy is added to a second speech/audio signal obtained after a noise component is reconstructed. Therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
  • FIG. 5 is a structural diagram of an electronic device according to an embodiment of the present invention.
  • An electronic device 500 includes a processor 510, a memory 520, a transceiver 530, and a bus 540.
  • the processor 510, the memory 520, and the transceiver 530 are connected to each other by using the bus 540, and the bus 540 may be an ISA bus, a PCI bus, an EISA bus, or the like.
  • the bus may be classified into an address bus, a data bus, a control bus, or the like.
  • the bus shown in FIG. 5 is indicated by using only one bold line, but it does not indicate that there is only one bus or only one type of bus.
  • the memory 520 is configured to store a program.
  • the program may include program code, and the program code includes a computer operation instruction.
  • the memory 520 may include a high-speed RAM, and may further include a non-volatile memory, such as at least one magnetic disk storage.
  • the transceiver 530 is configured to connect to another device and communicate with that device. Specifically, the transceiver 530 may be configured to receive a bitstream.
  • the processor 510 executes the program code stored in the memory 520 and is configured to: decode the bitstream, to obtain a speech/audio signal; determine a first speech/audio signal according to the speech/audio signal; determine a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal; determine an adaptive normalization length; determine an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value; and determine a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value.
  • the processor 510 may be specifically configured to: subtract the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and use the obtained difference as the adjusted amplitude value of each sample value.
  • the electronic device determines a first speech/audio signal according to a speech/audio signal; determines a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal; determines an adaptive normalization length; determines an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value; and determines a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value.
  • therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
  • a system embodiment basically corresponds to a method embodiment, and therefore for related parts, reference may be made to partial descriptions in the method embodiment.
  • the described system embodiment is merely exemplary.
  • the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units.
  • a part or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • a person of ordinary skill in the art may understand and implement the embodiments of the present invention without creative efforts.
  • the present invention can be described in the general context of executable computer instructions executed by a computer, for example, a program module.
  • the program module includes a routine, a program, an object, a component, a data structure, and the like for executing a particular task or implementing a particular abstract data type.
  • the present invention may also be practiced in distributed computing environments in which tasks are performed by remote processing devices that are connected by using a communications network.
  • program modules may be located in both local and remote computer storage media including storage devices.
  • the program may be stored in a computer readable storage medium, such as a ROM, a RAM, a magnetic disc, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Quality & Reliability (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Noise Elimination (AREA)
  • Telephone Function (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)

Claims (12)

  1. A method for processing a speech/audio signal, wherein the method comprises:
    receiving (101) a bitstream, and decoding the bitstream to obtain a speech/audio signal;
    determining (102) a first speech/audio signal according to the speech/audio signal, wherein the first speech/audio signal is a signal, whose noise component needs to be reconstructed, in the speech/audio signal;
    determining (103) a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal;
    determining (104) an adaptive normalization length, wherein the determining an adaptive normalization length comprises: dividing a low frequency band signal in the speech/audio signal into N subbands, wherein N is a natural number; calculating a peak-to-average ratio of each subband, and determining a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and calculating the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands;
    determining (105) an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value;
    and
    determining (106) a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value, wherein the second speech/audio signal is a signal obtained after the noise component of the first speech/audio signal is reconstructed, wherein the determining (105) an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value comprises:
    calculating, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determining, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value; and
    calculating the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value, wherein the calculating the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value comprises:
    subtracting the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and using the obtained difference as the adjusted amplitude value of each sample value;
    wherein the calculating, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value comprises:
    determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs, wherein the subband comprises a particular quantity of sample values, and wherein the determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs comprises: for each sample value, determining a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, wherein m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0;
    and
    calculating an average value of amplitude values of all sample values in the subband to which the sample value belongs, and using the average value obtained by means of calculation as the average amplitude value corresponding to the sample value.
  2. The method according to claim 1, wherein the calculating the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands comprises:
    calculating the adaptive normalization length according to a formula L = K + α × M, wherein
    L is the adaptive normalization length; K is a numerical value corresponding to the signal type of the high frequency band signal in the speech/audio signal, and different signal types of high frequency band signals correspond to different numerical values K; M is the quantity of the subbands whose peak-to-average ratios are greater than the preset peak-to-average ratio threshold; and α is a constant less than 1.
  3. The method according to claim 1, wherein the determining an adaptive normalization length comprises:
    calculating a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal; and when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is less than a preset difference threshold, determining the adaptive normalization length as a preset first length value, or when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is not less than a preset difference threshold, determining the adaptive normalization length as a preset second length value, wherein the first length value is greater than the second length value; or
    calculating a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal; and when the peak-to-average ratio of the low frequency band signal is less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset first length value, or when the peak-to-average ratio of the low frequency band signal is not less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset second length value; or
    determining the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal, wherein different signal types of high frequency band signals correspond to different adaptive normalization lengths.
  4. The method according to any one of claims 1 to 3, wherein the determining a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value comprises:
    determining a new value of each sample value according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal; or
    calculating a modification factor; performing modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor;
    and determining a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
  5. The method according to claim 4, wherein the calculating a modification factor comprises:
    calculating the modification factor by using a formula β = a/L, wherein β is the modification factor, L is the adaptive normalization length, and a is a constant greater than 1.
  6. The method according to claim 4 or 5, wherein the performing modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor comprises:
    performing modification processing on the adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values by using the following formula: Y = y × b^β,
    wherein Y is an adjusted amplitude value obtained after the modification processing; y is the adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values; and b is a constant, and 0 < b < 2.
  7. An apparatus for reconstructing a noise component of a speech/audio signal, comprising:
    a bitstream processing unit (410), configured to receive a bitstream and decode the bitstream to obtain a speech/audio signal;
    a signal determining unit (420), configured to determine a first speech/audio signal according to the speech/audio signal obtained by the bitstream processing unit, wherein the first speech/audio signal is a signal, whose noise component needs to be reconstructed, in the speech/audio signal obtained by means of decoding;
    a first determining unit (430), configured to determine a sign of each sample value in the first speech/audio signal determined by the signal determining unit and an amplitude value of each sample value in the first speech/audio signal determined by the signal determining unit;
    a second determining unit (440), configured to determine an adaptive normalization length, wherein the second determining unit comprises:
    a division subunit, configured to divide a low frequency band signal in the speech/audio signal into N subbands, wherein N is a natural number;
    a quantity determining subunit, configured to calculate a peak-to-average ratio of each subband, and determine a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and
    a length calculation subunit, configured to calculate the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands; a third determining unit (450), configured to determine an adjusted amplitude value of each sample value according to the adaptive normalization length determined by the second determining unit and the amplitude value that is of each sample value and that is determined by the first determining unit; and
    a fourth determining unit (460), configured to determine a second speech/audio signal according to the sign that is of each sample value and that is determined by the first determining unit and the adjusted amplitude value that is of each sample value and that is determined by the third determining unit, wherein the second speech/audio signal is a signal obtained after the noise component of the first speech/audio signal is reconstructed, wherein the third determining unit (450) comprises:
    a determining subunit, configured to determine, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determine, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value; and
    an adjusted amplitude value calculation subunit, configured to calculate the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value, wherein the adjusted amplitude value calculation subunit is specifically configured to:
    subtract the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and use the obtained difference as the adjusted amplitude value of each sample value;
    wherein the determining subunit comprises:
    a determining module, configured to determine, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs, wherein the subband comprises a particular quantity of sample values, and wherein the determining module is specifically configured to: for each sample value, determine a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, wherein m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0; and
    a calculation module, configured to calculate an average value of amplitude values of all sample values in the subband to which the sample value belongs, and use the average value obtained by means of calculation as the average amplitude value corresponding to the sample value.
  8. The apparatus according to claim 7, wherein the length calculation subunit is specifically configured to:
    calculate the adaptive normalization length according to a formula L = K + α × M, wherein
    L is the adaptive normalization length; K is a numerical value corresponding to the signal type of the high frequency band signal in the speech/audio signal, and different signal types of high frequency band signals correspond to different numerical values K; M is the quantity of subbands whose peak-to-average ratios are greater than the preset peak-to-average ratio threshold; and α is a constant less than 1.
  9. The apparatus according to claim 7, wherein the second determining unit (440) is specifically configured to:
    calculate a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal; and when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is less than a preset difference threshold, determine the adaptive normalization length as a preset first length value, or when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is not less than a preset difference threshold, determine the adaptive normalization length as a preset second length value, wherein the first length value is greater than the second length value; or
    calculate a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal; and when the peak-to-average ratio of the low frequency band signal is less than the peak-to-average ratio of the high frequency band signal, determine the adaptive normalization length as a preset first length value, or when the peak-to-average ratio of the low frequency band signal is not less than the peak-to-average ratio of the high frequency band signal, determine the adaptive normalization length as a preset second length value; or
    determine the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal, wherein different signal types of high frequency band signals correspond to different adaptive normalization lengths.
  10. The apparatus according to any one of claims 7 to 9, wherein the fourth determining unit (460) is specifically configured to:
    determine a new value of each sample value according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal; or
    calculate a modification factor; perform modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor;
    and determine a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
  11. The apparatus according to claim 10, wherein the fourth determining unit (460) is specifically configured to calculate the modification factor by using a formula β = a/L, wherein β is the modification factor, L is the adaptive normalization length, and a is a constant greater than 1.
  12. The apparatus according to claim 11, wherein the fourth determining unit (460) is specifically configured to:
    perform modification processing on the adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values by using the following formula: Y = y × b^β,
    wherein Y is the adjusted amplitude value obtained after the modification processing; y is the adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values; and b is a constant, and 0 < b < 2.
EP15802508.0A 2014-06-03 2015-01-19 Verfahren und vorrichtung zur verarbeitung von audiosignalen Active EP3147900B1 (de)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP23184053.9A EP4283614A3 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung
EP19190663.5A EP3712890B1 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410242233.2A CN105336339B (zh) 2014-06-03 2014-06-03 一种语音频信号的处理方法和装置
PCT/CN2015/071017 WO2015184813A1 (zh) 2014-06-03 2015-01-19 一种语音频信号的处理方法和装置

Related Child Applications (3)

Application Number Title Priority Date Filing Date
EP19190663.5A Division-Into EP3712890B1 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung
EP19190663.5A Division EP3712890B1 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung
EP23184053.9A Division EP4283614A3 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung

Publications (3)

Publication Number Publication Date
EP3147900A1 EP3147900A1 (de) 2017-03-29
EP3147900A4 EP3147900A4 (de) 2017-05-03
EP3147900B1 true EP3147900B1 (de) 2019-10-02

Family

ID=54766052

Family Applications (3)

Application Number Title Priority Date Filing Date
EP19190663.5A Active EP3712890B1 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung
EP23184053.9A Pending EP4283614A3 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung
EP15802508.0A Active EP3147900B1 (de) 2014-06-03 2015-01-19 Verfahren und vorrichtung zur verarbeitung von audiosignalen

Family Applications Before (2)

Application Number Title Priority Date Filing Date
EP19190663.5A Active EP3712890B1 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung
EP23184053.9A Pending EP4283614A3 (de) 2014-06-03 2015-01-19 Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung

Country Status (19)

Country Link
US (3) US9978383B2 (de)
EP (3) EP3712890B1 (de)
JP (3) JP6462727B2 (de)
KR (3) KR101943529B1 (de)
CN (2) CN110097892B (de)
AU (1) AU2015271580B2 (de)
BR (1) BR112016028375B1 (de)
CA (1) CA2951169C (de)
CL (1) CL2016003121A1 (de)
ES (1) ES2964221T3 (de)
HK (1) HK1220543A1 (de)
IL (1) IL249337B (de)
MX (2) MX362612B (de)
MY (1) MY179546A (de)
NZ (1) NZ727567A (de)
RU (1) RU2651184C1 (de)
SG (1) SG11201610141RA (de)
WO (1) WO2015184813A1 (de)
ZA (1) ZA201608477B (de)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110097892B (zh) 2014-06-03 2022-05-10 华为技术有限公司 一种语音频信号的处理方法和装置
CN108133712B (zh) * 2016-11-30 2021-02-12 华为技术有限公司 一种处理音频数据的方法和装置
CN106847299B (zh) * 2017-02-24 2020-06-19 喜大(上海)网络科技有限公司 延时的估计方法及装置
RU2754497C1 (ru) * 2020-11-17 2021-09-02 федеральное государственное автономное образовательное учреждение высшего образования "Казанский (Приволжский) федеральный университет" (ФГАОУ ВО КФУ) Способ передачи речевых файлов по зашумленному каналу и устройство для его реализации
US20230300524A1 (en) * 2022-03-21 2023-09-21 Qualcomm Incorporated Adaptively adjusting an input current limit for a boost converter

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6261312B1 (en) 1998-06-23 2001-07-17 Innercool Therapies, Inc. Inflatable catheter for selective organ heating and cooling and method of using the same
SE9803698L (sv) * 1998-10-26 2000-04-27 Ericsson Telefon Ab L M Metoder och anordningar i ett telekommunikationssystem
CA2252170A1 (en) * 1998-10-27 2000-04-27 Bruno Bessette A method and device for high quality coding of wideband speech and audio signals
US6687668B2 (en) * 1999-12-31 2004-02-03 C & S Technology Co., Ltd. Method for improvement of G.723.1 processing time and speech quality and for reduction of bit rate in CELP vocoder and CELP vococer using the same
US6631139B2 (en) * 2001-01-31 2003-10-07 Qualcomm Incorporated Method and apparatus for interoperability between voice transmission systems during speech inactivity
US6708147B2 (en) * 2001-02-28 2004-03-16 Telefonaktiebolaget Lm Ericsson(Publ) Method and apparatus for providing comfort noise in communication system with discontinuous transmission
US20030093270A1 (en) * 2001-11-13 2003-05-15 Domer Steven M. Comfort noise including recorded noise
EP1701340B1 (de) * 2001-11-14 2012-08-29 Panasonic Corporation Dekodiervorrichtung, -verfahren und -programm
US7536298B2 (en) * 2004-03-15 2009-05-19 Intel Corporation Method of comfort noise generation for speech communication
US7831421B2 (en) * 2005-05-31 2010-11-09 Microsoft Corporation Robust decoder
US7610197B2 (en) * 2005-08-31 2009-10-27 Motorola, Inc. Method and apparatus for comfort noise generation in speech communication systems
WO2008007700A1 (fr) 2006-07-12 2008-01-17 Panasonic Corporation Dispositif de décodage de son, dispositif de codage de son, et procédé de compensation de trame perdue
RU2460155C2 (ru) * 2006-09-18 2012-08-27 Конинклейке Филипс Электроникс Н.В. Кодирование и декодирование звуковых объектов
CN101320563B (zh) * 2007-06-05 2012-06-27 华为技术有限公司 一种背景噪声编码/解码装置、方法和通信设备
CN101335003B (zh) 2007-09-28 2010-07-07 华为技术有限公司 噪声生成装置、及方法
US8139777B2 (en) * 2007-10-31 2012-03-20 Qnx Software Systems Co. System for comfort noise injection
CN101483042B (zh) 2008-03-20 2011-03-30 华为技术有限公司 一种噪声生成方法以及噪声生成装置
ES2401487T3 (es) * 2008-07-11 2013-04-22 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Aparato y procedimiento para la codificación/decodificación de una señal de audio utilizando un esquema de conmutación de generación de señal ajena
PL2146344T3 (pl) * 2008-07-17 2017-01-31 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Sposób kodowania/dekodowania sygnału audio obejmujący przełączalne obejście
CN101483048B (zh) 2009-02-06 2010-08-25 凌阳科技股份有限公司 光学储存装置及其回路增益值的自动校正方法
US9047875B2 (en) * 2010-07-19 2015-06-02 Futurewei Technologies, Inc. Spectrum flatness control for bandwidth extension
CN105825858B (zh) * 2011-05-13 2020-02-14 三星电子株式会社 比特分配、音频编码和解码
US8731949B2 (en) 2011-06-30 2014-05-20 Zte Corporation Method and system for audio encoding and decoding and method for estimating noise level
US20130006644A1 (en) * 2011-06-30 2013-01-03 Zte Corporation Method and device for spectral band replication, and method and system for audio decoding
US20130132100A1 (en) 2011-10-28 2013-05-23 Electronics And Telecommunications Research Institute Apparatus and method for codec signal in a communication system
CN104040624B (zh) * 2011-11-03 2017-03-01 沃伊斯亚吉公司 改善低速率码激励线性预测解码器的非语音内容
US9305567B2 (en) 2012-04-23 2016-04-05 Qualcomm Incorporated Systems and methods for audio signal processing
CN110097892B (zh) * 2014-06-03 2022-05-10 华为技术有限公司 一种语音频信号的处理方法和装置
US20200333702A1 (en) 2019-04-19 2020-10-22 Canon Kabushiki Kaisha Forming apparatus, forming method, and article manufacturing method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140044192A1 (en) * 2010-09-29 2014-02-13 Huawei Technologies Co., Ltd. Method and device for encoding a high frequency signal, and method and device for decoding a high frequency signal
US20130018660A1 (en) * 2011-07-13 2013-01-17 Huawei Technologies Co., Ltd. Audio signal coding and decoding method and device

Also Published As

Publication number Publication date
KR102104561B1 (ko) 2020-04-24
IL249337A0 (en) 2017-02-28
BR112016028375A2 (pt) 2017-08-22
WO2015184813A1 (zh) 2015-12-10
KR101943529B1 (ko) 2019-01-29
AU2015271580A1 (en) 2017-01-19
KR102201791B1 (ko) 2021-01-11
JP7142674B2 (ja) 2022-09-27
EP4283614A2 (de) 2023-11-29
US20180268830A1 (en) 2018-09-20
JP6462727B2 (ja) 2019-01-30
CN105336339B (zh) 2019-05-03
US20170084282A1 (en) 2017-03-23
MY179546A (en) 2020-11-10
US9978383B2 (en) 2018-05-22
MX362612B (es) 2019-01-28
CN105336339A (zh) 2016-02-17
US20200279572A1 (en) 2020-09-03
MX2016015950A (es) 2017-04-05
AU2015271580B2 (en) 2018-01-18
BR112016028375B1 (pt) 2022-09-27
ES2964221T3 (es) 2024-04-04
US11462225B2 (en) 2022-10-04
SG11201610141RA (en) 2017-01-27
CN110097892B (zh) 2022-05-10
EP3147900A4 (de) 2017-05-03
US10657977B2 (en) 2020-05-19
CA2951169C (en) 2019-12-31
MX2019001193A (es) 2019-06-12
CL2016003121A1 (es) 2017-04-28
JP2021060609A (ja) 2021-04-15
KR20200043548A (ko) 2020-04-27
ZA201608477B (en) 2018-08-29
IL249337B (en) 2020-09-30
HK1220543A1 (zh) 2017-05-05
JP2017517034A (ja) 2017-06-22
RU2651184C1 (ru) 2018-04-18
EP4283614A3 (de) 2024-02-21
EP3712890A1 (de) 2020-09-23
KR20190009440A (ko) 2019-01-28
EP3147900A1 (de) 2017-03-29
JP2019061282A (ja) 2019-04-18
EP3712890B1 (de) 2023-08-30
JP6817283B2 (ja) 2021-01-20
CN110097892A (zh) 2019-08-06
KR20170008837A (ko) 2017-01-24
CA2951169A1 (en) 2015-12-10
NZ727567A (en) 2018-01-26

Similar Documents

Publication Publication Date Title
US11462225B2 (en) Method for processing speech/audio signal and apparatus
US20190096386A1 (en) Method and apparatus for generating speech synthesis model
US8392176B2 (en) Processing of excitation in audio coding and decoding
CN103325380A (zh) 用于信号增强的增益后处理
US11881226B2 (en) Signal processing method and device
CN105529034A (zh) 一种基于混响的语音识别方法和装置
EP3176785A1 (de) Verfahren und vorrichtung zur audioobjektcodierung auf grundlage der trennung informierter quellen
EP3139379A1 (de) Audiocodierungsverfahren und zugehörige vorrichtung
CN113436643B (zh) 语音增强模型的训练及应用方法、装置、设备及存储介质
KR20220050924A (ko) 오디오 코딩을 위한 다중 래그 형식

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20161216

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

A4 Supplementary search report drawn up and despatched

Effective date: 20170404

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 21/0316 20130101ALI20170329BHEP

Ipc: G10L 19/26 20130101AFI20170329BHEP

Ipc: G10L 19/028 20130101ALI20170329BHEP

Ipc: G10L 21/02 20130101ALI20170329BHEP

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180313

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602015039167

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G10L0021020000

Ipc: G10L0019260000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 21/038 20130101ALN20190321BHEP

Ipc: G10L 19/26 20130101AFI20190321BHEP

Ipc: G10L 21/02 20130101ALI20190321BHEP

Ipc: G10L 19/028 20130101ALI20190321BHEP

Ipc: G10L 21/0316 20130101ALI20190321BHEP

INTG Intention to grant announced

Effective date: 20190415

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 1187056

Country of ref document: AT

Kind code of ref document: T

Effective date: 20191015

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015039167

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20191002

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1187056

Country of ref document: AT

Kind code of ref document: T

Effective date: 20191002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200102

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200203

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200102

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200103

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200224

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015039167

Country of ref document: DE

PG2D Information on lapse in contracting state deleted

Ref country code: IS

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200202

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

26N No opposition filed

Effective date: 20200703

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20200131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200119

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200131

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200131

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200131

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200119

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191002

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230524

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20231130

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20231212

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20231205

Year of fee payment: 10