EP3147900B1 - Method and apparatus for processing audio signals - Google Patents
Method and apparatus for processing audio signals
- Publication number
- EP3147900B1 (EP15802508.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- value
- speech
- audio signal
- sample value
- sample
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/028—Noise substitution, i.e. substituting non-tonal spectral components by noisy source
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/012—Comfort noise or silence coding
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/167—Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/26—Pre-filtering or post-filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0316—Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/038—Speech enhancement, e.g. noise reduction or echo cancellation using band spreading techniques
Definitions
- The present invention relates to the communications field, and in particular, to a method for processing a speech/audio signal and an apparatus.
- An electronic device reconstructs a noise component of a speech/audio signal obtained by means of decoding, generally by adding a random noise signal to the speech/audio signal. Specifically, weighted addition is performed on the speech/audio signal and the random noise signal, to obtain a signal after the noise component of the speech/audio signal is reconstructed.
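For illustration only, this prior-art reconstruction can be sketched as a weighted sum of the decoded signal and generated noise. The weights in the sketch below are hypothetical placeholders; no specific values are prescribed here.

```python
import numpy as np

# Minimal sketch of the prior-art approach: weighted addition of a random
# noise signal to the decoded speech/audio signal. w_signal and w_noise
# are hypothetical weights, not values from the patent.
def add_noise_component(signal, w_signal=0.9, w_noise=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(len(signal))
    return w_signal * np.asarray(signal, dtype=float) + w_noise * noise
```

It is this newly added noise energy that, for a signal having an onset or an offset, produces the echo artifact discussed later.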
- The speech/audio signal may be a time-domain signal, a frequency-domain signal, or an excitation signal, or may be a low frequency signal, a high frequency signal, or the like.
- The present invention provides a method for processing a speech/audio signal and an apparatus, so that for a speech/audio signal having an onset or an offset, when a noise component of the speech/audio signal is reconstructed, a signal obtained after the noise component of the speech/audio signal is reconstructed does not have an echo, thereby improving auditory quality of the signal obtained after the noise component is reconstructed.
- The present invention provides a method for processing a speech/audio signal according to claim 1.
- The present invention provides an apparatus for reconstructing a noise component of a speech/audio signal according to claim 7.
- Preferred embodiments are set forth in the dependent claims.
- In the process of the invention, only an original signal, that is, the first speech/audio signal, is processed, and no new signal is added to the first speech/audio signal, so that no new energy is added to a second speech/audio signal obtained after a noise component is reconstructed. Therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
- FIG. 1 is a flowchart of a method for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
- The method includes: Step 101: Receive a bitstream, and decode the bitstream, to obtain a speech/audio signal.
- Step 102: Determine a first speech/audio signal according to the speech/audio signal, where the first speech/audio signal is a signal, whose noise component needs to be reconstructed, in the speech/audio signal obtained by means of decoding.
- The first speech/audio signal may be a low frequency band signal, a high frequency band signal, a fullband signal, or the like in the speech/audio signal obtained by means of decoding.
- The speech/audio signal obtained by means of decoding may include a low frequency band signal and a high frequency band signal, or may include a fullband signal.
- Step 103: Determine a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal.
- For different forms of the speech/audio signal, implementation manners of the sample value may also be different. For example, when the speech/audio signal is a frequency-domain signal, the sample value may be a spectrum coefficient; when the speech/audio signal is a time-domain signal, the sample value may be a sample point value.
- Step 104: Determine an adaptive normalization length.
- The adaptive normalization length may be determined according to a related parameter of a low frequency band signal and/or a high frequency band signal of the speech/audio signal obtained by means of decoding.
- The related parameter may include a signal type, a peak-to-average ratio, and the like.
- The determining an adaptive normalization length includes: dividing a low frequency band signal in the speech/audio signal into N subbands, where N is a natural number; calculating a peak-to-average ratio of each subband, and determining a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and calculating the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands.
- The calculating the adaptive normalization length according to a signal type of the high frequency band signal in the speech/audio signal and the quantity of the subbands may include: calculating the adaptive normalization length according to a formula L = K + α × M, where L is the adaptive normalization length, K is a numerical value corresponding to the signal type of the high frequency band signal, different signal types of high frequency band signals correspond to different numerical values K, M is the quantity of subbands whose peak-to-average ratios are greater than the preset peak-to-average ratio threshold, and α is a constant less than 1.
- Alternatively, the adaptive normalization length may be calculated according to a signal type of the low frequency band signal in the speech/audio signal and the quantity of the subbands, by using the same formula L = K + α × M. In this case, K is a numerical value corresponding to the signal type of the low frequency band signal in the speech/audio signal, and different signal types of low frequency band signals correspond to different numerical values K.
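A minimal sketch of this calculation, assuming a hypothetical signal-type-to-K mapping and hypothetical values for the peak-to-average ratio threshold and the constant α (the only stated constraints are that different signal types correspond to different values K and that α is less than 1):

```python
import numpy as np

K_BY_SIGNAL_TYPE = {"harmonic": 24, "normal": 12, "transient": 4}  # assumed values

def adaptive_normalization_length(low_band, signal_type,
                                  n_subbands=8, par_threshold=3.0, alpha=0.5):
    """Divide the low band into N subbands, count the M subbands whose
    peak-to-average (power) ratio exceeds the preset threshold, and
    return L = K + alpha * M."""
    power = np.square(np.asarray(low_band, dtype=float))
    m = 0
    for subband in np.array_split(power, n_subbands):
        mean = subband.mean()
        if mean > 0 and subband.max() / mean > par_threshold:
            m += 1
    return int(round(K_BY_SIGNAL_TYPE[signal_type] + alpha * m))
```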
- Alternatively, the determining an adaptive normalization length may include: calculating a peak-to-average ratio of the low frequency band signal in the speech/audio signal and a peak-to-average ratio of the high frequency band signal in the speech/audio signal; and when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is less than a preset difference threshold, determining the adaptive normalization length as a preset first length value, or when the absolute value of the difference is not less than the preset difference threshold, determining the adaptive normalization length as a preset second length value.
- The first length value is greater than the second length value.
- The first length value and the second length value may also be obtained by means of calculation by using a ratio of the peak-to-average ratio of the low frequency band signal to the peak-to-average ratio of the high frequency band signal, or a difference between the two peak-to-average ratios.
- A specific calculation method is not limited.
- Alternatively, the determining an adaptive normalization length may include: calculating a peak-to-average ratio of the low frequency band signal in the speech/audio signal and a peak-to-average ratio of the high frequency band signal in the speech/audio signal; and when the peak-to-average ratio of the low frequency band signal is less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset first length value, or when the peak-to-average ratio of the low frequency band signal is not less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset second length value.
- The first length value is greater than the second length value.
- The first length value and the second length value may also be obtained by means of calculation by using a ratio of the peak-to-average ratio of the low frequency band signal to the peak-to-average ratio of the high frequency band signal, or a difference between the two peak-to-average ratios.
- A specific calculation method is not limited.
- Alternatively, the determining an adaptive normalization length may include: determining the adaptive normalization length according to a signal type of the high frequency band signal in the speech/audio signal, where different signal types correspond to different adaptive normalization lengths. For example, when the signal type is a harmonic signal, a corresponding adaptive normalization length is 32; when the signal type is a normal signal, a corresponding adaptive normalization length is 16; and when the signal type is a transient signal, a corresponding adaptive normalization length is 8.
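The three alternative determinations can be sketched as follows. The preset difference threshold and the first and second length values are hypothetical placeholders (the only stated constraint is that the first length value is greater than the second), while the signal-type table uses the 32/16/8 example values given above.

```python
LENGTH_BY_SIGNAL_TYPE = {"harmonic": 32, "normal": 16, "transient": 8}  # example values above

def length_from_par_difference(par_low, par_high, diff_threshold=1.0,
                               first_length=32, second_length=16):
    # First alternative: compare |PAR_low - PAR_high| against a preset threshold.
    return first_length if abs(par_low - par_high) < diff_threshold else second_length

def length_from_par_comparison(par_low, par_high,
                               first_length=32, second_length=16):
    # Second alternative: compare the two peak-to-average ratios directly.
    return first_length if par_low < par_high else second_length

def length_from_signal_type(signal_type):
    # Third alternative: look the length up by the high-band signal type.
    return LENGTH_BY_SIGNAL_TYPE[signal_type]
```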
- Step 105: Determine an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value.
- The determining an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value includes: calculating, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determining, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value; and calculating the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value.
- The calculating, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value includes: determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs; and calculating an average value of amplitude values of all sample values in the subband to which the sample value belongs, and using the average value obtained by means of calculation as the average amplitude value corresponding to the sample value.
- The determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs may include: performing subband grouping on all sample values in a preset order according to the adaptive normalization length; and for each sample value, determining a subband including the sample value as the subband to which the sample value belongs.
- The preset order may be, for example, an order from a low frequency to a high frequency or an order from a high frequency to a low frequency, which is not limited herein.
- For example, assuming that the adaptive normalization length is 5, x1 to x5 may be grouped into one subband, x6 to x10 may be grouped into one subband, and by analogy, several subbands are obtained. Therefore, for each sample value in x1 to x5, the subband x1 to x5 is the subband to which the sample value belongs, and for each sample value in x6 to x10, the subband x6 to x10 is the subband to which the sample value belongs, as the sketch below illustrates.
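A sketch of this consecutive grouping, assuming grouping from low frequency to high frequency:

```python
def group_into_subbands(num_samples, L):
    """Group sample indices 0..num_samples-1 into consecutive subbands of
    length L; each sample belongs to the group that contains it."""
    groups = [list(range(start, min(start + L, num_samples)))
              for start in range(0, num_samples, L)]
    return {i: group for group in groups for i in group}

# Example: with L = 5, indices 0..4 (x1 to x5) share one subband and
# indices 5..9 (x6 to x10) share the next one.
```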
- Alternatively, the determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs includes: for each sample value, determining a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, where m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0.
- For example, assuming that sample values in ascending order are respectively x1, x2, x3, ..., and xn, the adaptive normalization length is 5, m is 2, and n is 2, a subband consisting of x1 to x5 is the subband to which the sample value x3 belongs, and a subband consisting of x2 to x6 is the subband to which the sample value x4 belongs. The rest can be deduced by analogy.
- The subbands to which x1, x2, x(n-1), and xn belong may be autonomously set. For these edge samples, the sample value itself may be added to compensate for a lack of a sample value in the subband to which the sample value belongs. For example, for the sample value x1, there is no sample value before x1, and x1, x1, x1, x2, and x3 may be used as the subband to which the sample value x1 belongs.
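A sketch of this sliding-window variant, including the edge compensation by repeating the sample itself:

```python
def window_subband(samples, i, m, n):
    """Subband of sample i: the m samples before it, the sample itself, and
    the n samples after it; missing neighbours at the edges are replaced by
    the sample itself, as in the x1 example above."""
    pick = lambda j: samples[j] if 0 <= j < len(samples) else samples[i]
    return [pick(j) for j in range(i - m, i + n + 1)]

# With m = n = 2, window_subband(x, 0, 2, 2) yields [x[0], x[0], x[0], x[1], x[2]].
```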
- The average amplitude value corresponding to each sample value may be directly used as the amplitude disturbance value corresponding to each sample value.
- Alternatively, a preset operation may be performed on the average amplitude value corresponding to each sample value, to obtain the amplitude disturbance value corresponding to each sample value.
- The preset operation may be, for example, that the average amplitude value is multiplied by a numerical value, where the numerical value is generally greater than 0.
- The calculating the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value includes: subtracting the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and using the obtained difference as the adjusted amplitude value of each sample value.
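Combining the two preceding steps, a sketch of the amplitude adjustment; disturbance_scale stands in for the optional preset operation (multiplication by a numerical value greater than 0) and is a hypothetical parameter:

```python
import numpy as np

def adjusted_amplitudes(samples, m, n, disturbance_scale=1.0):
    """For each sample: average amplitude of its subband -> amplitude
    disturbance value -> adjusted amplitude = amplitude - disturbance."""
    amplitudes = np.abs(np.asarray(samples, dtype=float))
    out = np.empty_like(amplitudes)
    for i, amp in enumerate(amplitudes):
        window = np.abs(window_subband(samples, i, m, n))  # sketch above
        out[i] = amp - disturbance_scale * float(np.mean(window))
    return out
```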
- Step 106: Determine a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value, where the second speech/audio signal is a signal obtained after the noise component of the first speech/audio signal is reconstructed.
- Specifically, a new value of each sample value may be determined according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal.
- Alternatively, the determining a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value may include: calculating a modification factor; performing modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor; and determining a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
- The obtained second speech/audio signal may include new values of all the sample values.
- The modification factor may be calculated according to the adaptive normalization length. Specifically, the modification factor β may be equal to a/L, where L is the adaptive normalization length and a is a constant greater than 1.
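A sketch of this variant of step 106. The factor β = a/L follows the description, but the exact modification formula applied to the positive adjusted amplitudes appears in the claims only as an unreproduced formula, so the scaling below is a hypothetical placeholder rather than the patented operation:

```python
import numpy as np

def second_signal(signs, adjusted, L, a=2.0, modify=False):
    adjusted = np.asarray(adjusted, dtype=float).copy()
    if modify:
        beta = a / L                 # modification factor from the description
        positive = adjusted > 0
        adjusted[positive] *= beta   # placeholder for the claimed formula
    return np.asarray(signs, dtype=float) * adjusted
```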
- The step of extracting the sign of each sample value in the first speech/audio signal in step 103 may be performed at any time before step 106. There is no necessary execution order between the step of extracting the sign of each sample value in the first speech/audio signal and step 104 and step 105.
- An execution order between step 103 and step 104 is not limited.
- For a speech/audio signal having an onset or an offset, within one frame of a time-domain signal in the speech/audio signal, a part of the speech/audio signal may have an extremely large signal sample point value and extremely strong signal energy, while another part of the speech/audio signal has an extremely small signal sample point value and extremely weak signal energy.
- If, as in the prior art, a random noise signal is added to the speech/audio signal in a frequency domain to obtain a signal after a noise component is reconstructed, the newly added random noise signal generally causes the signal energy of the part, whose original sample point value is extremely small, in the time-domain signal obtained by means of conversion to increase, so that the signal sample point value of this part also correspondingly becomes relatively large. Consequently, the signal obtained after the noise component is reconstructed has some echoes, which affects auditory quality of the signal obtained after the noise component is reconstructed.
- In the method above, a first speech/audio signal is determined according to a speech/audio signal; a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal are determined; an adaptive normalization length is determined; an adjusted amplitude value of each sample value is determined according to the adaptive normalization length and the amplitude value of each sample value; and a second speech/audio signal is determined according to the sign of each sample value and the adjusted amplitude value of each sample value.
- Only an original signal, that is, the first speech/audio signal, is processed, and no new signal is added to the first speech/audio signal, so that no new energy is added to the second speech/audio signal obtained after the noise component is reconstructed. Therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
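Under the assumptions of the previous sketches, the whole reconstruction (steps 103 to 106) chains together as follows; decoding (steps 101 and 102) is codec-specific and omitted:

```python
import numpy as np

def reconstruct_noise_component(first_signal, L):
    m = n = (L - 1) // 2                                # matches the L=5, m=n=2 example
    signs = np.sign(first_signal)                       # step 103
    adjusted = adjusted_amplitudes(first_signal, m, n)  # step 105
    return second_signal(signs, adjusted, L)            # step 106
```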
- FIG. 2 is another schematic flowchart of a method for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
- The method includes: Step 201: Receive a bitstream, and decode the bitstream, to obtain a speech/audio signal, where the speech/audio signal obtained by means of decoding includes a low frequency band signal and a high frequency band signal; and determine the high frequency band signal as a first speech/audio signal.
- Step 202: Determine a sign of each sample value in the high frequency band signal and an amplitude value of each sample value in the high frequency band signal.
- For example, if a coefficient of a sample value in the high frequency band signal is -4, a sign of the sample value is "-", and an amplitude value of the sample value is 4.
- Step 203: Determine an adaptive normalization length.
- For details on how to determine the adaptive normalization length, refer to the related descriptions in step 104. Details are not described herein again.
- Step 204: Determine, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determine, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value.
- For how to determine the average amplitude value corresponding to each sample value, refer to the related descriptions in step 105. Details are not described herein again.
- Step 205: Calculate an adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value.
- For how to determine the adjusted amplitude value of each sample value, refer to the related descriptions in step 105. Details are not described herein again.
- Step 206: Determine a second speech/audio signal according to the sign and the adjusted amplitude value of each sample value.
- The second speech/audio signal is a signal obtained after a noise component of the first speech/audio signal is reconstructed.
- For specific implementation of this step, refer to the related descriptions in step 106. Details are not described herein again.
- The step of determining the sign of each sample value in the first speech/audio signal in step 202 may be performed at any time before step 206. There is no necessary execution order between the step of determining the sign of each sample value in the first speech/audio signal and step 203, step 204, and step 205.
- An execution order between step 202 and step 203 is not limited.
- Step 207: Combine the second speech/audio signal and the low frequency band signal in the speech/audio signal obtained by means of decoding, to obtain an output signal.
- When the first speech/audio signal is a low frequency band signal in the speech/audio signal obtained by means of decoding, the second speech/audio signal and a high frequency band signal in the speech/audio signal obtained by means of decoding may be combined, to obtain an output signal.
- When the first speech/audio signal is a high frequency band signal in the speech/audio signal obtained by means of decoding, the second speech/audio signal and a low frequency band signal in the speech/audio signal obtained by means of decoding may be combined, to obtain an output signal.
- When the first speech/audio signal is a fullband signal, the second speech/audio signal may be directly determined as the output signal.
- In this embodiment, the noise component of the high frequency band signal is finally reconstructed, to obtain a second speech/audio signal. Therefore, if the high frequency band signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal and further improving auditory quality of the output signal that is finally output.
- FIG. 3 is another schematic flowchart of a method for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
- The method includes the following steps. Step 301 to step 305 are the same as step 201 to step 205, and details are not described herein again.
- Step 306: Calculate a modification factor, and perform modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor.
- For specific implementation of this step, refer to the related descriptions in step 106. Details are not described herein again.
- Step 307: Determine a second speech/audio signal according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing.
- For specific implementation of this step, refer to the related descriptions in step 106. Details are not described herein again.
- The step of determining the sign of each sample value in the first speech/audio signal in step 302 may be performed at any time before step 307. There is no necessary execution order between the step of determining the sign of each sample value in the first speech/audio signal and step 303, step 304, step 305, and step 306.
- An execution order between step 302 and step 303 is not limited.
- Step 308: Combine the second speech/audio signal and a low frequency band signal in the speech/audio signal obtained by means of decoding, to obtain an output signal.
- In the embodiments shown in FIG. 2 and FIG. 3, a high frequency band signal in the speech/audio signal obtained by means of decoding is determined as the first speech/audio signal, and a noise component of the first speech/audio signal is reconstructed, to finally obtain the second speech/audio signal.
- Alternatively, a noise component of a fullband signal of the speech/audio signal obtained by means of decoding may be reconstructed, or a noise component of a low frequency band signal of the speech/audio signal obtained by means of decoding may be reconstructed, to finally obtain a second speech/audio signal.
- For an implementation process thereof, refer to the exemplary methods shown in FIG. 2 and FIG. 3. A difference lies only in that, when a first speech/audio signal is to be determined, a fullband signal or a low frequency band signal is determined as the first speech/audio signal. Descriptions are not provided by using examples one by one herein.
- FIG. 4 is a schematic structural diagram of an apparatus for reconstructing a noise component of a speech/audio signal according to an embodiment of the present invention.
- The apparatus may be disposed in an electronic device.
- An apparatus 400 may include: a bitstream processing unit 410, configured to receive a bitstream and decode the bitstream to obtain a speech/audio signal; a signal determining unit 420, configured to determine a first speech/audio signal according to the speech/audio signal; a first determining unit 430, configured to determine a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal; a second determining unit 440, configured to determine an adaptive normalization length; a third determining unit 450, configured to determine an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value; and a fourth determining unit 460, configured to determine a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value.
- The third determining unit 450 includes: a determining subunit, configured to determine, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determine, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value; and an adjusted amplitude value calculation subunit, configured to calculate the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value.
- The determining subunit includes: a determining module, configured to determine, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs; and a calculation module, configured to calculate an average value of amplitude values of all sample values in the subband to which the sample value belongs, and use the average value obtained by means of calculation as the average amplitude value corresponding to the sample value.
- The determining module may be specifically configured to: for each sample value, determine a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, where m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0.
- The adjusted amplitude value calculation subunit is specifically configured to: subtract the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and use the obtained difference as the adjusted amplitude value of each sample value.
- The second determining unit 440 includes: a division subunit, configured to divide a low frequency band signal in the speech/audio signal into N subbands, where N is a natural number; a quantity determining subunit, configured to calculate a peak-to-average ratio of each subband and determine a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and a length calculation subunit, configured to calculate the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands.
- The length calculation subunit may be specifically configured to calculate the adaptive normalization length according to the formula L = K + α × M described above.
- The second determining unit 440 may alternatively be specifically configured to determine the adaptive normalization length in any of the peak-to-average-ratio-based or signal-type-based manners described in step 104.
- The fourth determining unit 460 may be specifically configured to: determine a new value of each sample value according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal; or calculate a modification factor β = a/L, perform modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor, and determine a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
- With the apparatus, a first speech/audio signal is determined according to a speech/audio signal; a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal are determined; an adaptive normalization length is determined; an adjusted amplitude value of each sample value is determined according to the adaptive normalization length and the amplitude value of each sample value; and a second speech/audio signal is determined according to the sign of each sample value and the adjusted amplitude value of each sample value.
- Only an original signal, that is, the first speech/audio signal, is processed, and no new signal is added to the first speech/audio signal, so that no new energy is added to the second speech/audio signal obtained after the noise component is reconstructed. Therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
- FIG. 5 is a structural diagram of an electronic device according to an embodiment of the present invention.
- An electronic device 500 includes a processor 510, a memory 520, a transceiver 530, and a bus 540.
- The processor 510, the memory 520, and the transceiver 530 are connected to each other by using the bus 540, and the bus 540 may be an ISA bus, a PCI bus, an EISA bus, or the like.
- The bus may be classified into an address bus, a data bus, a control bus, or the like.
- For ease of representation, the bus in FIG. 5 is indicated by using only one bold line, but this does not indicate that there is only one bus or only one type of bus.
- The memory 520 is configured to store a program.
- Specifically, the program may include program code, and the program code includes a computer operation instruction.
- The memory 520 may include a high-speed RAM memory, and may further include a non-volatile memory, such as at least one magnetic disk storage.
- The transceiver 530 is configured to connect to another device and communicate with that device. Specifically, the transceiver 530 may be configured to receive a bitstream.
- The processor 510 executes the program code stored in the memory 520 and is configured to: decode the bitstream, to obtain a speech/audio signal; determine a first speech/audio signal according to the speech/audio signal; determine a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal; determine an adaptive normalization length; determine an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value; and determine a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value.
- The processor 510 may be specifically configured to determine the adaptive normalization length, the average amplitude value corresponding to each sample value, the amplitude disturbance value corresponding to each sample value, and the subband to which each sample value belongs in any of the manners described in step 104 and step 105 above.
- The processor 510 may be specifically configured to: subtract the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and use the obtained difference as the adjusted amplitude value of each sample value.
- The processor 510 may further be specifically configured to: determine a new value of each sample value according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal; or calculate a modification factor, perform modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor, and determine a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
- The electronic device determines a first speech/audio signal according to a speech/audio signal; determines a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal; determines an adaptive normalization length; determines an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value; and determines a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value.
- Therefore, if the first speech/audio signal has an onset or an offset, no echo is added to the second speech/audio signal, thereby improving auditory quality of the second speech/audio signal.
- A system embodiment basically corresponds to a method embodiment, and therefore for related parts, reference may be made to partial descriptions in the method embodiment.
- The described system embodiment is merely exemplary.
- The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, that is, they may be located in one position or may be distributed on a plurality of network units.
- A part or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
- A person of ordinary skill in the art may understand and implement the embodiments of the present invention without creative efforts.
- The present invention can be described in the general context of executable computer instructions executed by a computer, for example, a program module.
- Generally, the program module includes a routine, a program, an object, a component, a data structure, and the like for executing a particular task or implementing a particular abstract data type.
- The present invention may also be practiced in distributed computing environments in which tasks are performed by remote processing devices that are connected by using a communications network.
- In a distributed computing environment, program modules may be located in both local and remote computer storage media including storage devices.
- The program may be stored in a computer readable storage medium, such as a ROM, a RAM, a magnetic disc, or an optical disc.
Claims (12)
- 1. A method for processing a speech/audio signal, wherein the method comprises: receiving (101) a bitstream, and decoding the bitstream to obtain a speech/audio signal; determining (102) a first speech/audio signal according to the speech/audio signal, wherein the first speech/audio signal is a signal, whose noise component needs to be reconstructed, in the speech/audio signal; determining (103) a sign of each sample value in the first speech/audio signal and an amplitude value of each sample value in the first speech/audio signal; determining (104) an adaptive normalization length, wherein the determining an adaptive normalization length comprises: dividing a low frequency band signal in the speech/audio signal into N subbands, wherein N is a natural number; calculating a peak-to-average ratio of each subband, and determining a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and calculating the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands; determining (105) an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value; and determining (106) a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value, wherein the second speech/audio signal is a signal obtained after the noise component of the first speech/audio signal is reconstructed; wherein the determining (105) an adjusted amplitude value of each sample value according to the adaptive normalization length and the amplitude value of each sample value comprises: calculating, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determining, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value; and calculating the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value; wherein the calculating the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value comprises: subtracting the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and using the obtained difference as the adjusted amplitude value of each sample value; and wherein the calculating, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value comprises: determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs, wherein the subband comprises a particular quantity of sample values, and wherein the determining, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs comprises: for each sample value, determining a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, wherein m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0; and calculating an average value of amplitude values of all sample values in the subband to which the sample value belongs, and using the average value obtained by means of calculation as the average amplitude value corresponding to the sample value.
- 2. The method according to claim 1, wherein the calculating the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands comprises: calculating the adaptive normalization length according to a formula L = K + α × M, wherein L is the adaptive normalization length; K is a numerical value corresponding to the signal type of the high frequency band signal in the speech/audio signal, and different signal types of high frequency band signals correspond to different numerical values K; M is the quantity of subbands whose peak-to-average ratios are greater than the preset peak-to-average ratio threshold; and α is a constant less than 1.
- 3. The method according to claim 1, wherein the determining an adaptive normalization length comprises: calculating a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal, and when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is less than a preset difference threshold, determining the adaptive normalization length as a preset first length value, or when the absolute value of the difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is not less than the preset difference threshold, determining the adaptive normalization length as a preset second length value, wherein the first length value is greater than the second length value; or calculating a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal, and when the peak-to-average ratio of the low frequency band signal is less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset first length value, or when the peak-to-average ratio of the low frequency band signal is not less than the peak-to-average ratio of the high frequency band signal, determining the adaptive normalization length as a preset second length value; or determining the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal, wherein different signal types of high frequency band signals correspond to different adaptive normalization lengths.
- 4. The method according to any one of claims 1 to 3, wherein the determining a second speech/audio signal according to the sign of each sample value and the adjusted amplitude value of each sample value comprises: determining a new value of each sample value according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal; or calculating a modification factor, performing modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor, and determining a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
- 5. The method according to claim 4, wherein the calculating a modification factor comprises: calculating the modification factor by using a formula β = a/L, wherein β is the modification factor, L is the adaptive normalization length, and a is a constant greater than 1.
- 6. The method according to claim 4 or 5, wherein the performing modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor comprises: performing modification processing on the adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values by using the following formula:
- 7. An apparatus for reconstructing a noise component of a speech/audio signal, comprising: a bitstream processing unit (410), configured to receive a bitstream and decode the bitstream to obtain a speech/audio signal; a signal determining unit (420), configured to determine a first speech/audio signal according to the speech/audio signal obtained by the bitstream processing unit, wherein the first speech/audio signal is a signal, whose noise component needs to be reconstructed, in the speech/audio signal obtained by means of decoding; a first determining unit (430), configured to determine a sign of each sample value in the first speech/audio signal determined by the signal determining unit and an amplitude value of each sample value in the first speech/audio signal determined by the signal determining unit; a second determining unit (440), configured to determine an adaptive normalization length, wherein the second determining unit comprises: a division subunit, configured to divide a low frequency band signal in the speech/audio signal into N subbands, wherein N is a natural number; a quantity determining subunit, configured to calculate a peak-to-average ratio of each subband, and determine a quantity of subbands whose peak-to-average ratios are greater than a preset peak-to-average ratio threshold; and a length calculation subunit, configured to calculate the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal and the quantity of the subbands; a third determining unit (450), configured to determine an adjusted amplitude value of each sample value according to the adaptive normalization length determined by the second determining unit and the amplitude value, which is of each sample value and is determined by the first determining unit; and a fourth determining unit (460), configured to determine a second speech/audio signal according to the sign, which is of each sample value and is determined by the first determining unit, and the adjusted amplitude value, which is of each sample value and is determined by the third determining unit, wherein the second speech/audio signal is a signal obtained after the noise component of the first speech/audio signal is reconstructed; wherein the third determining unit (450) comprises: a determining subunit, configured to determine, according to the amplitude value of each sample value and the adaptive normalization length, an average amplitude value corresponding to each sample value, and determine, according to the average amplitude value corresponding to each sample value, an amplitude disturbance value corresponding to each sample value; and an adjusted amplitude value calculation subunit, configured to calculate the adjusted amplitude value of each sample value according to the amplitude value of each sample value and according to the amplitude disturbance value corresponding to each sample value; wherein the adjusted amplitude value calculation subunit is specifically configured to: subtract the amplitude disturbance value corresponding to each sample value from the amplitude value of each sample value, to obtain a difference between the amplitude value of each sample value and the amplitude disturbance value corresponding to each sample value, and use the obtained difference as the adjusted amplitude value of each sample value; and wherein the determining subunit comprises: a determining module, configured to determine, for each sample value and according to the adaptive normalization length, a subband to which the sample value belongs, wherein the subband comprises a particular quantity of sample values, and the determining module is specifically configured to: for each sample value, determine a subband consisting of m sample values before the sample value, the sample value, and n sample values after the sample value as the subband to which the sample value belongs, wherein m and n depend on the adaptive normalization length, m is an integer not less than 0, and n is an integer not less than 0; and a calculation module, configured to calculate an average value of amplitude values of all sample values in the subband to which the sample value belongs, and use the average value obtained by means of calculation as the average amplitude value corresponding to the sample value.
- 8. The apparatus according to claim 7, wherein the length calculation subunit is specifically configured to: calculate the adaptive normalization length according to a formula L = K + α × M, wherein L is the adaptive normalization length; K is a numerical value corresponding to the signal type of the high frequency band signal in the speech/audio signal, and different signal types of high frequency band signals correspond to different numerical values K; M is the quantity of subbands whose peak-to-average ratios are greater than the preset peak-to-average ratio threshold; and α is a constant less than 1.
- 9. The apparatus according to claim 7, wherein the second determining unit (440) is specifically configured to: calculate a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal, and when an absolute value of a difference between the peak-to-average ratio of the low frequency band signal and the peak-to-average ratio of the high frequency band signal is less than a preset difference threshold, determine the adaptive normalization length as a preset first length value, or when the absolute value of the difference is not less than the preset difference threshold, determine the adaptive normalization length as a preset second length value, wherein the first length value is greater than the second length value; or calculate a peak-to-average ratio of a low frequency band signal in the speech/audio signal and a peak-to-average ratio of a high frequency band signal in the speech/audio signal, and when the peak-to-average ratio of the low frequency band signal is less than the peak-to-average ratio of the high frequency band signal, determine the adaptive normalization length as a preset first length value, or when the peak-to-average ratio of the low frequency band signal is not less than the peak-to-average ratio of the high frequency band signal, determine the adaptive normalization length as a preset second length value; or determine the adaptive normalization length according to a signal type of a high frequency band signal in the speech/audio signal, wherein different signal types of high frequency band signals correspond to different adaptive normalization lengths.
- 10. The apparatus according to any one of claims 7 to 9, wherein the fourth determining unit (460) is specifically configured to: determine a new value of each sample value according to the sign and the adjusted amplitude value of each sample value, to obtain the second speech/audio signal; or calculate a modification factor, perform modification processing on an adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values according to the modification factor, and determine a new value of each sample value according to the sign of each sample value and an adjusted amplitude value obtained after the modification processing, to obtain the second speech/audio signal.
- 11. The apparatus according to claim 10, wherein the fourth determining unit (460) is specifically configured to calculate the modification factor by using a formula β = a/L, wherein β is the modification factor, L is the adaptive normalization length, and a is a constant greater than 1.
- 12. The apparatus according to claim 11, wherein the fourth determining unit (460) is specifically configured to: perform modification processing on the adjusted amplitude value, which is greater than 0, in the adjusted amplitude values of the sample values by using the following formula:
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19190663.5A EP3712890B1 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
EP23184053.9A EP4283614A3 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410242233.2A CN105336339B (zh) | 2014-06-03 | 2014-06-03 | 一种语音频信号的处理方法和装置 |
PCT/CN2015/071017 WO2015184813A1 (zh) | 2014-06-03 | 2015-01-19 | 一种语音频信号的处理方法和装置 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP19190663.5A Division-Into EP3712890B1 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
EP19190663.5A Division EP3712890B1 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
EP23184053.9A Division EP4283614A3 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
Publications (3)
Publication Number | Publication Date |
---|---|
EP3147900A1 EP3147900A1 (de) | 2017-03-29 |
EP3147900A4 EP3147900A4 (de) | 2017-05-03 |
EP3147900B1 true EP3147900B1 (de) | 2019-10-02 |
Family
ID=54766052
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP15802508.0A Active EP3147900B1 (de) | 2014-06-03 | 2015-01-19 | Verfahren und vorrichtung zur verarbeitung von audiosignalen |
EP23184053.9A Pending EP4283614A3 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
EP19190663.5A Active EP3712890B1 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP23184053.9A Pending EP4283614A3 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
EP19190663.5A Active EP3712890B1 (de) | 2014-06-03 | 2015-01-19 | Verfahren zur verarbeitung von sprach-/audiosignalen und vorrichtung |
Country Status (19)
Country | Link |
---|---|
US (3) | US9978383B2 (de) |
EP (3) | EP3147900B1 (de) |
JP (3) | JP6462727B2 (de) |
KR (3) | KR102104561B1 (de) |
CN (2) | CN110097892B (de) |
AU (1) | AU2015271580B2 (de) |
BR (1) | BR112016028375B1 (de) |
CA (1) | CA2951169C (de) |
CL (1) | CL2016003121A1 (de) |
ES (1) | ES2964221T3 (de) |
HK (1) | HK1220543A1 (de) |
IL (1) | IL249337B (de) |
MX (2) | MX362612B (de) |
MY (1) | MY179546A (de) |
NZ (1) | NZ727567A (de) |
RU (1) | RU2651184C1 (de) |
SG (1) | SG11201610141RA (de) |
WO (1) | WO2015184813A1 (de) |
ZA (1) | ZA201608477B (de) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110097892B (zh) * | 2014-06-03 | 2022-05-10 | 华为技术有限公司 | 一种语音频信号的处理方法和装置 |
CN108133712B (zh) * | 2016-11-30 | 2021-02-12 | 华为技术有限公司 | 一种处理音频数据的方法和装置 |
CN106847299B (zh) * | 2017-02-24 | 2020-06-19 | 喜大(上海)网络科技有限公司 | 延时的估计方法及装置 |
RU2754497C1 (ru) * | 2020-11-17 | 2021-09-02 | федеральное государственное автономное образовательное учреждение высшего образования "Казанский (Приволжский) федеральный университет" (ФГАОУ ВО КФУ) | Способ передачи речевых файлов по зашумленному каналу и устройство для его реализации |
US20230300524A1 (en) * | 2022-03-21 | 2023-09-21 | Qualcomm Incorporated | Adaptively adjusting an input current limit for a boost converter |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130018660A1 (en) * | 2011-07-13 | 2013-01-17 | Huawei Technologies Co., Ltd. | Audio signal coding and decoding method and device |
US20140044192A1 (en) * | 2010-09-29 | 2014-02-13 | Huawei Technologies Co., Ltd. | Method and device for encoding a high frequency signal, and method and device for decoding a high frequency signal |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6261312B1 (en) | 1998-06-23 | 2001-07-17 | Innercool Therapies, Inc. | Inflatable catheter for selective organ heating and cooling and method of using the same |
SE9803698L (sv) * | 1998-10-26 | 2000-04-27 | Ericsson Telefon Ab L M | Methods and devices in a telecommunications system |
CA2252170A1 (en) * | 1998-10-27 | 2000-04-27 | Bruno Bessette | A method and device for high quality coding of wideband speech and audio signals |
US6687668B2 (en) * | 1999-12-31 | 2004-02-03 | C & S Technology Co., Ltd. | Method for improvement of G.723.1 processing time and speech quality and for reduction of bit rate in CELP vocoder and CELP vocoder using the same |
US6631139B2 (en) * | 2001-01-31 | 2003-10-07 | Qualcomm Incorporated | Method and apparatus for interoperability between voice transmission systems during speech inactivity |
US6708147B2 (en) * | 2001-02-28 | 2004-03-16 | Telefonaktiebolaget Lm Ericsson(Publ) | Method and apparatus for providing comfort noise in communication system with discontinuous transmission |
US20030093270A1 (en) * | 2001-11-13 | 2003-05-15 | Domer Steven M. | Comfort noise including recorded noise |
KR100935961B1 (ko) * | 2001-11-14 | 2010-01-08 | Panasonic Corporation | Encoding device and decoding device |
US7536298B2 (en) * | 2004-03-15 | 2009-05-19 | Intel Corporation | Method of comfort noise generation for speech communication |
US7831421B2 (en) * | 2005-05-31 | 2010-11-09 | Microsoft Corporation | Robust decoder |
US7610197B2 (en) * | 2005-08-31 | 2009-10-27 | Motorola, Inc. | Method and apparatus for comfort noise generation in speech communication systems |
US8255213B2 (en) | 2006-07-12 | 2012-08-28 | Panasonic Corporation | Speech decoding apparatus, speech encoding apparatus, and lost frame concealment method |
KR101396140B1 (ko) * | 2006-09-18 | 2014-05-20 | Koninklijke Philips N.V. | Encoding and decoding of audio objects |
CN101320563B (zh) * | 2007-06-05 | 2012-06-27 | Huawei Technologies Co., Ltd. | Background noise encoding/decoding apparatus and method, and communication device |
CN101335003B (zh) * | 2007-09-28 | 2010-07-07 | Huawei Technologies Co., Ltd. | Noise generation apparatus and method |
US8139777B2 (en) * | 2007-10-31 | 2012-03-20 | Qnx Software Systems Co. | System for comfort noise injection |
CN101483042B (zh) * | 2008-03-20 | 2011-03-30 | Huawei Technologies Co., Ltd. | Noise generation method and noise generation apparatus |
AU2009267518B2 (en) * | 2008-07-11 | 2012-08-16 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for encoding/decoding an audio signal using an aliasing switch scheme |
PT2146344T (pt) * | 2008-07-17 | 2016-10-13 | Fraunhofer Ges Forschung | Audio encoding/decoding scheme with a switchable bypass |
CN101483048B (zh) | 2009-02-06 | 2010-08-25 | Sunplus Technology Co., Ltd. | Optical storage device and automatic calibration method for loop gain value thereof |
US9047875B2 (en) * | 2010-07-19 | 2015-06-02 | Futurewei Technologies, Inc. | Spectrum flatness control for bandwidth extension |
JP6189831B2 (ja) * | 2011-05-13 | 2017-08-30 | Samsung Electronics Co., Ltd. | Bit allocation method and recording medium |
US20130006644A1 (en) * | 2011-06-30 | 2013-01-03 | Zte Corporation | Method and device for spectral band replication, and method and system for audio decoding |
DE102011106033A1 (de) * | 2011-06-30 | 2013-01-03 | Zte Corporation | Method and system for audio encoding and decoding, and method for estimating the noise level |
US20130132100A1 (en) | 2011-10-28 | 2013-05-23 | Electronics And Telecommunications Research Institute | Apparatus and method for codec signal in a communication system |
LT2774145T (lt) * | 2011-11-03 | 2020-09-25 | Voiceage Evs Llc | Improving non-speech content for a low-rate CELP decoder |
US20130282373A1 (en) | 2012-04-23 | 2013-10-24 | Qualcomm Incorporated | Systems and methods for audio signal processing |
CN110097892B (zh) * | 2014-06-03 | 2022-05-10 | Huawei Technologies Co., Ltd. | Speech/audio signal processing method and apparatus |
US12044962B2 (en) | 2019-04-19 | 2024-07-23 | Canon Kabushiki Kaisha | Forming apparatus, forming method, and article manufacturing method |
2014
- 2014-06-03 CN CN201910358522.1A patent/CN110097892B/zh active Active
- 2014-06-03 CN CN201410242233.2A patent/CN105336339B/zh active Active
2015
- 2015-01-19 SG SG11201610141RA patent/SG11201610141RA/en unknown
- 2015-01-19 RU RU2016152224A patent/RU2651184C1/ru active
- 2015-01-19 MY MYPI2016704486A patent/MY179546A/en unknown
- 2015-01-19 EP EP15802508.0A patent/EP3147900B1/de active Active
- 2015-01-19 MX MX2016015950A patent/MX362612B/es active IP Right Grant
- 2015-01-19 AU AU2015271580A patent/AU2015271580B2/en active Active
- 2015-01-19 CA CA2951169A patent/CA2951169C/en active Active
- 2015-01-19 EP EP23184053.9A patent/EP4283614A3/de active Pending
- 2015-01-19 WO PCT/CN2015/071017 patent/WO2015184813A1/zh active Application Filing
- 2015-01-19 EP EP19190663.5A patent/EP3712890B1/de active Active
- 2015-01-19 BR BR112016028375-9A patent/BR112016028375B1/pt active IP Right Grant
- 2015-01-19 JP JP2016570979A patent/JP6462727B2/ja active Active
- 2015-01-19 KR KR1020197002091A patent/KR102104561B1/ko active IP Right Grant
- 2015-01-19 KR KR1020167035690A patent/KR101943529B1/ko active IP Right Grant
- 2015-01-19 ES ES19190663T patent/ES2964221T3/es active Active
- 2015-01-19 KR KR1020207011385A patent/KR102201791B1/ko active IP Right Grant
- 2015-01-19 NZ NZ727567A patent/NZ727567A/en unknown
2016
- 2016-07-15 HK HK16108374.1A patent/HK1220543A1/zh unknown
- 2016-12-01 IL IL249337A patent/IL249337B/en active IP Right Grant
- 2016-12-02 CL CL2016003121A patent/CL2016003121A1/es unknown
- 2016-12-02 MX MX2019001193A patent/MX2019001193A/es unknown
- 2016-12-05 US US15/369,396 patent/US9978383B2/en active Active
- 2016-12-08 ZA ZA2016/08477A patent/ZA201608477B/en unknown
2018
- 2018-05-21 US US15/985,281 patent/US10657977B2/en active Active
- 2018-12-26 JP JP2018242725A patent/JP6817283B2/ja active Active
2020
- 2020-05-18 US US16/877,389 patent/US11462225B2/en active Active
- 2020-12-23 JP JP2020213571A patent/JP7142674B2/ja active Active
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11462225B2 (en) | Method for processing speech/audio signal and apparatus | |
US8392176B2 (en) | Processing of excitation in audio coding and decoding | |
CN103325380A (zh) | 2013-09-25 | Gain post-processing for signal enhancement |
US11881226B2 (en) | Signal processing method and device | |
CN105529034A (zh) | 2016-04-27 | Reverberation-based speech recognition method and apparatus |
EP3139379A1 (de) | 2017-03-08 | Audio coding method and related apparatus |
KR20220050924A (ko) | 2022-04-25 | Multiple lag format for audio coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20161216 |
|
AK | Designated contracting states |
Kind code of ref document: A1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
A4 | Supplementary search report drawn up and despatched |
Effective date: 20170404 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 21/0316 20130101ALI20170329BHEP
Ipc: G10L 19/26 20130101AFI20170329BHEP
Ipc: G10L 19/028 20130101ALI20170329BHEP
Ipc: G10L 21/02 20130101ALI20170329BHEP
|
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20180313 |
|
REG | Reference to a national code |
Ref country code: DE; Ref legal event code: R079; Ref document number: 602015039167; Country of ref document: DE; Free format text: PREVIOUS MAIN CLASS: G10L0021020000; Ipc: G10L0019260000
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 21/038 20130101ALN20190321BHEP
Ipc: G10L 19/26 20130101AFI20190321BHEP
Ipc: G10L 21/02 20130101ALI20190321BHEP
Ipc: G10L 19/028 20130101ALI20190321BHEP
Ipc: G10L 21/0316 20130101ALI20190321BHEP
|
INTG | Intention to grant announced |
Effective date: 20190415 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
|
REG | Reference to a national code |
Ref country code: GB; Ref legal event code: FG4D
|
REG | Reference to a national code |
Ref country code: CH; Ref legal event code: EP
Ref country code: AT; Ref legal event code: REF; Ref document number: 1187056; Country of ref document: AT; Kind code of ref document: T; Effective date: 20191015
|
REG | Reference to a national code |
Ref country code: DE; Ref legal event code: R096; Ref document number: 602015039167; Country of ref document: DE
|
REG | Reference to a national code |
Ref country code: IE; Ref legal event code: FG4D
|
REG | Reference to a national code |
Ref country code: NL; Ref legal event code: MP; Effective date: 20191002
|
REG | Reference to a national code |
Ref country code: LT; Ref legal event code: MG4D
|
REG | Reference to a national code |
Ref country code: AT; Ref legal event code: MK05; Ref document number: 1187056; Country of ref document: AT; Kind code of ref document: T; Effective date: 20191002
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: AT, SE, LV, FI, LT, PL, NL, ES (Effective date: 20191002); BG, NO (Effective date: 20200102); GR (Effective date: 20200103); PT (Effective date: 20200203)
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: RS, HR, CZ (Effective date: 20191002); IS (Effective date: 20200224)
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20191002
|
REG | Reference to a national code |
Ref country code: DE; Ref legal event code: R097; Ref document number: 602015039167; Country of ref document: DE
|
PG2D | Information on lapse in contracting state deleted |
Ref country code: IS |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: RO, DK, EE (Effective date: 20191002); IS (Effective date: 20200202)
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: SK, IT, SM, MC (Effective date: 20191002)
|
REG | Reference to a national code |
Ref country code: CH; Ref legal event code: PL
|
26N | No opposition filed |
Effective date: 20200703 |
|
REG | Reference to a national code |
Ref country code: BE; Ref legal event code: MM; Effective date: 20200131
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Effective date: 20200119
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20191002
Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES
Ref country code: BE, CH, LI (Effective date: 20200131)
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Effective date: 20200119
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Ref country code: TR, MT, CY (Effective date: 20191002)
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20191002
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230524 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB; Payment date: 20231130; Year of fee payment: 10
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: FR; Payment date: 20231212; Year of fee payment: 10
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE; Payment date: 20231205; Year of fee payment: 10