EP3138095B1 - Improved frame loss correction with voicing information - Google Patents

Improved frame loss correction with voicing information

Info

Publication number
EP3138095B1
Authority
EP
European Patent Office
Prior art keywords
signal
frame
components
decoding
voicing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP15725801.3A
Other languages
English (en)
French (fr)
Other versions
EP3138095A1 (de)
Inventor
Julien Faure
Stéphane RAGOT
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Orange SA
Original Assignee
Orange SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Orange SA
Publication of EP3138095A1
Application granted
Publication of EP3138095B1
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS > G10 MUSICAL INSTRUMENTS; ACOUSTICS > G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING (common parent of the classes below)
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/005 Correction of errors induced by the transmission channel, if related to the coding algorithm
    • G10L19/02 Analysis-synthesis using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/028 Noise substitution, i.e. substituting non-tonal spectral components by noisy source
    • G10L19/04 Analysis-synthesis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/18 Vocoders using multiple modes
    • G10L19/20 Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78 Detection of presence or absence of voice signals
    • G10L25/81 Detection of presence or absence of voice signals for discriminating voice from music
    • G10L25/93 Discriminating between voiced and unvoiced parts of speech signals
    • G10L2025/932 Decision in previous or following frames

Definitions

  • the present invention relates to the field of telecommunication coding/decoding, and more particularly to frame loss correction at decoding.
  • a "frame" is understood to mean an audio segment composed of at least one sample (so that the invention applies equally well to the loss of one or more samples coded according to the G.711 standard as to the loss of one or more sample packets coded according to G.723, G.729, etc.).
  • the loss of audio frames occurs when a real-time communication using an encoder and a decoder is disturbed by the conditions of a telecommunication network (radio frequency problems, congestion of the access network, etc.).
  • the decoder uses frame loss correction mechanisms to try to substitute the missing signal with a reconstructed signal by using information available to the decoder (for example the audio signal already decoded for one or more past frames). This technique can maintain quality of service despite degraded network performance.
  • Frame loss correction techniques are most often very dependent on the type of coding used.
  • in CELP coding, it is common to repeat certain parameters decoded in the previous frame (spectral envelope, pitch, codebook gains), with adjustments such as a modification of the spectral envelope to converge towards a mean envelope, or the use of a random fixed codebook.
  • a Modulated Lapped Transform (MLT) with 50% overlap-add and sinusoidal windows provides a transition between the last lost frame and the repeated frame that is slow enough to erase the artifacts related to the simple repetition of the frame, in the case of a single lost frame.
  • this realization does not require additional delay, since it exploits the existing delay and the time-domain aliasing of the MLT transform to perform an overlap-add with the reconstructed signal.
  • the document FR 1350845 proposes a hybrid method that combines the advantages of the two methods by allowing phase continuity in the transformed domain.
  • the present invention falls within this framework. A detailed description of the solution that is the subject of document FR 1350845 is given later with reference to figure 1.
  • the invention improves the situation.
  • the invention aims to improve on the state of the art represented by document FR 1350845 by modifying different stages of the processing presented in that document (pitch search, selection of components, noise injection), while adapting in particular to the characteristics of the original signal.
  • these characteristics of the original signal may be encoded as specific information in the data stream sent to the decoder (the "bitstream"), depending on a speech and/or music classification, and in particular for the speech class.
  • such an embodiment can be implemented in an encoder for determining the voicing information, and more particularly in a decoder, particularly in the case of frame loss. It can be implemented as software in a codec implementation for Enhanced Voice Services ("EVS") specified by the 3GPP (SA4) group.
  • the present invention is also directed to a computer program as in claim 13.
  • an example of a flow chart of such a program is presented in the following detailed description with reference to figure 4 for the decoding within the meaning of the invention, and with reference to figure 3 for the coding useful to the invention.
  • the present invention also relates to a device for decoding a digital audio signal as in claim 14 comprising a succession of samples distributed in successive frames.
  • a succession of N audio samples, denoted b(n) below, is stored in a buffer memory of the decoder (or "buffer"). These samples correspond to samples already decoded and are therefore accessible for frame loss correction at the decoder. If the first sample to be synthesized is sample N, the audio buffer corresponds to the previous samples 0 to N-1.
  • the audio buffer corresponds to the samples of the previous frame and cannot be modified, because this type of coding/decoding does not introduce delay in the output of the signal; it is therefore not possible to perform a crossfade of sufficient duration to cover a frame loss.
  • Fc denotes the separation frequency between the low band and the high band.
  • This filtering is preferably a filtering without delay.
  • this filtering step may be optional, the following steps being carried out in full band.
  • the next step S3 consists in searching, in the low band, for a loopback point and a segment p(n) corresponding to the fundamental period (or "pitch" hereinafter) within the buffer b(n) resampled to the frequency Fc.
  • this makes it possible to take into account the continuity of the pitch in the lost frame(s) to be reconstructed.
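The loopback-point search of step S3 can be illustrated as a normalized-correlation pitch search over the resampled low-band buffer b(n). This is only a sketch: the patent does not fix a particular search method, and the function name and window parameters below are hypothetical.

```python
import numpy as np

def find_pitch_segment(b, fc=6400, t_min=0.002, t_max=0.033):
    """Illustrative pitch search: find the lag that maximizes the
    normalized correlation between the most recent samples of the
    low-band buffer b and the samples one lag earlier."""
    n_total = len(b)
    win = int(t_min * fc)                  # correlation window length
    target = b[-win:]                      # most recent low-band samples
    best_lag, best_corr = None, -np.inf
    max_lag = min(int(t_max * fc), n_total - win)
    for lag in range(win, max_lag):
        cand = b[-win - lag:-lag]          # same window, one lag earlier
        denom = np.linalg.norm(target) * np.linalg.norm(cand)
        corr = float(np.dot(target, cand) / denom) if denom > 0 else -np.inf
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag, b[-best_lag:]         # lag and one pitch segment p(n)
```

On a strictly periodic input the returned lag is a multiple of the true period, and p(n) is the segment that step S4 then analyzes.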
  • Step S4 consists of breaking down the segment p (n) into a sum of sinusoidal components.
  • the discrete Fourier transform (DFT) of the signal p (n) can be computed over a period corresponding to the length of the signal. The frequency, the phase and the amplitude of each of the sinusoidal components (or "peaks") that make up the signal are thus obtained.
  • Other transforms than DFT are possible. For example, transforms of DCT, MDCT or MCLT type can be implemented.
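As an illustrative sketch of step S4, assuming a plain DFT computed over the length of p(n) (the amplitude normalization and names below are choices made here, not taken from the patent):

```python
import numpy as np

def sinusoidal_analysis(p, fs):
    """Decompose one pitch period p(n) into sinusoidal components:
    per-bin frequency (Hz), amplitude and phase from a DFT taken
    over the length of the period itself."""
    P = len(p)
    spec = np.fft.rfft(p)                  # DFT over one pitch period
    amps = np.abs(spec) * 2.0 / P          # single-sided amplitudes
    amps[0] /= 2.0                         # DC bin is not doubled
    phases = np.angle(spec)                # phase of each component
    freqs = np.arange(len(spec)) * fs / P  # bin frequencies in Hz
    return freqs, amps, phases
```

(The Nyquist-bin special case is ignored here for brevity.) A pure sine spanning exactly one period lands in a single bin with its amplitude, frequency and phase recovered directly.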
  • step S5 selects K sinusoidal components so as to keep only the most significant ones.
  • the selection of the components first amounts to selecting the amplitudes A(n) for which A(n) > A(n-1) and A(n) > A(n+1), with n ∈ [0; P'/2 - 1], which ensures that the amplitudes correspond to spectral peaks.
  • the FFT analysis is then more efficient over a length P' that is a power of 2, the interpolation leaving the actual pitch period unchanged.
  • Π(k) = FFT(p'(n)); from this FFT transform the phases φ(k) and amplitudes A(k) of the sinusoidal components are obtained directly, the normalized frequencies between 0 and 1 being given here by f(k) = 2k/P', with k ∈ [0; P'/2 - 1].
  • the sinusoidal synthesis step S6 consists in generating a segment s(n) of length at least equal to the size of the lost frame (T).
  • step S7 consists of "noise injection" (filling the spectral zones corresponding to the unselected lines) so as to compensate for the energy loss linked to the omission of certain frequency peaks in the low band.
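The synthesis of step S6 amounts to summing the selected sinusoids over the duration to be covered. A minimal sketch (the function name is hypothetical; amplitudes, frequencies and phases are assumed to come from the analysis of the pitch segment):

```python
import numpy as np

def sinusoidal_synthesis(freqs, amps, phases, n_samples, fs):
    """Rebuild a segment s(n) as the sum of the selected sinusoidal
    components, long enough to cover the lost frame."""
    n = np.arange(n_samples)
    s = np.zeros(n_samples)
    for f, a, ph in zip(freqs, amps, phases):
        s += a * np.cos(2 * np.pi * f * n / fs + ph)
    return s
```

Because the components keep their analyzed frequencies and phases, the synthesized segment prolongs the periodic structure of the last valid frames instead of merely repeating them.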
  • Step S8 applied to the high band may simply consist of repeating the past signal.
  • in step S9, the signal is synthesized by resampling the low band to its original frequency, after being mixed with the filtered high band (simply repeated, in steps S8 and S11).
  • step S10 is an overlap-add that provides continuity between the signal before the frame loss and the synthesized signal.
  • voicing information about the signal before the frame loss, transmitted at least at one encoder bit rate, is used at decoding (step DI-1) to quantitatively determine the proportion of noise to be added to the synthesis signal replacing one or more lost frames.
  • the decoder uses the voicing information to decrease, as a function of the voicing, the overall amount of noise mixed with the synthesis signal (by assigning a lower gain G(res) to the noise signal r'(k) derived from a residual in step DI-3, and/or by selecting more amplitude components A(k) to be used for the construction of the synthesis signal in step DI-4).
  • the decoder can further adjust its parameters, including the pitch search, to optimize the quality/complexity trade-off of the processing as a function of the voicing information. For example, if the signal is voiced, the pitch search window Nc may be larger (at step DI-5), as will be seen later with reference to figure 5.
  • this "flatness" data Pl of the spectrum can be received on several bits at the decoder in the optional step DI-10 of figure 2, then compared with a threshold in step DI-11, which amounts to determining in steps DI-1 and DI-2 whether the voicing is above or below a threshold, and deducing from it the appropriate processing, in particular for the selection of peaks and for the choice of the duration of the pitch search segment.
  • this information (whether in the form of a single bit or of a multi-bit value) is received from the encoder (at least at one codec bit rate) in the example described here.
  • the input signal, presented in the form of frames (C1), is analyzed in step C2.
  • the analysis step consists in determining whether the audio signal of the current frame has characteristics that would require special treatment in the event of frame loss at the decoder, as is the case, for example, for voiced speech signals.
  • this may rely on a speech/music (or other) classification.
  • a classification at the coder already makes it possible to adapt the technique used for the coding according to the nature of the signal (speech or music).
  • predictive coders such as, for example, the coder according to the G.718 standard also use a classification so as to adapt the parameters of the coder to the nature of the signal (voiced / unvoiced, transient, generic, inactive).
  • a bit of "characterization for frame loss" is reserved. It is added to the coded stream (or bitstream) in step C3 to indicate whether the signal is a speech signal (voiced or generic). This bit is for example set to 1 or to 0 according to:
    • the decision of the speech/music classifier,
    • and additionally the decision of the speech coding-mode classifier,
    as in the table below:

    Classifier decision   Coding mode   Characterization bit for frame loss
    Music                 -             0
    Speech                Voiced        1
    Speech                Unvoiced      0
    Speech                Transient     0
    Speech                Generic       1
    Speech                Inactive      0
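The mapping in the table above reduces to a two-level decision; it can be sketched as a small helper (the function name and the string labels are hypothetical):

```python
def frame_loss_bit(classifier, coding_mode=None):
    """Characterization bit for frame loss, per the example table:
    music -> 0; speech -> 1 only for the voiced and generic coding
    modes, 0 for unvoiced, transient and inactive."""
    if classifier == "music":
        return 0
    return 1 if coding_mode in ("voiced", "generic") else 0
```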
  • the information transmitted to the decoder in the coded stream is not binary but corresponds to a quantization of the ratio between the peak levels and the valley levels in the spectrum.
  • x (k) is the amplitude spectrum of size N resulting from the analysis of the current frame in the frequency domain (after FFT).
  • a sinusoidal analysis decomposing the signal into sinusoidal components and noise is available at the encoder, and the flatness measure is obtained as the ratio between the energy of the sinusoidal components and the overall energy of the frame.
  • this information (the single-bit voicing information or the flatness measurement over several bits) is added to the coded stream in step C3.
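One common way to quantify the peak-to-valley ratio described above is the ratio of the geometric to the arithmetic mean of the amplitude spectrum, in dB. This is an assumption for illustration: the text only states that Pl quantifies the ratio between peak and valley levels, without fixing a formula.

```python
import numpy as np

def spectral_flatness_db(x):
    """Flatness of the amplitude spectrum of frame x, in dB:
    0 dB = perfectly flat spectrum; strongly negative values
    indicate pronounced peaks (e.g. voiced speech)."""
    mag = np.abs(np.fft.rfft(x)) + 1e-12   # avoid log(0)
    gm = np.exp(np.mean(np.log(mag)))      # geometric mean
    am = np.mean(mag)                      # arithmetic mean
    return 10.0 * np.log10(gm / am)
```

By the AM-GM inequality the result is always at most 0 dB, and a pure tone scores far lower than noise, matching the sign convention used later (0 for a flat spectrum, around -5 dB for pronounced peaks).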
  • the audio buffer of the encoder is conventionally coded in a step C4 before possible subsequent transmission to the decoder.
  • the decoder reads the information contained in the coded stream, including the "characterization for frame loss" information, in step D2 (at least at one codec bit rate). This information is stored in memory so that it can be reused if a subsequent frame is missing. The decoder then continues the conventional decoding steps D3, etc., to obtain the synthesized output frame SYNTH.
  • steps D4, D5, D6, D7, D8 and D12, corresponding respectively to steps S2, S3, S4, S5, S6 and S11 of figure 1, are applied.
  • steps S3 and S5 correspond respectively to steps D5 (search for a loopback point for the determination of the pitch) and D7 (selection of the sinusoidal components).
  • the noise injection of step S7 of figure 1 is performed with a gain determined in two steps, D9 and D10, in the decoder of figure 4 within the meaning of the invention.
  • the invention consists in modifying the processing of steps D5, D7 and D9-D10, as follows.
  • in step D7 of figure 4, sinusoidal components are selected so as to keep only the most significant ones.
  • the first selection of components amounts to selecting the amplitudes A(n) for which A(n) > A(n-1) and A(n) > A(n+1), with n ∈ [0; P'/2 - 1].
  • the signal that one seeks to reconstruct is a speech signal (voiced or generic), hence with marked peaks and a low noise level.
  • this modification notably makes it possible to lower the noise level (and in particular the level of the noise injected in steps D9 and D10 presented below) with respect to the level of the signal synthesized by sinusoidal synthesis in step D8, while maintaining an overall energy level sufficient not to cause audible artifacts related to energy fluctuations.
  • the voicing information is advantageously used here to attenuate the noise by applying a gain G to the step D10.
  • G may be a constant equal to 1 or to 0.25 depending on the voiced or unvoiced nature of the signal of the preceding frame, according to the table given below by way of example:

    "Characterization for frame loss" bit value   0     1
    Gain G                                        1     0.25
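The table above amounts to a one-line decision; a minimal sketch (the function name is hypothetical):

```python
def noise_gain(voicing_bit):
    """Attenuation applied at step D10 to the injected noise, per the
    example table: voiced/generic speech (bit = 1) -> G = 0.25,
    otherwise G = 1 (noise left at full level)."""
    return 0.25 if voicing_bit == 1 else 1.0
```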
  • the "frame loss characterization" information has several discrete levels characterizing the flatness Pl of the spectrum.
  • the gain G can be expressed directly as a function of the value Pl. The same applies to the length Nc of the pitch search segment and/or to the number of peaks A(n) to be taken into account for the synthesis of the signal.
  • a treatment can be defined as follows.
  • the gain G is then defined directly as a function of the value Pl: G(Pl) = 2^Pl.
  • the value Pl is compared with a mean value of -3 dB, bearing in mind that the value 0 corresponds to a flat spectrum and that -5 dB corresponds to a spectrum with pronounced peaks.
  • below this threshold (pronounced peaks, typical of voicing), the duration Nc of the pitch search segment is set to 33 ms, and the peaks A(n) such that A(n) > A(n-1) and A(n) > A(n+1) are selected, together with the first neighboring peaks A(n-1) and A(n+1).
  • otherwise, the duration Nc can be chosen shorter, for example 25 ms, and only the peaks A(n) such that A(n) > A(n-1) and A(n) > A(n+1) are selected.
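The voicing-dependent peak selection described above can be sketched as follows (the function name is hypothetical; A is the amplitude spectrum of the pitch segment):

```python
def select_peaks(A, voiced):
    """Keep indices n where A(n) > A(n-1) and A(n) > A(n+1);
    when the frame is voiced, also keep the two immediate
    neighbours n-1 and n+1 of each such peak."""
    keep = set()
    for n in range(1, len(A) - 1):
        if A[n] > A[n - 1] and A[n] > A[n + 1]:
            keep.add(n)
            if voiced:
                keep.update((n - 1, n + 1))
    return sorted(keep)
```

Keeping the neighbours of each peak in the voiced case retains more of the harmonic energy in the sinusoidal part, which in turn lowers the share left to injected noise.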
  • the decoding can then be continued by mixing the noise, whose gain is thus obtained, with the components thus selected, to obtain the synthesis signal in the low frequencies in step D13, which is added to the synthesis signal in the high frequencies obtained in step D14, so as to obtain the overall synthesized signal in step D15.
  • a decoder DECOD (comprising, for example, hardware and software elements such as a suitably programmed memory MEM and a processor PROC cooperating with this memory, or alternatively a component such as an ASIC, or other, as well as a communication interface COM), installed for example in a telecommunication device such as a telephone TEL, uses, for the implementation of the method of figure 4, voicing information that it receives from a coder COD.
  • this coder comprises, for example, hardware and software elements such as a memory MEM' suitably programmed to determine the voicing information and a processor PROC' cooperating with this memory, or alternatively a component such as an ASIC, or other, as well as a communication interface COM'.
  • the coder COD is installed in a telecommunication device such as a telephone TEL'.
  • the voicing information can take various forms.
  • it may be a binary value on a single bit (voiced or not), or a multi-bit value relating to a parameter such as the flatness of the signal spectrum, or to any other parameter characterizing (quantitatively or qualitatively) the voicing.
  • this parameter can be determined at decoding, for example according to the degree of correlation that can be measured during the identification of the pitch period.
  • an embodiment comprising a separation of the signal from previous valid frames into a high frequency band and a low frequency band, with in particular a selection of the spectral components in the low frequency band, has been presented by way of example. This embodiment, although advantageous in that it reduces the complexity of the processing, is nevertheless optional.
  • the frame replacement method assisted by the voicing information within the meaning of the invention can alternatively be carried out by considering the entire spectrum of the valid signal.
  • the aforementioned noise signal can be obtained from the residual (between the valid signal and the sum of the peaks) by weighting this residual temporally. For example, it can be weighted by overlapping windows, as in the usual framework of transform coding/decoding with overlap.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)

Claims (14)

  1. Method for processing a digital audio signal comprising a succession of samples distributed in successive frames, the method being executed during a decoding of the signal in order to replace at least one signal frame lost during the decoding,
    the method comprising the following steps:
    a) searching, in a valid signal segment (Nc) available for the decoding, for at least one period in the signal, determined as a function of the valid signal,
    b) analyzing the signal within the period in order to determine spectral components of the signal in the period,
    c) synthesizing at least one frame replacing the lost frame by constructing a synthesis signal from:
    - an addition of components selected from among the determined spectral components, and
    - noise added to the addition of components,
    the amount of noise added to the addition of components being weighted as a function of voicing information of the valid signal, the voicing information being determined by a coder and then delivered in a coded stream corresponding to the signal delivered by the coder and received at the decoding, so that, in the event of frame loss at the decoding, the voicing information contained in a valid signal frame preceding the lost frame is used.
  2. Method according to claim 1, characterized in that a noise signal added to the addition of components is weighted by a smaller gain in the case of voicing of the valid signal.
  3. Method according to claim 2, characterized in that the noise signal is obtained from a residual between the valid signal and the addition of the selected components.
  4. Method according to one of the preceding claims, characterized in that the number of components selected for the addition is greater in the case of voicing of the valid signal.
  5. Method according to one of the preceding claims, characterized in that, in step a), the period is searched for in a valid signal segment (Nc) of greater duration in the case of voicing of the valid signal.
  6. Method according to one of the preceding claims, characterized in that the voicing information is coded on a single bit in the coded stream.
  7. Method according to claim 6 taken in combination with claim 2, characterized in that, if the signal is voiced, the value of the gain is 0.25, and otherwise it is 1.
  8. Method according to one of the preceding claims, characterized in that the voicing information comes from a coder which determines a spectral flatness value (Pl) obtained by comparing the amplitudes of the spectral components of the signal with a background noise, the coder delivering the value in binary form in the coded stream.
  9. Method according to one of the preceding claims taken in combination with claim 2, characterized in that the value of the gain depends on the flatness value.
  10. Method according to one of claims 8 and 9, characterized in that the flatness value is compared with a threshold in order to determine:
    - that the signal is voiced if the flatness value is below the threshold, and
    - that the signal is unvoiced otherwise.
  11. Method according to one of claims 6 and 10 taken in combination with claim 4, characterized in that:
    - if the signal is voiced, the spectral components whose amplitudes are greater than those of the first neighboring spectral components are selected, as well as the first neighboring spectral components, and
    - otherwise, only the spectral components whose amplitudes are greater than those of the first neighboring spectral components are selected.
  12. Method according to one of claims 6 and 10 taken in combination with claim 5, characterized in that:
    - if the signal is voiced, the period is searched for in a valid signal segment of duration greater than 30 milliseconds,
    - and otherwise the period is searched for in a valid signal segment of duration less than 30 milliseconds.
  13. Computer program, characterized in that it comprises instructions for implementing the method according to one of claims 1 to 12 when this program is executed by a processor.
  14. Device for decoding a digital audio signal comprising a succession of samples distributed in successive frames, the device comprising means (MEM, PROC) for replacing at least one lost signal frame by:
    a) searching, in a valid signal segment (Nc) available for the decoding, for at least one period in the signal, determined as a function of the valid signal,
    b) analyzing the signal within the period in order to determine spectral components of the signal in the period,
    c) synthesizing at least one frame replacing the lost frame by constructing a synthesis signal from:
    - an addition of components selected from among the determined spectral components, and
    - noise added to the addition of components,
    the amount of noise added to the addition of components being weighted as a function of voicing information of the valid signal, the voicing information being determined by a coder and then delivered in a coded stream corresponding to the signal delivered by the coder and received at the decoding, so that, in the event of frame loss at the decoding, the voicing information contained in a valid signal frame preceding the lost frame is used.
EP15725801.3A 2014-04-30 2015-04-24 Improved frame loss correction with voicing information Active EP3138095B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR1453912A FR3020732A1 (fr) 2014-04-30 2014-04-30 Correction de perte de trame perfectionnee avec information de voisement
PCT/FR2015/051127 WO2015166175A1 (fr) 2014-04-30 2015-04-24 Correction de perte de trame perfectionnée avec information de voisement

Publications (2)

Publication Number Publication Date
EP3138095A1 (de) 2017-03-08
EP3138095B1 (de) 2019-06-05

Family

ID=50976942

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15725801.3A Active EP3138095B1 (de) Improved frame loss correction with voicing information

Country Status (12)

Country Link
US (1) US10431226B2 (de)
EP (1) EP3138095B1 (de)
JP (1) JP6584431B2 (de)
KR (3) KR20230129581A (de)
CN (1) CN106463140B (de)
BR (1) BR112016024358B1 (de)
ES (1) ES2743197T3 (de)
FR (1) FR3020732A1 (de)
MX (1) MX368973B (de)
RU (1) RU2682851C2 (de)
WO (1) WO2015166175A1 (de)
ZA (1) ZA201606984B (de)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR3020732A1 (fr) * 2014-04-30 2015-11-06 Orange Correction de perte de trame perfectionnee avec information de voisement
CN108369804A (zh) * 2015-12-07 2018-08-03 雅马哈株式会社 语音交互设备和语音交互方法
CA3145047A1 (en) * 2019-07-08 2021-01-14 Voiceage Corporation Method and system for coding metadata in audio streams and for efficient bitrate allocation to audio streams coding
CN111883171B (zh) * 2020-04-08 2023-09-22 珠海市杰理科技股份有限公司 音频信号的处理方法及系统、音频处理芯片、蓝牙设备

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR1350845A (fr) 1962-12-20 1964-01-31 Procédé de classement visible sans index
FR1353551A (fr) 1963-01-14 1964-02-28 Fenêtre destinée en particulier à être montée sur des roulottes, des caravanes ou installations analogues
US5504833A (en) * 1991-08-22 1996-04-02 George; E. Bryan Speech approximation using successive sinusoidal overlap-add models and pitch-scale modifications
US5956674A (en) * 1995-12-01 1999-09-21 Digital Theater Systems, Inc. Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels
US5799271A (en) * 1996-06-24 1998-08-25 Electronics And Telecommunications Research Institute Method for reducing pitch search time for vocoder
JP3364827B2 (ja) * 1996-10-18 2003-01-08 三菱電機株式会社 音声符号化方法、音声復号化方法及び音声符号化復号化方法並びにそれ等の装置
WO1999010719A1 (en) * 1997-08-29 1999-03-04 The Regents Of The University Of California Method and apparatus for hybrid coding of speech at 4kbps
ATE302991T1 (de) * 1998-01-22 2005-09-15 Deutsche Telekom Ag Verfahren zur signalgesteuerten schaltung zwischen verschiedenen audiokodierungssystemen
US6640209B1 (en) * 1999-02-26 2003-10-28 Qualcomm Incorporated Closed-loop multimode mixed-domain linear prediction (MDLP) speech coder
US6138089A (en) * 1999-03-10 2000-10-24 Infolio, Inc. Apparatus system and method for speech compression and decompression
US6691092B1 (en) * 1999-04-05 2004-02-10 Hughes Electronics Corporation Voicing measure as an estimate of signal periodicity for a frequency domain interpolative speech codec system
US6912496B1 (en) * 1999-10-26 2005-06-28 Silicon Automation Systems Preprocessing modules for quality enhancement of MBE coders and decoders for signals having transmission path characteristics
US7016833B2 (en) * 2000-11-21 2006-03-21 The Regents Of The University Of California Speaker verification system using acoustic data and non-acoustic data
US20030028386A1 (en) * 2001-04-02 2003-02-06 Zinser Richard L. Compressed domain universal transcoder
JP4089347B2 (ja) * 2002-08-21 2008-05-28 Oki Electric Industry Co., Ltd. Speech decoding apparatus
US7970606B2 (en) * 2002-11-13 2011-06-28 Digital Voice Systems, Inc. Interoperable vocoder
DE10254612A1 (de) * 2002-11-22 2004-06-17 Humboldt-Universität Zu Berlin Method for determining specifically relevant acoustic features of sound signals for the analysis of unknown sound signals from a sound source
CN1717576A (zh) * 2002-11-27 2006-01-04 Koninklijke Philips Electronics N.V. Method for separating a sound frame into sinusoidal components and residual noise
JP3963850B2 (ja) * 2003-03-11 2007-08-22 Fujitsu Ltd. Speech segment detection apparatus
US7318035B2 (en) * 2003-05-08 2008-01-08 Dolby Laboratories Licensing Corporation Audio coding systems and methods using spectral component coupling and spectral component regeneration
US7825321B2 (en) * 2005-01-27 2010-11-02 Synchro Arts Limited Methods and apparatus for use in sound modification comparing time alignment data from sampled audio signals
US7930176B2 (en) * 2005-05-20 2011-04-19 Broadcom Corporation Packet loss concealment for block-independent speech codecs
KR100744352B1 (ko) * 2005-08-01 2007-07-30 Samsung Electronics Co., Ltd. Method and apparatus for extracting voiced/unvoiced classification information using harmonic components of a speech signal
US7720677B2 (en) * 2005-11-03 2010-05-18 Coding Technologies Ab Time warped modified transform coding of audio signals
US8255207B2 (en) * 2005-12-28 2012-08-28 Voiceage Corporation Method and device for efficient frame erasure concealment in speech codecs
US8135047B2 (en) * 2006-07-31 2012-03-13 Qualcomm Incorporated Systems and methods for including an identifier with a packet associated with a speech signal
AU2007322488B2 (en) * 2006-11-24 2010-04-29 Lg Electronics Inc. Method for encoding and decoding object-based audio signal and apparatus thereof
KR100964402B1 (ko) * 2006-12-14 2010-06-17 Samsung Electronics Co., Ltd. Method and apparatus for determining an encoding mode of an audio signal, and method and apparatus for encoding/decoding an audio signal using the same
US8060363B2 (en) * 2007-02-13 2011-11-15 Nokia Corporation Audio signal encoding
US8990073B2 (en) * 2007-06-22 2015-03-24 Voiceage Corporation Method and device for sound activity detection and sound signal classification
CN100524462C (zh) * 2007-09-15 2009-08-05 Huawei Technologies Co., Ltd. Method and apparatus for frame error concealment of a high-band signal
US20090180531A1 (en) * 2008-01-07 2009-07-16 Radlive Ltd. Codec with PLC capabilities
US8036891B2 (en) * 2008-06-26 2011-10-11 California State University, Fresno Methods of identification using voice sound analysis
CN102089814B (zh) * 2008-07-11 2012-11-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for decoding an encoded audio signal
US8718804B2 (en) * 2009-05-05 2014-05-06 Huawei Technologies Co., Ltd. System and method for correcting for lost data in a digital audio signal
FR2966634A1 (fr) * 2010-10-22 2012-04-27 France Telecom Improved parametric stereo coding/decoding for channels in phase opposition
WO2014036263A1 (en) * 2012-08-29 2014-03-06 Brown University An accurate analysis tool and method for the quantitative acoustic assessment of infant cry
US8744854B1 (en) * 2012-09-24 2014-06-03 Chengjun Julian Chen System and method for voice transformation
FR3001593A1 (fr) 2013-01-31 2014-08-01 France Telecom Improved frame loss correction in the decoding of a signal.
US9564141B2 (en) * 2014-02-13 2017-02-07 Qualcomm Incorporated Harmonic bandwidth extension of audio signals
US9697843B2 (en) * 2014-04-30 2017-07-04 Qualcomm Incorporated High band excitation signal generation
FR3020732A1 (fr) * 2014-04-30 2015-11-06 Orange Improved frame loss correction with voicing information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US10431226B2 (en) 2019-10-01
KR20230129581A (ko) 2023-09-08
WO2015166175A1 (fr) 2015-11-05
ZA201606984B (en) 2018-08-30
BR112016024358A2 (pt) 2017-08-15
MX2016014237A (es) 2017-06-06
KR20170003596A (ko) 2017-01-09
RU2016146916A (ru) 2018-05-31
US20170040021A1 (en) 2017-02-09
EP3138095A1 (de) 2017-03-08
CN106463140A (zh) 2017-02-22
BR112016024358B1 (pt) 2022-09-27
FR3020732A1 (fr) 2015-11-06
JP6584431B2 (ja) 2019-10-02
MX368973B (es) 2019-10-23
KR20220045260A (ko) 2022-04-12
ES2743197T3 (es) 2020-02-18
CN106463140B (zh) 2019-07-26
RU2682851C2 (ru) 2019-03-21
JP2017515155A (ja) 2017-06-08
RU2016146916A3 (de) 2018-10-26

Similar Documents

Publication Publication Date Title
EP2951813B1 (de) Improved correction of frame losses in the decoding of a signal
EP2080195B1 (de) Synthesis of lost blocks of a digital audio signal
EP1316087B1 (de) Concealment of transmission errors in an audio signal
EP1593116B1 (de) Method for differentiated digital processing of speech and music, noise filtering, creation of special effects, and device for carrying out the method
EP2987165B1 (de) Frame loss correction by injection of weighted noise
EP2727107B1 (de) Delay-optimized coding/decoding weighting windows for overlap transforms
EP3138095B1 (de) Improved frame loss correction with voicing information
EP2080194B1 (de) Attenuation of overvoicing, in particular for generating an excitation at a decoder in the absence of information
EP2795618B1 (de) Method for detecting a predetermined frequency band in an audio data signal, detection device and corresponding computer program
FR3024582A1 (fr) Management of frame loss in an FD/LPD transition context
EP2347411B1 (de) Pre-echo attenuation in a digital audio signal
WO2000021077A1 (fr) Method for quantizing the parameters of a speech coder
FR3024581A1 (fr) Determination of a coding budget for an LPD/FD transition frame
FR2980620A1 (fr) Processing for improving the quality of decoded audio-frequency signals
WO2008081141A2 (fr) Coding of acoustic units by interpolation

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20161007

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIN1 Information on inventor provided before grant (corrected)

Inventor name: RAGOT, STEPHANE

Inventor name: FAURE, JULIEN

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20171103

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 19/005 20130101AFI20181203BHEP

Ipc: G10L 19/20 20130101ALN20181203BHEP

Ipc: G10L 25/93 20130101ALN20181203BHEP

INTG Intention to grant announced

Effective date: 20190107

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

Free format text: NOT ENGLISH

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1140799

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190615

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015031383

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

Free format text: LANGUAGE OF EP DOCUMENT: FRENCH

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190905

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190906

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190905

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1140799

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190605

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191007

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2743197

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20200218

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191005

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015031383

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

26N No opposition filed

Effective date: 20200306

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200430

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200424

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200430

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20200430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200430

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200424

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190605

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230321

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: NL

Payment date: 20240320

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240320

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20240320

Year of fee payment: 10

Ref country code: FR

Payment date: 20240320

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20240502

Year of fee payment: 10