EP2922053A1 - Audio encoding device, audio encoding method, audio encoding program, audio decoding device, audio decoding method, and audio decoding program - Google Patents

Audio encoding device, audio encoding method, audio encoding program, audio decoding device, audio decoding method, and audio decoding program

Info

Publication number
EP2922053A1
Authority
EP
European Patent Office
Prior art keywords
audio
side information
signal
unit
decoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP13854879.7A
Other languages
German (de)
English (en)
Other versions
EP2922053B1 (fr)
EP2922053A4 (fr)
Inventor
Kimitaka Tsutsumi
Kei Kikuiri
Atsushi Yamaguchi
Current Assignee
NTT Docomo Inc
Original Assignee
NTT Docomo Inc
Priority date
Filing date
Publication date
Application filed by NTT Docomo Inc filed Critical NTT Docomo Inc
Priority to EP19185490.0A (published as EP3579228A1)
Priority to PL13854879T (published as PL2922053T3)
Publication of EP2922053A1
Publication of EP2922053A4
Application granted
Publication of EP2922053B1
Legal status: Active

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/005 Correction of errors induced by the transmission channel, if related to the coding algorithm
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, using predictive techniques
    • G10L19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/09 Long term prediction, i.e. removing periodical redundancies, e.g. by using adaptive codebook or pitch predictor
    • G10L19/12 Determination or coding of the excitation function, the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • G10L19/125 Pitch excitation, e.g. pitch synchronous innovation CELP [PSI-CELP]
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03M CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M7/00 Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
    • H03M7/30 Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction

Definitions

  • the present invention relates to error concealment for transmission of audio packets through an IP network or a mobile communication network and, more specifically, relates to an audio encoding device, an audio encoding method, an audio encoding program, an audio decoding device, an audio decoding method, and an audio decoding program for highly accurate packet loss concealment signal generation to implement error concealment.
  • In the transmission of audio and acoustic signals (collectively referred to hereinafter as "audio signal") through an IP network or a mobile communication network, the audio signal is encoded into audio packets at regular time intervals and transmitted through the communication network.
  • the audio packets are received through the communication network and decoded into a decoded audio signal by a server, an MCU (Multipoint Control Unit), a terminal, or the like.
  • the audio signal is generally collected in digital form; specifically, it is measured and accumulated as a sequence of numbers containing as many values per second as the sampling frequency. Each element of the sequence is called a "sample".
  • a specified number of samples is called the "frame length", and a set of that many samples is called a "frame". For example, at a sampling frequency of 32 kHz with a frame length of 20 ms, a frame contains 640 samples. Note that the length of the buffer may be more than one frame.
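As a quick check of the arithmetic, the frame length in samples is just the sampling frequency times the frame duration. A minimal helper (illustrative only; the function name is not from the patent):

```python
def frame_length_samples(sampling_rate_hz: int, frame_ms: float) -> int:
    """Number of samples in one frame: rate (samples/s) x duration (s)."""
    return int(sampling_rate_hz * frame_ms / 1000)

# 32 kHz sampling with 20 ms frames gives 640 samples, as in the text.
assert frame_length_samples(32000, 20) == 640
```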
  • when transmitting audio packets through a communication network, a phenomenon (so-called "packet loss") can occur where some of the audio packets are lost, or an error can occur in part of the information written in the audio packets, due to congestion in the communication network or the like. In such a case, the audio packets cannot be correctly decoded at the receiving end, and the desired decoded audio signal cannot be obtained. Further, the decoded audio signal corresponding to the audio packet where packet loss has occurred is perceived as noise, which significantly degrades the subjective quality for the listener.
  • packet loss concealment technology is used to interpolate the part of the audio/acoustic signal that is lost through packet loss.
  • there are two types of packet loss concealment technology: "packet loss concealment technology without using side information", where concealment is performed only at the receiving end, and "packet loss concealment technology using side information", where parameters that help concealment are obtained at the transmitting end and transmitted, and concealment at the receiving end uses the received parameters.
  • the "packet loss concealment technology without using side information” generates an audio signal corresponding to a part where packet loss has occurred by copying a decoded audio signal contained in a packet that has been correctly received in the past on a pitch-by-pitch basis and then multiplying it by a predetermined attenuation coefficient as described in Non Patent Literature 1, for example. Because the "packet loss concealment technology without using side information" is based on the assumption that the properties of the part of the audio where packet loss has occurred are similar to those of the audio immediately before the occurrence of loss, the concealment effect cannot be sufficiently obtained when the part of the audio where packet loss has occurred has different properties from the audio immediately before the occurrence of loss or when there is a sudden change in power.
  • the "packet loss concealment technology using side information” includes a technique that encodes parameters required for packet loss concealment at the transmitting end and transmits them for use in packet loss concealment at the receiving end as described in Patent Literature 1.
  • the audio is encoded by two encoding methods: main encoding and redundant encoding.
  • the redundant encoding encodes the frame immediately before the frame to be encoded by the main encoding at a lower bit rate than the main encoding (see Fig. 1 (a) ).
  • the Nth packet contains an audio code obtained by encoding the Nth frame by the main encoding and a side information code obtained by encoding the (N-1)th frame by the redundant encoding.
  • the receiving end waits for the arrival of two or more temporally successive packets and then decodes the temporally earlier packet and obtains a decoded audio signal. For example, to obtain a signal corresponding to the Nth frame, the receiving end waits for the arrival of the (N+1)th packet and then performs decoding. In the case where the Nth packet and the (N+1)th packet are correctly received, the audio signal of the Nth frame is obtained by decoding the audio code contained in the Nth packet (see Fig. 1(b) ).
  • the audio signal of the Nth frame can be obtained by decoding the side information code contained in the (N+1)th packet (see Fig. 1(c) ).
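The packet layout and the one-packet-delay decoding rule of Patent Literature 1 can be sketched as follows (field names are hypothetical):

```python
def decode_frame(n: int, packets: dict):
    """Recover frame n under the main/redundant scheme: packet k carries
    the main code for frame k and a redundant (lower bit rate) code for
    frame k-1, so frame n can be recovered from packet n or packet n+1."""
    if n in packets:                      # packet n received correctly
        return ("main", packets[n]["main"])
    if n + 1 in packets:                  # recover frame n from packet n+1
        return ("redundant", packets[n + 1]["redundant"])
    return ("conceal", None)              # both packets lost
```

The cost of this scheme is added algorithmic delay: the receiver must wait for packet n+1 before frame n can be finalized.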
  • in CELP (Code Excited Linear Prediction) encoding, an audio signal is synthesized by filtering an excitation signal e(n) with an all-pole synthesis filter.
  • the excitation signal is accumulated in a buffer called an adaptive codebook.
  • an excitation signal is newly generated by adding an adaptive codebook vector, read from the adaptive codebook based on position information called a pitch lag, and a fixed codebook vector representing a change in the excitation signal over time.
  • the newly generated excitation signal is accumulated in the adaptive codebook and is also filtered by the all-pole synthesis filter, and thereby a decoded signal is synthesized.
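The decoding steps above (adaptive codebook read, excitation synthesis, codebook update, all-pole filtering) can be sketched for one sub-frame. This is a simplified illustration that assumes an integer pitch lag no shorter than the sub-frame and omits gain quantization and interpolation:

```python
import numpy as np

def celp_decode_subframe(adaptive_cb, pitch_lag, fixed_vec, g_p, g_c, lp_coeffs):
    """Build the excitation e(n) from the adaptive codebook vector (past
    excitation read at the pitch lag) plus the fixed codebook vector,
    append e(n) to the adaptive codebook, then synthesize through the
    all-pole filter 1/A(z) with A(z) = 1 + sum_i a_i z^-i."""
    L = len(fixed_vec)
    # adaptive codebook vector: past excitation delayed by pitch_lag (>= L here)
    v = np.array([adaptive_cb[len(adaptive_cb) - pitch_lag + n] for n in range(L)])
    e = g_p * v + g_c * np.asarray(fixed_vec, dtype=float)
    adaptive_cb = np.concatenate([adaptive_cb, e])   # codebook update
    s = np.zeros(L)
    for n in range(L):
        acc = e[n]
        for i, a in enumerate(lp_coeffs, start=1):   # s(n) = e(n) - sum_i a_i s(n-i)
            if n - i >= 0:
                acc -= a * s[n - i]
        s[n] = acc
    return s, adaptive_cb
```

Because the updated codebook feeds the next sub-frame, any mismatch between the encoder's and decoder's codebooks propagates, which is the desynchronization problem discussed below.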
  • LP coefficients are calculated for every frame.
  • a look-ahead signal of about 10 ms is required.
  • a look-ahead signal is accumulated in the buffer, and then the LP coefficient calculation and the subsequent processing are performed (see Fig. 2 ).
  • Each frame is divided into about four sub-frames, and processing such as the above-described pitch lag calculation, adaptive codebook vector calculation, fixed codebook vector calculation and adaptive codebook update are performed in each sub-frame.
  • the LP coefficient is also interpolated so that the coefficient varies from sub-frame to sub-frame.
  • the LP coefficients are encoded after being converted into an ISP (Immittance Spectral Pair) parameter and an ISF (Immittance Spectral Frequency) parameter, which are equivalent representations of the LP coefficients.
  • in CELP encoding, encoding and decoding are performed on the assumption that both the encoding end and the decoding end have adaptive codebooks and that those codebooks are always synchronized with each other. The adaptive codebooks remain synchronized while packets are correctly received and decoded, but once packet loss has occurred, synchronization of the adaptive codebooks cannot be maintained.
  • as a result, a time lag occurs between the adaptive codebook vectors. Because the adaptive codebook is updated with those vectors, even if the next frame is correctly received, the adaptive codebook vector calculated at the encoding end and the one calculated at the decoding end do not coincide, and synchronization of the adaptive codebooks is not recovered. Due to this inconsistency, audio quality degrades for several frames after the frame where packet loss happened.
  • a more advanced technique for packet loss concealment in CELP encoding is described in Patent Literature 2.
  • an index of a transition mode codebook is transmitted instead of a pitch lag or an adaptive codebook gain in a specific frame that is largely affected by packet loss.
  • the technique of Patent Literature 2 focuses attention on transition frames (a transition from a silent segment to a voiced segment, or a transition between two vowels) as the frames that are strongly affected by packet loss.
  • by generating an excitation signal using the transition mode codebook in such a transition frame, it is possible to generate an excitation signal that does not depend on the past adaptive codebook, and thereby to recover from the adaptive codebook inconsistency caused by past packet loss.
  • because the technique of Patent Literature 2 does not use the transition frame codebook in, for example, a frame where a long vowel continues, it cannot recover from the adaptive codebook inconsistency in such a frame. Further, when the packet containing the transition frame codebook is lost, the loss affects the frames after it; the same is true when the packet following the one containing the transition frame codebook is lost.
  • the present invention has been accomplished to solve the above problems and an object of the present invention is thus to provide an audio encoding device, an audio encoding method, an audio encoding program, an audio decoding device, an audio decoding method, and an audio decoding program that recover audio quality without increasing algorithmic delay in the event of packet loss in audio encoding.
  • an audio encoding device for encoding an audio signal, which includes an audio encoding unit configured to encode an audio signal, and a side information encoding unit configured to calculate side information from a look-ahead signal and encode the side information.
  • the side information may be related to a pitch lag in a look-ahead signal, related to a pitch gain in a look-ahead signal, or related to a pitch lag and a pitch gain in a look-ahead signal. Further, the side information may contain information related to availability of the side information.
  • the side information encoding unit may calculate side information for a look-ahead signal part and encode the side information, and may also generate a concealment signal.
  • the audio encoding device may further include an error signal encoding unit configured to encode an error signal between an input audio signal and a concealment signal output from the side information encoding unit, and a main encoding unit configured to encode an input audio signal.
  • an audio decoding device for decoding an audio code and outputting an audio signal, which includes an audio code buffer configured to detect packet loss based on a received state of an audio packet, an audio parameter decoding unit configured to decode an audio code when an audio packet is correctly received, a side information decoding unit configured to decode a side information code when an audio packet is correctly received, a side information accumulation unit configured to accumulate side information obtained by decoding a side information code, an audio parameter missing processing unit configured to output an audio parameter when audio packet loss is detected, and an audio synthesis unit configured to synthesize a decoded audio from an audio parameter.
  • the side information may be related to a pitch lag in a look-ahead signal, related to a pitch gain in a look-ahead signal, or related to a pitch lag and a pitch gain in a look-ahead signal. Further, the side information may contain information related to the availability of side information.
  • the side information decoding unit may decode a side information code and output side information, and may further output a concealment signal related to a look-ahead part by using the side information.
  • the audio decoding device may further include an error decoding unit configured to decode a code related to an error signal between an audio signal and a concealment signal, a main decoding unit configured to decode a code related to an audio signal, and a concealment signal accumulation unit configured to accumulate a concealment signal output from the side information decoding unit.
  • a part of a decoded signal may be generated by adding a concealment signal read from the concealment signal accumulation unit and a decoded error signal output from the error decoding unit, and the concealment signal accumulation unit may be updated with a concealment signal output from the side information decoding unit.
  • a concealment signal read from the concealment signal accumulation unit may be used as a part, or a whole, of a decoded signal.
  • a decoded signal may be generated by using an audio parameter predicted by the audio parameter missing processing unit, and the concealment signal accumulation unit may be updated by using a part of the decoded signal.
  • the audio parameter missing processing unit may use side information read from the side information accumulation unit as a part of a predicted value of an audio parameter.
  • the audio synthesis unit may correct an adaptive codebook vector, which is one of the audio parameters, by using side information read from the side information accumulation unit.
  • An audio encoding method is an audio encoding method by an audio encoding device for encoding an audio signal, which includes an audio encoding step of encoding an audio signal, and a side information encoding step of calculating side information from a look-ahead signal and encoding the side information.
  • An audio decoding method is an audio decoding method by an audio decoding device for decoding an audio code and outputting an audio signal, which includes an audio code buffer step of detecting packet loss based on a received state of an audio packet, an audio parameter decoding step of decoding an audio code when an audio packet is correctly received, a side information decoding step of decoding a side information code when an audio packet is correctly received, a side information accumulation step of accumulating side information obtained by decoding a side information code, an audio parameter missing processing step of outputting an audio parameter when audio packet loss is detected, and an audio synthesis step of synthesizing a decoded audio from an audio parameter.
  • An audio encoding program causes a computer to function as an audio encoding unit to encode an audio signal, and a side information encoding unit to calculate side information from a look-ahead signal and encode the side information.
  • An audio decoding program causes a computer to function as an audio code buffer to detect packet loss based on a received state of an audio packet, an audio parameter decoding unit to decode an audio code when an audio packet is correctly received, a side information decoding unit to decode a side information code when an audio packet is correctly received, a side information accumulation unit to accumulate side information obtained by decoding a side information code, an audio parameter missing processing unit to output an audio parameter when audio packet loss is detected, and an audio synthesis unit to synthesize a decoded audio from an audio parameter.
  • An embodiment of the present invention relates to an encoder and a decoder that implement "packet loss concealment technology using side information" that encodes and transmits side information calculated on the encoder side for use in packet loss concealment on the decoder side.
  • the side information that is used for packet loss concealment is contained in a previous packet.
  • Fig. 3 shows a temporal relationship between an audio code and a side information code contained in a packet.
  • the side information in the embodiments of the present invention is parameters (pitch lag, adaptive codebook gain, etc.) that are calculated for a look-ahead signal in CELP encoding.
  • because the side information is contained in a previous packet, it is possible to perform decoding without waiting for a packet that arrives after the packet to be decoded. Further, when packet loss is detected, the side information for the frame to be concealed is already available from the previous packet, so highly accurate packet loss concealment can be performed without waiting for the next packet.
  • the embodiments of the present invention can be composed of an audio signal transmitting device (audio encoding device) and an audio signal receiving device (audio decoding device).
  • a functional configuration example of an audio signal transmitting device is shown in Fig. 4 , and an example procedure of the same is shown in Fig. 6 .
  • a functional configuration example of an audio signal receiving device is shown in Fig. 5 , and an example procedure of the same is shown in Fig. 7 .
  • the audio signal transmitting device includes an audio encoding unit 111 and a side information encoding unit 112.
  • the audio signal receiving device includes an audio code buffer 121, an audio parameter decoding unit 122, an audio parameter missing processing unit 123, an audio synthesis unit 124, a side information decoding unit 125, and a side information accumulation unit 126.
  • the audio signal transmitting device encodes an audio signal for each frame and can transmit the audio signal by the example procedure shown in Fig. 6 .
  • the audio encoding unit 111 can calculate audio parameters for a frame to be encoded and output an audio code (Step S131 in Fig. 6 ).
  • the side information encoding unit 112 can calculate audio parameters for a look-ahead signal and output a side information code (Step S132 in Fig. 6 ).
  • it is determined whether the audio signal has ended, and the above steps can be repeated until the audio signal ends (Step S133 in Fig. 6 ).
  • the audio signal receiving device decodes a received audio packet and outputs an audio signal by the example procedure shown in Fig. 7 .
  • the audio code buffer 121 waits for the arrival of an audio packet and accumulates an audio code.
  • when an audio packet is correctly received, the processing is switched to the audio parameter decoding unit 122; when packet loss is detected, the processing is switched to the audio parameter missing processing unit 123 (Step S141 in Fig. 7 ).
  • the audio parameter decoding unit 122 decodes the audio code and outputs audio parameters (Step S142 in Fig. 7 ).
  • the side information decoding unit 125 decodes the side information code and outputs side information.
  • the outputted side information is sent to the side information accumulation unit 126 (Step S143 in Fig. 7 ).
  • the audio synthesis unit 124 synthesizes an audio signal from the audio parameters output from the audio parameter decoding unit 122 and outputs the synthesized audio signal (Step S144 in Fig. 7 ).
  • the audio parameter missing processing unit 123 accumulates the audio parameters output from the audio parameter decoding unit 122 in preparation for packet loss (Step S145 in Fig. 7 ).
  • the audio code buffer 121 determines whether the transmission of audio packets has ended, and when it has ended, stops the processing. While the transmission of audio packets continues, the above Steps S141 to S146 are repeated (Step S147 in Fig. 7 ).
  • the audio parameter missing processing unit 123 reads the side information from the side information accumulation unit 126, carries out prediction for the parameter(s) not contained in the side information, and thereby outputs the audio parameters (Step S146 in Fig. 7 ).
  • the audio synthesis unit 124 synthesizes an audio signal from the audio parameters output from the audio parameter missing processing unit 123 and outputs the synthesized audio signal (Step S144 in Fig. 7 ).
  • the audio parameter missing processing unit 123 accumulates the audio parameters output from the audio parameter missing processing unit 123 in preparation for packet loss (Step S145 in Fig. 7 ).
  • the audio code buffer 121 determines whether the transmission of audio packets has ended, and when it has ended, stops the processing. While the transmission of audio packets continues, the above Steps S141 to S146 are repeated (Step S147 in Fig. 7 ).
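The receiver procedure above reduces to a per-frame branch between normal decoding and side-information-aided concealment. A schematic sketch (packet field names are hypothetical; the side information computed from the look-ahead carried in packet n is stored for concealing frame n+1):

```python
def decode_stream(packets: dict, num_frames: int):
    """Schematic receiver loop: decode when the packet arrived, otherwise
    conceal using side information accumulated from a previous packet."""
    side_info = {}                                   # side information accumulation unit
    out = []
    for n in range(num_frames):
        pkt = packets.get(n)
        if pkt is not None:                          # correctly received
            out.append(("decode", pkt["audio_code"]))
            side_info[n + 1] = pkt["side_info_code"] # keep in case the next packet is lost
        else:                                        # packet loss detected
            out.append(("conceal", side_info.get(n)))
    return out
```

Unlike the redundant-encoding scheme of Fig. 1, no extra packet needs to be awaited: the side information for the lost frame is already at the receiver.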
  • the pitch lag can be used for generation of a packet loss concealment signal at the decoding end.
  • the functional configuration example of the audio signal transmitting device is shown in Fig. 4
  • the functional configuration example of the audio signal receiving device is shown in Fig. 5
  • An example of the procedure of the audio signal transmitting device is shown in Fig. 6
  • an example of the procedure of the audio signal receiving device is shown in Fig. 7 .
  • an input audio signal is sent to the audio encoding unit 111.
  • the audio encoding unit 111 encodes a frame to be encoded by CELP encoding (Step 131 in Fig. 6 ).
  • for the CELP encoding, the method described in Non Patent Literature 3 may be used, for example.
  • the details of the procedure of CELP encoding are omitted.
  • local decoding is performed at the encoding end.
  • local decoding means decoding the audio code also at the encoding end to obtain the parameters (ISP parameter and the corresponding ISF parameter, pitch lag, long-term prediction parameter, adaptive codebook, adaptive codebook gain, fixed codebook gain, fixed codebook vector, etc.) required for audio synthesis.
  • the parameters obtained by the local decoding include at least one or both of the ISP parameter and the ISF parameter, the pitch lag, and the adaptive codebook, which are sent to the side information encoding unit 112.
  • an index representing the characteristics of a frame to be encoded may also be sent to the side information encoding unit 112.
  • encoding different from CELP encoding may be used in the audio encoding unit 111.
  • At least one or both of the ISP parameter and the ISF parameter, the pitch lag, and the adaptive codebook can be separately calculated from an input signal, or a decoded signal obtained by the local decoding, and sent to the side information encoding unit 112.
  • the side information encoding unit 112 calculates a side information code using the parameters calculated by the audio encoding unit 111 and the look-ahead signal (Step 132 in Fig. 6 ).
  • the side information encoding unit 112 includes an LP coefficient calculation unit 151, a target signal calculation unit 152, a pitch lag calculation unit 153, an adaptive codebook calculation unit 154, an excitation vector synthesis unit 155, an adaptive codebook buffer 156, a synthesis filter 157, and a pitch lag encoding unit 158.
  • An example procedure in the side information encoding unit is shown in Fig. 9 .
  • the LP coefficient calculation unit 151 calculates an LP coefficient using the ISF parameter calculated by the audio encoding unit 111 and the ISF parameter calculated in the past several frames (Step 161 in Fig. 9 ). The procedure of the LP coefficient calculation unit 151 is shown in Fig. 10 .
  • the buffer is updated using the ISF parameter obtained from the audio encoding unit 111 (Step 171 in Fig.10 ).
  • the ISF parameter ω_i in the look-ahead signal is calculated.
  • the ISF parameter ω_i is calculated by the following equation (Step 172 in Fig.10 ).
  • ω_i^(-j) is the ISF parameter, stored in the buffer, for the frame preceding by j frames.
  • ω_i^C is the ISF parameter during the speech period that is calculated in advance by learning or the like.
  • α is a constant, and it may be a value such as 0.75, for example, though not limited thereto.
  • β is also a constant, and it may be a value such as 0.9, for example, though not limited thereto.
  • ω_i^C, α and β may be varied by the index representing the characteristics of the frame to be encoded, as in the ISF concealment described in Non Patent Literature 4, for example.
  • the method of Non Patent Literature 4 (Equation 151) may be used, for example (Step 173 in Fig. 10 ).
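The extrapolation above can be sketched as follows. This is an assumption: the exact equation appears only as a figure in the source, so the sketch models it on the ISF concealment of Non Patent Literature 4, blending the most recent ISF with a mean built from the trained speech-period ISF and the average of the buffered frames, using the constants α and β mentioned above:

```python
import numpy as np

def extrapolate_isf(past_isf, trained_isf, alpha=0.9, beta=0.75):
    """Hypothetical form of the look-ahead ISF extrapolation: drift the most
    recent ISF toward a mean that blends the trained speech-period ISF with
    the average of the buffered past frames. alpha and beta stand in for the
    constants in the text (example values 0.9 and 0.75)."""
    past = np.asarray(past_isf, dtype=float)          # shape (J, P): J buffered frames
    mean_isf = beta * np.asarray(trained_isf, dtype=float) + (1 - beta) * past.mean(axis=0)
    return alpha * past[-1] + (1 - alpha) * mean_isf
```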
  • the ISF parameter ω_i is converted into an ISP parameter, and interpolation can be performed for each sub-frame.
  • the method described in the section 6.4.4 in Non Patent Literature 4 may be used, and as a method of interpolation, the procedure described in the section 6.8.3 in Non Patent Literature 4 may be used (Step 174 in Fig. 10 ).
  • the ISP parameter for each sub-frame is converted into an LP coefficient â_i^(j) (0 ≤ i ≤ P, 0 ≤ j < M_la).
  • the number of sub-frames contained in the look-ahead signal is M la .
  • the procedure described in the section 6.4.5 in Non Patent Literature 4 may be used (Step 175 in Fig. 10 ).
  • the target signal calculation unit 152 calculates a target signal x(n) and an impulse response h(n) by using the LP coefficient â_i^(j) (Step 162 in Fig. 9 ). As described in the section 6.8.4.1.3 in Non Patent Literature 4, the target signal is obtained by applying a perceptual weighting filter to a linear prediction residual signal ( Fig. 11 ).
  • a residual signal r(n) of the look-ahead signal spre(n) (0 ≤ n < L') is calculated using the LP coefficient according to the following equation (Step 181 in Fig. 11).
  • L' indicates the number of samples of a sub-frame.
  • the target signal x(n) (0 ≤ n < L') is calculated by the following equations (Step 182 in Fig. 11).
  • the value of the perceptual weighting filter may be a different value according to the design policy of audio encoding.
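The residual computation of Step 181 can be sketched as follows, assuming the convention A(z) = 1 + Σ ai·z⁻i so that r(n) = s(n) + Σ ai·s(n−i); the sign convention and the memory handling are assumptions, not the patent's exact formulation.

```python
def lp_residual(signal, lp_coefs, memory):
    """Residual r(n) = s(n) + sum_i a_i * s(n-i), using `memory`
    (most recent past sample last) to supply samples at negative indexes."""
    hist = list(memory)
    res = []
    for n, s in enumerate(signal):
        acc = s
        for i, a in enumerate(lp_coefs, start=1):
            past = signal[n - i] if n - i >= 0 else hist[len(hist) + (n - i)]
            acc += a * past
        res.append(acc)
    return res
```

With a single coefficient a1 = −1 this reduces to a first-order difference, which makes the filter's behavior easy to check by hand.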
  • the pitch lag calculation unit 153 calculates a pitch lag for each sub-frame by calculating k that maximizes the following equation (Step 163 in Fig. 9 ). Note that, in order to reduce the amount of calculations, the above-described target signal calculation (Step 182 in Fig. 11 ) and the impulse response calculation (Step 183 in Fig. 11 ) may be omitted, and the residual signal may be used as the target signal.
  • yk(n) is obtained by convoluting the impulse response with the linear prediction residual.
  • Int(i) indicates an interpolation filter.
  • the details of the interpolation filter are as described in the section 6.8.4.1.4.1 in Non Patent Literature 4.
  • the pitch lag can be calculated as an integer by the above-described calculation method.
  • the accuracy of the pitch lag may be increased to fractional (sub-sample) accuracy by interpolating the above Tk.
  • the processing method described in the section 6.8.4.1.4.1 in Non Patent Literature 4 may be used.
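The integer-lag search of Step 163 can be sketched as maximizing the normalized correlation (x·yk)²/(yk·yk). As the text permits, the sketch below correlates the target directly with the past excitation, omitting the impulse-response convolution and the Int(i) interpolation filter; it assumes lag_min is at least the sub-frame length so every candidate segment lies in the past.

```python
def open_loop_pitch(target, past, lag_min, lag_max):
    """Integer pitch search: maximize (x . y_k)^2 / (y_k . y_k), where
    y_k(n) = past[N - k + n]; requires lag_min >= len(target)."""
    N, L = len(past), len(target)
    best_k, best = lag_min, float("-inf")
    for k in range(lag_min, lag_max + 1):
        y = past[N - k:N - k + L]
        num = sum(x * yy for x, yy in zip(target, y)) ** 2
        den = sum(yy * yy for yy in y) or 1e-12  # guard an all-zero segment
        if num / den > best:
            best, best_k = num / den, k
    return best_k
```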
  • the adaptive codebook calculation unit 154 calculates an adaptive codebook vector v'(n) and a long-term prediction parameter from the pitch lag Tp and the adaptive codebook u(n) stored in the adaptive codebook buffer 156 according to the following equation (Step 164 in Fig. 9 ).
  • the method described in the section 5.7 in Non Patent Literature 3 may be used.
  • the excitation vector synthesis unit 155 multiplies the adaptive codebook vector v'(n) by a predetermined adaptive codebook gain gpC and outputs an excitation signal vector according to the following equation (Step 165 in Fig. 9).
  • e(n) = gpC · v'(n)
  • the value of the adaptive codebook gain gpC may be 1.0, for example; alternatively, a value obtained in advance by learning may be used, or it may be varied by the index representing the characteristics of the frame to be encoded.
  • the synthesis filter 157 synthesizes a decoded signal according to the following equation by linear prediction inverse filtering using the excitation signal vector as an excitation source (Step 167 in Fig. 9 ).
  • Steps 162 to 167 in Fig. 9 are repeated for each sub-frame until the end of the look-ahead signal (Step 168 in Fig. 9 ).
  • the pitch lag encoding unit 158 encodes the pitch lag Tpj (0 ≤ j < Mla) that is calculated in the look-ahead signal (Step 169 in Fig. 9).
  • the number of sub-frames contained in the look-ahead signal is M la .
  • Encoding may be performed by a method such as one of the following methods, for example, although any method may be used for encoding.
  • a codebook determined empirically or a codebook calculated in advance by learning may be used.
  • a method that performs encoding after adding an offset value to the above pitch lag may also be included in the scope of the embodiment of the present invention as a matter of course.
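Since any encoding method may be used, one illustrative possibility (an assumption, not the patent's scheme) is differential encoding of successive sub-frame lags against the previous lag, with the delta clamped to a signed n-bit range and shifted to a non-negative code word.

```python
def encode_pitch_lags(lags, prev_lag, n_bits=4):
    """Differentially encode each lag against the running reconstruction,
    clamping the delta to a signed n-bit range (illustrative scheme)."""
    lo, hi = -(1 << (n_bits - 1)), (1 << (n_bits - 1)) - 1
    codes, ref = [], prev_lag
    for t in lags:
        d = max(lo, min(hi, t - ref))
        codes.append(d - lo)        # shift to a non-negative code word
        ref = ref + d               # track the decoder-side reconstruction
    return codes

def decode_pitch_lags(codes, prev_lag, n_bits=4):
    """Invert encode_pitch_lags."""
    lo = -(1 << (n_bits - 1))
    out, ref = [], prev_lag
    for c in codes:
        ref = ref + (c + lo)
        out.append(ref)
    return out
```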
  • an example of the audio signal receiving device includes the audio code buffer 121, the audio parameter decoding unit 122, the audio parameter missing processing unit 123, the audio synthesis unit 124, the side information decoding unit 125, and the side information accumulation unit 126.
  • the procedure of the audio signal receiving device is as shown in the example of Fig. 7 .
  • the audio code buffer 121 determines whether a packet is correctly received or not. When the audio code buffer 121 determines that a packet is correctly received, the processing is switched to the audio parameter decoding unit 122 and the side information decoding unit 125. On the other hand, when the audio code buffer 121 determines that a packet is not correctly received, the processing is switched to the audio parameter missing processing unit 123 (Step 141 in Fig. 7 ).
  • the audio parameter decoding unit 122 decodes the received audio code and calculates audio parameters required to synthesize the audio for the frame to be encoded (ISP parameter and corresponding ISF parameter, pitch lag, long-term prediction parameter, adaptive codebook, adaptive codebook gain, fixed codebook gain, fixed codebook vector etc.) (Step 142 in Fig. 7 ).
  • the side information decoding unit 125 decodes the side information code, calculates a pitch lag T̂pj (0 ≤ j < Mla), and stores it in the side information accumulation unit 126.
  • the side information decoding unit 125 decodes the side information code by using the decoding method corresponding to the encoding method used at the encoding end (Step 143 in Fig. 7 ).
  • the audio synthesis unit 124 synthesizes the audio signal corresponding to the frame to be encoded based on the parameters output from the audio parameter decoding unit 122 (Step 144 in Fig. 7 ).
  • the functional configuration example of the audio synthesis unit 124 is shown in Fig. 15.
  • an example procedure of the audio synthesis unit 124 is shown in Fig. 16 . Note that, although the audio parameter missing processing unit 123 is illustrated to show the flow of the signal, the audio parameter missing processing unit 123 is not included in the functional configuration of the audio synthesis unit 124.
  • An LP coefficient calculation unit 1121 converts an ISF parameter into an ISP parameter and then performs interpolation processing, and thereby obtains an ISP coefficient for each sub-frame.
  • the LP coefficient calculation unit 1121 then converts the ISP coefficient into a linear prediction coefficient (LP coefficient) and thereby obtains an LP coefficient for each sub-frame (Step 11301 in Fig. 16 ).
  • for the interpolation of the ISP coefficient and the conversion from the ISP coefficient to the LP coefficient, the method described in, for example, the section 6.4.5 in Non Patent Literature 4 may be used.
  • the procedure of parameter conversion is not the essential part of the embodiment of the present invention and thus not described in detail.
  • An adaptive codebook calculation unit 1123 calculates an adaptive codebook vector by using the pitch lag, a long-term prediction parameter and an adaptive codebook 1122 (Step 11302 in Fig. 16 ).
  • An adaptive codebook vector v'(n) is calculated from the pitch lag T̂pj and the adaptive codebook u(n) according to the following equation.
  • the adaptive codebook vector is calculated by interpolating the adaptive codebook u(n) using FIR filter Int(i).
  • the length of the adaptive codebook is N adapt .
  • This is the FIR filter with a predetermined length of 2l+1.
  • L' is the number of samples of the sub-frame. Unlike at the encoder end, it is not necessary to use a filter for the interpolation.
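The adaptive codebook lookup of Step 11302 can be sketched for the integer-lag case as a plain read-back of the excitation history by pitch_lag samples, with an optional short symmetric smoothing filter standing in for Int(i); the taps are illustrative, not those of Non Patent Literature 4.

```python
def adaptive_codebook_vector(u, pitch_lag, L, taps=None):
    """v'(n) = u(N_adapt - pitch_lag + n), optionally smoothed by a short
    symmetric FIR filter (illustrative taps, not the codec's Int(i)).
    Requires pitch_lag >= L so the read stays inside the history."""
    N = len(u)
    v = [u[N - pitch_lag + n] for n in range(L)]
    if taps:  # e.g. taps = [0.18, 0.64, 0.18]
        half = len(taps) // 2
        # Edge samples are clamped rather than wrapped (a simplification).
        v = [sum(t * v[min(max(n + i - half, 0), L - 1)] for i, t in enumerate(taps))
             for n in range(L)]
    return v
```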
  • the adaptive codebook calculation unit 1123 carries out filtering on the adaptive codebook vector according to the value of the long-term prediction parameter (Step 11303 in Fig. 16 ).
  • when the long-term prediction parameter has a value indicating the activation of filtering, filtering is performed on the adaptive codebook vector by the following equation.
  • An excitation vector synthesis unit 1124 multiplies the adaptive codebook vector by an adaptive codebook gain gp (Step 11304 in Fig. 16 ). Further, the excitation vector synthesis unit 1124 multiplies a fixed codebook vector c(n) by a fixed codebook gain g c (Step 11305 in Fig. 16 ). Furthermore, the excitation vector synthesis unit 1124 adds the adaptive codebook vector and the fixed codebook vector together and outputs an excitation signal vector (Step 11306 in Fig. 16 ).
  • e(n) = gp · v'(n) + gc · c(n)
  • a post filter 1125 performs post processing such as pitch enhancement, noise enhancement and low-frequency enhancement, for example, on the excitation signal vector.
  • the details of techniques such as pitch enhancement, noise enhancement and low-frequency enhancement are described in the section 6.1 in Non Patent Literature 3.
  • the processing in the post filter is not significantly related to the essential part of the embodiment of the present invention and thus not described in detail (Step 11307 in Fig. 16 ).
  • the adaptive codebook 1122 updates the state by an excitation signal vector according to the following equations (Step 11308 in Fig. 16 ).
  • u(n) = u(n + L) (0 ≤ n < N - L)
  • u(n + N - L) = e(n) (0 ≤ n < L)
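The buffer update of Step 11308 amounts to shifting out the oldest L samples of the adaptive codebook and appending the new excitation:

```python
def update_adaptive_codebook(u, e):
    """u(n) = u(n + L) for 0 <= n < N - L, then
    u(n + N - L) = e(n) for 0 <= n < L."""
    L = len(e)
    return u[L:] + list(e)
```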
  • a synthesis filter 1126 synthesizes a decoded signal according to the following equation by linear prediction inverse filtering using the excitation signal vector as an excitation source (Step 11309 in Fig. 16 ).
  • A perceptual weighting inverse filter 1127 applies a perceptual weighting inverse filter to the decoded signal according to the following equation (Step 11310 in Fig. 16).
  • ŝ(n) = s(n) + β · ŝ(n-1)
  • the value of ⁇ is typically 0.68 or the like, though not limited to this value.
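Step 11310 is a one-pole recursion; a minimal sketch, with the filter state carried across frames and β = 0.68 as the default:

```python
def deemphasis(signal, beta=0.68, state=0.0):
    """s_hat(n) = s(n) + beta * s_hat(n-1); `state` carries s_hat(-1)
    across frame boundaries."""
    out, prev = [], state
    for s in signal:
        prev = s + beta * prev
        out.append(prev)
    return out
```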
  • the audio parameter missing processing unit 123 stores the audio parameters (ISF parameter, pitch lag, adaptive codebook gain, fixed codebook gain) used in the audio synthesis unit 124 into the buffer (Step 145 in Fig. 7 ).
  • the audio parameter missing processing unit 123 reads a pitch lag T̂pj (0 ≤ j < Mla) from the side information accumulation unit 126 and predicts audio parameters.
  • the functional configuration example of the audio parameter missing processing unit 123 is shown in the example of Fig. 12 , and an example procedure of audio parameter prediction is shown in Fig. 13 .
  • An ISF prediction unit 191 calculates an ISF parameter using the ISF parameter for the previous frame and the ISF parameter calculated for the past several frames (Step 1101 in Fig. 13 ). The procedure of the ISF prediction unit 191 is shown in Fig. 10 .
  • the buffer is updated using the ISF parameter of the immediately previous frame (Step 171 in Fig. 10 ).
  • the ISF parameter ω̂i is calculated according to the following equation (Step 172 in Fig. 10).
  • ωi(-j) is the ISF parameter, stored in the buffer, which is for the frame preceding by j frames.
  • ωiC, α and β are the same values as those used at the encoding end.
  • for arranging the calculated ISF parameter ω̂i, the method of Equation (151) in Non Patent Literature 4 may be used (Step 173 in Fig. 10).
  • a pitch lag prediction unit 192 decodes the side information code from the side information accumulation unit 126 and thereby obtains a pitch lag T̂pi (0 ≤ i < Mla). Further, by using a pitch lag T̂p(-j) (0 ≤ j < J) used for the past decoding, the pitch lag prediction unit 192 outputs a pitch lag T̂pi (Mla ≤ i < M).
  • the number of sub-frames contained in one frame is M
  • the number of pitch lags contained in the side information is M la .
  • the procedure described in, for example, section 7.11.1.3 in Non Patent Literature 4 may be used (Step 1102 in Fig. 13 ).
  • An adaptive codebook gain prediction unit 193 outputs an adaptive codebook gain gpi (Mla ≤ i < M) by using a predetermined adaptive codebook gain gpC and an adaptive codebook gain gp(-j) (0 ≤ j < J) used in the past decoding.
  • the number of sub-frames contained in one frame is M, and the number of pitch lags contained in the side information is M la .
  • the procedure described in, for example, section 7.11.2.5.3 in Non Patent Literature 4 may be used (Step 1103 in Fig. 13 ).
  • a fixed codebook gain prediction unit 194 outputs a fixed codebook gain gci (0 ≤ i < M) by using a fixed codebook gain gc(-j) (0 ≤ j < J) used in the past decoding.
  • the number of sub-frames contained in one frame is M.
  • the procedure described in the section 7.11.2.6 in Non Patent Literature 4 may be used, for example (Step 1104 in Fig. 13 ).
  • a noise signal generation unit 195 outputs a noise vector, such as white noise, with a length of L (Step 1105 in Fig. 13).
  • the length of one frame is L.
  • the audio synthesis unit 124 synthesizes a decoded signal based on the audio parameters output from the audio parameter missing processing unit 123 (Step 144 in Fig. 7 ).
  • the operation of the audio synthesis unit 124 is the same as the operation of the audio synthesis unit described in <When audio packet is correctly received> and is not redundantly described in detail (Step 144 in Fig. 7).
  • the audio parameter missing processing unit 123 stores the audio parameters (ISF parameter, pitch lag, adaptive codebook gain, fixed codebook gain) used in the audio synthesis unit 124 into the buffer (Step 145 in Fig. 7 ).
  • the procedure of the excitation vector synthesis unit 155 is shown in the example of Fig. 14 .
  • An adaptive codebook gain gpC is calculated from the adaptive codebook vector v'(n) and the target signal x(n) according to the following equation (Step 1111 in Fig. 14).
  • the calculated adaptive codebook gain is encoded and contained in the side information code (Step 1112 in Fig. 14 ).
  • scalar quantization using a codebook obtained in advance by learning may be used, although any other technique may be used for the encoding.
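A minimal nearest-neighbor scalar quantizer of the kind mentioned can be sketched as follows; in practice the codebook values would come from learning, and those in the usage below are purely illustrative.

```python
def quantize_gain(g, codebook):
    """Scalar-quantize gain g: return (index, decoded_gain) of the
    nearest codebook entry. The codebook is an illustrative stand-in
    for one obtained in advance by learning."""
    idx = min(range(len(codebook)), key=lambda i: abs(codebook[i] - g))
    return idx, codebook[idx]
```

For example, quantizing a gain of 0.8 against the codebook [0.0, 0.5, 1.0, 1.5] selects index 2 and decodes to 1.0.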
  • an excitation vector is calculated according to the following equation (Step 1113 in Fig. 14 ).
  • e(n) = ĝp · v'(n)
  • the excitation vector synthesis unit 155 multiplies the adaptive codebook vector v'(n) by an adaptive codebook gain ĝp obtained by decoding the side information code and outputs an excitation signal vector according to the following equation (Step 165 in Fig. 9).
  • e(n) = ĝp · v'(n)
  • the functional configuration example of the side information encoding unit is shown in Fig. 17 , and the procedure of the side information encoding unit is shown in the example of Fig. 18 .
  • a difference from the example 1 is only a side information output determination unit 1128 (Step 1131 in Fig. 18 ), and therefore description of the other parts is omitted.
  • the side information output determination unit 1128 calculates the segmental SNR between the decoded signal and the look-ahead signal according to the following equation, and only when the segmental SNR exceeds a threshold, sets the value of the flag to ON and adds it to the side information.
  • otherwise, the side information output determination unit 1128 sets the value of the flag to OFF and adds it to the side information (Step 1131 in Fig. 18).
  • the amount of bits of the side information may be reduced by adding the side information such as a pitch lag and a pitch gain after the flag and transmitting them only when the value of the flag is ON; when the value of the flag is OFF, only the value of the flag is transmitted.
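The flag decision can be sketched with the standard segmental-SNR form; the segment length and threshold below are illustrative parameters, and the patent's exact equation is not reproduced here.

```python
import math

def side_info_flag(reference, decoded, seg_len, threshold_db):
    """Set the flag ON only when the mean per-segment SNR (dB) of
    `decoded` against `reference` exceeds `threshold_db`."""
    snrs = []
    for s in range(0, len(reference) - seg_len + 1, seg_len):
        sig = sum(x * x for x in reference[s:s + seg_len])
        err = sum((x - y) ** 2 for x, y in
                  zip(reference[s:s + seg_len], decoded[s:s + seg_len])) or 1e-12
        snrs.append(10.0 * math.log10(sig / err if sig > 0 else 1e-12))
    return (sum(snrs) / len(snrs)) > threshold_db
```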
  • the side information decoding unit decodes the flag contained in the side information code.
  • when the value of the flag is ON, the audio parameter missing processing unit calculates a decoded signal by the same procedure as in the example 1.
  • when the value of the flag is OFF, it calculates a decoded signal by the packet loss concealment technique without using side information (Step 1151 in Fig. 19).
  • the decoded audio of the look-ahead signal part is also used when a packet is correctly received.
  • the number of sub-frames contained in one frame is M.
  • the length of the look-ahead signal is M' sub-frame(s).
  • the audio signal transmitting device includes a main encoding unit 211, a side information encoding unit 212, a concealment signal accumulation unit 213, and an error signal encoding unit 214.
  • the procedure of the audio signal transmitting device is shown in Fig. 22 .
  • the error signal encoding unit 214 reads a concealment signal for one sub-frame from the concealment signal accumulation unit 213, subtracts it from the audio signal and thereby calculates an error signal (Step 221 in Fig. 22 ).
  • the error signal encoding unit 214 encodes the error signal.
  • for the encoding, AVQ described in the section 6.8.4.1.5 in Non Patent Literature 4 may be used, for example.
  • a decoded error signal is output (Step 222 in Fig. 22 ).
  • a decoded signal for one sub-frame is output (Step 223 in Fig. 22 ).
  • Steps 221 to 223 are repeated for M' sub-frames until the end of the concealment signal.
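Steps 221 to 223 can be sketched as follows, with a caller-supplied `quantize` function standing in for the AVQ encode/decode round trip (a hypothetical stand-in, not the AVQ of Non Patent Literature 4):

```python
def encode_lookahead_error(audio, concealment, quantize):
    """For one sub-frame: error = audio - concealment, encode/decode the
    error via `quantize` (a stand-in for AVQ), and output the decoded
    signal concealment + decoded_error."""
    error = [a - c for a, c in zip(audio, concealment)]
    decoded_error = quantize(error)
    decoded = [c + e for c, e in zip(concealment, decoded_error)]
    return decoded_error, decoded
```

A trivial rounding quantizer is enough to exercise the round trip in a test.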
  • the main encoding unit 211 includes an ISF encoding unit 2011, a target signal calculation unit 2012, a pitch lag calculation unit 2013, an adaptive codebook calculation unit 2014, a fixed codebook calculation unit 2015, a gain calculation unit 2016, an excitation vector calculation unit 2017, a synthesis filter 2018, and an adaptive codebook buffer 2019.
  • the ISF encoding unit 2011 obtains an LP coefficient by applying the Levinson-Durbin method to the frame to be encoded and the look-ahead signal.
  • the ISF encoding unit 2011 then converts the LP coefficient into an ISF parameter and encodes the ISF parameter.
  • the ISF encoding unit 2011 then decodes the code and obtains a decoded ISF parameter.
  • the ISF encoding unit 2011 interpolates the decoded ISF parameter and obtains a decoded LP coefficient for each sub-frame.
  • the procedures of the Levinson-Durbin method and the conversion from the LP coefficient to the ISF parameter are the same as in the example 1.
  • an index obtained by encoding the ISF parameter, the decoded ISF parameter, and the decoded LP coefficient (which is obtained by converting the decoded ISF parameter into the LP coefficient) can be obtained by the ISF encoding unit 2011 (Step 224 in Fig. 22 ).
  • the detailed procedure of the target signal calculation unit 2012 is the same as in Step 162 in Fig. 9 in the example 1 (Step 225 in Fig. 22 ).
  • the pitch lag calculation unit 2013 refers to the adaptive codebook buffer and calculates a pitch lag and a long-term prediction parameter by using the target signal.
  • the detailed procedure of the calculation of the pitch lag and the long-term prediction parameter is the same as in the example 1 (Step 226 in Fig. 22 ).
  • the adaptive codebook calculation unit 2014 calculates an adaptive codebook vector by using the pitch lag and the long-term prediction parameter calculated by the pitch lag calculation unit 2013.
  • the detailed procedure of the adaptive codebook calculation unit 2014 is the same as in the example 1 (Step 227 in Fig. 22 ).
  • the fixed codebook calculation unit 2015 calculates a fixed codebook vector and an index obtained by encoding the fixed codebook vector by using the target signal and the adaptive codebook vector.
  • the detailed procedure is the same as the procedure of AVQ used in the error signal encoding unit 214 (Step 228 in Fig. 22 ).
  • the gain calculation unit 2016 calculates an adaptive codebook gain, a fixed codebook gain and an index obtained by encoding these two gains using the target signal, the adaptive codebook vector and the fixed codebook vector.
  • a detailed procedure which can be used is described in, for example, section 6.8.4.1.6 in Non Patent Literature 4 (Step 229 in Fig. 22 ).
  • the excitation vector calculation unit 2017 calculates an excitation vector by adding the adaptive codebook vector and the fixed codebook vector to which the gain is applied.
  • the detailed procedure is the same as in example 1.
  • the excitation vector calculation unit 2017 updates the state of the adaptive codebook buffer 2019 by using the excitation vector.
  • the detailed procedure is the same as in the example 1 (Step 2210 in Fig. 22 ).
  • the synthesis filter 2018 synthesizes a decoded signal by using the decoded LP coefficient and the excitation vector (Step 2211 in Fig. 22 ).
  • Steps 224 to 2211 are repeated for M-M' sub-frames until the end of the frame to be encoded.
  • the side information encoding unit 212 calculates the side information for the M' sub-frames of the look-ahead signal.
  • a specific procedure is the same as in the example 1 (Step 2212 in Fig. 22 ).
  • the decoded signal output by the synthesis filter 157 of the side information encoding unit 212 is accumulated in the concealment signal accumulation unit 213 in the example 2 (Step 2213 in Fig. 22 ).
  • an example of the audio signal receiving device includes an audio code buffer 231, an audio parameter decoding unit 232, an audio parameter missing processing unit 233, an audio synthesis unit 234, a side information decoding unit 235, a side information accumulation unit 236, an error signal decoding unit 237, and a concealment signal accumulation unit 238.
  • An example procedure of the audio signal receiving device is shown in Fig. 24 .
  • An example functional configuration of the audio synthesis unit 234 is shown in Fig. 25 .
  • the audio code buffer 231 determines whether a packet is correctly received or not. When the audio code buffer 231 determines that a packet is correctly received, the processing is switched to the audio parameter decoding unit 232, the side information decoding unit 235 and the error signal decoding unit 237. On the other hand, when the audio code buffer 231 determines that a packet is not correctly received, the processing is switched to the audio parameter missing processing unit 233 (Step 241 in Fig. 24 ).
  • the error signal decoding unit 237 decodes an error signal code and obtains a decoded error signal.
  • a decoding method corresponding to the method used at the encoding end, such as AVQ described in the section 7.1.2.1.2 in Non Patent Literature 4, can be used (Step 242 in Fig. 24).
  • a look-ahead excitation vector synthesis unit 2318 reads a concealment signal for one sub-frame from the concealment signal accumulation unit 238 and adds the concealment signal to the decoded error signal, and thereby outputs a decoded signal for one sub-frame (Step 243 in Fig. 24 ).
  • Steps 241 to 243 are repeated for M' sub-frames until the end of the concealment signal.
  • the audio parameter decoding unit 232 includes an ISF decoding unit 2211, a pitch lag decoding unit 2212, a gain decoding unit 2213, and a fixed codebook decoding unit 2214.
  • the functional configuration example of the audio parameter decoding unit 232 is shown in Fig. 26 .
  • the ISF decoding unit 2211 decodes the ISF code and converts it into an LP coefficient and thereby obtains a decoded LP coefficient. For example, the procedure described in the section 7.1.1 in Non Patent Literature 4 is used (Step 244 in Fig. 24 ).
  • the pitch lag decoding unit 2212 decodes a pitch lag code and obtains a pitch lag and a long-term prediction parameter (Step 245 in Fig. 24 ).
  • the gain decoding unit 2213 decodes a gain code and obtains an adaptive codebook gain and a fixed codebook gain.
  • An example detailed procedure is described in the section 7.1.2.1.3 in Non Patent Literature 4 (Step 246 in Fig. 24 ).
  • An adaptive codebook calculation unit 2313 calculates an adaptive codebook vector by using the pitch lag and the long-term prediction parameter.
  • the detailed procedure of the adaptive codebook calculation unit 2313 is as described in the example 1 (Step 247 in Fig. 24 ).
  • the fixed codebook decoding unit 2214 decodes a fixed codebook code and calculates a fixed codebook vector.
  • the detailed procedure is as described in the section 7.1.2.1.2 in Non Patent Literature 4 (Step 248 in Fig. 24 ).
  • An excitation vector synthesis unit 2314 calculates an excitation vector by adding the adaptive codebook vector and the fixed codebook vector to which the gain is applied. Further, an excitation vector calculation unit updates the adaptive codebook buffer by using the excitation vector (Step 249 in Fig. 24 ). The detailed procedure is the same as in the example 1.
  • a synthesis filter 2316 synthesizes a decoded signal by using the decoded LP coefficient and the excitation vector (Step 2410 in Fig. 24 ).
  • the detailed procedure is the same as in the example 1.
  • Steps 244 to 2410 are repeated for M-M' sub-frames until the end of the frame to be encoded.
  • the functional configuration of the side information decoding unit 235 is the same as in the example 1.
  • the side information decoding unit 235 decodes the side information code and calculates a pitch lag (Step 2411 in Fig. 24 ).
  • the functional configuration of the audio parameter missing processing unit 233 is the same as in the example 1.
  • the ISF prediction unit 191 predicts an ISF parameter using the ISF parameter for the previous frame and converts the predicted ISF parameter into an LP coefficient.
  • the procedure is the same as in Steps 172, 173 and 174 of the example 1 shown in Fig. 10 (Step 2412 in Fig. 24 ).
  • the adaptive codebook calculation unit 2313 calculates an adaptive codebook vector by using the pitch lag output from the side information decoding unit 235 and an adaptive codebook 2312 (Step 2413 in Fig. 24 ). The procedure is the same as in Steps 11301 and 11302 in Fig. 16 .
  • the adaptive codebook gain prediction unit 193 outputs an adaptive codebook gain.
  • a specific procedure is the same as in Step 1103 in Fig. 13 (Step 2414 in Fig. 24 ).
  • the fixed codebook gain prediction unit 194 outputs a fixed codebook gain.
  • a specific procedure is the same as in Step 1104 in Fig. 13 (Step 2415 in Fig. 24 ).
  • the noise signal generation unit 195 outputs a noise vector, such as white noise, as a fixed codebook vector.
  • the procedure is the same as in Step 1105 in Fig. 13 (Step 2416 in Fig. 24 ).
  • the excitation vector synthesis unit 2314 applies gain to each of the adaptive codebook vector and the fixed codebook vector and adds them together and thereby calculates an excitation vector. Further, the excitation vector synthesis unit 2314 updates the adaptive codebook buffer using the excitation vector (Step 2417 in Fig. 24 ).
  • the synthesis filter 2316 calculates a decoded signal using the above-described LP coefficient and the excitation vector. The synthesis filter 2316 then updates the concealment signal accumulation unit 238 using the calculated decoded signal (Step 2418 in Fig. 24 ).
  • a concealment signal for one sub-frame is read from the concealment signal accumulation unit and is used as the decoded signal (Step 2419 in Fig. 24 ).
  • the ISF prediction unit 191 predicts an ISF parameter (Step 2420 in Fig. 24 ).
  • Step 1101 in Fig. 13 can be used.
  • the pitch lag prediction unit 192 outputs a predicted pitch lag by using the pitch lag used in the past decoding (Step 2421 in Fig. 24 ).
  • the procedure used for the prediction is the same as in Step 1102 in Fig. 13 .
  • the operations of the adaptive codebook gain prediction unit 193, the fixed codebook gain prediction unit 194, the noise signal generation unit 195 and the audio synthesis unit 234 are the same as in the example 1 (Step 2422 in Fig. 24 ).
  • the functional configuration of the audio signal transmitting device is the same as in example 1.
  • the functional configuration and the procedure are different only in the side information encoding unit, and therefore only the operation of the side information encoding unit is described below.
  • the side information encoding unit includes an LP coefficient calculation unit 311, a pitch lag prediction unit 312, a pitch lag selection unit 313, a pitch lag encoding unit 314, and an adaptive codebook buffer 315.
  • the functional configuration of an example of the side information encoding unit is shown in Fig. 27
  • an example procedure of the side information encoding unit is shown in the example of Fig. 28 .
  • the LP coefficient calculation unit 311 is the same as the LP coefficient calculation unit in example 1 and thus will not be redundantly described (Step 321 in Fig. 28 ).
  • the pitch lag prediction unit 312 calculates a pitch lag predicted value T̂p using the pitch lag obtained from the audio encoding unit (Step 322 in Fig. 28).
  • the specific processing of the prediction is the same as the prediction of the pitch lag T̂pi (Mla ≤ i < M) in the pitch lag prediction unit 192 in the example 1 (which is the same as in Step 1102 in Fig. 13).
  • the pitch lag selection unit 313 determines a pitch lag to be transmitted as the side information (Step 323 in Fig. 28 ).
  • the detailed procedure of the pitch lag selection unit 313 is shown in the example of Fig. 29 .
  • a pitch lag codebook is generated from the pitch lag predicted value T̂p and the value of the past pitch lag T̂p(-j) (0 ≤ j < J) according to the following equations (Step 331 in Fig. 29).
  • the value of the pitch lag for one sub-frame before is T̂p(-1).
  • the number of indexes of the codebook is I.
  • δj is a predetermined step width
  • is a predetermined constant.
  • an initial excitation vector u 0 (n) is generated according to the following equation (Step 332 in Fig. 29 ).
  • u0(n) = 0.18 · u(n - T̂p - 1) + 0.64 · u(n - T̂p) + 0.18 · u(n - T̂p + 1) (0 ≤ n < T̂p); u0(n) = u0(n - T̂p) (T̂p ≤ n < L)
  • the procedure of calculating the initial excitation vector is the same as the equations (607) and (608) in Non Patent Literature 4.
  • glottal pulse synchronization is applied to the initial excitation vector by using all candidate pitch lags T̂Cj (0 ≤ j < I) in the pitch lag codebook to thereby generate candidate adaptive codebook vectors uj(n) (0 ≤ j < I) (Step 333 in Fig. 29).
  • the same procedure can be used as in the case described in section 7.11.2.5 in Non Patent Literature 4 where a pulse position is not available.
  • in the procedure of Non Patent Literature 4, the excitation corresponds to u0(n) in the embodiment of the present invention, the extrapolated pitch corresponds to T̂Cj in the embodiment of the present invention, and the last reliable pitch (Tc) corresponds to T̂p(-1) in the embodiment of the present invention.
  • a rate scale is calculated (Step 334 in Fig. 29 ).
  • as the rate scale, segmental SNR may be used: a signal is synthesized by inverse filtering using the LP coefficient, and segmental SNR is calculated against the input signal according to the following equation.
  • alternatively, segmental SNR may be calculated in the region of the adaptive codebook vector by using a residual signal, according to the following equation.
  • a residual signal r(n) of the look-ahead signal s(n) (0 ≤ n < L') is calculated by using the LP coefficient (Step 181 in Fig. 11).
  • An index corresponding to the largest rate scale calculated in Step 334 is selected, and a pitch lag corresponding to the index is calculated (Step 335 in Fig. 29).
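Steps 331 to 335 can be condensed into the following sketch: build candidate lags around the predicted value, score each candidate, and keep the index with the best rate scale. The candidate generation and the caller-supplied `score` function (which would embody the segmental-SNR measure above) are simplified assumptions.

```python
def select_pitch_lag(predicted, deltas, score):
    """Build a candidate pitch-lag codebook {predicted + d for d in deltas},
    score each candidate with `score` (e.g. a segmental-SNR rate scale),
    and return (best_index, best_lag)."""
    codebook = [predicted + d for d in deltas]
    best_idx = max(range(len(codebook)), key=lambda i: score(codebook[i]))
    return best_idx, codebook[best_idx]
```

The index, not the lag itself, is what would be transmitted as side information.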
  • the functional configuration of the audio signal receiving device is the same as in the example 1. Differences from the example 1 are the functional configuration and the procedure of the audio parameter missing processing unit 123, the side information decoding unit 125 and the side information accumulation unit 126, and only those are described hereinbelow.
  • the side information decoding unit 125 decodes the side information code, calculates a pitch lag T̂Cidx and stores it into the side information accumulation unit 126.
  • the example procedure of the side information decoding unit 125 is shown in Fig. 30 .
  • the pitch lag prediction unit 312 first calculates a pitch lag predicted value T̂p by using the pitch lag obtained from the audio decoding unit (Step 341 in Fig. 30).
  • the specific processing of the prediction is the same as in Step 322 of Fig. 28 in the example 3.
  • a pitch lag codebook is generated from the pitch lag predicted value T̂p, and the value of the past pitch lag T̂p(-j) (0 ≤ j < J), according to the following equations (Step 342 in Fig. 30).
  • a pitch lag T̂Cidx corresponding to the index idx transmitted as part of the side information is calculated and stored in the side information accumulation unit 126 (Step 343 in Fig. 30).
  • since the functional configuration of the audio synthesis unit is also the same as in the example 1 (the same as in Fig. 15), only the adaptive codebook calculation unit 1123, which operates differently from that in the example 1, is described hereinbelow.
  • the audio parameter missing processing unit 123 reads the pitch lag from the side information accumulation unit 126 and calculates a pitch lag predicted value according to the following equation, and uses the calculated pitch lag predicted value instead of the output of the pitch lag prediction unit 192.
  • $\hat{T}_p = \hat{T}_p^{(-1)} + \kappa\,\bigl(\hat{T}_C^{idx} - \hat{T}_p^{(-1)}\bigr)$, where $\kappa$ is a predetermined constant.
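A worked sketch of this smoothed update (the value κ = 0.25 is an assumed stand-in for the predetermined constant): the previous lag is moved a fixed fraction of the way toward the lag decoded from the side information.

```python
# Sketch of the smoothed pitch-lag update: prev + kappa * (decoded - prev).
# kappa = 0.25 is an assumed, illustrative value for the constant.
def smooth_pitch_lag(prev_lag, decoded_lag, kappa=0.25):
    return prev_lag + kappa * (decoded_lag - prev_lag)
```

So a previous lag of 40 and a decoded lag of 48 yield 42, a quarter of the way toward the new value.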
  • an initial excitation vector u 0 (n) is generated according to the following equation (Step 332 in Fig. 29 ).
  • $u_0(n) = \begin{cases} 0.18\,u_0(n-\hat{T}_p^{(-1)}-1) + 0.64\,u_0(n-\hat{T}_p^{(-1)}) + 0.18\,u_0(n-\hat{T}_p^{(-1)}+1) & (0 \le n < \hat{T}_p^{(-1)}) \\ u_0(n-\hat{T}_p^{(-1)}) & (\hat{T}_p^{(-1)} \le n < L) \end{cases}$
  • glottal pulse synchronization is applied to the initial excitation vector by using the pitch lag $\hat{T}_C^{idx}$ to thereby generate an adaptive codebook vector u(n).
  • the same procedure as in Step 333 of Fig. 29 is used.
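The two-branch construction of the initial excitation vector above can be sketched as follows; the buffer handling and the requirement that the past excitation be at least lag+1 samples long are illustrative assumptions, not the patent's exact buffer layout.

```python
# Sketch: the first pitch period is a 3-tap (0.18, 0.64, 0.18) smoothing of
# the lag-delayed past excitation; samples beyond the first period simply
# repeat the new excitation at the lag. Requires len(past) >= lag + 1.
def initial_excitation(past, lag, length):
    u = list(past)                     # history; new samples appended at end
    base = len(past)
    out = []
    for n in range(length):
        if n < lag:
            x = 0.18 * u[base + n - lag - 1] \
              + 0.64 * u[base + n - lag] \
              + 0.18 * u[base + n - lag + 1]
        else:
            x = u[base + n - lag]      # plain repetition at the pitch lag
        u.append(x)
        out.append(x)
    return out
```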
  • an audio encoding program 70 that causes a computer to execute the above-described processing by the audio signal transmitting device is described. As shown in Fig. 31 , the audio encoding program 70 is stored in a program storage area 61 formed in a recording medium 60 that is inserted into a computer and accessed, or included in a computer.
  • the audio encoding program 70 includes an audio encoding module 700 and a side information encoding module 701.
  • the functions implemented by executing the audio encoding module 700 and the side information encoding module 701 are the same as the functions of the audio encoding unit 111 and the side information encoding unit 112 in the audio signal transmitting device described above, respectively.
  • a part or the whole of the audio encoding program 70 may be transmitted through a transmission medium such as a communication line, received and stored (including being installed) by another device. Further, each module of the audio encoding program 70 may be installed not in one computer but in any of a plurality of computers. In this case, the above-described processing of the audio encoding program 70 is performed by a computer system composed of the plurality of computers.
  • an audio decoding program 90 that causes a computer to execute the above-described processing by the audio signal receiving device is described.
  • the audio decoding program 90 is stored in a program storage area 81 formed in a recording medium 80 that is inserted into a computer and accessed, or included in a computer.
  • the audio decoding program 90 includes an audio code buffer module 900, an audio parameter decoding module 901, a side information decoding module 902, a side information accumulation module 903, an audio parameter missing processing module 904, and an audio synthesis module 905.
  • the functions implemented by executing the audio code buffer module 900, the audio parameter decoding module 901, the side information decoding module 902, the side information accumulation module 903, an audio parameter missing processing module 904 and the audio synthesis module 905 are the same as the function of the audio code buffer 231, the audio parameter decoding unit 232, the side information decoding unit 235, the side information accumulation unit 236, the audio parameter missing processing unit 233 and the audio synthesis unit 234 described above, respectively.
  • a part or the whole of the audio decoding program 90 may be transmitted through a transmission medium such as a communication line, received and stored (including being installed) by another device. Further, each module of the audio decoding program 90 may be installed not in one computer but in any of a plurality of computers. In this case, the above-described processing of the audio decoding program 90 is performed by a computer system composed of the plurality of computers.
  • the functional configuration of the audio signal transmitting device is the same as in the example 1.
  • the functional configuration and the procedure differ only in the side information encoding unit 112, and therefore only the operation of the side information encoding unit 112 is described hereinbelow.
  • the functional configuration of an example of the side information encoding unit 112 is shown in Fig. 33 , and an example procedure of the side information encoding unit 112 is shown in Fig. 34 .
  • the side information encoding unit 112 includes an LP coefficient calculation unit 511, a residual signal calculation unit 512, a pitch lag calculation unit 513, an adaptive codebook calculation unit 514, an adaptive codebook buffer 515, and a pitch lag encoding unit 516.
  • the LP coefficient calculation unit 511 is the same as the LP coefficient calculation unit 151 in example 1 shown in Fig. 8 and thus is not redundantly described.
  • the residual signal calculation unit 512 calculates a residual signal by the same processing as in Step 181 in example 1 shown in Fig. 11 .
  • the pitch lag calculation unit 513 calculates a pitch lag for each sub-frame by calculating k that maximizes the following equation (Step 163 in Fig. 34 ).
  • u(n) indicates the adaptive codebook
  • L' indicates the number of samples contained in one sub-frame.
  • $T_p = \underset{k}{\arg\max}\; T(k)$
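The arg-max pitch search can be sketched as below; the normalized-correlation form of T(k) and the search bounds are assumptions for illustration, since the patent only states that k maximizing T(k) is chosen.

```python
# Sketch: pick the pitch lag as the k maximizing a normalized correlation
# T(k) between the newest sub_len residual samples and their copy k samples
# back. A zero-energy denominator is guarded with 1.0.
def pitch_lag_by_argmax(u, sub_len, k_min, k_max):
    best_k, best_t = k_min, float("-inf")
    end = len(u)                       # newest sample is u[end - 1]
    for k in range(k_min, k_max + 1):
        num = sum(u[end - sub_len + n] * u[end - sub_len + n - k]
                  for n in range(sub_len))
        den = sum(u[end - sub_len + n - k] ** 2 for n in range(sub_len)) or 1.0
        t = num / den
        if t > best_t:
            best_k, best_t = k, t
    return best_k
```

On a signal with period 4, the search correctly returns a lag of 4.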
  • the adaptive codebook calculation unit 514 calculates an adaptive codebook vector v'(n) from the pitch lag Tp and the adaptive codebook u(n).
  • the length of the adaptive codebook is N adapt (Step 164 in Fig. 34 ).
  • $v'(n) = u(n + N_{adapt} - T_p)$
  • the adaptive codebook buffer 515 updates the state by the adaptive codebook vector v'(n) (Step 166 in Fig. 34 ).
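Steps 164 and 166 together (reading v'(n) from the tail of the codebook, then updating the buffer state) can be sketched as follows, assuming for simplicity that the sub-frame length does not exceed the pitch lag; the fixed-length shift-and-append state update is an illustrative assumption.

```python
# Sketch: v'(n) = u(n + N_adapt - Tp) read from the codebook tail, then the
# buffer drops its oldest samples and appends v so its length is unchanged.
# Assumes sub_len <= t_p so all read indices stay inside the buffer.
def adaptive_codebook_step(u, t_p, sub_len):
    n_adapt = len(u)
    v = [u[n + n_adapt - t_p] for n in range(sub_len)]
    u_next = u[sub_len:] + v           # state update: drop oldest, append v
    return v, u_next
```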
  • the pitch lag encoding unit 516 is the same as that in example 1 and thus not redundantly described (Step 169 in Fig. 34 ).
  • the audio signal receiving device includes the audio code buffer 121, the audio parameter decoding unit 122, the audio parameter missing processing unit 123, the audio synthesis unit 124, the side information decoding unit 125, and the side information accumulation unit 126, just like in example 1.
  • the procedure of the audio signal receiving device is as shown in Fig. 7 .
  • the operation of the audio code buffer 121 is the same as in example 1.
  • the operation of the audio parameter decoding unit 122 is the same as in the example 1.
  • the side information decoding unit 125 decodes the side information code, calculates pitch lags $\hat{T}_p^{(j)}\ (0 \le j < M_{la})$ and stores them into the side information accumulation unit 126.
  • the side information decoding unit 125 decodes the side information code by using the decoding method corresponding to the encoding method used at the encoding end.
  • the audio synthesis unit 124 is the same as that of example 1.
  • the ISF prediction unit 191 of the audio parameter missing processing unit 123 calculates an ISF parameter the same way as in the example 1.
  • the pitch lag prediction unit 192 reads the side information code from the side information accumulation unit 126 and obtains pitch lags $\hat{T}_p^{(i)}\ (0 \le i < M_{la})$ in the same manner as in example 1 (Step 4051 in Fig. 35 ). Further, the pitch lag prediction unit 192 outputs the pitch lags $\hat{T}_p^{(i)}\ (M_{la} \le i < M)$ by using the pitch lags $\hat{T}_p^{(-j)}\ (0 < j \le J)$ used in the past decoding (Step 4052 in Fig. 35 ).
  • the number of sub-frames contained in one frame is M, and the number of pitch lags contained in the side information is M la .
  • the procedure as described in Non Patent Literature 4 can be used (Step 1102 in Fig. 13 ).
  • the procedure of the pitch lag prediction unit in this case is shown in Fig. 37 .
  • Instruction information as to whether the predicted value is used, or the pitch lag $\hat{T}_p^{(M_{la})}$ obtained from the side information is used, may be input to the adaptive codebook calculation unit 154.
  • the adaptive codebook gain prediction unit 193 and the fixed codebook gain prediction unit 194 are the same as those of the example 1.
  • the noise signal generation unit 195 is the same as that of the example 1.
  • the audio synthesis unit 124 synthesizes, from the parameters output from the audio parameter missing processing unit 123, an audio signal corresponding to the frame to be encoded.
  • the LP coefficient calculation unit 1121 of the audio synthesis unit 124 obtains an LP coefficient in the same manner as in example 1 (Step S11301 in Fig. 16 ).
  • the adaptive codebook calculation unit 1123 calculates an adaptive codebook vector in the same manner as in example 1.
  • the adaptive codebook calculation unit 1123 may perform filtering on the adaptive codebook vector or may not perform filtering.
  • the adaptive codebook vector is calculated using the following equation.
  • the adaptive codebook calculation unit 1123 may calculate an adaptive codebook vector in the following procedure (adaptive codebook calculation step B).
  • glottal pulse synchronization is applied to the initial adaptive codebook vector.
  • the same procedure as in the case where a pulse position is not available in the section 7.11.2.5 in Non Patent Literature 4 is used. Note that, however, u(n) in Non Patent Literature 4 corresponds to v(n) in the embodiment of the present invention, the extrapolated pitch corresponds to $\hat{T}_p^{(M-1)}$ in the embodiment of the present invention, and the last reliable pitch ($T_c$) corresponds to $\hat{T}_p^{(M_{la}-1)}$ in the embodiment of the present invention.
  • when it is indicated that the predicted value should be used, the adaptive codebook calculation unit 1123 may use the above-described adaptive codebook calculation step A, and if it is indicated that the pitch lag obtained from the side information should be used (YES in Step 4082 in Fig. 38 ), the adaptive codebook calculation unit 1123 may use the above-described adaptive codebook calculation step B.
  • the procedure of the adaptive codebook calculation unit 1123 in this case is shown in the example of Fig. 38 .
  • the excitation vector synthesis unit 1124 outputs an excitation vector in the same manner as in example 1 (Step 11306 in Fig. 16 ).
  • the post filter 1125 performs post processing on the synthesis signal in the same manner as in the example 1.
  • the adaptive codebook 1122 updates the state by using the excitation signal vector in the same manner as in the example 1 (Step 11308 in Fig. 16 ).
  • the synthesis filter 1126 synthesizes a decoded signal in the same manner as in the example 1 (Step 11309 in Fig. 16 ).
  • the perceptual weighting inverse filter 1127 applies a perceptual weighting inverse filter in the same manner as in example 1.
  • the audio parameter missing processing unit 123 stores the audio parameters (ISF parameter, pitch lag, adaptive codebook gain, fixed codebook gain) used in the audio synthesis unit 124 into the buffer in the same manner as in the example 1 (Step 145 in Fig. 7 ).
  • a configuration is described in which a pitch lag is transmitted as side information only in a specific frame class, and otherwise a pitch lag is not transmitted.
  • an input audio signal is sent to the audio encoding unit 111.
  • the audio encoding unit 111 in this example calculates an index representing the characteristics of a frame to be encoded and transmits the index to the side information encoding unit 112.
  • the other operations are the same as in example 1.
  • in the side information encoding unit 112, the only difference from examples 1 to 4 is the pitch lag encoding unit 158, and therefore the operation of the pitch lag encoding unit 158 is described hereinbelow.
  • the configuration of the side information encoding unit 112 in the example 5 is shown in Fig. 39 .
  • the procedure of the pitch lag encoding unit 158 is shown in the example of Fig. 40 .
  • the pitch lag encoding unit 158 reads the index representing the characteristics of the frame to be encoded (Step 5021 in Fig. 40 ) and, when the index representing the characteristics of the frame to be encoded is equal to a predetermined value, the pitch lag encoding unit 158 determines the number of bits to be assigned to the side information as B bits (B>1). On the other hand, when the index representing the characteristics of the frame to be encoded is different from the predetermined value, the pitch lag encoding unit 158 determines the number of bits to be assigned to the side information as 1 bit (Step 5022 in Fig. 40 ).
  • a value indicating non-transmission of the side information is used as the side information code, and is set to the side information index (Step 5023 in Fig. 40 ).
  • when the number of bits to be assigned to the side information is B bits (Yes in Step 5022 in Fig. 40 ), a value indicating transmission of the side information is set to the side information index (Step 5024 in Fig. 40 ), and further, a code of B-1 bits obtained by encoding the pitch lag by the method described in example 1 is added, for use as the side information code (Step 5025 in Fig. 40 ).
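The class-dependent bit allocation just described can be sketched as a 1-bit flag plus an optional (B-1)-bit payload; the '0'/'1' bit-string representation and the function names are illustrative assumptions, not the patent's actual bitstream format.

```python
# Sketch: when the frame class matches the target class, emit a 1-bit
# "transmitted" flag followed by a (B-1)-bit pitch-lag code; otherwise emit
# only the 1-bit "not transmitted" flag.
def encode_side_info(frame_class, target_class, pitch_lag_code, b_bits):
    if frame_class != target_class:
        return "0"                       # flag only: side info not sent
    payload = format(pitch_lag_code, "0{}b".format(b_bits - 1))
    return "1" + payload                 # flag + (B-1)-bit pitch-lag code
```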
  • the audio signal receiving device includes the audio code buffer 121, the audio parameter decoding unit 122, the audio parameter missing processing unit 123, the audio synthesis unit 124, the side information decoding unit 125, and the side information accumulation unit 126, just like in example 1.
  • the procedure of the audio signal receiving device is as shown in Fig. 7 .
  • the operation of the audio code buffer 121 is the same as in example 1.
  • the operation of the audio parameter decoding unit 122 is the same as in example 1.
  • the procedure of the side information decoding unit 125 is shown in the example of Fig. 41 .
  • the side information decoding unit 125 decodes the side information index contained in the side information code first (Step 5031 in Fig. 41 ).
  • the side information decoding unit 125 does not perform any further decoding operations.
  • the side information decoding unit 125 stores the value of the side information index in the side information accumulation unit 126 (Step 5032 in Fig. 41 ).
  • when the side information index indicates transmission of the side information, the side information decoding unit 125 further performs decoding of B-1 bits, calculates pitch lags $\hat{T}_p^{(j)}\ (0 \le j < M_{la})$ and stores the calculated pitch lags in the side information accumulation unit 126 (Step 5033 in Fig. 41 ). Further, the side information decoding unit 125 stores the value of the side information index into the side information accumulation unit 126. Note that the decoding of the side information of B-1 bits is the same operation as the side information decoding unit 125 in example 1.
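A decoder-side sketch of this flag-plus-payload layout (the '0'/'1' bit-string form is an illustrative assumption): the 1-bit side information index is read first, and only when it indicates transmission are the remaining B-1 bits decoded into a pitch-lag code.

```python
# Sketch: read the 1-bit flag; when it is '0' no further decoding is done,
# otherwise the remaining B-1 bits are decoded as the pitch-lag code.
def decode_side_info(bits):
    flag = bits[0] == "1"
    if not flag:
        return False, None               # side information not transmitted
    return True, int(bits[1:], 2)        # (B-1)-bit pitch-lag code
```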
  • the audio synthesis unit 124 is the same as that of example 1.
  • the ISF prediction unit 191 of the audio parameter missing processing unit 123 calculates an ISF parameter the same way as in example 1.
  • the procedure of the pitch lag prediction unit 192 is shown in the example of Fig. 42 .
  • the pitch lag prediction unit 192 reads the side information index from the side information accumulation unit 126 (Step 5041 in Fig. 42 ) and checks whether it is the value indicating transmission of the side information (Step 5042 in Fig. 42 ).
  • the side information code is read from the side information accumulation unit 126 to obtain a pitch lag T ⁇ p i 0 ⁇ i ⁇ M la (Step 5043 in Fig. 42 ). Further, the pitch lag T ⁇ p i M la ⁇ i ⁇ M is output by using the pitch lag T ⁇ p - j 0 ⁇ j ⁇ J used in the past decoding and T ⁇ p i 0 ⁇ i ⁇ M la obtained as the side information (Step 5044 in Fig. 42 ). The number of sub-frames contained in one frame is M, and the number of pitch lags contained in the side information is M la .
  • the pitch lag prediction unit 192 predicts the pitch lags $\hat{T}_p^{(i)}\ (0 \le i < M)$ by using the pitch lags $\hat{T}_p^{(-j)}\ (1 \le j \le J)$ used in the past decoding (Step 5048 in Fig. 42 ).
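The two prediction paths of Steps 5043-5044 and 5048 can be sketched as follows; holding the last transmitted lag and flat extrapolation from the last decoded lag are simplifying assumptions standing in for the patent's actual prediction rules.

```python
# Sketch of the two paths: with side information, the first M_la sub-frames
# take the transmitted lags and the remainder hold the last transmitted
# value; without it, all M lags are extrapolated from the last decoded lag.
def predict_pitch_lags(m, side_lags, last_decoded_lag):
    if side_lags:                        # side information was transmitted
        lags = list(side_lags)
        lags += [side_lags[-1]] * (m - len(side_lags))
    else:                                # concealment from past decoding only
        lags = [last_decoded_lag] * m
    return lags
```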
  • the adaptive codebook gain prediction unit 193 and the fixed codebook gain prediction unit 194 are the same as those of example 1.
  • the noise signal generation unit 195 is the same as that of the example 1.
  • the audio synthesis unit 124 synthesizes, from the parameters output from the audio parameter missing processing unit 123, an audio signal which corresponds to the frame to be encoded.
  • the LP coefficient calculation unit 1121 of the audio synthesis unit 124 obtains an LP coefficient in the same manner as in example 1 (Step S11301 in Fig. 16 ).
  • the procedure of the adaptive codebook calculation unit 1123 is shown in the example of Fig. 43 .
  • the adaptive codebook calculation unit 1123 calculates an adaptive codebook vector in the same manner as in example 1.
  • the adaptive codebook vector is calculated using the following equation (Step 5055 in Fig. 43 ).
  • the filtering coefficient is f i .
  • the adaptive codebook calculation unit 1123 calculates the adaptive codebook vector by the following procedure.
  • the initial adaptive codebook vector is calculated using the pitch lag and the adaptive codebook 1122 (Step 5053 in Fig. 43 ).
  • glottal pulse synchronization is applied to the initial adaptive codebook vector.
  • the same procedure can be used as in the case where a pulse position is not available in section 7.11.2.5 in Non Patent Literature 4 (Step 5054 in Fig. 43 ).
  • u(n) in Non Patent Literature 4 corresponds to v(n) in the embodiment of the present invention
  • the extrapolated pitch corresponds to $\hat{T}_p^{(M-1)}$ in the embodiment of the present invention
  • the last reliable pitch ($T_c$) corresponds to $\hat{T}_p^{(-1)}$ in the embodiment of the present invention.
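A minimal sketch of the idea behind glottal pulse synchronization follows. This is a strong simplification of the procedure in section 7.11.2.5 of Non Patent Literature 4: the single-pulse assumption and the plain circular rotation are illustrative, whereas the actual procedure adds or removes samples to realign pulse positions.

```python
# Sketch: locate the strongest pulse in the initial adaptive codebook vector
# and circularly rotate the vector so that pulse lands at the position
# implied by the target pitch. Energy is preserved; timing is realigned.
def pulse_synchronize(v, target_pos):
    peak = max(range(len(v)), key=lambda n: abs(v[n]))
    shift = (target_pos - peak) % len(v)
    return v[-shift:] + v[:-shift] if shift else list(v)
```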
  • the excitation vector synthesis unit 1124 outputs an excitation signal vector in the same manner as in the example 1 (Step 11306 in Fig. 16 ).
  • the post filter 1125 performs post processing on the synthesis signal in the same manner as in example 1.
  • the adaptive codebook 1122 updates the state using the excitation signal vector in the same manner as in the example 1 (Step 11308 in Fig. 16 ).
  • the synthesis filter 1126 synthesizes a decoded signal in the same manner as in example 1 (Step 11309 in Fig. 16 ).
  • the perceptual weighting inverse filter 1127 applies a perceptual weighting inverse filter in the same manner as in example 1.
  • the audio parameter missing processing unit 123 stores the audio parameters (ISF parameter, pitch lag, adaptive codebook gain, fixed codebook gain) used in the audio synthesis unit 124 into the buffer in the same manner as in example 1 (Step 145 in Fig. 7 ).

EP13854879.7A 2012-11-15 2013-11-12 Dispositif de codage audio, procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio Active EP2922053B1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19185490.0A EP3579228A1 (fr) 2012-11-15 2013-11-12 Dispositif de codage audio et procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio
PL13854879T PL2922053T3 (pl) 2012-11-15 2013-11-12 Urządzenie do kodowania audio, sposób kodowania audio, program do kodowania audio, urządzenie do dekodowania audio, sposób dekodowania audio, i program do dekodowania audio

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012251646 2012-11-15
PCT/JP2013/080589 WO2014077254A1 (fr) 2012-11-15 2013-11-12 Dispositif de codage audio, procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio

Related Child Applications (2)

Application Number Title Priority Date Filing Date
EP19185490.0A Division-Into EP3579228A1 (fr) 2012-11-15 2013-11-12 Dispositif de codage audio et procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio
EP19185490.0A Division EP3579228A1 (fr) 2012-11-15 2013-11-12 Dispositif de codage audio et procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio

Publications (3)

Publication Number Publication Date
EP2922053A1 true EP2922053A1 (fr) 2015-09-23
EP2922053A4 EP2922053A4 (fr) 2016-07-06
EP2922053B1 EP2922053B1 (fr) 2019-08-28

Family

ID=50731166

Family Applications (2)

Application Number Title Priority Date Filing Date
EP13854879.7A Active EP2922053B1 (fr) 2012-11-15 2013-11-12 Dispositif de codage audio, procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio
EP19185490.0A Pending EP3579228A1 (fr) 2012-11-15 2013-11-12 Dispositif de codage audio et procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP19185490.0A Pending EP3579228A1 (fr) 2012-11-15 2013-11-12 Dispositif de codage audio et procédé de codage audio, programme de codage audio, dispositif de décodage audio, procédé de décodage audio et programme de décodage audio

Country Status (18)

Country Link
US (7) US9564143B2 (fr)
EP (2) EP2922053B1 (fr)
JP (8) JP6158214B2 (fr)
KR (10) KR102171293B1 (fr)
CN (2) CN107256709B (fr)
AU (6) AU2013345949B2 (fr)
BR (1) BR112015008505B1 (fr)
CA (4) CA3127953C (fr)
DK (1) DK2922053T3 (fr)
ES (1) ES2747353T3 (fr)
HK (1) HK1209229A1 (fr)
IN (1) IN2015DN02595A (fr)
MX (3) MX2018016263A (fr)
PL (1) PL2922053T3 (fr)
PT (1) PT2922053T (fr)
RU (8) RU2640743C1 (fr)
TW (2) TWI587284B (fr)
WO (1) WO2014077254A1 (fr)


WO2014077254A1 (fr) 2014-05-22
TW201432670A (zh) 2014-08-16
MX2015005885A (es) 2015-09-23
AU2022202856B2 (en) 2023-06-08
KR20170107590A (ko) 2017-09-25
JP6872597B2 (ja) 2021-05-19
RU2690775C1 (ru) 2019-06-05
US11749292B2 (en) 2023-09-05
AU2017208369B2 (en) 2019-01-03
JP2018112749A (ja) 2018-07-19
JP7209032B2 (ja) 2023-01-19
KR102307492B1 (ko) 2021-09-29
MX362139B (es) 2019-01-07
US20220059108A1 (en) 2022-02-24

Similar Documents

Publication Publication Date Title
US11211077B2 (en) Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150420

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1209229

Country of ref document: HK

RA4 Supplementary search report drawn up and despatched (corrected)

Effective date: 20160608

RIC1 Information provided on ipc code assigned before grant

Ipc: G10L 19/09 20130101ALI20160601BHEP

Ipc: H03M 7/30 20060101ALI20160601BHEP

Ipc: G10L 19/005 20130101AFI20160601BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180528

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20190320

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013059856

Country of ref document: DE

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative=s name: VALIPAT S.A. C/O BOVARD SA NEUCHATEL, CH

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1173369

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190915

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DK

Ref legal event code: T3

Effective date: 20190919

REG Reference to a national code

Ref country code: PT

Ref legal event code: SC4A

Ref document number: 2922053

Country of ref document: PT

Date of ref document: 20191015

Kind code of ref document: T

Free format text: AVAILABILITY OF NATIONAL TRANSLATION

Effective date: 20190926

REG Reference to a national code

Ref country code: SE

Ref legal event code: TRGR

REG Reference to a national code

Ref country code: NO

Ref legal event code: T2

Effective date: 20190828

REG Reference to a national code

Ref country code: EE

Ref legal event code: FG4A

Ref document number: E018195

Country of ref document: EE

Effective date: 20190927

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20190828

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: GR

Ref legal event code: EP

Ref document number: 20190402923

Country of ref document: GR

Effective date: 20191128

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191128

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191228

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 2747353

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20200310

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1173369

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190828

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200224

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013059856

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG2D Information on lapse in contracting state deleted

Ref country code: IS

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20191112

26N No opposition filed

Effective date: 20200603

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20131112

REG Reference to a national code

Ref country code: HK

Ref legal event code: WD

Ref document number: 1209229

Country of ref document: HK

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190828

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 10

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230509

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GR

Payment date: 20231121

Year of fee payment: 11

Ref country code: GB

Payment date: 20231123

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: TR

Payment date: 20231110

Year of fee payment: 11

Ref country code: SE

Payment date: 20231120

Year of fee payment: 11

Ref country code: PT

Payment date: 20231102

Year of fee payment: 11

Ref country code: NO

Payment date: 20231124

Year of fee payment: 11

Ref country code: IT

Payment date: 20231124

Year of fee payment: 11

Ref country code: IE

Payment date: 20231121

Year of fee payment: 11

Ref country code: FR

Payment date: 20231120

Year of fee payment: 11

Ref country code: FI

Payment date: 20231121

Year of fee payment: 11

Ref country code: EE

Payment date: 20231117

Year of fee payment: 11

Ref country code: DK

Payment date: 20231124

Year of fee payment: 11

Ref country code: DE

Payment date: 20231121

Year of fee payment: 11

Ref country code: CH

Payment date: 20231201

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: PL

Payment date: 20231103

Year of fee payment: 11

Ref country code: BE

Payment date: 20231120

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: ES

Payment date: 20240129

Year of fee payment: 11