EP2696343A1 - Encoding method, decoding method, encoding device, decoding device, program, and recording medium
- Publication number
- EP2696343A1 (application EP12767213.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- error
- values
- sequence
- sample
- encoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/005—Correction of errors induced by the transmission channel, if related to the coding algorithm
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/167—Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/24—Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/038—Speech enhancement, e.g. noise reduction or echo cancellation using band spreading techniques
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/0017—Lossless audio signal coding; Perfect reconstruction of coded audio signal by transmission of coding error
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0212—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using orthogonal transformation
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
- G10L19/035—Scalar quantisation
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
- G10L19/038—Vector quantisation, e.g. TwinVQ audio
Definitions
- the present invention relates to a technique for encoding acoustic signals and a technique for decoding code sequences obtained by the encoding technique, and more specifically, to encoding of a frequency-domain sample sequence obtained by converting an acoustic signal into the frequency domain and decoding of the encoded sample sequence.
- Adaptive encoding of orthogonal transform coefficients of the discrete Fourier transform (DFT), the modified discrete cosine transform (MDCT), and the like is a known method of encoding speech and acoustic signals at a low bit rate (about 10 to 20 kbit/s, for example).
- A standard technique is AMR-WB+ (extended adaptive multi-rate wideband), which uses TCX (transform coded excitation) encoding.
- Non-patent literature 1: ETSI TS 126 290 V6.3.0 (2005-06)
- AMR-WB+ and other TCX-based encoding methods do not consider variations in the amplitudes of frequency-domain coefficients caused by periodicity; if amplitudes that vary greatly are encoded together, the encoding efficiency decreases.
- In modified TCX-based quantization or encoding techniques, a case will now be considered, for example, in which a sequence of MDCT coefficients arranged in ascending order of frequency, each coefficient being a discrete value obtained by dividing the original coefficient by a gain and quantizing the result, is compressed by entropy encoding such as arithmetic coding.
- A plurality of samples form a single symbol (encoding unit), and the code to be assigned is adaptively controlled depending on the symbol immediately preceding the symbol of interest. Generally, a short code is assigned if the amplitude is small, and a long code is assigned if the amplitude is large. This generally reduces the number of bits per frame. If the number of bits to be assigned per frame is fixed, however, the bits saved in this way may not be usable efficiently.
- an object of the present invention is to provide encoding and decoding techniques that can improve the quality of discrete signals, especially the quality of digital speech or acoustic signals after they have been encoded at a low bit rate, with a small amount of calculation.
- An encoding method is a method for encoding, with a predetermined number of bits, a frequency-domain sample sequence derived from an acoustic signal in a predetermined time interval.
- the encoding method includes an encoding step of encoding, by variable-length encoding, an integer corresponding to the value of each sample in the frequency-domain sample sequence to generate a variable-length code; an error calculation step of calculating a sequence of error values each obtained by subtracting the integer corresponding to the value of each sample in the frequency-domain sample sequence from the value of the sample; and an error encoding step of encoding the sequence of error values with the number of surplus bits obtained by subtracting the number of bits of the variable-length code from the predetermined number of bits to generate error codes.
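- As a rough illustration of this split of a fixed frame budget into a variable-length code and an error code, here is a minimal Python sketch; the helper names variable_length_encode and encode_errors are placeholders, not functions defined in the patent, and rounding to the nearest integer is just one of the integer mappings the text allows.

```python
import numpy as np

def encode_frame(samples, bit_budget, variable_length_encode, encode_errors):
    """Sketch of the encoding method: samples is the frequency-domain sample
    sequence for one frame; bit_budget is the predetermined number of bits."""
    integers = np.rint(samples).astype(int)       # integer corresponding to each sample value
    vlc = variable_length_encode(integers)        # variable-length code (a bit string)
    errors = np.asarray(samples) - integers       # error value for each sample
    surplus = bit_budget - len(vlc)               # surplus bits left in the fixed budget
    err_code = encode_errors(errors, surplus)     # error code using only the surplus bits
    return vlc + err_code                         # total length stays within bit_budget
```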
- a decoding method is a method for decoding an input code formed of a predetermined number of bits.
- the decoding method includes a decoding step of decoding a variable-length code included in the input code to generate a sequence of integers; an error decoding step of decoding an error code included in the input code, the error code being formed of the number of surplus bits obtained by subtracting the number of bits of the variable-length code from the predetermined number of bits, to generate a sequence of error values; and an adding step of adding each sample in the sequence of integers to a corresponding error sample in the sequence of error values.
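- The decoding side mirrors this split. A minimal sketch under the same assumptions (variable_length_decode is a hypothetical helper that returns the integer sequence and the number of bits it consumed; decode_errors mirrors encode_errors above):

```python
def decode_frame(code, bit_budget, variable_length_decode, decode_errors):
    integers, used = variable_length_decode(code)            # sequence of integers
    surplus = bit_budget - used                              # same surplus as on the encoder side
    errors = decode_errors(code[used:], surplus, integers)   # sequence of error values
    return [u + e for u, e in zip(integers, errors)]         # integer + corresponding error
```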
- One characteristic feature of this embodiment is an improvement in encoding, that is, a reduction in encoding distortion, within a framework that quantizes a frequency-domain sample sequence derived from an acoustic signal in a frame, which is a predetermined time interval: the frequency-domain sample sequence is variable-length encoded after weighted smoothing, and an error signal is quantized, in a determined order of priority, by using the surplus bits saved by the variable-length encoding. Even if a fixed number of bits is assigned per frame, the advantage of variable-length encoding can thus be obtained.
- Examples of frequency-domain sample sequences derived from acoustic signals include a DFT coefficient sequence and an MDCT coefficient sequence that can be obtained by converting a digital speech or acoustic signal in units of frames from the time domain to the frequency domain, and a coefficient sequence obtained by applying a process such as normalization, weighting, or quantization to the DFT or MDCT coefficient sequence.
- an encoder 1 includes a frequency-domain converter 11, a linear prediction analysis unit 12, a linear-prediction-coefficient quantization and encoding unit 13, a power-spectrum-envelope calculation unit 14, a weighted-envelope normalization unit 15, a normalization-gain calculation unit 16, a quantizer 17, an error calculation unit 18, an encoding unit 19, and an error encoding unit 110, for example.
- the encoder 1 performs individual steps of an encoding method illustrated in Fig. 2 . The steps of the encoder 1 will be described next.
- the frequency-domain converter 11 converts a digital speech or acoustic signal in units of frames into an N-point MDCT coefficient sequence in the frequency domain (step S11).
- an encoding part quantizes an MDCT coefficient sequence, encodes the quantized MDCT coefficient sequence, and sends the obtained code sequence to a decoding part, and the decoding part can reconstruct a quantized MDCT coefficient sequence from the code sequence and can also reconstruct a digital speech or acoustic signal in the time domain by performing an inverse MDCT transform.
- the amplitude envelope of the MDCT coefficients is approximately the same as the amplitude envelope (power-spectrum envelope) of a usual DFT power spectrum. Therefore, by assigning information proportional to the logarithmic value of the amplitude envelope, the quantization distortion (quantization error) of the MDCT coefficients can be distributed evenly in the entire band, the overall quantization distortion can be reduced, and information can be compressed.
- the power-spectrum envelope can be efficiently estimated by using linear prediction coefficients obtained by linear prediction analysis.
- the quantization error can be controlled by adaptively assigning a quantization bit(s) for each MDCT coefficient (adjusting the quantization step width after smoothing the amplitude) or by determining a code by performing adaptive weighting through weighted vector quantization.
- An example of the quantization method executed in the embodiment of the present invention is described here, but the present invention is not confined to the described quantization method.
- the linear prediction analysis unit 12 performs linear prediction analysis of the digital speech or acoustic signal in units of frames and obtains and outputs linear prediction coefficients up to a preset order (step S12).
- The linear-prediction-coefficient quantization and encoding unit 13 obtains and outputs codes corresponding to the linear prediction coefficients obtained by the linear prediction analysis unit 12 and quantized linear prediction coefficients (step S13).
- the linear prediction coefficients may be converted to line spectral pairs (LSPs); codes corresponding to the LSPs and quantized LSPs may be obtained; and the quantized LSPs may be converted to quantized linear prediction coefficients.
- linear prediction coefficient codes are part of the codes sent to the decoder 2.
- the power-spectrum-envelope calculation unit 14 obtains a power-spectrum envelope by converting the quantized linear prediction coefficients output by the linear-prediction-coefficient quantization and encoding unit 13 into the frequency domain (step S14).
- the obtained power-spectrum envelope is sent to the weighted-envelope normalization unit 15.
- the power-spectrum envelope is sent to the error encoding unit 110, as indicated by a broken line in Fig. 1 .
- Individual coefficients W(1) to W(N) in a power-spectrum envelope coefficient sequence corresponding to the individual coefficients X(1) to X(N) in the N-point MDCT coefficient sequence can be obtained by converting the quantized linear prediction coefficients into the frequency domain.
- A temporal signal y(t) at time t is expressed by Formula (1) with its own past values y(t - 1) to y(t - p) back to point p, a prediction residual e(t), and quantized linear prediction coefficients $\alpha_1$ to $\alpha_p$:
$$y(t) + \alpha_1 y(t-1) + \cdots + \alpha_p y(t-p) = e(t) \qquad (1)$$
- The coefficients W(n) of the power-spectrum envelope coefficient sequence are then given by
$$W(n) = \frac{\sigma^2}{2\pi}\,\frac{1}{\bigl|1 + \alpha_1 \exp(-jn) + \alpha_2 \exp(-2jn) + \cdots + \alpha_p \exp(-pjn)\bigr|^{2}}$$
where $\sigma^2$ is the prediction-residual energy and j is the imaginary unit.
- the order p may be identical to the order of the quantized linear prediction coefficients output by the linear-prediction-coefficient quantization and encoding unit 13 or may be smaller than the order of the quantized linear prediction coefficients output by the linear-prediction-coefficient quantization and encoding unit 13.
- the power-spectrum-envelope calculation unit 14 may calculate approximate values of the power-spectrum envelope or estimates of the power-spectrum envelope instead of values of the power-spectrum envelope.
- the values of the power-spectrum envelope are the coefficients W(1) to W(N) of the power-spectrum envelope coefficient sequence.
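- A minimal numerical sketch of this conversion, assuming a uniform frequency grid matched to the N-point MDCT and using the LPC formula above; the exact grid spacing and the residual energy sigma2 are assumptions, not values fixed by the patent.

```python
import numpy as np

def power_spectrum_envelope(alpha, N, sigma2=1.0):
    """alpha: quantized linear prediction coefficients [a1, ..., ap];
    returns approximate envelope values W(1)..W(N)."""
    a = np.concatenate(([1.0], alpha))                 # A(z) = 1 + a1 z^-1 + ... + ap z^-p
    omega = (np.arange(N) + 0.5) * np.pi / N           # assumed MDCT-like frequency grid
    A = np.exp(-1j * np.outer(omega, np.arange(len(a)))) @ a
    return sigma2 / (2.0 * np.pi) / (np.abs(A) ** 2)   # per the LPC formula above
```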
- the weighted-envelope normalization unit 15 normalizes the coefficients of the MDCT coefficient sequence with the power-spectrum envelope output by the power-spectrum-envelope calculation unit 14 (step S15).
- the weighted-envelope normalization unit 15 normalizes the coefficients of the MDCT coefficient sequence in units of frames by using the weighted spectrum envelope coefficients obtained by smoothing the power-spectrum envelope value sequence or its square root sequence along the frequency axis.
- coefficients x(1) to x(N) of a frame-based weighted normalization MDCT coefficient sequence are obtained.
- the weighted normalization MDCT coefficient sequence is sent to the normalization-gain calculation unit 16, the quantizer 17, and the error calculation unit 18.
- the weighted normalization MDCT coefficient sequence generally has a rather large amplitude in the low-frequency region and has a fine structure resulting from the pitch period, but the gradient and unevenness of the amplitude are not large in comparison with the original MDCT coefficient sequence.
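- A sketch of such weighted-envelope normalization, assuming a simple moving-average smoother; the window length and the square-root option are illustrative choices, since the text only states that the power-spectrum envelope value sequence or its square root sequence is smoothed along the frequency axis.

```python
import numpy as np

def weighted_envelope_normalize(X, W, smooth_len=3, use_sqrt=True):
    """X: MDCT coefficients X(1)..X(N); W: power-spectrum envelope W(1)..W(N)."""
    env = np.sqrt(W) if use_sqrt else np.asarray(W, dtype=float)
    kernel = np.ones(smooth_len) / smooth_len
    smoothed = np.convolve(env, kernel, mode="same")   # weighted spectrum envelope coefficients
    return np.asarray(X) / smoothed                    # weighted normalization MDCT coefficients x(n)
```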
- The normalization-gain calculation unit 16 determines the quantization step width by using the sum of amplitude values or energy values over the entire frequency band, so that the coefficients x(1) to x(N) of the weighted normalization MDCT coefficient sequence can be quantized in each frame with a given total number of bits. It then obtains the coefficient g (hereafter, gain) by which each coefficient of the weighted normalization MDCT coefficient sequence is to be divided to realize that quantization step width (step S16).
- Gain information that indicates this gain is part of the codes sent to the decoder 2.
- The quantizer 17 quantizes the coefficients x(1) to x(N) of the weighted normalization MDCT coefficient sequence in frames with the quantization step width determined in step S16 (step S17).
- An integer u(n) obtained by rounding off x(n)/g to the closest whole number, x(n)/g being obtained by dividing the coefficient x(n) [1 ≤ n ≤ N] of the weighted normalization MDCT coefficient sequence by the gain g, serves as a quantized MDCT coefficient.
- the quantized MDCT coefficient sequence in frames is sent to the error calculation unit 18 and the encoding unit 19.
- a value obtained by rounding up or down the fractional x(n)/g may be used as the integer u(n).
- the integer u(n) may be a value corresponding to x(n)/g.
- A sequence of x(n)/g is an example of the sequence of samples in the frequency domain recited in the claims, and the quantized MDCT coefficient, which is the integer u(n), corresponds to the integer corresponding to the value of each sample in that sequence.
- The weighted normalization MDCT coefficient sequence obtained in step S15, the gain g obtained in step S16, and the frame-based quantized MDCT coefficient sequence obtained in step S17 are input to the error calculation unit 18.
- a value obtained by subtracting the quantized MDCT coefficient u(n) corresponding to each coefficient x(n) of the weighted normalization MDCT coefficient sequence from a value obtained by dividing the coefficient x(n) by the gain g serves as a quantization error r(n) corresponding to the coefficient x(n).
- a sequence of quantization errors r(n) corresponds to the sequence of errors in the claims.
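- In code, the quantizer 17 and the error calculation unit 18 amount to the following minimal sketch (rounding to the nearest integer, as in the example above; the function name is hypothetical):

```python
import numpy as np

def quantize_and_errors(x, g):
    """x: weighted normalization MDCT coefficients x(1)..x(N); g: gain."""
    scaled = np.asarray(x) / g
    u = np.rint(scaled).astype(int)   # quantized MDCT coefficients u(n)
    r = scaled - u                    # quantization errors r(n), each in [-0.5, +0.5]
    return u, r
```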
- the encoding unit 19 encodes the quantized MDCT coefficient sequence (a sequence of the quantized MDCT coefficients u(n)) output by the quantizer 17 in frames and outputs obtained codes and the number of bits of the codes (step S19).
- the encoding unit 19 can reduce the average code amount by employing variable-length encoding, which, for example, assigns codes having lengths depending on the frequencies of the values of the quantized MDCT coefficient sequence.
- Variable-length codes include Rice codes, Huffman codes, arithmetic codes, and run-length codes.
- Reference literature 1 David Salomon, “Data Compression: The Complete Reference,” 3rd edition, Springer-Verlag, ISBN-10: 0-387-40697-2, 2004 .
- variable-length codes become part of the codes sent to the decoder 2.
- the variable-length encoding method which has been executed is indicated by selection information.
- the selection information may be sent to the decoder 2.
- By variable-length encoding of the coefficients u(1) to u(N) of the quantized MDCT coefficient sequence, which are integers, the number of bits needed to express the quantized MDCT coefficient sequence is obtained, and the number of surplus bits produced by the compression of variable-length encoding is obtained from the predetermined number of bits. If bits can be carried over among several frames, the surplus bits can be used effectively in subsequent frames. If a fixed number of bits is assigned to each frame, however, the surplus bits should be used effectively for encoding another item; otherwise, reducing the average number of bits by variable-length encoding would be meaningless.
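- As one concrete instance of the variable-length codes listed above, the following sketch uses Rice coding of the integers u(n); the zigzag sign mapping and the parameter k (assumed to be at least 1) are assumptions, and Huffman, arithmetic, or run-length codes could be used instead. The surplus bits are derived from a fixed per-frame budget.

```python
def zigzag(u):
    """Map a signed integer to a non-negative one for Rice coding."""
    return 2 * u if u >= 0 else -2 * u - 1

def rice_encode(values, k):
    """Rice (Golomb-Rice) coding with parameter k >= 1."""
    bits = []
    for u in values:
        v = zigzag(u)
        q, r = v >> k, v & ((1 << k) - 1)
        bits.append("1" * q + "0" + format(r, "0{}b".format(k)))
    return "".join(bits)

def surplus_bits(values, k, frame_budget):
    """Bits left over for the error code when the frame budget is fixed."""
    return frame_budget - len(rice_encode(values, k))
```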
- the surplus bits that have not been used in encoding of the quantization error r(n) are used for other purposes, such as correcting the gain g.
- the quantization error r(n) is generated by rounding off fractions made by quantization and is distributed almost evenly in the range of -0.5 to +0.5.
- the error encoding unit 110 calculates the number of surplus bits by subtracting the number of bits in variable-length codes output by the encoding unit 19 from the number of bits preset as the code amount of the weighted normalization MDCT coefficient sequence. Then, the quantization error sequence obtained by the error calculation unit 18 is encoded with the number of surplus bits, and the obtained error codes are output (step S110). The error codes are part of the codes sent to the decoder 2.
- When quantization errors are encoded, vector quantization may be applied to a plurality of samples collectively. Generally, however, this requires candidate code sequences to be accumulated in a table (codebook) and requires calculation of the distance between the input and each code sequence, increasing the size of the memory and the amount of calculation. Furthermore, separate codebooks would be needed to handle every possible number of bits, and the configuration would become complicated.
- One codebook for each possible number of surplus bits is stored beforehand in a codebook storage unit in the error encoding unit 110.
- Each codebook stores in advance as many vectors as the number of samples in the quantization error sequence that can be expressed with the number of surplus bits corresponding to the codebook, associated with codes corresponding to the vectors.
- the error encoding unit 110 calculates the number of surplus bits, selects a codebook corresponding to the calculated number of surplus bits from the codebooks stored in the codebook storage unit, and performs vector quantization by using the selected codebook.
- the encoding process after selecting the codebook is the same as that in general vector quantization.
- the error encoding unit 110 outputs codes corresponding to vectors that minimize the distances between the vectors of the selected codebook and the input quantization error sequence or that minimize the correlation between them.
- the number of vectors stored in the codebook is the same as the number of samples in the quantization error sequence.
- The number of sample vectors stored in the codebook may also be an integral submultiple of the number of samples in the quantization error sequence; the quantization error sequence may be vector-quantized for each group of a plurality of samples; and a plurality of obtained codes may be used as error codes.
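- A sketch of this codebook-per-surplus-bit-count vector quantization follows; the dict layout of codebooks (one array of candidate error vectors per possible number of surplus bits) is an assumed data structure, not one prescribed by the patent.

```python
import numpy as np

def vq_error_encode(r, codebooks, surplus):
    """r: quantization error sequence; codebooks[surplus] has shape
    (2**surplus, len(r)). Returns the index of the nearest vector,
    which can be written with exactly `surplus` bits."""
    book = codebooks[surplus]
    dists = np.sum((book - np.asarray(r)) ** 2, axis=1)   # distance to each stored vector
    return int(np.argmin(dists))
```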
- the order of priority of the quantization error samples included in the quantization error sequence is determined, and the quantization error samples that can be encoded with the surplus bits are encoded in descending order of priority.
- the quantization error samples are encoded in descending order of absolute value or energy.
- The order of priority can be determined with reference to the values of the power-spectrum envelope, for example. Instead of the values of the power-spectrum envelope, approximate values of the power-spectrum envelope, estimates of the power-spectrum envelope, values obtained by smoothing any of these values along the frequency axis, mean values of a plurality of samples of any of these values, or values having the same magnitude relationship as at least one of these values may of course be used, but the use of the values of the power-spectrum envelope will be described below. As the example in Fig. 3 illustrates, perceptual distortion in an acoustic signal such as speech or musical sound can be reduced by making the trend in the amplitudes of the sequence of samples to be quantized in the frequency domain (corresponding to the spectrum envelope after weighted smoothing in Fig. 3) closer to the power-spectrum envelope of the acoustic signal (corresponding to the spectrum envelope of the original sound in Fig. 3). If the values of the power-spectrum envelope turn out to be large, the corresponding weighted normalization MDCT coefficients x(n) will also tend to be large. Even if the weighted normalization MDCT coefficients x(n) are large, however, the quantization error r(n) still ranges from -0.5 to +0.5.
- If the weighted normalization MDCT coefficients x(n) are very small, in other words, if they are smaller than half of the quantization step width, the values obtained by dividing the weighted normalization MDCT coefficients x(n) by the gain g are quantized to 0, and the quantization errors r(n) are far smaller than 0.5. If the values of the power-spectrum envelope are rather small, encoding the quantization errors r(n), like encoding the weighted normalization MDCT coefficients x(n), would have only a small effect on the perceptual quality, and they may be excluded from the items to be encoded in the error encoding unit 110. If the power-spectrum envelope is rather large, it is impossible to distinguish a sample having a large quantization error from other samples.
- quantization error samples r(n) are encoded using one bit each, only for the number of quantization error samples corresponding to the number of surplus bits, in ascending order of the position of the sample on the frequency axis (ascending order of frequency) or in descending order of the value of the power-spectrum envelope. Just excluding values of the power-spectrum envelope up to a certain level would be enough.
- The distribution of the quantization errors r(n) tends to concentrate around '0', and the centroid of the distribution should be used as the absolute value of the reconstruction value.
- a quantization error sample to be encoded may be selected for each set of a plurality of quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0', and the position of the selected quantization error sample in the set of quantization error samples and the value of the selected quantization error sample may be encoded and sent as an error code to the decoder 2. For example, among four quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0', a quantization error sample having the largest absolute value is selected; the value of the selected quantization error sample is quantized (it is determined whether it is positive or negative, for example), and this information is sent as a single bit; and the position of the selected quantization error sample is sent as two bits.
- The codes of the quantization error samples that have not been selected are not sent to the decoder 2, and the corresponding decoded values in the decoder 2 are '0'. Generally, q bits are needed to report to the decoder the position of the sample that has been selected from among 2^q samples.
- The absolute value of the reconstruction value should be the value of the centroid of the distribution of the samples having the largest absolute values of quantization errors in the sets of the plurality of samples.
- scattered samples can be expressed by combining a plurality of sequences, as shown in Fig. 4 .
- A positive or negative pulse is set at just one of four positions, and the other positions are set to zero; the position requires two bits and the sign one bit, so three bits are needed to express the first sequence.
- the second to fifth sequences can be encoded in the same way, with a total of 15 bits.
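- A sketch of this position-plus-sign coding for groups of four samples, matching the Fig. 4 example (two bits for the position, one bit for the sign, so five groups cost 15 bits); the group size 4 is just the illustrated case, and the function name is hypothetical.

```python
def encode_pulse_groups(r, group=4):
    """r: error samples whose quantized MDCT coefficients u(n) are all zero."""
    bits = []
    for i in range(0, len(r), group):
        block = r[i:i + group]
        pos = max(range(len(block)), key=lambda j: abs(block[j]))   # largest-magnitude sample
        bits.append(format(pos, "02b") + ("1" if block[pos] >= 0 else "0"))
    return "".join(bits)
```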
- Encoding can be performed as described below, where the number of surplus bits is U, the number of quantization error samples whose corresponding quantized MDCT coefficients u(n) are not '0' among the quantization error samples constituting the quantization error sequence is T, and the number of quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0' is S.
- (A) When the number of surplus bits U is not larger than T, the error encoding unit 110 selects U quantization error samples among the T quantization error samples whose corresponding quantized MDCT coefficients u(n) are not '0' in the quantization error sequence, in descending order of the corresponding value of the power-spectrum envelope; generates a one-bit code serving as information expressing whether the quantization error sample is positive or negative for each of the selected quantization error samples; and outputs the generated U bits of codes as error codes. If the corresponding values of the power-spectrum envelope are the same, the samples should be selected in accordance with another preset rule, such as selecting quantization error samples in ascending order of the position on the frequency axis (in ascending order of frequency).
- (B) When T ≤ U ≤ T + S, the following applies.
- the error encoding unit 110 generates a one-bit code serving as information expressing whether the quantization error sample is positive or negative, for each of the T quantization error samples whose corresponding quantized MDCT coefficients u(n) are not '0' in the quantization error sequence.
- the error encoding unit 110 also encodes quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0' in the quantization error sequence, with U - T bits. If there are a plurality of quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0', they are encoded in descending order of the corresponding value of the power-spectrum envelope. Specifically, a one-bit code expressing whether the quantization error sample is positive or negative is generated for each of U-T samples among the quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0', in descending order of the corresponding value of the power-spectrum envelope.
- Alternatively, a plurality of quantization error samples are taken out, in descending order of the corresponding value of the power-spectrum envelope, from the quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0' and are vector-quantized in each group of a plurality of quantization error samples to generate U - T bits of codes. If the corresponding values of the power-spectrum envelope are the same, the samples are selected, for example, in accordance with a preset rule, such as selecting quantization error samples in ascending order of the position on the frequency axis (in ascending order of frequency).
- The error encoding unit 110 then outputs a combination of the generated T bits of codes and the (U - T) bits of codes as error codes.
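- A sketch covering cases (A) and (B): the U surplus bits are spent on one sign bit per selected error sample, nonzero-u(n) samples first, each group ordered by descending power-spectrum envelope value with ties broken by ascending frequency (the tie-break is one of the preset rules the text allows).

```python
import numpy as np

def sign_bit_error_code(r, u, W, U):
    """r: quantization errors; u: quantized MDCT coefficients; W: envelope values."""
    r, u, W = map(np.asarray, (r, u, W))
    nonzero = sorted(np.flatnonzero(u != 0), key=lambda n: (-W[n], n))
    zero = sorted(np.flatnonzero(u == 0), key=lambda n: (-W[n], n))
    order = list(nonzero) + list(zero)                 # priority order of samples
    return "".join("1" if r[n] >= 0 else "0" for n in order[:U])
```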
- (C) When U is larger than T + S, the error encoding unit 110 generates a one-bit first-round code expressing whether the quantization error sample is positive or negative, for each of the quantization error samples included in the quantization error sequence.
- the error encoding unit 110 further encodes quantization error samples by using the remaining U - (T + S) bits, in a way described in (A) or (B) above.
- For example, a second round of (A) is executed on the encoding errors of the first round, with the remaining U - (T + S) bits treated anew as U bits.
- two-bit quantization per quantization error sample is performed on at least some of the quantization error samples.
- The values of quantization errors r(n) in the first-round encoding range evenly from -0.5 to +0.5, and the values of the errors in the first round to be encoded in the second round range from -0.25 to +0.25.
- the error encoding unit 110 generates a one-bit second-round code expressing whether the value obtained by subtracting a reconstructed value of 0.25 from the value of the quantization error sample is positive or negative, for quantization error samples whose corresponding quantized MDCT coefficients u(n) are not '0' and whose corresponding quantization errors r(n) are positive among the quantization error samples included in the quantization error sequence.
- The error encoding unit 110 also generates a one-bit second-round code expressing whether the value obtained by subtracting a reconstructed value of -0.25 from the value of the quantization error sample is positive or negative, for quantization error samples whose corresponding quantized MDCT coefficients u(n) are not '0' and whose corresponding quantization errors r(n) are negative among the quantization error samples included in the quantization error sequence.
- the error encoding unit 110 further generates a one-bit second-round code expressing whether the value obtained by subtracting a reconstructed value A (A is a preset positive value smaller than 0.25) from the value of the quantization error sample is positive or negative, for quantization error samples whose corresponding quantized MDCT coefficients u(n) are '0' and whose corresponding quantization errors r(n) are positive among the quantization error samples included in the quantization error sequence.
- the error encoding unit 110 further generates a one-bit second-round code expressing whether the value obtained by subtracting a reconstructed value -A (A is a preset positive value smaller than 0.25) from the value of the quantization error sample is positive or negative, for error samples whose corresponding quantized MDCT coefficients u(n) are '0' and whose corresponding quantization errors r(n) are negative among the quantization error samples included in the quantization error sequence.
- the error encoding unit 110 outputs a combination of the first-round code and the second-round code as an error code.
- The quantization error sequence may also be encoded by using UU bits, which are fewer than U bits; in that case, the condition of (C) can be expressed as T + S ≤ UU.
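- For case (C), the two rounds can be sketched as follows; the first-round reconstruction magnitude is 0.25 for nonzero-u(n) samples and A (< 0.25) for zero-u(n) samples, and the second-round bit is the sign of what remains after subtracting that reconstruction. The value A = 0.125 below is only an illustrative choice.

```python
def two_round_sign_code(r, u, A=0.125):
    """Returns the first-round and second-round bit strings for all samples."""
    first, second = [], []
    for rn, un in zip(r, u):
        mag = 0.25 if un != 0 else A        # first-round reconstruction magnitude
        s = 1 if rn >= 0 else -1
        first.append("1" if s > 0 else "0")
        resid = rn - s * mag                # error left after the first round
        second.append("1" if resid >= 0 else "0")
    return "".join(first), "".join(second)
```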
- Approximate values of the power-spectrum envelope or estimates of the power-spectrum envelope may be used instead of the values of power-spectrum envelope in (A) and (B) above.
- Values obtained by smoothing the values of power-spectrum envelope, by smoothing approximate values of the power-spectrum envelope, or by smoothing estimates of the power-spectrum envelope along the frequency axis may also be used instead of the values of the power-spectrum envelope in (A) and (B) above.
- the weighted spectrum envelope coefficients obtained by the weighted-envelope normalization unit 15 may be input to the error encoding unit 110, or the values may also be calculated by the error encoding unit 110.
- Mean values of a plurality of values of the power-spectrum envelope may also be used instead of the values of the power-spectrum envelope in (A) and (B) above.
- Mean values of approximate values of the power-spectrum envelope or mean values of estimates of the power-spectrum envelope may be used instead of the values of the power-spectrum envelope W(n) [1 ≤ n ≤ N].
- Mean values of values obtained by smoothing the values of the power-spectrum envelope, by smoothing approximate values of the power-spectrum envelope, or by smoothing estimates of the power-spectrum envelope along the frequency axis may also be used.
- Each mean value here is a value obtained by averaging the target values over a plurality of samples.
- Values having the same magnitude relationship as at least one type of the values of the power-spectrum envelope, approximate values of the power-spectrum envelope, estimates of the power-spectrum envelope, values obtained by smoothing any of the above-mentioned values, and values obtained by averaging any of the above-mentioned values over a plurality of samples may also be used instead of the values of the power-spectrum envelope in (A) and (B) above.
- the values having the same magnitude relationship are calculated by the error encoding unit 110 and used.
- the values having the same magnitude relationship include squares and square roots.
- Values having the same magnitude relationship as the values of the power-spectrum envelope W(n) [1 ≤ n ≤ N] are, for example, the squares (W(n))^2 [1 ≤ n ≤ N] and the square roots (W(n))^(1/2) [1 ≤ n ≤ N] of the values of the power-spectrum envelope.
- a rearrangement unit 111 may be provided to rearrange the quantized MDCT coefficient sequence.
- the encoding unit 19 variable-length-encodes the quantized MDCT coefficient sequence rearranged by the rearrangement unit 111. Since the rearrangement of the quantized MDCT coefficient sequence based on periodicity can sometimes reduce the number of bits greatly in variable-length encoding, an improvement in encoding efficiency can be expected by encoding errors.
- the rearrangement unit 111 outputs, in units of frames, a rearranged sample sequence which (1) includes all samples in the quantized MDCT coefficient sequence, and in which (2) some of those samples included in the quantized MDCT coefficient sequence have been rearranged to put together samples having an equal index or a nearly equal index reflecting the magnitude of the sample (step S111).
- the index reflecting the magnitude of the sample is the absolute value of the amplitude of the sample or the power (square) of the sample, for example, but is not confined to them.
- Japanese Patent Application No. 2010-225949 (PCT/JP2011/072752, corresponding to WO2012/046685).
- The decoder 2 reconstructs the MDCT coefficient sequence by performing processing corresponding to the encoding process performed in the encoder 1, in reverse order.
- codes input to the decoder 2 include variable-length codes, error codes, gain information, and linear-prediction-coefficient codes. If selection information is output from the encoder 1, the selection information is also input to the decoder 2.
- the decoder 2 includes a decoding unit 21, a power-spectrum-envelope calculation unit 22, an error decoding unit 23, a gain decoding unit 24, an adder 25, a weighted-envelope inverse normalization unit 26, and a time-domain converter 27, for example.
- the decoder 2 performs the steps of a decoding method shown in Fig. 6 as an example. The steps of the decoder 2 will be described next.
- the decoding unit 21 decodes variable-length codes included in the input codes in units of frames and outputs a sequence of decoded quantized MDCT coefficients u(n), that is, coefficients that are identical to the quantized MDCT coefficients u(n) in the encoder, and the number of bits of the variable-length codes (step S21).
- A variable-length decoding method corresponding to the variable-length encoding method executed to obtain the code sequence is executed, of course. The details of the decoding process performed by the decoding unit 21 correspond to the details of the encoding process performed by the encoding unit 19 of the encoder 1. Because the decoding to be performed in the decoding unit 21 is simply the decoding corresponding to the encoding that has been executed, the description of the encoding process is referred to here as a substitute for a detailed description of the decoding process.
- The sequence of decoded quantized MDCT coefficients u(n) corresponds to the sequence of integers in the claims.
- the variable-length encoding method that has been executed is indicated by the selection information. If the selection information includes, for example, information indicating the area in which Rice encoding has been applied and Rice parameters, information indicating the area in which run-length encoding has been applied, and information indicating the type of entropy encoding, decoding methods corresponding to the encoding methods are applied to the corresponding areas of the input code sequence.
- a decoding process corresponding to Rice encoding, a decoding process corresponding to entropy encoding, and a decoding process corresponding to run-length encoding are widely known, and a description of them will be omitted (for example, refer to Reference literature 1, described above).
- The power-spectrum-envelope calculation unit 22 decodes the linear-prediction-coefficient codes input from the encoder 1 to obtain quantized linear prediction coefficients and converts the obtained quantized linear prediction coefficients into the frequency domain to obtain a power-spectrum envelope (step S22).
- the process for obtaining the power-spectrum envelope from the quantized linear prediction coefficients is the same as that in the power-spectrum-envelope calculation unit 14 of the encoder 1.
- Approximate values of the power-spectrum envelope or estimates of the power-spectrum envelope may be calculated instead of the values of the power-spectrum envelope, as in the power-spectrum-envelope calculation unit 14 of the encoder 1.
- the type of the values must be the same as that in the power-spectrum-envelope calculation unit 14 of the encoder 1. For example, if the power-spectrum-envelope calculation unit 14 of the encoder 1 has obtained approximate values of the power-spectrum envelope, the power-spectrum-envelope calculation unit 22 of the decoder 2 must also obtain approximate values of the power-spectrum envelope.
- If quantized linear prediction coefficients corresponding to the linear-prediction-coefficient codes are obtained by another means in the decoder 2, those quantized linear prediction coefficients should be used to calculate the power-spectrum envelope. If a power-spectrum envelope has been calculated by another means in the decoder 2, the decoder 2 does not have to include the power-spectrum-envelope calculation unit 22.
- the error decoding unit 23 calculates the number of surplus bits by subtracting the number of bits output by the decoding unit 21 from the number of bits preset as the encoding amount of the quantized MDCT coefficient sequence.
- the error decoding unit 23 then decodes the error codes output by the error encoding unit 110 of the encoder 1 by using the decoding method corresponding to the encoding method used in the error encoding unit 110 of the encoder 1 and obtains decoded quantization errors q(n) (step S23).
- The number of bits assigned to the quantization error sequence in the encoder 1 is obtained from the number of surplus bits, which is based on the number of bits of the variable-length codes indicated by the decoding unit 21. Since the encoder 1 and the decoder 2 determine the correspondence of samples and steps between encoding and decoding for each number of surplus bits, unique decoding becomes possible.
- a sequence of decoded quantization errors corresponds to the sequence of errors in the claims.
- One codebook for each possible value of the number of surplus bits is stored beforehand in a codebook storage unit in the error decoding unit 23.
- Each codebook stores in advance as many vectors as the number of samples in the decoded quantization error sequence that can be expressed with the number of surplus bits corresponding to the codebook, associated with codes corresponding to the vectors.
- the error decoding unit 23 calculates the number of surplus bits, selects a codebook corresponding to the calculated number of surplus bits from the codebooks stored in the codebook storage unit, and performs vector inverse-quantization by using the selected codebook.
- The decoding process after selecting the codebook is the same as in general vector inverse-quantization. In other words, among the vectors in the selected codebook, vectors corresponding to the input error codes are output as decoded quantization errors q(n).
- the number of vectors stored in the codebook is the same as the number of samples in the decoded quantization error sequence.
- the number of sample vectors stored in the codebook may also be an integral submultiple of the number of samples in the decoded quantization error sequence, and a plurality of codes included in the input error codes may be vector-inverse-quantized for each of a plurality of parts to generate the decoded quantization error sequence.
- (A) When U ≤ T: the error decoding unit 23 selects U samples among the T samples whose corresponding decoded quantized MDCT coefficients u(n) are not '0', in descending order of the corresponding value of the power-spectrum envelope, and, for each of the selected samples, decodes a one-bit code included in the input error code to obtain information expressing whether the sample is positive or negative, attaches the obtained positive-negative information to the absolute value 0.25 of the reconstructed value, and outputs the reconstructed value +0.25 or -0.25 as the decoded quantization error q(n) corresponding to the decoded quantized MDCT coefficient u(n).
- the samples should be selected in accordance with a preset rule, such as selecting quantization error samples in ascending order of the position on the frequency axis (quantization error samples in ascending order of frequency), for example.
- a rule corresponding to the rule used in the error encoding unit 110 of the encoder 1 is held beforehand in the error decoding unit 23, for example.
- (B) When T ≤ U ≤ T + S: the error decoding unit 23 decodes a one-bit code included in the input error code for each of the samples whose corresponding decoded quantized MDCT coefficients u(n) are not '0' to obtain information indicating whether the decoded quantization error sample is positive or negative, attaches the obtained positive-negative information to the absolute value 0.25 of the reconstructed value, and outputs the reconstructed value +0.25 or -0.25 as the decoded quantization error q(n) corresponding to the decoded quantized MDCT coefficient u(n).
- the error decoding unit 23 also decodes a one-bit code included in the input error code, for each of U - T samples whose corresponding decoded quantized MDCT coefficients u(n) are '0', in descending order of the corresponding value of the power-spectrum envelope, to obtain information indicating whether the decoded quantization error sample is positive or negative; adds the obtained positive-negative information to the absolute value A of the reconstructed value, which is a preset positive value smaller than 0.25; and outputs the reconstructed value +A or -A as the decoded quantization error q(n) corresponding to the decoded quantized MDCT coefficient u(n).
- the error decoding unit 23 vector-inverse-quantizes (U - T)-bit codes included in the error codes for a plurality of samples whose corresponding decoded quantized MDCT coefficients u(n) are '0', in descending order of the corresponding value of the power-spectrum envelope to obtain a sequence of corresponding decoded quantization error samples, and outputs each value of the obtained decoded quantization error samples as the decoded quantization error q(n) corresponding to the decoded quantized MDCT coefficient u(n).
- The absolute value of the reconstructed value is set to '0.25', for example; when the values of the quantized MDCT coefficient u(n) and the decoded quantized MDCT coefficient u(n) are '0', the absolute value of the reconstructed value is set to A (0 < A < 0.25), as described above.
- the absolute values of reconstructed values are examples.
- the absolute value of the reconstructed value obtained when the values of the quantized MDCT coefficient u(n) and the decoded quantized MDCT coefficient u(n) are not '0' needs to be larger than the absolute value of the reconstructed value obtained when the values of the quantized MDCT coefficient u(n) and the decoded quantized MDCT coefficient u(n) are '0'.
- the values of the quantized MDCT coefficient u(n) and the decoded quantized MDCT coefficient u(n) correspond to the integers in the claims.
- samples should be selected in accordance with a preset rule, such as selecting samples in ascending order of the position on the frequency axis (in ascending order of frequency), for example.
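- The decoder-side counterpart of cases (A) and (B) can be sketched as below: samples are visited in the same priority order as in the encoder, each received bit becomes +/-0.25 (nonzero u(n)) or +/-A with A < 0.25 (zero u(n)), and samples for which no bit was sent decode to 0. The value A = 0.125 is again only illustrative and must match the encoder's preset value.

```python
import numpy as np

def decode_sign_bit_errors(error_bits, u, W, U, A=0.125):
    """u: decoded quantized MDCT coefficients; W: envelope values; U: surplus bits."""
    u, W = np.asarray(u), np.asarray(W)
    nonzero = sorted(np.flatnonzero(u != 0), key=lambda n: (-W[n], n))
    zero = sorted(np.flatnonzero(u == 0), key=lambda n: (-W[n], n))
    order = (list(nonzero) + list(zero))[:U]           # same order as the encoder
    q = np.zeros(len(u))
    for bit, n in zip(error_bits, order):
        mag = 0.25 if u[n] != 0 else A
        q[n] = mag if bit == "1" else -mag
    return q                                           # decoded quantization errors q(n)
```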
- (C) When U is larger than T + S: the error decoding unit 23 performs the following process on samples whose decoded quantized MDCT coefficients u(n) are not '0'.
- the error decoding unit 23 decodes the one-bit first-round code included in the input error code to obtain positive-negative information, adds the obtained positive-negative information to the absolute value 0.25 of the reconstructed value, and sets the reconstructed value +0.25 or -0.25 as a first-round decoded quantization error q 1 (n) corresponding to the decoded quantized MDCT coefficient u(n).
- the error decoding unit 23 further decodes the one-bit second-round code included in the input error code to obtain positive-negative information, adds the obtained positive-negative information to the absolute value 0.125 of the reconstructed value, and sets the reconstructed value +0.125 or -0.125 as a second-round decoded quantization error q 2 (n).
- the first-round decoded quantization error q 1 (n) and the second-round decoded quantization error q 2 (n) are added to make a decoded quantization error q(n).
- the error decoding unit 23 performs the following process on samples whose decoded quantized MDCT coefficients u(n) are '0'.
- the error decoding unit 23 decodes the one-bit first-round code included in the input error code to obtain positive-negative information, adds the obtained positive-negative information to the absolute value A of the reconstructed value, which is a positive value smaller than 0.25, and sets the reconstructed value +A or -A as a first-round decoded quantization error q 1 (n) corresponding to the decoded quantized MDCT coefficient u(n).
- the error decoding unit 23 further decodes the one-bit second-round code included in the input error code to obtain positive-negative information, adds the obtained positive-negative information to the absolute value A/2 of the reconstructed value, and sets the reconstructed value +A/2 or -A/2 as a second-round decoded quantization error q 2 (n).
- the first-round decoded quantization error q 1 (n) and the second-round decoded quantization error q 2 (n) are added to make a decoded quantization error q(n).
- the absolute value of the reconstructed value corresponding to the second-round code is a half of the absolute value of the reconstructed value corresponding to the first-round code.
- Approximate values of the power-spectrum envelope, estimates of the power-spectrum envelope, values obtained by smoothing any of those values, values obtained by averaging any of those values over pluralities of samples, or values having the same magnitude relationship as any of those values may also be used instead of the values of the power-spectrum envelope in (A) and (B) above.
- the same type of values as used in the error encoding unit 110 of the encoder 1 must be used.
- the gain decoding unit 24 decodes input gain information to obtain gain g and outputs it (step S24).
- the gain g is sent to the adder 25.
- the adder 25 adds the coefficients u(n) of the decoded quantized MDCT coefficient sequence output by the decoding unit 21 and the corresponding coefficients q(n) of the decoded quantization error sequence output by the error decoding unit 23 in units of frames to obtain their sums.
- the adder 25 generates a sequence by multiplying the sums by the gain g output by the gain decoding unit 24 and provides it as a decoded weighted normalization MDCT coefficient sequence (S25).
- the sequence of sums generated by the adder 25 corresponds to the sample sequence in the frequency domain in the claims.
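- The operation of the adder 25 thus reduces to the following minimal sketch (the function name is hypothetical):

```python
def reconstruct_weighted_coeffs(u, q, g):
    """Add each decoded integer u(n) to its decoded error q(n) and scale by the gain g."""
    return [g * (un + qn) for un, qn in zip(u, q)]
```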
- the weighted-envelope inverse normalization unit 26 then obtains an MDCT coefficient sequence by dividing the coefficients x̂(n) of the decoded weighted normalization MDCT coefficient sequence by the values of the power-spectrum envelope in units of frames (step S26).
- the time-domain converter 27 converts the MDCT coefficient sequence output by the weighted-envelope inverse normalization unit 26 into the time domain in units of frames and obtains a digital speech or acoustic signal in units of frames (step S27).
- steps S26 and S27 are conventional, and their detailed description is omitted here.
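- step S26 amounts to a per-sample division, sketched below under the assumption that the power-spectrum envelope values are available as a sequence w(n); the inverse MDCT of step S27 is the conventional one and is not reproduced here.

```python
def inverse_normalize(x_hat, w):
    """Sketch of the weighted-envelope inverse normalization unit 26:
    divide each decoded weighted normalization MDCT coefficient x_hat(n)
    by the corresponding power-spectrum envelope value w(n) to recover
    the MDCT coefficient sequence (step S26)."""
    return [x_n / w_n for x_n, w_n in zip(x_hat, w)]
```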
- the sequence of decoded quantized MDCT coefficients u(n) generated by the decoding unit 21 is rearranged by a rearrangement unit in the decoder 2 (step S28), and the rearranged sequence of decoded quantized MDCT coefficients u(n) is sent to the error decoding unit 23 and the adder 25.
- the error decoding unit 23 and the adder 25 perform the processing described above on the rearranged sequence of decoded quantized MDCT coefficients u(n), instead of the sequence of decoded quantized MDCT coefficients u(n) generated by the decoding unit 21.
- the encoder 1 and the decoder 2 in the embodiment described above include an input unit to which a keyboard or the like can be connected, an output unit to which a liquid crystal display or the like can be connected, a central processing unit (CPU), memories such as a random access memory (RAM) and a read only memory (ROM), an external storage unit such as a hard disk drive, and a bus to which the input unit, the output unit, the CPU, the RAM, the ROM, and the external storage unit are connected to allow data exchange among them, for example.
- a unit (drive) for reading and writing a CD-ROM or other recording media may also be added to the encoder 1 or decoder 2.
- the external storage unit of the encoder 1 and the decoder 2 stores programs for executing encoding and decoding and data needed in the programmed processing.
- the programs may also be stored in the ROM, which is a read-only storage device, as well as the external storage unit.
- Data obtained in the programmed processing are stored in the RAM or the external storage unit as needed.
- hereinafter, the storage devices storing the data, together with the addresses of their storage areas, will be referred to simply as a storage unit.
- the storage unit of the encoder 1 stores programs for encoding a sample sequence in the frequency domain derived from a speech or acoustic signal and for encoding errors.
- the storage unit of the decoder 2 stores programs for decoding input codes.
- in the encoder 1, each program and the data needed in its processing are read into the RAM from the storage unit when necessary, and the CPU interprets and executes them.
- encoding is implemented by the CPU performing given functions (such as the error calculation unit 18, the error encoding unit 110, and the encoding unit 19).
- likewise, in the decoder 2, each program and the data needed in its processing are read into the RAM from the storage unit when necessary, and the CPU interprets and executes them.
- decoding is implemented by the CPU performing given functions (such as the decoding unit 21).
- the quantizer 17 in the encoder 1 may use G(x(n)/g), obtained by companding the value x(n)/g with a given function G, instead of x(n)/g.
- that is, the quantizer 17 uses, as the quantized MDCT coefficient, an integer corresponding to G(x(n)/g) obtained by companding x(n)/g with the function G, x(n)/g being obtained by dividing the coefficient x(n) [1 ≤ n ≤ N] of the weighted normalization MDCT coefficient sequence by the gain g; for example, an integer u(n) obtained by rounding G(x(n)/g) to the nearest whole number or by rounding its fractional part up or down.
- This quantized MDCT coefficient is encoded by the encoding unit 19.
- here, G(h) = sign(h) × |h|^a, where |h| represents the absolute value of h and a is a given number such as 0.75.
- the value G(x(n)/g) obtained by companding the value x(n)/g by a given function G corresponds to the sample sequence in the frequency domain in the claims.
- the quantization error r(n) obtained by the error calculation unit 18 is G(x(n)/g) - u(n).
- the quantization error r(n) is encoded by the error encoding unit 110.
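- a sketch of this companded quantization and the resulting quantization error is given below, assuming the reconstructed form G(h) = sign(h) × |h|^a with a = 0.75 and rounding to the nearest whole number (one of the permitted rounding choices); the function names are illustrative.

```python
import math

def compand(h, a=0.75):
    """Companding function G(h) = sign(h) * |h|**a (a is a given number such as 0.75)."""
    return math.copysign(abs(h) ** a, h)

def quantize_with_companding(x_n, g, a=0.75):
    """Sketch of the quantizer 17 using companding: quantize one coefficient
    x(n) of the weighted normalization MDCT coefficient sequence divided by
    the gain g.  Returns the quantized MDCT coefficient u(n) and the
    quantization error r(n) = G(x(n)/g) - u(n) obtained by the error
    calculation unit 18 and encoded by the error encoding unit 110."""
    companded = compand(x_n / g, a)      # G(x(n)/g)
    u_n = int(round(companded))          # rounding to the nearest whole number
    r_n = companded - u_n                # quantization error r(n)
    return u_n, r_n
```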
- when the processing functions of the hardware entities (the encoder 1 and the decoder 2) described above are implemented by a computer, the processing details of the functions that the hardware entities should provide are described in a program.
- when the program is executed by a computer, the processing functions of the hardware entities are implemented on the computer.
- the program containing the processing details can be recorded in a computer-readable recording medium.
- the computer-readable recording medium can be any type of medium, such as a magnetic storage device, an optical disc, a magneto-optical storage medium, or a semiconductor memory.
- a hard disk drive, a flexible disk, a magnetic tape or the like can be used as the magnetic recording device;
- a DVD (digital versatile disc), DVD-RAM (random access memory), CD-ROM (compact disc read only memory), CD-R/RW (recordable/rewritable), or the like can be used as the optical disc;
- an MO (magneto-optical disc) or the like can be used as the magneto-optical storage medium;
- an EEP-ROM (electronically erasable and programmable read only memory) or the like can be used as the semiconductor memory.
- This program is distributed by selling, transferring, or lending a portable recording medium such as a DVD or a CD-ROM with the program recorded on it, for example.
- the program may also be distributed by storing the program in a storage unit of a server computer and transferring the program from the server computer to another computer through the network.
- a computer that executes this type of program first stores the program recorded on the portable recording medium or the program transferred from the server computer in its storage unit. Then, the computer reads the program stored in its storage unit and executes processing in accordance with the read program.
- the computer may read the program directly from the portable recording medium and execute processing in accordance with the program, or the computer may execute processing in accordance with the program each time the computer receives the program transferred from the server computer.
- the above-described processing may be executed by a so-called application service provider (ASP) service, in which the processing functions are implemented just by giving program execution instructions and obtaining the results without transferring the program from the server computer to the computer.
- the program in this form includes information that is provided for use in processing by the computer and is treated as equivalent to a program (data or the like that is not a direct instruction to the computer but has properties that determine the processing executed by the computer).
- in the embodiments described above, the hardware entities are implemented by executing the predetermined program on a computer, but at least a part of the processing may be implemented by hardware.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Quality & Reliability (AREA)
- Mathematical Physics (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP16195433.4A EP3154057B1 (de) | 2011-04-05 | 2012-03-26 | Dekodierung eines akustischen signals |
EP18196322.4A EP3441967A1 (de) | 2011-04-05 | 2012-03-26 | Decodierungsverfahren, decodierungsvorrichtung, programm und aufzeichnungsmedium |
PL16195433T PL3154057T3 (pl) | 2011-04-05 | 2012-03-26 | Dekodowanie sygnału akustycznego |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011083740 | 2011-04-05 | ||
PCT/JP2012/057685 WO2012137617A1 (ja) | 2011-04-05 | 2012-03-26 | 符号化方法、復号方法、符号化装置、復号装置、プログラム、記録媒体 |
Related Child Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18196322.4A Division EP3441967A1 (de) | 2011-04-05 | 2012-03-26 | Decodierungsverfahren, decodierungsvorrichtung, programm und aufzeichnungsmedium |
EP16195433.4A Division-Into EP3154057B1 (de) | 2011-04-05 | 2012-03-26 | Dekodierung eines akustischen signals |
EP16195433.4A Division EP3154057B1 (de) | 2011-04-05 | 2012-03-26 | Dekodierung eines akustischen signals |
Publications (3)
Publication Number | Publication Date |
---|---|
EP2696343A1 true EP2696343A1 (de) | 2014-02-12 |
EP2696343A4 EP2696343A4 (de) | 2014-11-12 |
EP2696343B1 EP2696343B1 (de) | 2016-12-21 |
Family
ID=46969018
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16195433.4A Active EP3154057B1 (de) | 2011-04-05 | 2012-03-26 | Dekodierung eines akustischen signals |
EP12767213.7A Active EP2696343B1 (de) | 2011-04-05 | 2012-03-26 | Kodierung eines akustischen signals |
EP18196322.4A Pending EP3441967A1 (de) | 2011-04-05 | 2012-03-26 | Decodierungsverfahren, decodierungsvorrichtung, programm und aufzeichnungsmedium |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP16195433.4A Active EP3154057B1 (de) | 2011-04-05 | 2012-03-26 | Dekodierung eines akustischen signals |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18196322.4A Pending EP3441967A1 (de) | 2011-04-05 | 2012-03-26 | Decodierungsverfahren, decodierungsvorrichtung, programm und aufzeichnungsmedium |
Country Status (10)
Country | Link |
---|---|
US (3) | US10515643B2 (de) |
EP (3) | EP3154057B1 (de) |
JP (1) | JP5603484B2 (de) |
KR (1) | KR101569060B1 (de) |
CN (1) | CN103460287B (de) |
ES (2) | ES2704742T3 (de) |
PL (1) | PL3154057T3 (de) |
RU (1) | RU2571561C2 (de) |
TR (1) | TR201900411T4 (de) |
WO (1) | WO2012137617A1 (de) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106537500A (zh) * | 2014-05-01 | 2017-03-22 | 日本电信电话株式会社 | 周期性综合包络序列生成装置、周期性综合包络序列生成方法、生成程序、记录介质 |
EP3139380A4 (de) * | 2014-05-01 | 2017-11-01 | Nippon Telegraph And Telephone Corporation | Codierungsvorrichtung, decodierungsvorrichtung, codierungsverfahren, decodierungsverfahren, codierungsprogramm, decodierungsprogramm und aufzeichnungsmedium |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5997592B2 (ja) * | 2012-04-27 | 2016-09-28 | 株式会社Nttドコモ | 音声復号装置 |
EP2757559A1 (de) * | 2013-01-22 | 2014-07-23 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zur Codierung räumlicher Audioobjekte mittels versteckter Objekte zur Signalmixmanipulierung |
CN104934035B (zh) * | 2014-03-21 | 2017-09-26 | 华为技术有限公司 | 语音频码流的解码方法及装置 |
KR101826237B1 (ko) * | 2014-03-24 | 2018-02-13 | 니폰 덴신 덴와 가부시끼가이샤 | 부호화 방법, 부호화 장치, 프로그램 및 기록 매체 |
EP3648103B1 (de) * | 2014-04-24 | 2021-10-20 | Nippon Telegraph And Telephone Corporation | Decodierungsverfahren, decodierungsvorrichtung, korrespondierendes programm und aufzeichnungsmedium |
US10418042B2 (en) * | 2014-05-01 | 2019-09-17 | Nippon Telegraph And Telephone Corporation | Coding device, decoding device, method, program and recording medium thereof |
EP3786949B1 (de) * | 2014-05-01 | 2022-02-16 | Nippon Telegraph And Telephone Corporation | Kodierung eines akustischen signals |
CN111968656B (zh) | 2014-07-28 | 2023-11-10 | 三星电子株式会社 | 信号编码方法和装置以及信号解码方法和装置 |
EP3252758B1 (de) * | 2015-01-30 | 2020-03-18 | Nippon Telegraph and Telephone Corporation | Kodierungsvorrichtung, dekodierungsvorrichtung, und verfahren, computerprogramme und aufzeichnungsmedia für eine kodierungsvorrichtung und eine dekodierungsvorrichtung |
CN107430869B (zh) * | 2015-01-30 | 2020-06-12 | 日本电信电话株式会社 | 参数决定装置、方法及记录介质 |
TWI758146B (zh) | 2015-03-13 | 2022-03-11 | 瑞典商杜比國際公司 | 解碼具有增強頻譜帶複製元資料在至少一填充元素中的音訊位元流 |
US11468905B2 (en) * | 2016-09-15 | 2022-10-11 | Nippon Telegraph And Telephone Corporation | Sample sequence converter, signal encoding apparatus, signal decoding apparatus, sample sequence converting method, signal encoding method, signal decoding method and program |
EP3637418B1 (de) * | 2017-06-07 | 2022-03-16 | Nippon Telegraph And Telephone Corporation | Codierungsvorrichtung, decodierungsvorrichtung, glättungsvorrichtung, inversglättungsvorrichtung, verfahren dafür und programm |
JP6766264B2 (ja) * | 2017-06-22 | 2020-10-07 | 日本電信電話株式会社 | 符号化装置、復号装置、符号化方法、復号方法、およびプログラム |
EP3761313B1 (de) * | 2018-03-02 | 2023-01-18 | Nippon Telegraph And Telephone Corporation | Codierungsvorrichtung, codierverfahren, programm und aufzeichnungsmedium |
CN112154502B (zh) * | 2018-04-05 | 2024-03-01 | 瑞典爱立信有限公司 | 支持生成舒适噪声 |
JP7173134B2 (ja) * | 2018-04-13 | 2022-11-16 | 日本電信電話株式会社 | 符号化装置、復号装置、符号化方法、復号方法、プログラム、および記録媒体 |
JP7322620B2 (ja) * | 2019-09-13 | 2023-08-08 | 富士通株式会社 | 情報処理装置、情報処理方法および情報処理プログラム |
CN114913863B (zh) * | 2021-02-09 | 2024-10-18 | 同响科技股份有限公司 | 数字音信数据编码方法 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090248424A1 (en) * | 2008-03-25 | 2009-10-01 | Microsoft Corporation | Lossless and near lossless scalable audio codec |
Family Cites Families (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH03191628A (ja) * | 1989-12-21 | 1991-08-21 | Toshiba Corp | 可変レート符号化方式 |
JP2686350B2 (ja) * | 1990-07-11 | 1997-12-08 | シャープ株式会社 | 音声情報圧縮装置 |
US6091460A (en) * | 1994-03-31 | 2000-07-18 | Mitsubishi Denki Kabushiki Kaisha | Video signal encoding method and system |
JP3170193B2 (ja) * | 1995-03-16 | 2001-05-28 | 松下電器産業株式会社 | 画像信号の符号化装置及び復号装置 |
US5956674A (en) * | 1995-12-01 | 1999-09-21 | Digital Theater Systems, Inc. | Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels |
JP3491425B2 (ja) * | 1996-01-30 | 2004-01-26 | ソニー株式会社 | 信号符号化方法 |
US20030039648A1 (en) * | 1998-09-16 | 2003-02-27 | Genentech, Inc. | Compositions and methods for the diagnosis and treatment of tumor |
US6677262B2 (en) * | 2000-07-05 | 2004-01-13 | Shin-Etsu Chemical Co., Ltd. | Rare earth oxide, basic rare earth carbonate, making method, phosphor, and ceramic |
US7136418B2 (en) * | 2001-05-03 | 2006-11-14 | University Of Washington | Scalable and perceptually ranked signal coding and decoding |
WO2003077425A1 (fr) * | 2002-03-08 | 2003-09-18 | Nippon Telegraph And Telephone Corporation | Procedes de codage et de decodage signaux numeriques, dispositifs de codage et de decodage, programme de codage et de decodage de signaux numeriques |
US7275036B2 (en) * | 2002-04-18 | 2007-09-25 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for coding a time-discrete audio signal to obtain coded audio data and for decoding coded audio data |
JP4296753B2 (ja) * | 2002-05-20 | 2009-07-15 | ソニー株式会社 | 音響信号符号化方法及び装置、音響信号復号方法及び装置、並びにプログラム及び記録媒体 |
DE10236694A1 (de) * | 2002-08-09 | 2004-02-26 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Vorrichtung und Verfahren zum skalierbaren Codieren und Vorrichtung und Verfahren zum skalierbaren Decodieren |
US7502743B2 (en) * | 2002-09-04 | 2009-03-10 | Microsoft Corporation | Multi-channel audio encoding and decoding with multi-channel transform selection |
KR100477699B1 (ko) | 2003-01-15 | 2005-03-18 | 삼성전자주식회사 | 양자화 잡음 분포 조절 방법 및 장치 |
US8107535B2 (en) * | 2003-06-10 | 2012-01-31 | Rensselaer Polytechnic Institute (Rpi) | Method and apparatus for scalable motion vector coding |
DE10345996A1 (de) * | 2003-10-02 | 2005-04-28 | Fraunhofer Ges Forschung | Vorrichtung und Verfahren zum Verarbeiten von wenigstens zwei Eingangswerten |
US7668712B2 (en) * | 2004-03-31 | 2010-02-23 | Microsoft Corporation | Audio encoding and decoding with intra frames and adaptive forward error correction |
US7587254B2 (en) * | 2004-04-23 | 2009-09-08 | Nokia Corporation | Dynamic range control and equalization of digital audio using warped processing |
JP4734859B2 (ja) * | 2004-06-28 | 2011-07-27 | ソニー株式会社 | 信号符号化装置及び方法、並びに信号復号装置及び方法 |
US7895034B2 (en) * | 2004-09-17 | 2011-02-22 | Digital Rise Technology Co., Ltd. | Audio encoding system |
WO2006062142A1 (ja) * | 2004-12-07 | 2006-06-15 | Nippon Telegraph And Telephone Corporation | 情報圧縮符号化装置、その復号化装置、これらの方法、およびこれらのプログラムとその記録媒体 |
EP1851866B1 (de) * | 2005-02-23 | 2011-08-17 | Telefonaktiebolaget LM Ericsson (publ) | Adaptive bitzuweisung für die mehrkanal-audiokodierung |
KR100818268B1 (ko) * | 2005-04-14 | 2008-04-02 | 삼성전자주식회사 | 오디오 데이터 부호화 및 복호화 장치와 방법 |
US7617436B2 (en) * | 2005-08-02 | 2009-11-10 | Nokia Corporation | Method, device, and system for forward channel error recovery in video sequence transmission over packet-based network |
KR20070046752A (ko) * | 2005-10-31 | 2007-05-03 | 엘지전자 주식회사 | 신호 처리 방법 및 장치 |
TWI276047B (en) * | 2005-12-15 | 2007-03-11 | Ind Tech Res Inst | An apparatus and method for lossless entropy coding of audio signal |
JP4548348B2 (ja) | 2006-01-18 | 2010-09-22 | カシオ計算機株式会社 | 音声符号化装置及び音声符号化方法 |
US8036903B2 (en) * | 2006-10-18 | 2011-10-11 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Analysis filterbank, synthesis filterbank, encoder, de-coder, mixer and conferencing system |
KR101471978B1 (ko) | 2007-02-02 | 2014-12-12 | 삼성전자주식회사 | 오디오 신호의 음질 향상을 위한 데이터 삽입 방법 및 그장치 |
JP4871894B2 (ja) * | 2007-03-02 | 2012-02-08 | パナソニック株式会社 | 符号化装置、復号装置、符号化方法および復号方法 |
CN101308661B (zh) * | 2007-05-16 | 2011-07-13 | 中兴通讯股份有限公司 | 一种基于先进音频编码器的量化器码率失真控制方法 |
WO2009004727A1 (ja) * | 2007-07-04 | 2009-01-08 | Fujitsu Limited | 符号化装置、符号化方法および符号化プログラム |
US7937574B2 (en) * | 2007-07-17 | 2011-05-03 | Advanced Micro Devices, Inc. | Precise counter hardware for microcode loops |
EP2063417A1 (de) * | 2007-11-23 | 2009-05-27 | Deutsche Thomson OHG | Formung des Rundungsfehlers für die auf ganzzahligen Transformationen basierende Kodierung und Dekodierung |
WO2009075326A1 (ja) * | 2007-12-11 | 2009-06-18 | Nippon Telegraph And Telephone Corporation | 符号化方法、復号化方法、これらの方法を用いた装置、プログラム、記録媒体 |
KR101452722B1 (ko) * | 2008-02-19 | 2014-10-23 | 삼성전자주식회사 | 신호 부호화 및 복호화 방법 및 장치 |
CN102282770B (zh) * | 2009-01-23 | 2014-04-16 | 日本电信电话株式会社 | 一种参数选择方法、参数选择装置 |
US20100191534A1 (en) * | 2009-01-23 | 2010-07-29 | Qualcomm Incorporated | Method and apparatus for compression or decompression of digital signals |
JP2010225949A (ja) * | 2009-03-25 | 2010-10-07 | Kyocera Corp | 発熱体の放熱構造 |
KR101381272B1 (ko) * | 2010-01-08 | 2014-04-07 | 니뽄 덴신 덴와 가부시키가이샤 | 부호화 방법, 복호 방법, 부호화 장치, 복호 장치, 프로그램 및 기록 매체 |
WO2012046685A1 (ja) | 2010-10-05 | 2012-04-12 | 日本電信電話株式会社 | 符号化方法、復号方法、符号化装置、復号装置、プログラム、記録媒体 |
-
2012
- 2012-03-26 RU RU2013143624/08A patent/RU2571561C2/ru active
- 2012-03-26 EP EP16195433.4A patent/EP3154057B1/de active Active
- 2012-03-26 EP EP12767213.7A patent/EP2696343B1/de active Active
- 2012-03-26 WO PCT/JP2012/057685 patent/WO2012137617A1/ja active Application Filing
- 2012-03-26 ES ES16195433T patent/ES2704742T3/es active Active
- 2012-03-26 ES ES12767213.7T patent/ES2617958T3/es active Active
- 2012-03-26 TR TR2019/00411T patent/TR201900411T4/tr unknown
- 2012-03-26 JP JP2013508811A patent/JP5603484B2/ja active Active
- 2012-03-26 US US14/007,844 patent/US10515643B2/en active Active
- 2012-03-26 CN CN201280015955.3A patent/CN103460287B/zh active Active
- 2012-03-26 EP EP18196322.4A patent/EP3441967A1/de active Pending
- 2012-03-26 PL PL16195433T patent/PL3154057T3/pl unknown
- 2012-03-26 KR KR1020137025380A patent/KR101569060B1/ko active IP Right Grant
-
2019
- 2019-11-18 US US16/687,176 patent/US11074919B2/en active Active
- 2019-11-18 US US16/687,144 patent/US11024319B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090248424A1 (en) * | 2008-03-25 | 2009-10-01 | Microsoft Corporation | Lossless and near lossless scalable audio codec |
Non-Patent Citations (2)
Title |
---|
HENRIQUE S MALVAR ED - ANONYMOUS: "Lossless and Near-Lossless Audio Compression Using Integer-Reversible Modulated Lapped Transforms", DATA COMPRESSION CONFERENCE, 2007. DCC '07, IEEE, PI, 1 March 2007 (2007-03-01), pages 323-332, XP031073812, ISBN: 978-0-7695-2791-8 * |
See also references of WO2012137617A1 * |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106537500A (zh) * | 2014-05-01 | 2017-03-22 | 日本电信电话株式会社 | 周期性综合包络序列生成装置、周期性综合包络序列生成方法、生成程序、记录介质 |
EP3139380A4 (de) * | 2014-05-01 | 2017-11-01 | Nippon Telegraph And Telephone Corporation | Codierungsvorrichtung, decodierungsvorrichtung, codierungsverfahren, decodierungsverfahren, codierungsprogramm, decodierungsprogramm und aufzeichnungsmedium |
EP3139381A4 (de) * | 2014-05-01 | 2017-11-08 | Nippon Telegraph and Telephone Corporation | Vorrichtung für periodische-kombinierte envelope-sequenz, verfahren für periodische-kombinierte envelope-sequenz, programm zur erzeugung von periodischer-kombinierter envelope-sequenz und aufzeichnungsmedium |
EP3509063A3 (de) * | 2014-05-01 | 2019-08-07 | Nippon Telegraph and Telephone Corporation | Codierer, codierungsverfahren, decodierungsverfahren, codierungsprogramm, decodierungsprogramm und aufzeichnungsmedium |
EP3537439A1 (de) * | 2014-05-01 | 2019-09-11 | Nippon Telegraph and Telephone Corporation | Vorrichtung für periodische-kombinierte envelope-sequenz, verfahren für periodische-kombinierte envelope-sequenz, programm zur erzeugung von periodischer-kombinierter envelope-sequenz und aufzeichnungsmedium |
CN106537500B (zh) * | 2014-05-01 | 2019-09-13 | 日本电信电话株式会社 | 周期性综合包络序列生成装置、周期性综合包络序列生成方法、记录介质 |
CN110491401A (zh) * | 2014-05-01 | 2019-11-22 | 日本电信电话株式会社 | 周期性综合包络序列生成装置、方法、程序、记录介质 |
CN110491402A (zh) * | 2014-05-01 | 2019-11-22 | 日本电信电话株式会社 | 周期性综合包络序列生成装置、方法、程序、记录介质 |
EP3696812A1 (de) * | 2014-05-01 | 2020-08-19 | Nippon Telegraph and Telephone Corporation | Codierer, decodierer, codierungsverfahren, decodierungsverfahren, codierungsprogramm, decodierungsprogramm und aufzeichnungsmedium |
EP3696816A1 (de) * | 2014-05-01 | 2020-08-19 | Nippon Telegraph and Telephone Corporation | Vorrichtung für periodische-kombinierte envelope-sequenz, verfahren für periodische-kombinierte envelope-sequenz, programm zur erzeugung von periodischer-kombinierter envelope-sequenz und aufzeichnungsmedium |
EP3699910A1 (de) * | 2014-05-01 | 2020-08-26 | Nippon Telegraph and Telephone Corporation | Vorrichtung für periodische-kombinierte envelope-sequenz, verfahren für periodische-kombinierte envelope-sequenz, programm zur erzeugung von periodischer-kombinierter envelope-sequenz und aufzeichnungsmedium |
EP3703051A1 (de) * | 2014-05-01 | 2020-09-02 | Nippon Telegraph and Telephone Corporation | Codierer, decodierer, codierungsverfahren, decodierungsverfahren, codierungsprogramm, decodierungsprogramm und aufzeichnungsmedium |
CN110491401B (zh) * | 2014-05-01 | 2022-10-21 | 日本电信电话株式会社 | 周期性综合包络序列生成装置、方法、记录介质 |
CN110491402B (zh) * | 2014-05-01 | 2022-10-21 | 日本电信电话株式会社 | 周期性综合包络序列生成装置、方法、记录介质 |
Also Published As
Publication number | Publication date |
---|---|
RU2013143624A (ru) | 2015-05-10 |
US11074919B2 (en) | 2021-07-27 |
US11024319B2 (en) | 2021-06-01 |
RU2571561C2 (ru) | 2015-12-20 |
ES2617958T3 (es) | 2017-06-20 |
EP2696343B1 (de) | 2016-12-21 |
CN103460287B (zh) | 2016-03-23 |
US20140019145A1 (en) | 2014-01-16 |
CN103460287A (zh) | 2013-12-18 |
JPWO2012137617A1 (ja) | 2014-07-28 |
US10515643B2 (en) | 2019-12-24 |
EP3154057A1 (de) | 2017-04-12 |
EP3441967A1 (de) | 2019-02-13 |
KR101569060B1 (ko) | 2015-11-13 |
WO2012137617A1 (ja) | 2012-10-11 |
KR20130133854A (ko) | 2013-12-09 |
ES2704742T3 (es) | 2019-03-19 |
US20200090665A1 (en) | 2020-03-19 |
JP5603484B2 (ja) | 2014-10-08 |
EP3154057B1 (de) | 2018-10-17 |
EP2696343A4 (de) | 2014-11-12 |
TR201900411T4 (tr) | 2019-02-21 |
US20200090664A1 (en) | 2020-03-19 |
PL3154057T3 (pl) | 2019-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11024319B2 (en) | Encoding method, decoding method, encoder, decoder, program, and recording medium | |
US9711158B2 (en) | Encoding method, encoder, periodic feature amount determination method, periodic feature amount determination apparatus, program and recording medium | |
KR100889399B1 (ko) | 스위치식예측양자화방법 | |
US20180182405A1 (en) | Encoding method, decoding method, encoder, decoder, program and recording medium | |
JP5612698B2 (ja) | 符号化方法、復号方法、符号化装置、復号装置、プログラム、記録媒体 | |
US10290310B2 (en) | Gain adjustment coding for audio encoder by periodicity-based and non-periodicity-based encoding methods | |
US9552821B2 (en) | Encoding method, encoder, program and recording medium | |
US10199046B2 (en) | Encoder, decoder, coding method, decoding method, coding program, decoding program and recording medium | |
JP5579932B2 (ja) | 符号化方法、装置、プログラム及び記録媒体 | |
JP5714172B2 (ja) | 符号化装置、この方法、プログラムおよび記録媒体 | |
WO2013129439A1 (ja) | 符号化装置、この方法、プログラム及び記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20130919 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
A4 | Supplementary search report drawn up and despatched |
Effective date: 20141010 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: G10L 19/02 20130101AFI20141006BHEP Ipc: G10L 19/035 20130101ALN20141006BHEP Ipc: H03M 7/30 20060101ALI20141006BHEP Ipc: G10L 19/032 20130101ALI20141006BHEP Ipc: G10L 19/038 20130101ALN20141006BHEP Ipc: G10L 19/00 20130101ALN20141006BHEP |
|
17Q | First examination report despatched |
Effective date: 20150714 |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H03M 7/30 20060101ALI20160704BHEP Ipc: G10L 19/032 20130101ALI20160704BHEP Ipc: G10L 19/00 20130101ALN20160704BHEP Ipc: G10L 19/035 20130101ALN20160704BHEP Ipc: G10L 19/02 20130101AFI20160704BHEP Ipc: G10L 19/038 20130101ALN20160704BHEP |
|
INTG | Intention to grant announced |
Effective date: 20160729 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 856099 Country of ref document: AT Kind code of ref document: T Effective date: 20170115 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602012026869 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 6 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170322 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170321 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 856099 Country of ref document: AT Kind code of ref document: T Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: ES Ref legal event code: FG2A Ref document number: 2617958 Country of ref document: ES Kind code of ref document: T3 Effective date: 20170620 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170421 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170321 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20170421 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: BE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012026869 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
26N | No opposition filed |
Effective date: 20170922 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: MM4A |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170326 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170331 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170331 Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170326 Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
REG | Reference to a national code |
Ref country code: FR Ref legal event code: PLFP Year of fee payment: 7 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MT Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20170326 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20120326 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: CY Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: TR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: AL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20161221 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230530 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20240320 Year of fee payment: 13 Ref country code: GB Payment date: 20240320 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: IT Payment date: 20240329 Year of fee payment: 13 Ref country code: FR Payment date: 20240327 Year of fee payment: 13 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: ES Payment date: 20240426 Year of fee payment: 13 |