WO2012046685A1 - Encoding method, decoding method, encoding device, decoding device, program, and recording medium - Google Patents

Info

Publication number
WO2012046685A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample
samples
decoding
sample sequence
frequency
Prior art date
Application number
PCT/JP2011/072752
Other languages
English (en)
Japanese (ja)
Inventor
守谷 健弘
登 原田
優 鎌本
祐介 日和崎
Original Assignee
日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社
Priority to JP2012537696A (granted as patent JP5612698B2)
Publication of WO2012046685A1

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/02 Speech or audio signals analysis-synthesis techniques for redundancy reduction, using spectral analysis, e.g. transform vocoders or subband vocoders
    • G10L19/0212 Speech or audio signals analysis-synthesis techniques for redundancy reduction, using spectral analysis, using orthogonal transformation
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, using predictive techniques
    • G10L19/08 Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L19/09 Long term prediction, i.e. removing periodical redundancies, e.g. by using adaptive codebook or pitch predictor

Definitions

  • the present invention relates to an encoding technique for an acoustic signal and a decoding technique for a code string obtained by the encoding technique. More specifically, the present invention relates to encoding and decoding of a frequency domain sample sequence obtained by converting an acoustic signal into the frequency domain.
  • Adaptive coding of orthogonal transform coefficients, such as DFT (Discrete Fourier Transform) and MDCT (Modified Discrete Cosine Transform) coefficients, is known as a coding method for low-bit-rate (for example, about 10 kbit/s to 20 kbit/s) speech and acoustic signals.
  • AMR-WB+: Extended Adaptive Multi-Rate Wideband
  • TCX: transform coded excitation
  • TwinVQ: Transform-domain Weighted Interleave Vector Quantization
  • In the basic scheme of TwinVQ, the entire MDCT coefficient sequence is rearranged according to a fixed rule, and collections of the rearranged samples are encoded as vectors.
  • In some TwinVQ variants, a large component for each pitch period is extracted from the MDCT coefficients, information corresponding to the pitch period is encoded, and the remaining MDCT coefficient sequence, from which the large component for each pitch period has been removed, is rearranged.
  • The rearranged MDCT coefficient sequence may then be encoded by vector quantization for each predetermined number of samples.
  • Non-patent documents 1 and 2 can be exemplified as documents related to TwinVQ.
  • Patent Document 1 can be exemplified as a technique for extracting and encoding samples at regular intervals.
  • Coding based on TCX, such as AMR-WB+, does not take into account variations in the frequency-domain coefficients caused by periodicity, and coding a sequence with a large amount of variation reduces the coding efficiency.
  • As an example of quantization and coding in TCX, consider a case where a sequence in which the MDCT coefficients, made discrete by quantization, are arranged from the lowest frequency is compressed by adaptive arithmetic coding. In this case, one or a plurality of samples form one symbol (coding unit), and the code assigned to a symbol is adaptively controlled depending on the immediately preceding symbol. In general, a short code is assigned if the amplitude is small, and a long code if the amplitude is large.
  • Because the assigned code is adaptively controlled depending on the immediately preceding symbol, successively shorter codes are assigned while small amplitude values continue, whereas a very long code is assigned if a large amplitude suddenly appears after samples with small amplitudes. That is, if the fluctuation of the absolute value of the amplitude between adjacent samples in the sequence is large, the total code amount obtained by adaptive arithmetic coding for the sequence becomes large.
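The cost of amplitude fluctuation can be illustrated with a toy adaptive coder (a hypothetical sketch, not the actual arithmetic coder of TCX or AMR-WB+): a Rice code whose parameter is chosen from the magnitude of the immediately preceding sample, so runs of small amplitudes get short codes and a sudden large amplitude after a small one gets a long code.

```python
def rice_len(x, k):
    # Length in bits of a Rice code for non-negative x: unary quotient,
    # one stop bit, and a k-bit binary remainder.
    return (x >> k) + 1 + k

def adaptive_code_len(seq):
    # Toy stand-in for adaptive coding: the parameter k for each sample is
    # derived from the magnitude of the immediately preceding sample, so the
    # code assignment depends on the previous symbol, as described above.
    total, prev = 0, 0
    for x in seq:
        k = max(abs(prev), 1).bit_length() - 1  # small prev -> short codes for small x
        total += rice_len(abs(x), k)
        prev = x
    return total

# The same samples cost fewer bits once similar magnitudes are gathered.
fluctuating = [0, 9, 0, 8, 0, 7, 0, 6]
gathered = [9, 8, 7, 6, 0, 0, 0, 0]
```

Here `adaptive_code_len(fluctuating)` is 46 bits while `adaptive_code_len(gathered)` is 29 bits, mirroring the observation that large adjacent-sample fluctuation inflates the total code amount.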
  • Conventional TwinVQ, in turn, is designed on the assumption of fixed-length vector quantization, in which codes of the same codebook are assigned to all vectors composed of a predetermined number of samples; coding the MDCT coefficients with variable-length codes was not envisaged.
  • Accordingly, an object of the present invention is to provide an encoding/decoding technique that improves the quality of a discrete signal, in particular an audio-acoustic digital signal, encoded at a low bit rate with a low amount of calculation.
  • In the encoding technique of the present invention, a rearranged sample string is output by rearranging at least some of the samples included in the frequency-domain sample string derived from the acoustic signal in a predetermined time interval, such that (1) all samples of the sample string are included and (2) one or a plurality of consecutive samples including the sample corresponding to the periodicity or fundamental frequency of the acoustic signal, and one or a plurality of consecutive samples including the samples corresponding to integer multiples of the periodicity or fundamental frequency, are all or partly gathered together [rearrangement process]. Then, the sample string obtained by the rearrangement is encoded [encoding procedure].
  • In consideration of decoding, auxiliary information is also generated: for example, information representing the periodicity of the acoustic signal, information representing the fundamental frequency of the acoustic signal, or information representing the interval, used in the rearrangement process, between the sample corresponding to the periodicity or fundamental frequency of the acoustic signal and the sample corresponding to an integer multiple thereof.
  • Only the samples corresponding to the frequencies from the lowest frequency up to a first predetermined frequency may be subject to rearrangement. In this case, the samples corresponding to the frequencies from the lowest frequency up to a second predetermined frequency, smaller than the first predetermined frequency, may be excluded from the rearrangement.
  • Alternatively, the samples corresponding to the frequencies from the lowest frequency up to a predetermined frequency may be excluded from the rearrangement.
  • When the prediction gain corresponding to the acoustic signal in the predetermined time interval, or an estimated value thereof, is equal to or less than a predetermined threshold value, the sample string may be output as the rearranged sample string without rearranging it.
  • In another encoding technique of the present invention, a rearranged sample string is output in which at least some of the samples included in the frequency-domain sample string derived from the acoustic signal in a predetermined time interval are rearranged such that (1) all samples of the sample string are included and (2) samples whose indices reflecting the sample magnitude are the same or similar are gathered together [rearrangement process]. Then, the sample string obtained by the rearrangement is encoded [encoding procedure].
  • The index is, for example, the absolute value or the power of the amplitude of the sample.
  • For example, the rearrangement may be performed so that the envelope of the index of the rearranged sample string shows an increasing tendency or a decreasing tendency as the frequency increases.
  • In this case as well, only some of the samples included in the sample string may be subject to rearrangement.
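A minimal sketch of this magnitude-based variant (the function name and the use of a full index permutation as the undo information are illustrative, not from the patent): a stable sort of sample positions by decreasing absolute amplitude yields a rearranged string whose index envelope shows a decreasing tendency.

```python
def rearrange_by_magnitude(samples):
    # Stable sort of positions by decreasing |amplitude|: samples whose
    # indices (here, absolute amplitudes) are the same or similar end up
    # adjacent, and the envelope of the rearranged string decreases.
    order = sorted(range(len(samples)), key=lambda i: -abs(samples[i]))
    rearranged = [samples[i] for i in order]
    return rearranged, order  # `order` lets a decoder undo the rearrangement

samples = [1, -6, 2, 7, 0, -7, 1, 5]
rearranged, order = rearrange_by_magnitude(samples)
```

All original samples are kept; only their positions change, which is exactly condition (1) above.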
  • In the encoding procedure of the encoding technique of the present invention, for example, among the sample string obtained by the rearrangement process, (1) first variable-length coding is performed for each sample of the group of samples whose index reflecting the sample magnitude falls within a range satisfying a predetermined condition, and (2) second variable-length coding is performed for each set of a plurality of samples for at least part of the remaining samples.
  • Alternatively, among the sample string obtained by the rearrangement process, (1) first variable-length coding is performed for each sample of the group of samples whose index falls within the range satisfying the predetermined condition, (2) coding that outputs a code representing the number of consecutive samples whose index corresponds to zero is performed, and (3) second variable-length coding is performed for each set of a plurality of samples for at least part of the remaining samples.
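As a hedged sketch of combining the two variable-length codes (Rice codes stand in for the unspecified first and second codes, and the parameter k1=2 and the pairwise joint index are illustrative assumptions, not the patent's codes):

```python
def rice_code(x, k):
    # Rice code for non-negative x: unary quotient, stop bit, k-bit remainder.
    q = x >> k
    bits = "1" * q + "0"
    if k:
        bits += format(x - (q << k), "0{}b".format(k))
    return bits

def encode_split(head, tail, k1=2):
    # (1) First variable-length code: one codeword per sample for the group
    #     in the range satisfying the predetermined condition (`head`).
    # (2) Second variable-length code: one codeword per pair of samples for
    #     the remaining samples (a toy joint index: the pair's magnitude sum).
    out = "".join(rice_code(abs(x), k1) for x in head)
    for i in range(0, len(tail), 2):
        joint = sum(abs(x) for x in tail[i:i + 2])
        out += rice_code(joint, 0)
    return out
```

Coding several tail samples per codeword exploits the fact that, after rearrangement, the tail consists mostly of small or zero amplitudes.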
  • In the decoding technique of the present invention, the input code string is decoded for each predetermined time interval to obtain a frequency-domain sample string [decoding procedure], and at least some of the samples included in that sample string are rearranged based on information specifying the rearrangement (auxiliary information) to recover the sample string derived from the acoustic signal [recovery process].
  • The sample string obtained by the decoding procedure is one in which the frequency-domain samples are arranged such that (1) all samples constituting the sample string derived from the acoustic signal are included and (2) one or a plurality of consecutive samples including the sample corresponding to the periodicity or fundamental frequency of the acoustic signal, and one or a plurality of consecutive samples including the samples corresponding to integer multiples of the periodicity or fundamental frequency, are all or partly gathered together.
  • The auxiliary information used in the recovery process is, for example, any one of information indicating the periodicity of the acoustic signal, information indicating the fundamental frequency of the acoustic signal, and information indicating the interval between the sample corresponding to the periodicity or fundamental frequency of the acoustic signal and the sample corresponding to an integer multiple thereof.
  • The sample string obtained by the decoding procedure may be, for example, one in which the samples corresponding to the frequencies from the lowest frequency up to a first predetermined frequency have been rearranged, except possibly the samples corresponding to the frequencies up to a second predetermined frequency smaller than the first predetermined frequency. Alternatively, it may be one in which the samples corresponding to the frequencies from the lowest frequency up to a predetermined frequency have not been rearranged.
  • The sample string obtained in the decoding procedure may also be used directly as the sample string derived from the acoustic signal.
  • In another decoding technique of the present invention, the input code string is decoded for each predetermined time interval to obtain a frequency-domain sample string [decoding procedure], and at least some of the samples included in that sample string are rearranged based on information specifying the rearrangement (auxiliary information) and returned to the sample string derived from the acoustic signal [recovery process].
  • The sample string obtained by the decoding procedure is one in which the frequency-domain samples are arranged such that (1) all samples constituting the sample string derived from the acoustic signal are included and (2) samples whose indices reflecting the sample magnitude are the same or similar are gathered together.
  • The index is, for example, the absolute value or the power of the amplitude of the sample.
  • For example, the sample string obtained by the decoding procedure is one in which the frequency-domain samples are arranged such that the envelope of the index shows an increasing tendency or a decreasing tendency as the frequency increases.
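Whatever concrete rule produced the rearrangement, the recovery process reduces to inverting a permutation that both sides can reconstruct from the auxiliary information. A minimal sketch (function names are illustrative):

```python
def apply_order(samples, order):
    # Encoder side: order[k] is the 0-based original position moved to position k.
    return [samples[i] for i in order]

def invert_order(rearranged, order):
    # Recovery process: put each sample back at its original position.
    original = [0] * len(order)
    for pos, i in enumerate(order):
        original[i] = rearranged[pos]
    return original
```

As long as the decoder derives the same `order` from the auxiliary information as the encoder used, the round trip is exact.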
  • In the decoding procedure of the decoding technique of the present invention, for example, in the input code string, (1) the codes obtained by the first variable-length coding are decoded sample by sample for the group of samples whose index reflecting the sample magnitude falls within the range satisfying the predetermined condition, and (2) the codes obtained by the second variable-length coding, each covering a plurality of samples, are decoded for at least part of the remaining samples.
  • At least some of the samples included in the frequency-domain sample string derived from the acoustic signal are rearranged such that one or a plurality of consecutive samples including the sample corresponding to the periodicity or fundamental frequency of the acoustic signal, and one or a plurality of consecutive samples including the samples corresponding to integer multiples thereof, are gathered together. Because samples whose indices reflecting the sample magnitude (for example, the absolute value of the amplitude) are the same or similar are thereby gathered, the variation of the index between adjacent samples in the sample string is reduced, and the total code amount obtained by, for example, adaptive arithmetic coding of the sample string can be suppressed.
  • Since this is achieved by processing that can be executed with a small amount of calculation, such as rearrangement, an improvement in coding efficiency and a reduction in quantization distortion are realized.
  • One of the features of the present invention is an improvement in encoding that, within the framework of quantizing the frequency-domain sample string derived from the acoustic signal of a predetermined time interval, reduces quantization distortion by rearranging the samples based on features of the frequency-domain samples, while reducing the code amount by using variable-length coding.
  • Hereinafter, the predetermined time interval is referred to as a frame.
  • The improvement in coding is realized by concentrating samples having large amplitudes through rearrangement of the samples according to periodicity.
  • Examples of the frequency-domain sample string derived from the acoustic signal include a DFT coefficient sequence or an MDCT coefficient sequence obtained by transforming the audio-acoustic digital signal from the time domain to the frequency domain in units of frames, and such a coefficient sequence to which processing such as normalization, weighting, and quantization has been applied.
  • Hereinafter, an embodiment of the present invention will be described using an MDCT coefficient sequence as an example.
  • The frequency domain transform unit 1 transforms the audio-acoustic digital signal into an N-point MDCT coefficient sequence in the frequency domain in units of frames (step S1).
  • The encoding side quantizes the MDCT coefficient sequence, encodes the quantized MDCT coefficient sequence, and transmits the obtained code string to the decoding side; the decoding side reconstructs the quantized MDCT coefficient sequence from the code string and can reconstruct the time-domain audio-acoustic digital signal by inverse MDCT transformation.
  • The amplitudes of the MDCT coefficients have approximately the same envelope (power spectrum envelope) as the power spectrum of an ordinary DFT. Therefore, by assigning an amount of information proportional to the logarithmic value of the amplitude envelope, the quantization distortion (quantization error) of the MDCT coefficients can be distributed uniformly over all bands, and the overall quantization distortion can be reduced.
  • The power spectrum envelope can be efficiently estimated using linear prediction coefficients obtained by linear prediction analysis.
  • As methods for controlling the quantization error in this way, a method of adaptively assigning the quantization bits of each MDCT coefficient (adjusting the quantization step width after flattening the amplitudes) or weighted vector quantization can be used.
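A rough sketch of the bit-assignment idea (a simplification for illustration; the patent does not specify this exact rule): give each coefficient a share of the bit budget proportional to the logarithm of its envelope value.

```python
import math

def allocate_bits(envelope, total_bits):
    # Assign information proportional to the logarithmic value of the
    # amplitude envelope, so that quantization error is spread evenly
    # over all bands rather than concentrated where amplitudes are large.
    logs = [max(math.log2(w), 0.0) for w in envelope]
    s = sum(logs)
    if s == 0:
        return [total_bits / len(envelope)] * len(envelope)
    return [total_bits * l / s for l in logs]
```

Coefficients whose envelope value is at or below 1 receive no bits under this toy rule; a real allocator would also enforce integer bit counts.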
  • Although an example of the quantization method used in the embodiment of the present invention is described below, it should be noted that the present invention is not limited to this quantization method.
  • The weighted envelope normalization unit 2 normalizes each coefficient of the input MDCT coefficient sequence using the power spectrum envelope coefficient sequence of the audio-acoustic digital signal, estimated from the linear prediction coefficients obtained by linear prediction analysis of the audio-acoustic digital signal in units of frames, and outputs a weighted normalized MDCT coefficient sequence (step S2). Specifically, it normalizes each coefficient of the frame-unit MDCT coefficient sequence using a weighted power spectrum envelope coefficient sequence in which the power spectrum envelope has been blunted.
  • As a result, the weighted normalized MDCT coefficient sequence does not have as large an amplitude slope or amplitude unevenness as the input MDCT coefficient sequence, but has a magnitude relationship similar to that of the power spectrum envelope coefficient sequence of the audio-acoustic digital signal; that is, it has slightly larger amplitudes in the coefficient region corresponding to low frequencies and has a fine structure resulting from the pitch period.
  • Each coefficient W(1), ..., W(N) of the power spectrum envelope coefficient sequence corresponding to each coefficient X(1), ..., X(N) of the N-point MDCT coefficient sequence can be obtained by transforming the linear prediction coefficients into the frequency domain. By linear prediction analysis of order p, the time signal x(t) at time t is represented, using its past values x(t-1), ..., x(t-p), the prediction residual e(t), and the linear prediction coefficients α1, ..., αp, by Equation (1):
    x(t) + α1·x(t-1) + ... + αp·x(t-p) = e(t)   (1)
  • Each coefficient W(n) [1 ≤ n ≤ N] of the power spectrum envelope coefficient sequence is then expressed by Equation (2):
    W(n) = sqrt(σ² / (2π)) / | 1 + α1·exp(-j·ωn) + ... + αp·exp(-j·p·ωn) |   (2)
    where exp(·) is the exponential function with Napier's number as the base, j is the imaginary unit, σ² is the prediction residual energy, and ωn is the normalized angular frequency corresponding to the n-th coefficient.
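Under this all-pole model, the envelope can be evaluated directly from the linear prediction coefficients. A sketch assuming the bin-to-frequency mapping ωn = πn/N for the n-th of N coefficients (the exact mapping is an assumption here, not stated in the text):

```python
import cmath
import math

def power_spectrum_envelope(alphas, sigma2, N):
    # W(n) = sqrt(sigma2 / (2*pi)) / |1 + sum_i alpha_i * exp(-j*i*w_n)|,
    # evaluated at the assumed bin frequency w_n = pi*n/N.
    env = []
    for n in range(1, N + 1):
        w = math.pi * n / N
        a = 1 + sum(alpha * cmath.exp(-1j * i * w)
                    for i, alpha in enumerate(alphas, start=1))
        env.append(math.sqrt(sigma2 / (2 * math.pi)) / abs(a))
    return env
```

With no prediction (all αi absent) the envelope is flat; with a single strong positive-correlation predictor the envelope is larger at low frequencies, as expected for speech-like signals.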
  • The linear prediction coefficients may be obtained by the weighted envelope normalization unit 2 performing linear prediction analysis on the audio-acoustic digital signal input to the frequency domain transform unit 1, or may be obtained by linear prediction analysis of the audio-acoustic digital signal by other means (not shown) in the encoding device 100. In these cases, the weighted envelope normalization unit 2 obtains each coefficient W(1), ..., W(N) of the power spectrum envelope coefficient sequence using the linear prediction coefficients. If the coefficients W(1), ..., W(N) of the power spectrum envelope coefficient sequence have already been obtained by other means in the encoding device 100 (a power spectrum envelope coefficient sequence calculation unit 7), the weighted envelope normalization unit 2 can use those coefficients directly.
  • Note that, since the decoding device 200 described later needs to obtain the same values as those obtained by the encoding device 100, quantized linear prediction coefficients and/or a quantized power spectrum envelope coefficient sequence are used. Hereinafter, "linear prediction coefficient" and "power spectrum envelope coefficient sequence" mean the quantized linear prediction coefficients and the quantized power spectrum envelope coefficient sequence unless otherwise noted.
  • The linear prediction coefficients are encoded by, for example, a conventional encoding technique, and the resulting prediction coefficient code is transmitted to the decoding side. The conventional encoding technique is, for example, one in which a code corresponding to the linear prediction coefficients themselves is used as the prediction coefficient code, one in which the linear prediction coefficients are converted into LSP parameters and a code corresponding to the LSP parameters is used as the prediction coefficient code, or one in which the linear prediction coefficients are converted into PARCOR coefficients and a code corresponding to the PARCOR coefficients is used as the prediction coefficient code. When the linear prediction coefficients are obtained by other means in the encoding device 100, those means encode the linear prediction coefficients by a conventional encoding technique and transmit the prediction coefficient code to the decoding side.
  • As one example, the weighted envelope normalization unit 2 divides each coefficient X(1), ..., X(N) of the MDCT coefficient sequence by the correction value Wγ(1), ..., Wγ(N) of the corresponding coefficient of the power spectrum envelope coefficient sequence to obtain each coefficient X(1)/Wγ(1), ..., X(N)/Wγ(N) of the weighted normalized MDCT coefficient sequence. The correction value Wγ(n) [1 ≤ n ≤ N] is given by Equation (3), where γ is a positive constant equal to or smaller than 1 that blunts the power spectrum coefficients.
  • As another example, the weighted envelope normalization unit 2 divides each coefficient X(1), ..., X(N) of the MDCT coefficient sequence by the β-th power (0 < β < 1) of the corresponding coefficient of the power spectrum envelope coefficient sequence, W(1)^β, ..., W(N)^β, to obtain each coefficient X(1)/W(1)^β, ..., X(N)/W(N)^β of the weighted normalized MDCT coefficient sequence.
  • By the above processing, a weighted normalized MDCT coefficient sequence is obtained for each frame. The weighted normalized MDCT coefficient sequence does not have as large an amplitude slope or amplitude unevenness as the input MDCT coefficient sequence, but has a magnitude relationship similar to the power spectrum envelope of the input MDCT coefficient sequence; that is, it has slightly larger amplitudes in the coefficient region corresponding to low frequencies and has a fine structure resulting from the pitch period.
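The β-power normalization described above can be sketched in a few lines; β = 0.5 is only an illustrative choice within the stated range 0 < β < 1:

```python
def weighted_envelope_normalize(X, W, beta=0.5):
    # Divide each MDCT coefficient X(n) by W(n)**beta: the overall slope of
    # the envelope is flattened, but since beta < 1 a blunted trace of the
    # envelope's magnitude relationship remains in the output.
    return [x / (w ** beta) for x, w in zip(X, W)]
```

Note how a 16:1 envelope ratio becomes 4:1 after normalization with β = 0.5: flattened, but the low-frequency side still dominates.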
  • Since the inverse processing corresponding to the weighted envelope normalization, that is, the processing of restoring the MDCT coefficient sequence from the weighted normalized MDCT coefficient sequence, is performed on the decoding side, the method of obtaining the weighted power spectrum envelope coefficient sequence from the power spectrum envelope coefficient sequence must be set commonly for the encoding side and the decoding side.
  • “Normalized gain calculator 3” Next, so that each coefficient of the weighted normalized MDCT coefficient sequence can be quantized with the given total number of bits for each frame, the normalized gain calculation unit 3 determines the quantization step width using the sum of the amplitude values or the energy value over all frequencies, and obtains a coefficient (hereinafter referred to as gain) by which each coefficient of the weighted normalized MDCT coefficient sequence is divided so as to yield this quantization step width (step S3). Information representing this gain is transmitted to the decoding side as gain information. The normalized gain calculation unit 3 then normalizes (divides) each coefficient of the weighted normalized MDCT coefficient sequence by this gain for each frame.
  • The quantization unit 4 quantizes each coefficient of the weighted normalized MDCT coefficient sequence normalized by the gain for each frame, using the quantization step width determined in step S3 (step S4).
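Steps S3 and S4 can be sketched together. Choosing the gain from the maximum amplitude and a fixed number of quantization levels is a simplification of the bit-budget-driven determination in the text, not the patent's actual rule:

```python
def gain_and_quantize(coeffs, levels=15):
    # Step S3 (simplified): pick a gain so that the largest gain-normalized
    # amplitude equals `levels`; the gain is sent as gain information.
    peak = max(abs(c) for c in coeffs)
    gain = peak / levels if peak else 1.0
    # Step S4: quantize each gain-normalized coefficient to an integer.
    quantized = [round(c / gain) for c in coeffs]
    return gain, quantized
```

The decoder multiplies the quantized integers by the transmitted gain to approximate the weighted normalized coefficients.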
  • The frame-by-frame quantized MDCT coefficient sequence obtained in step S4 is input to the rearrangement unit 5, which is a main part of the present embodiment.
  • The input of the rearrangement unit 5 is not limited to the coefficient sequences obtained in steps S1 to S4.
  • Hereinafter, the input of the rearrangement unit 5 is referred to as a “frequency domain sample string”, or simply a “sample string”, derived from the acoustic signal. In the present embodiment, the quantized MDCT coefficient sequence obtained in step S4 corresponds to the “frequency domain sample string”, and the samples constituting the frequency domain sample string correspond to the coefficients included in the quantized MDCT coefficient sequence.
  • For each frame, the rearrangement unit 5 outputs a rearranged sample string in which at least some of the samples included in the frequency domain sample string are rearranged such that (1) all samples of the frequency domain sample string are included and (2) samples whose indices reflecting the sample magnitude are the same or similar are gathered together (step S5).
  • The “index reflecting the sample magnitude” is, for example, the absolute value or the power (square value) of the amplitude of the sample, but is not limited thereto.
  • Specifically, the rearrangement unit 5 outputs, as the rearranged sample string, a sample string in which at least some of the samples included in the sample string are rearranged such that (1) all samples of the sample string are included and (2) one or a plurality of consecutive samples including the sample corresponding to the periodicity or fundamental frequency of the acoustic signal, and one or a plurality of consecutive samples including the samples corresponding to integer multiples of the periodicity or fundamental frequency, are all or partly gathered together.
  • This rearrangement is based on a remarkable feature of acoustic signals, particularly of speech and musical sound: the absolute values or powers of the amplitudes of the samples corresponding to the fundamental frequency and its harmonics (integer multiples of the fundamental frequency), and of the samples in their vicinities, are larger than those of the samples corresponding to the frequency regions excluding the fundamental frequency and its harmonics.
  • A periodic feature amount of the acoustic signal (for example, the pitch period) extracted from a signal such as speech or musical sound is equivalent to the fundamental frequency, and the same feature is observed for it: the absolute values or powers of the amplitudes of the samples corresponding to the periodic feature amount and its integer multiples, and of the samples in their vicinities, are larger than those of the samples corresponding to the remaining frequency regions.
  • Let T denote the interval (hereinafter simply referred to as the interval) between the sample corresponding to the periodicity or fundamental frequency of the acoustic signal and the sample corresponding to an integer multiple thereof.
  • From the input sample string, for each integer multiple of the interval T, the rearrangement unit 5 selects the three samples F(nT-1), F(nT), and F(nT+1): the sample F(nT) corresponding to the multiple and the samples F(nT-1) and F(nT+1) immediately before and after it.
  • Here, F(j) is the sample corresponding to the number j representing the sample index corresponding to frequency, n is an integer in the range for which nT+1 does not exceed the preset upper limit N of the target samples, and jmax is the maximum value of the number j. A collection of samples selected for one value of n is called a sample group.
  • The upper limit N may be equal to jmax.
  • However, since the indices of high-frequency samples are generally small enough, N may be set to a value smaller than jmax, which helps improve the encoding efficiency described later; for example, N may be a value about half of jmax. If nmax is the maximum value of n determined from the upper limit N, the samples corresponding to the frequencies from the lowest frequency up to the first predetermined frequency nmax*T+1 among the samples of the input sample string are subject to rearrangement. The symbol * represents multiplication.
  • The rearrangement unit 5 generates the sample string A by arranging the selected samples F(j) in order from the beginning of the sample string while maintaining the magnitude relationship of the original numbers j. For example, when n takes each integer from 1 to 5, the rearrangement unit 5 arranges the first sample group F(T-1), F(T), F(T+1), the second sample group F(2T-1), F(2T), F(2T+1), the third sample group F(3T-1), F(3T), F(3T+1), the fourth sample group F(4T-1), F(4T), F(4T+1), and the fifth sample group F(5T-1), F(5T), F(5T+1) from the head of the sample string.
  • Further, the rearrangement unit 5 arranges the unselected samples F(j) after the end of the sample string A, in order, while maintaining the magnitude relationship of the original numbers. The unselected samples F(j) are the samples located between the sample groups constituting the sample string A, and each such continuous set of samples is referred to as a sample set. That is, in the above example, the first sample set F(1), ..., F(T-2), the second sample set F(T+2), ..., F(2T-2), the third sample set F(2T+2), ..., F(3T-2), the fourth sample set F(3T+2), ..., F(4T-2), the fifth sample set F(4T+2), ..., F(5T-2), and the sixth sample set F(5T+2), ..., F(jmax) are arranged in order after the end of the sample string A, and these samples constitute the sample string B.
  • As a result, the input sample sequence F(j) (1 ≤ j ≤ jmax) is rearranged into F(T-1), F(T), F(T+1), F(2T-1), F(2T), F(2T+1), F(3T-1), F(3T), F(3T+1), F(4T-1), F(4T), F(4T+1), F(5T-1), F(5T), F(5T+1), F(1), ..., F(T-2), F(T+2), ..., F(2T-2), F(2T+2), ..., F(3T-2), F(3T+2), ..., F(4T-2), F(4T+2), ..., F(5T-2), F(5T+2), ..., F(jmax) (see FIG. 3).
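As a concrete illustration (ours, not part of the patent text), the rearrangement described above can be sketched as follows; `rearrange` is a hypothetical helper operating on a Python list holding F(1)..F(jmax), with n running from 1 to n_max and three samples per group, as in the example.

```python
def rearrange(F, T, n_max):
    """Gather the sample groups F(nT-1), F(nT), F(nT+1) for n = 1..n_max
    at the head of the sequence (sample sequence A), keeping their
    original order, and append the remaining samples in order
    (sample sequence B).  F is a list holding F(1)..F(jmax), so F(j)
    lives at index j - 1."""
    jmax = len(F)
    selected, seen = [], set()
    for n in range(1, n_max + 1):
        for j in (n * T - 1, n * T, n * T + 1):
            if 1 <= j <= jmax and j not in seen:
                selected.append(j)
                seen.add(j)
    rest = [j for j in range(1, jmax + 1) if j not in seen]
    return [F[j - 1] for j in selected + rest]
```

With T = 8, n_max = 2 and F = (1, 2, ..., 20), the sample groups F(7), F(8), F(9), F(15), F(16), F(17) are moved to the head and the remaining samples follow in their original order.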
  • In the low frequency band, each sample often has a large amplitude and power even if it is not a sample corresponding to the periodicity or fundamental frequency of the acoustic signal or to an integer multiple thereof. Therefore, the samples corresponding to each frequency from the lowest frequency up to a predetermined frequency f may be excluded from the rearrangement. For example, if the predetermined frequency f is nT + α, the samples F(1), ..., F(nT + α) before rearrangement are not rearranged, and only the samples from F(nT + α + 1) onward are subject to rearrangement.
  • Here α is set in advance to an integer greater than or equal to 0 and somewhat smaller than T (for example, an integer not exceeding T/2).
  • n may be an integer of 2 or more.
  • Alternatively, the P samples F(1), ..., F(P) from the sample corresponding to the lowest frequency before rearrangement may be left unrearranged, and only the samples from F(P+1) onward may be subject to rearrangement.
  • the predetermined frequency f is P.
  • The criteria for rearranging the collection of samples subject to rearrangement are as described above. Note that when the first predetermined frequency is set, the predetermined frequency f (the second predetermined frequency) is smaller than the first predetermined frequency.
  • In this case, the input sample sequence F(j) (1 ≤ j ≤ jmax) is rearranged into F(1), ..., F(T+1), F(2T-1), F(2T), F(2T+1), F(3T-1), F(3T), F(3T+1), F(4T-1), F(4T), F(4T+1), F(5T-1), F(5T), F(5T+1), F(T+2), ..., F(2T-2), F(2T+2), ..., F(3T-2), F(3T+2), ..., F(4T-2), F(4T+2), ..., F(5T-2), F(5T+2), ..., F(jmax) (see FIG. 4).
  • The upper limit N or the first predetermined frequency that determines the maximum value of the numbers j subject to rearrangement need not be a value common to all frames; a different upper limit N or first predetermined frequency may be set for each frame.
  • In this case, information specifying the upper limit N or the first predetermined frequency for each frame may be sent to the decoding side.
  • Alternatively, the number of sample groups subject to rearrangement may be specified. In this case, the number of sample groups is set for each frame, and information specifying it may be sent to the decoding side. Of course, the number of sample groups subject to rearrangement may instead be common to all frames.
  • Similarly, the second predetermined frequency f need not be a value common to all frames; a different second predetermined frequency f may be set for each frame. In this case, information specifying the second predetermined frequency for each frame may be sent to the decoding side.
  • In short, the rearrangement unit 5 may rearrange at least some of the samples included in the input sample sequence so that the envelope of the sample indices shows a downward trend as the frequency increases.
  • In the above description, an example is shown in which all the samples included in the frequency domain sample sequence are positive values, so that it can easily be seen that samples with larger amplitudes become concentrated on the low frequency side by the rearrangement. In practice, each sample included in the frequency domain sample sequence can be positive, negative, or zero; even in such cases, the rearrangement process described above or below may simply be applied.
  • In the above example, the rearrangement collects, on the low frequency side, one or more consecutive samples including the sample corresponding to the periodicity or fundamental frequency and one or more consecutive samples including the samples corresponding to its integer multiples.
  • Conversely, the rearrangement may be performed so as to collect, on the high frequency side, one or more consecutive samples including the sample corresponding to the periodicity or fundamental frequency and one or more consecutive samples including the samples corresponding to its integer multiples.
  • In that case, the sample groups are arranged in reverse order in the sample sequence A, the sample sets are arranged in reverse order in the sample sequence B, the sample sequence B is placed on the low frequency side, and the sample sequence A is placed after the sample sequence B. That is, in the above example, the arrangement starts from the sixth sample set F(5T+2), ..., F(jmax).
  • In short, the rearrangement unit 5 may rearrange at least some of the samples included in the input sample sequence so that the envelope of the sample indices shows an upward trend as the frequency increases.
  • The interval T may be a decimal number (for example, 5.0, 5.25, 5.5, 5.75) instead of an integer.
  • In this case, with R(nT) denoting nT rounded to the nearest integer, the samples F(R(nT)-1), F(R(nT)), and F(R(nT)+1) are selected.
  • Since the optimum interval T depends on the input sample sequence, it is preferable to set a value for the interval T according to the input sample sequence, that is, for each frame.
  • As a method of determining the interval T for each frame, for example, a method of searching over candidate periods of the sample indices (absolute values or square values) and setting the interval T so that the bias of the mean absolute value or the mean square value becomes large may be adopted.
  • For example, E(T) is obtained by adding the indices of all the samples included in the sample groups selected for a candidate interval T.
  • Here the index of a sample F(j) is represented as its absolute value |F(j)|, and M denotes the set of the numbers j of all the samples included in the sample groups selected for T.
  • That is, E(T) = Σ_{j∈M} |F(j)|.
  • In the above example, E(T) = |F(T-1)| + |F(T)| + |F(T+1)| + |F(2T-1)| + |F(2T)| + |F(2T+1)| + |F(3T-1)| + |F(3T)| + |F(3T+1)| + |F(4T-1)| + |F(4T)| + |F(4T+1)| + |F(5T-1)| + |F(5T)| + |F(5T+1)|.
  • The interval T may then be chosen so that the average E(T)/card(M) becomes large, where card(M) represents the number of elements (cardinality) of the set M.
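The candidate search over T sketched above can be written directly; this is an illustrative sketch (helper names are ours, not the patent's), again assuming three samples per group and n = 1..n_max.

```python
def average_index(F, T, n_max):
    """E(T)/card(M): the mean absolute value over the samples selected
    for candidate interval T (the sample index here is |F(j)|)."""
    jmax = len(F)
    M = set()
    for n in range(1, n_max + 1):
        for j in (n * T - 1, n * T, n * T + 1):
            if 1 <= j <= jmax:
                M.add(j)
    return sum(abs(F[j - 1]) for j in M) / len(M)

def choose_interval(F, candidates, n_max):
    """Pick the candidate T whose selected sample groups show the
    largest average magnitude (largest bias of the mean index)."""
    return max(candidates, key=lambda T: average_index(F, T, n_max))
```

For a sequence with amplitude peaks at every 5th sample, `choose_interval` selects T = 5 from the candidates {4, 5, 6}.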
  • Alternatively, the frequency domain period (interval) T may be obtained by converting the fundamental frequency or the time domain pitch period obtained by another means (not shown) in the encoding apparatus 100. Further, instead of determining the interval T from periodicity as described above, a method may be adopted of determining the interval T so that a long run of zero-amplitude samples appears in the latter half of the sample sequence B when the sample groups are collected on the low frequency side, or in the first half of the sample sequence B when the sample groups are collected on the high frequency side.
  • Alternatively, a method may be adopted in which the rearrangement unit 5 rearranges the sample sequence based on each of a plurality of preset values of T, the encoding unit 6 described later calculates the code amount of the code string corresponding to each value of T, and the interval T with the smallest code amount is selected.
  • In this case, the auxiliary information specifying the rearrangement of the sample sequence, described later, is output from the encoding unit 6 instead of the rearrangement unit 5.
  • Alternatively, the interval T can be set to a predetermined value common to all frames.
  • The rearrangement unit 5 or the encoding unit 6 outputs auxiliary information (first auxiliary information) specifying the rearrangement of the sample sequence, that is, information indicating the periodicity of the acoustic signal, information indicating the fundamental frequency, or information indicating the interval T between a sample corresponding to the periodicity or fundamental frequency of the acoustic signal and a sample corresponding to an integer multiple thereof. For example, when the interval T is determined for each frame, the auxiliary information specifying the rearrangement of the sample sequence is also output for each frame.
  • the auxiliary information for specifying the rearrangement of the sample sequence is obtained by encoding the periodicity, the fundamental frequency, or the interval T for each frame.
  • This encoding may be fixed-length encoding, or variable-length encoding to reduce the average code amount.
  • When variable-length encoding is used, information obtained by variable-length coding the difference between the interval T of the previous frame and that of the current frame may be used as the information representing the interval T.
  • Similarly, information obtained by variable-length coding the difference between the fundamental frequency of the previous frame and that of the current frame may be used as the information representing the fundamental frequency. If information representing the fundamental frequency is obtained by another means (not shown) in the encoding apparatus 100, the information representing the fundamental frequency obtained by that means, rather than by the rearrangement unit 5, may be used as the auxiliary information specifying the rearrangement of the sample sequence.
  • When n can be selected from a plurality of options, the upper limit value of n or the upper limit N described above may be included in the auxiliary information specifying the rearrangement of the sample sequence.
  • In the above description, an example is shown in which the number of samples included in each sample group is fixed at a total of three samples: a sample corresponding to the periodicity, the fundamental frequency, or an integer multiple thereof (hereinafter referred to as a central sample) and the one sample on each side of it.
  • When the number of samples included in a sample group and their indices are variable, information representing which option was selected from among a plurality of options that differ in the number of samples included in the sample group and the combination of sample indices is also included in the auxiliary information specifying the rearrangement of the sample sequence.
  • In this case, a method may be adopted in which the rearrangement unit 5 performs the rearrangement corresponding to each option, the encoding unit 6 described later obtains the code amount of the code string corresponding to each option, and the option with the smallest code amount is selected.
  • In this case, the auxiliary information specifying the rearrangement of the sample sequence is output from the encoding unit 6 instead of the rearrangement unit 5. This method is also valid when n can be selected.
  • The options include, for example, options related to the interval T, options related to the combination of the number of samples included in the sample group and the sample indices, and options related to n, and the total number of combinations of these options can be considerable.
  • Calculating the final code amount for every combination of these options requires a large processing amount, which may be a problem from the viewpoint of efficiency.
  • To avoid this, for example, the candidates for the interval T may first be narrowed down to a small number, and for each remaining candidate the combinations with the number of samples included in the sample group may then be examined to select the most preferable option.
  • Alternatively, the sum of the sample indices may be measured approximately, and the option may be selected based on the concentration of the sample indices in the low frequency range, or on the number of consecutive zero-amplitude samples from the highest frequency toward the low frequency range on the frequency axis. Specifically, the sum of the absolute values of the amplitudes of the rearranged sample sequence is obtained for the region occupying the lowest quarter of the entire sample sequence, and if the sum is larger than a predetermined threshold value, the rearrangement is assumed to be preferable. Also, with the method of selecting the option having the longest run of consecutive zero-amplitude samples from the highest frequency of the rearranged sample sequence toward the low frequency side, samples with large indices are concentrated in the low frequency range, so such a rearrangement is also assumed to be preferable.
  • These approximate selection methods require a small processing amount, but may fail to select the rearrangement that minimizes the final code amount. For this reason, a plurality of candidates may be selected by the approximation process as described above, and the code amount may finally be calculated accurately for only this small number of candidates to select the most preferable one (the one with the smallest code amount).
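The two approximate criteria just described are simple enough to show directly; the following sketch (function names are ours) computes the low-band concentration and the zero run from the highest frequency.

```python
def low_band_concentration(samples):
    """Approximate criterion from the text: the sum of absolute
    amplitudes over the lowest-frequency quarter of the sequence."""
    quarter = len(samples) // 4
    return sum(abs(v) for v in samples[:quarter])

def trailing_zero_run(samples):
    """Number of consecutive zero-amplitude samples counted from the
    highest frequency toward lower frequencies."""
    run = 0
    for v in reversed(samples):
        if v != 0:
            break
        run += 1
    return run
```

A rearrangement candidate whose `low_band_concentration` exceeds the threshold, or whose `trailing_zero_run` is longest among the options, would be kept for the exact code-amount comparison.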
  • The encoding unit 6 encodes the sample sequence output from the rearrangement unit 5 and outputs the obtained code string (step S6). For example, the encoding unit 6 performs encoding by switching the variable-length encoding method according to the amplitude bias of the samples included in the sample sequence output by the rearrangement unit 5. That is, since the rearrangement unit 5 collects samples with large amplitudes on the low frequency side (or high frequency side) of the frame, the encoding unit 6 performs variable-length encoding by a method suited to that bias.
  • For example, the average code amount can be reduced by performing Rice coding with a different Rice parameter for each region.
  • Hereinafter, the case where samples with large amplitudes are collected on the low frequency side (the side closer to the head of the frame) of the frame will be described as an example.
  • The encoding unit 6 applies Rice encoding (also referred to as Golomb-Rice encoding) to each sample in the region where samples with large amplitudes are gathered. In the region other than this, the encoding unit 6 applies entropy encoding (Huffman encoding, arithmetic encoding, etc.) suited to encoding sets of samples in which a plurality of samples are grouped together.
  • The application region of Rice encoding and the Rice parameter may be fixed, or the configuration may allow one of a plurality of options with different combinations of the application region and the Rice parameter to be selected.
  • In the latter case, a variable-length code (a binary value enclosed in double quotes) can be used as selection information for the Rice encoding, and the encoding unit 6 also outputs this selection information.
  • For example, one option may specify that Rice coding with the Rice parameter set to 2 is applied to the region occupying 1/16 of the sequence from its head.
  • A method may also be adopted in which the code amounts of the code strings obtained by the encoding process for each Rice-encoding option are compared and the option with the smallest code amount is selected.
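The Rice (Golomb-Rice) coding referred to above, in its textbook form, encodes a nonnegative integer as a unary quotient followed by s remainder bits; signed samples are first mapped to nonnegative integers. This is a generic sketch, not the patent's exact bitstream format.

```python
def to_unsigned(v):
    """Map a signed sample to a nonnegative integer
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...)."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(x, s):
    """Rice code of a nonnegative integer x with Rice parameter s:
    the quotient x >> s in unary ('1' * q followed by '0'),
    then the s low-order bits of x."""
    q = x >> s
    bits = "1" * q + "0"
    if s > 0:
        bits += format(x & ((1 << s) - 1), "0{}b".format(s))
    return bits
```

With s = 2, the value 9 yields the quotient 2 (unary "110") and the remainder 1 ("01"), i.e. "11001"; a larger Rice parameter shortens codes for large-amplitude regions, which is why a different parameter per region pays off.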
  • In addition, the average code amount can be reduced by, for example, run-length encoding the number of consecutive zero-amplitude samples.
  • In this case, the encoding unit 6 (1) applies Rice encoding to each sample in the region where samples with large amplitudes are gathered, and, in the regions other than this, (2)(a) in regions where zero-amplitude samples are consecutive, performs encoding that outputs a code representing the number of consecutive zero-amplitude samples, and (b) in the remaining regions, applies entropy encoding (Huffman encoding, arithmetic encoding, etc.) to sets of samples in which a plurality of samples are grouped together. Even in such a case, the selection among Rice-encoding options described above may be performed. In such a case, information indicating to which region run-length encoding has been applied needs to be transmitted to the decoding side; for example, this information is included in the selection information. Further, when a plurality of encoding methods belonging to entropy encoding are prepared as options, information specifying which encoding was selected needs to be transmitted to the decoding side; this information is also included in the selection information.
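The run-length idea for zero-amplitude regions can be sketched as follows; the token format `(0, run_length)` is ours for illustration only, not the patent's code format.

```python
def encode_zero_runs(samples):
    """Replace each run of zero-amplitude samples by a single
    (0, run_length) token, leaving nonzero samples as-is."""
    out, run = [], 0
    for v in samples:
        if v == 0:
            run += 1
        else:
            if run:
                out.append((0, run))
                run = 0
            out.append(v)
    if run:
        out.append((0, run))
    return out
```

Because the rearrangement pushes large samples to one end of the frame, the opposite end tends to contain long zero runs, so each run collapses into one token.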
  • Alternatively, the rearrangement unit 5 may also output the sample string before rearrangement (the sample string that has not been rearranged), and the encoding unit 6 may variable-length encode both the sample string before rearrangement and the rearranged sample string, compare the code amount of the code string obtained by variable-length coding the sample string before rearrangement with the code amount of the code string obtained by variable-length coding the rearranged sample string while switching the variable-length encoding for each region, and, when the code amount for the sample string before rearrangement is the minimum, output the code string obtained by variable-length coding the sample string before rearrangement.
  • In this case, auxiliary information (second auxiliary information) indicating whether or not the sample sequence corresponding to the code string is a rearranged sample sequence is also output. One bit is sufficient for the second auxiliary information.
  • When the second auxiliary information specifies that the sample sequence corresponding to the code string is not a rearranged one, the first auxiliary information need not be output.
  • Alternatively, the rearrangement of the sample sequence may be applied only when the prediction gain or its estimated value is larger than a predetermined threshold value.
  • This utilizes the property of voice and musical tone that vocal cord vibration and instrument vibration are strong and the periodicity is often high when the prediction gain is large.
  • the prediction gain is the original sound energy divided by the prediction residual energy.
  • Quantized parameters can be used in common by the encoding device and the decoding device.
  • For example, the encoding unit 6 uses the i-th quantized PARCOR coefficient k(i) obtained by another means (not shown) in the encoding apparatus 100 to compute the product of (1 - k(i) * k(i)) over each order i, and calculates the estimated value of the prediction gain given by the reciprocal of that product. If the calculated estimated value is larger than a predetermined threshold, a code string obtained by variable-length coding the rearranged sample string is output; otherwise, a code string obtained by variable-length coding the sample string before rearrangement is output. In this case, it is not necessary to output the second auxiliary information indicating whether or not the sample sequence corresponding to the code string is a rearranged one. That is, since the effect of the rearrangement is likely to be small for noisy speech or silence where prediction does not work well, deciding not to perform the rearrangement in such cases saves both the second auxiliary information and computation.
  • Alternatively, the rearrangement unit 5 may calculate the prediction gain or its estimated value and, when it is larger than a predetermined threshold value, output the rearranged sample string to the encoding unit 6; otherwise, the sample string input to the rearrangement unit 5 is output to the encoding unit 6 as it is, without rearrangement. In either case, the encoding unit 6 variable-length encodes the sample string output from the rearrangement unit 5.
  • the threshold value is set in advance as a common value on the encoding side and the decoding side.
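The prediction-gain estimate described above, the reciprocal of the product of (1 - k(i)^2) over the quantized PARCOR coefficients, can be written directly; the threshold in the usage sketch is an assumed value, not one given by the patent.

```python
def estimated_prediction_gain(parcor):
    """Estimated prediction gain from quantized PARCOR coefficients
    k(1)..k(p): 1 / product over i of (1 - k(i)^2)."""
    prod = 1.0
    for k in parcor:
        prod *= 1.0 - k * k
    return 1.0 / prod

def should_rearrange(parcor, threshold):
    """Apply the rearrangement only when the estimated prediction
    gain exceeds the threshold shared by encoder and decoder."""
    return estimated_prediction_gain(parcor) > threshold
```

Because the decoder holds the same quantized PARCOR coefficients and the same threshold, it can reproduce this decision without any extra auxiliary information.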
  • When arithmetic coding is used, a symbol-sequence frequency table for arithmetic coding is selected from the immediately preceding symbol sequence.
  • Arithmetic coding is then performed by dividing the interval [0, 1] according to the appearance probabilities of the symbol sequences and assigning, as the code for a symbol sequence, a binary fraction indicating a position within the corresponding subinterval.
  • That is, the rearranged frequency domain sample sequence (the quantized MDCT coefficient sequence in the above example) is divided into symbols sequentially from the low frequency side, and a frequency table for arithmetic coding is generated.
  • The interval [0, 1] is then divided according to the appearance probabilities of the symbol sequences, and a code for the symbol sequence is assigned as a binary fraction indicating a position within the divided interval.
  • Since the rearrangement process has already gathered together samples whose indices reflecting the sample magnitude (for example, the absolute value of the amplitude) are the same or similar, the fluctuation of the index between adjacent samples is small, the accuracy of the symbol frequency table is high, and the total code amount of the codes obtained by arithmetic coding of the symbols can be suppressed.
  • the decoding apparatus 200 receives at least the gain information, the auxiliary information, the code string, and the prediction coefficient code. When selection information is output from the encoding apparatus 100, this selection information is also input to the decoding apparatus 200.
  • The linear prediction coefficient decoding unit 10 decodes the input prediction coefficient code by a conventional decoding technique for each frame to obtain the coefficients W(1), ..., W(N) of the power spectrum envelope coefficient sequence (step S10).
  • If necessary, the linear prediction coefficient decoding unit 10 also obtains the PARCOR coefficients corresponding to the linear prediction coefficients.
  • The conventional decoding technique is, for example, a technique of decoding the prediction coefficient code to obtain linear prediction coefficients when the prediction coefficient code is a code corresponding to linear prediction coefficients, a technique of decoding the prediction coefficient code to obtain LSP parameters when the prediction coefficient code is a code corresponding to LSP parameters, a technique of decoding the prediction coefficient code to obtain PARCOR coefficients when the prediction coefficient code is a code corresponding to PARCOR coefficients, and the like.
  • It is well known that linear prediction coefficients, LSP parameters, PARCOR coefficients, and the power spectrum envelope coefficient sequence are mutually convertible, and that the conversion process may be performed according to the form of the input prediction coefficient code and the information required by subsequent processing. Accordingly, the decoding process of the prediction coefficient code together with any conversion process performed as necessary constitutes "decoding by a conventional decoding technique".
  • Decoding unit 11 decodes the input code string for each frame and outputs a frequency domain sample string (step S11).
  • the decoding unit 11 performs a decoding process on the input code string using a decoding method according to the selection information.
  • a decoding method corresponding to the encoding method executed to obtain the code string is executed.
  • The details of the decoding process performed by the decoding unit 11 correspond to the details of the encoding process performed by the encoding unit 6 of the encoding apparatus 100. The description of the encoding process is therefore incorporated here, with the understanding that the decoding corresponding to the encoding actually performed is the decoding process performed by the decoding unit 11; this serves as the detailed description of the decoding process.
  • When selection information is input, the encoding method that was executed is specified by the selection information.
  • Since the selection information includes, for example, information specifying the application region and the Rice parameter for Rice coding, information indicating the application region of run-length encoding, and information specifying the type of entropy encoding, the decoding methods corresponding to these encoding methods are applied to the corresponding regions of the input code string.
  • Since the decoding process corresponding to Rice encoding, the decoding process corresponding to entropy encoding, and the decoding process corresponding to run-length encoding are all well known, their description is omitted (for example, see Reference 1 above).
  • The recovery unit 12 restores, for each frame, the original sample arrangement according to the auxiliary information (first auxiliary information) specifying the rearrangement of the sample sequence included in the input auxiliary information (step S12).
  • The "original sample arrangement" corresponds to the "frequency domain sample string" input to the rearrangement unit 5 of the encoding apparatus 100.
  • In other words, based on the first auxiliary information, the recovery unit 12 can restore the frequency domain sample sequence output from the decoding unit 11 to the original sample arrangement.
  • When the auxiliary information includes the second auxiliary information, the recovery unit 12 restores the frequency domain sample sequence output by the decoding unit 11 to the original sample arrangement and outputs it if the second auxiliary information indicates that rearrangement was performed; if it indicates that rearrangement was not performed, the frequency domain sample sequence output by the decoding unit 11 is output as it is.
  • Alternatively, the recovery unit 12, for example, uses the i-th quantized PARCOR coefficient k(i) input from the linear prediction coefficient decoding unit 10 in the decoding apparatus 200 to compute the product of (1 - k(i) * k(i)) over each order i, and calculates the estimated value of the prediction gain given by the reciprocal of that product. If the calculated estimated value is larger than the predetermined threshold, the frequency domain sample sequence output by the decoding unit 11 is restored to the original sample arrangement and output; otherwise, the frequency domain sample sequence output by the decoding unit 11 is output as it is.
  • The details of the recovery process performed by the recovery unit 12 correspond to the details of the rearrangement process performed by the rearrangement unit 5 of the encoding apparatus 100. The description of the rearrangement process is therefore incorporated here, with the understanding that the reverse of the rearrangement process (the reverse sorting) is the recovery process performed by the recovery unit 12; this serves as the detailed description of the recovery process.
  • In the above-described example in which the rearrangement unit 5 collects the sample groups on the low frequency side and outputs F(T-1), F(T), F(T+1), F(2T-1), F(2T), F(2T+1), F(3T-1), F(3T), F(3T+1), F(4T-1), F(4T), F(4T+1), F(5T-1), F(5T), F(5T+1), F(1), ..., F(T-2), F(T+2), ..., F(2T-2), F(2T+2), ..., F(3T-2), F(3T+2), ..., F(4T-2), F(4T+2), ..., F(5T-2), F(5T+2), ..., F(jmax), the recovery unit 12 receives this frequency domain sample sequence as input.
  • The auxiliary information includes, for example, information on the interval T, information indicating that n takes the integers 1 to 5, and information specifying that each sample group consists of 3 samples. Based on this auxiliary information, the recovery unit 12 can therefore return the input sample sequence F(T-1), F(T), F(T+1), F(2T-1), F(2T), F(2T+1), F(3T-1), F(3T), F(3T+1), F(4T-1), F(4T), F(4T+1), F(5T-1), F(5T), F(5T+1), F(1), ..., F(T-2), F(T+2), ..., F(2T-2), F(2T+2), ..., F(3T-2), F(3T+2), ..., F(4T-2), F(4T+2), ..., F(5T-2), F(5T+2), ..., F(jmax) to the original sample sequence F(j) (1 ≤ j ≤ jmax).
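The recovery (reverse sorting) on the decoding side is the exact inverse permutation of the rearrangement; a sketch (ours, under the same assumptions as before: n = 1..n_max, three samples per group, a list holding F(1)..F(jmax)):

```python
def recover(rearranged, T, n_max):
    """Undo the rearrangement: rebuild F(1)..F(jmax) from a sequence
    whose head holds the sample groups F(nT-1), F(nT), F(nT+1) for
    n = 1..n_max, followed by the remaining samples in order."""
    jmax = len(rearranged)
    selected, seen = [], set()
    for n in range(1, n_max + 1):
        for j in (n * T - 1, n * T, n * T + 1):
            if 1 <= j <= jmax and j not in seen:
                selected.append(j)
                seen.add(j)
    rest = [j for j in range(1, jmax + 1) if j not in seen]
    original = [None] * jmax
    for pos, j in enumerate(selected + rest):
        original[j - 1] = rearranged[pos]
    return original
```

Since the decoder recomputes the same permutation from T and n_max carried in the auxiliary information, scattering each received sample back to position j - 1 restores the original order exactly.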
  • the inverse quantization unit 13 inversely quantizes the original sample sequence output by the recovery unit 12 for each frame (step S13). If described in correspondence with the above example, the “weighted normalized MDCT coefficient sequence normalized by gain” input to the quantization unit 4 of the encoding apparatus 100 is obtained by inverse quantization.
  • The gain multiplication unit 14 multiplies, for each frame, each coefficient of the "weighted normalized MDCT coefficient sequence normalized by gain" output from the inverse quantization unit 13 by the gain specified by the gain information to obtain a "normalized weighted normalized MDCT coefficient sequence" (step S14).
  • The weighted envelope inverse normalization unit 15 obtains, for each frame, an "MDCT coefficient sequence" by applying to each coefficient of the "normalized weighted normalized MDCT coefficient sequence" output from the gain multiplication unit 14 correction coefficients obtained from the power spectrum envelope coefficient sequence obtained by the linear prediction coefficient decoding unit 10 (step S15).
  • Specifically, the weighted envelope inverse normalization unit 15 applies to each coefficient of the "normalized weighted normalized MDCT coefficient sequence" output from the gain multiplication unit 14 correction coefficients given by the γ-th power values W(1)^γ, ..., W(N)^γ (0 < γ ≤ 1) of the corresponding coefficients of the power spectrum envelope coefficient sequence, thereby obtaining each coefficient X(1), ..., X(N) of the MDCT coefficient sequence.
  • time domain conversion unit 16 converts the “MDCT coefficient sequence” output from the weighted envelope inverse normalization unit 15 into the time domain for each frame to obtain a frame-based audio-acoustic digital signal (step S16).
  • As described above, high-efficiency coding can be performed (that is, the average code length can be reduced) by coding a sample sequence rearranged according to the fundamental frequency.
  • In addition, since samples with the same or similar indices are concentrated in each local region by rearranging the sample sequence, not only the variable-length coding efficiency but also the quantization distortion and the code amount can be reduced.
  • The encoding device / decoding device includes an input unit to which a keyboard or the like can be connected, an output unit to which a liquid crystal display or the like can be connected, a CPU (Central Processing Unit) [which may include a cache memory or the like], a RAM (Random Access Memory) or ROM (Read Only Memory), an external storage device such as a hard disk, and a bus connecting the input unit, output unit, CPU, RAM, ROM, and external storage device so that data can be exchanged among them. If necessary, the encoding device / decoding device may be provided with a device (drive) that can read from and write to a storage medium such as a CD-ROM.
  • The external storage device of the encoding device / decoding device stores a program for executing the encoding / decoding and the data necessary for the processing of this program [the storage is not limited to the external storage device; for example, the program may be stored in a ROM, which is a read-only storage device]. Data obtained by the processing of these programs is stored in the RAM or the external storage device as appropriate.
  • a storage device that stores data, addresses of storage areas, and the like is simply referred to as a “storage unit”.
  • Specifically, the storage unit of the encoding device stores a program for rearranging the frequency domain sample sequence derived from the audio acoustic signal, a program for encoding the sample sequence obtained by the rearrangement, and the like.
  • The storage unit of the decoding device stores a program for decoding the input code string, a program for restoring the sample sequence obtained by decoding to the sample arrangement before rearrangement by the encoding device, and the like.
  • in the encoding device, each program stored in the storage unit and the data necessary for the processing of each program are read into the RAM as necessary, and are interpreted and executed by the CPU.
  • as a result, the CPU implements predetermined functions (the sorting unit and the encoding unit), whereby the encoding is realized.
  • in the decoding device, each program stored in the storage unit and the data necessary for the processing of each program are likewise read into the RAM as necessary, and are interpreted and executed by the CPU.
  • as a result, the CPU implements predetermined functions (the decoding unit and the recovery unit), whereby the decoding is realized.
  • the present invention is not limited to the above-described embodiments, and modifications can be made as appropriate without departing from the spirit of the present invention.
  • the processes described in the above embodiments may be executed not only in time series according to the order of description, but also in parallel or individually, depending on the processing capability of the apparatus that executes the processes or as otherwise required.
  • for example, the process by the linear prediction coefficient decoding unit 10 and the process by the decoding unit 11 can be executed in parallel.
  • when the processing functions of the hardware entities (the encoding device / decoding device) described in the above embodiments are realized by a computer, the processing contents of the functions that the hardware entities should have are described by a program. By executing this program on the computer, the processing functions of the hardware entities are realized on the computer.
  • the program describing the processing contents can be recorded on a computer-readable recording medium.
  • the computer-readable recording medium may be any medium such as a magnetic recording device, an optical disc, a magneto-optical recording medium, or a semiconductor memory.
  • specifically, for example, a hard disk device, a flexible disk, a magnetic tape, or the like can be used as the magnetic recording device; a DVD (Digital Versatile Disc), a DVD-RAM (Random Access Memory), a CD-ROM (Compact Disc Read Only Memory), a CD-R (Recordable)/RW (ReWritable), or the like as the optical disc; an MO (Magneto-Optical disc) or the like as the magneto-optical recording medium; and an EEP-ROM (Electronically Erasable and Programmable-Read Only Memory) or the like as the semiconductor memory.
  • the program is distributed, for example, by selling, transferring, or lending a portable recording medium such as a DVD or a CD-ROM on which the program is recorded. Furthermore, the program may be distributed by storing it in a storage device of a server computer and transferring it from the server computer to other computers via a network.
  • a computer that executes such a program first stores a program recorded on a portable recording medium or a program transferred from a server computer in its own storage device.
  • the computer reads the program stored in its own recording medium and executes the process according to the read program.
  • as another form of execution, the computer may read the program directly from the portable recording medium and execute processing according to the program; furthermore, each time the program is transferred from the server computer to the computer, the computer may sequentially execute processing according to the received program.
  • alternatively, the above-described processing may be executed by a so-called ASP (Application Service Provider) type service that realizes the processing functions only through execution instructions and result acquisition, without transferring the program from the server computer to the computer.
  • the program in these embodiments includes information that is provided for processing by an electronic computer and that conforms to a program (such as data that is not a direct command to the computer but has a property that defines the processing of the computer).
  • in these embodiments, the hardware entities are configured by executing a predetermined program on a computer.
  • however, at least a part of these processing contents may be realized by hardware.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The present invention improves the quality obtained when the lower-order bits of an acoustic signal are encoded, with a small number of operations. An encoding process rearranges the samples contained in a frequency-domain sample sequence derived from an acoustic signal. For example, at least some of the samples contained in the sample sequence are rearranged so as to gather together one sample or a plurality of consecutive samples including a sample corresponding to the fundamental frequency, and one sample or a plurality of consecutive samples including a sample corresponding to an integer multiple of the fundamental frequency. Information specifying the rearrangement (side information) is also output. The sample sequence obtained by the rearrangement is then encoded. In the decoding process, a frequency-domain sample sequence is obtained by decoding an input code string, and the original sample sequence is obtained from the decoded sample sequence on the basis of the side information.
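A hedged sketch of the encode/decode flow described in the abstract: both sides derive the same permutation from the transmitted side information (here reduced, as a simplifying assumption, to just the fundamental-frequency interval `period`), so the decoder can undo the rearrangement without receiving a full index map; the function name `harmonic_order` and the parameter `width` are hypothetical:

```python
def harmonic_order(n, period, width=1):
    """Permutation bringing samples near integer multiples of `period`
    to the front. Shared by encoder and decoder, so only `period`
    needs to be transmitted as side information."""
    chosen, head = set(), []
    for k in range(period, n, period):
        for j in range(max(0, k - width), min(n, k + width + 1)):
            if j not in chosen:
                chosen.add(j)
                head.append(j)
    return head + [j for j in range(n) if j not in chosen]

# Encoder side: permute, then (variable-length) encode `permuted`; send `period`.
x = [float(i % 5) for i in range(16)]
period = 5
order = harmonic_order(len(x), period)
permuted = [x[j] for j in order]

# Decoder side: recompute the identical permutation from the side
# information and scatter the decoded samples back to their positions.
restored = [0.0] * len(permuted)
for pos, j in enumerate(harmonic_order(len(permuted), period)):
    restored[j] = permuted[pos]
assert restored == x
```

The design point is that the side information stays small: because the permutation is a deterministic function of the fundamental frequency, only that frequency (or an index identifying it) has to accompany the code string.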
PCT/JP2011/072752 2010-10-05 2011-10-03 Encoding method, decoding method, encoding device, decoding device, program, and recording medium WO2012046685A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012537696A JP5612698B2 (ja) 2010-10-05 2011-10-03 Encoding method, decoding method, encoding device, decoding device, program, and recording medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010225949 2010-10-05
JP2010-225949 2010-10-05

Publications (1)

Publication Number Publication Date
WO2012046685A1 true WO2012046685A1 (fr) 2012-04-12

Family

ID=45927681

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/072752 WO2012046685A1 (fr) 2010-10-05 2011-10-03 Encoding method, decoding method, encoding device, decoding device, program, and recording medium

Country Status (2)

Country Link
JP (1) JP5612698B2 (fr)
WO (1) WO2012046685A1 (fr)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012128022A (ja) * 2010-12-13 2012-07-05 Nippon Telegr & Teleph Corp <Ntt> Encoding method, decoding method, encoding device, decoding device, program, and recording medium
WO2012137617A1 (fr) 2011-04-05 2012-10-11 Nippon Telegraph and Telephone Corporation Encoding method, decoding method, encoding device, decoding device, program, and recording medium
WO2014054556A1 (fr) 2012-10-01 2014-04-10 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
WO2014118175A1 (fr) 2013-01-29 2014-08-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling concept
WO2015008783A1 (fr) * 2013-07-18 2015-01-22 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, method, and program, and recording medium
CN104321814A (zh) * 2012-05-23 2015-01-28 Nippon Telegraph and Telephone Corporation Encoding method, decoding method, encoding device, decoding device, program, and recording medium
WO2015053109A1 (fr) * 2013-10-09 2015-04-16 Sony Corporation Encoding device and method, decoding device and method, and program
WO2015146224A1 (fr) * 2014-03-24 2015-10-01 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
JP2016045462A (ja) * 2014-08-26 2016-04-04 Nippon Telegraph and Telephone Corporation Frequency-domain parameter sequence generation method, frequency-domain parameter sequence generation device, and program
WO2016121826A1 (fr) * 2015-01-30 2016-08-04 Nippon Telegraph and Telephone Corporation Encoding device, decoding device, methods therefor, program, and recording medium
US10553231B2 (en) 2012-11-15 2020-02-04 Ntt Docomo, Inc. Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09106299A (ja) * 1995-10-09 1997-04-22 Nippon Telegr & Teleph Corp <Ntt> Acoustic signal transform coding method and decoding method
JP2009501943A (ja) * 2005-07-15 2009-01-22 Microsoft Corporation Selective use of multiple entropy models in adaptive coding and decoding

Cited By (73)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012128022A (ja) * 2010-12-13 2012-07-05 Nippon Telegr & Teleph Corp <Ntt> Encoding method, decoding method, encoding device, decoding device, program, and recording medium
US10515643B2 (en) 2011-04-05 2019-12-24 Nippon Telegraph And Telephone Corporation Encoding method, decoding method, encoder, decoder, program, and recording medium
WO2012137617A1 (fr) 2011-04-05 2012-10-11 Nippon Telegraph and Telephone Corporation Encoding method, decoding method, encoding device, decoding device, program, and recording medium
US11024319B2 (en) 2011-04-05 2021-06-01 Nippon Telegraph And Telephone Corporation Encoding method, decoding method, encoder, decoder, program, and recording medium
US11074919B2 (en) 2011-04-05 2021-07-27 Nippon Telegraph And Telephone Corporation Encoding method, decoding method, encoder, decoder, program, and recording medium
EP3441967A1 (fr) 2011-04-05 2019-02-13 Nippon Telegraph and Telephone Corporation Decoding method, decoder, program, and recording medium
CN109147827A (zh) * 2012-05-23 2019-01-04 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
EP3385950A1 (fr) * 2012-05-23 2018-10-10 Nippon Telegraph and Telephone Corporation Audio decoding methods, audio decoders, and corresponding program and recording medium
CN104321814A (zh) * 2012-05-23 2015-01-28 Nippon Telegraph and Telephone Corporation Encoding method, decoding method, encoding device, decoding device, program, and recording medium
US9947331B2 (en) 2012-05-23 2018-04-17 Nippon Telegraph And Telephone Corporation Encoding method, decoding method, encoder, decoder, program and recording medium
US10096327B2 (en) 2012-05-23 2018-10-09 Nippon Telegraph And Telephone Corporation Long-term prediction and frequency domain pitch period based encoding and decoding
EP2830057A4 (fr) * 2012-05-23 2016-01-13 Nippon Telegraph and Telephone Corporation Encoding method, decoding method, encoding device, decoding device, program, and recording medium
JPWO2013176177A1 (ja) * 2012-05-23 2016-01-14 Nippon Telegraph and Telephone Corporation Encoding method, decoding method, encoding device, decoding device, program, and recording medium
CN109147827B (zh) * 2012-05-23 2023-02-17 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, and recording medium
EP3576089A1 (fr) * 2012-05-23 2019-12-04 Nippon Telegraph and Telephone Corporation Encoding of an audio signal
US10083703B2 (en) 2012-05-23 2018-09-25 Nippon Telegraph And Telephone Corporation Frequency domain pitch period based encoding and decoding in accordance with magnitude and amplitude criteria
CN104704559B (zh) * 2012-10-01 2017-09-15 Nippon Telegraph and Telephone Corporation Encoding method and encoding device
KR20150041090A (ko) 2012-10-01 2015-04-15 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
WO2014054556A1 (fr) 2012-10-01 2014-04-10 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
CN107316646B (zh) * 2012-10-01 2020-11-10 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, and recording medium
US9524725B2 (en) 2012-10-01 2016-12-20 Nippon Telegraph And Telephone Corporation Encoding method, encoder, program and recording medium
JP5893153B2 (ja) * 2012-10-01 2016-03-23 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
EP3525208A1 (fr) 2012-10-01 2019-08-14 Nippon Telegraph and Telephone Corporation Procédé de codage, codeur, programme et support d'enregistrement
EP3252762A1 (fr) 2012-10-01 2017-12-06 Nippon Telegraph and Telephone Corporation Procédé de codage, codeur, programme et support d'enregistrement
CN104704559A (zh) * 2012-10-01 2015-06-10 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
CN107316646A (zh) * 2012-10-01 2017-11-03 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
US20200126578A1 (en) 2012-11-15 2020-04-23 Ntt Docomo, Inc. Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program
US11211077B2 (en) 2012-11-15 2021-12-28 Ntt Docomo, Inc. Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program
US11749292B2 (en) 2012-11-15 2023-09-05 Ntt Docomo, Inc. Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program
US11195538B2 (en) 2012-11-15 2021-12-07 Ntt Docomo, Inc. Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program
US11176955B2 (en) 2012-11-15 2021-11-16 Ntt Docomo, Inc. Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program
US10553231B2 (en) 2012-11-15 2020-02-04 Ntt Docomo, Inc. Audio coding device, audio coding method, audio coding program, audio decoding device, audio decoding method, and audio decoding program
RU2660605C2 (ru) * 2013-01-29 2018-07-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling concept
US11031022B2 (en) 2013-01-29 2021-06-08 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Noise filling concept
WO2014118175A1 (fr) 2013-01-29 2014-08-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling concept
WO2014118176A1 (fr) 2013-01-29 2014-08-07 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling in perceptual transform audio coding
EP3761312A1 (fr) 2013-01-29 2021-01-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling in perceptual transform audio coding
EP3693962A1 (fr) 2013-01-29 2020-08-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling concept
US9524724B2 (en) 2013-01-29 2016-12-20 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Noise filling in perceptual transform audio coding
US9792920B2 (en) 2013-01-29 2017-10-17 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Noise filling concept
US10410642B2 (en) 2013-01-29 2019-09-10 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Noise filling concept
EP3471093A1 (fr) 2013-01-29 2019-04-17 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling in perceptual transform audio coding
EP3451334A1 (fr) 2013-01-29 2019-03-06 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling concept
RU2631988C2 (ru) * 2013-01-29 2017-09-29 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Noise filling in perceptual transform audio coding
JPWO2015008783A1 (ja) * 2013-07-18 2017-03-02 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, method, program, and recording medium
CN110085243B (zh) * 2013-07-18 2022-12-02 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, linear prediction analysis method, and recording medium
CN109979471B (zh) * 2013-07-18 2022-12-02 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, linear prediction analysis method, and recording medium
CN105378836A (zh) * 2013-07-18 2016-03-02 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, method, program, and recording medium
CN109887520B (zh) * 2013-07-18 2022-12-02 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, linear prediction analysis method, and recording medium
WO2015008783A1 (fr) * 2013-07-18 2015-01-22 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, method, and program, and recording medium
CN109887520A (zh) * 2013-07-18 2019-06-14 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, method, program, and recording medium
CN109979471A (zh) * 2013-07-18 2019-07-05 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, method, program, and recording medium
CN110085243A (zh) * 2013-07-18 2019-08-02 Nippon Telegraph and Telephone Corporation Linear prediction analysis device, method, program, and recording medium
US9781539B2 (en) 2013-10-09 2017-10-03 Sony Corporation Encoding device and method, decoding device and method, and program
WO2015053109A1 (fr) * 2013-10-09 2015-04-16 Sony Corporation Encoding device and method, decoding device and method, and program
JPWO2015053109A1 (ja) * 2013-10-09 2017-03-09 Sony Corporation Encoding device and method, decoding device and method, and program
RU2677597C2 (ru) * 2013-10-09 2019-01-17 Sony Corporation Encoding method and device, decoding method and device, and program
KR20160122257A (ko) 2014-03-24 2016-10-21 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
US10290310B2 (en) 2014-03-24 2019-05-14 Nippon Telegraph And Telephone Corporation Gain adjustment coding for audio encoder by periodicity-based and non-periodicity-based encoding methods
JP2017227904A (ja) * 2014-03-24 2017-12-28 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
KR101848899B1 (ko) 2014-03-24 2018-04-13 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
KR101826237B1 (ko) 2014-03-24 2018-02-13 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
KR101848898B1 (ko) 2014-03-24 2018-04-13 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
US9911427B2 (en) 2014-03-24 2018-03-06 Nippon Telegraph And Telephone Corporation Gain adjustment coding for audio encoder by periodicity-based and non-periodicity-based encoding methods
JPWO2015146224A1 (ja) * 2014-03-24 2017-04-13 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
WO2015146224A1 (fr) * 2014-03-24 2015-10-01 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
EP3385948A1 (fr) 2014-03-24 2018-10-10 Nippon Telegraph and Telephone Corporation Procédé de codage, codeur, programme et support d'enregistrement
CN106133830A (zh) * 2014-03-24 2016-11-16 Nippon Telegraph and Telephone Corporation Encoding method, encoding device, program, and recording medium
US10283132B2 (en) 2014-03-24 2019-05-07 Nippon Telegraph And Telephone Corporation Gain adjustment coding for audio encoder by periodicity-based and non-periodicity-based encoding methods
EP3413306A1 (fr) 2014-03-24 2018-12-12 Nippon Telegraph and Telephone Corporation Procédé de codage, codeur, programme et support d'enregistrement
JP2016045462A (ja) * 2014-08-26 2016-04-04 Nippon Telegraph and Telephone Corporation Frequency-domain parameter sequence generation method, frequency-domain parameter sequence generation device, and program
JPWO2016121826A1 (ja) * 2015-01-30 2017-11-02 Nippon Telegraph and Telephone Corporation Encoding device, decoding device, methods therefor, program, and recording medium
WO2016121826A1 (fr) * 2015-01-30 2016-08-04 Nippon Telegraph and Telephone Corporation Encoding device, decoding device, methods therefor, program, and recording medium

Also Published As

Publication number Publication date
JPWO2012046685A1 (ja) 2014-02-24
JP5612698B2 (ja) 2014-10-22

Similar Documents

Publication Publication Date Title
JP5612698B2 (ja) Encoding method, decoding method, encoding device, decoding device, program, and recording medium
US11074919B2 (en) Encoding method, decoding method, encoder, decoder, program, and recording medium
JP5596800B2 (ja) Encoding method, periodicity feature value determination method, periodicity feature value determination device, and program
US10083703B2 (en) Frequency domain pitch period based encoding and decoding in accordance with magnitude and amplitude criteria
JP5893153B2 (ja) Encoding method, encoding device, program, and recording medium
JP5694751B2 (ja) Encoding method, decoding method, encoding device, decoding device, program, and recording medium
JP2013120225A (ja) Encoding method, encoding device, program, and recording medium
JP5579932B2 (ja) Encoding method, device, program, and recording medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11830617

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2012537696

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11830617

Country of ref document: EP

Kind code of ref document: A1