WO2015146860A1 - Speech decoding device, speech encoding device, speech decoding method, speech encoding method, speech decoding program, and speech encoding program - Google Patents
Speech decoding device, speech encoding device, speech decoding method, speech encoding method, speech decoding program, and speech encoding program
- Publication number
- WO2015146860A1 (PCT application PCT/JP2015/058608)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- decoding
- time envelope
- speech
- signal
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/26—Pre-filtering or post-filtering
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/0204—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders using subband decomposition
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/028—Noise substitution, i.e. substituting non-tonal spectral components by noisy source
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/08—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
- G10L19/12—Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/167—Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/038—Speech enhancement, e.g. noise reduction or echo cancellation using band spreading techniques
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/24—Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
Definitions
- the present invention relates to a speech decoding device, a speech encoding device, a speech decoding method, a speech encoding method, a speech decoding program, and a speech encoding program.
- Speech coding technology, which compresses the data amount of speech and acoustic signals to several tenths of the original, is extremely important for signal transmission and storage.
- An example of a widely used speech coding technique is a transform coding method that codes a signal in the frequency domain.
- A bit allocation method that minimizes coding distortion allocates bits according to the signal power of each frequency band; bit allocation is also performed with human auditory perception taken into account.
- Patent Document 1 discloses a technique of approximating the transform coefficients of a frequency band to which fewer bits than a predetermined threshold are allocated with the transform coefficients of another frequency band. Patent Document 2 discloses a method of generating a pseudo-noise signal for components that are quantized to zero because their power in the frequency band is small, and a method of replicating such components from components that are not quantized to zero in another frequency band.
- Taking into account that the power of speech and acoustic signals is generally concentrated in the low frequency band rather than the high frequency band, and that the low frequency band has a larger influence on subjective quality, a band expansion technique that generates the high frequency band of the input signal from the encoded low frequency band is also widely used. Since the band expansion technique can generate the high frequency band with a small number of bits, high quality can be obtained at a low bit rate.
- Patent Document 3 discloses a method of generating the high frequency band by copying the spectrum of the low frequency band to the high frequency band and then adjusting the spectral shape based on information about the characteristics of the high frequency band spectrum transmitted from the encoder.
- In these techniques, frequency band components encoded with a small number of bits are generated so as to resemble the components of the original sound in the frequency domain. However, distortion may be conspicuous in the time domain, and quality may deteriorate.
- An object of the present invention is therefore to provide a speech decoding device, a speech encoding device, a speech decoding method, a speech encoding method, a speech decoding program, and a speech encoding program capable of reducing distortion in the time domain and improving the quality of frequency band components encoded with a small number of bits.
- A speech decoding device according to one aspect of the present invention is a speech decoding device that decodes an encoded speech signal and outputs the speech signal, and includes: a decoding unit that decodes an encoded sequence including the encoded speech signal to obtain a decoded signal; and a selective time envelope shaping unit that shapes the time envelope of a frequency band in the decoded signal based on decoding-related information related to decoding of the encoded sequence.
- Here, the time envelope of a signal represents the variation of the signal's energy or power (or parameters equivalent to these) in the time direction.
- A speech decoding device according to another aspect of the present invention is a speech decoding device that decodes an encoded speech signal and outputs the speech signal, and includes: a demultiplexing unit that separates an encoded sequence including the encoded speech signal and time envelope information related to the time envelope of the speech signal; a decoding unit that decodes the encoded sequence to obtain a decoded signal; and a selective time envelope shaping unit that shapes the time envelope of a frequency band in the decoded signal based on at least one of the time envelope information and decoding-related information related to decoding of the encoded sequence.
- With this configuration, based on time envelope information generated by referring to the speech signal input to the speech encoding device that generates and outputs the encoded sequence, the time envelope of the decoded signal in a frequency band encoded with a small number of bits can be shaped into a desired time envelope, improving quality.
- The decoding unit may include: a decoding/inverse quantization unit that decodes and/or inverse-quantizes the encoded sequence to obtain a frequency-domain decoded signal; a decoding-related information output unit that outputs, as the decoding-related information, at least one of information obtained in the decoding and/or inverse quantization process of the decoding/inverse quantization unit and information obtained by analyzing the encoded sequence; and a time-frequency inverse transform unit that transforms the frequency-domain decoded signal into a time-domain signal and outputs it. With this configuration, the time envelope of the decoded signal in a frequency band encoded with a small number of bits can be shaped into a desired time envelope, improving quality.
- The decoding unit may include: an encoded sequence analysis unit that separates the encoded sequence into a first encoded sequence and a second encoded sequence; a first decoding unit that decodes and/or inverse-quantizes the first encoded sequence to obtain a first decoded signal and obtains first decoding-related information as the decoding-related information; and a second decoding unit that obtains and outputs a second decoded signal using at least one of the second encoded sequence and the first decoded signal and outputs second decoding-related information as the decoding-related information.
- With this configuration, the time envelope of the decoded signal in a frequency band encoded with a small number of bits can be shaped into a desired time envelope, improving quality.
- The first decoding unit may include: a first decoding/inverse quantization unit that decodes and/or inverse-quantizes the first encoded sequence to obtain the first decoded signal; and a first decoding-related information output unit that outputs, as the first decoding-related information, at least one of information obtained in the decoding and/or inverse quantization process of the first decoding/inverse quantization unit and information obtained by analyzing the first encoded sequence.
- With this configuration, based at least on information related to the first decoding unit, the time envelope of the decoded signal in a frequency band encoded with a small number of bits can be shaped into a desired time envelope, improving quality.
- The second decoding unit may include: a second decoding/inverse quantization unit that obtains the second decoded signal using at least one of the second encoded sequence and the first decoded signal; and a second decoding-related information output unit that outputs, as the second decoding-related information, at least one of information obtained in the process of obtaining the second decoded signal in the second decoding/inverse quantization unit and information obtained by analyzing the second encoded sequence.
- The selective time envelope shaping unit may include: a time-frequency transform unit that transforms the decoded signal into a frequency-domain signal; a frequency-selective time envelope shaping unit that shapes the time envelope of each frequency band of the frequency-domain decoded signal based on the decoding-related information; and a time-frequency inverse transform unit that transforms the frequency-domain decoded signal whose time envelope has been shaped for each frequency band into a time-domain signal.
- the decoding related information may be information related to the number of encoded bits in each frequency band.
- the decoding related information may be information related to the quantization step of each frequency band. According to this configuration, according to the quantization step of each frequency band, the time envelope of the decoded signal of the frequency band can be shaped into a desired time envelope to improve the quality.
- the decoding related information may be information related to the coding scheme of each frequency band. According to this configuration, it is possible to improve the quality by shaping the time envelope of the decoded signal of the frequency band into a desired time envelope according to the encoding method of each frequency band.
- The decoding-related information may be information related to a noise component injected into each frequency band. According to this configuration, the time envelope of the decoded signal of each frequency band can be shaped into a desired time envelope according to the injected noise component, improving quality.
- The frequency-selective time envelope shaping unit may shape the time envelope into a desired time envelope by filtering, in the frequency domain, the decoded signal corresponding to the frequency band whose time envelope is to be shaped, using a filter based on linear prediction coefficients obtained by linear prediction analysis of that decoded signal. With this configuration, the time envelope of the decoded signal in a frequency band encoded with a small number of bits can be shaped into a desired time envelope using the frequency-domain decoded signal, improving quality.
- The frequency-selective time envelope shaping unit may, after replacing in the frequency domain the decoded signal corresponding to frequency bands whose time envelope is not to be shaped with another signal, shape the decoded signal corresponding to both the frequency bands whose time envelope is to be shaped and those whose time envelope is not to be shaped into a desired time envelope, and, after the time envelope shaping, return the decoded signal corresponding to the frequency bands whose time envelope is not shaped to the original signal before replacement.
- A speech decoding device according to yet another aspect of the present invention is a speech decoding device that decodes an encoded speech signal and outputs the speech signal, and includes: a decoding unit that decodes an encoded sequence including the encoded speech signal to obtain a decoded signal; and a time envelope shaping unit that shapes the decoded signal into a desired time envelope by filtering the decoded signal in the frequency domain using a filter based on linear prediction coefficients obtained by linear prediction analysis of the frequency-domain decoded signal.
- A speech encoding device according to one aspect of the present invention is a speech encoding device that encodes an input speech signal and outputs an encoded sequence, and includes: an encoding unit that encodes the speech signal to obtain an encoded sequence including the speech signal; a time envelope information encoding unit that encodes information related to the time envelope of the speech signal; and a multiplexing unit that multiplexes the encoded sequence obtained by the encoding unit and the encoded sequence of information related to the time envelope obtained by the time envelope information encoding unit.
- The aspects of the present invention described above can also be understood as a speech decoding method, a speech encoding method, a speech decoding program, and a speech encoding program, as follows.
- A speech decoding method according to one aspect of the present invention is a speech decoding method of a speech decoding device that decodes an encoded speech signal and outputs the speech signal, and includes: a decoding step of decoding an encoded sequence including the encoded speech signal to obtain a decoded signal; and a selective time envelope shaping step of shaping the time envelope of a frequency band in the decoded signal based on decoding-related information related to decoding of the encoded sequence.
- A speech decoding method according to another aspect of the present invention is a speech decoding method of a speech decoding device that decodes an encoded speech signal and outputs the speech signal, and includes: a decoding step of decoding an encoded sequence including the encoded speech signal to obtain a decoded signal; and a selective time envelope shaping step of shaping the time envelope of a frequency band in the decoded signal based on at least one of time envelope information and the decoding-related information.
- A speech decoding program according to one aspect of the present invention causes a computer to execute: a decoding step of decoding an encoded sequence including an encoded speech signal to obtain a decoded signal; and a selective time envelope shaping step of shaping the time envelope of a frequency band in the decoded signal based on decoding-related information related to decoding of the encoded sequence.
- A speech decoding program according to another aspect of the present invention causes a computer to execute: a decoding step of decoding an encoded sequence including an encoded speech signal to obtain a decoded signal; and a selective time envelope shaping step of shaping the time envelope of a frequency band in the decoded signal based on at least one of time envelope information and the decoding-related information.
- A speech decoding method according to yet another aspect of the present invention is a speech decoding method of a speech decoding device that decodes an encoded speech signal and outputs the speech signal, and includes: a decoding step of decoding an encoded sequence including the encoded speech signal to obtain a decoded signal; and a time envelope shaping step of shaping the decoded signal into a desired time envelope by filtering the decoded signal in the frequency domain using a filter based on linear prediction coefficients obtained by linear prediction analysis of the frequency-domain decoded signal.
- A speech encoding method according to one aspect of the present invention is a speech encoding method of a speech encoding device that encodes an input speech signal and outputs an encoded sequence, and includes: an encoding step of encoding the speech signal to obtain an encoded sequence including the speech signal; a time envelope information encoding step of encoding information related to the time envelope of the speech signal; and a multiplexing step of multiplexing the encoded sequence obtained in the encoding step and the encoded sequence of information related to the time envelope obtained in the time envelope information encoding step.
- A speech decoding program according to yet another aspect of the present invention causes a computer to execute: a decoding step of decoding an encoded sequence including an encoded speech signal to obtain a decoded signal; and a time envelope shaping step of shaping the decoded signal into a desired time envelope by filtering the decoded signal in the frequency domain using a filter based on linear prediction coefficients obtained by linear prediction analysis of the frequency-domain decoded signal.
- A speech encoding program according to one aspect of the present invention causes a computer to execute: an encoding step of encoding a speech signal to obtain an encoded sequence including the speech signal; a time envelope information encoding step of encoding information related to the time envelope of the speech signal; and a multiplexing step of multiplexing the encoded sequence obtained in the encoding step and the encoded sequence of information related to the time envelope obtained in the time envelope information encoding step.
- According to the present invention, it is possible to improve quality by shaping the time envelope of a decoded signal in a frequency band encoded with a small number of bits into a desired time envelope.
- FIG. 1 is a diagram illustrating the configuration of a speech decoding device 10 according to a first embodiment, and FIG. 2 is a flowchart showing its operation.
- FIG. 1 is a diagram illustrating a configuration of a speech decoding apparatus 10 according to the first embodiment.
- the communication device of the audio decoding device 10 receives an encoded sequence obtained by encoding an audio signal, and further outputs the decoded audio signal to the outside.
- the speech decoding apparatus 10 functionally includes a decoding unit 10a and a selective time envelope shaping unit 10b.
- FIG. 2 is a flowchart showing the operation of the speech decoding apparatus 10 according to the first embodiment.
- the decoding unit 10a decodes the encoded sequence and generates a decoded signal (step S10-1).
- The selective time envelope shaping unit 10b receives, from the decoding unit, the decoded signal and the decoding-related information obtained when decoding the encoded sequence, and selectively shapes the time envelope of the components of the decoded signal into a desired time envelope (step S10-2).
- the time envelope of a signal represents a change in signal energy or power (and parameters equivalent to these) in the time direction.
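- As an illustration of this definition (not part of the patent itself), the following sketch computes a time envelope as the per-segment energy of a time-domain signal; the segment length and helper name are arbitrary choices made only for the example.

```python
import numpy as np

def time_envelope(signal, segment_len=64):
    """Per-segment energy of a time-domain signal: one simple view of its time envelope."""
    n_seg = len(signal) // segment_len
    x = np.asarray(signal[:n_seg * segment_len], dtype=float).reshape(n_seg, segment_len)
    return np.sum(x ** 2, axis=1)  # energy of each time segment
```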
- FIG. 3 is a diagram illustrating a configuration of a first example of the decoding unit 10a of the speech decoding device 10 according to the first embodiment.
- the decoding unit 10a functionally includes a decoding / inverse quantization unit 10aA, a decoding related information output unit 10aB, and a time-frequency inverse conversion unit 10aC.
- FIG. 4 is a flowchart showing the operation of the first example of the decoding unit 10a of the speech decoding apparatus 10 according to the first embodiment.
- The decoding/inverse quantization unit 10aA generates a frequency-domain decoded signal by performing at least one of decoding and inverse quantization on the encoded sequence according to the encoding scheme of the encoded sequence (step S10-1-1).
- The decoding-related information output unit 10aB receives the decoding-related information obtained when the decoding/inverse quantization unit 10aA generates the decoded signal, and outputs it (step S10-1-2). Alternatively, it may receive and analyze the encoded sequence to obtain and output the decoding-related information.
- The decoding-related information may be, for example, the number of encoded bits for each frequency band, or information equivalent to this (for example, the average number of encoded bits per frequency component for each frequency band). It may also be the number of encoded bits for each frequency component, the quantization step size for each frequency band, or the quantized values of frequency components.
- Here, a frequency component is, for example, a transform coefficient of a predetermined time-frequency transform.
- energy or power for each frequency band may be used.
- It may also be information indicating a predetermined frequency band (or frequency component).
- It may also be information related to the time envelope shaping process: for example, at least one of information on whether or not to perform the time envelope shaping process, information on the time envelope to be shaped by the process, and information on the strength of the time envelope shaping. At least one of the above examples is output as the decoding-related information.
- the time-frequency inverse transform unit 10aC converts the frequency domain decoded signal into a time-domain decoded signal by a predetermined time-frequency inverse transform and outputs it (step S10-1-3).
- The frequency-domain decoded signal may also be output without being subjected to the time-frequency inverse transform; this applies when the selective time envelope shaping unit 10b requires a frequency-domain signal as its input.
- FIG. 5 is a diagram illustrating a configuration of a second example of the decoding unit 10a of the speech decoding device 10 according to the first embodiment.
- the decoding unit 10a functionally includes an encoded sequence analysis unit 10aD, a first decoding unit 10aE, and a second decoding unit 10aF.
- FIG. 6 is a flowchart showing the operation of the second example of the decoding unit 10a of the speech decoding apparatus 10 according to the first embodiment.
- the encoded sequence analysis unit 10aD analyzes the encoded sequence and separates it into a first encoded sequence and a second encoded sequence (step S10-1-4).
- The first decoding unit 10aE generates a first decoded signal by decoding the first encoded sequence using the first decoding method, and outputs first decoding related information that is information related to the decoding (step S10-1-5).
- the second decoding unit 10aF generates a decoded signal by decoding the second encoded sequence by the second decoding method using the first decoded signal, and generates second decoding related information that is information related to the decoding. Output (step S10-1-6).
- the combination of the first decoding related information and the second decoding related information is the decoding related information.
- FIG. 7 is a diagram illustrating a configuration of the first decoding unit of the second example of the decoding unit 10a of the speech decoding device 10 according to the first embodiment.
- the first decoding unit 10aE functionally includes a first decoding / inverse quantization unit 10aE-a and a first decoding related information output unit 10aE-b.
- FIG. 8 is a flowchart showing the operation of the first decoding unit of the second example of the decoding unit 10a of the speech decoding apparatus 10 according to the first embodiment.
- the first decoding / inverse quantization unit 10aE-a performs at least one of decoding and inverse quantization on the first encoded sequence according to the encoding scheme of the first encoded sequence, and performs the first A decoded signal is generated and output (step S10-1-5-1).
- The first decoding-related information output unit 10aE-b receives the first decoding-related information obtained when the first decoding/inverse quantization unit 10aE-a generates the first decoded signal, and outputs it (step S10-1-5-2). Alternatively, it may receive and analyze the first encoded sequence to obtain and output the first decoding-related information. Examples of the first decoding-related information may be the same as the examples of the decoding-related information output by the decoding-related information output unit 10aB. Furthermore, the first decoding-related information may be information indicating that the decoding method of the first decoding unit is the first decoding method, or information indicating the frequency bands (or frequency components) contained in the first decoded signal (that is, the frequency bands or frequency components of the speech signal encoded in the first encoded sequence).
- FIG. 9 is a diagram illustrating a configuration of the second decoding unit of the second example of the decoding unit 10a of the speech decoding device 10 according to the first embodiment.
- the second decoding unit 10aF functionally includes a second decoding / inverse quantization unit 10aF-a, a second decoding related information output unit 10aF-b, and a decoded signal combining unit 10aF-c.
- FIG. 10 is a flowchart showing the operation of the second decoding unit of the second example of the decoding unit 10a of the speech decoding apparatus 10 according to the first embodiment.
- The second decoding/inverse quantization unit 10aF-a generates and outputs a second decoded signal by performing at least one of decoding and inverse quantization on the second encoded sequence according to the encoding scheme of the second encoded sequence (step S10-1-6-1).
- the first decoded signal may be used when generating the second decoded signal.
- The decoding method of the second decoding unit (the second decoding method) may be a band extension method, or a band extension method that uses the first decoded signal. Furthermore, as shown in Patent Document 1 (Japanese Patent Laid-Open No. …), it may be a decoding method corresponding to an encoding method in which the transform coefficients of frequency bands to which the first encoding method allocates fewer bits than a predetermined threshold are approximated with the transform coefficients of other frequency bands. It may also be a decoding method corresponding to an encoding method in which, for frequency components quantized to zero by the first encoding method, the second encoding method generates a pseudo-noise signal or replicates the signal of another frequency component, or a decoding method corresponding to an encoding method in which the second encoding method approximates such frequency components using the signal of another frequency component. A frequency component quantized to zero by the first encoding method can also be interpreted as a frequency component that is not encoded by the first encoding method.
- Here, the decoding method corresponding to the first encoding method is the first decoding method, which is the decoding method of the first decoding unit, and the decoding method corresponding to the second encoding method is the second decoding method, which is the decoding method of the second decoding unit.
- The second decoding-related information output unit 10aF-b receives the second decoding-related information obtained when the second decoding/inverse quantization unit 10aF-a generates the second decoded signal, and outputs it (step S10-1-6-2). Alternatively, it may receive and analyze the second encoded sequence to obtain and output the second decoding-related information. Examples of the second decoding-related information may be the same as the examples of the decoding-related information output by the decoding-related information output unit 10aB.
- information indicating that the decoding method of the second decoding unit is the second decoding method may be used as the second decoding related information.
- information indicating that the second decoding method is a band extension method may be used as the second decoding related information.
- information indicating the band expansion scheme for each frequency band of the second decoded signal generated by the band expansion scheme may be used as the second decoding information.
- The information indicating the band expansion method for each frequency band may be, for example, information such as whether the signal of the band was generated by replicating a signal from another frequency band, by approximating it with a signal of another frequency band, by generating a pseudo-noise signal, or by adding a sine signal. Further, for example, when the signal of the band is approximated with a signal of another frequency band, information on the approximation method may be used. Further, for example, when whitening is used in approximating the signal of the band with a signal of another frequency band, information regarding the intensity of the whitening may be used as the second decoding-related information.
- Information regarding the level of the pseudo-noise signal may be used as the second decoding-related information.
- Information indicating that the second decoding method corresponds to an encoding method in which, for the transform coefficients of frequency bands to which the first encoding method allocates fewer bits than a predetermined threshold, either or both of an approximation with the transform coefficients of another frequency band and the transform coefficients of a pseudo-noise signal are added (or substituted) may be used as the second decoding-related information.
- information regarding the approximation method of the transform coefficient of the frequency band may be used as the second decoding related information.
- Information regarding the intensity of whitening may be used as the second decoding-related information.
- Information regarding the level of the pseudo-noise signal may be used as the second decoding-related information.
- Information indicating that the second encoding method is an encoding method that, for frequency components quantized to zero by the first encoding method (that is, not encoded by the first encoding method), generates a pseudo-noise signal or replicates the signal of another frequency component may be used as the second decoding-related information. For example, for each frequency component, information indicating whether or not that component is a frequency component quantized to zero by the first encoding method (that is, not encoded by the first encoding method) may be used as the second decoding-related information. For example, information indicating whether a pseudo-noise signal is generated for the frequency component or the signal of another frequency component is replicated may be used as the second decoding-related information.
- information regarding the duplication method may be used as the second decoding related information.
- the information regarding the duplication method may be, for example, the duplication source frequency.
- information on whether or not to add processing to the frequency component of the copy source at the time of duplication and information on the processing to be added may be used.
- information regarding the intensity of whitening may be used.
- information regarding the level of the pseudo noise signal may be used.
- the decoded signal synthesis unit 10aF-c synthesizes and outputs a decoded signal from the first decoded signal and the second decoded signal (step S10-1-6-3).
- For example, when the second encoding method is a band extension method, the first decoded signal is a low frequency band signal, the second decoded signal is a high frequency band signal, and the synthesized decoded signal has both frequency bands.
- FIG. 11 is a diagram showing a configuration of a first example of the selective time envelope shaping unit 10b of the speech decoding apparatus 10 according to the first embodiment.
- the selective time envelope shaping unit 10b functionally includes a time frequency conversion unit 10bA, a frequency selection unit 10bB, a frequency selective time envelope shaping unit 10bC, and a time frequency inverse conversion unit 10bD.
- FIG. 12 is a flowchart showing the operation of the first example of the selective time envelope shaping unit 10b of the speech decoding apparatus 10 according to the first embodiment.
- the time-frequency conversion unit 10bA converts the time-domain decoded signal into a frequency-domain decoded signal by a predetermined time-frequency conversion (step S10-2-1). However, when the decoded signal is a frequency domain signal, the time-frequency conversion unit 10bA and the processing step S10-2-1 can be omitted.
- the frequency selection unit 10bB uses at least one of the decoded signal in the frequency domain and the decoding related information to select a frequency band to be subjected to the time envelope shaping process in the decoded signal in the frequency domain (Step S10-2-2).
- a frequency component to be subjected to a time envelope shaping process may be selected.
- the selected frequency band (may be a frequency component) may be a part of the decoded signal (may be a frequency component), or may be the entire frequency band (may be a frequency component) of the decoded signal.
- a frequency band in which the number of encoded bits is smaller than a predetermined threshold may be selected as a frequency band to be subjected to the time envelope shaping process.
- the frequency band to be subjected to the time envelope shaping process can be selected by comparison with a predetermined threshold value.
- a frequency component whose number of encoded bits is smaller than a predetermined threshold may be selected as a frequency component to be subjected to the time envelope shaping process.
- a frequency component in which no transform coefficient is encoded may be selected as a frequency component to be subjected to the time envelope shaping process.
- When the decoding-related information is the quantization step size for each frequency band, a frequency band whose quantization step size is larger than a predetermined threshold may be selected as a frequency band to be subjected to the time envelope shaping process.
- When the decoding-related information is the quantized values of frequency components, the quantized values may be compared with a predetermined threshold to select the frequency bands to be subjected to the time envelope shaping process. For example, a frequency component whose quantized transform coefficient is smaller than a predetermined threshold may be selected as a frequency component to be subjected to the time envelope shaping process.
- the energy or power may be compared with a predetermined threshold value to select a frequency band on which time envelope shaping processing is performed. For example, when the energy or power of the frequency band that is the target of the selective time envelope shaping process is smaller than a predetermined threshold, the time envelope shaping process may not be performed on the frequency band.
- A frequency band that has not been subjected to another time envelope shaping process may be selected as a frequency band to be subjected to the time envelope shaping process of the present invention.
- When the decoding unit 10a has the configuration described in the second example of the decoding unit 10a, the frequency band decoded by the second decoding unit may be selected as the frequency band to be subjected to the time envelope shaping process, according to the encoding method corresponding to the second decoding unit. For example, when the encoding method corresponding to the second decoding unit is a band extension method, the frequency band decoded by the second decoding unit may be selected as the frequency band to be subjected to the time envelope shaping process.
- a frequency band obtained by replicating a signal from another frequency band by a band expansion method may be selected as a frequency band to be subjected to time envelope shaping processing.
- a frequency band obtained by approximating a signal of the frequency using a signal of another frequency band by the band expansion method may be selected as a frequency band to be subjected to the time envelope shaping process.
- the frequency band in which the pseudo noise signal is generated by the band expansion method may be selected as the frequency band on which the time envelope shaping process is performed.
- a frequency band excluding a frequency band to which a sine signal is added by a band expansion method may be selected as a frequency band to be subjected to time envelope shaping processing.
- When the decoding unit 10a has the configuration described in the second example of the decoding unit 10a, and the second encoding method is an encoding method in which, for the transform coefficients of frequency bands or components to which the first encoding method allocates fewer bits than a predetermined threshold (or frequency bands or components not encoded by the first encoding method), either or both of an approximation using the transform coefficients of other frequency bands or components and the transform coefficients of a pseudo-noise signal are added, a frequency band or component approximated using the transform coefficients of another frequency band or component may be selected as a frequency band or component to be subjected to the time envelope shaping process.
- the frequency band or component to which the conversion coefficient of the pseudo noise signal is added may be selected as the frequency band or component to be subjected to the time envelope shaping process.
- the frequency band or component to be subjected to the time envelope shaping process may be selected according to the whitening intensity.
- a frequency band or a component to be subjected to time envelope shaping processing may be selected according to the level of the pseudo noise signal.
- When the decoding unit 10a has the configuration described in the second example of the decoding unit 10a, and the second encoding method is an encoding method that, for frequency components quantized to zero by the first encoding method (that is, not encoded by the first encoding method), generates a pseudo-noise signal, replicates the signal of another frequency component, or approximates the component using the signal of another frequency component, the following selections may be made.
- the frequency component that generated the pseudo noise signal may be selected as the frequency component to be subjected to the time envelope shaping process.
- a frequency component obtained by duplicating a signal of another frequency component may be selected as a frequency component to be subjected to the time envelope shaping process.
- A frequency component to be subjected to the time envelope shaping process may be selected according to the frequency of the replication source.
- the frequency component to be subjected to the time envelope shaping process may be selected depending on whether or not the process is applied to the frequency component of the replication source at the time of replication.
- the frequency component to be subjected to the time envelope shaping process may be selected according to the process to be added to the frequency component of the replication source (approximation source) at the time of replication (or approximation).
- the frequency component to be subjected to the time envelope shaping process may be selected according to the whitening intensity.
- a frequency component to be subjected to the time envelope shaping process may be selected according to an approximation method at the time of approximation.
- the method for selecting frequency components or frequency bands may be a combination of the above examples.
- As long as a frequency component or frequency band to be subjected to the time envelope shaping process is selected in the frequency-domain decoded signal using at least one of the frequency-domain decoded signal and the decoding-related information, the selection method is not limited to the above examples.
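- The following sketch illustrates the kind of selection described above, assuming the decoding-related information is available as per-band arrays of encoded-bit counts and quantization step sizes; the array layout, helper name, and threshold values are hypothetical and only for illustration.

```python
import numpy as np

def select_bands(bits_per_band, quant_step_per_band=None,
                 bit_threshold=4, step_threshold=2.0):
    """Pick frequency bands to shape: few encoded bits or a coarse quantization step."""
    bits = np.asarray(bits_per_band)
    selected = bits < bit_threshold                 # bands encoded with few bits
    if quant_step_per_band is not None:
        steps = np.asarray(quant_step_per_band)
        selected |= steps > step_threshold          # bands quantized coarsely
    return np.nonzero(selected)[0]

# Example: bands 2 and 3 are selected for time envelope shaping.
print(select_bands([12, 9, 3, 0, 8], [0.5, 0.8, 2.5, 4.0, 1.0]))
```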
- the frequency selective time envelope shaping unit 10bC shapes the time envelope of the frequency band selected by the frequency selection unit 10bB of the decoded signal into a desired time envelope (step S10-2-3).
- the time envelope shaping may be performed in units of frequency components.
- the method of shaping the time envelope is, for example, a method of flattening the time envelope by filtering with a linear prediction inverse filter using the linear prediction coefficient obtained by linear prediction analysis of the transform coefficient of the selected frequency band.
- a method may be used in which the time envelope rises and / or falls by filtering the transform coefficient of the selected frequency band with a linear prediction filter using the linear prediction coefficient.
- The transfer function of the linear prediction filter can be expressed, for example, as H(z) = 1 / (1 + Σ_{n=1}^{N} ρ^n a_n z^{-n}), where a_n (n = 1, ..., N) are the linear prediction coefficients. The strength with which the time envelope is flattened, or raised and/or lowered, may be adjusted using the bandwidth expansion ratio ρ.
- By such filtering, the power distribution of the decoded signal in the time domain is changed and the time envelope is shaped. For example, the time envelope may be flattened, and it can be flattened while maintaining the energy of the frequency component (or frequency band) in each time segment before the time envelope shaping process. Likewise, the time envelope may be raised or lowered by changing the amplitude of the subband signal while maintaining the energy of the frequency component (or frequency band) in each time segment before the time envelope shaping process.
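- As a concrete illustration of the flattening described above (a minimal sketch, not the patent's exact procedure: the prediction order, helper names, and energy normalization are assumptions), the transform coefficients of a selected band are analyzed by linear prediction and filtered with the resulting prediction-error (inverse) filter, and the band energy is then restored:

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin recursion: autocorrelation r[0..order] -> LPC coefficients a, residual energy."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
        k = -acc / err
        a[1:m + 1] = a[1:m + 1] + k * a[m - 1::-1]
        err *= (1.0 - k * k)
    return a, err

def flatten_band(coeffs, order=2):
    """Flatten the time envelope of one selected band by inverse LPC filtering
    across its transform coefficients, keeping the band energy unchanged."""
    x = np.asarray(coeffs, dtype=float)
    r = np.array([np.dot(x[:len(x) - lag], x[lag:]) for lag in range(order + 1)])
    if r[0] <= 0.0:
        return x.copy()                               # silent band: nothing to shape
    a, _ = levinson(r, order)
    e = np.convolve(x, a)[:len(x)]                    # prediction-error (inverse) filtering
    e *= np.sqrt(np.sum(x ** 2) / np.sum(e ** 2))     # preserve the band energy
    return e
```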
- A frequency component or frequency band that has not been selected by the frequency selection unit 10bB as a frequency component or frequency band whose time envelope is to be shaped is referred to as a non-selected frequency component or non-selected frequency band. The transform coefficients (or subsamples) of the non-selected frequency components (or non-selected frequency bands) of the decoded signal may first be replaced with other values, the above time envelope shaping method may then be applied to the frequency range containing both the frequency components to be shaped and those not to be shaped, and, after the time envelope shaping, the transform coefficients (or subsamples) of the non-selected frequency components (or non-selected frequency bands) may be returned to their original values before replacement.
- In this way, the time envelope shaping process can be performed collectively on the frequency components (or frequency bands) to be shaped, which reduces the amount of computation: instead of performing linear prediction analysis separately on each of the finely divided frequency components (or frequency bands) to be shaped, the divided frequency components (or frequency bands) together with the non-selected frequency components (or non-selected frequency bands) can be analyzed in a single linear prediction analysis and filtered once with a linear prediction inverse filter (or linear prediction filter), so the processing is realized with a low amount of computation.
- The amplitude of the transform coefficients (or subsamples) of the non-selected frequency components (or non-selected frequency bands) may be replaced, for example, with the average amplitude computed over the neighboring frequency components (or frequency bands).
- In this replacement, the sign of a transform coefficient may retain the sign of the original transform coefficient, and the phase of a subsample may retain the phase of the original subsample.
- When time envelope shaping is selected for frequency components (or frequency bands) whose transform coefficients (or subsamples) are not quantized/encoded and that are instead generated by replication/approximation using the transform coefficients (or subsamples) of other frequency components (or frequency bands), and/or by generation/addition of a pseudo-noise signal, and/or by addition of a sine signal, the transform coefficients (or subsamples) of the non-selected frequency components (or non-selected frequency bands) may be replaced with transform coefficients (or subsamples) generated by replication/approximation with the transform coefficients (or subsamples) of other frequency components (or frequency bands), by generation/addition of a pseudo-noise signal, and/or by addition of a sine signal.
- the method for shaping the time envelope of the selected frequency band may be a combination of the above methods, and the method for shaping the time envelope is not limited to the above example.
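- A minimal sketch of the replace-filter-restore idea above, reusing flatten_band from the earlier sketch; replacing non-selected coefficients with a single global average amplitude (while keeping their signs) is an illustrative simplification of the neighborhood average described above.

```python
import numpy as np

def shape_with_restore(coeffs, selected_mask, order=2):
    """Batch shaping: temporarily replace non-selected coefficients, filter once,
    then restore the non-selected coefficients to their original values."""
    x = np.asarray(coeffs, dtype=float)
    mask = np.asarray(selected_mask, dtype=bool)
    work = x.copy()
    avg = np.mean(np.abs(x))                      # illustrative replacement amplitude
    work[~mask] = np.sign(x[~mask]) * avg         # keep the original signs
    shaped = flatten_band(work, order)            # one analysis + one filtering pass
    shaped[~mask] = x[~mask]                      # restore non-selected coefficients
    return shaped
```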
- The time-frequency inverse transform unit 10bD transforms the decoded signal whose time envelope has been shaped in a frequency-selective manner into a time-domain signal and outputs it (step S10-2-4).
- FIG. 14 is a diagram showing a configuration of the speech decoding apparatus 11 according to the second embodiment.
- the communication device of the audio decoding device 11 receives an encoded sequence obtained by encoding an audio signal, and further outputs the decoded audio signal to the outside.
- the speech decoding apparatus 11 functionally includes a demultiplexing unit 11a, a decoding unit 10a, and a selective time envelope shaping unit 11b.
- FIG. 15 is a flowchart showing the operation of the speech decoding apparatus 11 according to the second embodiment.
- The demultiplexing unit 11a separates the received encoded sequence into the encoded sequence to be decoded and the time envelope information (step S11-1).
- the decoding unit 10a decodes the encoded sequence and generates a decoded signal (step S10-1).
- When the time envelope information is encoded and/or quantized, the time envelope information is obtained by decoding and/or inverse quantization.
- the time envelope information may be information indicating that the time envelope of the input signal encoded by the encoding device is flat, for example. For example, it may be information indicating that the time envelope of the input signal is rising. For example, it may be information indicating that the time envelope of the input signal is falling.
- the time envelope information may be information indicating the degree of flatness of the time envelope of the input signal, for example, information indicating the degree of rise of the time envelope of the input signal, For example, it may be information indicating the degree of falling of the time envelope of the input signal.
- the time envelope information may be information indicating whether or not the time envelope is shaped by the selective time envelope shaping unit.
- The selective time envelope shaping unit 11b receives, from the decoding unit 10a, the decoded signal and the decoding-related information obtained when decoding the encoded sequence, receives the time envelope information from the demultiplexing unit, and selectively shapes the time envelope of the components of the decoded signal into a desired time envelope based on at least one of these (step S11-2).
- The selective time envelope shaping method of the selective time envelope shaping unit 11b may be, for example, the same as that of the selective time envelope shaping unit 10b, or the selective time envelope shaping may be performed taking the time envelope information into consideration.
- When the time envelope information is information indicating that the time envelope of the input signal encoded by the encoding device is flat, the time envelope may be shaped flat based on that information.
- When the time envelope information is information indicating that the time envelope of the input signal is rising, the time envelope may be shaped to rise based on that information.
- When the time envelope information is information indicating that the time envelope of the input signal is falling, the time envelope may be shaped to fall based on that information.
- When the time envelope information is information indicating the degree of flatness of the time envelope of the input signal, the strength of flattening the time envelope may be adjusted based on that information.
- When the time envelope information is information indicating the degree of rise of the time envelope of the input signal, the strength of raising the time envelope may be adjusted based on that information.
- When the time envelope information is information indicating the degree of fall of the time envelope of the input signal, the strength of lowering the time envelope may be adjusted based on that information.
- When the time envelope information is information indicating whether or not the selective time envelope shaping unit 11b shapes the time envelope, and shaping is indicated, a frequency band (or frequency component) on which the time envelope shaping is performed may be selected in the same manner as in the first embodiment, and the time envelope of the selected frequency band (or frequency component) of the decoded signal may be shaped into a desired time envelope.
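- The sketch below shows one way such time envelope information could drive the shaping, reusing shape_with_restore from the earlier sketch; the dictionary format of the information (an enable flag plus a flatness degree) and the blending rule are assumptions made only for this example, not the transmitted format of the patent.

```python
import numpy as np

def shape_from_envelope_info(coeffs, selected_mask, envelope_info):
    """Apply or skip shaping according to hypothetical decoded time envelope information."""
    x = np.asarray(coeffs, dtype=float)
    if not envelope_info.get("shape", True):
        return x.copy()                                    # shaping disabled by the encoder
    strength = float(envelope_info.get("flatness", 1.0))   # degree of flattening, 0..1
    shaped = shape_with_restore(coeffs, selected_mask)
    return strength * shaped + (1.0 - strength) * x        # blend to adjust shaping strength
```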
- FIG. 16 is a diagram illustrating a configuration of the speech encoding device 21 according to the second embodiment.
- the communication device of the speech encoding device 21 receives a speech signal to be encoded from the outside, and further outputs an encoded encoded sequence to the outside.
- the speech encoding device 21 functionally includes an encoding unit 21a, a time envelope information encoding unit 21b, and a multiplexing unit 21c.
- FIG. 17 is a flowchart showing the operation of the speech encoding apparatus 21 according to the second embodiment.
- the encoding unit 21a encodes the input audio signal and generates an encoded sequence (step S21-1).
- the audio signal encoding method in the encoding unit 21a is an encoding method corresponding to the decoding method of the decoding unit 10a.
- the time envelope information encoding unit 21b generates time envelope information from at least one of the input audio signal and information obtained when the audio signal is encoded by the encoding unit 21a.
- the generated time envelope information may be encoded / quantized (step S21-2).
- the time envelope information may be time envelope information obtained by the demultiplexer 11a of the speech decoding device 11, for example.
- When the decoding unit of the speech decoding device 11 performs, in generating the decoded signal, a time envelope shaping process different from that of the present invention, and the speech encoding device 21 holds information related to that time envelope shaping process, the time envelope information may be generated using that information. For example, based on information on whether or not that time envelope process different from the present invention is performed, information indicating whether or not the time envelope is to be shaped by the selective time envelope shaping unit 11b of the speech decoding device 11 may be generated.
- when the selective time envelope shaping unit 11b of the speech decoding device 11 performs the linear prediction analysis described in the first example of the selective time envelope shaping unit 10b of the speech decoding device 10 according to the first embodiment, the time envelope information may be generated using the result of a linear prediction analysis of the transform coefficients (which may be subband samples) of the input speech signal, carried out in the same manner as the linear prediction analysis in that time envelope shaping process. Specifically, for example, a prediction gain of the linear prediction analysis may be calculated, and the time envelope information may be generated based on that prediction gain.
- the transform coefficients (which may be subband samples) of all frequency bands of the input speech signal may be subjected to linear prediction analysis, or the transform coefficients (which may be subband samples) of only some frequency bands of the input speech signal may be subjected to linear prediction analysis. Furthermore, the input speech signal may be divided into a plurality of frequency bands, linear prediction analysis of the transform coefficients (which may be subband samples) may be performed for each frequency band, and the time envelope information may be generated using the resulting plurality of prediction gains.
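- as a non-normative illustration, the prediction gain of a linear prediction analysis over a block of transform coefficients can be computed with the Levinson-Durbin recursion and thresholded into a simple time envelope flag; the analysis order and the threshold below are illustrative assumptions, not values defined by this specification.

```python
# Non-normative sketch (assumption): computing a prediction gain from a linear
# prediction analysis of a block of transform coefficients and thresholding it
# into a simple time envelope flag. The analysis order and threshold are
# illustrative choices, not values defined by this specification.
import numpy as np

def prediction_gain(coeffs, order=4):
    """Linear prediction gain (signal energy / residual energy) via Levinson-Durbin."""
    x = np.asarray(coeffs, dtype=float)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])  # autocorrelation
    if r[0] <= 0.0:
        return 1.0
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
        if err <= 0.0:
            break
    return r[0] / max(err, 1e-12)

def time_envelope_flag(coeffs, threshold=2.0):
    # A high prediction gain over frequency-domain coefficients suggests a strongly
    # non-flat time envelope (time-frequency duality); 1 = shaping recommended.
    return 1 if prediction_gain(coeffs) > threshold else 0
```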
- the information obtained when the speech signal is encoded by the encoding unit 21a may be, for example, the encoding method used (the first encoding method corresponding to the first decoding method) when the decoding unit 10a has the configuration of the second example, and the time envelope information may be generated based on that information.
- the multiplexing unit 21c multiplexes and outputs the encoded sequence obtained by the encoding unit 21a and the time envelope information obtained by the time envelope information encoding unit 21b (step S21-3).
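- as a non-normative illustration, a minimal multiplexing scheme could simply prepend a length field to each payload; the field sizes and ordering below are assumptions and do not describe the actual bitstream format.

```python
# Non-normative sketch (assumption): a minimal frame layout for multiplexing the
# encoded sequence with the time envelope information. The length fields and
# their order are illustrative and do not describe the actual bitstream format.
import struct

def multiplex(encoded_sequence: bytes, envelope_info: bytes) -> bytes:
    # Each payload is preceded by a 2-byte big-endian length field.
    return (struct.pack(">H", len(encoded_sequence)) + encoded_sequence +
            struct.pack(">H", len(envelope_info)) + envelope_info)

def demultiplex(frame: bytes):
    # Inverse operation, as performed by a demultiplexing unit on the decoder side.
    n = struct.unpack_from(">H", frame, 0)[0]
    encoded_sequence = frame[2:2 + n]
    m = struct.unpack_from(">H", frame, 2 + n)[0]
    envelope_info = frame[4 + n:4 + n + m]
    return encoded_sequence, envelope_info
```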
- FIG. 18 is a diagram illustrating a configuration of the speech decoding apparatus 12 according to the third embodiment.
- the communication device of the audio decoding device 12 receives an encoded sequence obtained by encoding an audio signal, and further outputs the decoded audio signal to the outside.
- the speech decoding apparatus 12 functionally includes a decoding unit 10a and a time envelope shaping unit 12a.
- FIG. 19 is a flowchart showing the operation of the speech decoding apparatus 12 according to the third embodiment.
- the decoding unit 10a decodes the encoded sequence and generates a decoded signal (step S10-1).
- the time envelope shaping unit 12a shapes the time envelope of the decoded signal output from the decoding unit 10a into a desired time envelope (step S12-1).
- as the time envelope shaping method, the time envelope may be flattened by filtering the transform coefficients of the decoded signal with a linear prediction inverse filter that uses linear prediction coefficients obtained by linear prediction analysis, or the time envelope may be made to rise and/or fall by filtering with a linear prediction filter that uses those linear prediction coefficients; the strength of the flattening, rise, or fall may be controlled using a bandwidth expansion ratio.
- instead of the transform coefficients of the decoded signal, the subband samples at an arbitrary time t of the subband signal obtained by converting the decoded signal into a frequency domain signal with a filter bank may be subjected to the time envelope shaping of the above example.
- the amplitude of the subband signal may be corrected so that a desired time envelope is obtained in an arbitrary time segment; for example, the time envelope may be flattened by setting the amplitude to the average amplitude of the frequency component (or frequency band) over that segment.
- the above time envelope shaping may be applied to the entire frequency band of the decoded signal or may be applied to a predetermined frequency band.
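- as a non-normative illustration, the flattening variant above can be sketched as filtering the transform coefficients with a bandwidth-expanded linear prediction inverse filter, where the expansion factor controls the strength of the shaping; the analysis order and function names are illustrative assumptions.

```python
# Non-normative sketch (assumption): flattening the time envelope of a block by
# filtering its transform coefficients with the bandwidth-expanded linear
# prediction inverse filter A(z/rho); rho controls the strength (rho = 0 leaves
# the coefficients unchanged, rho = 1 applies the full filter). The analysis
# order and function names are illustrative choices.
import numpy as np

def lpc_coefficients(x, order=4):
    """Autocorrelation-method LPC; returns a = [1, a1, ..., a_order]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0] if r[0] > 0 else 1.0
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
        if err <= 0.0:
            break
    return a

def flatten_time_envelope(transform_coeffs, order=4, rho=1.0):
    """Filter frequency-domain coefficients with A(z/rho); by time-frequency
    duality this flattens the time envelope of the corresponding block."""
    x = np.asarray(transform_coeffs, dtype=float)
    a = lpc_coefficients(x, order)
    a_expanded = a * (rho ** np.arange(order + 1))  # bandwidth expansion
    return np.convolve(x, a_expanded)[:len(x)]      # causal FIR filtering
```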
- FIG. 20 is a diagram illustrating a configuration of the speech decoding apparatus 13 according to the fourth embodiment.
- the communication device of the audio decoding device 13 receives an encoded sequence obtained by encoding an audio signal, and further outputs the decoded audio signal to the outside.
- the speech decoding apparatus 13 functionally includes a demultiplexing unit 11a, a decoding unit 10a, and a time envelope shaping unit 13a.
- FIG. 21 is a flowchart showing the operation of the speech decoding apparatus 13 according to the fourth embodiment.
- the demultiplexing unit 11a decodes/dequantizes the received encoded sequence to separate it into the encoded sequence from which the decoded signal is obtained and the time envelope information (step S11-1), and the decoding unit 10a decodes that encoded sequence to generate a decoded signal (step S10-1).
- the time envelope shaping unit 13a receives the time envelope information from the demultiplexing unit 11a, and shapes the time envelope of the decoded signal output from the decoding unit 10a into a desired time envelope based on the time envelope information (step S13-1).
- the time envelope information may be information indicating that the time envelope of the input signal encoded by the encoding device is flat, information indicating that the time envelope of the input signal is rising, information indicating that the time envelope of the input signal is falling, information indicating the degree of flatness of the time envelope of the input signal, information indicating the degree of rise of the time envelope of the input signal, information indicating the degree of fall of the time envelope of the input signal, or information indicating whether or not the time envelope shaping unit 13a shapes the time envelope.
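- as a non-normative illustration, the following sketch shapes the envelope of a decoded block toward a flat, rising, or falling target according to such time envelope information; the moving-average envelope estimator, the window length, and the gain rule are illustrative assumptions only.

```python
# Non-normative sketch (assumption): shaping the envelope of a decoded block
# toward a flat, rising or falling target according to the received time
# envelope information. The moving-average envelope estimator, window length
# and gain rule are illustrative choices only.
import numpy as np

def shape_time_envelope(decoded, target="flat", strength=1.0, win=64):
    """Shape the envelope of a decoded block; `target` is 'flat', 'rising',
    'falling' or anything else to leave the signal untouched."""
    x = np.asarray(decoded, dtype=float)
    kernel = np.ones(win) / win
    env = np.maximum(np.convolve(np.abs(x), kernel, mode="same"), 1e-9)
    if target == "flat":
        desired = env + strength * (env.mean() - env)
    elif target == "rising":
        desired = env.mean() * np.linspace(1.0 - 0.5 * strength, 1.0 + 0.5 * strength, len(x))
    elif target == "falling":
        desired = env.mean() * np.linspace(1.0 + 0.5 * strength, 1.0 - 0.5 * strength, len(x))
    else:
        return x  # information says: do not shape
    return x * (desired / env)
```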
- FIG. 11 is a diagram illustrating an example of the hardware configuration of each of the speech decoding devices 10, 11, 12, 13 and the speech encoding device 21.
- each of the speech decoding devices 10, 11, 12, 13 and the speech encoding device 21 is physically configured as a computer system including a CPU 100, a RAM 101 and a ROM 102 as main storage devices, an input/output device 103 such as a display, a communication module 104, an auxiliary storage device 105, and the like.
- the functions of the respective functional blocks of the speech decoding devices 10, 11, 12, 13 and the speech encoding device 21 are realized by loading predetermined computer software onto hardware such as the CPU 100 and the RAM 101 shown in FIG. 11, thereby operating the input/output device 103, the communication module 104, and the auxiliary storage device 105 under the control of the CPU 100, and by reading and writing data in the RAM 101.
- the speech decoding program 50 is stored in a program storage area 41 formed in a recording medium 40 that is inserted into and accessed by a computer, or that is provided in the computer. More specifically, the speech decoding program 50 may be stored in the program storage area 41 formed in the recording medium 40 provided in the speech decoding device 10.
- the speech decoding program 50 includes a decoding module 50a and a selective time envelope shaping module 50b; the functions realized by executing the decoding module 50a and the selective time envelope shaping module 50b are the same as the functions of the decoding unit 10a and the selective time envelope shaping unit 10b of the speech decoding device 10 described above, respectively. Furthermore, the decoding module 50a includes modules for functioning as the decoding/inverse quantization unit 10aA, the decoding related information output unit 10aB, and the time-frequency inverse transform unit 10aC, and may further include modules for functioning as the encoded sequence analysis unit 10aD, the first decoding unit 10aE, and the second decoding unit 10aF.
- the selective time envelope shaping module 50b includes modules for functioning as the time-frequency transform unit 10bA, the frequency selection unit 10bB, the frequency selective time envelope shaping unit 10bC, and the time-frequency inverse transform unit 10bD.
- in order to function as the speech decoding device 11 described above, the speech decoding program 50 includes modules for functioning as the demultiplexing unit 11a, the decoding unit 10a, and the selective time envelope shaping unit 11b.
- the speech decoding program 50 includes modules for functioning as the decoding unit 10a and the time envelope shaping unit 12a in order to function as the speech decoding device 12 described above.
- the speech decoding program 50 includes modules for functioning as the demultiplexing unit 11a, the decoding unit 10a, and the time envelope shaping unit 13a in order to function as the speech decoding device 13.
- the speech encoding program 60 is stored in a program storage area 41 formed in a recording medium 40 that is inserted into and accessed by a computer, or that is provided in the computer. More specifically, the speech encoding program 60 may be stored in the program storage area 41 formed in the recording medium 40 provided in the speech encoding device 20.
- the speech encoding program 60 includes an encoding module 60a, a time envelope information encoding module 60b, and a multiplexing module 60c.
- the functions realized by executing the encoding module 60a, the time envelope information encoding module 60b, and the multiplexing module 60c are the same as the functions of the encoding unit 21a, the time envelope information encoding unit 21b, and the multiplexing unit 21c described above, respectively.
- each of the speech decoding program 50 and the speech encoding program 60 may be transmitted via a transmission medium such as a communication line, and received and recorded (including being installed) by another device.
- each module of the speech decoding program 50 and the speech encoding program 60 may be installed in any of a plurality of computers instead of a single computer. In that case, the processing of each of the speech decoding program 50 and the speech encoding program 60 described above is performed by the computer system composed of the plurality of computers.
- ... decoded signal synthesis unit, 10b ... selective time envelope shaping unit, 10bA ... time-frequency transform unit, 10bB ... frequency selection unit, 10bC ... frequency selective time envelope shaping unit, 10bD ... time-frequency inverse transform unit, 11 ... speech decoding device, 11a ... demultiplexing unit, 11b ... selective time envelope shaping unit, 12 ... speech decoding device, 12a ... time envelope shaping unit, 13 ... speech decoding device, 13a ... time envelope shaping unit, 21 ... speech encoding device, 21a ... encoding unit, 21b ... time envelope information encoding unit, 21c ... multiplexing unit.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Computational Linguistics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Quality & Reliability (AREA)
- Mathematical Physics (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Stereo-Broadcasting Methods (AREA)
- Reduction Or Emphasis Of Bandwidth Of Signals (AREA)
Priority Applications (32)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710975669.6A CN107767876B (zh) | 2014-03-24 | 2015-03-20 | 声音编码装置以及声音编码方法 |
| AU2015235133A AU2015235133B2 (en) | 2014-03-24 | 2015-03-20 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| MX2018002676A MX388622B (es) | 2014-03-24 | 2015-03-20 | Dispositivo decodificador de audio, dispositivo codificador de audio, metodo de decodificacion de audio, metodo de codificacion de audio, programa de decodificacion de audio, y programa de codificacion de audio |
| EP23207259.5A EP4293667A3 (en) | 2014-03-24 | 2015-03-20 | Audio encoding device and audio encoding method |
| KR1020207006991A KR102126044B1 (ko) | 2014-03-24 | 2015-03-20 | 음성 복호 장치, 음성 부호화 장치, 음성 복호 방법, 음성 부호화 방법, 음성 복호 프로그램, 및 음성 부호화 프로그램 |
| DK15768907.6T DK3125243T3 (da) | 2014-03-24 | 2015-03-20 | Lydafkodningsindretning, lydkodningsindretning, lydafkodningsfremgangsmåde, lydkodningsfremgangsmåde, lydafkodningsprogram og lydkodningsprogram |
| US15/128,364 US10410647B2 (en) | 2014-03-24 | 2015-03-20 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| ES15768907T ES2772173T3 (es) | 2014-03-24 | 2015-03-20 | Dispositivo de decodificación de audio, dispositivo de codificación de audio, método de decodificación de audio, método de codificación de audio, programa de decodificación de audio y programa de codificación de audio |
| KR1020187028501A KR102038077B1 (ko) | 2014-03-24 | 2015-03-20 | 음성 복호 장치, 음성 부호화 장치, 음성 복호 방법, 음성 부호화 방법, 음성 복호 프로그램, 및 음성 부호화 프로그램 |
| PL15768907T PL3125243T3 (pl) | 2014-03-24 | 2015-03-20 | Urządzenie do dekodowania audio, urządzenie do kodowania audio, sposób dekodowania audio, sposób kodowania audio, program do dekodowania audio, oraz program do kodowania audio |
| EP19205596.0A EP3621073B1 (en) | 2014-03-24 | 2015-03-20 | Audio encoding device and audio encoding method |
| KR1020197031274A KR102089602B1 (ko) | 2014-03-24 | 2015-03-20 | 음성 복호 장치, 음성 부호화 장치, 음성 복호 방법, 음성 부호화 방법, 음성 복호 프로그램, 및 음성 부호화 프로그램 |
| MX2016012393A MX354434B (es) | 2014-03-24 | 2015-03-20 | Dispositivo decodificador de audio, dispositivo codificador de audio, metodo de decodificacion de audio, metodo de codificacion de audio, programa de decodificacion de audio, y programa de codificacion de audio. |
| KR1020177026665A KR101906524B1 (ko) | 2014-03-24 | 2015-03-20 | 음성 복호 장치, 음성 부호화 장치, 음성 복호 방법, 음성 부호화 방법, 음성 복호 프로그램, 및 음성 부호화 프로그램 |
| KR1020167026675A KR101782935B1 (ko) | 2014-03-24 | 2015-03-20 | 음성 복호 장치, 음성 부호화 장치, 음성 복호 방법, 음성 부호화 방법, 음성 복호 프로그램, 및 음성 부호화 프로그램 |
| RU2016141264A RU2631155C1 (ru) | 2014-03-24 | 2015-03-20 | Устройство аудиодекодирования, устройство аудиокодирования, способ аудиодекодирования, способ аудиокодирования, программа аудиодекодирования и программа аудиокодирования |
| CA2942885A CA2942885C (en) | 2014-03-24 | 2015-03-20 | System and method for decoding an encoded audio signal using selective temporal shaping |
| KR1020207017473A KR102208915B1 (ko) | 2014-03-24 | 2015-03-20 | 음성 복호 장치, 음성 부호화 장치, 음성 복호 방법, 음성 부호화 방법, 음성 복호 프로그램, 및 음성 부호화 프로그램 |
| CN201580015128.8A CN106133829B (zh) | 2014-03-24 | 2015-03-20 | 声音解码装置、声音编码装置、声音解码方法以及声音编码方法 |
| KR1020207006992A KR102124962B1 (ko) | 2014-03-24 | 2015-03-20 | 음성 복호 장치, 음성 부호화 장치, 음성 복호 방법, 음성 부호화 방법, 음성 복호 프로그램, 및 음성 부호화 프로그램 |
| BR112016021165-0A BR112016021165B1 (pt) | 2014-03-24 | 2015-03-20 | dispositivos e métodos de decodificação de áudio e meios de gravação |
| EP15768907.6A EP3125243B1 (en) | 2014-03-24 | 2015-03-20 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| PH12016501844A PH12016501844B1 (en) | 2014-03-24 | 2016-09-21 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| AU2018201468A AU2018201468B2 (en) | 2014-03-24 | 2018-02-28 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| US16/528,163 US11437053B2 (en) | 2014-03-24 | 2019-07-31 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| AU2019257487A AU2019257487B2 (en) | 2014-03-24 | 2019-10-31 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| AU2019257495A AU2019257495B2 (en) | 2014-03-24 | 2019-10-31 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| AU2021200603A AU2021200603B2 (en) | 2014-03-24 | 2021-01-29 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| AU2021200607A AU2021200607B2 (en) | 2014-03-24 | 2021-01-29 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| AU2021200604A AU2021200604B2 (en) | 2014-03-24 | 2021-01-29 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| US17/874,975 US12223971B2 (en) | 2014-03-24 | 2022-07-27 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| US19/008,402 US20250140274A1 (en) | 2014-03-24 | 2025-01-02 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2014-060650 | 2014-03-24 | ||
| JP2014060650A JP6035270B2 (ja) | 2014-03-24 | 2014-03-24 | 音声復号装置、音声符号化装置、音声復号方法、音声符号化方法、音声復号プログラム、および音声符号化プログラム |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/128,364 A-371-Of-International US10410647B2 (en) | 2014-03-24 | 2015-03-20 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
| US16/528,163 Continuation US11437053B2 (en) | 2014-03-24 | 2019-07-31 | Audio decoding device, audio encoding device, audio decoding method, audio encoding method, audio decoding program, and audio encoding program |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2015146860A1 true WO2015146860A1 (ja) | 2015-10-01 |
Family
ID=54195375
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2015/058608 Ceased WO2015146860A1 (ja) | 2014-03-24 | 2015-03-20 | 音声復号装置、音声符号化装置、音声復号方法、音声符号化方法、音声復号プログラム、および音声符号化プログラム |
Country Status (20)
| Country | Link |
|---|---|
| US (4) | US10410647B2 (enExample) |
| EP (3) | EP4293667A3 (enExample) |
| JP (1) | JP6035270B2 (enExample) |
| KR (7) | KR102126044B1 (enExample) |
| CN (2) | CN107767876B (enExample) |
| AU (7) | AU2015235133B2 (enExample) |
| BR (1) | BR112016021165B1 (enExample) |
| CA (2) | CA2990392C (enExample) |
| DK (2) | DK3125243T3 (enExample) |
| ES (2) | ES2974029T3 (enExample) |
| FI (1) | FI3621073T3 (enExample) |
| HU (1) | HUE065961T2 (enExample) |
| MX (2) | MX354434B (enExample) |
| MY (1) | MY165849A (enExample) |
| PH (1) | PH12016501844B1 (enExample) |
| PL (2) | PL3125243T3 (enExample) |
| PT (2) | PT3621073T (enExample) |
| RU (7) | RU2654141C1 (enExample) |
| TW (6) | TWI773992B (enExample) |
| WO (1) | WO2015146860A1 (enExample) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| RU2732995C1 (ru) * | 2017-03-31 | 2020-09-28 | Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. | Устройство и способ для постобработки звукового сигнала с использованием основанного на прогнозе профилирования |
Families Citing this family (9)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP5997592B2 (ja) | 2012-04-27 | 2016-09-28 | 株式会社Nttドコモ | 音声復号装置 |
| JP6035270B2 (ja) * | 2014-03-24 | 2016-11-30 | 株式会社Nttドコモ | 音声復号装置、音声符号化装置、音声復号方法、音声符号化方法、音声復号プログラム、および音声符号化プログラム |
| EP2980795A1 (en) * | 2014-07-28 | 2016-02-03 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Audio encoding and decoding using a frequency domain processor, a time domain processor and a cross processor for initialization of the time domain processor |
| DE102017204181A1 (de) | 2017-03-14 | 2018-09-20 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Sender zum Emittieren von Signalen und Empfänger zum Empfangen von Signalen |
| EP3382700A1 (en) | 2017-03-31 | 2018-10-03 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for post-processing an audio signal using a transient location detection |
| US11496152B2 (en) * | 2018-08-08 | 2022-11-08 | Sony Corporation | Decoding device, decoding method, and program |
| CN111314778B (zh) * | 2020-03-02 | 2021-09-07 | 北京小鸟科技股份有限公司 | 基于多种压缩制式的编解码融合处理方法、系统及装置 |
| CN115472171B (zh) * | 2021-06-11 | 2024-11-22 | 华为技术有限公司 | 编解码方法、装置、设备、存储介质及计算机程序 |
| WO2024218334A1 (en) * | 2023-04-21 | 2024-10-24 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Apparatus and method for audio signal coding with temporal noise shaping on subband signals |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009530679A (ja) * | 2006-03-20 | 2009-08-27 | フランス テレコム | オーディオデコーダ内で信号を後処理する方法 |
| JP2013242514A (ja) * | 2012-04-27 | 2013-12-05 | Ntt Docomo Inc | 音声復号装置、音声符号化装置、音声復号方法、音声符号化方法、音声復号プログラム、および音声符号化プログラム |
Family Cites Families (32)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| DE2100747B2 (de) | 1970-01-08 | 1973-01-04 | Trw Inc., Redondo Beach, Calif. (V.St.A.) | Anordnung zur digitalen Geschwindigkeitsregelung zur Aufrechterhaltung einer gewählten konstanten Geschwindigkeit eines Kraftfahrzeuges |
| JPS5913508B2 (ja) | 1975-06-23 | 1984-03-30 | オオツカセイヤク カブシキガイシヤ | アシルオキシ置換カルボスチリル誘導体の製造法 |
| JP3155560B2 (ja) | 1991-05-27 | 2001-04-09 | 株式会社コガネイ | マニホールドバルブ |
| JP3283413B2 (ja) | 1995-11-30 | 2002-05-20 | 株式会社日立製作所 | 符号化復号方法、符号化装置および復号装置 |
| CN1232951C (zh) * | 2001-03-02 | 2005-12-21 | 松下电器产业株式会社 | 编码装置和译码装置 |
| US7447631B2 (en) | 2002-06-17 | 2008-11-04 | Dolby Laboratories Licensing Corporation | Audio coding system using spectral hole filling |
| CN100370517C (zh) * | 2002-07-16 | 2008-02-20 | 皇家飞利浦电子股份有限公司 | 一种对编码信号进行解码的方法 |
| JP2004134900A (ja) * | 2002-10-09 | 2004-04-30 | Matsushita Electric Ind Co Ltd | 符号化信号復号化装置および復号化方法 |
| US7672838B1 (en) * | 2003-12-01 | 2010-03-02 | The Trustees Of Columbia University In The City Of New York | Systems and methods for speech recognition using frequency domain linear prediction polynomials to form temporal and spectral envelopes from frequency domain representations of signals |
| CA2457988A1 (en) * | 2004-02-18 | 2005-08-18 | Voiceage Corporation | Methods and devices for audio compression based on acelp/tcx coding and multi-rate lattice vector quantization |
| TWI498882B (zh) * | 2004-08-25 | 2015-09-01 | Dolby Lab Licensing Corp | 音訊解碼器 |
| JP2008519991A (ja) * | 2004-11-09 | 2008-06-12 | コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ | 音声の符号化及び復号化 |
| JP4800645B2 (ja) * | 2005-03-18 | 2011-10-26 | カシオ計算機株式会社 | 音声符号化装置、及び音声符号化方法 |
| AU2006232363B2 (en) * | 2005-04-01 | 2011-01-27 | Qualcomm Incorporated | Method and apparatus for anti-sparseness filtering of a bandwidth extended speech prediction excitation signal |
| EP1829424B1 (en) * | 2005-04-15 | 2009-01-21 | Dolby Sweden AB | Temporal envelope shaping of decorrelated signals |
| US8116459B2 (en) * | 2006-03-28 | 2012-02-14 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Enhanced method for signal shaping in multi-channel audio reconstruction |
| US8260609B2 (en) * | 2006-07-31 | 2012-09-04 | Qualcomm Incorporated | Systems, methods, and apparatus for wideband encoding and decoding of inactive frames |
| RU2449386C2 (ru) * | 2007-11-02 | 2012-04-27 | Хуавэй Текнолоджиз Ко., Лтд. | Способ и устройство для аудиодекодирования |
| DE102008009719A1 (de) * | 2008-02-19 | 2009-08-20 | Siemens Enterprise Communications Gmbh & Co. Kg | Verfahren und Mittel zur Enkodierung von Hintergrundrauschinformationen |
| CN101335000B (zh) * | 2008-03-26 | 2010-04-21 | 华为技术有限公司 | 编码的方法及装置 |
| JP5203077B2 (ja) | 2008-07-14 | 2013-06-05 | 株式会社エヌ・ティ・ティ・ドコモ | 音声符号化装置及び方法、音声復号化装置及び方法、並びに、音声帯域拡張装置及び方法 |
| CN101436406B (zh) * | 2008-12-22 | 2011-08-24 | 西安电子科技大学 | 音频编解码器 |
| JP4921611B2 (ja) | 2009-04-03 | 2012-04-25 | 株式会社エヌ・ティ・ティ・ドコモ | 音声復号装置、音声復号方法、及び音声復号プログラム |
| JP4932917B2 (ja) | 2009-04-03 | 2012-05-16 | 株式会社エヌ・ティ・ティ・ドコモ | 音声復号装置、音声復号方法、及び音声復号プログラム |
| US8725503B2 (en) * | 2009-06-23 | 2014-05-13 | Voiceage Corporation | Forward time-domain aliasing cancellation with application in weighted or original signal domain |
| MY163358A (en) * | 2009-10-08 | 2017-09-15 | Fraunhofer-Gesellschaft Zur Förderung Der Angenwandten Forschung E V | Multi-mode audio signal decoder,multi-mode audio signal encoder,methods and computer program using a linear-prediction-coding based noise shaping |
| ES3028558T3 (en) * | 2009-10-20 | 2025-06-19 | Fraunhofer Ges Forschung | Audio signal decoder, corresponding method and computer program |
| EP2631905A4 (en) * | 2010-10-18 | 2014-04-30 | Panasonic Corp | DEVICE FOR TONE CODING AND TONE DECODING |
| JP2012163919A (ja) * | 2011-02-09 | 2012-08-30 | Sony Corp | 音声信号処理装置、および音声信号処理方法、並びにプログラム |
| KR101699898B1 (ko) * | 2011-02-14 | 2017-01-25 | 프라운호퍼 게젤샤프트 쭈르 푀르데룽 데어 안겐반텐 포르슝 에. 베. | 스펙트럼 영역에서 디코딩된 오디오 신호를 처리하기 위한 방법 및 장치 |
| KR101897455B1 (ko) * | 2012-04-16 | 2018-10-04 | 삼성전자주식회사 | 음질 향상 장치 및 방법 |
| JP6035270B2 (ja) | 2014-03-24 | 2016-11-30 | 株式会社Nttドコモ | 音声復号装置、音声符号化装置、音声復号方法、音声符号化方法、音声復号プログラム、および音声符号化プログラム |
- 2014
- 2014-03-24 JP JP2014060650A patent/JP6035270B2/ja active Active
- 2015
- 2015-03-20 AU AU2015235133A patent/AU2015235133B2/en active Active
- 2015-03-20 KR KR1020207006991A patent/KR102126044B1/ko active Active
- 2015-03-20 EP EP23207259.5A patent/EP4293667A3/en active Pending
- 2015-03-20 CA CA2990392A patent/CA2990392C/en active Active
- 2015-03-20 ES ES19205596T patent/ES2974029T3/es active Active
- 2015-03-20 PL PL15768907T patent/PL3125243T3/pl unknown
- 2015-03-20 PL PL19205596.0T patent/PL3621073T3/pl unknown
- 2015-03-20 CN CN201710975669.6A patent/CN107767876B/zh active Active
- 2015-03-20 MY MYPI2016703472A patent/MY165849A/en unknown
- 2015-03-20 KR KR1020187028501A patent/KR102038077B1/ko active Active
- 2015-03-20 BR BR112016021165-0A patent/BR112016021165B1/pt active IP Right Grant
- 2015-03-20 KR KR1020207006992A patent/KR102124962B1/ko active Active
- 2015-03-20 CA CA2942885A patent/CA2942885C/en active Active
- 2015-03-20 US US15/128,364 patent/US10410647B2/en active Active
- 2015-03-20 WO PCT/JP2015/058608 patent/WO2015146860A1/ja not_active Ceased
- 2015-03-20 EP EP15768907.6A patent/EP3125243B1/en active Active
- 2015-03-20 RU RU2017131210A patent/RU2654141C1/ru active
- 2015-03-20 FI FIEP19205596.0T patent/FI3621073T3/fi active
- 2015-03-20 CN CN201580015128.8A patent/CN106133829B/zh active Active
- 2015-03-20 ES ES15768907T patent/ES2772173T3/es active Active
- 2015-03-20 EP EP19205596.0A patent/EP3621073B1/en active Active
- 2015-03-20 KR KR1020167026675A patent/KR101782935B1/ko active Active
- 2015-03-20 PT PT192055960T patent/PT3621073T/pt unknown
- 2015-03-20 KR KR1020197031274A patent/KR102089602B1/ko active Active
- 2015-03-20 RU RU2016141264A patent/RU2631155C1/ru active
- 2015-03-20 DK DK15768907.6T patent/DK3125243T3/da active
- 2015-03-20 MX MX2016012393A patent/MX354434B/es active IP Right Grant
- 2015-03-20 MX MX2018002676A patent/MX388622B/es unknown
- 2015-03-20 HU HUE19205596A patent/HUE065961T2/hu unknown
- 2015-03-20 KR KR1020207017473A patent/KR102208915B1/ko active Active
- 2015-03-20 KR KR1020177026665A patent/KR101906524B1/ko active Active
- 2015-03-20 PT PT157689076T patent/PT3125243T/pt unknown
- 2015-03-20 DK DK19205596.0T patent/DK3621073T3/da active
- 2015-03-24 TW TW109116739A patent/TWI773992B/zh active
- 2015-03-24 TW TW112119560A patent/TWI894565B/zh active
- 2015-03-24 TW TW106133758A patent/TWI666632B/zh active
- 2015-03-24 TW TW111125591A patent/TWI807906B/zh active
- 2015-03-24 TW TW104109387A patent/TWI608474B/zh active
- 2015-03-24 TW TW108117901A patent/TWI696994B/zh active
- 2016
- 2016-09-21 PH PH12016501844A patent/PH12016501844B1/en unknown
- 2018
- 2018-02-28 AU AU2018201468A patent/AU2018201468B2/en active Active
- 2018-04-27 RU RU2018115787A patent/RU2707722C2/ru active
- 2019
- 2019-07-31 US US16/528,163 patent/US11437053B2/en active Active
- 2019-10-31 AU AU2019257495A patent/AU2019257495B2/en active Active
- 2019-10-31 AU AU2019257487A patent/AU2019257487B2/en active Active
- 2019-11-13 RU RU2019136372A patent/RU2718421C1/ru active
- 2020
- 2020-03-20 RU RU2020111648A patent/RU2732951C1/ru active
- 2020-09-14 RU RU2020130138A patent/RU2741486C1/ru active
- 2021
- 2021-01-18 RU RU2021100857A patent/RU2751150C1/ru active
- 2021-01-29 AU AU2021200603A patent/AU2021200603B2/en active Active
- 2021-01-29 AU AU2021200607A patent/AU2021200607B2/en active Active
- 2021-01-29 AU AU2021200604A patent/AU2021200604B2/en active Active
- 2022
- 2022-07-27 US US17/874,975 patent/US12223971B2/en active Active
- 2025
- 2025-01-02 US US19/008,402 patent/US20250140274A1/en active Pending
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2009530679A (ja) * | 2006-03-20 | 2009-08-27 | フランス テレコム | オーディオデコーダ内で信号を後処理する方法 |
| JP2013242514A (ja) * | 2012-04-27 | 2013-12-05 | Ntt Docomo Inc | 音声復号装置、音声符号化装置、音声復号方法、音声符号化方法、音声復号プログラム、および音声符号化プログラム |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| RU2732995C1 (ru) * | 2017-03-31 | 2020-09-28 | Фраунхофер-Гезелльшафт Цур Фердерунг Дер Ангевандтен Форшунг Е.Ф. | Устройство и способ для постобработки звукового сигнала с использованием основанного на прогнозе профилирования |
Also Published As
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP6035270B2 (ja) | 音声復号装置、音声符号化装置、音声復号方法、音声符号化方法、音声復号プログラム、および音声符号化プログラム | |
| JP6691251B2 (ja) | 音声復号装置、音声復号方法、および音声復号プログラム | |
| JP6872056B2 (ja) | 音声復号装置および音声復号方法 | |
| JP6511033B2 (ja) | 音声符号化装置および音声符号化方法 | |
| HK1225493B (en) | Audio decoding device, audio encoding device, audio decoding method and audio encoding method | |
| HK1225493A1 (en) | Audio decoding device, audio encoding device, audio decoding method and audio encoding method |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15768907 Country of ref document: EP Kind code of ref document: A1 |
|
| DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | ||
| REEP | Request for entry into the european phase |
Ref document number: 2015768907 Country of ref document: EP |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2015768907 Country of ref document: EP |
|
| ENP | Entry into the national phase |
Ref document number: 2942885 Country of ref document: CA |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 12016501844 Country of ref document: PH |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 15128364 Country of ref document: US |
|
| WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2016/012393 Country of ref document: MX |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| ENP | Entry into the national phase |
Ref document number: 20167026675 Country of ref document: KR Kind code of ref document: A |
|
| ENP | Entry into the national phase |
Ref document number: 2015235133 Country of ref document: AU Date of ref document: 20150320 Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: IDP00201607027 Country of ref document: ID |
|
| ENP | Entry into the national phase |
Ref document number: 2016141264 Country of ref document: RU Kind code of ref document: A |
|
| REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112016021165 Country of ref document: BR |
|
| ENP | Entry into the national phase |
Ref document number: 112016021165 Country of ref document: BR Kind code of ref document: A2 Effective date: 20160914 |
|
| WWD | Wipo information: divisional of initial pct application |
Ref document number: 2501003207 Country of ref document: TH |