WO2016023322A1 - Multi-channel sound signal encoding method, decoding method and device - Google Patents

Multi-channel sound signal encoding method, decoding method and device

Info

Publication number
WO2016023322A1
Authority
WO
WIPO (PCT)
Prior art keywords
channel
sound signal
frequency
channel sound
mapping
Prior art date
Application number
PCT/CN2014/095394
Other languages
English (en)
French (fr)
Inventor
潘兴德
吴超刚
Original Assignee
北京天籁传音数字技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京天籁传音数字技术有限公司
Publication of WO2016023322A1 publication Critical patent/WO2016023322A1/zh

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008: Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Definitions

  • the present invention relates to the field of audio processing technologies, and in particular, to a multi-channel sound signal encoding method, a decoding method, and a device.
  • Multi-channel sound signals are now played back to the user over multiple channels, and their encoding methods have evolved from waveform coding techniques such as M/S (sum/difference) stereo and intensity stereo, represented by AC-3 and MP3, to parametric stereo and parametric surround, represented by MP3Pro, ITU EAAC+, MPEG Surround, and Dolby DD+.
  • PS techniques, including Parametric Stereo and Parametric Surround, exploit psychoacoustic spatial cues such as the binaural time/phase difference (ITD/IPD), binaural intensity difference (IID), and binaural correlation (IC) to achieve parametric encoding of multi-channel sound signals.
  • PS technology generally downmixes the multi-channel sound signal at the encoding end to generate a single sum channel signal, applies waveform coding (or waveform-parameter hybrid coding, such as EAAC+) to that signal, and parameter-encodes the ITD/IPD, IID, and IC parameters between each channel and the sum channel signal.
  • At the decoding end, the multi-channel signal is recovered from the sum channel signal. It is also possible to group the channels at encoding time and apply the above PS codec within each channel group, or to perform multi-stage PS encoding on multiple channels in a cascaded manner.
  • both the traditional PS technology and the MPEG Surround technology rely too much on the psychoacoustic properties of both ears, ignoring the statistical properties of the multi-channel sound signal itself.
  • neither the traditional PS technology nor the MPEG Surround technology utilizes statistical redundancy information between pairs of channels.
  • Although MPEG Surround uses residual information coding, statistical redundancy remains between the sum channel signal and the residual channel signals, so coding efficiency and coded signal quality cannot both be achieved.
  • The invention provides a multi-channel sound signal encoding method, a decoding method, and devices, aiming to solve the problem that prior-art multi-channel sound signal encoding methods leave statistical redundancy and cannot balance coding efficiency against coded signal quality.
  • The present invention provides a multi-channel sound signal encoding method, the method comprising: A) mapping a first multi-channel sound signal into a first frequency domain signal by using a modified discrete cosine transform (MDCT) or a modified discrete sine transform (MDST); B) dividing the first frequency domain signal into different time-frequency sub-bands; C) calculating, in each of the different time-frequency sub-bands, a first statistical characteristic of the first multi-channel sound signal; D) estimating a principal component analysis (PCA) mapping model according to the first statistical characteristic; E) mapping the first multi-channel sound signal into a second multi-channel sound signal by using the PCA mapping model; F) perceptually encoding at least one of the second multi-channel sound signals and the PCA mapping model according to time, frequency, and channel, and multiplexing them into an encoded multi-channel code stream.
  • The present invention provides a multi-channel sound signal encoding apparatus, the apparatus comprising: a time-frequency mapping unit, configured to map a first multi-channel sound signal into a first frequency domain signal by using MDCT or MDST, and to divide the first frequency domain signal into different time-frequency sub-bands; an adaptive subspace mapping unit, configured to calculate, in each of the different time-frequency sub-bands divided by the time-frequency mapping unit, a first statistical characteristic of the first multi-channel sound signal, to estimate a PCA mapping model according to the first statistical characteristic, and to map the first multi-channel sound signal into a second multi-channel sound signal by using the PCA mapping model; and a perceptual coding unit, configured to perceptually encode at least one of the second multi-channel sound signals mapped by the adaptive subspace mapping unit and the PCA mapping model according to time, frequency, and channel, and to multiplex them into an encoded multi-channel code stream.
  • The present invention provides a multi-channel sound signal decoding method, the method comprising: A) decoding an encoded multi-channel code stream to obtain at least one group of second multi-channel sound signals and a PCA mapping model; B) mapping the second multi-channel sound signal back to the first multi-channel sound signal by using the PCA mapping model; C) mapping the first multi-channel sound signal from the frequency domain to the time domain by using the inverse MDCT or the inverse MDST.
  • The present invention provides a multi-channel sound signal decoding apparatus, the apparatus comprising: a perceptual decoding unit, configured to decode an encoded multi-channel code stream to obtain at least one group of second multi-channel sound signals and a PCA mapping model; a subspace inverse mapping unit, configured to map the second multi-channel sound signal obtained by the perceptual decoding unit back to the first multi-channel sound signal by using the PCA mapping model obtained by the perceptual decoding unit; and a frequency-time mapping unit, configured to map the first multi-channel sound signal obtained by the subspace inverse mapping unit from the frequency domain to the time domain by using the inverse MDCT or the inverse MDST.
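A minimal sketch of the decoder-side inverse mapping step (the function name is illustrative, not from the patent): because the rows of the PCA mapping matrix are orthonormal, as the description later notes, the inverse of W is simply its transpose, so mapping the second multi-channel signal back is a single matrix multiply.

```python
import numpy as np

def inverse_pca_mapping(z, W):
    """Map the decoded second multi-channel signal z (M x L) back to the
    first multi-channel signal using the transmitted PCA matrix W (M x M).
    W is orthogonal, so W.T equals inv(W)."""
    return W.T @ z

# Round-trip check with an orthogonal 2x2 mapping (a rotation).
theta = 0.3
W = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
x = np.arange(10.0).reshape(2, 5)   # 2 channels, 5 coefficients
z = W @ x                           # encoder-side mapping
assert np.allclose(inverse_pca_mapping(z, W), x)
```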
  • The first multi-channel sound signal is first mapped to the first frequency domain signal by using MDCT or MDST, and the first frequency domain signal is then divided into different time-frequency sub-bands. In each time-frequency sub-band, a first statistical characteristic of the first multi-channel sound signal is calculated, a PCA mapping model is estimated from that statistical characteristic, and the PCA mapping model maps the first multi-channel sound signal to a second multi-channel sound signal. According to time, frequency, and channel, at least one of the second multi-channel sound signals and the PCA mapping model are perceptually encoded and multiplexed into an encoded multi-channel code stream.
  • MDCT or MDST is specifically adopted for the time-frequency mapping, and the PCA mapping model is specifically selected when estimating the mapping model from statistical characteristics. Since MDCT and MDST have good audio compression properties, and the mapping matrix vectors of the PCA model are orthogonal, the multi-channel signal components can be concentrated on as few channels as possible. This is advantageous for reducing the dimensionality of the encoded signal at lower code rates, minimizing the statistical redundancy between channels and achieving higher coding efficiency while ensuring the quality of the encoded signal.
  • FIG. 1 is a flowchart of a method for encoding a multi-channel sound signal according to an embodiment of the present invention.
  • FIG. 2 is a flowchart of a method for encoding a multi-channel sound signal according to another embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for encoding a multi-channel sound signal according to another embodiment of the present invention.
  • FIG. 4 is a flowchart of a method for encoding a multi-channel sound signal according to another embodiment of the present invention.
  • FIG. 5 is a flowchart of a method for encoding a multi-channel sound signal according to another embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for decoding a multi-channel sound signal according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a multi-channel sound signal encoding apparatus according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a multi-channel sound signal decoding apparatus according to an embodiment of the present invention.
  • The multi-channel sound signal encoding method in the embodiment of the present invention, unlike other prior-art methods, fully utilizes both the statistical characteristics and the psychoacoustic characteristics of the multi-channel sound signal, and obtains extremely high encoding efficiency while ensuring the quality of the encoded signal.
  • The Principal Component Analysis (PCA) method is adopted for adaptive subspace mapping, which can better estimate and utilize the inter-channel statistical characteristics of the signal and minimize the statistical redundancy between channels, achieving higher coding efficiency.
  • Embodiments of the present invention are directed to multi-channel sound codecs using MDCT or MDST: the PCA mapping method is applied in the MDCT/MDST domain to eliminate the statistical redundancy of the multi-channel signal and concentrate the multi-channel signal on as few channels as possible.
  • FIG. 1 is a flowchart of a method for encoding a multi-channel sound signal according to an embodiment of the present invention, where the method includes:
  • Step 101 Map a first multi-channel sound signal into a first frequency domain signal by using a Modified Discrete Cosine Transform (MDCT) or a Modified Discrete Sine Transform (MDST).
  • The first multi-channel sound signal is initially represented by a time domain signal u(m, t), where m is the channel number and t is the frame (or subframe) number; after time-frequency mapping, the frequency domain signal is denoted x(m, k), where k is the frequency number.
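As a minimal illustration of the MDCT used in step 101 (a generic sketch of the standard transform, not the codec's actual implementation), the forward and inverse MDCT with a sine window can be written as follows; overlap-adding adjacent inverse blocks gives perfect reconstruction:

```python
import numpy as np

def mdct(block, window):
    """Forward MDCT: 2N windowed time samples -> N frequency coefficients."""
    N = len(block) // 2
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
    return basis @ (window * block)

def imdct(coeffs, window):
    """Inverse MDCT: N coefficients -> 2N time-aliased samples; the aliasing
    cancels when adjacent blocks are overlap-added by N samples."""
    N = len(coeffs)
    n = np.arange(2 * N)
    k = np.arange(N)
    basis = np.cos(np.pi / N * (n[:, None] + 0.5 + N / 2) * (k[None, :] + 0.5))
    return (2.0 / N) * window * (basis @ coeffs)

# The sine window satisfies the Princen-Bradley condition, so two
# half-overlapping blocks reconstruct the middle N samples exactly.
N = 64
win = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))
x = np.random.default_rng(0).standard_normal(3 * N)
mid = imdct(mdct(x[:2 * N], win), win)[N:] + imdct(mdct(x[N:], win), win)[:N]
assert np.allclose(mid, x[N:2 * N])
```

The frame and subframe segmentation described next simply applies this transform per channel and per (sub)frame to obtain x(m, k).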
  • Step 102 Divide the first frequency domain signal into different time-frequency sub-bands.
  • x(m, k) may be divided into different time-frequency sub-bands x_i(t, k), where m is the channel number, i is the sub-band number, t is the frame (or subframe) number, and k is the frequency number.
  • The multi-channel sound signal to be encoded may first be divided into frames and then MDCT/MDST-transformed. If a larger frame length is used, one frame of data may be decomposed into multiple subframes before the MDCT/MDST transform. After the frequency domain signal is obtained from the MDCT/MDST transform, multiple frequency sub-bands can be formed in frequency order; alternatively, the frequency domain signals from multiple MDCT/MDST transforms can be combined into a two-dimensional time-frequency plane, which is divided into time-frequency regions to obtain the time-frequency sub-bands to be encoded.
  • Each time-frequency region is projected onto each channel's time-frequency plane to obtain the time-frequency sub-band x_i(t, k) to be encoded, where i is the sub-band number and t is the frame (or subframe) number.
  • The signal range of the time-frequency sub-band x_i(t, k) is t_{i-1} ≤ t < t_i, k_{i-1} ≤ k < k_i, where t_{i-1} and t_i are the start and end frame (or subframe) numbers of the sub-band, and k_{i-1} and k_i are its start and end frequency or sub-band numbers. If the total number of time-frequency sub-bands is N, then i ≤ N.
  • The area of a time-frequency sub-band can be represented by (t, k); each time-frequency sub-band includes the signal projected by each channel onto that time-frequency region, which can be denoted x_i(t, k, m).
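The sub-band division above can be sketched as partitioning a (channels, frames, bins) array of MDCT coefficients into rectangular time-frequency regions. The function name and the edge lists in the example are illustrative assumptions, not values from the patent:

```python
import numpy as np

def split_subbands(X, frame_edges, freq_edges):
    """Partition X of shape (channels, frames, bins) into rectangular
    time-frequency sub-bands x_i bounded by consecutive frame/frequency edges."""
    subbands = []
    for t0, t1 in zip(frame_edges[:-1], frame_edges[1:]):
        for k0, k1 in zip(freq_edges[:-1], freq_edges[1:]):
            subbands.append(X[:, t0:t1, k0:k1])  # one region x_i(t, k, m)
    return subbands

# 2 channels, 4 frames, 8 frequency lines -> four 2x2x4 sub-bands.
X = np.arange(2 * 4 * 8).reshape(2, 4, 8)
subs = split_subbands(X, [0, 2, 4], [0, 4, 8])
assert len(subs) == 4 and all(s.shape == (2, 2, 4) for s in subs)
```

Every coefficient lands in exactly one sub-band, so the partition loses nothing: the sub-band sizes sum to the size of X.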
  • Step 103 Calculate a first statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands.
  • Step 104 Estimate the PCA mapping model according to the first statistical characteristic.
  • mapping coefficient of the PCA mapping model can be adaptively adjusted according to the first statistical characteristic.
  • The first statistical characteristic in the embodiment of the present invention may be a first-order statistic (mean), a second-order statistic (variance and correlation coefficient), a higher-order statistic (higher-order moment), or a transformation thereof; the second-order statistic is usually chosen.
  • a second order statistic can be employed as the first statistical characteristic, for example, a covariance matrix.
  • Step 105 Map the first multi-channel sound signal into a second multi-channel sound signal by using a PCA mapping model.
  • The statistical characteristics of the multi-channel sound signal x_i(t, k) can be calculated in the different time-frequency sub-bands and the optimized subspace mapping model W_i(t, k) estimated; using the estimated mapping model, the multi-channel signal is mapped to a new subspace to obtain a new set of multi-channel signals z_i(t, k).
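Steps 103 to 105 can be sketched in a few lines (function and variable names are illustrative): the covariance of the M channel signals inside one time-frequency sub-band serves as the second-order statistic, its eigenvectors give the orthogonal mapping W_i, and z_i = W_i x_i concentrates the energy on the leading channels.

```python
import numpy as np

def pca_mapping(subband):
    """subband: (M, L) array, M channels by L MDCT coefficients in one
    time-frequency sub-band. Returns the orthogonal PCA matrix W (rows are
    principal directions, strongest first) and the mapped signal z = W @ x."""
    C = np.cov(subband)                   # M x M covariance (second-order statistic)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh returns ascending eigenvalues
    W = eigvecs[:, ::-1].T                # reorder: strongest component first
    return W, W @ subband

# Two strongly correlated channels: PCA packs nearly all energy into z[0].
rng = np.random.default_rng(1)
s = rng.standard_normal(1000)
x = np.vstack([s, 0.9 * s + 0.1 * rng.standard_normal(1000)])
W, z = pca_mapping(x)
assert np.allclose(W @ W.T, np.eye(2), atol=1e-10)  # orthogonal mapping
assert z[0].var() > z[1].var()                      # energy concentration
```

Because W is orthogonal, the decoder recovers the sub-band exactly (up to quantization) as W.T @ z, which is the inverse mapping step of the decoding method.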
  • Step 106 Perceptually encode at least one of the second multi-channel sound signals and the PCA mapping model according to time, frequency and channel, and multiplex into the encoded multi-channel code stream.
  • At least one new set of multi-channel signals z_i(t, k) and the corresponding mapping model W_i(t, k) may be perceptually encoded and multiplexed into an encoded multi-channel code stream.
  • the above perceptual coding may specifically be hierarchical perceptual coding.
  • In this embodiment, the first multi-channel sound signal is first mapped to the first frequency domain signal by using MDCT or MDST, and the first frequency domain signal is then divided into different time-frequency sub-bands. A first statistical characteristic of the first multi-channel sound signal is calculated in each time-frequency sub-band, a PCA mapping model is estimated from it, and the PCA mapping model maps the first multi-channel sound signal to the second multi-channel sound signal; at least one of the second multi-channel sound signals and the PCA mapping model are then perceptually encoded according to time, frequency, and channel, and multiplexed into an encoded multi-channel code stream.
  • The PCA mapping model is specifically selected because MDCT or MDST has good audio compression characteristics and the mapping matrix vectors of the PCA model are orthogonal: the multi-channel signal components can be concentrated on as few channels as possible, which helps reduce the dimensionality of the encoded signal at lower bit rates, minimizing the statistical redundancy between channels and achieving higher coding efficiency while ensuring the quality of the encoded signal.
  • the sound components of some channels are significantly different from the sound components of other channels.
  • If such channels are grouped separately and the above method is applied within each group, the optimized mapping model can be extracted more accurately. Therefore, when encoding such a multi-channel sound signal, a channel grouping step can be added to improve encoding efficiency.
  • FIG. 2 is a flowchart of a method for encoding a multi-channel sound signal according to another embodiment of the present invention.
  • In this embodiment, a channel grouping step is added.
  • Step 201 Map the first multi-channel sound signal into a first frequency domain signal by using MDCT or MDST.
  • Step 202 Divide the first frequency domain signal into different time-frequency sub-bands.
  • The sound signal to be encoded may first be divided into frames and then time-frequency transformed. If a larger frame length is used, one frame of data may be further decomposed into multiple subframes before the time-frequency transform. After the frequency domain signal is obtained, multiple frequency sub-bands may be formed in frequency order; alternatively, the frequency domain signals from multiple time-frequency transforms may be combined into a two-dimensional time-frequency plane, and time-frequency region division performed on that plane to obtain the time-frequency sub-bands to be encoded.
  • Step 203 Calculate a second statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands, and divide the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic.
  • The statistical characteristics of the multi-channel sound signal x_i(t, k) are calculated in the different time-frequency sub-bands, and the channels are divided into one or more channel groups according to the statistical characteristics of each channel's sound components, each group containing at least one channel signal. A group containing a single channel is perceptually encoded directly; groups containing more than one channel undergo the subsequent processing.
  • The second statistical characteristic may be a first-order statistic (mean), a second-order statistic (variance and correlation coefficient), a higher-order statistic (higher-order moment), or a transformation thereof; the second-order statistic, especially the correlation coefficient, is usually chosen.
  • the first statistical characteristic may also be used as a criterion for judging the group.
  • The second statistical characteristic and the first statistical characteristic may be the same statistic.
  • the corresponding grouping manner can be flexibly selected according to needs, and a fixed grouping method or an adaptive grouping method can be adopted.
  • A given channel group l includes M_l channels of x_i(t, k), which may be M_l consecutive channels in x_i(t, k) or any M_l non-contiguous channels in x_i(t, k).
  • If an adaptive grouping method is employed, the grouping information of each sub-band needs to be encoded and multiplexed into the code stream; each time-frequency sub-band requires a set of channel grouping information.
  • There are many adaptive grouping algorithms, for example the inter-channel cross-correlation grouping algorithm. Its main steps are as follows:
  • The multi-channel time-frequency sub-band x_i(t, k) is divided into several groups: if the absolute value of the normalized covariance coefficient C(m, n) between two channels m and n is greater than a threshold, channels m and n are placed in the same channel group; otherwise, they are placed in different groups.
  • The grouping information of each sub-band includes the number of groups and the numbers of the channels included in each group.
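The cross-correlation grouping step above can be sketched as a union-find over channel pairs. The function name and the 0.6 threshold are illustrative assumptions; the patent does not specify a threshold value:

```python
import numpy as np

def group_channels(subband, threshold=0.6):
    """subband: (M, L) array of M channel signals in one time-frequency
    sub-band. Channels m, n join the same group when |C(m, n)|, the
    normalized covariance coefficient, exceeds the threshold."""
    M = subband.shape[0]
    C = np.corrcoef(subband)          # normalized covariance coefficients
    parent = list(range(M))
    def find(a):                      # union-find root with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for m in range(M):
        for n in range(m + 1, M):
            if abs(C[m, n]) > threshold:
                parent[find(m)] = find(n)   # merge the two groups
    groups = {}
    for m in range(M):
        groups.setdefault(find(m), []).append(m)
    return sorted(groups.values())

# Channels 0 and 1 are nearly identical; channel 2 is independent noise.
rng = np.random.default_rng(2)
s = rng.standard_normal(2000)
sub = np.vstack([s, s + 0.05 * rng.standard_normal(2000),
                 rng.standard_normal(2000)])
assert group_channels(sub) == [[0, 1], [2]]
```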
  • Steps 204 to 207 are then performed for each grouped sound signal, treating it as the first multi-channel sound signal.
  • Step 204 Calculate a first statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands.
  • Step 205 Estimate the PCA mapping model according to the first statistical characteristic.
  • Step 206 Map the first multi-channel sound signal into a second multi-channel sound signal by using a PCA mapping model.
  • The PCA mapping model W_i(t, k) may be estimated according to the statistical characteristics of each channel's sound components; the estimated PCA mapping model maps the multi-channel signal to a new subspace to obtain a new set of multi-channel signals z_i(t, k).
  • Step 207 Perceptually encode at least one of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model according to different time, frequency, and channel, and multiplex into the encoded multi-channel code stream.
  • At least one new multi-channel signal z_i(t, k), the corresponding mapping model W_i(t, k), and the channel grouping information may be perceptually encoded, and all perceptual coding information multiplexed to obtain the encoded multi-channel code stream.
  • The second statistical characteristic of the first multi-channel sound signal may be calculated first, the first multi-channel sound signal divided into a plurality of grouped sound signals according to the second statistical characteristic, and steps 102 to 106 then performed for each grouped sound signal, treating it as the first multi-channel sound signal.
  • FIG. 3 is a flowchart of a method for encoding a multi-channel sound signal according to another embodiment of the present invention.
  • In this embodiment, the multi-channel sound signal is first grouped, and then time-frequency mapping and the subsequent steps are performed for each grouped sound signal.
  • the method includes:
  • Step 301 Calculate a third statistical characteristic of the first multi-channel sound signal, divide the first multi-channel sound signal into a plurality of grouped sound signals according to the third statistical characteristic, and encode and multiplex the channel grouping information into the encoded multi-channel code stream.
  • The statistical characteristics of the multi-channel sound signal u(m, t) can be calculated, and according to these statistical characteristics, the multi-channel sound signal is divided into one or more channel groups, each group including at least one channel signal, where m is the channel number and t is the frame (or subframe) number.
  • The third statistical characteristic may be a first-order statistic (mean), a second-order statistic (variance and correlation coefficient), a higher-order statistic (higher-order moment), or a transformation thereof; the second-order statistic, especially the correlation coefficient, is usually chosen.
  • the corresponding grouping manner can be flexibly selected. It can be fixed grouping or adaptive grouping.
  • The channel group u_l(m, t) includes M_l channels of u(m, t), which may be M_l consecutive channels in u(m, t) or any M_l non-contiguous channels in u(m, t).
  • If an adaptive grouping method is employed, the grouping information needs to be encoded and multiplexed into the code stream; in this case, only one set of grouping information is required per frame of the signal.
  • There are many adaptive grouping algorithms, for example the inter-channel cross-correlation grouping algorithm. Its main steps are as follows:
  • The multi-channel signal u(m, t) is divided into several groups: if the absolute value of the normalized covariance coefficient C(m, n) between two channels m and n is greater than a threshold, channels m and n are placed in the same channel group; otherwise, they are placed in different groups.
  • Steps 302 to 307 are then performed for each grouped sound signal, treating it as the first multi-channel sound signal.
  • Step 302 Map the first multi-channel sound signal to the first frequency domain signal by using MDCT or MDST.
  • Step 303 Divide the first frequency domain signal into different time-frequency sub-bands.
  • The MDCT or MDST maps the grouped multi-channel time domain signal u_l(m, t) into a multi-channel frequency domain signal x(m, k), and the time-frequency mapped signal is divided into different time-frequency sub-bands.
  • Step 304 Calculate a first statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands.
  • Step 305 Estimate the PCA mapping model according to the first statistical characteristic.
  • Adaptive subspace mapping is used to estimate an optimized subspace mapping model. Unlike existing multi-channel speech coding methods, this innovative subspace mapping method estimates the multi-channel optimized subspace mapping model from the statistical properties of the signal; the model is an adaptive linear transformation matrix, and the subspace mapping method adopted is the PCA mapping method developed in recent years.
  • Step 306 Map the first multi-channel sound signal into a second multi-channel sound signal by using a PCA mapping model.
  • The statistical characteristics of the multi-channel sound signal x_i(t, k) can be calculated in the different time-frequency sub-bands and the PCA mapping model W_i(t, k) estimated; using the estimated mapping model, the multi-channel signal is mapped to the new subspace to obtain a new set of multi-channel signals z_i(t, k).
  • Step 307 Perceptually encode at least one of the second multi-channel sound signals and the PCA mapping model according to time, frequency and channel, and multiplex into the encoded multi-channel code stream.
  • At least one new multi-channel signal z_i(t, k) and the corresponding mapping model W_i(t, k) can be perceptually encoded; all perceptual coding information is multiplexed to obtain the encoded multi-channel code stream.
  • Waveform coding: perceptual quantization and Huffman entropy coding as used in MP3 and AAC, exponent-mantissa coding as used in AC-3, perceptual vector quantization coding as used in Ogg Vorbis and TwinVQ, etc.;
  • Parameter coding: harmonic, individual spectral line, and noise coding as used in MPEG HILN, harmonic vector excitation coding as used in MPEG HVXC, code excitation and transform-coded excitation (TCX) coding in AMR WB+;
  • Waveform-parameter hybrid coding: for example, MP3Pro, AAC+, and AMR WB+ use waveform coding for low frequencies and band-extension parameter coding for high frequencies.
  • the adaptive subspace mapping in the embodiment of the present invention adopts a PCA mapping model, and adaptively adjusts the mapping coefficient of the PCA model according to statistical characteristics between channels.
  • The adaptive subspace mapping strategy of the present invention is essential to achieving its object: obtaining very high coding efficiency when encoding multi-channel signals while ensuring the quality of the encoded signal.
  • the subspace mapping model can be described as follows:
  • x = {x_1, x_2, ..., x_M} is the observation vector of the current subspace, and A is the current subspace mapping matrix.
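Written out per time-frequency sub-band (using the W_i notation of the embodiments for the mapping matrix A), the subspace mapping model is a linear transform, and the orthogonality of the PCA matrix gives the inverse directly:

```latex
z_i(t,k) = W_i(t,k)\, x_i(t,k), \qquad
W_i(t,k)\, W_i(t,k)^{\mathsf{T}} = I
\;\Rightarrow\;
x_i(t,k) = W_i(t,k)^{\mathsf{T}}\, z_i(t,k)
```

This is the relation the decoder uses when mapping the second multi-channel sound signal back to the first.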
  • The present invention may divide the spectrum of the MDCT/MDST domain (i.e., the frequency domain signal) into at least two sub-spectra of interleaved spectral lines.
  • The odd-numbered sub-spectrum can be further divided into an odd-odd sub-spectrum x_oo^i(t, k) and an odd-even sub-spectrum x_oe^i(t, k), and the even-numbered sub-spectrum can be further divided in the same manner.
  • the encoding method of the present invention includes the following processing procedure.
  • Step 401 Map the first multi-channel sound signal into a first frequency domain signal by using MDCT or MDST.
  • the multi-channel sound time domain signal u(m,t) can be mapped to the multi-channel frequency domain signal x(m,k) by using MDCT or MDST.
  • Step 402 Divide the first frequency domain signal into multiple sub-spectra according to the parity of the spectral line number in the first frequency domain signal.
  • Step 403 Divide the first frequency domain signal into different time-frequency sub-bands.
  • Here each time-frequency sub-band includes all sub-spectra (specifically, it may include the odd sub-spectrum and the even sub-spectrum), and the first multi-channel sound signal may be represented by x_i(t, k).
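The parity-based sub-spectrum division of step 402 can be sketched as separating interleaved spectral lines by the parity of the frequency index (the array layout is an assumption for illustration):

```python
import numpy as np

def split_parity(X):
    """X: (channels, frames, bins) MDCT coefficients.
    Returns (even-numbered sub-spectrum, odd-numbered sub-spectrum)."""
    return X[:, :, 0::2], X[:, :, 1::2]

# 1 channel, 2 frames, 8 spectral lines numbered 0..7 per frame.
X = np.arange(1 * 2 * 8).reshape(1, 2, 8)
even, odd = split_parity(X)
assert even.shape == (1, 2, 4) and odd.shape == (1, 2, 4)
```

Applying the same split again to each sub-spectrum yields the odd-odd, odd-even, and corresponding further divisions mentioned above.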
  • Step 404 Calculate a second statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands, and divide the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic.
  • Steps 405 to 408 are then performed for each grouped sound signal, treating it as the first multi-channel sound signal.
  • the step 404 is an optional step, that is, the grouping process may not be performed in the embodiment of the present invention.
  • Step 405 Calculate a first statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands.
  • Step 406 Estimate the PCA mapping model according to the first statistical characteristic.
  • Step 407 Map the first multi-channel sound signal into a second multi-channel sound signal by using a PCA mapping model.
  • The statistical characteristics of the multi-channel sound signal x_i(t, k) can be calculated in the different time-frequency sub-bands and the PCA mapping model W_i(t, k) estimated; using the estimated mapping model, the multi-channel signal is mapped to the new subspace to obtain a new set of multi-channel signals z_i(t, k).
  • Step 408 Perceptually encode at least one of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model according to time, frequency, and channel, and multiplex into the encoded multi-channel code stream.
  • At least one new set of multi-channel signals z_i(t, k), the corresponding mapping models W_i(t, k), and the channel grouping information may be perceptually encoded to obtain the encoded multi-channel code stream.
  • If the grouping process of step 404 is not performed, the channel grouping information is not perceptually encoded in step 408.
  • FIG. 5 is a flowchart of a method for encoding a multi-channel sound signal according to another embodiment of the present invention.
  • In this embodiment, the multi-channel sound signal is first grouped, time-frequency mapping is then performed for each grouped signal, the frequency domain signal is divided into a plurality of sub-spectra, and time-frequency sub-bands are divided within each sub-spectrum.
  • the encoding method of the present invention includes the following processing procedure.
  • Step 501: Calculate a third statistical characteristic of the first multi-channel sound signal, divide the first multi-channel sound signal into a plurality of grouped sound signals according to the third statistical characteristic, and encode the channel grouping information and multiplex it into the encoded multi-channel code stream.
  • Steps 502 through 508 are performed for each grouped sound signal, treating it as the first multi-channel sound signal.
  • Step 502: Map the first multi-channel sound signal to the first frequency-domain signal using MDCT or MDST.
  • Step 503: Divide the first frequency-domain signal into a plurality of sub-spectra according to the parity of the sequence numbers in the first frequency-domain signal.
  • Step 504: Divide each of the plurality of sub-spectra into different time-frequency sub-bands.
  • In this embodiment, a time-frequency sub-band spans all sub-spectra; specifically, it may include both the odd spectrum and the even spectrum.
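As a concrete illustration of the parity split of step 503, the following sketch (not taken from the patent; the function names are ours) divides a spectrum into even- and odd-indexed sub-spectra and restores the natural-order spectrum, as the decoder's sub-spectrum recovery step would:

```python
# Illustrative sketch: splitting a frequency-domain signal into parity
# sub-spectra and restoring the natural-order spectrum losslessly.

def split_parity(spectrum):
    """Divide a spectrum into even/odd sub-spectra by bin index."""
    return spectrum[0::2], spectrum[1::2]   # x_e(k) = x(2k), x_o(k) = x(2k+1)

def merge_parity(even, odd):
    """Restore the natural-order spectrum from its two sub-spectra."""
    out = [0.0] * (len(even) + len(odd))
    out[0::2] = even
    out[1::2] = odd
    return out

spectrum = [0.9, -0.1, 0.7, 0.2, -0.5, 0.05, 0.3, -0.02]  # toy MDCT bins
even, odd = split_parity(spectrum)
assert merge_parity(even, odd) == spectrum  # lossless round trip
```

The same recursion applied to each sub-spectrum would yield the four-way (odd-odd, odd-even, even-odd, even-even) division also contemplated by the description.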
  • Step 505: In each of the different time-frequency sub-bands, calculate a first statistical characteristic of the first multi-channel sound signal.
  • Step 506: Estimate the PCA mapping model according to the first statistical characteristic.
  • Step 507: Map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model.
  • Step 508: According to time, frequency, and channel, perceptually encode at least one group of the second multi-channel sound signals and the PCA mapping model, and multiplex them into the encoded multi-channel code stream.
  • In this embodiment, PCA is adopted: the multi-channel PCA mapping model, an adaptive linear transformation matrix, is estimated from the statistical characteristics of the signal.
  • This adaptive PCA subspace mapping strategy is significant for achieving the object of the present invention, namely obtaining very high coding efficiency when encoding a multi-channel signal while guaranteeing the quality of the encoded signal.
  • Let x = {x_1, x_2, …, x_M} be the observation vector in the current subspace and z = {z_1, z_2, …, z_M} the observation vector in the new subspace, with z = Wx, where W is the new subspace mapping matrix and x, z are vectors of de-meaned scalar random variables.
  • Step one: calculate the covariance matrix C of the observation vector x.
  • When PCA is applied to the i-th time-frequency sub-band x_i(t,k) of a group, M is the number of channels in the group, and x_i(t,k,m) is the set of sample points of the element x_m of the observation vector x, with t_{i-1} ≤ t < t_i and k_{i-1} ≤ k < k_i, where t_{i-1} and t_i are the start and end frame (or sub-frame) numbers of the sub-band, and k_{i-1} and k_i are its start and end frequencies or sub-band numbers.
  • After de-meaning x_i(t,k,m), if the sub-band contains a single frame (or sub-frame), the covariance matrix C can be computed as C(m,n) = Σ_{k_{i-1}≤k<k_i} x_i(t,k,m)·x_i(t,k,n); when the sub-band spans multiple frames (or sub-frames), the sum additionally runs over t_{i-1} ≤ t < t_i.
  • Step two: compute the eigenvectors e_1, e_2, …, e_M and the eigenvalues λ_1, λ_2, …, λ_M of the covariance matrix, sorting the eigenvalues in descending order.
  • Step three: map the observation vector x into the space spanned by the eigenvectors to obtain the mapped vector z, i.e., z = Wx.
  • The mapping matrix vectors of the PCA model are mutually orthogonal, so the multi-channel signal components can be concentrated on as few channels as possible, which helps reduce the dimensionality of the encoded signal at lower code rates.
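The PCA steps above can be sketched numerically as follows (a minimal illustration, not the patent's reference implementation; the function name, channel layout, and test signal are our assumptions, with channels as rows and sub-band samples as columns):

```python
# Minimal PCA sketch: covariance (step one), eigendecomposition (step two),
# and the mapping z = Wx (step three) for one time-frequency sub-band.
import numpy as np

def pca_mapping_model(x):
    """Estimate the PCA mapping matrix W from channel samples (rows)."""
    x = x - x.mean(axis=1, keepdims=True)    # de-mean each channel
    C = (x @ x.T) / x.shape[1]               # step one: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)     # step two: eigendecomposition
    order = np.argsort(eigvals)[::-1]        # sort eigenvalues descending
    W = eigvecs[:, order].T                  # rows of W are e_1, ..., e_M
    return W, x

rng = np.random.default_rng(0)
base = rng.standard_normal(512)
x = np.vstack([base + 0.05 * rng.standard_normal(512),   # two highly
               base + 0.05 * rng.standard_normal(512)])  # correlated channels
W, x0 = pca_mapping_model(x)
z = W @ x0                                   # step three: z = Wx
energy = (z ** 2).sum(axis=1)
# Orthogonal W concentrates almost all energy in the first mapped channel:
assert energy[0] > 100 * energy[1]
assert np.allclose(W @ W.T, np.eye(2), atol=1e-10)
```

The energy concentration shown here is what allows the encoder to spend most of its bits on few channels.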
  • The perceptual coding of the present invention is divided into coding of the multi-channel sound signal z(m,k) and coding of the corresponding mapping model W(m,k).
  • The multi-channel sound signal z(m,k) may be encoded with any of the following sound coding methods:
  • Waveform coding, such as the perceptual quantization and Huffman entropy coding used in MP3 and AAC, the exponent-mantissa coding used in AC-3, and the perceptual vector quantization coding used in Ogg Vorbis and TwinVQ;
  • Parameter coding, such as the harmonic, individual sinusoidal component, and noise coding used in MPEG HILN, the harmonic vector excitation coding used in MPEG HVXC, and the code excitation and transform coded excitation (TCX) coding used in AMR WB+;
  • Waveform-parameter hybrid coding, in which, as in MP3Pro, AAC+, and AMR WB+, the low band is waveform-coded and the high band is coded with band-extension parameters.
  • The mapping model coding may encode the corresponding mapping matrix (i.e., the eigenvectors), may encode another transformed form of the model, or may directly encode the covariance matrix from which the mapping matrix is computed.
  • The mapping model may be encoded with well-known methods such as scalar quantization, vector quantization, and predictive coding; entropy coding (such as Huffman or arithmetic coding) may also be used to further improve coding efficiency. For example, when the frequency-domain signal is divided into odd and even sub-spectra, the mapping matrix of the odd spectrum and that of the even spectrum are correlated, i.e., redundant; redundancy also exists between the mapping matrices of adjacent frequency bands, and exploiting this redundant information can improve coding efficiency. For instance, the mapping matrix of an odd-spectrum sub-band and that of the adjacent even-spectrum sub-band may be jointly vector-coded.
  • In the perceptual coding of this embodiment, at least one new group of multi-channel signals and the corresponding mapping model are perceptually encoded.
  • The encoded signal components and the corresponding mapping model parameters may be selected according to the current target code rate and the perceptual importance of the new multi-channel signals.
  • The adaptive subspace mapping and perceptual coding method of the present invention can also provide scalable coding: the multi-channel sound signal is encoded only once to obtain a single sound code stream, which then supports transmission and decoding at multiple code rates and quality levels, thereby serving the different application needs of multiple types of users.
  • When scalable coding is supported, the perceptual coding module can be further decomposed into the following steps:
  • Step one: select the most important group or groups of signals and the corresponding mapping models for perceptual coding, with the code rate of this partial stream not exceeding the base-layer rate constraint;
  • Step two: select the second most important group or groups of signals and the corresponding mapping models for perceptual coding, with the code rate of this partial stream not exceeding the first enhancement-layer rate constraint;
  • Step three: select the third most important group or groups of signals and the corresponding mapping models for perceptual coding, with the code rate of this partial stream not exceeding the second enhancement-layer rate constraint;
  • Step four: continue in this manner until lossless coding is achieved, obtaining an N-layer code stream;
  • Step five: multiplex all N layers of code streams into one compressed stream.
  • In scalable coding applications, the compressed stream re-assembled from the scalable code stream according to a service request shall include at least the base-layer code stream; at higher code rates, enhancement-layer code streams may be multiplexed in order of importance.
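The layered packing above can be sketched schematically as follows (all structures and numbers are hypothetical; real layers would carry perceptually coded signal groups plus their mapping models rather than bare tuples):

```python
# Schematic sketch of scalable layer packing: groups are taken in order of
# perceptual importance, and each layer holds as many whole groups as its
# rate constraint allows.

def pack_layers(groups, layer_budgets):
    """groups: (importance, bits) pairs; returns one list of groups per layer."""
    ordered = sorted(groups, key=lambda g: g[0], reverse=True)
    layers, idx = [], 0
    for budget in layer_budgets:
        layer, used = [], 0
        while idx < len(ordered) and used + ordered[idx][1] <= budget:
            layer.append(ordered[idx])
            used += ordered[idx][1]
            idx += 1
        layers.append(layer)
    return layers

groups = [(0.9, 4000), (0.7, 3000), (0.4, 2500), (0.1, 2000)]
layers = pack_layers(groups, layer_budgets=[5000, 4000, 5000])
assert [len(l) for l in layers] == [1, 1, 2]  # base layer first, then enhancements
```

A receiver limited to the base-layer rate decodes only `layers[0]`; higher-rate receivers append enhancement layers in importance order, mirroring the re-assembly rule above.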
  • FIG. 6 is a flowchart of a multi-channel sound signal decoding method according to an embodiment of the present invention, the method including:
  • Step 601: Decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals and the PCA mapping model.
  • Step 602: Map the second multi-channel sound signal back to the first multi-channel sound signal using the PCA mapping model.
  • Step 603: Using the inverse modified discrete cosine transform (IMDCT) or the inverse modified discrete sine transform (IMDST), map the first multi-channel sound signal from the frequency domain to the time domain.
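The inverse subspace mapping of step 602 can be illustrated as follows (a sketch under the assumption that W is orthogonal, so its inverse is its transpose; the toy matrix and signal are ours, and quantization of the transmitted data is ignored):

```python
# Sketch of step 602: because the PCA mapping matrix W is orthogonal, the
# decoder maps z back to the first multi-channel signal with x = W^T z,
# before the IMDCT/IMDST of step 603.
import numpy as np

def inverse_subspace_mapping(W, z):
    """Map the decoded second multi-channel signal z back to x using W^T."""
    return W.T @ z

theta = 0.3                                   # toy 2x2 orthogonal mapping
W = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
x = np.array([[1.0, 2.0, 3.0],
              [0.5, -0.5, 0.25]])             # 2 channels, 3 spectral samples
z = W @ x                                     # what the encoder would transmit
x_hat = inverse_subspace_mapping(W, z)
assert np.allclose(x_hat, x)                  # exact recovery up to quantization
```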
  • When the first multi-channel sound signal consists of multiple grouped sound signals in the frequency domain, before step 603 the method may further include: decoding the channel grouping information in the code stream to obtain decoded channel grouping information; restoring the multiple grouped sound signals into groups according to the decoded channel grouping information to obtain a third multi-channel sound signal; and performing step 603 with the third multi-channel sound signal as the first multi-channel sound signal.
  • When the first multi-channel sound signal consists of multiple grouped sound signals in the time domain, step 601 may further include decoding the encoded multi-channel code stream to obtain the decoded channel grouping information, and after step 603 the method may further include restoring, according to the decoded channel grouping information, the multiple grouped sound signals into groups to obtain a fourth multi-channel sound signal.
  • When the first multi-channel sound signal consists of multiple sub-spectra in the frequency domain, before step 603 the method may further include: restoring the multiple sub-spectra of each channel to a natural-order frequency-domain signal, and performing step 603 with that frequency-domain signal as the first multi-channel sound signal.
  • Before step 601, the method may further include: demultiplexing the encoded multi-channel code stream to obtain multiple layered code streams, and performing step 601 for each layered code stream as the encoded multi-channel code stream; after step 601 has been performed on all layered code streams, steps 602 and 603 are performed once on the combined result.
  • FIG. 7 is a schematic structural diagram of a multi-channel sound signal encoding apparatus according to an embodiment of the present invention, the apparatus including:
  • a time-frequency mapping unit 701, configured to map the first multi-channel sound signal to a first frequency-domain signal using MDCT or MDST, and to divide the first frequency-domain signal or the first sub-band signal into different time-frequency sub-bands;
  • an adaptive subspace mapping unit 702, configured to calculate, in each of the different time-frequency sub-bands divided by the time-frequency mapping unit 701, a first statistical characteristic of the first multi-channel sound signal, estimate a PCA mapping model according to the first statistical characteristic, and map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model;
  • a perceptual coding unit 703, configured to perceptually encode, according to time, frequency, and channel, at least one group of the second multi-channel sound signals mapped by the adaptive subspace mapping unit 702 together with the PCA mapping model, and to multiplex them into an encoded multi-channel code stream.
  • Preferably, the apparatus further includes:
  • a first channel grouping unit, configured to, before the adaptive subspace mapping unit 702 calculates the first statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands, calculate a second statistical characteristic of the first multi-channel sound signal in each time-frequency sub-band divided by the time-frequency mapping unit 701, and divide the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic;
  • the adaptive subspace mapping unit 702 and the perceptual coding unit 703 are specifically configured to process each grouped sound signal divided by the first channel grouping unit as the first multi-channel sound signal, and the perceptual coding unit 703 is further configured to perceptually encode the channel grouping information.
  • Preferably, the apparatus further includes:
  • a second channel grouping unit, configured to, before the time-frequency mapping unit 701 maps the first multi-channel sound signal to the first frequency-domain signal using MDCT or MDST, calculate a third statistical characteristic of the first multi-channel sound signal, divide the first multi-channel sound signal into multiple grouped sound signals according to the third statistical characteristic, and perceptually encode the channel grouping information;
  • the time-frequency mapping unit 701, the adaptive subspace mapping unit 702, and the perceptual coding unit 703 are specifically configured to process each grouped sound signal divided by the second channel grouping unit as the first multi-channel sound signal.
  • Preferably, the apparatus further includes:
  • a sub-spectrum dividing unit, configured to, before the time-frequency mapping unit 701 divides the first frequency-domain signal into different time-frequency sub-bands, divide the first frequency-domain signal into multiple sub-spectra according to the parity of the sequence numbers in the first frequency-domain signal;
  • the time-frequency mapping unit 701, the adaptive subspace mapping unit 702, and the perceptual coding unit 703 are specifically configured to process each of the multiple sub-spectra divided by the sub-spectrum dividing unit as the first frequency-domain signal.
  • FIG. 8 is a schematic structural diagram of a multi-channel sound signal decoding apparatus according to an embodiment of the present invention, the apparatus including:
  • a perceptual decoding unit 801, configured to decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals and the PCA mapping model;
  • a subspace inverse mapping unit 802, configured to map the second multi-channel sound signal obtained by the perceptual decoding unit 801 back to the first multi-channel sound signal using the PCA mapping model obtained by the perceptual decoding unit 801;
  • a frequency-time mapping unit 803, configured to map the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 from the frequency domain to the time domain using IMDCT or IMDST.
  • Preferably, the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 consists of multiple grouped sound signals in the frequency domain;
  • the perceptual decoding unit 801 is specifically configured to decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model;
  • the apparatus further includes:
  • a first group restoring unit, configured to, before the frequency-time mapping unit 803 uses IMDCT or IMDST to map the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 from the frequency domain to the time domain, restore the multiple grouped sound signals into groups according to the decoded channel grouping information to obtain a third multi-channel sound signal;
  • the frequency-time mapping unit 803 is specifically configured to process the third multi-channel sound signal obtained by the first group restoring unit as the first multi-channel sound signal.
  • Preferably, the first multi-channel sound signal after the mapping processing by the frequency-time mapping unit 803 consists of multiple grouped sound signals in the time domain;
  • the perceptual decoding unit 801 is specifically configured to decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model;
  • the apparatus further includes:
  • a second group restoring unit, configured to, after the frequency-time mapping unit 803 uses IMDCT or IMDST to map the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 from the frequency domain to the time domain, restore the multiple grouped sound signals into groups according to the channel grouping information to obtain a fourth multi-channel sound signal.
  • Preferably, the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 consists of multiple sub-spectra in the frequency domain, and the apparatus further includes:
  • a sub-spectrum restoring unit, configured to, before the frequency-time mapping unit 803 uses IMDCT or IMDST to map the first multi-channel sound signal from the frequency domain to the time domain, restore the multiple sub-spectra of each channel in the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 to a natural-order frequency-domain signal;
  • the frequency-time mapping unit 803 is specifically configured to process the natural-order frequency-domain signal as the first multi-channel sound signal.
  • The steps of the methods or algorithms described in connection with the embodiments disclosed herein can be implemented in hardware, in a software module executed by a processor, or in a combination of the two.
  • The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Abstract

A multi-channel sound signal encoding method, decoding method, and devices. The encoding method includes: mapping a first multi-channel sound signal to a first frequency-domain signal using the modified discrete cosine transform (MDCT) or the modified discrete sine transform (MDST) (101); dividing the first frequency-domain signal into different time-frequency sub-bands (102); calculating, within each time-frequency sub-band, a first statistical characteristic of the first multi-channel sound signal (103); estimating a principal component analysis (PCA) mapping model according to the first statistical characteristic (104); mapping the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model (105); and, according to time, frequency, and channel, perceptually encoding at least one group of the second multi-channel sound signal and the PCA mapping model to obtain an encoded multi-channel code stream (106). The encoding method thus uses MDCT or MDST for time-frequency mapping and, when estimating the mapping model from statistical characteristics, specifically selects a PCA mapping model, which can achieve higher coding efficiency and coding quality.

Description

Multi-channel sound signal encoding method, decoding method, and device

Technical field

The present invention relates to the field of audio processing technologies, and in particular to multi-channel sound signal encoding methods, decoding methods, and devices.
Background

With the development of technology, various techniques for encoding sound signals have emerged; "sound" here generally refers to digital sound, including signals perceivable by the human ear such as speech, music, natural sound, and synthesized sound. At present, many sound coding techniques have become industrial standards and are widely applied in daily life. Commonly used sound coding techniques include Dolby Laboratories' AC-3, Digital Theater Systems' DTS, the Moving Picture Experts Group (MPEG) MP3 and AAC, Microsoft's WMA, and Sony's ATRAC.

To reproduce stereophonic sound effects, multi-channel sound signals are now usually played back to the user over multiple channels. The coding of multi-channel sound signals has likewise evolved from waveform coding techniques such as mid/side stereo (M/S Stereo) and intensity stereo, represented by AC-3 and MP3, to parametric stereo (Parametric Stereo) and parametric surround (Parametric Surround) techniques, represented by MP3Pro, ITU EAAC+, MPEG Surround, and Dolby DD+. PS (including Parametric Stereo and Parametric Surround) starts from binaural psychoacoustics and fully exploits psychoacoustic spatial characteristics such as the interaural time/phase difference (ITD/IPD), interaural intensity difference (IID), and interaural correlation (IC) to achieve parametric coding of multi-channel sound signals.

At the encoder, PS techniques generally downmix the multi-channel sound signal to generate one sum-channel signal; the sum-channel signal is waveform-coded (or coded with mixed waveform-parameter coding, as in EAAC+), and the ITD/IPD, IID, and IC parameters of each channel relative to the sum-channel signal are parameter-coded. At the decoder, the multi-channel signal is recovered from the sum-channel signal according to these parameters. Alternatively, the multi-channel signal may be grouped during encoding, with the above PS codec applied to each channel group, or the channels may be PS-coded in multiple cascaded stages.

Practice has shown that pure waveform (sum-channel) coding plus PS coding can achieve high coding quality at low code rates; at higher code rates, however, PS technology cannot further improve signal quality and is unsuitable for high-fidelity applications. The reason is that PS encodes only the sum-channel signal and discards the residual channel signal, so the original signal cannot be fully recovered at decoding. MPEG Surround therefore adopts residual information coding to compensate for this shortcoming of PS.

However, both traditional PS and MPEG Surround rely excessively on binaural psychoacoustic characteristics while ignoring the statistical characteristics of the multi-channel sound signal itself. For example, neither exploits the statistical redundancy between channel pairs. Moreover, even when MPEG Surround applies residual information coding, statistical redundancy still remains between the sum-channel signal and the residual channel signal, so coding efficiency and coded signal quality cannot both be achieved.
Summary of the invention

The present invention provides a multi-channel sound signal encoding method, decoding method, and devices, with the aim of solving the problem in prior-art multi-channel sound signal encoding methods that statistical redundancy remains and coding efficiency and coded signal quality cannot both be achieved.

To achieve the above object, in a first aspect the present invention provides a multi-channel sound signal encoding method, including: A) mapping a first multi-channel sound signal to a first frequency-domain signal using the modified discrete cosine transform (MDCT) or the modified discrete sine transform (MDST); B) dividing the first frequency-domain signal into different time-frequency sub-bands; C) calculating, within each of the different time-frequency sub-bands, a first statistical characteristic of the first multi-channel sound signal; D) estimating a principal component analysis (PCA) mapping model according to the first statistical characteristic; E) mapping the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model; F) according to time, frequency, and channel, perceptually encoding at least one group of the second multi-channel sound signal and the PCA mapping model, and multiplexing them into an encoded multi-channel code stream.

In a second aspect, the present invention provides a multi-channel sound signal encoding apparatus, including: a time-frequency mapping unit, configured to map a first multi-channel sound signal to a first frequency-domain signal using MDCT or MDST and to divide the first frequency-domain signal into different time-frequency sub-bands; an adaptive subspace mapping unit, configured to calculate, within each of the different time-frequency sub-bands divided by the time-frequency mapping unit, a first statistical characteristic of the first multi-channel sound signal, estimate a PCA mapping model according to the first statistical characteristic, and map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model; and a perceptual coding unit, configured to perceptually encode, according to time, frequency, and channel, at least one group of the second multi-channel sound signal mapped by the adaptive subspace mapping unit together with the PCA mapping model, and to multiplex them into an encoded multi-channel code stream.

In a third aspect, the present invention provides a multi-channel sound signal decoding method, including: A) decoding an encoded multi-channel code stream to obtain at least one group of a second multi-channel sound signal and a PCA mapping model; B) mapping the second multi-channel sound signal back to a first multi-channel sound signal using the PCA mapping model; C) mapping the first multi-channel sound signal from the frequency domain to the time domain using the inverse MDCT or the inverse MDST.

In a fourth aspect, the present invention provides a multi-channel sound signal decoding apparatus, including: a perceptual decoding unit, configured to decode an encoded multi-channel code stream to obtain at least one group of a second multi-channel sound signal and a PCA mapping model; a subspace inverse mapping unit, configured to map the second multi-channel sound signal obtained by the perceptual decoding unit back to a first multi-channel sound signal using the PCA mapping model obtained by the perceptual decoding unit; and a frequency-time mapping unit, configured to map the first multi-channel sound signal obtained by the subspace inverse mapping unit from the frequency domain to the time domain using the inverse MDCT or the inverse MDST.

In the multi-channel sound signal encoding method of the embodiments of the present invention, the first multi-channel sound signal is first mapped to a first frequency-domain signal using MDCT or MDST; the first frequency-domain signal is divided into different time-frequency sub-bands; within each time-frequency sub-band, a first statistical characteristic of the first multi-channel sound signal is calculated; a PCA mapping model is estimated according to the first statistical characteristic; the first multi-channel sound signal is mapped to a second multi-channel sound signal using the PCA mapping model; and, according to time, frequency, and channel, at least one group of the second multi-channel sound signal and the PCA mapping model are perceptually encoded and multiplexed into an encoded multi-channel code stream. It can thus be seen that the embodiments of the present invention specifically use MDCT or MDST for time-frequency mapping and, when estimating the mapping model from statistical characteristics, specifically select a PCA mapping model. Because MDCT and MDST have very good audio compression characteristics, and the mapping matrix vectors of the PCA model are mutually orthogonal, the multi-channel signal components can be concentrated on as few channels as possible, which helps reduce the dimensionality of the encoded signal at lower code rates; the statistical redundancy between channels can therefore be minimized, achieving higher coding efficiency while guaranteeing the quality of the encoded signal.
Brief description of the drawings

FIG. 1 is a flowchart of a multi-channel sound signal encoding method according to an embodiment of the present invention;

FIG. 2 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention;

FIG. 3 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention;

FIG. 4 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention;

FIG. 5 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention;

FIG. 6 is a flowchart of a multi-channel sound signal decoding method according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of a multi-channel sound signal encoding apparatus according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of a multi-channel sound signal decoding apparatus according to an embodiment of the present invention.
Detailed description of the embodiments

The technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.

Unlike other methods in the prior art, the multi-channel sound signal encoding method of the embodiments of the present invention makes full use of both the statistical characteristics and the psychoacoustic characteristics of the multi-channel sound signal, guaranteeing the quality of the encoded signal while obtaining extremely high coding efficiency. When performing adaptive subspace mapping, the principal component analysis (PCA) method is adopted, which better estimates and exploits the statistical characteristics of the inter-channel signals, minimizes the statistical redundancy between channels, and achieves higher coding efficiency. In particular, for multi-channel sound codecs using MDCT or MDST, the embodiments of the present invention apply the PCA mapping method in the MDCT/MDST domain to eliminate the statistical redundancy of the multi-channel signal and concentrate the multi-channel signal on as few channels as possible.
FIG. 1 is a flowchart of a multi-channel sound signal encoding method according to an embodiment of the present invention, the method including:

Step 101: Map the first multi-channel sound signal to a first frequency-domain signal using the modified discrete cosine transform (MDCT) or the modified discrete sine transform (MDST).

The first multi-channel sound signal initially takes the form of a time-domain signal u(m,t); through the above mapping, the multi-channel frequency-domain signal x(m,k) is obtained, where m is the channel number, t is the frame (or sub-frame) number, and k is the frequency number.

Step 102: Divide the first frequency-domain signal into different time-frequency sub-bands.

In this embodiment, if the first frequency-domain signal obtained in step 101 is x(m,k), x(m,k) may be divided into different time-frequency sub-bands x_i(t,k), where m is the channel number, i is the sub-band number, t is the frame (or sub-frame) number, and k is the frequency number.

Before step 101, the multi-channel sound signal to be encoded may first be divided into frames to be encoded, and the MDCT/MDST then applied. If a larger frame length is used, a frame of data may be further decomposed into multiple sub-frames before the MDCT/MDST is applied. After the frequency-domain signal is obtained through the MDCT/MDST, multiple frequency sub-bands may be formed in frequency order; alternatively, the frequency-domain signals obtained from multiple MDCT/MDST transforms may be assembled into a two-dimensional time-frequency plane, and time-frequency regions partitioned in this plane to obtain the time-frequency sub-bands to be encoded. Further, by projecting such a time-frequency region onto the time-frequency plane of each channel, the time-frequency sub-band x_i(t,k) to be encoded is obtained, where i is the sub-band number and t is the frame (or sub-frame) number. Assuming each time-frequency sub-band is a rectangular region, the signal range within the sub-band x_i(t,k) is t_{i-1} ≤ t < t_i, k_{i-1} ≤ k < k_i, where t_{i-1} and t_i are the start and end frame (or sub-frame) numbers of the sub-band, and k_{i-1} and k_i are its start and end frequencies or sub-band numbers. If the total number of time-frequency sub-bands is N, then i ≤ N. For convenience, the region of a time-frequency sub-band may be denoted (t,k). Note that every time-frequency sub-band contains the signals projected by all channels onto that time-frequency region; when the projection of a particular channel onto the region must be specified, it is denoted x_i(t,k,m).
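The rectangular sub-band projection described above can be sketched as plain array indexing (an illustrative sketch only; the frames × bins × channels array layout and the function name are our assumptions, not the patent's):

```python
# Illustrative sketch: extracting the rectangular time-frequency sub-band
# x_i(t, k, m) with t0 <= t < t1 and k0 <= k < k1 from a stack of MDCT spectra.
import numpy as np

def subband(x, t0, t1, k0, k1):
    """Project the region [t0, t1) x [k0, k1) of every channel's spectrum."""
    return x[t0:t1, k0:k1, :]

frames, bins, channels = 8, 16, 2
x = np.arange(frames * bins * channels, dtype=float).reshape(frames, bins, channels)
x_i = subband(x, t0=2, t1=4, k0=0, k1=8)
assert x_i.shape == (2, 8, 2)   # 2 frames, 8 bins, projections of both channels
```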
Step 103: Within each of the different time-frequency sub-bands, calculate a first statistical characteristic of the first multi-channel sound signal.

Step 104: Estimate the PCA mapping model according to the first statistical characteristic.

Specifically, the mapping coefficients of the PCA mapping model may be adaptively adjusted according to the first statistical characteristic.

The first statistical characteristic in this embodiment may be a first-order statistic (mean), second-order statistics (variance and correlation coefficient), higher-order statistics (higher-order moments), or a transformed form thereof; second-order statistics are usually chosen. Preferably, a second-order statistic, such as the covariance matrix, may be used as the first statistical characteristic when estimating the PCA mapping model.

Step 105: Map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model.

Specifically, within the different time-frequency sub-bands, the statistical characteristics of the multi-channel sound signal x_i(t,k) may be calculated and the optimized subspace mapping model W_i(t,k) estimated; using the estimated mapping model, the multi-channel signal is mapped into the new subspace, yielding a new set of multi-channel signals z_i(t,k).

Step 106: According to time, frequency, and channel, perceptually encode at least one group of the second multi-channel sound signals and the PCA mapping model, and multiplex them into an encoded multi-channel code stream.

Specifically, at least one new group of multi-channel signals z_i(t,k) and the corresponding mapping models W_i(t,k) may be perceptually encoded and multiplexed into the encoded multi-channel code stream.

The above perceptual coding may specifically be layered (scalable) perceptual coding.
From the above processing it can be seen that, in the multi-channel sound signal encoding method of this embodiment, the first multi-channel sound signal is first mapped to a first frequency-domain signal using MDCT or MDST; the first frequency-domain signal is divided into different time-frequency sub-bands; within each time-frequency sub-band, a first statistical characteristic of the first multi-channel sound signal is calculated; a PCA mapping model is estimated according to the first statistical characteristic; the first multi-channel sound signal is mapped to a second multi-channel sound signal using the PCA mapping model; and at least one group of the second multi-channel sound signal and the PCA mapping model are perceptually encoded according to time, frequency, and channel and multiplexed into an encoded multi-channel code stream. Because MDCT and MDST have very good audio compression characteristics, and the mapping matrix vectors of the PCA model are mutually orthogonal, the multi-channel signal components can be concentrated on as few channels as possible, which helps reduce the dimensionality of the encoded signal at lower code rates; the statistical redundancy between channels can therefore be minimized, achieving higher coding efficiency while guaranteeing the quality of the encoded signal.

Considering that in a multi-channel sound signal the sound components of some channels may differ markedly from those of other channels, such channels can be grouped separately, and with the above method the optimized mapping model is then extracted more accurately. Therefore, when encoding such multi-channel sound signals, a channel grouping step may also be added to improve coding efficiency.

FIG. 2 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention, in which a channel grouping step is added after the time-frequency mapping of the multi-channel sound signal; the method includes:
Step 201: Map the first multi-channel sound signal to a first frequency-domain signal using MDCT or MDST.

Step 202: Divide the first frequency-domain signal into different time-frequency sub-bands.

The sound signal to be encoded may first be divided into frames and then time-frequency transformed; if a larger frame length is used, a frame of data may be further decomposed into multiple sub-frames before the time-frequency transform. After the frequency-domain signal is obtained, multiple frequency sub-bands may be formed in frequency order; alternatively, the frequency-domain signals obtained from multiple time-frequency transforms may be assembled into a two-dimensional time-frequency plane and time-frequency regions partitioned in this plane to obtain the time-frequency sub-bands to be encoded.

Step 203: Within each of the different time-frequency sub-bands, calculate a second statistical characteristic of the first multi-channel sound signal, and divide the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic.

In this embodiment, the statistical characteristics of the multi-channel sound signal x_i(t,k) are calculated within the different time-frequency sub-bands; according to the statistical characteristics of the sound components of each channel, the multi-channel signal is divided into one or more channel groups, each containing at least one channel signal; a group containing a single channel is perceptually encoded directly, while a group containing more than one channel undergoes the subsequent processing.

The second statistical characteristic of the present invention may be a first-order statistic (mean), second-order statistics (variance and correlation coefficient), higher-order statistics (higher-order moments), or transformed forms thereof; second-order statistics, especially the correlation coefficient, are usually chosen. To save computation, the first statistical characteristic may also serve as the grouping criterion, in which case the second statistical characteristic may take the same value as the first.
When grouping the multi-channel sound signal according to statistical characteristics, the grouping mode can be chosen flexibly as needed: either a fixed grouping or an adaptive grouping may be used. In this embodiment, x_i(t,k) is divided into L groups, where a channel group l contains M_l channels of x_i(t,k), which may be M_l consecutive channels of x_i(t,k) or any M_l non-consecutive channels of x_i(t,k). When an adaptive grouping method is used, the grouping information of each sub-band must be encoded and multiplexed into the code stream, and each time-frequency sub-band needs its own set of channel grouping information. There are many possible adaptive grouping algorithms; taking an algorithm based on inter-channel cross-correlation as an example, its main steps are:

1) calculate the covariance matrix C between the channel signals in the time-frequency sub-band x_i(t,k);

2) divide the multi-channel time-frequency sub-band x_i(t,k) into several groups according to the matrix C. Specifically, if the absolute value of the normalized covariance coefficient C(m,n) between two channels m and n is greater than a threshold, channels m and n are placed in the same channel group; otherwise they are placed in different groups. The grouping information of each sub-band includes the number of groups and the channel numbers contained in each group.
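The two-step grouping algorithm above can be sketched as follows (the threshold value and the union-find merging are illustrative choices of ours, not specified by the patent):

```python
# Illustrative sketch: channels m, n whose normalized covariance |C(m,n)|
# exceeds a threshold are merged into the same channel group.
import numpy as np

def group_channels(x, threshold=0.6):
    """x: channels x samples. Returns a list of channel-index groups."""
    C = np.corrcoef(x)                      # normalized covariance matrix
    m = x.shape[0]
    parent = list(range(m))
    def find(a):                            # tiny union-find for merging
        while parent[a] != a:
            a = parent[a]
        return a
    for i in range(m):
        for j in range(i + 1, m):
            if abs(C[i, j]) > threshold:
                parent[find(j)] = find(i)   # merge the two channels' groups
    groups = {}
    for ch in range(m):
        groups.setdefault(find(ch), []).append(ch)
    return sorted(groups.values())

rng = np.random.default_rng(1)
a = rng.standard_normal(256)
b = rng.standard_normal(256)
x = np.vstack([a, a + 0.1 * rng.standard_normal(256), b])  # ch0~ch1; ch2 apart
assert group_channels(x) == [[0, 1], [2]]
```

In the patent's scheme, what is transmitted per sub-band is then just the number of groups and the channel numbers in each group.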
For each grouped sound signal divided in step 203, steps 204 to 207 are performed with that grouped sound signal as the first multi-channel sound signal.

Step 204: Within each of the different time-frequency sub-bands, calculate a first statistical characteristic of the first multi-channel sound signal.

Step 205: Estimate the PCA mapping model according to the first statistical characteristic.

Step 206: Map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model.

In this embodiment, the PCA mapping model W_i(t,k) may be estimated according to the statistical characteristics of the sound components of each channel; using the estimated PCA mapping model, the multi-channel signal is mapped into the new subspace, yielding a new set of multi-channel signals z_i(t,k).

Step 207: According to time, frequency, and channel, perceptually encode at least one group of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model, and multiplex them into an encoded multi-channel code stream.

At least one new group of multi-channel signals z_i(t,k), the corresponding mapping models W_i(t,k), and the channel grouping information may be perceptually encoded, and all perceptually encoded information multiplexed to obtain the encoded multi-channel code stream.

Alternatively, particularly at lower code rates, the grouping may instead be performed after the time-frequency mapping of step 101 and before the sub-band division of step 102; this brings an obvious benefit, namely that less grouping information is transmitted, and at lower code rates reducing the bits occupied by grouping information is more practical. In this case, after step 101 is performed, the second statistical characteristic of the first multi-channel sound signal is first calculated, the first multi-channel sound signal is then divided into multiple grouped sound signals according to the second statistical characteristic, and steps 102 to 106 are performed for each grouped sound signal as the first multi-channel sound signal.
FIG. 3 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention, in which the multi-channel sound signal is first grouped, and time-frequency mapping and subsequent processing are then performed on each grouped sound signal; the method includes:

Step 301: Calculate a third statistical characteristic of the first multi-channel sound signal, divide the first multi-channel sound signal into multiple grouped sound signals according to the third statistical characteristic, and encode the channel grouping information and multiplex it into the encoded multi-channel code stream.

In this embodiment, the statistical characteristics of the multi-channel sound signal u(m,t) may be calculated, and the multi-channel sound signal divided into one or more channel groups according to the statistical characteristics, each group containing at least one channel signal, where m is the channel number and t is the frame (or sub-frame) number.

The third statistical characteristic may be a first-order statistic (mean), second-order statistics (variance and correlation coefficient), higher-order statistics (higher-order moments), or transformed forms thereof; second-order statistics, especially the correlation coefficient, are usually chosen.

When dividing the multi-channel signal u(m,t) into one or more channel groups u_l(m,t) (l being the number of the channel group) according to statistical characteristics, the grouping mode can be chosen flexibly: either a fixed grouping or an adaptive grouping may be used. For example, the channel group u_l(m,t) contains M_l channels of u(m,t), which may be M_l consecutive channels of u(m,t) or any M_l non-consecutive channels of u(m,t). When an adaptive grouping method is used, the grouping information must be encoded and multiplexed into the code stream; in this case only one set of grouping information is needed per frame of signal. There are many possible adaptive grouping algorithms; taking an algorithm based on inter-channel cross-correlation as an example, its main steps are:

1) calculate the covariance matrix C between the channel signals in the multi-channel signal u(m,t);

2) divide the multi-channel signal u(m,t) into several groups according to the matrix C. Specifically, if the absolute value of the normalized covariance coefficient C(m,n) between two channels m and n is greater than a threshold, channels m and n are placed in the same channel group; otherwise they are placed in different groups.
For each grouped sound signal, steps 302 to 307 are performed with that grouped sound signal as the first multi-channel sound signal.

Step 302: Map the first multi-channel sound signal to a first frequency-domain signal using MDCT or MDST.

Step 303: Divide the first frequency-domain signal into different time-frequency sub-bands.

Using MDCT or MDST, the grouped multi-channel time-domain signal u_l(m,t) is mapped to the multi-channel frequency-domain signal x(m,k), and the time-frequency mapped signal is divided into different time-frequency sub-bands x_i(t,k), where i is the sub-band number and t is the frame (or sub-frame) number.

Step 304: Within each of the different time-frequency sub-bands, calculate a first statistical characteristic of the first multi-channel sound signal.

Step 305: Estimate the PCA mapping model according to the first statistical characteristic.

This embodiment uses adaptive subspace mapping to estimate the optimized subspace mapping model. Unlike existing multi-channel sound coding methods, it innovatively adopts a subspace mapping method: according to the statistical characteristics of the signal, the optimized multi-channel subspace mapping model, an adaptive linear transformation matrix, is estimated; the subspace mapping method specifically adopts the PCA mapping method developed in recent years.

Step 306: Map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model.

Within the different time-frequency sub-bands, the statistical characteristics of the multi-channel sound signal x_i(t,k) may be calculated and the PCA mapping model W_i(t,k) estimated; using the estimated mapping model, the multi-channel signal is mapped into the new subspace, yielding a new set of multi-channel signals z_i(t,k).

Step 307: According to time, frequency, and channel, perceptually encode at least one group of the second multi-channel sound signals and the PCA mapping model, and multiplex them into an encoded multi-channel code stream.

At least one new group of multi-channel signals z_i(t,k) and the corresponding mapping models W_i(t,k) may be perceptually encoded, and all perceptually encoded information multiplexed to obtain the encoded multi-channel code stream.
The perceptual coding in this embodiment may adopt any of the following sound coding methods:

Waveform coding: such as the perceptual quantization and Huffman entropy coding used in MP3 and AAC, the exponent-mantissa coding used in AC-3, and the perceptual vector quantization coding used in Ogg Vorbis and TwinVQ;

Parameter coding: such as the harmonic, individual sinusoidal component, and noise coding used in MPEG HILN, the harmonic vector excitation coding used in MPEG HVXC, and the code excitation and transform coded excitation (TCX) coding used in AMR WB+;

Waveform-parameter hybrid coding: as in MP3Pro, AAC+, and AMR WB+, where the low band is waveform-coded and the high band is coded with band-extension parameters.

The adaptive subspace mapping in this embodiment adopts the PCA mapping model and adaptively adjusts the mapping coefficients of the PCA model according to the inter-channel statistical characteristics.

The adaptive subspace mapping strategy of the present invention is significant for achieving the object of the present invention, namely obtaining extremely high coding efficiency when encoding a multi-channel signal while guaranteeing the quality of the encoded signal.
The subspace mapping model can be described as follows:

1. Original subspace mapping relation:

Let the M-dimensional source vector be s = {s_1, s_2, …, s_M},

and x = {x_1, x_2, …, x_M} the observation vector in the current subspace, with

x = As      (1)

where A is the current subspace mapping matrix.

2. New subspace mapping relation:

z = {z_1, z_2, …, z_M} is the observation vector in the new subspace, with

z = Wx      (2)

Further, the present invention may divide the MDCT/MDST-domain spectrum (i.e., the frequency-domain signal) into at least two sub-spectra of interleaved spectral lines. When divided into two sub-spectra, the MDCT/MDST spectrum is split into the odd-numbered sub-spectrum x^o_i(t,k) and the even-numbered sub-spectrum x^e_i(t,k), where x^o_i(t,k,m) = x_i(t,2k+1,m) and x^e_i(t,k,m) = x_i(t,2k,m). When divided into four sub-spectra, the above odd-numbered sub-spectrum may be further divided into the odd-odd sub-spectrum x^oo_i(t,k) and the odd-even sub-spectrum x^oe_i(t,k), and the even-numbered sub-spectrum further divided into the even-odd sub-spectrum x^eo_i(t,k) and the even-even sub-spectrum x^ee_i(t,k), where x^oo_i(t,k,m) = x_i(t,4k+1,m), x^oe_i(t,k,m) = x_i(t,4k+3,m), x^eo_i(t,k,m) = x_i(t,4k+2,m), and x^ee_i(t,k,m) = x_i(t,4k,m). After such division into sub-spectra, performing the above multi-channel coding can, to a certain extent, mitigate the distortion introduced by coding.
FIG. 4 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention, in which, after time-frequency mapping, the frequency-domain signal is first divided into multiple sub-spectra and time-frequency sub-bands are then divided for each sub-spectrum; the encoding method of the present invention then includes the following steps.

Step 401: Map the first multi-channel sound signal to a first frequency-domain signal using MDCT or MDST.

Using MDCT or MDST, the multi-channel time-domain sound signal u(m,t) may be mapped to the multi-channel frequency-domain signal x(m,k).

Step 402: Divide the first frequency-domain signal into multiple sub-spectra according to the parity of the sequence numbers in the first frequency-domain signal.

Step 403: Divide the first frequency-domain signal into different time-frequency sub-bands.

In this embodiment, a time-frequency sub-band spans all sub-spectra; specifically, it may include the odd spectrum and the even spectrum, and the first multi-channel sound signal may be denoted x_i(t,k).

Step 404: Within each of the different time-frequency sub-bands, calculate a second statistical characteristic of the first multi-channel sound signal, and divide the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic.

For each grouped sound signal, steps 405 to 408 are performed with that grouped sound signal as the first multi-channel sound signal.

Step 404 is optional; that is, the grouping process may be omitted in this embodiment.
Step 405: Within each of the different time-frequency sub-bands, calculate a first statistical characteristic of the first multi-channel sound signal.

Step 406: Estimate the PCA mapping model according to the first statistical characteristic.

Step 407: Map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model.

Within the different time-frequency sub-bands, the statistical characteristics of the multi-channel sound signal x_i(t,k) may be calculated and the PCA mapping model W_i(t,k) estimated; using the estimated mapping model, the multi-channel signal is mapped into the new subspace, yielding a new set of multi-channel signals z_i(t,k).

Step 408: According to time, frequency, and channel, perceptually encode at least one group of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model, and multiplex them into an encoded multi-channel code stream.

Specifically, at least one new group of multi-channel signals z_i(t,k), the corresponding mapping models W_i(t,k), and the channel grouping information may be perceptually encoded to obtain the encoded multi-channel code stream.

In this embodiment, when the grouping of step 404 is not performed, step 408 likewise does not include perceptual encoding of the channel grouping information.
FIG. 5 is a flowchart of a multi-channel sound signal encoding method according to another embodiment of the present invention, in which the multi-channel sound signal is first grouped, time-frequency mapping is performed on each grouped signal, the frequency-domain signal is then divided into multiple sub-spectra, and time-frequency sub-bands are divided for each sub-spectrum; the encoding method of the present invention then includes the following steps.

Step 501: Calculate a third statistical characteristic of the first multi-channel sound signal, divide the first multi-channel sound signal into multiple grouped sound signals according to the third statistical characteristic, and encode the channel grouping information and multiplex it into the encoded multi-channel code stream.

For each grouped sound signal, steps 502 to 508 are performed with that grouped sound signal as the first multi-channel sound signal.

Step 502: Map the first multi-channel sound signal to a first frequency-domain signal using MDCT or MDST.

Step 503: Divide the first frequency-domain signal into multiple sub-spectra according to the parity of the sequence numbers in the first frequency-domain signal.

Step 504: For each of the multiple sub-spectra, divide the sub-spectrum into different time-frequency sub-bands.

In this embodiment, a time-frequency sub-band spans all sub-spectra; specifically, it may include the odd spectrum and the even spectrum.

Step 505: Within each of the different time-frequency sub-bands, calculate a first statistical characteristic of the first multi-channel sound signal.

Step 506: Estimate the PCA mapping model according to the first statistical characteristic.

Step 507: Map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model.

Step 508: According to time, frequency, and channel, perceptually encode at least one group of the second multi-channel sound signals and the PCA mapping model, and multiplex them into an encoded multi-channel code stream.
This embodiment adopts PCA: according to the statistical characteristics of the signal, the multi-channel PCA mapping model, an adaptive linear transformation matrix, is estimated. The adaptive PCA subspace mapping strategy is significant for achieving the object of the present invention, namely obtaining extremely high coding efficiency when encoding a multi-channel signal while guaranteeing the quality of the encoded signal.

Let x = {x_1, x_2, …, x_M} be the observation vector in the current subspace

and z = {z_1, z_2, …, z_M} the observation vector in the new subspace, with

z = Wx      (1)

where W is the new subspace mapping matrix, and x, z are vectors of de-meaned scalar random variables.
The basic computation steps of the PCA model are as follows:

Step one: calculate the covariance matrix C of the observation vector x.

When PCA is applied to the i-th time-frequency sub-band x_i(t,k) of a group, M is the number of channels in the group, and x_i(t,k,m) corresponds to a set of sample points of the element x_m of the observation vector x (t_{i-1} ≤ t < t_i, k_{i-1} ≤ k < k_i, where t_{i-1} and t_i are the start and end frame (or sub-frame) numbers of the sub-band, and k_{i-1} and k_i are its start and end frequencies or sub-band numbers).

1) De-mean x_i(t,k,m);

2) if the time-frequency sub-band contains only one frame (or sub-frame), i.e., t_{i-1}+1 = t_i, the covariance matrix C can be computed as

C(m,n) = Σ_{k_{i-1}≤k<k_i} x_i(t,k,m)·x_i(t,k,n);

if the time-frequency sub-band contains multiple frames (or sub-frames), i.e., t_{i-1}+1 < t_i, C(m,n) can be computed as

C(m,n) = Σ_{t_{i-1}≤t<t_i} Σ_{k_{i-1}≤k<k_i} x_i(t,k,m)·x_i(t,k,n).

Alternatively, x_m = x_i(t,k,m) may first be converted into a one-dimensional vector and the computation then performed, i.e., x^e_m = V·x_i(t,k,m), with V the conversion matrix, and

C(m,n) = Σ_l x^e_m(l)·x^e_n(l).
Step two: compute the eigenvectors e_1, e_2, …, e_M and the eigenvalues λ_1, λ_2, …, λ_M of the covariance matrix, sorting the eigenvalues in descending order;

Step three: map the observation vector x into the space spanned by the eigenvectors to obtain the mapped vector z, i.e., z = Wx.

The mapping matrix vectors of the PCA model are mutually orthogonal, so the multi-channel signal components can be concentrated on as few channels as possible, which helps reduce the dimensionality of the encoded signal at lower code rates.
The perceptual coding of the present invention is divided into coding of the multi-channel sound signal z(m,k) and coding of the corresponding mapping model W(m,k). The multi-channel sound signal z(m,k) may be encoded with any of the following sound coding methods:

waveform coding, such as the perceptual quantization and Huffman entropy coding used in MP3 and AAC, the exponent-mantissa coding used in AC-3, and the perceptual vector quantization coding used in Ogg Vorbis and TwinVQ;

parameter coding, such as the harmonic, individual sinusoidal component, and noise coding used in MPEG HILN, the harmonic vector excitation coding used in MPEG HVXC, and the code excitation and transform coded excitation (TCX) coding used in AMR WB+;

waveform-parameter hybrid coding, as in MP3Pro, AAC+, and AMR WB+, where the low band is waveform-coded and the high band is coded with band-extension parameters.

The mapping model coding may encode the corresponding mapping matrix (i.e., the eigenvectors), may encode another transformed form of the model, or may directly encode the covariance matrix from which the mapping matrix is computed. The mapping model may be encoded with well-known methods such as scalar quantization, vector quantization, and predictive coding; entropy coding (such as Huffman or arithmetic coding) may also be used to further improve coding efficiency. For example, when the frequency-domain signal is divided into odd and even sub-spectra (or multiple sub-spectra), the mapping matrix of the odd spectrum and that of the even spectrum are correlated, i.e., redundant; redundancy also exists between the mapping matrices of adjacent frequency bands, and exploiting this redundant information can improve coding efficiency, for instance by jointly vector-coding the mapping matrix of an odd-spectrum sub-band and that of the adjacent even-spectrum sub-band.

In the perceptual coding of this embodiment, at least one new group of multi-channel signals and the corresponding mapping model are perceptually encoded. The encoded signal components and the corresponding mapping model parameters may be selected according to the current target code rate and the perceptual importance of the new multi-channel signals.
The adaptive subspace mapping and perceptual coding method of the present invention can also provide scalable coding: the multi-channel sound signal is encoded only once to obtain a single sound code stream, which then supports transmission and decoding at multiple code rates and quality levels, thereby serving the different application needs of multiple types of users. When scalable coding is supported, the perceptual coding module can be further decomposed into the following steps:

Step one: select the most important group or groups of signals and the corresponding mapping models for perceptual coding, with the code rate of this partial stream not exceeding the base-layer rate constraint;

Step two: select the second most important group or groups of signals and the corresponding mapping models for perceptual coding, with the code rate of this partial stream not exceeding the first enhancement-layer rate constraint;

Step three: select the third most important group or groups of signals and the corresponding mapping models for perceptual coding, with the code rate of this partial stream not exceeding the second enhancement-layer rate constraint;

Step four: continue in this manner until lossless coding is achieved, obtaining an N-layer code stream;

Step five: multiplex all N layers of code streams into one compressed stream.

In scalable coding applications, the compressed stream re-assembled from the scalable code stream according to a service request shall include at least the base-layer code stream; at higher code rates, enhancement-layer code streams may be multiplexed in order of importance.
FIG. 6 is a flowchart of a multi-channel sound signal decoding method according to an embodiment of the present invention, the method including:

Step 601: Decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals and the PCA mapping model.

Step 602: Map the second multi-channel sound signal back to the first multi-channel sound signal using the PCA mapping model.

Step 603: Using the inverse modified discrete cosine transform (IMDCT) or the inverse modified discrete sine transform (IMDST), map the first multi-channel sound signal from the frequency domain to the time domain.

When the first multi-channel sound signal consists of multiple grouped sound signals in the frequency domain, before step 603 the method may further include: decoding the channel grouping information in the code stream to obtain decoded channel grouping information; restoring the multiple grouped sound signals into groups according to the decoded channel grouping information to obtain a third multi-channel sound signal; and performing step 603 with the third multi-channel sound signal as the first multi-channel sound signal.

In this embodiment, when the first multi-channel sound signal consists of multiple grouped sound signals in the time domain, step 601 may further include decoding the encoded multi-channel code stream to obtain the decoded channel grouping information, and after step 603 the method may further include restoring, according to the decoded channel grouping information, the multiple grouped sound signals into groups to obtain a fourth multi-channel sound signal.

When the first multi-channel sound signal consists of multiple sub-spectra in the frequency domain, before step 603 the method may further include: restoring the multiple sub-spectra of each channel to a natural-order frequency-domain signal, and performing step 603 with that natural-order frequency-domain signal as the first multi-channel sound signal.

In addition, before step 601, the method may further include: demultiplexing the encoded multi-channel code stream to obtain multiple layered code streams, and performing step 601 for each layered code stream as the encoded multi-channel code stream; after step 601 has been performed on all layered code streams, steps 602 and 603 are performed once on the combined result.
FIG. 7 is a schematic structural diagram of a multi-channel sound signal encoding apparatus according to an embodiment of the present invention, the apparatus including:

a time-frequency mapping unit 701, configured to map the first multi-channel sound signal to a first frequency-domain signal using MDCT or MDST, and to divide the first frequency-domain signal or the first sub-band signal into different time-frequency sub-bands;

an adaptive subspace mapping unit 702, configured to calculate, in each of the different time-frequency sub-bands divided by the time-frequency mapping unit 701, a first statistical characteristic of the first multi-channel sound signal, estimate a PCA mapping model according to the first statistical characteristic, and map the first multi-channel sound signal to a second multi-channel sound signal using the PCA mapping model;

a perceptual coding unit 703, configured to perceptually encode, according to time, frequency, and channel, at least one group of the second multi-channel sound signals mapped by the adaptive subspace mapping unit 702 together with the PCA mapping model, and to multiplex them into an encoded multi-channel code stream.
Preferably, the apparatus further includes:

a first channel grouping unit, configured to, before the adaptive subspace mapping unit 702 calculates the first statistical characteristic of the first multi-channel sound signal in each of the different time-frequency sub-bands, calculate a second statistical characteristic of the first multi-channel sound signal in each time-frequency sub-band divided by the time-frequency mapping unit 701, and divide the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic;

the adaptive subspace mapping unit 702 and the perceptual coding unit 703 are specifically configured to process each grouped sound signal divided by the first channel grouping unit as the first multi-channel sound signal, and the perceptual coding unit 703 is further configured to perceptually encode the channel grouping information.

Preferably, the apparatus further includes:

a second channel grouping unit, configured to, before the time-frequency mapping unit 701 maps the first multi-channel sound signal to the first frequency-domain signal using MDCT or MDST, calculate a third statistical characteristic of the first multi-channel sound signal, divide the first multi-channel sound signal into multiple grouped sound signals according to the third statistical characteristic, and perceptually encode the channel grouping information;

the time-frequency mapping unit 701, the adaptive subspace mapping unit 702, and the perceptual coding unit 703 are specifically configured to process each grouped sound signal divided by the second channel grouping unit as the first multi-channel sound signal.

Preferably, the apparatus further includes:

a sub-spectrum dividing unit, configured to, before the time-frequency mapping unit 701 divides the first frequency-domain signal into different time-frequency sub-bands, divide the first frequency-domain signal into multiple sub-spectra according to the parity of the sequence numbers in the first frequency-domain signal;

the time-frequency mapping unit 701, the adaptive subspace mapping unit 702, and the perceptual coding unit 703 are specifically configured to process each of the multiple sub-spectra divided by the sub-spectrum dividing unit as the first frequency-domain signal.
FIG. 8 is a schematic structural diagram of a multi-channel sound signal decoding apparatus according to an embodiment of the present invention, the apparatus including:

a perceptual decoding unit 801, configured to decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals and the PCA mapping model;

a subspace inverse mapping unit 802, configured to map the second multi-channel sound signal obtained by the perceptual decoding unit 801 back to the first multi-channel sound signal using the PCA mapping model obtained by the perceptual decoding unit 801;

a frequency-time mapping unit 803, configured to map the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 from the frequency domain to the time domain using IMDCT or IMDST.
Preferably, the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 consists of multiple grouped sound signals in the frequency domain;

the perceptual decoding unit 801 is specifically configured to decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model;

the apparatus further includes:

a first group restoring unit, configured to, before the frequency-time mapping unit 803 uses IMDCT or IMDST to map the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 from the frequency domain to the time domain, restore the multiple grouped sound signals into groups according to the decoded channel grouping information to obtain a third multi-channel sound signal;

the frequency-time mapping unit 803 is specifically configured to process the third multi-channel sound signal obtained by the first group restoring unit as the first multi-channel sound signal.
Preferably, the first multi-channel sound signal after the mapping processing by the frequency-time mapping unit 803 consists of multiple grouped sound signals in the time domain;

the perceptual decoding unit 801 is specifically configured to decode the encoded multi-channel code stream to obtain at least one group of the second multi-channel sound signals, the channel grouping information, and the PCA mapping model;

the apparatus further includes:

a second group restoring unit, configured to, after the frequency-time mapping unit 803 uses IMDCT or IMDST to map the first multi-channel sound signal obtained by the subspace inverse mapping unit 802 from the frequency domain to the time domain, restore the multiple grouped sound signals into groups according to the channel grouping information to obtain a fourth multi-channel sound signal.
较佳地,所述子空间逆映射单元802获得的第一多声道声音信号在频域为多个子频谱,所述装置还包括:
子频谱恢复单元,用于所述频时映射单元803采用IMDCT或IMDST,将所述第一多声道声音信号从频域映射为时域之前,将所述子空间逆映射单元802获得的第一多声道声音信号中每个声道的多个子频谱恢复成自然顺序的频域信号;
所述频时映射单元803具体用于,将所述自然顺序的频域信号作为所述第一多声道声音信号进行处理。
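For reference, the frequency-time mapping pair (MDCT at the encoder, IMDCT with overlap-add at the decoder) can be sketched per channel as below. The sine window, the 50% frame overlap and the 2/N normalization are common textbook choices shown here as assumptions; the application itself only names MDCT/MDST and their inverses without fixing these details.

```python
import numpy as np

def _mdct_basis(N):
    # cos(pi/N * (n + 0.5 + N/2) * (k + 0.5)), shape (2N, N)
    n = np.arange(2 * N)[:, None]
    k = np.arange(N)[None, :]
    return np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))

def mdct(frame, window):
    """Map one windowed 2N-sample frame to N spectral coefficients."""
    N = len(frame) // 2
    return (window * frame) @ _mdct_basis(N)

def imdct(coeffs, window):
    """Map N coefficients back to 2N windowed time samples for overlap-add."""
    N = len(coeffs)
    return window * ((2.0 / N) * (_mdct_basis(N) @ coeffs))
```

With the sine window w[n] = sin(pi (n + 0.5) / 2N) and frames hopped by N samples, overlap-adding the IMDCT outputs cancels the time-domain aliasing so that the interior of the signal is reconstructed exactly (the TDAC property).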
Those skilled in the art should further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of their functions. Whether these functions are executed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The specific embodiments described above further detail the objectives, technical solutions and beneficial effects of the present invention. It should be understood that the foregoing is merely specific embodiments of the present invention and is not intended to limit its protection scope; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (16)

  1. A multi-channel sound signal encoding method, characterized in that the method comprises:
    A) mapping a first multi-channel sound signal into a first frequency-domain signal using a modified discrete cosine transform (MDCT) or a modified discrete sine transform (MDST);
    B) dividing the first frequency-domain signal into different time-frequency subbands;
    C) computing, within each of the different time-frequency subbands, a first statistical characteristic of the first multi-channel sound signal;
    D) estimating a principal component analysis (PCA) mapping model according to the first statistical characteristic;
    E) mapping the first multi-channel sound signal into a second multi-channel sound signal using the PCA mapping model;
    F) perceptually encoding, according to time, frequency and channel, at least one group of the second multi-channel sound signal together with the PCA mapping model, and multiplexing them into an encoded multi-channel bitstream.
  2. The method according to claim 1, characterized in that, before computing the first statistical characteristic of the first multi-channel sound signal within each of the different time-frequency subbands, the method further comprises:
    computing, within each of the different time-frequency subbands, a second statistical characteristic of the first multi-channel sound signal; and dividing the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic;
    performing, for each grouped sound signal, steps C) to F) with the grouped sound signal as the first multi-channel sound signal;
    step F) specifically comprises: perceptually encoding, according to time, frequency and channel, at least one group of the second multi-channel sound signal, the channel grouping information and the PCA mapping model, and multiplexing them into an encoded multi-channel bitstream.
  3. The method according to claim 1, characterized in that, before mapping the first multi-channel sound signal into the first frequency-domain signal using the modified discrete cosine transform (MDCT) or the modified discrete sine transform (MDST), the method further comprises:
    computing a third statistical characteristic of the first multi-channel sound signal; dividing the first multi-channel sound signal into multiple grouped sound signals according to the third statistical characteristic; and encoding the channel grouping information and multiplexing it into the encoded multi-channel bitstream;
    performing, for each grouped sound signal, steps A) to F) with the grouped sound signal as the first multi-channel sound signal.
  4. The method according to any one of claims 1 to 3, characterized in that, before dividing the first frequency-domain signal into different time-frequency subbands, the method further comprises:
    dividing the first frequency-domain signal into multiple sub-spectra according to the parity of the coefficient indices in the first frequency-domain signal;
    performing, for each of the multiple sub-spectra, steps B) to F) with the sub-spectrum as the first frequency-domain signal.
  5. A multi-channel sound signal encoding device, characterized in that the device comprises:
    a time-frequency mapping unit, configured to map a first multi-channel sound signal into a first frequency-domain signal using a modified discrete cosine transform (MDCT) or a modified discrete sine transform (MDST), and to divide the first frequency-domain signal into different time-frequency subbands;
    an adaptive subspace mapping unit, configured to compute a first statistical characteristic of the first multi-channel sound signal within each of the time-frequency subbands divided by the time-frequency mapping unit; estimate a principal component analysis (PCA) mapping model according to the first statistical characteristic; and map the first multi-channel sound signal into a second multi-channel sound signal using the PCA mapping model;
    a perceptual encoding unit, configured to perceptually encode, according to time, frequency and channel, at least one group of the second multi-channel sound signal mapped by the adaptive subspace mapping unit together with the PCA mapping model, and to multiplex them into an encoded multi-channel bitstream.
  6. The device according to claim 5, characterized in that it further comprises:
    a first channel grouping unit, configured to, before the adaptive subspace mapping unit computes the first statistical characteristic of the first multi-channel sound signal within each time-frequency subband, compute a second statistical characteristic of the first multi-channel sound signal within each of the time-frequency subbands divided by the time-frequency mapping unit, and divide the first multi-channel sound signal into multiple grouped sound signals according to the second statistical characteristic;
    the adaptive subspace mapping unit and the perceptual encoding unit are specifically configured to process each grouped sound signal divided by the first channel grouping unit as the first multi-channel sound signal, and the perceptual encoding unit is further configured to perceptually encode the channel grouping information.
  7. The device according to claim 5, characterized in that it further comprises:
    a second channel grouping unit, configured to, before the time-frequency mapping unit maps the first multi-channel sound signal into the first frequency-domain signal using the modified discrete cosine transform (MDCT) or the modified discrete sine transform (MDST), compute a third statistical characteristic of the first multi-channel sound signal, divide the first multi-channel sound signal into multiple grouped sound signals according to the third statistical characteristic, and perceptually encode the channel grouping information;
    the time-frequency mapping unit, the adaptive subspace mapping unit and the perceptual encoding unit are specifically configured to process each grouped sound signal divided by the second channel grouping unit as the first multi-channel sound signal.
  8. The device according to claim 5, characterized in that it further comprises:
    a sub-spectrum dividing unit, configured to, before the time-frequency mapping unit divides the first frequency-domain signal into different time-frequency subbands, divide the first frequency-domain signal into multiple sub-spectra according to the parity of the coefficient indices in the first frequency-domain signal;
    the adaptive subspace mapping unit and the perceptual encoding unit are specifically configured to process each of the multiple sub-spectra divided by the sub-spectrum dividing unit as the first frequency-domain signal.
  9. A multi-channel sound signal decoding method, characterized in that the method comprises:
    A) decoding an encoded multi-channel bitstream to obtain at least one group of a second multi-channel sound signal and a principal component analysis (PCA) mapping model;
    B) mapping the second multi-channel sound signal back into a first multi-channel sound signal using the PCA mapping model;
    C) mapping the first multi-channel sound signal from the frequency domain into the time domain using an inverse modified discrete cosine transform (IMDCT) or an inverse modified discrete sine transform (IMDST).
  10. The method according to claim 9, characterized in that the first multi-channel sound signal consists of multiple grouped sound signals in the frequency domain; before mapping the first multi-channel sound signal from the frequency domain into the time domain using the IMDCT or IMDST, the method further comprises:
    decoding the channel grouping information in the bitstream to obtain decoded channel grouping information; and restoring the multiple grouped sound signals into their original grouping according to the decoded channel grouping information, to obtain a third multi-channel sound signal;
    performing step C) with the third multi-channel sound signal as the first multi-channel sound signal.
  11. The method according to claim 9, characterized in that the first multi-channel sound signal consists of multiple grouped sound signals in the time domain;
    step A) further comprises: decoding the encoded multi-channel bitstream to obtain decoded channel grouping information;
    after mapping the first multi-channel sound signal from the frequency domain into the time domain using the IMDCT or IMDST, the method further comprises:
    restoring the multiple grouped sound signals into their original grouping according to the decoded channel grouping information, to obtain a fourth multi-channel sound signal.
  12. The method according to claim 9, characterized in that the first multi-channel sound signal consists of multiple sub-spectra in the frequency domain; before mapping the first multi-channel sound signal from the frequency domain into the time domain using the IMDCT or IMDST, the method further comprises:
    restoring the multiple sub-spectra of each channel into a frequency-domain signal in natural order;
    performing step C) with the natural-order frequency-domain signal as the first multi-channel sound signal.
  13. A multi-channel sound signal decoding device, characterized in that the device comprises:
    a perceptual decoding unit, configured to decode an encoded multi-channel bitstream to obtain at least one group of a second multi-channel sound signal and a principal component analysis (PCA) mapping model;
    a subspace inverse mapping unit, configured to map the second multi-channel sound signal obtained by the perceptual decoding unit back into a first multi-channel sound signal using the PCA mapping model obtained by the perceptual decoding unit;
    a frequency-time mapping unit, configured to map the first multi-channel sound signal obtained by the subspace inverse mapping unit from the frequency domain into the time domain using an inverse modified discrete cosine transform (IMDCT) or an inverse modified discrete sine transform (IMDST).
  14. The device according to claim 13, characterized in that the first multi-channel sound signal obtained by the subspace inverse mapping unit consists of multiple grouped sound signals in the frequency domain;
    the perceptual decoding unit is specifically configured to decode the encoded multi-channel bitstream to obtain at least one group of the second multi-channel sound signal, the channel grouping information and the PCA mapping model;
    the device further comprises:
    a first grouping restoration unit, configured to, before the frequency-time mapping unit maps the first multi-channel sound signal obtained by the subspace inverse mapping unit from the frequency domain into the time domain using the IMDCT or IMDST, restore the multiple grouped sound signals into their original grouping according to the decoded channel grouping information, to obtain a third multi-channel sound signal;
    the frequency-time mapping unit is specifically configured to process the third multi-channel sound signal obtained by the first grouping restoration unit as the first multi-channel sound signal.
  15. The device according to claim 13, characterized in that the first multi-channel sound signal mapped by the frequency-time mapping unit consists of multiple grouped sound signals in the time domain;
    the perceptual decoding unit is specifically configured to decode the encoded multi-channel bitstream to obtain at least one group of the second multi-channel sound signal, the channel grouping information and the PCA mapping model;
    the device further comprises:
    a second grouping restoration unit, configured to, after the frequency-time mapping unit maps the first multi-channel sound signal obtained by the subspace inverse mapping unit from the frequency domain into the time domain using the IMDCT or IMDST, restore the multiple grouped sound signals into their original grouping according to the channel grouping information, to obtain a fourth multi-channel sound signal.
  16. The device according to claim 13, characterized in that the first multi-channel sound signal obtained by the subspace inverse mapping unit consists of multiple sub-spectra in the frequency domain, and the device further comprises:
    a sub-spectrum restoration unit, configured to, before the frequency-time mapping unit maps the first multi-channel sound signal from the frequency domain into the time domain using the IMDCT or IMDST, restore the multiple sub-spectra of each channel of the first multi-channel sound signal obtained by the subspace inverse mapping unit into a frequency-domain signal in natural order;
    the frequency-time mapping unit is specifically configured to process the natural-order frequency-domain signal as the first multi-channel sound signal.
PCT/CN2014/095394 2014-08-15 2014-12-29 Multi-channel sound signal encoding method, decoding method and device WO2016023322A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410404895.5 2014-08-15
CN201410404895.5A CN105336334B (zh) 2014-08-15 Multi-channel sound signal encoding method, decoding method and device

Publications (1)

Publication Number Publication Date
WO2016023322A1 true WO2016023322A1 (zh) 2016-02-18

Family

ID=55286820

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/095394 WO2016023322A1 (zh) 2014-08-15 2014-12-29 Multi-channel sound signal encoding method, decoding method and device

Country Status (2)

Country Link
CN (1) CN105336334B (zh)
WO (1) WO2016023322A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023241205A1 (zh) * 2022-06-15 2023-12-21 腾讯科技(深圳)有限公司 Audio processing method and device, electronic apparatus, computer-readable storage medium, and computer program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101401152A (zh) * 2006-03-15 2009-04-01 法国电信公司 Device and method for encoding by principal component analysis of a multi-channel audio signal
CN101401151A (zh) * 2006-03-15 2009-04-01 法国电信公司 Device and method for scalable encoding of a multi-channel audio signal based on principal component analysis
US20110046946A1 (en) * 2008-05-30 2011-02-24 Panasonic Corporation Encoder, decoder, and the methods therefor
CN102947880A (zh) * 2010-04-09 2013-02-27 杜比国际公司 MDCT-based complex prediction stereo coding
CN103366750A (zh) * 2012-03-28 2013-10-23 北京天籁传音数字技术有限公司 Sound encoding/decoding device and method therefor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101490744B (zh) * 2006-11-24 2013-07-17 Lg电子株式会社 Method and apparatus for encoding and decoding object-based audio signals


Also Published As

Publication number Publication date
CN105336334B (zh) 2021-04-02
CN105336334A (zh) 2016-02-17

Similar Documents

Publication Publication Date Title
JP6641018B2 (ja) Apparatus and method for estimating an inter-channel time difference
KR101809592B1 (ko) Audio encoder, audio decoder and related methods using two-channel processing within an intelligent gap filling framework
US11908484B2 (en) Apparatus and method for generating an enhanced signal using independent noise-filling at random values and scaling thereupon
CN101253557A (zh) Stereo encoding device, stereo decoding device, and stereo encoding method
WO2016023323A1 (zh) Multi-channel sound signal encoding method, decoding method and device
JP4685165B2 (ja) Method for quantizing and dequantizing inter-channel level differences based on virtual sound source position information
WO2017206794A1 (zh) Method and device for extracting inter-channel phase difference parameters
WO2016023322A1 (zh) Multi-channel sound signal encoding method, decoding method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14899807

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14899807

Country of ref document: EP

Kind code of ref document: A1