CN101188878B - A space parameter quantification and entropy coding method for 3D audio signals and its system architecture - Google Patents

A space parameter quantification and entropy coding method for 3D audio signals and its system architecture Download PDF

Info

Publication number
CN101188878B
CN101188878B CN2007101686140A CN200710168614A
Authority
CN
China
Prior art keywords
spatial parameter
quantization
frequency
parameter
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2007101686140A
Other languages
Chinese (zh)
Other versions
CN101188878A (en)
Inventor
胡瑞敏
陈水仙
艾浩军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN2007101686140A priority Critical patent/CN101188878B/en
Publication of CN101188878A publication Critical patent/CN101188878A/en
Application granted granted Critical
Publication of CN101188878B publication Critical patent/CN101188878B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

The invention discloses a spatial parameter quantization and entropy coding method for stereo audio signals, and a system architecture therefor. First, the spatial parameters of the stereo audio signal are quantized by nonlinear scalar quantization via table lookup, using different quantization tables in different frequency bands. The quantization indices of the spatial parameters are then combined into a vector, which is differenced with the quantization index vector of the previous frame. Finally, the resulting difference quantization index vector is Huffman entropy coded using the Huffman code table corresponding to the current frequency band. The invention exploits the dependence of the auditory perception of spatial parameters on frequency band, as well as the correlation among the spatial parameters, effectively removing the subjective and objective redundancy of the spatial parameters; at the same coding rate the sound quality of spatial-parameter stereo coding is improved, and at the same coding quality the coding rate is reduced.

Description

Spatial parameter quantization and entropy coding method for stereo audio signals and system architecture therefor
Technical field
The present invention relates to a spatial parameter quantization and entropy coding method for stereo audio signals and a system architecture therefor, and belongs to the field of audio coding.
Background technology
Spatial parameter coding of stereo audio signals is an audio coding method that extracts stereo spatial information and represents it parametrically. The spatial parameters include the interchannel time difference (Interchannel Time Difference, ITD), the interchannel level difference (Interchannel Level Difference, ILD), and the interchannel coherence (Interchannel Coherence, IC). Because the parameter bit rate is usually far below the rate of coding each channel independently with traditional methods, a framework of traditional downmix coding plus stereo spatial parameter coding largely preserves the stereo image while greatly reducing the bit rate, so stereo spatial parameter coding has become an indispensable technique for low-bit-rate audio coding.
Existing stereo spatial parameter coding techniques include Binaural Cue Coding (BCC) proposed by C. Faller et al., Parametric Stereo (PS) proposed by J. Breebaart et al., and MPEG Surround, proposed jointly by several organizations. To further reduce the bit rate, these spatial parameter encoders all quantize and entropy-code the spatial parameters before transmitting them to the decoder. The quantization and entropy coding method adopted by BCC is the simplest: the ranges of ITD, ILD, and IC are first limited to ±800 μs, ±18 dB, and (0, 1), respectively, and the parameters are uniformly scalar quantized, with 7 quantization steps for ITD and ILD and 4 for IC; the differences between quantization index values of consecutive frames and of adjacent subbands are then computed; finally, both kinds of differences are coded with the same Huffman code table.
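The uniform scheme attributed to BCC above can be sketched as follows. This is an illustrative sketch only, not BCC's actual implementation: the ranges and level counts are taken from the text, while the nearest-level rounding rule is an assumption.

```python
def uniform_quantize(value, lo, hi, levels):
    """Clip value to [lo, hi] and return the index of the nearest of
    `levels` uniformly spaced quantization levels."""
    value = min(max(value, lo), hi)
    step = (hi - lo) / (levels - 1)
    return int(round((value - lo) / step))

# BCC-style ranges and level counts from the description above.
itd_idx = uniform_quantize(250e-6, -800e-6, 800e-6, 7)  # ITD in seconds
ild_idx = uniform_quantize(-5.0, -18.0, 18.0, 7)        # ILD in dB
ic_idx = uniform_quantize(0.62, 0.0, 1.0, 4)            # IC in (0, 1)
```

Uniform spacing is exactly the property criticized next: the same step size applies near the median plane (where the ear is most sensitive) and near the range endpoints (where it is least sensitive).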
However, spatial psychoacoustics shows that human perception of spatial parameters is nonlinear; for example, sensitivity to ITD and ILD is highest near the median plane. The uniform quantization used by BCC therefore runs into two extreme cases: first, near the median plane the quantization error exceeds the perceptual threshold of the human ear, causing serious degradation of sound quality; second, near the endpoint values the quantization error is far below the perceptual threshold, so more coded bits are consumed than necessary, raising the bit rate. PS and MPEG Surround therefore keep BCC's basic framework of scalar quantization plus Huffman coding, but replace the quantizer with nonlinear scalar quantization implemented by table lookup. Still, neither BCC's uniform quantization nor the single full-band quantization table of PS and MPEG Surround, which likewise causes quality loss or bit-rate increase, fully exploits the nonlinear frequency dependence of the human ear's perception of spatial parameters. In addition, the spatial parameters are mutually correlated; for example, ILD and ITD both depend on the spatial position of the sound source. Yet PS and MPEG Surround quantize each parameter separately, ignoring the redundancy between parameters and reducing coding efficiency to a certain extent.
Summary of the invention
To overcome the above deficiencies of the prior art, the invention provides a spatial parameter quantization and entropy coding method for stereo audio signals: a spatial-parameter stereo coding system combining frequency-dependent spatial parameter quantization with joint Huffman entropy coding, which can reduce the spatial parameter coding rate at the same sound quality, and improve sound quality at the same spatial parameter coding rate.
The technical scheme of the invention is as follows: a spatial parameter quantization and entropy coding method for stereo audio signals, comprising the following steps:
(1) Input the spatial parameters of the stereo audio signal — the interchannel time difference, interchannel level difference, and interchannel coherence — find the nonlinear quantization table matching the frequency corresponding to each spatial parameter, perform nonlinear scalar quantization, and obtain the quantization indices, where the nonlinear quantization tables are built as follows:
(1-a) According to the spatial hearing characteristics of the human ear and its different resolving power for spatial parameters, divide the full frequency range 20~16000 Hz into several bands;
(1-b) Using narrowband stereo test tones, at each test frequency the test system plays back over headphones a sequence in which the spatial parameter increases monotonically; the listener subjectively judges whether each point differs from the adjacent test point, and the result is recorded;
(1-c) Collect the test data of multiple listeners, determine the average just-noticeable difference (JND) value, and use it as the interval to obtain an initial spatial parameter quantization table for each band;
(2) Combine the quantization indices of the spatial parameters at the same frequency into one quantization index vector;
(3) Difference the quantization index vector of the current frame with that of the previous frame to obtain the difference quantization index vector;
(4) Find the Huffman code table matching the frequency corresponding to the difference quantization index vector, perform Huffman coding, and output the Huffman codeword, where the Huffman code tables are built as follows:
(4-a) Take a set of stereo signals that covers all signal types and is statistically representative as the test signal set;
(4-b) In each frequency band, compute for every signal in the test signal set the time offset at which the normalized correlation of the left and right channel signals is maximal, the normalized correlation value of the left and right channel signals, and the energy ratio of the left and right channel signals;
(4-c) Using the nonlinear quantization table of each spatial parameter described above, compute the quantization index of each spatial parameter;
(4-d) According to a given parameter selection scheme, combine the selected parameters into an index vector;
(4-e) Subtract the index vectors of two consecutive frames to obtain the difference index vector;
(4-f) Count the occurrences of the different difference index vectors to obtain their probability distribution;
(4-g) From this probability distribution, build the Huffman code table by the standard Huffman code table construction method.
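Steps (1)–(3) above can be sketched in code. The per-band reconstruction tables below are illustrative placeholders, not the patent's actual tables, which are derived from the listening tests of (1-a)–(1-c).

```python
# Hypothetical low-band reconstruction tables (placeholder values only).
ITD_TABLE = [-800e-6, -400e-6, -150e-6, 0.0, 150e-6, 400e-6, 800e-6]
ILD_TABLE = [-18.0, -10.0, -4.0, 0.0, 4.0, 10.0, 18.0]
IC_TABLE = [0.0, 0.4, 0.7, 1.0]

def quantize(value, table):
    """Step (1): nonlinear scalar quantization by nearest-level table lookup."""
    return min(range(len(table)), key=lambda i: abs(table[i] - value))

def index_difference(itd, ild, ic, prev):
    """Steps (2)-(3): form the quantization index vector and difference it
    with the previous frame's vector. Returns (diff_vector, current_vector);
    step (4) would then Huffman-code diff_vector with the band's code table."""
    cur = (quantize(itd, ITD_TABLE),
           quantize(ild, ILD_TABLE),
           quantize(ic, IC_TABLE))
    return tuple(c - p for c, p in zip(cur, prev)), cur
```

Because the spatial image of most material changes slowly, consecutive frames yield nearly identical index vectors, so the difference vectors cluster around (0, 0, 0) — which is what makes the subsequent Huffman coding effective.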
In the different frequency bands of the stereo audio signal, different spatial parameter quantization tables are used.
In the different frequency bands of the stereo audio signal, different Huffman code tables are used.
A system architecture for the above spatial parameter quantization and entropy coding method for stereo audio signals comprises at least a quantization table mapping module, a scalar nonlinear quantization module, a one-frame delay module, a spatial parameter difference module, a code table mapping module, and a Huffman coding module. The scalar nonlinear quantization module is connected to the inputs of the spatial parameter difference module and the one-frame delay module; the output of the one-frame delay module is connected to the spatial parameter difference module; the output of the spatial parameter difference module is connected to the Huffman coding module; the output of the quantization table mapping module is connected to the scalar nonlinear quantization module; the code table mapping module is connected to the audio signal, and its output is connected to the Huffman coding module. The quantization table mapping module takes as input the frequency corresponding to the current spatial parameter of the stereo audio signal and outputs the label of the matching spatial parameter quantization table; the scalar nonlinear quantization module takes as input the matched spatial parameters of the stereo audio signal and outputs their quantization indices; the one-frame delay module takes as input the spatial parameter quantization index vector of the current frame and outputs that of the previous frame; the spatial parameter difference module takes as input the spatial parameter quantization index vectors of the current and previous frames and outputs the difference quantization index vector; the code table mapping module takes as input the frequency corresponding to the current spatial parameter of the stereo audio signal and outputs the label of the matching Huffman code table; the Huffman coding module takes as input the difference quantization index vector and outputs its Huffman codeword.
The invention exploits the dependence of the auditory perception of spatial parameters on frequency band, as well as the correlation among the spatial parameters, effectively removing the subjective and objective redundancy of the spatial parameters. Compared with the prior art, it improves the sound quality of spatial-parameter stereo coding at the same coding rate, and reduces the coding rate at the same coding quality.
Description of drawings
The invention is further described below with reference to the drawings and embodiments.
Fig. 1 is a block diagram of the system architecture of the spatial parameter quantization and entropy coding method for stereo audio signals of the present invention.
Fig. 2 is a flow chart of the spatial parameter quantization and entropy coding method for stereo audio signals of the present invention.
Fig. 3 is a flow chart of building the spatial parameter quantization tables in the present invention.
Fig. 4 is a flow chart of building the Huffman code tables in the present invention.
Embodiment
Let L_m(t) and R_m(t) denote the left and right signals of sound source m in the original input signal. Over the time interval (t_0, t_1), the spatial parameters are:

ITD = arg max_s { ∫_{t_0}^{t_1} L_m(t) R_m(t+s) dt / ( ∫_{t_0}^{t_1} L_m^2(t) dt · ∫_{t_0}^{t_1} R_m^2(t) dt )^{1/2} }    (1)

ILD = 10 log_10 ( ∫_{t_0}^{t_1} L_m^2(t) dt / ∫_{t_0}^{t_1} R_m^2(t) dt )    (2)

IC = max_s { ∫_{t_0}^{t_1} L_m(t) R_m(t+s) dt / ( ∫_{t_0}^{t_1} L_m^2(t) dt · ∫_{t_0}^{t_1} R_m^2(t) dt )^{1/2} }    (3)

Here ITD and IC are, respectively, the time offset at which the normalized correlation of L_m(t) and R_m(t) is maximal, and the maximum normalized correlation value itself, while ILD is the energy ratio of the left and right channels. Sound sources are separated from the input signal as subband signals, i.e. L_m(t) and R_m(t) are the m-th subband output of a filter bank or the m-th spectral band of a time-frequency transform.
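Equations (1)–(3) can be sketched for one discrete-time subband frame. This is a simplified illustration under stated assumptions: the lag search is over integer samples only, and a single overall normalization is used for all lags rather than a per-lag normalization.

```python
import numpy as np

def spatial_parameters(L, R, fs, max_shift):
    """Compute ITD, ILD, IC per Eqs. (1)-(3) for equal-length frames
    L, R (numpy arrays), sample rate fs, lag search range +/-max_shift."""
    norm = np.sqrt(np.sum(L ** 2) * np.sum(R ** 2))
    best_c, best_s = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        # normalized cross-correlation at lag s: compare L(t) with R(t + s)
        if s >= 0:
            c = np.sum(L[:len(L) - s] * R[s:]) / norm
        else:
            c = np.sum(L[-s:] * R[:len(R) + s]) / norm
        if c > best_c:
            best_c, best_s = c, s
    itd = best_s / fs                                      # Eq. (1)
    ild = 10 * np.log10(np.sum(L ** 2) / np.sum(R ** 2))   # Eq. (2)
    ic = best_c                                            # Eq. (3)
    return itd, ild, ic
```

Feeding it a right channel that is a delayed, attenuated copy of the left channel recovers the delay as the ITD, the attenuation as the ILD, and an IC near 1.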
The quantization and entropy coding of the spatial parameters uses the per-band scalar nonlinear quantization and joint entropy coding structure shown in Fig. 1, comprising at least a one-frame delay module, a scalar nonlinear quantization module, a quantization table mapping module, a spatial parameter difference module, a Huffman coding module, and a code table mapping module. The scalar nonlinear quantization module is connected to the inputs of the spatial parameter difference module and the one-frame delay module; the output of the one-frame delay module is connected to the spatial parameter difference module; the output of the spatial parameter difference module is connected to the Huffman coding module; the output of the quantization table mapping module is connected to the scalar nonlinear quantization module; the code table mapping module is connected to the audio signal and its output is connected to the Huffman coding module. First, the quantization table mapping module selects the corresponding spatial parameter quantization table according to the frequency of the current spatial parameter of the stereo audio signal; the scalar nonlinear quantization module quantizes the input spatial parameters ITD, ILD, and IC; the output quantization indices are subtracted in the spatial parameter difference module from the previous-frame data obtained through the one-frame delay module, yielding the difference spatial parameters; finally, the code table mapping module obtains the Huffman code table label according to the working band, and the Huffman coding module completes the entropy coding of the difference spatial parameters and outputs the coded codeword.
The input of the one-frame delay module is the spatial parameter quantization index of the current frame; its output is the spatial parameter quantization index of the previous frame. At initialization of the spatial-parameter stereo encoder, this module outputs all zeros. The one-frame delay module supplies the previous spatial parameter indices needed for the temporal difference.
The input of the scalar nonlinear quantization module is the spatial parameters ITD, ILD, and IC; quantization yields the quantization indices ITD_q, ILD_q, and IC_q, which are considered jointly as the output index vector (ITD_q, ILD_q, IC_q). This module removes the subjective spatial redundancy of the audio signal.
The input of the quantization table mapping module is the frequency corresponding to the current spatial parameter, and its output is the labels of the ITD, ILD, and IC quantization tables for that frequency. The full range 20~16000 Hz can be divided into a low band and a high band, corresponding to 20~4000 Hz and 4000~16000 Hz respectively, with a different quantization table for each band. Each quantization table is one-dimensional and represents the mapping between parameter values and quantization indices.
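The two-band mapping performed by the quantization table mapping module can be sketched as follows; the cutoff follows the text, while the function name, string labels, and the choice of which band owns the 4000 Hz boundary point are illustrative assumptions.

```python
def quantization_table_label(freq_hz):
    """Map a spatial parameter's frequency to its band's table label
    (two-band split per the description: 20-4000 Hz low, 4000-16000 Hz high).
    The 4000 Hz boundary is assigned to the high band by assumption."""
    if not 20 <= freq_hz <= 16000:
        raise ValueError("frequency outside the 20-16000 Hz coding range")
    return "low" if freq_hz < 4000 else "high"
```

The same lookup structure extends directly to the three-band (low/middle/high) split mentioned below.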
The spatial parameter difference module takes as input the spatial parameter quantization index vectors of the current frame and the previous frame, and outputs the difference index vector (ΔITD_q, ΔILD_q, ΔIC_q). This module removes the correlation between consecutive frames.
The input of the Huffman coding module is the difference index vector (ΔITD_q, ΔILD_q, ΔIC_q); its output is the Huffman codeword. This module removes the objective redundancy of the quantization indices, further reducing the bit rate.
Fig. 1 illustrates the case of using all of the ITD, ILD, and IC parameters, but the invention is not limited to using all parameters; a subset may also be used. Likewise, the band division is not limited to a low band and a high band; it can be decided according to conditions such as the storage complexity, computational complexity, and performance of the spatial parameter coding system. For example, when more storage is available, the full range 20~16000 Hz can be divided into low, middle, and high bands.
The spatial parameter quantization and entropy coding process of the invention can be carried out automatically by software or by an application-specific integrated circuit (ASIC), etc. The workflow of the present embodiment, shown in Fig. 2, is as follows:
1. Input the spatial parameter values and their corresponding frequencies; according to the predefined mapping between frequency and quantization table, find the current quantization table; go to step 2;
2. Using the quantization table obtained in step 1, look up the quantization index of each spatial parameter; go to step 3;
3. Combine the quantization indices of the input spatial parameters into one index vector; go to step 4;
4. Input the quantization index vector of the current frame, place it in a buffer, and take out the quantization index vector of the previous frame; go to step 5;
5. Subtract the quantization index vectors of the current and previous frames to obtain the difference quantization index vector; go to step 6;
6. According to the frequency corresponding to the current spatial parameter, find the Huffman code table for the difference quantization index vector; go to step 7;
7. With the Huffman code table obtained in step 6, find and output the Huffman codeword corresponding to the difference quantization index vector obtained in step 5.
The quantization tables of each spatial parameter for the different frequency bands are built by statistical analysis of spatial hearing test data, as shown in Fig. 3.
The concrete steps of building the ITD quantization table are:
1. Determine the band division according to the spatial hearing characteristics of the human ear: since the resolving power of the human ear for spatial parameters differs across frequency ranges, divide the full range 20~16000 Hz into a low band and a high band, corresponding to 20~4000 Hz and 4000~16000 Hz respectively.
2. Generate narrowband stereo test tones covering the test frequency range and the complete ITD dynamic range, e.g. one test point every 100 Hz in the 20~4000 Hz low band and one every 200 Hz in 4000~16000 Hz; alternatively, non-uniform test points can be chosen according to the psychoacoustic Equivalent Rectangular Bandwidth (ERB) or Bark scale.
3. By applying different delays to the left and right channels, produce ITD test signals over the spatial hearing dynamic range ±1000 μs at 20 μs intervals. At each test frequency, the test system plays back over headphones a sequence in which the ITD parameter increases monotonically; the listener subjectively judges whether adjacent points differ, and the result is recorded.
4. Collect the test data of multiple listeners, determine the average just-noticeable difference (Just Noticeable Difference, JND) value, and use it as the interval to obtain an initial ITD quantization table for each band. In practical applications, the quantization tables can be merged to reduce storage and computational complexity.
The ILD quantization table is built by a process similar to that of the ITD quantization table, with single-frequency stereo test signals. The test frequency points of each band are likewise traversed at fixed intervals, with the ILD value ranging over the spatial hearing dynamic range ±20 dB at 0.25 dB intervals. Taking a single-frequency signal as input, single-frequency stereo test signals with different ILD values are obtained by applying different gains to the left and right channels. During the test, the test tones are played back in turn at each frequency, the average JND value over the listeners is obtained, and it is used as the interval to obtain an initial ILD quantization table for each band. The quantization tables can also be merged to reduce storage and computational complexity.
The IC quantization table is built by a process similar to that of the ITD quantization table, with narrowband stereo test signals. The test frequency points of each band are likewise traversed at fixed intervals, with the IC value ranging over [0, 1] at intervals of 0.05. Taking two independent narrowband signals as input, stereo narrowband test signals with different IC values are obtained by different linear combinations of the left and right channels. During the test, the test tones are played back in turn at each frequency, the average JND value over the listeners is obtained, and it is used as the interval to obtain an initial IC quantization table for each band. The quantization tables can also be merged to reduce storage and computational complexity.
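The JND-as-interval rule used in all three constructions can be sketched as follows. The function is a hypothetical illustration: the patent derives a per-band JND from listening tests, whereas the 2.5 dB JND and the ±20 dB ILD range used in the example below are assumed values.

```python
def jnd_quantization_table(lo, hi, jnd):
    """Accumulate JND-sized steps from lo to hi to form the reconstruction
    levels of an initial quantization table (uniform in JND units)."""
    levels, v = [], lo
    while v <= hi:
        levels.append(v)
        v += jnd
    return levels

# Illustrative: a low-band ILD table over +/-20 dB with an assumed 2.5 dB JND.
ild_table = jnd_quantization_table(-20.0, 20.0, 2.5)
```

With a frequency-dependent JND, each band gets its own table, and merging tables (as mentioned above) simply amounts to thinning or sharing these level lists.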
The joint Huffman code table is built by analyzing the statistical properties of the spatial parameters of actual stereo audio signals, as shown in Fig. 4, with the following concrete steps:
1. Select a test signal set of stereo signals that covers all signal types and is statistically representative; too small a set reduces the generality of the Huffman code table, while too large a set greatly increases the computational complexity of building it;
2. In each frequency band, e.g. the low band 20~4000 Hz and the high band 4000~16000 Hz above, compute the ITD, ILD, and IC values of all signals in the test signal set using equations (1), (2), and (3);
3. Using the nonlinear quantization table of each parameter described above, compute the quantization indices ITD_q, ILD_q, and IC_q;
4. According to a given parameter selection scheme, combine the selected parameters into an index vector; for example, when all parameters are selected, the index vector is (ITD_q, ILD_q, IC_q);
5. Subtract the index vectors of two consecutive frames to obtain the difference index vector (ΔITD_q, ΔILD_q, ΔIC_q);
6. Count the occurrences of the different difference index vectors to obtain their probability distribution;
7. From this probability distribution, build the Huffman code table by the standard Huffman code table construction method.
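Steps 6 and 7 can be sketched with a standard Huffman construction over the difference index vectors. The toy statistics below are made up for illustration; a real table would be built from the representative signal set of step 1.

```python
import heapq
from collections import Counter

def build_huffman_table(diff_vectors):
    """Count occurrences of each difference index vector (step 6) and build
    a Huffman code table from the resulting distribution (step 7)."""
    counts = Counter(diff_vectors)
    if len(counts) == 1:  # degenerate case: a single symbol gets a 1-bit code
        return {sym: "0" for sym in counts}
    # heap entries: (count, tie-breaker, {symbol: partial codeword})
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(counts.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]

# Toy statistics: small differences dominate, as frame-to-frame changes are small.
table = build_huffman_table([(0, 0, 0)] * 6 + [(1, 0, 0)] * 3 + [(-1, 1, 0)])
```

As expected, the most frequent difference vector receives the shortest codeword, which is exactly how the joint table removes the objective redundancy of the quantization indices.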

Claims (4)

1. A spatial parameter quantization and entropy coding method for stereo audio signals, characterized by comprising the following steps:
(1) Input the spatial parameters of the stereo audio signal — the interchannel time difference, interchannel level difference, and interchannel coherence — find the nonlinear quantization table matching the frequency corresponding to each spatial parameter, perform nonlinear scalar quantization, and obtain the quantization indices, where the nonlinear quantization tables are built as follows:
(1-a) According to the spatial hearing characteristics of the human ear and its different resolving power for spatial parameters, divide the full frequency range 20~16000 Hz into several bands;
(1-b) Using narrowband stereo test tones, at each test frequency the test system plays back over headphones a sequence in which the spatial parameter increases monotonically; the listener subjectively judges whether each point differs from the adjacent test point, and the result is recorded;
(1-c) Collect the test data of multiple listeners, determine the average just-noticeable difference (JND) value, and use it as the interval to obtain an initial spatial parameter quantization table for each band;
(2) Combine the quantization indices of the spatial parameters at the same frequency into one quantization index vector;
(3) Difference the quantization index vector of the current frame with that of the previous frame to obtain the difference quantization index vector;
(4) Find the Huffman code table matching the frequency corresponding to the difference quantization index vector, perform Huffman coding, and output the Huffman codeword, where the Huffman code tables are built as follows:
(4-a) Take a set of stereo signals that covers all signal types and is statistically representative as the test signal set;
(4-b) In each frequency band, compute for every signal in the test signal set the time offset at which the normalized correlation of the left and right channel signals is maximal, the normalized correlation value of the left and right channel signals, and the energy ratio of the left and right channel signals;
(4-c) Using the nonlinear quantization table of each spatial parameter described above, compute the quantization index of each spatial parameter;
(4-d) According to a given parameter selection scheme, combine the selected parameters into an index vector;
(4-e) Subtract the index vectors of two consecutive frames to obtain the difference index vector;
(4-f) Count the occurrences of the different difference index vectors to obtain their probability distribution;
(4-g) From this probability distribution, build the Huffman code table by the standard Huffman code table construction method.
2. The spatial parameter quantization and entropy coding method for stereo audio signals according to claim 1, characterized in that different spatial parameter quantization tables are used in different frequency bands of the stereo audio signal.
3. The spatial parameter quantization and entropy coding method for stereo audio signals according to claim 1, characterized in that different Huffman code tables are used in different frequency bands of the stereo audio signal.
4. A system architecture for the spatial parameter quantization and entropy coding method for stereo audio signals of claim 1, characterized by comprising at least a quantization table mapping module, a scalar nonlinear quantization module, a one-frame delay module, a spatial parameter difference module, a code table mapping module, and a Huffman coding module, wherein the scalar nonlinear quantization module is connected to the inputs of the spatial parameter difference module and the one-frame delay module; the output of the one-frame delay module is connected to the spatial parameter difference module; the output of the spatial parameter difference module is connected to the Huffman coding module; the output of the quantization table mapping module is connected to the scalar nonlinear quantization module; the code table mapping module is connected to the audio signal and its output is connected to the Huffman coding module; the quantization table mapping module takes as input the frequency corresponding to the current spatial parameter of the stereo audio signal and outputs the label of the matching spatial parameter quantization table; the scalar nonlinear quantization module takes as input the matched spatial parameters of the stereo audio signal and outputs their quantization indices; the one-frame delay module takes as input the spatial parameter quantization index vector of the current frame and outputs that of the previous frame; the spatial parameter difference module takes as input the spatial parameter quantization index vectors of the current and previous frames and outputs the difference quantization index vector; the code table mapping module takes as input the frequency corresponding to the current spatial parameter of the stereo audio signal and outputs the label of the matching Huffman code table; and the Huffman coding module takes as input the difference quantization index vector and outputs its Huffman codeword.
CN2007101686140A 2007-12-05 2007-12-05 A space parameter quantification and entropy coding method for 3D audio signals and its system architecture Expired - Fee Related CN101188878B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2007101686140A CN101188878B (en) 2007-12-05 2007-12-05 A space parameter quantification and entropy coding method for 3D audio signals and its system architecture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2007101686140A CN101188878B (en) 2007-12-05 2007-12-05 A space parameter quantification and entropy coding method for 3D audio signals and its system architecture

Publications (2)

Publication Number Publication Date
CN101188878A CN101188878A (en) 2008-05-28
CN101188878B true CN101188878B (en) 2010-06-02

Family

ID=39480997

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2007101686140A Expired - Fee Related CN101188878B (en) 2007-12-05 2007-12-05 A space parameter quantification and entropy coding method for 3D audio signals and its system architecture

Country Status (1)

Country Link
CN (1) CN101188878B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101635145B (en) * 2008-07-24 2012-06-06 Huawei Technologies Co., Ltd. Method, device and system for coding and decoding
CN101408615B (en) * 2008-11-26 2011-11-30 Wuhan University Method and device for measuring the critical perceptual characteristic of the interaural time difference (ITD)
CN101408614B (en) * 2008-11-26 2011-09-14 Wuhan University Method and device for measuring the critical perceptual characteristic of the interaural level difference (ILD)
CN101419801B (en) * 2008-12-03 2011-08-17 Wuhan University Method and device for sub-band measurement of the interaural correlation perceptual characteristic
CN101826326B (en) * 2009-03-04 2012-04-04 Huawei Technologies Co., Ltd. Stereo encoding method and device, and encoder
CN101504835B (en) * 2009-03-09 2011-11-16 Wuhan University Method for measuring the spatially perceived information content of a sound field and application thereof
BR112012008793B1 (en) * 2009-10-15 2021-02-23 France Telecom Parametric coding and decoding methods for a multichannel audio signal, and parametric encoder and decoder for a multichannel signal
WO2012105885A1 (en) * 2011-02-02 2012-08-09 Telefonaktiebolaget L M Ericsson (Publ) Determining the inter-channel time difference of a multi-channel audio signal
WO2013026196A1 (en) * 2011-08-23 2013-02-28 Huawei Technologies Co., Ltd. Estimator for estimating a probability distribution of a quantization index
CN102438145A (en) * 2011-11-22 2012-05-02 Guangzhou Zhongda Telecommunications Technology Co., Ltd. Lossless image compression method based on Huffman coding
EP2856776B1 (en) * 2012-05-29 2019-03-27 Nokia Technologies Oy Stereo audio signal encoder
CN102760442B (en) * 2012-07-24 2014-09-03 Wuhan University Azimuth parameter quantization method for 3D video
KR20160015280A (en) * 2013-05-28 2016-02-12 Nokia Technologies Oy Audio signal encoder
EP3046105B1 (en) 2013-09-13 2020-01-15 Samsung Electronics Co., Ltd. Lossless coding method
CN104240712B (en) * 2014-09-30 2018-02-02 Shenzhen Research Institute of Wuhan University Three-dimensional audio multichannel grouping and clustering coding method and system
CN107731238B (en) 2016-08-10 2021-07-16 Huawei Technologies Co., Ltd. Coding method and coder for multi-channel signal
US10296286B2 (en) * 2016-12-13 2019-05-21 EVA Automation, Inc. Maintaining coordination following a wireless reset
CN109300480B (en) * 2017-07-25 2020-10-16 Huawei Technologies Co., Ltd. Coding and decoding method and device for stereo signals
CN112415293A (en) * 2019-08-21 2021-02-26 East China Normal University Amplitude-frequency characteristic measuring instrument and method based on STM32
CN113724717B (en) * 2020-05-21 2023-07-14 Chengdu TD Tech Co., Ltd. Vehicle-mounted audio processing system and method, vehicle-mounted controller and vehicle
CN111768793B (en) * 2020-07-11 2023-09-01 Beijing Barrot Technology Co., Ltd. LC3 audio encoder coding optimization method, system and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1669358A (en) * 2002-07-16 2005-09-14 Koninklijke Philips Electronics N.V. Audio coding
CN1860526A (en) * 2003-09-29 2006-11-08 Koninklijke Philips Electronics N.V. Encoding audio signals
WO2007031896A1 (en) * 2005-09-13 2007-03-22 Koninklijke Philips Electronics N.V. Audio coding


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhang Xinchen. Real-time optimization of an H.264 video encoder at low bit rates. Journal of Wuhan University (Natural Science Edition), 2005, 51(5): 599-602. *

Also Published As

Publication number Publication date
CN101188878A (en) 2008-05-28

Similar Documents

Publication Publication Date Title
CN101188878B (en) A space parameter quantification and entropy coding method for 3D audio signals and its system architecture
CN1993733B (en) Parameter quantizer and de-quantizer, parameter quantization and de-quantization of spatial audio frequency
JP3878952B2 (en) How to signal noise substitution during audio signal coding
RU2367033C2 (en) Multi-channel hierarchical audio coding with compact supplementary information
US8843378B2 (en) Multi-channel synthesizer and method for generating a multi-channel output signal
RU2325046C2 (en) Audio coding
RU2381571C2 (en) Synthesisation of monophonic sound signal based on encoded multichannel sound signal
JP4794448B2 (en) Audio encoder
CN101010725A (en) Multichannel signal coding equipment and multichannel signal decoding equipment
RU2005103637A (en) AUDIO CODING
CN1822508B (en) Method and apparatus for encoding and decoding digital signals
US20100169102A1 (en) Low complexity mpeg encoding for surround sound recordings
CN101002261A (en) Method and apparatus for encoding and decoding multi-channel audio signal using virtual source location information
DK1016320T3 (en) Method and apparatus for encoding and decoding multiple audio channels at low bit rates
CN102171754A (en) Coding device and decoding device
WO2006091151B1 (en) Optimized fidelity and reduced signaling in multi-channel audio encoding
JPH05304479A (en) High efficient encoder of audio signal
CN102257562A (en) Method and apparatus for applying reverb to a multi-channel audio signal using spatial cue parameters
CN101578655B (en) Stream generating device, decoding device, and method
CN101149925A (en) Space parameter selection method for parameter stereo coding
CN101297352A (en) Apparatus for encoding and decoding audio signal and method thereof
CN101506875B (en) Apparatus and method for combining multiple parametrically coded audio sources
CA2438431A1 (en) Bit rate reduction in audio encoders by exploiting inharmonicity effectsand auditory temporal masking
CN101313355A (en) Method and apparatus for encoding/decoding multi-channel audio signal
AU611067B2 (en) Perceptual coding of audio signals

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20100602

Termination date: 20141205

EXPY Termination of patent right or utility model