WO2019029724A1 - Time-domain stereo coding and decoding method and related products - Google Patents

Time-domain stereo coding and decoding method and related products

Info

Publication number
WO2019029724A1
WO2019029724A1 (PCT/CN2018/100060)
Authority
WO
WIPO (PCT)
Prior art keywords
current frame
channel
signal
channel combination
combination scheme
Prior art date
Application number
PCT/CN2018/100060
Other languages
English (en)
French (fr)
Inventor
李海婷 (Li Haiting)
王宾 (Wang Bin)
苗磊 (Miao Lei)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to KR1020237002928A priority Critical patent/KR102641952B1/ko
Priority to JP2020507599A priority patent/JP6974927B2/ja
Priority to EP18843195.1A priority patent/EP3664087B1/en
Priority to KR1020247006315A priority patent/KR20240032159A/ko
Priority to SG11202001169WA priority patent/SG11202001169WA/en
Priority to RU2020109715A priority patent/RU2773022C2/ru
Priority to ES18843195T priority patent/ES2977791T3/es
Priority to EP23216612.4A priority patent/EP4358083A3/en
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to BR112020002682-4A priority patent/BR112020002682A2/pt
Priority to KR1020207006889A priority patent/KR102380431B1/ko
Priority to KR1020227010026A priority patent/KR102493482B1/ko
Publication of WO2019029724A1
Priority to US16/785,174 priority patent/US11062715B2/en
Priority to US17/355,785 priority patent/US11640825B2/en
Priority to JP2021180319A priority patent/JP7311573B2/ja
Priority to US18/193,922 priority patent/US20230306972A1/en
Priority to JP2023111288A priority patent/JP2023139041A/ja

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/18Vocoders using multiple modes
    • G10L19/20Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/18Vocoders using multiple modes
    • G10L19/22Mode decision, i.e. based on audio signal content versus external parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S1/00Two-channel systems
    • H04S1/007Two-channel systems in which the audio signals are in digital form
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S3/00Systems employing more than two channels, e.g. quadraphonic
    • H04S3/008Systems employing more than two channels, e.g. quadraphonic in which the audio signals are in digital form, i.e. employing more than two discrete digital channels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04SSTEREOPHONIC SYSTEMS 
    • H04S2400/00Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/03Aspects of down-mixing multi-channel audio to configurations with lower numbers of playback channels, e.g. 7.1 -> 5.1
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00Reducing energy consumption in communication networks
    • Y02D30/70Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • the present application relates to the field of audio codec technology, and in particular, to a time domain stereo codec method and related products.
  • stereo audio conveys the orientation and spatial distribution of each sound source, which improves the clarity, intelligibility and sense of presence of the information, and is therefore widely preferred.
  • Parametric stereo coding is a common stereo coding technology that compresses multi-channel signals by converting a stereo signal into a mono signal plus spatial perception parameters.
  • because parametric stereo coding usually extracts the spatial perception parameters in the frequency domain, a time-frequency transform is required, which makes the delay of the entire codec relatively large. Therefore, when delay requirements are strict, time domain stereo coding is a better choice.
  • the traditional time domain stereo coding technology downmixes the stereo signal into two mono signals in the time domain.
  • the MS coding technique first downmixes the left and right channel signals into a center channel signal and a side channel signal.
  • L represents the left channel signal
  • R represents the right channel signal
  • the Mid channel signal is 0.5*(L+R)
  • the Mid channel signal characterizes the correlated information between the left and right channels
  • the Side channel signal is 0.5*(L-R), and the Side channel signal characterizes the difference between the left and right channels.
  • the Mid channel signal and the Side channel signal are each encoded by a mono coding method; for the Mid channel signal, a relatively large number of bits are usually used for encoding, while for the Side channel signal, a relatively small number of bits are usually used for encoding.
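The Mid/Side downmix described in the bullets above can be sketched as follows (an illustrative sketch, not the exact implementation of this application):

```python
import numpy as np

def ms_downmix(left, right):
    """Classic M/S downmix: the Mid channel carries the correlated
    content of the left and right channels, the Side channel carries
    their difference, as described above."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side
```

For a fully correlated (identical) left/right pair the Side channel is zero, which is why it can usually be coded with fewer bits.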
  • with the traditional time domain stereo coding technology, the primary signal sometimes has particularly small or even missing energy, which degrades the final coding quality.
  • Embodiments of the present application provide a time domain stereo coding method and related products.
  • an embodiment of the present application provides a time domain stereo coding method, which may include: determining an encoding mode of a current frame.
  • when it is determined that the coding mode of the current frame is the non-correlation signal coding mode, the time domain downmix processing mode corresponding to the non-correlation signal coding mode is used to perform time domain downmix processing on the left and right channel signals of the current frame, to obtain primary and secondary channel signals (i.e., a primary channel signal and a secondary channel signal) of the current frame.
  • the time domain downmix processing mode corresponding to the non-correlation signal coding mode is the time domain downmix processing mode corresponding to the non-correlation signal channel combination scheme, and the non-correlation signal channel combination scheme is a channel combination scheme corresponding to the inverted signal.
  • the obtained primary and secondary channel signals of the current frame are encoded.
  • the stereo signal of the current frame is composed, for example, of left and right channel signals of the current frame.
  • the encoding mode of the current frame may be one of a plurality of encoding modes.
  • the coding mode of the current frame may be one of the following coding modes: a correlation signal coding mode, a non-correlation signal coding mode, a correlation to a non-correlation signal coding mode, and a non-correlation to correlation signal coding mode.
  • the coding mode of the current frame needs to be determined in the above solution, which means that there are multiple possibilities for the coding mode of the current frame. This differs from the traditional coding scheme with only one coding mode; multiple possible coding modes matched to a variety of possible scenarios help achieve better compatibility. Moreover, since a channel combination scheme corresponding to the inverted signal is introduced, there is a more targeted channel combination scheme and encoding mode for the case where the stereo signal of the current frame is an inverted signal, which is conducive to improving the coding quality.
  • the method may further include: when it is determined that the coding mode of the current frame is the correlation signal coding mode, performing time domain downmix processing on the left and right channel signals of the current frame by using the time domain downmix processing mode corresponding to the correlation signal coding mode, to obtain the primary and secondary channel signals of the current frame; the time domain downmix processing mode corresponding to the correlation signal coding mode is the time domain downmix processing mode corresponding to the correlation signal channel combination scheme, and the correlation signal channel combination scheme is a channel combination scheme corresponding to the normal phase signal.
  • the method may further include: when it is determined that the coding mode of the current frame is the correlation to non-correlation signal coding mode, performing time domain downmix processing on the left and right channel signals of the current frame by using the time domain downmix processing mode corresponding to the correlation to non-correlation signal coding mode, to obtain the primary and secondary channel signals of the current frame; the time domain downmix processing mode corresponding to the correlation to non-correlation signal coding mode is the time domain downmix processing mode corresponding to the transition from the correlation signal channel combination scheme to the non-correlation signal channel combination scheme.
  • the method may further include: when it is determined that the coding mode of the current frame is the non-correlation to correlation signal coding mode, performing time domain downmix processing on the left and right channel signals of the current frame by using the time domain downmix processing mode corresponding to the non-correlation to correlation signal coding mode, to obtain the primary and secondary channel signals of the current frame; the time domain downmix processing mode corresponding to the non-correlation to correlation signal coding mode is the time domain downmix processing mode corresponding to the transition from the non-correlation signal channel combination scheme to the correlation signal channel combination scheme.
  • time domain downmix processing methods corresponding to different coding modes are usually different.
  • each encoding mode may also correspond to one or more time domain downmix processing methods.
  • performing time domain downmix processing on the left and right channel signals of the current frame by using the time domain downmix processing mode corresponding to the non-correlation signal coding mode to obtain the primary and secondary channel signals of the current frame may include: performing time domain downmix processing on the left and right channel signals of the current frame according to a channel combination scale factor of the non-correlation signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame; or performing time domain downmix processing on the left and right channel signals of the current frame according to channel combination scale factors of the non-correlation signal channel combination scheme of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  • the channel combination scale factor of the channel combination scheme of an audio frame may be a preset fixed value. It is of course also possible to determine the channel combination scale factor of the audio frame based on the channel combination scheme of the audio frame.
  • a corresponding downmix matrix may be constructed based on a channel combination scale factor of the audio frame, and time domain downmix processing is performed on the left and right channel signals of the current frame by using the downmix matrix corresponding to the channel combination scheme, to obtain the primary and secondary channel signals of the current frame.
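Constructing a downmix matrix from a channel combination scale factor and applying it can be sketched as follows; the specific matrix form `[[ratio, 1-ratio], [1-ratio, -ratio]]` is an assumption for illustration, not the exact matrix defined by this application:

```python
import numpy as np

def downmix_with_ratio(left, right, ratio):
    """Build a 2x2 downmix matrix from the channel combination scale
    factor `ratio` (hypothetical form) and apply it to the left and
    right channel signals to obtain primary/secondary channel signals."""
    M = np.array([[ratio, 1.0 - ratio],
                  [1.0 - ratio, -ratio]])
    lr = np.vstack([np.asarray(left, float),
                    np.asarray(right, float)])   # 2 x N sample matrix
    primary, secondary = M @ lr                  # apply the downmix matrix
    return primary, secondary
```

With ratio = 0.5 this assumed form reduces to the classic M/S downmix, i.e. primary = 0.5*(L+R) and secondary = 0.5*(L-R).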
  • the left and right channel signals of the current frame are subjected to time domain downmix processing according to channel combination scale factors of the non-correlation signal channel combination scheme of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  • the delay_com represents coding delay compensation.
  • the left and right channel signals of the current frame are subjected to time domain downmix processing according to channel combination scale factors of the non-correlation signal channel combination scheme of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  • fade_in(n) represents a fade-in factor.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) represents a fade-out factor.
  • fade_out(n) can also be a fade-out factor based on other functional relationships of n.
  • NOVA_1 represents the length of the transition process.
  • the value of NOVA_1 can be set according to the specific scenario. NOVA_1 may, for example, be equal to 3/N, or NOVA_1 may be another value less than N.
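The fade-in/fade-out transition over a region of length NOVA_1 can be sketched as follows; the linear forms fade_in(n) = n/NOVA_1 and fade_out(n) = 1 - n/NOVA_1 are assumptions for illustration (the text notes that other functional relationships of n are equally possible):

```python
def fade_in(n, nova_1):
    # Assumed linear fade-in factor over the transition length NOVA_1.
    return n / nova_1

def fade_out(n, nova_1):
    # Assumed linear fade-out factor, complementary to fade_in.
    return 1.0 - n / nova_1

def crossfade(old_downmix, new_downmix, nova_1):
    """Blend the signal produced under the previous frame's channel
    combination scheme with the one produced under the current frame's
    scheme across the transition region of length NOVA_1."""
    return [fade_out(n, nova_1) * old_downmix[n]
            + fade_in(n, nova_1) * new_downmix[n]
            for n in range(nova_1)]
```

This kind of cross-fade avoids an audible discontinuity when the channel combination scheme switches between frames.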
  • the left and right channel signals of the current frame are subjected to time domain downmix processing to obtain the primary and secondary channel signals of the current frame.
  • X_L(n) represents the left channel signal of the current frame.
  • X_R(n) represents the right channel signal of the current frame.
  • Y(n) represents the primary channel signal of the current frame obtained by time domain downmix processing; X(n) represents the secondary channel signal of the current frame obtained by time domain downmix processing.
  • delay_com represents coding delay compensation
  • M_11 represents the downmix matrix corresponding to the correlation signal channel combination scheme of the previous frame, and M_11 is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
  • M_12 represents the downmix matrix corresponding to the non-correlation signal channel combination scheme of the previous frame, and M_12 is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • M_22 represents the downmix matrix corresponding to the non-correlation signal channel combination scheme of the current frame, and M_22 is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • M_21 represents the downmix matrix corresponding to the correlation signal channel combination scheme of the current frame, and M_21 is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • M_21 may take various forms, for example:
  • the ratio represents a channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • M_22 may take various forms, for example:
  • ratio_SM represents a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • M_12 may take various forms, for example:
  • α_1_pre = tdm_last_ratio_SM; α_2_pre = 1 - tdm_last_ratio_SM
  • tdm_last_ratio_SM represents the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • the left and right channel signals of the current frame may specifically be the original left and right channel signals of the current frame (the original left and right channel signals are left and right channel signals that have not been preprocessed in the time domain, for example, sampled left and right channel signals); or may be the time-domain preprocessed left and right channel signals of the current frame; or may be the left and right channel signals of the current frame that have undergone delay alignment processing.
  • x_L(n) represents the original left channel signal of the current frame.
  • x_R(n) represents the original right channel signal of the current frame.
  • x_L_HP(n) represents the time-domain preprocessed left channel signal of the current frame.
  • x_R_HP(n) represents the time-domain preprocessed right channel signal of the current frame.
  • x'_L(n) represents the delay-aligned left channel signal of the current frame.
  • x'_R(n) represents the delay-aligned right channel signal of the current frame.
  • the embodiment of the present application further provides a time domain stereo decoding method, which may include:
  • the code stream is decoded to obtain the primary and secondary channel decoded signals of the current frame; the decoding mode of the current frame is determined; and, in a case where the decoding mode of the current frame is the non-correlation signal decoding mode, time domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame by using the time domain upmix processing mode corresponding to the non-correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame.
  • the time domain upmix processing mode corresponding to the non-correlation signal decoding mode is the time domain upmix processing mode corresponding to the non-correlation signal channel combination scheme, and the non-correlation signal channel combination scheme is a channel combination scheme corresponding to the inverted signal.
  • the decoding mode of the current frame may be one of a plurality of decoding modes.
  • the decoding mode of the current frame may be one of the following decoding modes: a correlation signal decoding mode, a non-correlation signal decoding mode, a correlation to a non-correlation signal decoding mode, and a non-correlation to correlation signal decoding mode.
  • the decoding mode of the current frame needs to be determined in the above solution, which means that there are multiple possibilities for the decoding mode of the current frame. This differs from the conventional scheme with only one decoding mode; multiple possible decoding modes matched to a variety of possible scenarios help achieve better compatibility. Moreover, since a channel combination scheme corresponding to the inverted signal is introduced, there is a more targeted channel combination scheme and decoding mode for the case where the stereo signal of the current frame is an inverted signal, which is conducive to improving the decoding quality.
  • the method may further include:
  • when it is determined that the decoding mode of the current frame is the correlation signal decoding mode, time domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame by using the time domain upmix processing mode corresponding to the correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame; the time domain upmix processing mode corresponding to the correlation signal decoding mode is the time domain upmix processing mode corresponding to the correlation signal channel combination scheme, and the correlation signal channel combination scheme is a channel combination scheme corresponding to the normal phase signal.
  • the method may further include: when it is determined that the decoding mode of the current frame is the correlation to non-correlation signal decoding mode, performing time domain upmix processing on the primary and secondary channel decoded signals of the current frame by using the time domain upmix processing mode corresponding to the correlation to non-correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame; the time domain upmix processing mode corresponding to the correlation to non-correlation signal decoding mode is the time domain upmix processing mode corresponding to the transition from the correlation signal channel combination scheme to the non-correlation signal channel combination scheme.
  • the method may further include: when it is determined that the decoding mode of the current frame is the non-correlation to correlation signal decoding mode, performing time domain upmix processing on the primary and secondary channel decoded signals of the current frame by using the time domain upmix processing mode corresponding to the non-correlation to correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame; the time domain upmix processing mode corresponding to the non-correlation to correlation signal decoding mode is the time domain upmix processing mode corresponding to the transition from the non-correlation signal channel combination scheme to the correlation signal channel combination scheme.
  • each decoding mode may also correspond to one or more time domain upmix processing methods.
  • performing time domain upmix processing on the primary and secondary channel decoding signals of the current frame by using a time domain upmix processing manner corresponding to the non-correlation signal decoding mode including:
  • a corresponding upmix matrix may be constructed based on a channel combination scale factor of the audio frame, and time domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame by using the upmix matrix corresponding to the channel combination scheme, to obtain the left and right channel reconstruction signals of the current frame.
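On the decoder side, the upmix can be sketched as applying the inverse of the matrix used for the downmix. The matrix form below is the same hypothetical `[[ratio, 1-ratio], [1-ratio, -ratio]]` shape assumed earlier, used here only for illustration:

```python
import numpy as np

def upmix_with_ratio(primary, secondary, ratio):
    """Reconstruct left/right channel signals from the decoded
    primary/secondary channel signals by inverting the (assumed)
    downmix matrix built from the same channel combination scale
    factor -- an illustrative sketch, not the codec's exact matrix."""
    M = np.array([[ratio, 1.0 - ratio],
                  [1.0 - ratio, -ratio]])
    M_inv = np.linalg.inv(M)                     # upmix = inverse of downmix
    ps = np.vstack([np.asarray(primary, float),
                    np.asarray(secondary, float)])
    left_rec, right_rec = M_inv @ ps
    return left_rec, right_rec
```

Under this assumption, downmixing with a given ratio and then upmixing with the same ratio reconstructs the original left and right channel signals exactly (up to quantization error in a real codec).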
  • time domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame according to channel combination scale factors of the non-correlation signal channel combination scheme of the current frame and the previous frame, to obtain the left and right channel reconstruction signals of the current frame.
  • the delay_com represents coding delay compensation.
  • time domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame according to channel combination scale factors of the non-correlation signal channel combination scheme of the current frame and the previous frame, to obtain the left and right channel reconstruction signals of the current frame.
  • the NOVA_1 represents the length of the transition process.
  • fade_in(n) represents a fade-in factor.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) represents a fade-out factor.
  • fade_out(n) can also be a fade-out factor based on other functional relationships of n.
  • NOVA_1 represents the length of the transition process.
  • the value of NOVA_1 can be set according to the specific scenario. NOVA_1 may, for example, be equal to 3/N, or NOVA_1 may be another value less than N.
  • time domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame according to a channel combination scale factor of the correlation signal channel combination scheme of the current frame, to obtain the left and right channel reconstruction signals of the current frame.
  • upmixing_delay represents decoding delay compensation.
  • an upmix matrix corresponding to the correlation signal channel combination scheme of the previous frame is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
  • an upmix matrix corresponding to the non-correlation signal channel combination scheme of the current frame is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • an upmix matrix corresponding to the non-correlation signal channel combination scheme of the previous frame is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • an upmix matrix corresponding to the correlation signal channel combination scheme of the current frame is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • α_1 = ratio_SM; α_2 = 1 - ratio_SM
  • the ratio_SM represents a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • α_1_pre = tdm_last_ratio_SM; α_2_pre = 1 - tdm_last_ratio_SM
  • tdm_last_ratio_SM represents the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • the ratio represents a channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • the embodiment of the present application further provides a time domain stereo coding device, which may include a processor and a memory coupled to each other, wherein the processor is configured to perform some or all of the steps of any one of the stereo coding methods of the first aspect.
  • the embodiment of the present application further provides a time domain stereo decoding device, which may include a processor and a memory coupled to each other, wherein the processor is configured to perform some or all of the steps of any one of the stereo decoding methods of the second aspect.
  • an embodiment of the present application provides a time domain stereo coding apparatus, including several functional units for implementing any one of the methods of the first aspect.
  • an embodiment of the present application provides a time domain stereo decoding apparatus, including a plurality of functional units for implementing any one of the methods of the second aspect.
  • the embodiment of the present application provides a computer readable storage medium that stores program code, where the program code includes instructions for performing some or all of the steps of any one of the methods of the first aspect.
  • the embodiment of the present application provides a computer readable storage medium that stores program code, where the program code includes instructions for performing some or all of the steps of any one of the methods of the second aspect.
  • the embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
  • the embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the second aspect.
  • FIG. 1 is a schematic diagram of an inverted signal according to an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of an audio encoding method according to an embodiment of the present application.
  • FIG. 3 is a schematic flowchart of a method for determining an audio decoding mode according to an embodiment of the present application
  • FIG. 4 is a schematic flowchart diagram of another audio encoding method according to an embodiment of the present disclosure.
  • FIG. 5 is a schematic flowchart of an audio decoding method according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic flowchart diagram of another audio encoding method according to an embodiment of the present disclosure.
  • FIG. 7 is a schematic flowchart diagram of another audio decoding method according to an embodiment of the present disclosure.
  • FIG. 8 is a schematic flowchart diagram of a method for determining a time domain stereo parameter according to an embodiment of the present disclosure
  • 9-A is a schematic flowchart of another audio encoding method provided by an embodiment of the present application.
  • 9-B is a schematic flowchart of a method for calculating and encoding a channel combination scale factor corresponding to a current frame non-correlation signal channel combination scheme according to an embodiment of the present application;
  • 9-C is a schematic flowchart of a method for calculating an amplitude correlation difference parameter between the left and right channels of a current frame according to an embodiment of the present application;
  • 9-D is a schematic flowchart of a method for converting an amplitude correlation difference parameter between left and right channels of a current frame into a channel combination scale factor according to an embodiment of the present application;
  • FIG. 10 is a schematic flowchart diagram of another audio decoding method according to an embodiment of the present disclosure.
  • 11-A is a schematic diagram of an apparatus provided by an embodiment of the present application.
  • 11-B is a schematic diagram of another apparatus provided by an embodiment of the present application.
  • 11-C is a schematic diagram of another apparatus provided by an embodiment of the present application.
  • 12-A is a schematic diagram of another apparatus provided by an embodiment of the present application.
  • 12-B is a schematic diagram of another apparatus provided by an embodiment of the present application.
  • 12-C is a schematic diagram of another apparatus provided by an embodiment of the present application.
  • the time domain signal may be referred to as “signal” for simplicity of description.
  • the left channel time domain signal may be referred to simply as "left channel signal.”
  • the right channel time domain signal may be referred to simply as a "right channel signal.”
  • a mono time domain signal may be referred to simply as a "mono signal.”
  • the reference channel time domain signal may be referred to simply as a "reference channel signal.”
  • the main channel time domain signal may be referred to simply as a "main channel signal".
  • the secondary channel time domain signal may be referred to simply as a "secondary channel signal".
  • a Mid channel time domain signal may be referred to simply as a "Mid channel signal".
  • the side channel time domain signal may be referred to as a “side channel signal”.
  • Other situations can be deduced by analogy.
  • the left channel time domain signal and the right channel time domain signal may be collectively referred to as “left and right channel time domain signals” or may be collectively referred to as “left and right channel signals”. That is, the left and right channel time domain signals include a left channel time domain signal and a right channel time domain signal.
  • the left and right channel time domain signals of the current frame subjected to the delay alignment processing include a left channel time domain signal of the current frame subjected to the delay alignment processing and a right channel time domain signal of the current frame subjected to the delay alignment processing.
  • the primary channel signal and the secondary channel signal can be collectively referred to as "primary and secondary channel signals.” That is, the primary and secondary channel signals include a primary channel signal and a secondary channel signal.
  • the primary and secondary channel decoding signals include a primary channel decoding signal and a secondary channel decoding signal.
  • the left and right channel reconstruction signals include a left channel reconstruction signal and a right channel reconstruction signal. And so on.
  • the conventional MS coding technique first downmixes the left and right channel signals into a center channel signal and a side channel signal.
  • L represents the left channel signal
  • R represents the right channel signal
  • the Mid channel signal is 0.5*(L+R).
  • the Mid channel signal characterizes the correlation information between the left and right channels.
  • the Side channel signal is 0.5*(L-R), and the Side channel signal characterizes the difference between the left and right channels.
  • the Mid channel signal and the Side channel signal are respectively encoded by a mono coding method.
  • the Mid channel signal is usually encoded with a relatively large number of bits, while the Side channel signal is usually encoded with a relatively small number of bits.
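The MS downmix described above can be sketched as follows. This is an illustrative example written for this description, not code from the embodiment itself:

```python
# Illustrative sketch of the conventional MS coding downmix described
# above: Mid = 0.5*(L+R) carries the correlated content between the
# channels, Side = 0.5*(L-R) carries the difference between them.
def ms_downmix(left, right):
    mid = [0.5 * (l + r) for l, r in zip(left, right)]
    side = [0.5 * (l - r) for l, r in zip(left, right)]
    return mid, side

def ms_upmix(mid, side):
    # Perfect reconstruction: L = Mid + Side, R = Mid - Side.
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right
```

For a near in-phase input the Side signal is small, which is why it can be coded with fewer bits; for a near out-of-phase input it is instead the Mid signal that collapses, as discussed below.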
  • some schemes extract time domain stereo parameters for indicating the proportion of the left and right channels in the time domain downmix processing by analyzing the time domain signals of the left and right channels.
  • the purpose of this method is to increase the energy of the primary channel and reduce the energy of the secondary channel in the time domain downmix signal when the energy difference between the left and right channel signals of the stereo signal is relatively large.
  • L represents the left channel signal
  • R represents the right channel signal.
  • alpha and beta are downmix scale factors, each a real number between 0 and 1.
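As a hedged sketch of such a scale-factor-based time domain downmix (the passage does not give the exact formula, so the mapping below is only an illustrative assumption):

```python
# Hypothetical time domain downmix using scale factors alpha and beta
# in [0, 1]. The exact formula is an assumption for illustration:
# the primary channel carries the weighted sum and the secondary
# channel carries the weighted difference of the two input channels.
def td_downmix(left, right, alpha, beta):
    primary = [alpha * l + beta * r for l, r in zip(left, right)]
    secondary = [beta * l - alpha * r for l, r in zip(left, right)]
    return primary, secondary
```

When alpha is larger than beta, the left channel contributes more energy to the primary channel, matching the stated purpose of steering energy toward the primary channel.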
  • Figure 1 shows the amplitude variation of a left channel signal and a right channel signal.
  • the absolute values of the amplitudes of corresponding samples of the left channel signal and the right channel signal are substantially the same, but the signs are opposite; this is a typical inversion-like signal.
  • Figure 1 only shows a typical example of an inversion-like signal.
  • an inversion-like signal refers to a stereo signal in which the phase difference between the left and right channel signals is close to 180 degrees.
  • a stereo signal having a phase difference between the left and right channel signals within [180-θ, 180+θ] may be referred to as an inversion-like signal, where θ may take any angle between 0° and 90°, for example, 0°, 5°, 15°, 17°, 20°, 30°, or 40°, among other angles.
  • a normal-phase-like signal is a stereo signal in which the phase difference between the left and right channel signals is close to 0 degrees.
  • a stereo signal in which the phase difference between the left and right channel signals lies within [-θ, θ] may be referred to as a normal-phase-like signal, where θ can take any angle between 0° and 90°, for example, 0°, 5°, 15°, 17°, 20°, 30°, or 40°.
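The classification above can be sketched as follows (θ is written as `theta`; this is an illustrative helper written for this description, not the embodiment's own code):

```python
# Classify a stereo frame by its inter-channel phase difference, per
# the definitions above: within theta of 0 degrees -> normal-phase-like,
# within theta of 180 degrees -> inversion-like.
def classify_by_phase(phase_diff_deg, theta=30.0):
    d = abs(phase_diff_deg) % 360.0
    if d > 180.0:
        d = 360.0 - d  # fold the difference into [0, 180]
    if d <= theta:
        return "normal-phase-like"
    if d >= 180.0 - theta:
        return "inversion-like"
    return "neither"
```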
  • the energy of the main channel signal generated by the time domain downmix processing is often significantly greater than the energy of the secondary channel signal. If the main channel signal is encoded with a larger number of bits and the secondary channel signal is encoded with a smaller number of bits, it is advantageous to obtain a better encoding effect. However, when the left and right channel signals are inverted signals, if the same time domain downmix processing method is used, the generated main channel signal energy may be particularly small or even missing, resulting in a degradation of the final encoding quality.
  • the encoding device and the decoding device mentioned in the embodiments of the present application may be devices having functions of collecting, storing, and transmitting voice signals to the outside.
  • the encoding device and the decoding device may be, for example, a mobile phone, a server, a tablet, a personal computer, or a laptop, among others.
  • the left and right channel signals refer to left and right channel signals of the stereo signal.
  • the stereo signal may be an original stereo signal, a stereo signal composed of two signals included in a multi-channel signal, or a stereo signal composed of two signals jointly generated from multiple signals included in a multi-channel signal.
  • the stereo coding method may also be a stereo coding method used in multi-channel coding.
  • the stereo encoding device may also be a stereo encoding device used in a multi-channel encoding device.
  • the stereo decoding method can also be a stereo decoding method used in multi-channel decoding.
  • the stereo decoding device may be a stereo decoding device used in a multi-channel decoding device.
  • the audio encoding method in the embodiment of the present application is, for example, directed to a stereo encoding scenario
  • the audio decoding method in the embodiment of the present application is, for example, directed to a stereo decoding scenario.
  • an audio encoding mode determining method may include: determining a channel combining scheme of a current frame, and determining an encoding mode of the current frame based on a channel combining scheme of a previous frame and a current frame.
  • FIG. 2 is a schematic flowchart of an audio encoding method according to an embodiment of the present application.
  • a related step of an audio encoding method may be implemented by an encoding device, for example, may include the following steps:
  • the channel combination scheme of the current frame is one of a plurality of channel combination schemes.
  • the plurality of channel combination schemes include a non-correlated signal channel combination scheme and a correlation signal channel combination scheme.
  • the correlation signal channel combination scheme is a channel combination scheme corresponding to the normal phase-like signal.
  • the non-correlation signal channel combination scheme is a channel combination scheme corresponding to the inversion-like signal. It can be understood that the channel combination scheme corresponding to the normal phase-like signal is applicable to the normal phase-like signal, and the channel combination scheme corresponding to the inverted signal is applicable to the inverted signal.
  • the encoding mode of the current frame may be determined based on the channel combining scheme of the current frame. Alternatively, a default encoding mode may be used as the encoding mode of the current frame.
  • the coding mode of the current frame is one of multiple coding modes.
  • the multiple coding modes may include: a correlation signal to non-correlation signal coding switching mode (correlated-to-anticorrelated signal coding switching mode), a non-correlation signal to correlation signal coding switching mode (anticorrelated-to-correlated signal coding switching mode), a correlation signal coding mode (correlated signal coding mode), and a non-correlation signal coding mode (anticorrelated signal coding mode).
  • the time domain downmix mode corresponding to the correlation signal to the non-correlation signal coding mode may be referred to as a "correlated-to-anticorrelated signal downmix switching mode".
  • the time domain downmix mode corresponding to the non-correlation signal to the correlation signal coding mode may be referred to as an "anticorrelated-to-correlated signal downmix switching mode".
  • the time domain downmix mode corresponding to the correlation signal coding mode may be referred to as a "correlated signal downmix mode", for example.
  • the time domain downmix mode corresponding to the non-correlation signal coding mode may be referred to as an "anticorrelated signal downmix mode", for example.
  • the time-domain downmix processing of the left and right channel signals of the current frame can obtain the primary and secondary channel signals of the current frame, and further encode the primary and secondary channel signals to obtain a code stream.
  • the channel combination scheme identifier of the current frame (which is used to indicate the channel combination scheme of the current frame) may be further written into the code stream, so that the decoding apparatus can determine the channel combination scheme of the current frame based on the channel combination scheme identifier of the current frame included in the code stream.
  • the specific implementation manner of determining the coding mode of the current frame according to the channel combination scheme of the previous frame and the channel combination scheme of the current frame may be various.
  • determining an encoding mode of the current frame according to a channel combining scheme of a previous frame and a channel combining scheme of the current frame may include:
  • the channel combining scheme of the previous frame is a correlation signal channel combining scheme
  • the channel combining scheme of the current frame is a non-correlated signal channel combining scheme
  • the encoding mode of the current frame is determined to be the correlation signal to non-correlation signal coding mode, wherein the correlation signal to non-correlation signal coding mode performs time domain downmix processing using a downmix processing method corresponding to the transition from the correlation signal channel combination scheme to the non-correlated signal channel combination scheme.
  • the channel combining scheme of the previous frame is a non-correlated signal channel combining scheme
  • the channel combining scheme of the current frame is a non-correlated signal channel combining scheme
  • the coding mode of the current frame is determined to be the non-correlation signal coding mode, wherein the non-correlation signal coding mode performs time domain downmix processing using the downmix processing method corresponding to the non-correlated signal channel combination scheme.
  • the channel combining scheme of the previous frame is a non-correlated signal channel combining scheme
  • the channel combining scheme of the current frame is a correlation signal channel combining scheme
  • the encoding mode of the current frame is determined to be the non-correlation signal to correlation signal coding mode; the time domain downmix processing mode corresponding to this mode may be a segmented time domain downmix mode, in which the left and right channel signals of the current frame are subjected to segmented time domain downmix processing according to the channel combination schemes of the current frame and the previous frame.
  • the channel combination scheme of the previous frame is a correlation signal channel combination scheme
  • the channel combination scheme of the current frame is a correlation signal channel combination scheme
  • the coding mode of the current frame is determined to be the correlation signal coding mode, wherein the correlation signal coding mode performs time domain downmix processing using the downmix processing method corresponding to the correlation signal channel combination scheme.
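The four decision branches above can be condensed into a small lookup, sketched here for illustration (the scheme names are shorthand introduced for this sketch, not identifiers from the embodiment):

```python
# Sketch of the coding mode decision: the mode follows from the channel
# combination schemes of the previous and current frames. "corr" stands
# for the correlation signal channel combination scheme, "anticorr"
# for the non-correlated signal channel combination scheme.
def select_coding_mode(prev_scheme, cur_scheme):
    if prev_scheme == "corr" and cur_scheme == "anticorr":
        return "correlation-to-non-correlation switching mode"
    if prev_scheme == "anticorr" and cur_scheme == "corr":
        return "non-correlation-to-correlation switching mode"
    if cur_scheme == "anticorr":
        return "non-correlation signal coding mode"
    return "correlation signal coding mode"
```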
  • time domain downmix processing methods corresponding to different coding modes are usually different.
  • each encoding mode may also correspond to one or more time domain downmix processing methods.
  • in a case where it is determined that the encoding mode of the current frame is the correlation signal coding mode, the left and right channel signals of the current frame are subjected to time domain downmix processing using the time domain downmix processing manner corresponding to the correlation signal coding mode, to obtain the primary and secondary channel signals of the current frame; the time domain downmix processing method corresponding to the correlation signal coding mode is the time domain downmix processing method corresponding to the correlation signal channel combination scheme.
  • in a case where it is determined that the encoding mode of the current frame is the non-correlation signal coding mode, the left and right channel signals of the current frame are subjected to time domain downmix processing using the time domain downmix processing manner corresponding to the non-correlation signal coding mode, to obtain the primary and secondary channel signals of the current frame.
  • the time domain downmix processing mode corresponding to the non-correlation signal coding mode is a time domain downmix processing mode corresponding to the non-correlated signal channel combination scheme.
  • in a case where it is determined that the encoding mode of the current frame is the correlation signal to non-correlation signal coding mode, the left and right channel signals of the current frame are subjected to time domain downmix processing using the time domain downmix processing manner corresponding to the correlation signal to non-correlation signal coding mode, to obtain the primary and secondary channel signals of the current frame; this time domain downmix processing manner is the one corresponding to the transition from the correlation signal channel combination scheme to the non-correlated signal channel combination scheme.
  • the time domain downmix processing mode corresponding to the correlation signal to non-correlation signal coding mode may be a segmented time domain downmix mode, in which the left and right channel signals of the current frame are subjected to segmented time domain downmix processing according to the channel combination schemes of the current frame and the previous frame.
  • in a case where it is determined that the encoding mode of the current frame is the non-correlation signal to correlation signal coding mode, the left and right channel signals of the current frame are subjected to time domain downmix processing using the time domain downmix processing manner corresponding to the non-correlation signal to correlation signal coding mode, to obtain the primary and secondary channel signals of the current frame; this time domain downmix processing manner is the one corresponding to the transition from the non-correlated signal channel combination scheme to the correlation signal channel combination scheme.
  • time domain downmix processing methods corresponding to different coding modes are usually different.
  • each encoding mode may also correspond to one or more time domain downmix processing methods.
  • performing time domain downmix processing on the left and right channel signals of the current frame using the time domain downmix processing manner corresponding to the non-correlation signal coding mode to obtain the primary and secondary channel signals of the current frame may include: performing time domain downmix processing on the left and right channel signals of the current frame according to the channel combination scale factor of the non-correlated signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame; or performing time domain downmix processing on the left and right channel signals of the current frame according to the channel combination scale factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  • it can be seen that in the above solution the channel combination scheme of the current frame needs to be determined, which means that there are multiple possibilities for the channel combination scheme of the current frame. Compared with the traditional scheme with only one channel combination scheme, having multiple possible channel combination schemes for multiple possible scenarios is advantageous for obtaining a better compatibility and matching effect.
  • moreover, the coding mode of the current frame is determined based on the channel combination schemes of the previous frame and the current frame, so the coding mode of the current frame also has multiple possibilities. Compared with the traditional scheme with only one coding mode, having multiple possible coding modes for multiple possible scenarios is beneficial for obtaining a better compatibility and matching effect.
  • when the coding mode of the current frame is, for example, the correlation signal to non-correlation signal coding mode or the non-correlation signal to correlation signal coding mode, segmented time domain downmix processing may be performed on the left and right channel signals of the current frame according to the channel combination schemes of the current frame and the previous frame. Since this segmented time domain downmix processing mechanism is introduced for the case where the channel combination schemes of the current frame and the previous frame differ, it is beneficial for achieving a smooth transition between channel combination schemes and thereby improving the encoding quality.
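One way to realize such a segmented time domain downmix is a short crossfade between the two schemes' downmix rules. The segment boundaries and the linear fade below are illustrative assumptions, not the embodiment's exact definition:

```python
# Hedged sketch of a segmented time domain downmix for a transition
# frame: a start segment uses the previous frame's downmix, a short
# middle segment crossfades between the two, and an end segment uses
# the current frame's downmix. n1 and n2 are the (assumed) segment
# boundaries; downmix_prev and downmix_cur map one sample pair-value.
def segmented_downmix(frame, downmix_prev, downmix_cur, n1, n2):
    out = []
    fade_len = n2 - n1
    for i, x in enumerate(frame):
        if i < n1:
            out.append(downmix_prev(x))
        elif i < n2:
            w = (i - n1 + 1) / fade_len  # weight ramps toward the new scheme
            out.append((1.0 - w) * downmix_prev(x) + w * downmix_cur(x))
        else:
            out.append(downmix_cur(x))
    return out
```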
  • an audio decoding mode determining method is also provided.
  • the related steps of the audio decoding mode determining method may be implemented by a decoding device.
  • the method may specifically include:
  • the decoding mode of the current frame is one of multiple decoding modes.
  • the multiple decoding modes may include: a correlation signal to non-correlation signal decoding switching mode (correlated-to-anticorrelated signal decoding switching mode), a non-correlation signal to correlation signal decoding switching mode (anticorrelated-to-correlated signal decoding switching mode), a correlation signal decoding mode (correlated signal decoding mode), and a non-correlation signal decoding mode (anticorrelated signal decoding mode).
  • the time domain upmix mode corresponding to the correlation signal to the non-correlation signal decoding mode may be referred to as a "correlated-to-anticorrelated signal upmix switching mode".
  • the time domain upmix mode corresponding to the non-correlation signal to the correlation signal decoding mode may be referred to as an "anticorrelated-to-correlated signal upmix switching mode".
  • the time domain upmix mode corresponding to the correlation signal decoding mode may be referred to as a "correlated signal upmix mode", for example.
  • the time domain upmix mode corresponding to the non-correlation signal decoding mode may be referred to as an "anticorrelated signal upmix mode", for example.
  • determining the decoding mode of the current frame according to the channel combining scheme of the previous frame and the channel combining scheme of the current frame may include:
  • the channel combining scheme of the previous frame is a correlation signal channel combining scheme
  • the channel combining scheme of the current frame is a non-correlated signal channel combining scheme
  • the decoding mode of the current frame is determined to be the correlation signal to non-correlation signal decoding mode, wherein the correlation signal to non-correlation signal decoding mode performs time domain upmix processing using an upmix processing method corresponding to the transition from the correlation signal channel combination scheme to the non-correlated signal channel combination scheme.
  • the non-correlation signal decoding mode performs time domain upmix processing by using an upmix processing method corresponding to the non-correlated signal channel combination scheme.
  • if the channel combining scheme of the previous frame is a non-correlated signal channel combining scheme and the channel combining scheme of the current frame is a correlation signal channel combining scheme, the decoding mode of the current frame is determined to be the non-correlation signal to correlation signal decoding mode.
  • if the channel combination scheme of the previous frame is a correlation signal channel combination scheme and the channel combination scheme of the current frame is a correlation signal channel combination scheme, it is determined that the decoding mode of the current frame is the correlation signal decoding mode, wherein the correlation signal decoding mode performs time domain upmix processing using the upmix processing method corresponding to the correlation signal channel combination scheme.
  • when the decoding device determines that the decoding mode of the current frame is the non-correlation signal decoding mode, it performs time domain upmix processing on the primary and secondary channel decoding signals of the current frame using the time domain upmix processing mode corresponding to the non-correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame.
  • the left and right channel reconstruction signals may be used directly as the left and right channel decoded output signals, or the left and right channel reconstruction signals may be subjected to delay adjustment processing and/or time domain post-processing to obtain the left and right channel decoded output signals.
  • the time domain upmix processing mode corresponding to the non-correlation signal decoding mode is the time domain upmix processing mode corresponding to the non-correlated signal channel combination scheme, where the non-correlated signal channel combination scheme is the channel combination scheme corresponding to the inversion-like signal.
  • the decoding mode of the current frame may be one of a plurality of decoding modes.
  • the decoding mode of the current frame may be one of the following decoding modes: a correlation signal decoding mode, a non-correlation signal decoding mode, a correlation to a non-correlation signal decoding mode, and a non-correlation to correlation signal decoding mode.
  • the decoding mode of the current frame needs to be determined in the above solution, which means that there are multiple possibilities for the decoding mode of the current frame. Compared with the conventional scheme with only one decoding mode, having multiple possible decoding modes for multiple possible scenarios helps achieve a better compatibility and matching effect. Moreover, since a channel combination scheme corresponding to the inversion-like signal is introduced, a more targeted channel combination scheme and decoding mode are available for the case where the stereo signal of the current frame is an inversion-like signal, which in turn is beneficial for improving the decoding quality.
  • when the decoding apparatus determines that the decoding mode of the current frame is the correlation signal decoding mode, it performs time domain upmix processing on the primary and secondary channel decoding signals of the current frame using the time domain upmix processing mode corresponding to the correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame; the time domain upmix processing method corresponding to the correlation signal decoding mode is the time domain upmix processing method corresponding to the correlation signal channel combination scheme.
  • the correlation signal channel combination scheme is the channel combination scheme corresponding to the normal-phase-like signal.
  • when the decoding apparatus determines that the decoding mode of the current frame is the correlation signal to non-correlation signal decoding mode, it performs time domain upmix processing on the primary and secondary channel decoding signals of the current frame using the time domain upmix processing method corresponding to the correlation signal to non-correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame; this time domain upmix processing method is the one corresponding to the transition from the correlation signal channel combination scheme to the non-correlated signal channel combination scheme.
  • when the decoding apparatus determines that the decoding mode of the current frame is the non-correlation signal to correlation signal decoding mode, it performs time domain upmix processing on the primary and secondary channel decoding signals of the current frame using the time domain upmix processing corresponding to the non-correlation signal to correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame; this time domain upmix processing method is the one corresponding to the transition from the non-correlated signal channel combination scheme to the correlation signal channel combination scheme.
  • each decoding mode may also correspond to one or more time domain upmix processing methods.
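Conceptually, each time domain upmix inverts its corresponding downmix. As an illustrative sketch only (the embodiment's exact formulas are not given here), assume a hypothetical downmix primary = a*L + b*R, secondary = b*L - a*R with a*a + b*b = 1; then its inverse is:

```python
# Hedged sketch of a time domain upmix: for the assumed orthonormal
# downmix P = a*L + b*R, S = b*L - a*R (with a*a + b*b = 1), the
# reconstruction is L = a*P + b*S, R = b*P - a*S.
def td_upmix(primary, secondary, a, b):
    left = [a * p + b * s for p, s in zip(primary, secondary)]
    right = [b * p - a * s for p, s in zip(primary, secondary)]
    return left, right
```

Substituting the downmix into the upmix gives a*P + b*S = (a*a + b*b)*L = L, so under this assumption the upmix reconstructs the left and right channel signals exactly (before coding loss).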
  • it can be seen that in the above solution the channel combination scheme of the current frame needs to be determined, which means that there are multiple possibilities for the channel combination scheme of the current frame. Compared with the traditional scheme with only one channel combination scheme, having multiple possible channel combination schemes for multiple possible scenarios is advantageous for obtaining a better compatibility and matching effect.
  • moreover, the decoding mode of the current frame is determined based on the channel combination schemes of the previous frame and the current frame, so the decoding mode of the current frame also has multiple possibilities. Compared with the traditional scheme with only one decoding mode, having multiple possible decoding modes for multiple possible scenarios is beneficial for obtaining a better compatibility and matching effect.
  • the decoding apparatus performs time domain upmix processing on the primary and secondary channel decoding signals of the current frame based on the time domain upmix processing corresponding to the decoding mode of the current frame to obtain a left and right channel reconstruction signal of the current frame.
  • the following describes some specific implementations in which the encoding device determines the channel combination scheme of the current frame.
  • the specific implementation manner in which the encoding device determines the channel combination scheme of the current frame can vary.
  • determining a channel combination scheme of the current frame may include determining a channel combination scheme of the current frame by performing at least one channel combination scheme decision on the current frame.
  • the determining a channel combination scheme of the current frame includes: performing a channel combination scheme initial decision on the current frame to determine an initial channel combination scheme of the current frame. And performing a channel combination scheme correction decision on the current frame based on an initial channel combination scheme of the current frame to determine a channel combination scheme of the current frame.
  • the initial channel combination scheme of the current frame may be directly used as the channel combination scheme of the current frame, that is, the channel combination scheme of the current frame may be: performing channel combination on the current frame. The initial channel combination scheme of the current frame determined by the initial decision of the scheme.
  • performing the initial channel combination scheme decision on the current frame may include: determining the signal positive inversion type of the stereo signal of the current frame by using the left and right channel signals of the current frame; and determining the initial channel combination scheme of the current frame by using the signal positive inversion type of the stereo signal of the current frame and the channel combination scheme of the previous frame.
  • the signal positive inversion type of the stereo signal of the current frame may be a normal-phase-like signal or an inversion-like signal.
  • the signal positive inversion type of the stereo signal of the current frame may be indicated by a signal positive inversion type identifier of the current frame (for example, represented by tmp_SM_flag).
  • for example, when the signal positive inversion type identifier of the current frame takes a value of "1", it indicates that the signal positive inversion type of the stereo signal of the current frame is a normal-phase-like signal; when the identifier takes a value of "0", it indicates that the signal positive inversion type of the stereo signal of the current frame is an inversion-like signal; and vice versa.
  • a channel combining scheme of an audio frame may be indicated by a channel combination scheme identification of the audio frame. For example, when the channel combination scheme identifier of the audio frame takes a value of “0”, the channel combination scheme indicating the audio frame is a correlation signal channel combination scheme. When the channel combination scheme identifier of the audio frame takes a value of “1”, the channel combination scheme indicating the audio frame is a non-correlated signal channel combination scheme, and vice versa.
  • an initial channel combination scheme of an audio frame may be indicated by an initial channel combination scheme identifier of the audio frame (for example, represented by tdm_SM_flag_loc). For example, when the initial channel combination scheme identifier of the audio frame takes a value of "0", the initial channel combination scheme of the audio frame is a correlation signal channel combination scheme; when the identifier takes a value of "1", the initial channel combination scheme of the audio frame is a non-correlated signal channel combination scheme; and vice versa.
  • determining the signal positive inversion type of the stereo signal of the current frame by using the left and right channel signals of the current frame may include: calculating a correlation value xorr between the left and right channel signals of the current frame; if xorr is less than or equal to a first threshold, determining that the signal positive inversion type of the stereo signal of the current frame is a normal-phase-like signal; and if xorr is greater than the first threshold, determining that the signal positive inversion type of the stereo signal of the current frame is an inversion-like signal.
  • in a case where the signal positive inversion type identifier of the current frame is used to indicate the signal positive inversion type of the stereo signal of the current frame, when it is determined that the signal positive inversion type of the stereo signal of the current frame is a normal-phase-like signal, the signal positive inversion type identifier of the current frame may be set to the value indicating a normal-phase-like signal; when it is determined that the signal positive inversion type of the stereo signal of the current frame is an inversion-like signal, the signal positive inversion type identifier of the current frame may be set to the value indicating an inversion-like signal.
  • The value range of the first threshold may be, for example, (0.5, 1.0); for example, the first threshold may be equal to 0.5, 0.85, 0.75, 0.65, or 0.81.
  • When the signal positive and negative phase type identification of an audio frame (for example, the previous frame or the current frame) takes a value of "0", it indicates that the signal positive and negative phase type of the stereo signal of the audio frame is a normal-like phase signal; when the signal positive and negative phase type identification of the audio frame takes a value of "1", it indicates that the signal positive and negative phase type of the stereo signal of the audio frame is an inversion-like signal, and so on.
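For illustration, the xorr-based decision above can be sketched as follows. This is a minimal sketch, not part of the original disclosure: the exact definition of xorr is not given in the text, so a normalized cross-correlation between the left channel and the inverted right channel is assumed (so that inverted-phase content drives xorr up, matching the stated threshold direction), and the flag convention 0/1 follows the description above.

```python
import numpy as np

def signal_phase_type_flag(xl, xr, first_threshold=0.85):
    """Return 0 (normal-like phase signal) or 1 (inversion-like signal)."""
    xl = np.asarray(xl, dtype=float)
    xr = np.asarray(xr, dtype=float)
    # Assumed xorr: normalized correlation of xl against the inverted right channel.
    num = float(np.dot(xl, -xr))
    den = float(np.sqrt(np.dot(xl, xl) * np.dot(xr, xr))) + 1e-12
    xorr = num / den
    # xorr <= first threshold -> normal-like phase; xorr > threshold -> inversion-like.
    return 0 if xorr <= first_threshold else 1
```

With this assumed definition, a pair of identical channels yields xorr near -1 (flag 0), while a pair of mutually inverted channels yields xorr near 1 (flag 1).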
  • The initial channel combination scheme of the current frame may be determined by using the signal positive and negative phase type of the stereo signal of the current frame and the channel combination scheme of the previous frame.
  • the method may include:
  • If the signal positive and negative phase type of the stereo signal of the current frame is a normal-like phase signal and the channel combination scheme of the previous frame is a correlation signal channel combination scheme, the initial channel combination scheme of the current frame is determined to be a correlation signal channel combination scheme.
  • If the signal positive and negative phase type of the stereo signal of the current frame is an inversion-like signal and the channel combination scheme of the previous frame is a non-correlated signal channel combination scheme, the initial channel combination scheme of the current frame is determined to be a non-correlated signal channel combination scheme.
  • If the signal positive and negative phase type of the stereo signal of the current frame is a normal-like phase signal and the channel combination scheme of the previous frame is a non-correlated signal channel combination scheme: when the signal-to-noise ratios of the left and right channel signals of the current frame are both less than a second threshold, the initial channel combination scheme of the current frame is determined to be a correlation signal channel combination scheme; when the signal-to-noise ratio of the left channel signal and/or the right channel signal of the current frame is greater than or equal to the second threshold, the initial channel combination scheme of the current frame is determined to be a non-correlated signal channel combination scheme.
  • If the signal positive and negative phase type of the stereo signal of the current frame is an inversion-like signal and the channel combination scheme of the previous frame is a correlation signal channel combination scheme: when the signal-to-noise ratios of the left and right channel signals of the current frame are both less than the second threshold, the initial channel combination scheme of the current frame is determined to be a non-correlated signal channel combination scheme; when the signal-to-noise ratio of the left channel signal and/or the right channel signal of the current frame is greater than or equal to the second threshold, the initial channel combination scheme of the current frame is determined to be a correlation signal channel combination scheme.
  • the value range of the second threshold may be, for example, [0.8, 1.2], for example, may be equal to 0.8, 0.85, 0.9, 1, 1.1, or 1.18.
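The case analysis above can be summarized in a small decision helper. This is a hedged sketch only: the scheme labels, the flag convention (0 for a normal-like phase signal, 1 for an inversion-like signal), and the function name are illustrative and not from the original text.

```python
def initial_channel_combination_scheme(phase_flag, prev_scheme, snr_l, snr_r,
                                       second_threshold=0.85):
    """Sketch of the initial channel combination scheme decision.

    phase_flag: 0 = normal-like phase signal, 1 = inversion-like signal.
    prev_scheme: "correlated" or "non_correlated" (previous frame's scheme).
    snr_l, snr_r: signal-to-noise ratios of the left/right channel signals.
    """
    if phase_flag == 0 and prev_scheme == "correlated":
        return "correlated"
    if phase_flag == 1 and prev_scheme == "non_correlated":
        return "non_correlated"
    if phase_flag == 0 and prev_scheme == "non_correlated":
        # Both channels' SNR below the second threshold -> follow the phase type.
        if snr_l < second_threshold and snr_r < second_threshold:
            return "correlated"
        return "non_correlated"
    # Remaining case: phase_flag == 1 and prev_scheme == "correlated".
    if snr_l < second_threshold and snr_r < second_threshold:
        return "non_correlated"
    return "correlated"
```

In the low-SNR cases the decision follows the detected phase type; otherwise it keeps the previous frame's scheme, which avoids switching on noisy evidence.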
  • Performing a channel combination scheme correction decision on the current frame based on the initial channel combination scheme of the current frame may include: determining the channel combination scheme of the current frame according to the channel combination scale factor of the previous frame, the signal positive and negative phase type of the stereo signal of the current frame, and the initial channel combination scheme of the current frame.
  • the channel combination scheme identifier of the current frame may be recorded as tdm_SM_flag, and the channel combination scale factor correction identifier of the current frame is recorded as tdm_SM_modi_flag.
  • A channel combination scale factor correction flag value of 0 indicates that correction of the channel combination scale factor is not required, and a value of 1 indicates that correction of the channel combination scale factor is required.
  • The channel combination scale factor correction flag may also use other values to indicate whether correction of the channel combination scale factor is needed.
  • the channel combination scheme correction decision for the current frame based on the initial decision result of the channel combination scheme of the current frame may include:
  • If the channel combination scale factor correction identifier of the previous frame indicates that the channel combination scale factor needs to be corrected, the non-correlated signal channel combination scheme is used as the channel combination scheme of the current frame; if the channel combination scale factor correction identifier of the previous frame indicates that it is not necessary to correct the channel combination scale factor, it is determined whether the current frame satisfies the switching condition, and the channel combination scheme of the current frame is determined based on the determination result of whether the current frame satisfies the switching condition.
  • Determining the channel combination scheme of the current frame based on the determination result of whether the current frame satisfies the switching condition may include:
  • If the channel combination scheme of the previous frame is different from the initial channel combination scheme of the current frame, the current frame satisfies the switching condition, the initial channel combination scheme of the current frame is a correlation signal channel combination scheme, and the channel combination scheme of the previous frame is a non-correlated signal channel combination scheme, the channel combination scheme of the current frame is determined to be a non-correlated signal channel combination scheme.
  • If the channel combination scheme of the previous frame is different from the initial channel combination scheme of the current frame, the current frame satisfies the switching condition, the initial channel combination scheme of the current frame is a non-correlated signal channel combination scheme, and the channel combination scheme of the previous frame is a correlation signal channel combination scheme, then: if the channel combination scale factor of the previous frame is smaller than a first scale factor threshold, the channel combination scheme of the current frame is determined to be a correlation signal channel combination scheme; if the channel combination scale factor of the previous frame is greater than or equal to the first scale factor threshold, the channel combination scheme of the current frame is determined to be a non-correlated signal channel combination scheme.
  • If the channel combination scheme of the first P-1 frame is different from the initial channel combination scheme of the first P frame, the first P frame does not satisfy the switching condition, the current frame satisfies the switching condition, the signal positive and negative phase type of the stereo signal of the current frame is a normal-like phase signal, the initial channel combination scheme of the current frame is a correlation signal channel combination scheme, and the channel combination scheme of the previous frame is a non-correlated signal channel combination scheme, then the channel combination scheme of the current frame is determined to be a correlation signal channel combination scheme.
  • If the channel combination scheme of the first P-1 frame is different from the initial channel combination scheme of the first P frame, the first P frame does not satisfy the switching condition, the current frame satisfies the switching condition, the signal positive and negative phase type of the stereo signal of the current frame is an inversion-like signal, the initial channel combination scheme of the current frame is a non-correlated signal channel combination scheme, and the channel combination scheme of the previous frame is a correlation signal channel combination scheme, then if the channel combination scale factor of the previous frame is smaller than the second scale factor threshold, the channel combination scheme of the current frame is determined to be a correlation signal channel combination scheme.
  • If the channel combination scheme of the first P-1 frame is different from the initial channel combination scheme of the first P frame, the first P frame does not satisfy the switching condition, the current frame satisfies the switching condition, the signal positive and negative phase type of the stereo signal of the current frame is an inversion-like signal, the initial channel combination scheme of the current frame is a non-correlated signal channel combination scheme, and the channel combination scheme of the previous frame is a correlation signal channel combination scheme, then if the channel combination scale factor of the previous frame is greater than or equal to the second scale factor threshold, the channel combination scheme of the current frame is determined to be a non-correlated signal channel combination scheme.
  • P may be an integer greater than 1, for example, P may be equal to 2, 3, 4, 5, 6, or other values.
  • the value range of the first scale factor threshold may be, for example, [0.4, 0.6], for example, may be equal to 0.4, 0.45, 0.5, 0.55, or 0.6.
  • the value range of the second scale factor threshold may be, for example, [0.4, 0.6], for example, may be equal to 0.4, 0.46, 0.5, 0.56, or 0.6.
  • Determining whether the current frame satisfies the switching condition may include determining whether the current frame satisfies the switching condition according to the main channel signal frame type and/or the secondary channel signal frame type of the previous frame.
  • Specifically, determining whether the current frame satisfies the switching condition may include checking, for example, the following conditions:
  • The first condition: the main channel signal frame type of the frame before the previous frame is any one of the following: a VOICED_CLAS frame (a voiced frame whose previous frame is a voiced frame or a voiced start frame), an ONSET frame (a voiced start frame), a SIN_ONSET frame (a start frame of mixed harmonics and noise), an INACTIVE_CLAS frame (an inactive frame), or an AUDIO_CLAS frame (an audio frame), and the main channel signal frame type of the previous frame is an UNVOICED_CLAS frame (a frame with one of several characteristics such as unvoiced speech, silence, noise, or the end of a voiced sound) or a VOICED_TRANSITION frame (a frame after the voiced transition, in which the voiced characteristics are already weak); or, the secondary channel signal frame type of the frame before the previous frame is any one of the following: a VOICED_CLAS frame, an ONSET frame, a SIN_ONSET frame, an INACTIVE_CLAS frame, or an AUDIO_CLAS frame, and the secondary channel signal frame type of the previous frame is an UNVOICED_CLAS frame or a VOICED_TRANSITION frame.
  • The second condition: the original coding modes of the primary channel signal and the secondary channel signal of the previous frame are both not VOICED (the coding type corresponding to a voiced frame).
  • The third condition: as of the previous frame, the number of frames for which the current channel combination scheme has been used continuously is greater than a preset frame number threshold.
  • the value range of the frame number threshold may be, for example, [3, 10], for example, the frame number threshold may be equal to 3, 4, 5, 6, 7, 8, 9, or other values.
  • The fourth condition: the main channel signal frame type of the previous frame is UNVOICED_CLAS, or the secondary channel signal frame type of the previous frame is UNVOICED_CLAS.
  • The fifth condition: the long-term root mean square energy value of the left and right channel signals of the current frame is smaller than an energy threshold.
  • The value range of the energy threshold may be, for example, [300, 500]; for example, the energy threshold may be equal to 300, 400, 410, 451, 482, 500, 415, or other values.
  • The sixth condition: the main channel signal frame type of the previous frame is a music signal, the energy ratio of the low frequency band to the high frequency band of the main channel signal of the previous frame is greater than a first energy ratio threshold, and the energy ratio of the low frequency band to the high frequency band of the secondary channel signal of the previous frame is greater than a second energy ratio threshold.
  • The value range of the first energy ratio threshold may be, for example, [4000, 6000]; for example, the first energy ratio threshold may be equal to 4000, 4500, 5000, 5105, 5200, 6000, 5800, or other values.
  • The value range of the second energy ratio threshold may be, for example, [4000, 6000]; for example, the second energy ratio threshold may be equal to 4000, 4501, 5000, 5105, 5200, 6000, 5800, or other values.
  • The implementation manner of determining whether the current frame satisfies the switching condition may be various, and is not limited to the manners exemplified above.
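As one hedged illustration, the first three conditions can be checked as below. The function name, the frame-type sets, and the choice to require all three conditions simultaneously are assumptions for the sketch; the frame-type labels follow the text.

```python
def satisfies_switching_condition(primary_type_two_frames_ago,
                                  primary_type_prev_frame,
                                  prev_frame_coding_modes,
                                  frames_using_current_scheme,
                                  frame_count_threshold=5):
    """Sketch: evaluate the first, second, and third conditions from the text."""
    onset_like = {"VOICED_CLAS", "ONSET", "SIN_ONSET", "INACTIVE_CLAS", "AUDIO_CLAS"}
    tail_like = {"UNVOICED_CLAS", "VOICED_TRANSITION"}
    # First condition (primary channel variant): an onset-like frame followed by
    # an unvoiced/transition frame suggests a safe point to switch schemes.
    cond1 = (primary_type_two_frames_ago in onset_like
             and primary_type_prev_frame in tail_like)
    # Second condition: neither original coding mode of the previous frame is VOICED.
    cond2 = all(mode != "VOICED" for mode in prev_frame_coding_modes)
    # Third condition: the current scheme has already persisted long enough.
    cond3 = frames_using_current_scheme > frame_count_threshold
    return cond1 and cond2 and cond3
```

The intent of such conditions is to restrict scheme switches to perceptually quiet or transitional moments, so that the downmix change is not audible.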
  • an embodiment of the present application provides an audio encoding method.
  • the related steps of the audio encoding method may be implemented by an encoding device.
  • the method may include:
  • When the coding mode of the current frame is the non-correlation signal coding mode, the time domain downmix processing mode corresponding to the non-correlation signal coding mode is used to perform time domain downmix processing on the left and right channel signals of the current frame, to obtain the primary and secondary channel signals of the current frame.
  • The time domain downmix processing mode corresponding to the non-correlation signal coding mode is the time domain downmix processing mode corresponding to the non-correlated signal channel combination scheme, where the non-correlated signal channel combination scheme is the channel combination scheme corresponding to an inversion-like signal.
  • Performing time domain downmix processing on the left and right channel signals of the current frame by using the time domain downmix processing manner corresponding to the non-correlation signal coding mode to obtain the primary and secondary channel signals of the current frame may include: performing time domain downmix processing on the left and right channel signals of the current frame according to the channel combination scale factor of the non-correlated signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame; or performing time domain downmix processing on the left and right channel signals of the current frame according to the channel combination scale factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  • The channel combination scale factor of the channel combination scheme of an audio frame may be a preset fixed value; it is of course also possible to determine the channel combination scale factor of the audio frame based on the channel combination scheme of the audio frame.
  • a corresponding downmix matrix may be constructed based on a channel combination scale factor of the audio frame, and the left and right channel signals of the current frame are time-domain downmixed by using a downmix matrix corresponding to the channel combination scheme. Processing to obtain the primary and secondary channel signals of the current frame.
  • The left and right channel signals of the current frame are subjected to time domain downmix processing according to the channel combination scale factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  • the delay_com represents coding delay compensation.
  • Alternatively, the left and right channel signals of the current frame are subjected to time domain downmix processing with a transition according to the channel combination scale factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  • fade_in(n) represents a fade-in factor.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) represents the fade-out factor.
  • fade_out(n) can also be a fade-out factor based on other functional relationships of n.
  • NOVA_1 represents the length of the transition process.
  • The value of NOVA_1 can be set according to the specific scenario. NOVA_1 may, for example, be equal to N/3, or NOVA_1 may be another value less than N.
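A minimal sketch of the transition downmix using the fade factors: here fade_in(n) = n / NOVA_1 and fade_out(n) = 1 - fade_in(n) over the first NOVA_1 samples, which is only one possible functional form (the text explicitly allows others), and the 2x2 downmix matrices for the previous and current frames are passed in as arguments.

```python
import numpy as np

def downmix_with_transition(xl, xr, M_prev, M_cur, nova_1):
    """Crossfade from the previous frame's downmix matrix to the current one."""
    N = len(xl)
    y = np.empty(N)   # primary channel signal Y(n)
    x = np.empty(N)   # secondary channel signal X(n)
    for n in range(N):
        fade_in = min(n / nova_1, 1.0)   # assumed linear ramp over NOVA_1 samples
        fade_out = 1.0 - fade_in
        s = np.array([xl[n], xr[n]], dtype=float)
        yp, xp = M_prev @ s              # downmix under the previous frame's scheme
        yc, xc = M_cur @ s               # downmix under the current frame's scheme
        y[n] = fade_out * yp + fade_in * yc
        x[n] = fade_out * xp + fade_in * xc
    return y, x
```

When both matrices are equal (no scheme change), the crossfade is inert and the result is an ordinary matrix downmix.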
  • The left and right channel signals of the current frame are subjected to time domain downmix processing to obtain the primary and secondary channel signals of the current frame.
  • the X L (n) represents the left channel signal of the current frame.
  • the X R (n) represents a right channel signal of the current frame.
  • Y(n) represents the primary channel signal of the current frame obtained by time domain downmix processing; X(n) represents the secondary channel signal of the current frame obtained by time domain downmix processing.
  • delay_com represents coding delay compensation
  • M 11 represents a downmix matrix corresponding to the correlation signal channel combination scheme of the previous frame, and M 11 is constructed based on a channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
  • M 12 represents a downmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, and M 12 is constructed based on the channel combination scale factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  • the M 22 represents a downmix matrix corresponding to the non-correlation signal channel combination scheme of the current frame, and the M 22 is constructed based on a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame. .
  • the M 21 represents a downmix matrix corresponding to the correlation signal channel combination scheme of the current frame, and the M 21 is constructed based on a channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • the M 21 may exist in various forms, for example:
  • the ratio represents a channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • the M 22 may exist in various forms, for example:
  • ratio_SM represents a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • the M 12 may exist in various forms, for example:
  • α1_pre = tdm_last_ratio_SM
  • α2_pre = 1 - tdm_last_ratio_SM
  • tdm_last_ratio_SM represents the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
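For illustration, one plausible pair of matrix forms built from the scale factors is shown below. The text states that α1 = ratio_SM and α2 = 1 - ratio_SM for the non-correlated case; the exact sign layout of each matrix is an assumption here, since the document notes that M 21 and M 22 may exist in various forms.

```python
import numpy as np

def downmix_matrix_correlated(ratio):
    # One possible form of M 21, built from the correlated-scheme scale factor.
    return np.array([[ratio, 1.0 - ratio],
                     [1.0 - ratio, -ratio]])

def downmix_matrix_non_correlated(ratio_SM):
    # One possible form of M 22, with alpha1 = ratio_SM, alpha2 = 1 - ratio_SM;
    # the minus sign emphasizes the channel difference for inversion-like signals.
    a1, a2 = ratio_SM, 1.0 - ratio_SM
    return np.array([[a1, -a2],
                     [a2, a1]])
```

Applying either matrix to the column vector (X_L(n), X_R(n)) yields the pair (Y(n), X(n)) for that sample.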
  • The left and right channel signals of the current frame may specifically be the original left and right channel signals of the current frame (the original left and right channel signals are left and right channel signals that have not undergone time-domain preprocessing; for example, they may be the left and right channel signals as sampled); or they may be the time-domain preprocessed left and right channel signals of the current frame; or they may be the left and right channel signals of the current frame after delay alignment processing.
  • The former notation represents the original left and right channel signals of the current frame; the latter represents the left and right channel signals of the current frame after delay alignment processing.
  • The non-correlation signal decoding mode scenario is exemplified below.
  • an embodiment of the present application further provides an audio decoding method.
  • the related steps of the audio decoding method may be implemented by a decoding device.
  • the method may include:
  • There is no necessary order between step 501 and step 502.
  • When it is determined that the decoding mode of the current frame is the non-correlation signal decoding mode, a time domain upmix processing mode corresponding to the non-correlation signal decoding mode is used to perform time domain upmix processing on the primary and secondary channel decoded signals of the current frame, to obtain the left and right channel reconstruction signals of the current frame.
  • The left and right channel reconstruction signals may be used directly as the left and right channel decoded signals, or the left and right channel reconstruction signals may be subjected to delay adjustment processing and/or time domain post-processing to obtain the left and right channel decoded signals.
  • The time domain upmix processing mode corresponding to the non-correlation signal decoding mode is the time domain upmix processing mode corresponding to the non-correlated signal channel combination scheme, where the non-correlated signal channel combination scheme is the channel combination scheme corresponding to an inversion-like signal.
  • the decoding mode of the current frame may be one of a plurality of decoding modes.
  • the decoding mode of the current frame may be one of the following decoding modes: a correlation signal decoding mode, a non-correlation signal decoding mode, a correlation to a non-correlation signal decoding mode, and a non-correlation to correlation signal decoding mode.
  • The decoding mode of the current frame needs to be determined in the above solution, which means that there are multiple possibilities for the decoding mode of the current frame. This differs from the conventional scheme with only one decoding mode, and having multiple possible decoding modes for multiple possible scenarios helps achieve a better match between mode and scenario. Moreover, since a channel combination scheme corresponding to an inversion-like signal is introduced, there is a relatively more targeted channel combination scheme and decoding mode for the case where the stereo signal of the current frame is an inversion-like signal, which in turn is beneficial to improving decoding quality.
  • the method may further include:
  • When it is determined that the decoding mode of the current frame is the correlation signal decoding mode, the time domain upmix processing mode corresponding to the correlation signal decoding mode is used to perform time domain upmix processing on the primary and secondary channel decoded signals of the current frame, to obtain the left and right channel reconstruction signals of the current frame. The time domain upmix processing method corresponding to the correlation signal decoding mode is the time domain upmix processing method corresponding to the correlation signal channel combination scheme, where the correlation signal channel combination scheme is the channel combination scheme corresponding to a normal-like phase signal.
  • The method may further include: when determining that the decoding mode of the current frame is the correlation to non-correlation signal decoding mode, performing time domain upmix processing on the primary and secondary channel decoded signals of the current frame by using the time domain upmix processing method corresponding to the correlation to non-correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame. The time domain upmix processing mode corresponding to the correlation to non-correlation signal decoding mode is the time domain upmix processing method corresponding to switching from the correlation signal channel combination scheme to the non-correlated signal channel combination scheme.
  • The method may further include: when determining that the decoding mode of the current frame is the non-correlation to correlation signal decoding mode, performing time domain upmix processing on the primary and secondary channel decoded signals of the current frame by using the time domain upmix processing method corresponding to the non-correlation to correlation signal decoding mode, to obtain the left and right channel reconstruction signals of the current frame. The time domain upmix processing mode corresponding to the non-correlation to correlation signal decoding mode is the time domain upmix processing method corresponding to switching from the non-correlated signal channel combination scheme to the correlation signal channel combination scheme.
  • each decoding mode may also correspond to one or more time domain upmix processing methods.
  • Performing time domain upmix processing on the primary and secondary channel decoded signals of the current frame by using the time domain upmix processing manner corresponding to the non-correlation signal decoding mode may include: constructing a corresponding upmix matrix based on the channel combination scale factor of the audio frame, and performing time domain upmix processing on the primary and secondary channel decoded signals of the current frame by using the upmix matrix corresponding to the channel combination scheme, to obtain the left and right channel reconstruction signals of the current frame.
  • For example, time domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame according to the channel combination scale factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstruction signals of the current frame.
  • the delay_com represents coding delay compensation.
  • Alternatively, time domain upmix processing with a transition is performed on the primary and secondary channel decoded signals of the current frame according to the channel combination scale factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstruction signals of the current frame.
  • the NOVA_1 represents the length of the transition process.
  • fade_in(n) represents a fade-in factor.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) represents the fade-out factor.
  • fade_out(n) can also be a fade-out factor based on other functional relationships of n.
  • NOVA_1 represents the length of the transition process.
  • The value of NOVA_1 can be set according to the specific scenario. NOVA_1 may, for example, be equal to N/3, or NOVA_1 may be another value less than N.
  • Time domain upmix processing may also be performed according to the channel combination scale factor of the correlation signal channel combination scheme of the current frame, to obtain the left and right channel reconstruction signals of the current frame.
  • the upmixing_delay indicates decoding delay compensation
  • The upmix matrix corresponding to the correlation signal channel combination scheme of the previous frame is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
  • The upmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame is constructed based on the channel combination scale factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  • The upmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame is constructed based on the channel combination scale factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  • The upmix matrix corresponding to the correlation signal channel combination scheme of the current frame is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • α1 = ratio_SM
  • α2 = 1 - ratio_SM
  • the ratio_SM represents a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • α1_pre = tdm_last_ratio_SM
  • α2_pre = 1 - tdm_last_ratio_SM
  • tdm_last_ratio_SM represents the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • the ratio represents a channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
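Since the upmix reconstructs the left and right channels from the primary and secondary channel decoded signals, a 2x2 upmix matrix can, for illustration, be taken as the inverse of the corresponding downmix matrix. This is an assumption for the sketch: the document only states that each upmix matrix is constructed from the same channel combination scale factor as its downmix counterpart.

```python
import numpy as np

def upmix_matrix(M_down):
    """Illustrative upmix matrix: invert the 2x2 downmix matrix."""
    return np.linalg.inv(M_down)

def upmix(y, x, M_up):
    """Reconstruct left/right channel signals from primary/secondary signals."""
    s = M_up @ np.vstack([y, x])
    return s[0], s[1]   # left and right channel reconstruction signals
```

A round trip through an invertible downmix matrix and this upmix recovers the original left and right channel signals exactly (before quantization, which the real codec of course introduces).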
  • The following is an example of the correlation signal to non-correlation signal coding mode and the non-correlation signal to correlation signal coding mode scenarios.
  • The time domain downmix processing method corresponding to the correlation signal to non-correlation signal coding mode and the non-correlation signal to correlation signal coding mode is, for example, a segmented time domain downmix processing mode.
  • an embodiment of the present application provides an audio encoding method, where the related steps of the audio encoding method may be implemented by an encoding device, and the method may specifically include:
  • For example, the coding mode of the current frame may be determined to be the correlation signal to non-correlation signal coding mode or the non-correlation signal to correlation signal coding mode.
  • When the coding mode of the current frame is the correlation signal to non-correlation signal coding mode or the non-correlation signal to correlation signal coding mode, segmented time domain downmix processing is performed on the left and right channel signals of the current frame according to the channel combination schemes of the current frame and the previous frame.
  • For example, if the channel combination scheme of the previous frame is a correlation signal channel combination scheme and the channel combination scheme of the current frame is a non-correlated signal channel combination scheme, the coding mode of the current frame can be determined to be the correlation signal to non-correlation signal coding mode.
  • If the channel combination scheme of the previous frame is a non-correlated signal channel combination scheme and the channel combination scheme of the current frame is a correlation signal channel combination scheme, the coding mode of the current frame can be determined to be the non-correlation signal to correlation signal coding mode. And so on.
  • The segmented time domain downmix processing can be understood as dividing the left and right channel signals of the current frame into at least two segments and performing time domain downmix processing on each segment using a different time domain downmix processing method. It can be appreciated that, relative to non-segmented time domain downmix processing, segmented time domain downmix processing makes it more likely to obtain a smoother transition when the channel combination scheme changes between adjacent frames.
  • The channel combination scheme of the current frame needs to be determined, which means that there are multiple possibilities for the channel combination scheme of the current frame. Compared with the traditional scheme with only one channel combination scheme, having multiple possible channel combination schemes for multiple possible scenes is advantageous for obtaining a better compatible matching effect.
  • Since a mechanism for performing segmented time domain downmix processing on the left and right channel signals of the current frame is introduced for the case where the channel combination schemes of the current frame and the previous frame differ, this segmented time domain downmix processing mechanism is beneficial to achieving a smooth transition between channel combination schemes, thereby facilitating improved encoding quality.
  • the channel combination scheme of the previous frame may be, for example, a correlation signal channel combination scheme or a non-correlated signal channel combination scheme.
  • the channel combination scheme of the current frame may be a correlation signal channel combination scheme or a non-correlated signal channel combination scheme.
  • Assume the channel combination scheme of the previous frame is a correlation signal channel combination scheme and the channel combination scheme of the current frame is a non-correlated signal channel combination scheme. The left and right channel signals of the current frame include a left and right channel signal start segment, a left and right channel signal intermediate segment, and a left and right channel signal end segment; correspondingly, the primary and secondary channel signals of the current frame include a primary and secondary channel signal start segment, a primary and secondary channel signal intermediate segment, and a primary and secondary channel signal end segment.
  • the signal can include:
  • the segment Using the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame and the time domain downmix processing method corresponding to the correlation signal channel combination scheme, starting the left and right channel signals of the current frame The segment performs time domain downmix processing to obtain a start segment of the primary and secondary channel signals of the current frame;
  • the middle segment of the left and right channel signals of the current frame Performing time domain downmix processing to obtain a first primary and secondary channel signal intermediate segment; using a current frame non-correlated signal channel combination scheme corresponding to a channel combination scale factor and a non-correlated signal channel combination scheme corresponding to a time domain a downmix processing manner, performing time domain downmix processing on the middle segment of the left and right channel signals of the current frame to obtain a second primary and secondary channel signal intermediate segment; and the first primary and secondary channel signal intermediate segments and the The middle segment of the second primary and secondary channel signals is subjected to weighted summation processing to obtain an intermediate segment of the primary and secondary channel signals of the current frame.
  • the lengths of the left and right channel signal start segments, the left and right channel signal intermediate segments, and the left and right channel signal end segments of the current frame may be set as needed.
  • the lengths of the left and right channel signal start segments, the left and right channel signal intermediate segments, and the left and right channel signal end segments of the current frame may be equal, partially equal, or unequal to each other.
  • the length of the primary and secondary channel signal start segments, the primary and secondary channel signal intermediate segments, and the primary and secondary channel signal end segments of the current frame may be set as needed.
  • the lengths of the primary and secondary channel signal start segments, the primary and secondary channel signal intermediate segments, and the primary and secondary channel signal end segments of the current frame may be equal, partially equal, or unequal to each other.
  • When the first primary and secondary channel signal middle segment and the second primary and secondary channel signal middle segment are subjected to weighted summation, the weighting coefficient corresponding to the first primary and secondary channel signal middle segment may be equal or unequal to the weighting coefficient corresponding to the second primary and secondary channel signal middle segment.
  • For example, the weighting coefficient corresponding to the first primary and secondary channel signal middle segment is a fade-out factor, and the weighting coefficient corresponding to the second primary and secondary channel signal middle segment is a fade-in factor.
  • X11(n) represents the primary channel signal start segment of the current frame, and Y11(n) represents the secondary channel signal start segment of the current frame.
  • X31(n) represents the primary channel signal end segment of the current frame, and Y31(n) represents the secondary channel signal end segment of the current frame.
  • X21(n) represents the primary channel signal middle segment of the current frame, and Y21(n) represents the secondary channel signal middle segment of the current frame.
  • X(n) represents the primary channel signal of the current frame, and Y(n) represents the secondary channel signal of the current frame.
  • fade_in(n) represents a fade-in factor
  • fade_out(n) represents a fade-out factor
  • the sum of fade_in(n) and fade_out(n) is 1.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) can also be a fade-out factor based on other functional relationships of n.
  • n = 0, 1, ..., N−1, where 0 < N1 < N2 < N−1.
  • N1 is, for example, equal to 100, 107, 120, 150, or another value.
  • N2 is, for example, equal to 180, 187, 200, 203, or another value.
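The constraint that fade_in(n) and fade_out(n) sum to 1 over the middle segment can be illustrated with a short sketch. The linear ramp below is only one plausible choice (the text allows other functional relationships of n), and the function name `fade_factors` is a hypothetical placeholder:

```python
# Illustrative only: a linear fade over the middle segment [N1, N2).
# The text only requires fade_in(n) + fade_out(n) == 1 for each n;
# other functions of n are equally valid.
def fade_factors(N1, N2):
    """Return (fade_in, fade_out) values for n = N1, ..., N2 - 1."""
    length = N2 - N1
    fade_in = [(n - N1) / length for n in range(N1, N2)]
    fade_out = [1.0 - f for f in fade_in]
    return fade_in, fade_out
```

With N1 = 100 and N2 = 180, fade_in ramps from 0 toward 1 across the middle segment while fade_out ramps down correspondingly, so the two downmix results cross-fade without a level jump.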
  • X211(n) represents the first primary channel signal middle segment of the current frame, and Y211(n) represents the first secondary channel signal middle segment of the current frame.
  • X212(n) represents the second primary channel signal middle segment of the current frame, and Y212(n) represents the second secondary channel signal middle segment of the current frame.
  • XL(n) represents the left channel signal of the current frame, and XR(n) represents the right channel signal of the current frame.
  • M11 represents the downmix matrix corresponding to the correlation signal channel combination scheme of the previous frame, where M11 is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
  • M22 represents the downmix matrix corresponding to the non-correlation signal channel combination scheme of the current frame, where M22 is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • M22 can have many possible forms, where, for example:
  • α1 = ratio_SM
  • α2 = 1 − ratio_SM
  • ratio_SM represents the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • M11 can have many possible forms, where, for example:
  • tdm_last_ratio represents the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
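The segmented downmix described above (start segment under the previous frame's scheme, end segment under the current frame's scheme, middle segment as a weighted sum of both) can be sketched as follows. This is a hypothetical illustration: the function name and the idea of passing the 2×2 downmix matrices in as arguments `M_prev` and `M_cur` (e.g. M11 and M22) are assumptions, and the matrices' internal construction from the channel combination scale factors is not reproduced here:

```python
import numpy as np

def segmented_downmix(xl, xr, M_prev, M_cur, n1, n2):
    """Hypothetical sketch of segmented time domain downmix for one frame.

    xl, xr : left/right channel samples of the current frame (length N)
    M_prev : 2x2 downmix matrix of the previous frame's scheme (e.g. M11)
    M_cur  : 2x2 downmix matrix of the current frame's scheme (e.g. M22)
    n1, n2 : segment boundaries with 0 < n1 < n2 < N - 1
    """
    lr = np.vstack([xl, xr])                 # 2 x N stereo input
    prev_mix = M_prev @ lr                   # downmix under the previous scheme
    cur_mix = M_cur @ lr                     # downmix under the current scheme
    n = np.arange(lr.shape[1])
    # 0 on the start segment, linear ramp on the middle segment, 1 on the end
    fade_in = np.clip((n - n1) / (n2 - n1), 0.0, 1.0)
    out = (1.0 - fade_in) * prev_mix + fade_in * cur_mix
    return out[0], out[1]                    # primary X(n), secondary Y(n)
```

Because fade_in is 0 before n1 and 1 from n2 onward, the single weighted sum reduces to "previous scheme only" on the start segment and "current scheme only" on the end segment, matching the three-segment description; the same sketch covers the reverse scheme change by swapping the matrices passed in.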
  • For another example, in the case where the channel combination scheme of the previous frame is the non-correlation signal channel combination scheme and the channel combination scheme of the current frame is the correlation signal channel combination scheme:
  • the left and right channel signals of the current frame include a left and right channel signal start segment, a left and right channel signal middle segment, and a left and right channel signal end segment;
  • the primary and secondary channel signals of the current frame include a primary and secondary channel signal start segment, a primary and secondary channel signal middle segment, and a primary and secondary channel signal end segment.
  • Performing segmented time domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame can then include:
  • performing time domain downmix processing on the left and right channel signal start segment of the current frame by using the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame and the time domain downmix processing method corresponding to that scheme, to obtain the primary and secondary channel signal start segment of the current frame;
  • performing time domain downmix processing on the left and right channel signal end segment of the current frame by using the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame and the time domain downmix processing method corresponding to that scheme, to obtain the primary and secondary channel signal end segment of the current frame;
  • performing time domain downmix processing on the left and right channel signal middle segment of the current frame by using the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame and the corresponding time domain downmix processing method, to obtain a third primary and secondary channel signal middle segment; performing time domain downmix processing on the left and right channel signal middle segment of the current frame by using the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame and the corresponding time domain downmix processing method, to obtain a fourth primary and secondary channel signal middle segment; and performing weighted summation on the third primary and secondary channel signal middle segment and the fourth primary and secondary channel signal middle segment to obtain the primary and secondary channel signal middle segment of the current frame.
  • When the third primary and secondary channel signal middle segment and the fourth primary and secondary channel signal middle segment are subjected to weighted summation, the weighting coefficient corresponding to the third primary and secondary channel signal middle segment may be equal or unequal to the weighting coefficient corresponding to the fourth primary and secondary channel signal middle segment.
  • For example, the weighting coefficient corresponding to the third primary and secondary channel signal middle segment is a fade-out factor, and the weighting coefficient corresponding to the fourth primary and secondary channel signal middle segment is a fade-in factor.
  • X12(n) represents the primary channel signal start segment of the current frame, and Y12(n) represents the secondary channel signal start segment of the current frame.
  • X32(n) represents the primary channel signal end segment of the current frame, and Y32(n) represents the secondary channel signal end segment of the current frame.
  • X22(n) represents the primary channel signal middle segment of the current frame, and Y22(n) represents the secondary channel signal middle segment of the current frame.
  • X(n) represents the primary channel signal of the current frame, and Y(n) represents the secondary channel signal of the current frame.
  • fade_in(n) represents a fade-in factor.
  • fade_out(n) represents a fade-out factor
  • the sum of fade_in(n) and fade_out(n) is 1.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) can also be a fade-out factor based on other functional relationships of n.
  • N3 is, for example, equal to 101, 107, 120, 150, or another value.
  • N4 is, for example, equal to 181, 187, 200, 205, or another value.
  • X221(n) represents the third primary channel signal middle segment of the current frame, and Y221(n) represents the third secondary channel signal middle segment of the current frame.
  • X222(n) represents the fourth primary channel signal middle segment of the current frame, and Y222(n) represents the fourth secondary channel signal middle segment of the current frame.
  • XL(n) represents the left channel signal of the current frame, and XR(n) represents the right channel signal of the current frame.
  • M12 represents the downmix matrix corresponding to the non-correlation signal channel combination scheme of the previous frame, where M12 is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • M21 represents the downmix matrix corresponding to the correlation signal channel combination scheme of the current frame, where M21 is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • M12 can have many possible forms, where, for example:
  • α1_pre = tdm_last_ratio_SM
  • α2_pre = 1 − tdm_last_ratio_SM
  • tdm_last_ratio_SM represents the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • M21 can have many possible forms, where, for example:
  • ratio represents the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • The left and right channel signals of the current frame may be, for example, the original left and right channel signals of the current frame, the time domain preprocessed left and right channel signals, or the delay-aligned left and right channel signals.
  • xL(n) represents the original left channel signal of the current frame (the original left channel signal is a left channel signal that has not been time domain preprocessed), and xR(n) represents the original right channel signal of the current frame (the original right channel signal is a right channel signal that has not been time domain preprocessed).
  • xL_HP(n) represents the time domain preprocessed left channel signal of the current frame, and xR_HP(n) represents the time domain preprocessed right channel signal of the current frame.
  • x'L(n) represents the delay-aligned left channel signal of the current frame, and x'R(n) represents the delay-aligned right channel signal of the current frame.
  • The segmented time domain downmix processing methods exemplified above are not the only possible implementations; other segmented time domain downmix processing methods may also be adopted in practical applications.
  • The correlation-signal-to-non-correlation-signal decoding mode and the non-correlation-signal-to-correlation-signal decoding mode scenarios are exemplified below.
  • The time domain upmix processing method corresponding to each of these decoding modes is, for example, a segmented time domain upmix processing method.
  • an embodiment of the present application provides an audio decoding method.
  • the related steps of the audio decoding method may be implemented by a decoding device.
  • the method may specifically include:
  • Steps 701 and 702 are not subject to a strict order of execution.
  • the channel combination scheme of the current frame is one of a plurality of channel combination schemes.
  • the plurality of channel combination schemes include a non-correlated signal channel combination scheme and a correlation signal channel combination scheme.
  • The correlation signal channel combination scheme is the channel combination scheme corresponding to a near in-phase signal, and the non-correlation signal channel combination scheme is the channel combination scheme corresponding to a near out-of-phase signal. It can be understood that the channel combination scheme corresponding to a near in-phase signal is applicable to near in-phase signals, and the channel combination scheme corresponding to a near out-of-phase signal is applicable to near out-of-phase signals.
  • Segmented time domain upmix processing can be understood as dividing the primary and secondary channel decoding signals of the current frame into at least two segments and performing time domain upmix processing on each segment using a different time domain upmix processing method. It can be appreciated that, relative to non-segmented time domain upmix processing, segmented time domain upmix processing makes it more likely to obtain a smoother transition when the channel combination scheme changes between adjacent frames.
  • The channel combination scheme of the current frame needs to be determined, which means that there are multiple possibilities for the channel combination scheme of the current frame, in contrast to the traditional approach of using only one channel combination scheme. Multiple possible channel combination schemes matched to multiple possible signal scenarios are advantageous for obtaining a better match between scheme and scenario.
  • Moreover, a mechanism for performing segmented time domain upmix processing on the primary and secondary channel decoding signals of the current frame is introduced for the case where the channel combination schemes of the current frame and the previous frame differ. This segmented time domain upmix processing mechanism is beneficial for achieving a smooth transition between channel combination schemes, thereby facilitating improved decoding quality.
  • the channel combination scheme of the previous frame may be, for example, a correlation signal channel combination scheme or a non-correlated signal channel combination scheme.
  • the channel combination scheme of the current frame may be a correlation signal channel combination scheme or a non-correlated signal channel combination scheme.
  • For example, in the case where the channel combination scheme of the previous frame is the correlation signal channel combination scheme and the channel combination scheme of the current frame is the non-correlation signal channel combination scheme:
  • the left and right channel reconstruction signals of the current frame include a left and right channel reconstruction signal start segment, a left and right channel reconstruction signal middle segment, and a left and right channel reconstruction signal end segment;
  • the primary and secondary channel decoding signals of the current frame include a primary and secondary channel decoding signal start segment, a primary and secondary channel decoding signal middle segment, and a primary and secondary channel decoding signal end segment.
  • Performing segmented time domain upmix processing on the primary and secondary channel decoding signals of the current frame according to the channel combination schemes of the current frame and the previous frame to obtain the left and right channel reconstruction signals of the current frame can then include:
  • performing time domain upmix processing on the primary and secondary channel decoding signal start segment of the current frame by using the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame and the time domain upmix processing method corresponding to that scheme, to obtain the left and right channel reconstruction signal start segment of the current frame;
  • performing time domain upmix processing on the primary and secondary channel decoding signal end segment of the current frame by using the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame and the time domain upmix processing method corresponding to that scheme, to obtain the left and right channel reconstruction signal end segment of the current frame;
  • performing time domain upmix processing on the primary and secondary channel decoding signal middle segment of the current frame by using the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame and the corresponding time domain upmix processing method, to obtain a first left and right channel reconstruction signal middle segment; performing time domain upmix processing on the primary and secondary channel decoding signal middle segment of the current frame by using the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame and the corresponding time domain upmix processing method, to obtain a second left and right channel reconstruction signal middle segment; and performing weighted summation on the first left and right channel reconstruction signal middle segment and the second left and right channel reconstruction signal middle segment to obtain the left and right channel reconstruction signal middle segment of the current frame.
  • the lengths of the left and right channel reconstruction signal start segments, the left and right channel reconstruction signal intermediate segments, and the left and right channel reconstruction signal end segments of the current frame may be set as needed.
  • the lengths of the left and right channel reconstruction signal start segments, the left and right channel reconstruction signal intermediate segments, and the left and right channel reconstruction signal end segments of the current frame may be equal, partially equal, or unequal to each other.
  • the length of the primary and secondary channel decoding signal start segments, the primary and secondary channel decoding signal intermediate segments, and the primary and secondary channel decoding signal end segments of the current frame may be set as needed.
  • the lengths of the primary and secondary channel decoding signal start segments, the primary and secondary channel decoding signal intermediate segments, and the primary and secondary channel decoding signal end segments of the current frame may be equal, partially equal, or unequal to each other.
  • The left and right channel reconstruction signals may serve directly as the left and right channel decoding signals, or the left and right channel decoding signals may be obtained by performing delay adjustment processing and/or time domain post-processing on the left and right channel reconstruction signals.
  • When the first left and right channel reconstruction signal middle segment and the second left and right channel reconstruction signal middle segment are subjected to weighted summation, the weighting coefficient corresponding to the first left and right channel reconstruction signal middle segment may be equal or unequal to the weighting coefficient corresponding to the second left and right channel reconstruction signal middle segment.
  • For example, the weighting coefficient corresponding to the first left and right channel reconstruction signal middle segment is a fade-out factor, and the weighting coefficient corresponding to the second left and right channel reconstruction signal middle segment is a fade-in factor.
  • The symbols in the corresponding formulas represent, respectively, the start, middle, and end segments of the left channel reconstruction signal of the current frame; the start, middle, and end segments of the right channel reconstruction signal of the current frame; and the left and right channel reconstruction signals of the current frame.
  • fade_in(n) represents a fade-in factor
  • fade_out(n) represents a fade-out factor
  • the sum of fade_in(n) and fade_out(n) is 1.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) can also be a fade-in factor based on other functional relationships of n.
  • The upmix matrix corresponding to the correlation signal channel combination scheme of the previous frame is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
  • The upmix matrix corresponding to the non-correlation signal channel combination scheme of the current frame is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • α1 = ratio_SM
  • α2 = 1 − ratio_SM
  • the ratio_SM represents a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • the tdm_last_ratio represents a channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
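The decoder-side counterpart can be sketched with an explicit three-segment split. Again this is a hypothetical illustration: the function name and the 2×2 upmix matrices `Mu_prev` and `Mu_cur` passed as arguments are assumptions, standing in for the upmix matrices constructed from the previous and current frames' channel combination scale factors:

```python
import numpy as np

def segmented_upmix(x, y, Mu_prev, Mu_cur, n1, n2):
    """Hypothetical sketch of segmented time domain upmix at the decoder.

    x, y    : primary/secondary channel decoding signals of the current frame
    Mu_prev : 2x2 upmix matrix of the previous frame's scheme
    Mu_cur  : 2x2 upmix matrix of the current frame's scheme
    n1, n2  : segment boundaries with 0 < n1 < n2 < N - 1
    """
    pq = np.vstack([x, y]).astype(float)     # 2 x N downmixed input
    rec = np.empty_like(pq)
    rec[:, :n1] = Mu_prev @ pq[:, :n1]       # start: previous scheme only
    rec[:, n2:] = Mu_cur @ pq[:, n2:]        # end: current scheme only
    mid = slice(n1, n2)
    fade_in = (np.arange(n1, n2) - n1) / (n2 - n1)
    first = Mu_prev @ pq[:, mid]             # first L/R reconstruction middle
    second = Mu_cur @ pq[:, mid]             # second L/R reconstruction middle
    rec[:, mid] = (1.0 - fade_in) * first + fade_in * second
    return rec[0], rec[1]                    # left/right channel reconstruction
```

The middle segment is the weighted summation of the two candidate reconstructions, with fade-out weighting the previous scheme and fade-in weighting the current one, mirroring the encoder-side segmented downmix.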
  • For another example, in the case where the channel combination scheme of the previous frame is the non-correlation signal channel combination scheme and the channel combination scheme of the current frame is the correlation signal channel combination scheme:
  • the left and right channel reconstruction signals of the current frame include a left and right channel reconstruction signal start segment, a left and right channel reconstruction signal middle segment, and a left and right channel reconstruction signal end segment;
  • the primary and secondary channel decoding signals of the current frame include a primary and secondary channel decoding signal start segment, a primary and secondary channel decoding signal middle segment, and a primary and secondary channel decoding signal end segment.
  • Performing segmented time domain upmix processing on the primary and secondary channel decoding signals of the current frame according to the channel combination schemes of the current frame and the previous frame to obtain the left and right channel reconstruction signals of the current frame can then include:
  • performing time domain upmix processing on the primary and secondary channel decoding signal start segment of the current frame by using the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame and the time domain upmix processing method corresponding to that scheme, to obtain the left and right channel reconstruction signal start segment of the current frame;
  • performing time domain upmix processing on the primary and secondary channel decoding signal end segment of the current frame by using the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame and the time domain upmix processing method corresponding to that scheme, to obtain the left and right channel reconstruction signal end segment of the current frame;
  • performing time domain upmix processing on the primary and secondary channel decoding signal middle segment of the current frame by using the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame and the corresponding time domain upmix processing method, to obtain a third left and right channel reconstruction signal middle segment; performing time domain upmix processing on the primary and secondary channel decoding signal middle segment of the current frame by using the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame and the corresponding time domain upmix processing method, to obtain a fourth left and right channel reconstruction signal middle segment; and performing weighted summation on the third left and right channel reconstruction signal middle segment and the fourth left and right channel reconstruction signal middle segment to obtain the left and right channel reconstruction signal middle segment of the current frame.
  • When the third left and right channel reconstruction signal middle segment and the fourth left and right channel reconstruction signal middle segment are subjected to weighted summation, the weighting coefficient corresponding to the third left and right channel reconstruction signal middle segment may be equal or unequal to the weighting coefficient corresponding to the fourth left and right channel reconstruction signal middle segment.
  • For example, the weighting coefficient corresponding to the third left and right channel reconstruction signal middle segment is a fade-out factor, and the weighting coefficient corresponding to the fourth left and right channel reconstruction signal middle segment is a fade-in factor.
  • The symbols in the corresponding formulas represent, respectively, the start, middle, and end segments of the left channel reconstruction signal of the current frame; the start, middle, and end segments of the right channel reconstruction signal of the current frame; and the left and right channel reconstruction signals of the current frame.
  • fade_in(n) represents a fade-in factor.
  • fade_out(n) represents a fade-out factor
  • the sum of fade_in(n) and fade_out(n) is 1.
  • fade_in(n) can also be a fade-in factor based on other functional relationships of n.
  • fade_out(n) can also be a fade-out factor based on other functional relationships of n.
  • N3 is, for example, equal to 101, 107, 120, 150, or another value.
  • N4 is, for example, equal to 181, 187, 200, 205, or another value.
  • The symbols in the corresponding formulas represent, respectively, the third left and right channel reconstruction signal middle segments of the current frame and the fourth left and right channel reconstruction signal middle segments of the current frame.
  • The upmix matrix corresponding to the non-correlation signal channel combination scheme of the previous frame is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • The upmix matrix corresponding to the correlation signal channel combination scheme of the current frame is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • α1_pre = tdm_last_ratio_SM
  • α2_pre = 1 − tdm_last_ratio_SM
  • tdm_last_ratio_SM represents the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
  • the ratio represents a channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • The stereo parameters of the current frame may be fixed values, or may be determined based on the channel combination scheme of the current frame (for example, the correlation signal channel combination scheme or the non-correlation signal channel combination scheme).
  • a method for determining a time domain stereo parameter is exemplified.
  • the related steps of the method for determining a time domain stereo parameter may be implemented by an encoding device.
  • the method may specifically include:
  • Determine the time domain stereo parameter of the current frame according to the channel combination scheme of the current frame, where the time domain stereo parameter includes at least one of a channel combination scale factor and an inter-channel delay difference.
  • the channel combination scheme of the current frame is one of a plurality of channel combination schemes.
  • the plurality of channel combination schemes include a non-correlated signal channel combination scheme and a correlation signal channel combination scheme.
  • The correlation signal channel combination scheme is the channel combination scheme corresponding to a near in-phase signal, and the non-correlation signal channel combination scheme is the channel combination scheme corresponding to a near out-of-phase signal. It can be understood that the channel combination scheme corresponding to a near in-phase signal is applicable to near in-phase signals, and the channel combination scheme corresponding to a near out-of-phase signal is applicable to near out-of-phase signals.
  • In a case where the channel combination scheme of the current frame is determined to be the correlation signal channel combination scheme, the time domain stereo parameter of the current frame is the time domain stereo parameter corresponding to the correlation signal channel combination scheme of the current frame; in a case where the channel combination scheme of the current frame is determined to be the non-correlation signal channel combination scheme, the time domain stereo parameter of the current frame is the time domain stereo parameter corresponding to the non-correlation signal channel combination scheme of the current frame.
  • The channel combination scheme of the current frame needs to be determined, which means that there are multiple possibilities for the channel combination scheme of the current frame, in contrast to the traditional approach of using only one channel combination scheme. Multiple possible channel combination schemes matched to multiple possible signal scenarios are advantageous for obtaining a better match between scheme and scenario.
  • The time domain stereo parameter of the current frame is determined according to the channel combination scheme of the current frame, which facilitates a better match between the time domain stereo parameter and the various possible scenarios, thereby facilitating improved codec quality.
  • the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame and the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame may be separately calculated. And determining, in the case that the channel combination scheme of the current frame is a correlation signal channel combination scheme, determining a time domain stereo parameter of the current frame as a time domain stereo parameter corresponding to the correlation signal channel combination scheme of the current frame; Or determining, in a case where the channel combination scheme of the current frame is a non-correlated signal channel combination scheme, determining a time domain stereo parameter of the current frame as a time domain corresponding to the non-correlation signal channel combination scheme of the current frame Stereo parameters.
• Alternatively, the time domain stereo parameter corresponding to the correlation signal channel combination scheme of the current frame may be calculated first. In the case where the channel combination scheme of the current frame is determined to be the correlation signal channel combination scheme, the time domain stereo parameter of the current frame is the time domain stereo parameter corresponding to the correlation signal channel combination scheme of the current frame; in the case where the channel combination scheme of the current frame is determined to be the non-correlation signal channel combination scheme, the time domain stereo parameter corresponding to the non-correlation signal channel combination scheme of the current frame is calculated, and the calculated time domain stereo parameter corresponding to the non-correlation signal channel combination scheme of the current frame is taken as the time domain stereo parameter of the current frame.
• Alternatively, the channel combination scheme of the current frame may be determined first. In the case where the channel combination scheme of the current frame is determined to be the correlation signal channel combination scheme, the time domain stereo parameter corresponding to the correlation signal channel combination scheme of the current frame is calculated, and the time domain stereo parameter of the current frame is the time domain stereo parameter corresponding to the correlation signal channel combination scheme of the current frame.
• In the case where the channel combination scheme of the current frame is determined to be the non-correlation signal channel combination scheme, the time domain stereo parameter corresponding to the non-correlation signal channel combination scheme of the current frame is calculated, and the time domain stereo parameter of the current frame is the time domain stereo parameter corresponding to the non-correlation signal channel combination scheme of the current frame.
• In some embodiments, determining the time domain stereo parameter of the current frame according to the channel combination scheme of the current frame includes: determining, according to the channel combination scheme of the current frame, an initial value of the channel combination scale factor corresponding to the channel combination scheme of the current frame.
• In a case where the initial value does not need to be corrected, the channel combination scale factor corresponding to the channel combination scheme of the current frame is equal to the initial value of the channel combination scale factor corresponding to the channel combination scheme of the current frame.
• In a case where the initial value of the channel combination scale factor corresponding to the channel combination scheme of the current frame (the correlation signal channel combination scheme or the non-correlation signal channel combination scheme) needs to be corrected, the initial value is corrected to obtain a correction value of the channel combination scale factor corresponding to the channel combination scheme of the current frame, and the channel combination scale factor corresponding to the channel combination scheme of the current frame is equal to that correction value.
• In some embodiments, determining the time domain stereo parameter of the current frame according to the channel combination scheme of the current frame may include: calculating a frame energy of the left channel signal of the current frame according to the left channel signal of the current frame; calculating a frame energy of the right channel signal of the current frame according to the right channel signal of the current frame; and calculating, according to the frame energy of the left channel signal and the frame energy of the right channel signal of the current frame, the initial value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
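As an illustrative sketch of the step above: the exact frame-energy and scale-factor formulas are not reproduced in this passage, so the RMS-based frame energy and the energy-ratio form of the initial value below are assumptions, not the patent's definitive formulas.

```python
import math

def frame_energy(x):
    # Frame energy taken here as the root-mean-square of the frame's samples
    # (one plausible definition; the text does not fix the exact formula).
    return math.sqrt(sum(s * s for s in x) / len(x))

def ratio_init_correlated(x_L, x_R):
    # Hypothetical initial value of the channel combination scale factor for
    # the correlation signal channel combination scheme, derived from the
    # left/right frame energies so that it lies in [0, 1].
    rms_L = frame_energy(x_L)
    rms_R = frame_energy(x_R)
    return rms_L / (rms_L + rms_R)
```

With equal-energy channels the sketch yields 0.5, i.e. an even left/right combination.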
• In a case where no correction is needed, the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame is equal to the initial value of that channel combination scale factor, and the coding index of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame is equal to the coding index of that initial value.
• Otherwise, the initial value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame and its coding index are corrected to obtain a correction value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame and its coding index. The channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame is then equal to that correction value, and the coding index of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame is equal to the coding index of that correction value.
• ratio_idx_mod = 0.5*(tdm_last_ratio_idx + 16);
• ratio_mod_qua = ratio_tabl[ratio_idx_mod];
• where tdm_last_ratio_idx represents the coding index of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame;
• ratio_idx_mod represents the coding index of the correction value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame;
• ratio_mod_qua represents the correction value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
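The index correction above can be sketched as follows. `ratio_tabl` is a hypothetical 32-entry scalar quantization codebook, and the truncation of `ratio_idx_mod` to an integer table index is an assumption, since the text does not state the rounding rule.

```python
def correct_ratio_idx(tdm_last_ratio_idx, ratio_tabl):
    # ratio_idx_mod = 0.5 * (tdm_last_ratio_idx + 16), as in the text;
    # truncated to an integer so it can index the quantization table
    # (rounding rule assumed).
    ratio_idx_mod = int(0.5 * (tdm_last_ratio_idx + 16))
    ratio_mod_qua = ratio_tabl[ratio_idx_mod]
    return ratio_idx_mod, ratio_mod_qua
```

For example, with a previous-frame index of 16, the corrected index is again 16, i.e. the midpoint of a 32-entry codebook.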
• In some embodiments, determining the time domain stereo parameter of the current frame according to the channel combination scheme of the current frame comprises: obtaining a reference channel signal of the current frame according to the left and right channel signals of the current frame; calculating an amplitude correlation parameter between the left channel signal of the current frame and the reference channel signal; calculating an amplitude correlation parameter between the right channel signal of the current frame and the reference channel signal; calculating, according to the amplitude correlation parameters between the left and right channel signals of the current frame and the reference channel signal, an amplitude correlation difference parameter between the left and right channel signals of the current frame; and calculating, according to the amplitude correlation difference parameter between the left and right channel signals of the current frame, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
• In some embodiments, calculating, according to the amplitude correlation difference parameter between the left and right channel signals of the current frame, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame may include: calculating, according to the amplitude correlation difference parameter between the left and right channel signals of the current frame, an initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame; and correcting the initial value to obtain the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
• In a case where no correction is needed, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame is equal to that initial value.
  • the mono_i(n) represents a reference channel signal of the current frame.
• x'_L(n) represents the delay-aligned left channel signal of the current frame, and x'_R(n) represents the delay-aligned right channel signal of the current frame.
  • the corr_LM represents an amplitude correlation parameter between a left channel signal of the current frame and a reference channel signal, the corr_RM indicating an amplitude correlation between a right channel signal and a reference channel signal of the current frame parameter.
• In some embodiments, calculating the amplitude correlation difference parameter between the left and right channel signals of the current frame according to the amplitude correlation parameters between the left and right channel signals of the current frame and the reference channel signal includes: calculating, according to the amplitude correlation parameter between the delay-aligned left channel signal of the current frame and the reference channel signal, a long-term smoothed amplitude correlation parameter between the left channel signal and the reference channel signal of the current frame; calculating, according to the amplitude correlation parameter between the delay-aligned right channel signal of the current frame and the reference channel signal, a long-term smoothed amplitude correlation parameter between the right channel signal and the reference channel signal of the current frame; and calculating, according to these two long-term smoothed amplitude correlation parameters, the amplitude correlation difference parameter between the left and right channel signals of the current frame.
  • the smoothing method can be varied, for example:
• tdm_lt_corr_LM_SM_cur = α*tdm_lt_corr_LM_SM_pre + (1-α)*corr_LM;
• tdm_lt_rms_L_SM_cur = (1-A)*tdm_lt_rms_L_SM_pre + A*rms_L;
  • the A represents an update factor of the long-term smoothed frame energy of the left channel signal of the current frame.
  • the tdm_lt_rms_L_SM cur represents a long-term smoothed frame energy of a left channel signal of the current frame; wherein the rms_L represents a frame energy of the left channel signal of the current frame.
  • tdm_lt_corr_LM_SM cur represents the amplitude correlation parameter between the left channel signal and the reference channel signal after the current frame length is smoothed.
  • tdm_lt_corr_LM_SM pre represents the amplitude correlation parameter between the left channel signal and the reference channel signal after the smoothing of the previous frame.
• α represents the left channel smoothing factor.
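The two smoothing recursions above translate directly into code; `alpha` is the channel smoothing factor and `A` the frame-energy update factor from the text:

```python
def smooth_corr(prev_sm, corr, alpha):
    # tdm_lt_corr_LM_SM_cur = alpha * tdm_lt_corr_LM_SM_pre + (1 - alpha) * corr_LM
    return alpha * prev_sm + (1 - alpha) * corr

def smooth_frame_energy(prev_sm, rms, A):
    # tdm_lt_rms_L_SM_cur = (1 - A) * tdm_lt_rms_L_SM_pre + A * rms_L
    return (1 - A) * prev_sm + A * rms
```

Note the two recursions weight the new observation oppositely: the correlation update weights the new value by (1 - alpha), while the energy update weights it by A.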
• tdm_lt_corr_RM_SM_cur = β*tdm_lt_corr_RM_SM_pre + (1-β)*corr_RM;
• tdm_lt_rms_R_SM_cur = (1-B)*tdm_lt_rms_R_SM_pre + B*rms_R; where B represents the update factor of the long-term smoothed frame energy of the right channel signal of the current frame.
• tdm_lt_rms_R_SM_pre represents the long-term smoothed frame energy of the right channel signal of the previous frame.
• rms_R represents the frame energy of the right channel signal of the current frame.
  • tdm_lt_corr_RM_SM cur represents an amplitude correlation parameter between the right channel signal and the reference channel signal after the current frame length is smoothed.
  • tdm_lt_corr_RM_SM pre represents the amplitude correlation parameter between the right channel signal and the reference channel signal after the smoothing of the previous frame.
• β represents the right channel smoothing factor.
• diff_lt_corr = tdm_lt_corr_LM_SM - tdm_lt_corr_RM_SM;
• where tdm_lt_corr_LM_SM represents the long-term smoothed amplitude correlation parameter between the left channel signal and the reference channel signal of the current frame;
• tdm_lt_corr_RM_SM represents the long-term smoothed amplitude correlation parameter between the right channel signal and the reference channel signal of the current frame;
  • the diff_lt_corr represents an amplitude correlation difference parameter between the left and right channel signals of the current frame.
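A sketch of these quantities under stated assumptions: the reference channel is taken as the per-sample average of the delay-aligned left and right channels (one common choice, not confirmed here), and the normalization used for corr_LM / corr_RM is hypothetical, since the exact formulas are not reproduced in this passage.

```python
def amplitude_corr_params(xL, xR):
    # Hypothetical reference (mono-like) channel: per-sample average of the
    # delay-aligned left and right channel signals.
    mono = [(l + r) / 2.0 for l, r in zip(xL, xR)]
    e_mono = sum(m * m for m in mono) or 1.0  # guard against a silent frame
    # Hypothetical amplitude correlation of each channel with the reference
    # channel, normalized by the reference channel energy.
    corr_LM = sum(abs(l * m) for l, m in zip(xL, mono)) / e_mono
    corr_RM = sum(abs(r * m) for r, m in zip(xR, mono)) / e_mono
    return corr_LM, corr_RM

def diff_lt_corr(lt_corr_LM_SM, lt_corr_RM_SM):
    # diff_lt_corr = tdm_lt_corr_LM_SM - tdm_lt_corr_RM_SM, as in the text.
    return lt_corr_LM_SM - lt_corr_RM_SM
```

With identical left and right channels the two correlation parameters coincide, so the difference parameter is zero, which matches the intuition that the difference measures left/right asymmetry.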
• In some embodiments, calculating the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame according to the amplitude correlation difference parameter between the left and right channel signals of the current frame includes: mapping the amplitude correlation difference parameter between the left and right channel signals of the current frame, so that the value range of the mapped amplitude correlation difference parameter lies between [MAP_MIN, MAP_MAX]; and converting the mapped amplitude correlation difference parameter into the channel combination scale factor.
• In some embodiments, the mapping processing of the amplitude correlation difference parameter between the left and right channels of the current frame includes: performing clipping (amplitude limiting) processing on the amplitude correlation difference parameter between the left and right channel signals of the current frame; and mapping the clipped amplitude correlation difference parameter.
  • the method of limiting processing can be various, for example:
• where RATIO_MAX represents the maximum value of the clipped amplitude correlation difference parameter between the left and right channel signals of the current frame, and RATIO_MIN represents the minimum value of the clipped amplitude correlation difference parameter between the left and right channel signals of the current frame.
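The clipping step can be sketched as follows; the bound values used as defaults here are placeholders, not values taken from the patent:

```python
def clip_diff_lt_corr(diff_lt_corr, RATIO_MIN=-1.5, RATIO_MAX=1.5):
    # Limit (clip) the amplitude correlation difference parameter to the
    # range [RATIO_MIN, RATIO_MAX]; bound values are placeholders.
    return max(RATIO_MIN, min(RATIO_MAX, diff_lt_corr))
```

Values inside the range pass through unchanged; values outside are pinned to the nearer bound.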
  • mapping processing manner may be various, for example:
• B1 = MAP_MAX - RATIO_MAX*A1, or B1 = MAP_HIGH - RATIO_HIGH*A1;
• B2 = MAP_LOW - RATIO_LOW*A2, or B2 = MAP_MIN - RATIO_MIN*A2;
• B3 = MAP_HIGH - RATIO_HIGH*A3, or B3 = MAP_LOW - RATIO_LOW*A3;
• where diff_lt_corr_map represents the mapped amplitude correlation difference parameter between the left and right channel signals of the current frame;
• MAP_MAX represents the maximum value of the mapped amplitude correlation difference parameter, MAP_HIGH represents its high threshold, MAP_LOW represents its low threshold, and MAP_MIN represents its minimum value;
• RATIO_MAX represents the maximum value of the clipped amplitude correlation difference parameter, RATIO_HIGH represents its high threshold, RATIO_LOW represents its low threshold, and RATIO_MIN represents its minimum value;
• diff_lt_corr_limit represents the clipped amplitude correlation difference parameter between the left and right channel signals of the current frame.
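Combining the segment intercept equations above yields a three-segment piecewise-linear mapping from the clipped range [RATIO_MIN, RATIO_MAX] onto [MAP_MIN, MAP_MAX]. The threshold values used as defaults below are placeholders chosen only so the map is continuous and monotone, not values from the patent:

```python
def map_diff_lt_corr(d,
                     RATIO_MIN=-1.5, RATIO_LOW=-0.75, RATIO_HIGH=0.75, RATIO_MAX=1.5,
                     MAP_MIN=0.0, MAP_LOW=0.25, MAP_HIGH=0.75, MAP_MAX=1.0):
    # Piecewise-linear mapping of the clipped amplitude correlation
    # difference parameter diff_lt_corr_limit into [MAP_MIN, MAP_MAX].
    if d > RATIO_HIGH:                       # upper segment
        A1 = (MAP_MAX - MAP_HIGH) / (RATIO_MAX - RATIO_HIGH)
        B1 = MAP_MAX - RATIO_MAX * A1        # equivalently MAP_HIGH - RATIO_HIGH*A1
        return A1 * d + B1
    if d < RATIO_LOW:                        # lower segment
        A2 = (MAP_LOW - MAP_MIN) / (RATIO_LOW - RATIO_MIN)
        B2 = MAP_LOW - RATIO_LOW * A2        # equivalently MAP_MIN - RATIO_MIN*A2
        return A2 * d + B2
    # middle segment
    A3 = (MAP_HIGH - MAP_LOW) / (RATIO_HIGH - RATIO_LOW)
    B3 = MAP_HIGH - RATIO_HIGH * A3          # equivalently MAP_LOW - RATIO_LOW*A3
    return A3 * d + B3
```

By construction the endpoints of the clipped range land exactly on MAP_MIN and MAP_MAX, and the segments meet at (RATIO_LOW, MAP_LOW) and (RATIO_HIGH, MAP_HIGH).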
• In some embodiments, RATIO_MAX represents the maximum value of the amplitude correlation difference parameter between the left and right channel signals of the current frame, and -RATIO_MAX represents the corresponding minimum value; that is, the clipped parameter lies in the range [-RATIO_MAX, RATIO_MAX].
  • the diff_lt_corr_map represents an amplitude correlation difference parameter between the left and right channel signals of the current frame after the mapping process.
  • the ratio_SM indicates a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame, or the ratio_SM indicates a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame. The initial value.
• The correction may be performed before or after the channel combination scale factor is encoded.
• For the channel combination scale factor of the current frame (for example, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme or the channel combination scale factor corresponding to the correlation signal channel combination scheme), the channel combination scale factor of the current frame may be calculated first and then encoded to obtain its coding index (obtaining the coding index of the channel combination scale factor of the current frame is equivalent to encoding the channel combination scale factor of the current frame).
• Alternatively, the initial value of the channel combination scale factor of the current frame may be calculated first and then corrected, thereby obtaining the channel combination scale factor of the current frame; the obtained channel combination scale factor of the current frame is then encoded to obtain its coding index.
• The manner of correcting the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame may be various. For example, when the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame needs to be obtained by correcting that initial value, the initial value may be corrected based on the channel combination scale factor of the previous frame together with the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame; or, the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame may be corrected based on that initial value itself.
• For example, it may first be determined, according to the long-term smoothed frame energy of the left channel signal of the current frame, the long-term smoothed frame energy of the right channel signal of the current frame, the inter-frame energy difference of the left channel signal of the current frame, the encoding parameters of the previous frame in the history buffer (such as the inter-frame correlation of the primary channel signal and the inter-frame correlation of the secondary channel signal), the channel combination scheme identifiers of the current frame and the previous frame, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame, and the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame, whether the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame needs to be corrected. If yes, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame is used as the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame; otherwise, the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame is used as the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
• In some embodiments, after the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame is determined, it may be quantized and encoded, for example:
• ratio_init_SM_qua = ratio_tabl_SM[ratio_idx_init_SM];
• where ratio_tabl_SM represents the codebook for scalar quantization of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame;
• ratio_idx_init_SM represents the initial coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame; and ratio_init_SM_qua represents the quantized initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
• ratio_idx_SM = ratio_idx_init_SM;
• ratio_SM = ratio_tabl[ratio_idx_SM];
  • the ratio_SM represents a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • Ratio_idx_SM represents a coding index of a channel combination scale factor corresponding to a non-correlation signal channel combination scheme of the current frame;
• ratio_idx_SM = φ*ratio_idx_init_SM + (1-φ)*tdm_last_ratio_idx_SM;
• ratio_SM = ratio_tabl[ratio_idx_SM];
• where ratio_idx_init_SM represents the initial coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame;
• tdm_last_ratio_idx_SM represents the final coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame, and φ is the correction factor for the channel combination scale factor corresponding to the non-correlation signal channel combination scheme.
  • the ratio_SM represents a channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
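The coding-index correction above can be sketched as follows; `phi` is the correction factor, while the 32-entry codebook and the truncation to an integer table index are assumptions, not details fixed by the text:

```python
def corrected_ratio_SM(ratio_idx_init_SM, tdm_last_ratio_idx_SM, phi, ratio_tabl_SM):
    # ratio_idx_SM = phi * ratio_idx_init_SM + (1 - phi) * tdm_last_ratio_idx_SM
    # The weighted index is truncated to an integer so it can index the
    # quantization codebook (rounding rule assumed).
    ratio_idx_SM = int(phi * ratio_idx_init_SM + (1 - phi) * tdm_last_ratio_idx_SM)
    ratio_SM = ratio_tabl_SM[ratio_idx_SM]
    return ratio_idx_SM, ratio_SM
```

With phi = 1 the current frame's initial index is kept unchanged; with phi = 0 the previous frame's final index is reused, so phi controls how strongly the correction leans on the previous frame.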
• In the manner described above, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame is obtained by modifying the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
• In some embodiments, the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame may first be quantized and encoded to obtain the initial coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame. That initial coding index may then be corrected based on the coding index of the channel combination scale factor of the previous frame together with the initial coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame; or, the initial coding index may be corrected based on the initial coding index itself.
• For example, the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame may first be quantized to obtain the initial coding index corresponding to the non-correlation signal channel combination scheme of the current frame. Then, when this initial value needs to be corrected, the coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame is used as the coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame; otherwise, the initial coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame is used as the coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
• In some embodiments, determining the time domain stereo parameter of the current frame according to the channel combination scheme of the current frame may include: in the case where the channel combination scheme of the current frame is the correlation signal channel combination scheme, calculating the inter-channel time difference of the current frame; the calculated inter-channel time difference of the current frame can be written into the code stream.
• In the case where the channel combination scheme of the current frame is the non-correlation signal channel combination scheme, a default inter-channel time difference (e.g., 0) is used as the inter-channel time difference of the current frame. The default inter-channel time difference may be written into the code stream, and the decoding device also uses the default inter-channel time difference.
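The scheme-dependent choice of inter-channel time difference described above can be summarized as follows; the scheme name strings are illustrative labels, not identifiers from the patent:

```python
DEFAULT_ITD = 0  # default inter-channel time difference, e.g. 0 as in the text

def inter_channel_time_diff(scheme, computed_itd):
    # Correlation signal channel combination scheme: use the calculated
    # inter-channel time difference; non-correlation signal channel
    # combination scheme: use the default value instead.
    if scheme == "correlated":
        return computed_itd
    return DEFAULT_ITD
```

Because the same default is used on both sides, the decoder stays consistent with the encoder even when no time difference is calculated for the frame.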
  • the method for encoding a time domain stereo parameter may be provided, for example, including: determining a channel combination scheme of a current frame; determining a time domain stereo parameter of the current frame according to a channel combination scheme of the current frame; The time domain stereo parameter of the current frame is encoded, and the time domain stereo parameter includes at least one of a channel combination scale factor and an inter-channel delay difference.
  • the decoding device can obtain the time domain stereo parameters of the current frame from the code stream, and then perform correlation decoding based on the time domain stereo parameters of the current frame obtained from the code stream.
  • FIG. 9-A is a schematic flowchart of an audio encoding method according to an embodiment of the present application.
  • An audio coding method provided by the embodiment of the present application may be implemented by an encoding device, and the method may specifically include:
  • the stereo signal of the current frame includes a left channel signal of the current frame and a right channel signal of the current frame.
• The original left channel signal of the current frame is denoted x_L(n), and the original right channel signal of the current frame is denoted x_R(n).
• Performing time domain preprocessing on the original left and right channel signals of the current frame may include: performing high-pass filtering processing on the original left and right channel signals of the current frame to obtain the time-domain-preprocessed left and right channel signals of the current frame.
• The time-domain-preprocessed left channel signal of the current frame is denoted x_L_HP(n), and the time-domain-preprocessed right channel signal of the current frame is denoted x_R_HP(n).
• n is the sample number, n = 0, 1, ..., N-1.
  • the filter used in the high-pass filtering process may be, for example, an Infinite Impulse Response (IIR) filter with a cutoff frequency of 20 Hz, or other types of filters.
  • the transfer function of the high-pass filter with a sampling rate of 16 kHz and a corresponding cutoff frequency of 20 Hz can be:
• b0 = 0.994461788958195
• b1 = -1.988923577916390
• b2 = 0.994461788958195
• a1 = 1.988892905899653
• a2 = -0.988954249933127
• z is the variable of the Z-transform.
• The corresponding time domain filtering relation can be expressed as:
• x_L_HP(n) = b0*x_L(n) + b1*x_L(n-1) + b2*x_L(n-2) - a1*x_L_HP(n-1) - a2*x_L_HP(n-2)
• x_R_HP(n) = b0*x_R(n) + b1*x_R(n-1) + b2*x_R(n-2) - a1*x_R_HP(n-1) - a2*x_R_HP(n-2)
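A direct implementation of this high-pass filtering could look like the following sketch. Note that the sign convention for a1 and a2 depends on how the denominator of the transfer function is written (that formula is not reproduced in this passage); the recursion below adds the a-terms, i.e. it assumes the convention H(z) = B(z)/(1 - a1·z⁻¹ - a2·z⁻²), which makes the listed coefficient values give a stable filter with zero gain at DC. With the "minus" form of the difference equation above, the signs of a1 and a2 flip accordingly.

```python
def highpass_20hz(x,
                  b=(0.994461788958195, -1.988923577916390, 0.994461788958195),
                  a=(1.988892905899653, -0.988954249933127)):
    # 20 Hz IIR high-pass filter at a 16 kHz sampling rate, using the
    # coefficient values listed in the text. The a-terms are added here
    # (sign convention assumed; see the lead-in note).
    b0, b1, b2 = b
    a1, a2 = a
    y = []
    for n, xn in enumerate(x):
        xn1 = x[n - 1] if n >= 1 else 0.0
        xn2 = x[n - 2] if n >= 2 else 0.0
        yn1 = y[n - 1] if n >= 1 else 0.0
        yn2 = y[n - 2] if n >= 2 else 0.0
        y.append(b0 * xn + b1 * xn1 + b2 * xn2 + a1 * yn1 + a2 * yn2)
    return y
```

As a sanity check, b0 + b1 + b2 = 0, so a constant (DC) input is asymptotically suppressed, which is the expected behavior of a 20 Hz high-pass filter.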
  • the signal processed by the delay alignment may be referred to as “delay-aligned signal”.
  • the left channel signal processed by the delay alignment may be referred to as “delay-aligned left channel signal”
• the right channel signal processed by the delay alignment may be referred to as the "delay-aligned right channel signal", and so on.
• In some embodiments, the inter-channel delay parameter may be extracted and encoded according to the preprocessed left and right channel signals of the current frame, and delay alignment is performed on the left and right channel signals according to the encoded inter-channel delay parameter, to obtain the delay-aligned left and right channel signals of the current frame.
• The delay-aligned left channel signal of the current frame is denoted x'_L(n).
  • the encoding device may calculate a time domain cross-correlation function between the left and right channels according to the left and right channel signals preprocessed by the current frame.
  • the maximum value (or other value) of the time domain cross-correlation function between the left and right channels is searched to determine the delay difference between the left and right channel signals.
  • the delay difference between the determined left and right channels is quantized and encoded.
• Then, the signal of the other channel is time-shifted based on the signal of the selected channel of the left and right channels, thereby obtaining the delay-aligned left and right channel signals of the current frame.
  • the specific delay alignment processing method is not limited.
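A brute-force sketch of this delay estimation (the patent does not fix the search range, the correlation normalization, or the peak-picking criterion, so those are assumptions here):

```python
def estimate_delay(xL, xR, max_shift):
    # Search the time-domain cross-correlation between the left and right
    # channel signals for its maximum to estimate their delay difference.
    best_shift, best_corr = 0, float("-inf")
    n = len(xL)
    for shift in range(-max_shift, max_shift + 1):
        c = 0.0
        for i in range(n):
            j = i + shift
            if 0 <= j < n:
                c += xL[i] * xR[j]
        if c > best_corr:
            best_corr, best_shift = c, shift
    return best_shift
```

A positive result means the right channel lags the left by that many samples under this sign convention; shifting the lagging channel by the estimate aligns the two signals.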
  • the time domain analysis may include transient detection or the like.
  • the transient detection may be energy detection performed separately on the delay-aligned left and right channel signals of the current frame (specifically, detecting whether the current frame has a sudden energy change).
  • the energy of the delay-aligned left channel signal of the current frame is denoted as E cur_L
  • the energy of the delay-aligned left channel signal of the previous frame is denoted as E pre_L
  • the absolute value of the difference between E cur_L and E pre_L is used for transient detection to obtain the transient detection result of the delay-aligned left channel signal of the current frame.
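The energy-based transient test described above can be sketched as follows. The decision threshold is a hypothetical value chosen for illustration only; the text does not specify it.

```python
def transient_detect(e_cur, e_pre, threshold_ratio=4.0):
    """Flag a sudden energy change: the absolute difference between the
    current and previous frame energies is compared against a threshold
    proportional to the smaller of the two energies (threshold assumed)."""
    diff = abs(e_cur - e_pre)
    return diff > threshold_ratio * min(e_cur, e_pre)
```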
  • time domain analysis may also include time domain analysis in other conventional ways than transient detection, such as may include band extension pre-processing and the like.
  • step 903 may be performed after step 902 at any location prior to encoding the primary channel signal of the current frame and encoding the secondary channel signal.
  • the correlation signal channel combination scheme corresponds to the case where the delay-aligned left and right channel signals of the current frame are normal-like signals
  • the non-correlation signal channel combination scheme corresponds to the case where the delay-aligned left and right channel signals of the current frame are inverted-like signals.
  • the channel combination scheme decision may be divided into a channel combination scheme initial decision and a channel combination scheme correction decision. It can be understood that the channel combination scheme of the current frame is determined by performing a channel combination scheme decision of the current frame. For a description of some example implementations of the channel combination scheme of the current frame, reference may be made to the related description of the foregoing embodiments, and details are not described herein again.
  • the frame energy of the left and right channel signals of the current frame is first calculated according to the left and right channel signals of the current frame subjected to the delay alignment processing.
  • the frame energy rms_L of the current frame left channel signal satisfies:
  • the frame energy rms_R of the current frame right channel signal satisfies:
  • x' L (n) represents the left channel signal of the current frame subjected to the delay alignment processing.
  • x' R (n) represents the right channel signal of the current frame subjected to the delay alignment processing.
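Since the formula images for rms_L and rms_R are not reproduced in this text, the sketch below assumes the usual root-mean-square definition of frame energy over the N samples of the frame:

```python
import numpy as np

def frame_energy(x):
    """Frame energy of a channel signal, assumed here to be the
    root-mean-square value over the samples of the frame."""
    x = np.asarray(x, dtype=float)
    return float(np.sqrt(np.mean(x * x)))
```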
  • the channel combination scale factor corresponding to the current frame correlation signal channel combination scheme is calculated.
  • the calculated channel combination ratio factor ratio_init corresponding to the current frame correlation signal channel combination scheme satisfies:
  • the calculated channel combination scale factor ratio_init corresponding to the correlation signal channel combination scheme of the current frame is quantized and encoded to obtain the corresponding coding index ratio_idx_init and the quantized channel combination scale factor ratio_init qua corresponding to the correlation signal channel combination scheme of the current frame:
  • ratio_init qua = ratio_tabl[ratio_idx_init]
  • ratio_tabl is a scalar quantized codebook.
  • the quantization coding may be performed by any conventional scalar quantization method, such as uniform scalar quantization, or non-uniform scalar quantization, and the number of coding bits is, for example, 5 bits. The specific method for scalar quantization is not described herein.
  • the channel combination ratio factor ratio_init qua corresponding to the current frame correlation signal channel combination scheme of the quantized coding is the initial value of the channel combination scale factor corresponding to the obtained current frame correlation signal channel combination scheme
  • the coding index ratio_idx_init is The encoding index corresponding to the initial value of the channel combination scale factor corresponding to the current frame correlation signal channel combination scheme.
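The scalar quantization step can be sketched as follows. The 32-entry uniform codebook on [0, 1] is an assumption for illustration; the text only states that ratio_tabl is a scalar-quantization codebook and that the number of coding bits may be, for example, 5.

```python
import numpy as np

# Hypothetical 5-bit (32-entry) uniform codebook for the channel combination
# scale factor; uniform levels on [0, 1] are assumed for illustration.
RATIO_TABL = np.linspace(0.0, 1.0, 32)

def quantize_ratio(ratio_init):
    """Nearest-codeword scalar quantization: return (coding index,
    quantized scale factor ratio_init_qua = ratio_tabl[index])."""
    idx = int(np.argmin(np.abs(RATIO_TABL - ratio_init)))
    return idx, float(RATIO_TABL[idx])
```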
  • the coding index corresponding to the initial value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame may be corrected according to the value of the channel combination scheme identifier tdm_SM_flag of the current frame; for example, when the quantization coding is 5-bit scalar quantization, the coding index may be corrected to a value such as 15 or another value.
  • the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame may also be calculated by any conventional method for calculating a channel combination scale factor in time domain stereo coding, or the initial value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame may be directly set to a fixed value (for example, 0.5 or another value).
  • whether the channel combination scale factor needs to be corrected may be determined according to the channel combination scale factor correction identifier.
  • the channel combination scale factor corresponding to the current frame correlation signal channel combination scheme and its encoding index are corrected, and the correction value of the channel combination scale factor corresponding to the current frame correlation signal channel combination scheme and the encoding index thereof are obtained.
  • the channel combination scale factor correction identifier of the current frame is recorded as tdm_SM_modi_flag.
  • the channel combination scale factor correction flag takes a value of 0 to indicate that correction of the channel combination scale factor is not required, and takes a value of 1 to indicate that correction of the channel combination scale factor is required.
  • the channel combination scale factor correction flag can also use other different values to indicate whether the channel combination scale factor correction is needed.
  • the channel combination scale factor corresponding to the current frame correlation signal channel combination scheme and the coding index thereof may specifically include:
  • ratio_idx_mod = 0.5*(tdm_last_ratio_idx+16), where tdm_last_ratio_idx is the coding index of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame
  • ratio_mod qua = ratio_tabl[ratio_idx_mod].
  • the determined channel combination scale factor ratio corresponding to the correlation signal channel combination scheme satisfies:
  • the ratio_init qua represents an initial value of a channel combination scale factor corresponding to a correlation signal channel combination scheme of a current frame
  • the ratio_mod qua represents a correction value of the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
  • the above tdm_SM_modi_flag represents the channel combination scale factor correction flag of the current frame.
  • the coding index ratio_idx corresponding to the channel combination scale factor corresponding to the determined correlation signal channel combination scheme satisfies:
  • ratio_idx_init represents a coding index corresponding to an initial value of a channel combination scale factor corresponding to a current frame correlation signal channel combination scheme
  • ratio_idx_mod represents a correction value of a channel combination scale factor corresponding to a current frame correlation signal channel combination scheme.
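The correction decision above can be sketched as follows, using the formula ratio_idx_mod = 0.5*(tdm_last_ratio_idx+16) from the text; the function name and the integer truncation of the corrected index are illustrative assumptions.

```python
def correct_ratio_index(ratio_idx_init, tdm_last_ratio_idx, tdm_SM_modi_flag):
    """If the correction flag is 0, the initial coding index is used as-is;
    if it is 1, the corrected index is 0.5*(tdm_last_ratio_idx + 16),
    pulling the index toward the midpoint of a 5-bit codebook."""
    if tdm_SM_modi_flag == 0:
        return ratio_idx_init
    return int(0.5 * (tdm_last_ratio_idx + 16))
```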
  • the channel combination scheme identifier tdm_SM_flag of the current frame is equal to 1 (for example, tdm_SM_flag equal to 1 indicates that the channel combination scheme identifier of the current frame corresponds to the non-correlation signal channel combination scheme), and the channel combination scheme identifier tdm_last_SM_flag of the previous frame is equal to 0 (for example, tdm_last_SM_flag is equal to 0, indicating that the channel combination scheme identifier of the current frame corresponds to the correlation signal channel combination scheme), and it is required to calculate the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme.
  • the history cache is reset.
  • the history cache reset identifier tdm_SM_reset_flag is determined in the process of the scheme correction decision, and then implemented by determining the value of the history cache reset identifier. For example, tdm_SM_reset_flag is 1, indicating that the channel combination scheme identifier of the current frame corresponds to the non-correlation signal channel combination scheme and the channel combination scheme identifier of the previous frame corresponds to the correlation signal channel combination scheme.
  • the history cache reset flag tdm_SM_reset_flag is equal to 1, indicating that it is necessary to reset the history cache used for calculating the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme.
  • a specific reset method may be that all parameters in the history buffer used for calculating the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame are reset according to preset initial values
  • alternatively, some parameters in the history buffer used for calculating the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame are reset according to preset initial values
  • or, some parameters in that history buffer are reset according to preset initial values, and another part of the parameters are reset according to the corresponding parameter values in the history buffer used for calculating the channel combination scale factor corresponding to the correlation signal channel combination scheme.
  • the non-correlated signal channel combination scheme is a channel combination scheme which is more suitable for time domain downmixing of the inverted-like stereo signals.
  • the channel combination scheme identifier tdm_SM_flag of the current frame is equal to 1,
  • which represents that the channel combination scheme identifier of the current frame corresponds to the non-correlation signal channel combination scheme
  • Determining whether the channel combination scheme identifier of the current frame corresponds to the non-correlation signal channel combination scheme may specifically include:
  • calculating the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme and encoding may include the following steps 9081-9085.
  • 9081 Perform signal energy analysis on the left and right channel signals of the current frame subjected to delay alignment processing.
  • the frame energy rms_L of the current frame left channel signal satisfies:
  • the frame energy rms_R of the current frame right channel signal satisfies:
  • x' L (n) represents the left channel signal of the current frame subjected to the delay alignment processing.
  • x' R (n) represents the right channel signal of the current frame subjected to the delay alignment processing.
  • the long-term smooth frame energy tdm_lt_rms_L_SM cur of the left channel of the current frame satisfies:
  • tdm_lt_rms_L_SM cur = (1-A)*tdm_lt_rms_L_SM pre +A*rms_L
  • tdm_lt_rms_L_SM pre represents the long-term smoothed frame energy of the left channel of the previous frame
  • A represents the update factor of the left channel long-time smoothed frame energy
  • A may take, for example, a real number between 0 and 1, and A may be equal to 0.4, for example.
  • the long-term smoothing frame energy tdm_lt_rms_R_SM cur of the right channel of the current frame satisfies:
  • tdm_lt_rms_R_SM cur = (1-B)*tdm_lt_rms_R_SM pre +B*rms_R
  • tdm_lt_rms_R_SM pre represents the long-term smoothed frame energy of the right channel of the previous frame
  • B represents the update factor of the right channel long-term smoothed frame energy
  • B may take a real number between 0 and 1; B may take the same value as or a different value from the update factor A of the left channel long-term smoothed frame energy, and B may be equal to 0.4, for example.
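The long-term smoothed frame energy updates above (the same recursion for the left channel with update factor A and for the right channel with update factor B) can be sketched as:

```python
def smooth_frame_energy(lt_rms_pre, rms_cur, update_factor=0.4):
    """Long-term smoothed frame energy update from the text:
    tdm_lt_rms_SM_cur = (1 - A)*tdm_lt_rms_SM_pre + A*rms_cur,
    with the example update factor of 0.4."""
    return (1.0 - update_factor) * lt_rms_pre + update_factor * rms_cur
```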
  • the inter-frame energy difference ener_L_dt of the left channel of the current frame satisfies:
  • the inter-frame energy difference ener_R_dt of the right channel of the current frame satisfies:
  • the reference channel signal can also be referred to as a mono signal; if the reference channel signal is referred to as a mono signal, then in all subsequent descriptions and parameter naming associated with the reference channel, "reference channel signal" can be uniformly replaced by "mono signal".
  • the reference channel signal mono_i(n) satisfies:
  • x' L (n) is the left channel signal of the current frame subjected to the delay alignment processing
  • x' R (n) is the right channel signal of the current frame subjected to the delay alignment processing
  • the amplitude correlation parameter corr_LM between the left channel signal and the reference channel signal of the current frame subjected to the delay alignment processing for example, satisfies:
  • the amplitude correlation parameter corr_RM between the right channel signal and the reference channel signal of the current frame subjected to the delay alignment processing satisfies, for example:
  • x' L (n) represents the left channel signal of the current frame subjected to the delay alignment processing.
  • x' R (n) represents the right channel signal of the current frame subjected to the delay alignment processing.
  • Mono_i(n) represents the reference channel signal of the current frame.
  • |·| means taking the absolute value.
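The formula images for mono_i(n), corr_LM, and corr_RM are not reproduced in this text. The sketch below therefore shows only one plausible amplitude correlation measure between a channel signal and the reference channel signal; the normalization by the reference channel energy is an assumption for illustration.

```python
import numpy as np

def amplitude_corr(x, mono):
    """One plausible amplitude correlation parameter between a channel
    signal x'(n) and the reference channel signal mono_i(n): the sum of
    |x'(n)*mono_i(n)| normalized by the reference channel energy
    (normalization assumed, since the exact formula is not reproduced)."""
    x, mono = np.asarray(x, dtype=float), np.asarray(mono, dtype=float)
    denom = float(np.dot(mono, mono))
    if denom == 0.0:
        return 0.0
    return float(np.sum(np.abs(x * mono)) / denom)
```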
  • according to the amplitude correlation parameter between the delay-aligned left channel signal of the current frame and the reference channel signal, and the amplitude correlation parameter between the delay-aligned right channel signal of the current frame and the reference channel signal, calculate the amplitude correlation difference parameter diff_lt_corr between the left and right channels of the current frame.
  • step 9081 may be performed prior to steps 9082, 9083, or may be performed after steps 9082, 9083 and before step 9084.
  • calculating the amplitude correlation difference parameter diff_lt_corr between the left and right channels of the current frame may specifically include the following steps 90841-90842.
  • according to the amplitude correlation parameter between the delay-aligned left channel signal of the current frame and the reference channel signal, and the amplitude correlation parameter between the delay-aligned right channel signal of the current frame and the reference channel signal, calculate the long-term smoothed amplitude correlation parameter between the left channel signal of the current frame and the reference channel signal, and the long-term smoothed amplitude correlation parameter between the right channel signal of the current frame and the reference channel signal.
  • one method for the calculation may include: the long-term smoothed amplitude correlation parameter tdm_lt_corr_LM_SM between the left channel signal of the current frame and the reference channel signal satisfies:
  • tdm_lt_corr_LM_SM cur = α*tdm_lt_corr_LM_SM pre +(1-α)*corr_LM.
  • tdm_lt_corr_LM_SM cur represents the long-term smoothed amplitude correlation parameter between the left channel signal of the current frame and the reference channel signal
  • tdm_lt_corr_LM_SM pre represents the long-term smoothed amplitude correlation parameter between the left channel signal of the previous frame and the reference channel signal
  • α represents the left channel smoothing factor, wherein α can be a preset real number between 0 and 1, such as 0.2, 0.5, 0.8; alternatively, the value of α can also be obtained by adaptive calculation.
  • the long-term smoothed amplitude correlation parameter tdm_lt_corr_RM_SM between the right channel signal of the current frame and the reference channel signal satisfies:
  • tdm_lt_corr_RM_SM cur = β*tdm_lt_corr_RM_SM pre +(1-β)*corr_RM.
  • tdm_lt_corr_RM_SM cur represents the long-term smoothed amplitude correlation parameter between the right channel signal of the current frame and the reference channel signal
  • tdm_lt_corr_RM_SM pre represents the long-term smoothed amplitude correlation parameter between the right channel signal of the previous frame and the reference channel signal
  • β represents the right channel smoothing factor, where β can be a preset real number between 0 and 1; β can be the same as or different from the left channel smoothing factor α, for example, β can be equal to 0.2, 0.5, 0.8, or the value of β can also be obtained by adaptive calculation.
  • another method for calculating the long-term smoothed amplitude correlation parameter between the left channel signal of the current frame and the reference channel signal, and the long-term smoothed amplitude correlation parameter between the right channel signal of the current frame and the reference channel signal, can include:
  • correcting the amplitude correlation parameter corr_LM between the delay-aligned left channel signal of the current frame and the reference channel signal to obtain the corrected amplitude correlation parameter corr_LM_mod between the left channel signal of the current frame and the reference channel signal; and correcting the amplitude correlation parameter corr_RM between the delay-aligned right channel signal of the current frame and the reference channel signal to obtain the corrected amplitude correlation parameter corr_RM_mod between the right channel signal of the current frame and the reference channel signal.
  • according to the corrected amplitude correlation parameters and the long-term smoothed amplitude correlation parameters tdm_lt_corr_LM_SM pre and tdm_lt_corr_RM_SM pre of the previous frame, determine the amplitude correlation parameter diff_lt_corr_LM_tmp between the long-term smoothed left channel signal of the current frame and the reference channel signal, and the amplitude correlation parameter diff_lt_corr_RM_tmp between the long-term smoothed right channel signal of the current frame and the reference channel signal.
  • according to diff_lt_corr_LM_tmp and diff_lt_corr_RM_tmp, obtain an initial value diff_lt_corr_SM of the amplitude correlation difference parameter between the left and right channels of the current frame; and according to the obtained initial value diff_lt_corr_SM and the amplitude correlation difference parameter tdm_last_diff_lt_corr_SM between the left and right channels of the previous frame, determine the inter-frame variation parameter d_lt_corr of the amplitude correlation difference between the left and right channels of the current frame.
  • according to the frame energy of the left channel signal of the current frame obtained by the signal energy analysis, the frame energy of the right channel signal of the current frame, the long-term smoothed frame energy of the left channel of the current frame, the long-term smoothed frame energy of the right channel of the current frame, the inter-frame energy difference of the left channel of the current frame, the inter-frame energy difference of the right channel of the current frame, and the inter-frame variation parameter of the amplitude correlation difference between the left and right channels of the current frame, adaptively select different left channel smoothing factors and right channel smoothing factors, and calculate the long-term smoothed amplitude correlation parameter tdm_lt_corr_LM_SM between the left channel signal of the current frame and the reference channel signal, and the long-term smoothed amplitude correlation parameter tdm_lt_corr_RM_SM between the right channel signal of the current frame and the reference channel signal.
  • in addition to the two example methods above, the specific method for calculating the long-term smoothed amplitude correlation parameter between the left channel signal of the current frame and the reference channel signal, and the long-term smoothed amplitude correlation parameter between the right channel signal of the current frame and the reference channel signal, is not limited in this application.
  • according to the long-term smoothed amplitude correlation parameter between the left channel signal of the current frame and the reference channel signal, and the long-term smoothed amplitude correlation parameter between the right channel signal of the current frame and the reference channel signal, calculate the amplitude correlation difference parameter diff_lt_corr between the left and right channels of the current frame.
  • the amplitude correlation difference parameter diff_lt_corr between the left and right channels of the current frame satisfies:
  • tdm_lt_corr_LM_SM represents the long-term smoothed amplitude correlation parameter between the left channel signal of the current frame and the reference channel signal
  • tdm_lt_corr_RM_SM represents the long-term smoothed amplitude correlation parameter between the right channel signal of the current frame and the reference channel signal.
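The formula image for diff_lt_corr is not reproduced here; assuming it is the simple left-minus-right difference of the two long-term smoothed amplitude correlation parameters named above, the computation is:

```python
def amp_corr_diff(tdm_lt_corr_LM_SM, tdm_lt_corr_RM_SM):
    """Amplitude correlation difference parameter between the left and right
    channels, assumed to be the difference of the two long-term smoothed
    amplitude correlation parameters (left minus right)."""
    return tdm_lt_corr_LM_SM - tdm_lt_corr_RM_SM
```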
  • one possible method of converting the amplitude correlation difference parameter between the left and right channels of the current frame into the channel combination scale factor may specifically include steps 90851-90853.
  • a method of mapping the amplitude correlation difference parameter between the left and right channels may include:
  • the amplitude correlation difference parameter between the left and right channels is subjected to clipping processing, for example, the amplitude correlation difference parameter diff_lt_corr_limit between the left and right channels after the clipping processing satisfies:
  • RATIO_MAX represents the maximum value of the amplitude correlation difference parameter between the left and right channels after clipping
  • RATIO_MIN represents the minimum value of the amplitude correlation difference parameter between the left and right channels after clipping.
  • RATIO_MAX is, for example, a preset empirical value
  • RATIO_MAX is, for example, 1.5, 3.0 or other values.
  • the RATIO_MIN is, for example, a preset experience value
  • the RATIO_MIN is, for example, -1.5, -3.0, or other values.
  • the amplitude correlation difference parameter diff_lt_corr_map between the left and right channels after the mapping process satisfies:
  • B 1 = MAP_MAX-RATIO_MAX*A 1
  • B 1 = MAP_HIGH-RATIO_HIGH*A 1 .
  • B 2 = MAP_LOW-RATIO_LOW*A 2
  • B 2 = MAP_MIN-RATIO_MIN*A 2 .
  • B 3 = MAP_HIGH-RATIO_HIGH*A 3
  • B 3 = MAP_LOW-RATIO_LOW*A 3 .
  • MAP_MAX represents the maximum value of the amplitude correlation difference parameter between the left and right channels after the mapping process
  • MAP_HIGH represents the high threshold of the amplitude correlation difference parameter between the left and right channels after the mapping process
  • MAP_LOW indicates the low threshold of the amplitude correlation difference parameter between the left and right channels after the mapping process
  • MAP_MIN indicates the minimum value of the amplitude correlation difference parameter between the left and right channels after the mapping process.
  • MAP_MAX may be 2.0
  • MAP_HIGH may be 1.2
  • MAP_LOW may be 0.8
  • MAP_MIN may be 0.0.
  • the actual application is not limited to such an example of value.
  • RATIO_MAX indicates the maximum value of the amplitude correlation difference parameter between the left and right channels after clipping
  • RATIO_HIGH indicates the high threshold of the amplitude correlation difference parameter between the left and right channels after clipping
  • RATIO_LOW indicates the low threshold of the amplitude correlation difference parameter between the left and right channels after clipping
  • RATIO_MIN represents the minimum value of the amplitude correlation difference parameter between the left and right channels after clipping.
  • RATIO_MAX is 1.5
  • RATIO_HIGH is 0.75
  • RATIO_LOW is -0.75
  • RATIO_MIN is -1.5.
  • the actual application is not limited to such an example of value.
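The clipping and segment-wise linear mapping described above, with the example constants RATIO_* and MAP_* from the text, can be sketched as follows. The segment slopes A1, A2, A3 are assumed to map the segment endpoints onto each other, which is consistent with the intercept formulas B1, B2, B3 given above.

```python
# Example constants from the text (actual applications are not limited
# to these values).
RATIO_MAX, RATIO_HIGH, RATIO_LOW, RATIO_MIN = 1.5, 0.75, -0.75, -1.5
MAP_MAX, MAP_HIGH, MAP_LOW, MAP_MIN = 2.0, 1.2, 0.8, 0.0

def map_corr_diff(diff_lt_corr):
    """Clip the amplitude correlation difference to [RATIO_MIN, RATIO_MAX],
    then apply the segment-wise linear mapping diff_lt_corr_map = A*d + B."""
    d = min(max(diff_lt_corr, RATIO_MIN), RATIO_MAX)   # clipping step
    if d > RATIO_HIGH:
        a = (MAP_MAX - MAP_HIGH) / (RATIO_MAX - RATIO_HIGH)
        b = MAP_MAX - RATIO_MAX * a        # B1 = MAP_MAX - RATIO_MAX*A1
    elif d < RATIO_LOW:
        a = (MAP_LOW - MAP_MIN) / (RATIO_LOW - RATIO_MIN)
        b = MAP_MIN - RATIO_MIN * a        # B2 = MAP_MIN - RATIO_MIN*A2
    else:
        a = (MAP_HIGH - MAP_LOW) / (RATIO_HIGH - RATIO_LOW)
        b = MAP_HIGH - RATIO_HIGH * a      # B3 = MAP_HIGH - RATIO_HIGH*A3
    return a * d + b
```

Note that the mapping is continuous at the segment boundaries: d = RATIO_HIGH maps to MAP_HIGH and d = RATIO_LOW maps to MAP_LOW under both adjoining segments.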
  • Another method of some embodiments of the present application is: the amplitude correlation difference parameter diff_lt_corr_map between the left and right channels after the mapping process satisfies:
  • diff_lt_corr_limit represents the amplitude correlation difference parameter between the left and right channels after the clipping process.
  • RATIO_MAX represents the maximum amplitude of the amplitude correlation difference parameter between the left and right channels
  • -RATIO_MAX represents the minimum amplitude of the amplitude correlation difference parameter between the left and right channels.
  • RATIO_MAX may be a preset empirical value, and RATIO_MAX may be, for example, 1.5, 3.0, or other real numbers greater than 0.
  • the channel combination scale factor ratio_SM satisfies:
  • cos(·) represents a cosine operation.
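The formula image for ratio_SM is not reproduced in this text; the sketch below shows one plausible cosine-based conversion of the mapped difference parameter into a scale factor in [0, 1]. This exact functional form is an assumption.

```python
import math

def corr_diff_to_ratio(diff_lt_corr_map):
    """One plausible cosine-based conversion of the mapped amplitude
    correlation difference parameter into a channel combination scale
    factor in [0, 1] (the exact formula is an assumption)."""
    return (1.0 - math.cos(math.pi / 2.0 * diff_lt_corr_map)) / 2.0
```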
  • the amplitude correlation difference parameter between the left and right channels can be converted into a channel combination scale factor by other methods, for example:
  • according to the long-term smoothed frame energy of the left channel of the current frame obtained from the signal energy analysis, the long-term smoothed frame energy of the right channel of the current frame, the inter-frame energy difference of the left channel of the current frame, the encoding parameters of the previous frame cached in the encoder history buffer (such as the inter-frame correlation parameter of the primary channel signal and the inter-frame correlation parameter of the secondary channel signal), the channel combination scheme identifiers of the current frame and the previous frame, and the channel combination scale factors corresponding to the non-correlation signal channel combination scheme of the current frame and the previous frame, determine whether the channel combination scale factor corresponding to the non-correlation signal channel combination scheme needs to be updated.
  • if it is necessary to update the channel combination scale factor corresponding to the non-correlation signal channel combination scheme, the above example method is used to convert the amplitude correlation difference parameter between the left and right channels into a channel combination scale factor; otherwise, the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame and its encoding index are directly used as the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame and its encoding index.
  • the channel combination scale factor obtained after the conversion is quantized and encoded to obtain the initial coding index ratio_idx_init_SM corresponding to the non-correlation signal channel combination scheme of the current frame, and the quantized initial value ratio_init_SM qua of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame:
  • ratio_init_SM qua = ratio_tabl_SM[ratio_idx_init_SM].
  • ratio_tabl_SM represents a codebook of the channel combination scale factor scalar quantization corresponding to the non-correlation signal channel combination scheme.
  • the quantization coding may adopt any scalar quantization method in the conventional technology, such as uniform scalar quantization, or non-uniform scalar quantization, and the number of coding bits may be 5 bits, which will not be described in detail herein.
  • the codebook for scalar quantization of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme may be the same as or different from the codebook for scalar quantization of the channel combination scale factor corresponding to the correlation signal channel combination scheme.
  • when the codebooks are the same, only one codebook for scalar quantization of the channel combination scale factor needs to be stored. In this case, the quantized initial value ratio_init_SM qua of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame satisfies:
  • ratio_init_SM qua = ratio_tabl[ratio_idx_init_SM].
  • one method is to directly use the quantized initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame as the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame, and to directly use the initial coding index of that channel combination scale factor as its coding index, namely:
  • the channel combination scaling factor corresponding to the current frame non-correlation signal channel combination scheme satisfies:
  • ratio_SM = ratio_tabl[ratio_idx_SM]
  • another method may be: according to the coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame, or the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame, correct the initial value of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame and its initial coding index; use the corrected coding index as the coding index of the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame, and use the corrected channel combination scale factor as the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
  • ratio_idx_SM = φ*ratio_idx_init_SM+(1-φ)*tdm_last_ratio_idx_SM.
  • ratio_idx_init_SM indicates the initial coding index of the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme;
  • tdm_last_ratio_idx_SM is the coding index of the channel combination scale factor corresponding to the previous frame non-correlation signal channel combination scheme;
  • φ is a correction factor for the channel combination scale factor corresponding to the non-correlation signal channel combination scheme; φ may be an empirical value, and may, for example, be equal to 0.8.
  • the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme satisfies:
  • ratio_SM = ratio_tabl[ratio_idx_SM]
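As an illustrative sketch (not the patent's normative implementation), the two quantization strategies above can be written as follows. The 16-entry uniform codebook `ratio_tabl`, the nearest-neighbour search, and the rounding of the smoothed index back to an integer are assumptions made for the example; only the relations ratio_SM = ratio_tabl[ratio_idx_SM] and ratio_idx_SM = φ*ratio_idx_init_SM + (1-φ)*tdm_last_ratio_idx_SM come from the text above:

```python
# Hypothetical uniform scalar codebook of channel combination scale factors.
ratio_tabl = [i / 15.0 for i in range(16)]

def quantize_ratio(ratio_init_SM):
    """Method 1: use the initial quantization index directly."""
    # Nearest-neighbour search in the codebook yields the initial coding index.
    ratio_idx_init_SM = min(range(len(ratio_tabl)),
                            key=lambda i: abs(ratio_tabl[i] - ratio_init_SM))
    ratio_idx_SM = ratio_idx_init_SM
    return ratio_idx_SM, ratio_tabl[ratio_idx_SM]  # ratio_SM = ratio_tabl[ratio_idx_SM]

def quantize_ratio_smoothed(ratio_init_SM, tdm_last_ratio_idx_SM, phi=0.8):
    """Method 2: correct the initial index with the previous frame's index."""
    ratio_idx_init_SM, _ = quantize_ratio(ratio_init_SM)
    # ratio_idx_SM = phi*ratio_idx_init_SM + (1-phi)*tdm_last_ratio_idx_SM,
    # rounded back to an integer codebook index (the rounding is an assumption).
    ratio_idx_SM = round(phi * ratio_idx_init_SM + (1 - phi) * tdm_last_ratio_idx_SM)
    return ratio_idx_SM, ratio_tabl[ratio_idx_SM]
```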
  • A third method is: use the unquantized channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme as the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme; that is, the ratio_SM of the current frame non-correlation signal channel combination scheme takes the unquantized value directly.
  • The fourth method is: correct the unquantized channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme according to the channel combination scale factor corresponding to the previous frame non-correlation signal channel combination scheme, use the corrected channel combination scale factor as the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme, and quantize and encode it. The manner of obtaining the channel combination scale factor corresponding to the non-correlation signal channel combination scheme and its coding index is not limited in this application.
  • If the channel combination scheme identifier of the current frame is recorded as tdm_SM_flag and the channel combination scheme identifier of the previous frame is recorded as tdm_last_SM_flag, the joint identifier of the channel combination scheme identifier of the previous frame and the channel combination scheme identifier of the current frame may be represented as (tdm_last_SM_flag, tdm_SM_flag).
  • the coding mode decision may be performed according to the joint identifier, for example:
  • For example, when the joint identification of the channel combination scheme identifiers of the previous frame and the current frame is one of the following four cases: (00), (11), (01), (10), the coding mode of the current frame is respectively determined as: correlation signal coding mode, non-correlation signal coding mode, correlation signal to non-correlation signal coding mode, non-correlation signal to correlation signal coding mode.
  • If the joint identifier of the channel combination scheme identifier of the current frame is (00), it indicates that the coding mode of the current frame is the correlation signal coding mode; if the joint identifier is (11), the coding mode of the current frame is the non-correlation signal coding mode; if the joint identifier is (01), the coding mode of the current frame is the correlation signal to non-correlation signal coding mode; and if the joint identifier is (10), the coding mode of the current frame is the non-correlation signal to correlation signal coding mode.
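The mapping from the joint identifier to the coding mode can be sketched as a small lookup. The string mode names are illustrative; only the (previous-frame flag, current-frame flag) → mode relationship comes from the text above:

```python
def coding_mode(tdm_last_SM_flag, tdm_SM_flag):
    """Map the joint channel combination scheme identifier of the previous
    and current frames to the coding mode of the current frame."""
    # 0 = correlation signal channel combination scheme,
    # 1 = non-correlation signal channel combination scheme.
    modes = {
        (0, 0): "correlation",                     # joint identifier (00)
        (1, 1): "non-correlation",                 # joint identifier (11)
        (0, 1): "correlation-to-non-correlation",  # joint identifier (01)
        (1, 0): "non-correlation-to-correlation",  # joint identifier (10)
    }
    return modes[(tdm_last_SM_flag, tdm_SM_flag)]
```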
  • After obtaining the encoding mode stereo_tdm_coder_type of the current frame, the encoding apparatus performs time domain downmix processing on the left and right channel signals of the current frame by using the time domain downmix processing manner corresponding to the encoding mode of the current frame, to obtain the primary channel signal and the secondary channel signal of the current frame.
  • the coding mode of the current frame is one of multiple coding modes.
  • the plurality of coding modes may include: a correlation signal to a non-correlation signal coding mode, a non-correlation signal to a correlation signal coding mode, a correlation signal coding mode, and a non-correlation signal coding mode, and the like.
  • the encoding device separately encodes the primary channel signal and the secondary channel signal to obtain a primary channel encoded signal and a secondary channel encoded signal.
  • Specifically, bit allocation between primary channel signal encoding and secondary channel signal encoding may first be performed based on parameter information obtained during encoding of the primary channel signal and/or the secondary channel signal of the previous frame, and on the total number of bits available for primary channel signal encoding and secondary channel signal encoding.
  • the main channel signal and the secondary channel signal are respectively encoded to obtain a coding index of the main channel coding and a coding index of the secondary channel coding.
  • Any mono audio coding technology can be used for primary channel coding and secondary channel coding, and details are not described herein.
  • The encoding apparatus selects, according to the channel combination scheme identifier, the corresponding channel combination scale factor coding index to write into the code stream, and writes the primary channel encoded signal, the secondary channel encoded signal, and the channel combination scheme identifier of the current frame into the code stream.
  • Specifically, if the channel combination scheme identifier tdm_SM_flag of the current frame corresponds to the correlation signal channel combination scheme, the coding index ratio_idx of the channel combination scale factor corresponding to the current frame correlation signal channel combination scheme is written into the code stream; if the channel combination scheme identifier tdm_SM_flag of the current frame corresponds to the non-correlation signal channel combination scheme, the coding index ratio_idx_SM of the channel combination scale factor corresponding to the current frame non-correlation signal channel combination scheme is written into the code stream.
  • In addition, the primary channel encoded signal, the secondary channel encoded signal, and the channel combination scheme identifier of the current frame are written into the bitstream. It can be understood that these write operations are not limited to any particular order.
  • an audio decoding method is further provided.
  • the related steps of the audio decoding method may be specifically implemented by the decoding device, and may specifically include:
  • 1001: Decode the code stream to obtain the primary and secondary channel decoding signals of the current frame.
  • The time domain stereo parameter of the current frame includes the channel combination scale factor of the current frame (for example, the code stream includes a coding index of the channel combination scale factor of the current frame, and the channel combination scale factor of the current frame can be obtained by decoding based on this coding index), and may also include the inter-channel time difference of the current frame (for example, the code stream includes a coding index of the inter-channel time difference of the current frame, and the inter-channel time difference of the current frame can be obtained by decoding based on this coding index; or the code stream includes a coding index of the absolute value of the inter-channel time difference of the current frame, and the absolute value of the inter-channel time difference of the current frame can be obtained by decoding based on this coding index), and the like.
  • For the manner of determining the decoding mode of the current frame according to the channel combination scheme of the current frame and the channel combination scheme of the previous frame, refer to the method for determining the encoding mode of the current frame in step 909; the decoding mode of the current frame is determined according to the channel combination scheme of the current frame and the channel combination scheme of the previous frame in the same way.
  • the decoding mode of the current frame is one of multiple decoding modes.
  • For example, the plurality of decoding modes may include: a correlation signal to non-correlation signal decoding mode, a non-correlation signal to correlation signal decoding mode, a correlation signal decoding mode, a non-correlation signal decoding mode, and the like.
  • The coding modes and the decoding modes are in one-to-one correspondence.
  • For example, if the joint identifier of the channel combination scheme identifier of the current frame is (00), the decoding mode of the current frame is the correlation signal decoding mode; if the joint identifier is (11), the decoding mode of the current frame is the non-correlation signal decoding mode; if the joint identifier is (01), the decoding mode of the current frame is the correlation signal to non-correlation signal decoding mode; and if the joint identifier is (10), the decoding mode of the current frame is the non-correlation signal to correlation signal decoding mode.
  • The execution of step 1001, step 1002, and steps 1003-1004 is not limited to any particular sequence.
  • 1005: Perform, by using the time domain upmix processing manner corresponding to the determined decoding mode of the current frame, time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain the left and right channel reconstruction signals of the current frame.
  • the upmix matrix used in the time domain upmix processing is constructed based on the obtained channel combination scale factor of the current frame.
  • the left and right channel reconstruction signals of the current frame may be used as the left and right channel decoding signals of the current frame.
  • Alternatively, delay adjustment may be performed on the left and right channel reconstruction signals of the current frame based on the inter-channel time difference of the current frame, to obtain delay-adjusted left and right channel reconstruction signals of the current frame, and the delay-adjusted left and right channel reconstruction signals can be used as the left and right channel decoding signals of the current frame.
  • Further, the left and right channel reconstruction signals of the current frame may be subjected to time domain post-processing, and the left and right channel reconstruction signals of the current frame processed in the time domain may be used as the left and right channel decoding signals of the current frame.
  • an embodiment of the present application further provides an apparatus 1100, which may include:
  • the processor 1110 can be used to perform some or all of the steps of any of the methods provided by the embodiments of the present application.
  • The memory 1120 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and the memory 1120 is used for storing related instructions and data.
  • apparatus 1100 can also include a transceiver 1130 for receiving and transmitting data.
  • The processor 1110 may be one or more central processing units (CPUs). In the case that the processor 1110 is a CPU, the CPU may be a single-core CPU or a multi-core CPU. The processor 1110 may specifically be a digital signal processor.
  • each step of the above method may be completed by an integrated logic circuit of hardware in the processor 1110 or an instruction in a form of software.
  • the processor 1110 can be a general purpose processor, a digital signal processor, an application specific integrated circuit, an off-the-shelf programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the processor 1110 can implement or perform the various methods, steps, and logic blocks disclosed in the embodiments of the present invention.
  • The general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present invention may be directly implemented by the hardware decoding processor, or may be performed by a combination of hardware and software modules in the decoding processor.
  • The software modules can be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers.
  • the storage medium is located in the memory 1120.
  • the processor 1110 can read the information in the memory 1120 and complete the steps of the above method in combination with its hardware.
  • the device 1100 can also include a transceiver 1130 that can be used, for example, for transceiving related data, such as commands or channel signals or streams.
  • the device 1100 can perform some or all of the steps of the corresponding method in the embodiment shown in any of the above-described Figures 2 to 9-D.
  • the device 1100 when the device 1100 performs the correlation step of the above encoding, the device 1100 may be referred to as an encoding device (or an audio encoding device).
  • the device 1100 When the device 1100 performs the related steps of the above decoding, the device 1100 may be referred to as a decoding device (or audio decoding device).
  • the device 1100 may further include, for example, a microphone 1140, an analog to digital converter 1150, and the like.
  • the microphone 1140 can be used, for example, to sample an analog audio signal.
  • Analog to digital converter 1150 can be used, for example, to convert an analog audio signal into a digital audio signal.
  • the device 1100 may further include, for example, a speaker 1160, a digital to analog converter 1170, and the like.
  • Digital to analog converter 1170 can be used, for example, to convert a digital audio signal to an analog audio signal.
  • the speaker 1160 can be used, for example, to play an analog audio signal.
  • an embodiment of the present application provides a device 1200, which includes several functional units for implementing any one of the methods provided by the embodiments of the present application.
  • the device 1200 when the device 1200 performs the corresponding method in the embodiment shown in FIG. 2, the device 1200 can include:
  • the first determining unit 1210 is configured to determine a channel combination scheme of the current frame, and determine an encoding mode of the current frame based on a channel combination scheme of the previous frame and the current frame.
  • the encoding unit 1220 is configured to perform time domain downmix processing on the left and right channel signals of the current frame according to the time domain downmix processing corresponding to the encoding mode of the current frame, to obtain the primary and secondary channel signals of the current frame.
  • the apparatus 1200 may further include a second determining unit 1230 for determining a time domain stereo parameter of the current frame.
  • the encoding unit 1220 can also be used to encode the time domain stereo parameters of the current frame.
  • the device 1200 when the device 1200 performs the corresponding method in the embodiment shown in FIG. 3, the device 1200 can include:
  • The third determining unit 1240 is configured to determine the channel combination scheme of the current frame based on the channel combination scheme identifier of the current frame in the code stream, and to determine the decoding mode of the current frame according to the channel combination scheme of the previous frame and the channel combination scheme of the current frame.
  • The decoding unit 1250 is configured to decode the code stream to obtain the primary and secondary channel decoding signals of the current frame, and to perform, based on the time domain upmix processing corresponding to the decoding mode of the current frame, time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain the left and right channel reconstruction signals of the current frame.
  • The embodiment of the present application provides a computer readable storage medium storing program code, where the program code includes instructions for performing some or all steps of any one of the methods provided by the embodiments of the present application.
  • the embodiment of the present application provides a computer program product, when the computer program product is run on a computer, causing the computer to perform some or all of the steps of any one of the methods provided by the embodiments of the present application.
  • the disclosed apparatus may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • The division of the units is only a logical function division; in actual implementation, there may be another division manner. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • The mutual coupling or direct coupling or communication connection shown or discussed herein may be an indirect coupling or communication connection through some interfaces, devices or units, and may be electrical or in other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • The technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.


Abstract

An audio coding/decoding method and a related apparatus. The method may include: determining an encoding mode of a current frame (401); in a case where it is determined that the encoding mode of the current frame is a non-correlation signal encoding mode, performing, by using a time domain downmix processing manner corresponding to the non-correlation signal encoding mode, time domain downmix processing on left and right channel signals of the current frame to obtain a primary channel signal and a secondary channel signal of the current frame (402), where the time domain downmix processing manner corresponding to the non-correlation signal encoding mode is a time domain downmix processing manner corresponding to a non-correlation signal channel combination scheme, and the non-correlation signal channel combination scheme is a channel combination scheme corresponding to a near anti-phase signal; and encoding the obtained primary channel signal and secondary channel signal of the current frame (403). This helps improve coding and decoding quality.

Description

Time domain stereo coding and decoding method and related products — Technical Field
The present application relates to the technical field of audio coding and decoding, and in particular, to time domain stereo coding and decoding methods and related products.
Background
With the improvement of quality of life, people's demand for high-quality audio is constantly increasing. Compared with mono audio, stereo audio conveys a sense of position and distribution of each sound source, and can improve the clarity, intelligibility and sense of presence of the information, and is therefore highly favored.
Parametric stereo coding/decoding technology compresses a multi-channel signal by converting a stereo signal into a mono signal and spatial perception parameters, and is a common stereo coding/decoding technology. However, because parametric stereo coding/decoding technology usually needs to extract the spatial perception parameters in the frequency domain, a time-frequency transform is required, which makes the delay of the entire codec relatively large. Therefore, in a case where the delay requirement is relatively strict, time domain stereo coding technology is a better choice.
Traditional time domain stereo coding technology downmixes the signal into two mono signals in the time domain. For example, the MS coding technology first downmixes the left and right channel signals into a mid channel signal and a side channel signal. For example, if L denotes the left channel signal and R denotes the right channel signal, the mid channel signal is 0.5*(L+R) and represents the correlation information between the left and right channels, and the side channel signal is 0.5*(L-R) and represents the difference information between the left and right channels. Then, the mid channel signal and the side channel signal are separately encoded by using a mono coding method; the mid channel signal is usually encoded with a relatively large number of bits, and the side channel signal is usually encoded with a relatively small number of bits.
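As a minimal sketch of the MS downmix just described (sample-by-sample, with no quantization or bit allocation):

```python
def ms_downmix(L, R):
    """Downmix left/right channel samples into mid (correlation information)
    and side (difference information) channel signals."""
    mid = [0.5 * (l + r) for l, r in zip(L, R)]   # mid = 0.5*(L+R)
    side = [0.5 * (l - r) for l, r in zip(L, R)]  # side = 0.5*(L-R)
    return mid, side
```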
Through research and practice, the inventors of the present application found that when traditional time domain stereo coding technology is used, the energy of the primary signal is sometimes particularly small or even missing, which in turn degrades the final coding quality.
Summary
Embodiments of the present application provide time domain stereo coding methods and related products.
According to a first aspect, an embodiment of the present application provides a time domain stereo coding method, which may include: determining an encoding mode of a current frame; in a case where it is determined that the encoding mode of the current frame is a non-correlation signal encoding mode, performing, by using a time domain downmix processing manner corresponding to the non-correlation signal encoding mode, time domain downmix processing on left and right channel signals of the current frame to obtain primary and secondary channel signals (that is, a primary channel signal and a secondary channel signal) of the current frame, where the time domain downmix processing manner corresponding to the non-correlation signal encoding mode is a time domain downmix processing manner corresponding to a non-correlation signal channel combination scheme, and the non-correlation signal channel combination scheme is a channel combination scheme corresponding to a near anti-phase signal; and encoding the obtained primary and secondary channel signals of the current frame.
The stereo signal of the current frame consists, for example, of the left and right channel signals of the current frame.
The encoding mode of the current frame may be one of multiple encoding modes. For example, the encoding mode of the current frame may be one of the following encoding modes: a correlation signal encoding mode, a non-correlation signal encoding mode, a correlation to non-correlation signal encoding mode, and a non-correlation to correlation signal encoding mode.
It can be understood that the above solution requires determining the encoding mode of the current frame, which means that there are multiple possibilities for the encoding mode of the current frame. Compared with a traditional solution having only a single encoding mode, having multiple possible encoding modes for multiple possible scenarios helps achieve better compatibility and matching. Moreover, because a channel combination scheme corresponding to near anti-phase signals is introduced, a more targeted channel combination scheme and encoding mode are available for the case where the stereo signal of the current frame is a near anti-phase signal, which helps improve coding quality.
In some possible implementations, the method may further include: in a case where it is determined that the encoding mode of the current frame is a correlation signal encoding mode, performing, by using a time domain downmix processing manner corresponding to the correlation signal encoding mode, time domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame, where the time domain downmix processing manner corresponding to the correlation signal encoding mode is a time domain downmix processing manner corresponding to a correlation signal channel combination scheme, and the correlation signal channel combination scheme is a channel combination scheme corresponding to a near in-phase signal.
In some possible implementations, the method may further include: in a case where it is determined that the encoding mode of the current frame is a correlation to non-correlation signal encoding mode, performing, by using a time domain downmix processing manner corresponding to the correlation to non-correlation signal encoding mode, time domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame, where the time domain downmix processing manner corresponding to the correlation to non-correlation signal encoding mode is a time domain downmix processing manner corresponding to transitioning from the correlation signal channel combination scheme to the non-correlation signal channel combination scheme.
In some possible implementations, the method may further include: in a case where it is determined that the encoding mode of the current frame is a non-correlation to correlation signal encoding mode, performing, by using a time domain downmix processing manner corresponding to the non-correlation to correlation signal encoding mode, time domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame, where the time domain downmix processing manner corresponding to the non-correlation to correlation signal encoding mode is a time domain downmix processing manner corresponding to transitioning from the non-correlation signal channel combination scheme to the correlation signal channel combination scheme.
It can be understood that the time domain downmix processing manners corresponding to different encoding modes are usually different, and each encoding mode may also correspond to one or more time domain downmix processing manners.
For example, in some possible implementations, performing, by using the time domain downmix processing manner corresponding to the non-correlation signal encoding mode, time domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame may include: performing time domain downmix processing on the left and right channel signals of the current frame according to the channel combination scale factor of the non-correlation signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame; or performing time domain downmix processing on the left and right channel signals of the current frame according to the channel combination scale factors of the non-correlation signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
It can be understood that the channel combination scale factor of a channel combination scheme (for example, the non-correlation signal channel combination scheme or the correlation signal channel combination scheme) of an audio frame (for example, the current frame or the previous frame) may be a preset fixed value. Of course, the channel combination scale factor of an audio frame may also be determined according to the channel combination scheme of the audio frame.
In some possible implementations, a corresponding downmix matrix may be constructed based on the channel combination scale factor of the audio frame, and the downmix matrix corresponding to the channel combination scheme is used to perform time domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame.
For example, in a case where time domain downmix processing is performed on the left and right channel signals of the current frame according to the channel combination scale factor of the non-correlation signal channel combination scheme of the current frame to obtain the primary and secondary channel signals of the current frame,
Figure PCTCN2018100060-appb-000001
For another example, in a case where time domain downmix processing is performed on the left and right channel signals of the current frame according to the channel combination scale factors of the non-correlation signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame,
Figure PCTCN2018100060-appb-000002
Figure PCTCN2018100060-appb-000003
where delay_com denotes the encoding delay compensation.
For another example, in a case where time domain downmix processing is performed on the left and right channel signals of the current frame according to the channel combination scale factors of the non-correlation signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame,
Figure PCTCN2018100060-appb-000004
Figure PCTCN2018100060-appb-000005
Figure PCTCN2018100060-appb-000006
where fade_in(n) denotes a fade-in factor, for example,
Figure PCTCN2018100060-appb-000007
Of course, fade_in(n) may also be a fade-in factor based on another functional relationship of n.
fade_out(n) denotes a fade-out factor, for example,
Figure PCTCN2018100060-appb-000008
Of course, fade_out(n) may also be a fade-out factor based on another functional relationship of n.
NOVA_1 denotes a transition processing length, whose value may be set according to the needs of a specific scenario; for example, NOVA_1 may be equal to 3/N, or NOVA_1 may be another value less than N.
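The fade-in/fade-out transition described above can be sketched as follows. The exact fade factors in the patent are given as image formulas; the linear ramps fade_in(n) = n/NOVA_1 and fade_out(n) = 1 - n/NOVA_1 used here are one possible choice assumed purely for illustration, since the text notes that other functional relationships of n are possible:

```python
def transition_downmix(y_prev, y_cur, nova_1):
    """Cross-fade from the previous scheme's downmix output y_prev to the
    current scheme's downmix output y_cur over the first nova_1 samples."""
    out = []
    for n in range(len(y_cur)):
        if n < nova_1:
            fade_in = n / nova_1      # weight of the current scheme's output
            fade_out = 1.0 - fade_in  # weight of the previous scheme's output
            out.append(fade_out * y_prev[n] + fade_in * y_cur[n])
        else:
            out.append(y_cur[n])      # after the transition, use the current scheme only
    return out
```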
For another example, in a case where time domain downmix processing is performed on the left and right channel signals of the current frame by using the time domain downmix processing manner corresponding to the correlation signal encoding mode, to obtain the primary and secondary channel signals of the current frame,
Figure PCTCN2018100060-appb-000009
In the above examples, X_L(n) denotes the left channel signal of the current frame, and X_R(n) denotes the right channel signal of the current frame. Y(n) denotes the primary channel signal of the current frame obtained through time domain downmix processing, and X(n) denotes the secondary channel signal of the current frame obtained through time domain downmix processing.
In the above examples, n denotes a sample index, for example, n = 0, 1, …, N-1.
In the above examples, delay_com denotes the encoding delay compensation.
M_11 denotes the downmix matrix corresponding to the correlation signal channel combination scheme of the previous frame, and M_11 is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
M_12 denotes the downmix matrix corresponding to the non-correlation signal channel combination scheme of the previous frame, and M_12 is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
M_22 denotes the downmix matrix corresponding to the non-correlation signal channel combination scheme of the current frame, and M_22 is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
M_21 denotes the downmix matrix corresponding to the correlation signal channel combination scheme of the current frame, and M_21 is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
M_21 may take multiple forms, for example:
Figure PCTCN2018100060-appb-000010
Figure PCTCN2018100060-appb-000011
where ratio denotes the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
M_22 may take multiple forms, for example:
Figure PCTCN2018100060-appb-000012
Figure PCTCN2018100060-appb-000013
Figure PCTCN2018100060-appb-000014
Figure PCTCN2018100060-appb-000015
Figure PCTCN2018100060-appb-000016
Figure PCTCN2018100060-appb-000017
where α_1 = ratio_SM and α_2 = 1 - ratio_SM; ratio_SM denotes the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
M_12 may take multiple forms, for example:
Figure PCTCN2018100060-appb-000018
Figure PCTCN2018100060-appb-000019
Figure PCTCN2018100060-appb-000020
Figure PCTCN2018100060-appb-000021
Figure PCTCN2018100060-appb-000022
Figure PCTCN2018100060-appb-000023
where α_1_pre = tdm_last_ratio_SM and α_2_pre = 1 - tdm_last_ratio_SM; tdm_last_ratio_SM denotes the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
The left and right channel signals of the current frame may specifically be the original left and right channel signals of the current frame (the original left and right channel signals are left and right channel signals that have not undergone time domain preprocessing, and may, for example, be left and right channel signals obtained by sampling), or may be the time-domain-preprocessed left and right channel signals of the current frame, or may be the delay-aligned left and right channel signals of the current frame.
Specifically, for example,
Figure PCTCN2018100060-appb-000024
Figure PCTCN2018100060-appb-000025
Figure PCTCN2018100060-appb-000026
where x_L(n) denotes the original left channel signal of the current frame, and x_R(n) denotes the original right channel signal of the current frame; x_L_HP(n) denotes the time-domain-preprocessed left channel signal of the current frame, and x_R_HP(n) denotes the time-domain-preprocessed right channel signal of the current frame; x′_L(n) denotes the delay-aligned left channel signal of the current frame, and x′_R(n) denotes the delay-aligned right channel signal of the current frame.
According to a second aspect, an embodiment of the present application further provides a time domain stereo decoding method, which may include:
decoding a code stream to obtain primary and secondary channel decoding signals of a current frame; determining a decoding mode of the current frame; and in a case where it is determined that the decoding mode of the current frame is a non-correlation signal decoding mode, performing, by using a time domain upmix processing manner corresponding to the non-correlation signal decoding mode, time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain left and right channel reconstruction signals of the current frame, where the time domain upmix processing manner corresponding to the non-correlation signal decoding mode is a time domain upmix processing manner corresponding to a non-correlation signal channel combination scheme, and the non-correlation signal channel combination scheme is a channel combination scheme corresponding to a near anti-phase signal.
The decoding mode of the current frame may be one of multiple decoding modes. For example, the decoding mode of the current frame may be one of the following decoding modes: a correlation signal decoding mode, a non-correlation signal decoding mode, a correlation to non-correlation signal decoding mode, and a non-correlation to correlation signal decoding mode.
It can be understood that the above solution requires determining the decoding mode of the current frame, which means that there are multiple possibilities for the decoding mode of the current frame. Compared with a traditional solution having only a single decoding mode, having multiple possible decoding modes for multiple possible scenarios helps achieve better compatibility and matching. Moreover, because a channel combination scheme corresponding to near anti-phase signals is introduced, a more targeted channel combination scheme and decoding mode are available for the case where the stereo signal of the current frame is a near anti-phase signal, which helps improve decoding quality.
In some possible implementations, the method may further include:
in a case where it is determined that the decoding mode of the current frame is a correlation signal decoding mode, performing, by using a time domain upmix processing manner corresponding to the correlation signal decoding mode, time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain the left and right channel reconstruction signals of the current frame, where the time domain upmix processing manner corresponding to the correlation signal decoding mode is a time domain upmix processing manner corresponding to a correlation signal channel combination scheme, and the correlation signal channel combination scheme is a channel combination scheme corresponding to a near in-phase signal.
In some possible implementations, the method may further include: in a case where it is determined that the decoding mode of the current frame is a correlation to non-correlation signal decoding mode, performing, by using a time domain upmix processing manner corresponding to the correlation to non-correlation signal decoding mode, time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain the left and right channel reconstruction signals of the current frame, where the time domain upmix processing manner corresponding to the correlation to non-correlation signal decoding mode is a time domain upmix processing manner corresponding to transitioning from the correlation signal channel combination scheme to the non-correlation signal channel combination scheme.
In some possible implementations, the method may further include: in a case where it is determined that the decoding mode of the current frame is a non-correlation to correlation signal decoding mode, performing, by using a time domain upmix processing manner corresponding to the non-correlation to correlation signal decoding mode, time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain the left and right channel reconstruction signals of the current frame, where the time domain upmix processing manner corresponding to the non-correlation to correlation signal decoding mode is a time domain upmix processing manner corresponding to transitioning from the non-correlation signal channel combination scheme to the correlation signal channel combination scheme.
It can be understood that the time domain upmix processing manners corresponding to different decoding modes are usually different, and each decoding mode may also correspond to one or more time domain upmix processing manners.
For example, in some possible implementations, performing, by using the time domain upmix processing manner corresponding to the non-correlation signal decoding mode, time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain the left and right channel reconstruction signals of the current frame includes:
performing time domain upmix processing on the primary and secondary channel decoding signals of the current frame according to the channel combination scale factor of the non-correlation signal channel combination scheme of the current frame, to obtain the left and right channel reconstruction signals of the current frame; or performing time domain upmix processing on the primary and secondary channel decoding signals of the current frame according to the channel combination scale factors of the non-correlation signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstruction signals of the current frame.
In some possible implementations, a corresponding upmix matrix may be constructed based on the channel combination scale factor of the audio frame, and the upmix matrix corresponding to the channel combination scheme is used to perform time domain upmix processing on the primary and secondary channel decoding signals of the current frame to obtain the left and right channel reconstruction signals of the current frame.
For example, in a case where time domain upmix processing is performed on the primary and secondary channel decoding signals of the current frame according to the channel combination scale factor of the non-correlation signal channel combination scheme of the current frame to obtain the left and right channel reconstruction signals of the current frame,
Figure PCTCN2018100060-appb-000027
For another example, in a case where time domain upmix processing is performed on the primary and secondary channel decoding signals of the current frame according to the channel combination scale factors of the non-correlation signal channel combination schemes of the current frame and the previous frame to obtain the left and right channel reconstruction signals of the current frame,
Figure PCTCN2018100060-appb-000028
Figure PCTCN2018100060-appb-000029
where delay_com denotes the encoding delay compensation.
For another example, in a case where time domain upmix processing is performed on the primary and secondary channel decoding signals of the current frame according to the channel combination scale factors of the non-correlation signal channel combination schemes of the current frame and the previous frame to obtain the left and right channel reconstruction signals of the current frame,
Figure PCTCN2018100060-appb-000030
Figure PCTCN2018100060-appb-000031
Figure PCTCN2018100060-appb-000032
where the
Figure PCTCN2018100060-appb-000033
denotes the left channel decoding signal of the current frame, the
Figure PCTCN2018100060-appb-000034
denotes the right channel reconstruction signal of the current frame, the
Figure PCTCN2018100060-appb-000035
denotes the primary channel decoding signal of the current frame, and the
Figure PCTCN2018100060-appb-000036
denotes the secondary channel decoding signal of the current frame;
where NOVA_1 denotes the transition processing length.
fade_in(n) denotes a fade-in factor, for example,
Figure PCTCN2018100060-appb-000037
Of course, fade_in(n) may also be a fade-in factor based on another functional relationship of n.
fade_out(n) denotes a fade-out factor, for example,
Figure PCTCN2018100060-appb-000038
Of course, fade_out(n) may also be a fade-out factor based on another functional relationship of n.
NOVA_1 denotes a transition processing length, whose value may be set according to the needs of a specific scenario; for example, NOVA_1 may be equal to 3/N, or NOVA_1 may be another value less than N.
For another example, in a case where time domain upmix processing is performed on the primary and secondary channel decoding signals of the current frame according to the channel combination scale factor of the correlation signal channel combination scheme of the current frame to obtain the left and right channel reconstruction signals of the current frame,
Figure PCTCN2018100060-appb-000039
In the above examples, the
Figure PCTCN2018100060-appb-000040
denotes the left channel decoding signal of the current frame. The
Figure PCTCN2018100060-appb-000041
denotes the right channel reconstruction signal of the current frame. The
Figure PCTCN2018100060-appb-000042
denotes the primary channel decoding signal of the current frame. The
Figure PCTCN2018100060-appb-000043
denotes the secondary channel decoding signal of the current frame.
In the above examples, n denotes a sample index, for example, n = 0, 1, …, N-1.
In the above examples, upmixing_delay denotes the decoding delay compensation;
Figure PCTCN2018100060-appb-000044
denotes the upmix matrix corresponding to the correlation signal channel combination scheme of the previous frame, and the
Figure PCTCN2018100060-appb-000045
is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the previous frame.
The
Figure PCTCN2018100060-appb-000046
denotes the upmix matrix corresponding to the non-correlation signal channel combination scheme of the current frame, and the
Figure PCTCN2018100060-appb-000047
is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
The
Figure PCTCN2018100060-appb-000048
denotes the upmix matrix corresponding to the non-correlation signal channel combination scheme of the previous frame, and the
Figure PCTCN2018100060-appb-000049
is constructed based on the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
The
Figure PCTCN2018100060-appb-000050
denotes the upmix matrix corresponding to the correlation signal channel combination scheme of the current frame, and the
Figure PCTCN2018100060-appb-000051
is constructed based on the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
Where the
Figure PCTCN2018100060-appb-000052
may take multiple forms, for example:
Figure PCTCN2018100060-appb-000053
Figure PCTCN2018100060-appb-000054
Figure PCTCN2018100060-appb-000055
Figure PCTCN2018100060-appb-000056
Figure PCTCN2018100060-appb-000057
Figure PCTCN2018100060-appb-000058
where α_1 = ratio_SM and α_2 = 1 - ratio_SM; ratio_SM denotes the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the current frame.
Where the
Figure PCTCN2018100060-appb-000059
may take multiple forms, for example:
Figure PCTCN2018100060-appb-000060
Figure PCTCN2018100060-appb-000061
Figure PCTCN2018100060-appb-000062
Figure PCTCN2018100060-appb-000063
Figure PCTCN2018100060-appb-000064
Figure PCTCN2018100060-appb-000065
where α_1_pre = tdm_last_ratio_SM and α_2_pre = 1 - tdm_last_ratio_SM.
tdm_last_ratio_SM denotes the channel combination scale factor corresponding to the non-correlation signal channel combination scheme of the previous frame.
Where the
Figure PCTCN2018100060-appb-000066
may take multiple forms, for example:
Figure PCTCN2018100060-appb-000067
Figure PCTCN2018100060-appb-000068
where ratio denotes the channel combination scale factor corresponding to the correlation signal channel combination scheme of the current frame.
According to a third aspect, an embodiment of the present application further provides a time domain stereo coding apparatus, which may include a processor and a memory coupled to each other, where the processor may be configured to perform some or all of the steps of any one of the stereo coding methods of the first aspect.
According to a fourth aspect, an embodiment of the present application further provides a time domain stereo decoding apparatus, which may include a processor and a memory coupled to each other, where the processor may be configured to perform some or all of the steps of any one of the stereo decoding methods of the second aspect.
According to a fifth aspect, an embodiment of the present application provides a time domain stereo coding apparatus, including several functional units for implementing any one of the methods of the first aspect.
According to a sixth aspect, an embodiment of the present application provides a time domain stereo decoding apparatus, including several functional units for implementing any one of the methods of the second aspect.
According to a seventh aspect, an embodiment of the present application provides a computer readable storage medium storing program code, where the program code includes instructions for performing some or all of the steps of any one of the methods of the first aspect.
According to an eighth aspect, an embodiment of the present application provides a computer readable storage medium storing program code, where the program code includes instructions for performing some or all of the steps of any one of the methods of the second aspect.
According to a ninth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the first aspect.
According to a tenth aspect, an embodiment of the present application provides a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of any one of the methods of the second aspect.
Brief Description of the Drawings
The accompanying drawings involved in the embodiments or the background of the present application are described below.
FIG. 1 is a schematic diagram of a near anti-phase signal according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of an audio coding method according to an embodiment of the present application;
FIG. 3 is a schematic flowchart of a method for determining an audio decoding mode according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of another audio coding method according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of an audio decoding method according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of another audio coding method according to an embodiment of the present application;
FIG. 7 is a schematic flowchart of another audio decoding method according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of a method for determining time domain stereo parameters according to an embodiment of the present application;
FIG. 9-A is a schematic flowchart of another audio coding method according to an embodiment of the present application;
FIG. 9-B is a schematic flowchart of a method for calculating and encoding a channel combination scale factor corresponding to a non-correlation signal channel combination scheme of a current frame according to an embodiment of the present application;
FIG. 9-C is a schematic flowchart of a method for calculating an amplitude correlation difference parameter between the left and right channels of a current frame according to an embodiment of the present application;
FIG. 9-D is a schematic flowchart of a method for converting an amplitude correlation difference parameter between the left and right channels of a current frame into a channel combination scale factor according to an embodiment of the present application;
FIG. 10 is a schematic flowchart of another audio decoding method according to an embodiment of the present application;
FIG. 11-A is a schematic diagram of an apparatus according to an embodiment of the present application;
FIG. 11-B is a schematic diagram of another apparatus according to an embodiment of the present application;
FIG. 11-C is a schematic diagram of another apparatus according to an embodiment of the present application;
FIG. 12-A is a schematic diagram of another apparatus according to an embodiment of the present application;
FIG. 12-B is a schematic diagram of another apparatus according to an embodiment of the present application;
FIG. 12-C is a schematic diagram of another apparatus according to an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application.
The terms "including" and "having" and any variations thereof in the specification, the claims and the above drawings of the present application are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device including a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product or device. In addition, the terms "first", "second", "third" and "fourth" are used to distinguish different objects, not to describe a specific order.
It should be noted that, because the solutions of the embodiments of the present application are directed to time domain scenarios, a time domain signal may be referred to as a "signal" for short in order to simplify the description. For example, a left channel time domain signal may be referred to as a "left channel signal" for short. For another example, a right channel time domain signal may be referred to as a "right channel signal" for short. For another example, a mono time domain signal may be referred to as a "mono signal" for short, a reference channel time domain signal may be referred to as a "reference channel signal" for short, a primary channel time domain signal may be referred to as a "primary channel signal" for short, and a secondary channel time domain signal may be referred to as a "secondary channel signal" for short. For another example, a mid channel time domain signal may be referred to as a "mid channel signal" for short, and a side channel time domain signal may be referred to as a "side channel signal" for short. Other cases can be deduced by analogy.
It should be noted that, in the embodiments of the present application, the left channel time domain signal and the right channel time domain signal may be collectively referred to as "left and right channel time domain signals" or "left and right channel signals". That is, the left and right channel time domain signals include the left channel time domain signal and the right channel time domain signal. For another example, the delay-aligned left and right channel time domain signals of the current frame include the delay-aligned left channel time domain signal of the current frame and the delay-aligned right channel time domain signal of the current frame. Similarly, the primary channel signal and the secondary channel signal may be collectively referred to as "primary and secondary channel signals". That is, the primary and secondary channel signals include the primary channel signal and the secondary channel signal. For another example, the primary and secondary channel decoding signals include the primary channel decoding signal and the secondary channel decoding signal, and the left and right channel reconstruction signals include the left channel reconstruction signal and the right channel reconstruction signal. And so on.
其中，例如传统MS编码技术先将左右声道信号下混为中央通道(Mid channel)信号和边通道(Side channel)信号。例如L表示左声道信号，R表示右声道信号，则Mid channel信号为0.5*(L+R)，Mid channel信号表征了左右两个声道之间的相关信息。Side channel信号为0.5*(L-R)，Side channel信号表征了左右两个声道之间的差异信息。然后，分别对Mid channel信号和Side channel信号采用单声道编码方法编码。其中，对于Mid channel信号，通常用相对较多比特数进行编码；对于Side channel信号，通常用相对较少比特数进行编码。
进一步的，为了提高编码质量，一些方案通过对左右声道的时域信号进行分析，提取用于指示时域下混处理中左右声道所占比例的时域立体声参数。提出这种方法的目的是：当立体声左右声道信号之间的能量相差比较大的时候，有利于提升时域下混信号中的主要声道的能量，降低次要声道的能量。例如，L表示左声道信号，R表示右声道信号，那么主要声道(Primary channel)信号记作Y，Y=alpha*L+beta*R，其中，Y表征了两个声道之间的相关信息。次要声道(Secondary channel)信号记作X，X=alpha*L-beta*R，X表征了两个声道之间的差异信息。alpha和beta为0到1的实数。
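作为一个非规范性的说明草图（假设信号以样点列表表示，函数名为本示例的示意性命名，并非专利方案的具体实现），上述MS下混与带比例因子的时域下混可示意如下：

```python
def ms_downmix(L, R):
    """传统MS下混：Mid = 0.5*(L+R)，Side = 0.5*(L-R)。"""
    mid = [0.5 * (l + r) for l, r in zip(L, R)]
    side = [0.5 * (l - r) for l, r in zip(L, R)]
    return mid, side


def td_downmix(L, R, alpha, beta):
    """带比例因子的时域下混：主要声道 Y = alpha*L + beta*R，
    次要声道 X = alpha*L - beta*R，其中 alpha、beta 为0到1的实数。"""
    Y = [alpha * l + beta * r for l, r in zip(L, R)]
    X = [alpha * l - beta * r for l, r in zip(L, R)]
    return Y, X
```

可以注意到，当 alpha=beta=0.5 时，td_downmix 退化为传统的MS下混。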
参见图1,图1示出了一种左声道信号和右声道信号的幅度变化情况。在时域某一时刻上,左声道信号、右声道信号的对应样点之间幅度的绝对值基本相同,但是符号相反,这种就是典型的类反相信号。图1只是给出了类反相信号的一个典型例子。实际上类反相信号是指左右声道信号之间的相位差接近180度的立体声信号。例如可将左右声道信号之间的相位差属于[180-θ,180+θ]的立体声信号称作类反相信号,其中,θ可取0°到90°之间的任意角度,例如θ可等于0°、5°、15°、17°、20°、30°、40°等角度。
类似的,类正相信号是指左右声道信号之间的相位差接近0度的立体声信号。例如可将左右声道信号之间的相位差属于[-θ,θ]的立体声信号称作类正相信号。θ可取0°到90°之间的任意角度,例如θ可等于0°、5°、15°、17°、20°、30°、40°等角度。
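下面给出一个示意性草图，用归一化互相关的反余弦来近似左右声道之间的整体相位差，并按上述阈值θ做类正相/类反相分类；该近似方法与返回值命名均为本示例的假设，并非专利方案中的具体判定算法：

```python
import math


def phase_class(L, R, theta_deg=30.0):
    """按左右声道整体相位差做类正相/类反相分类的示意实现。
    以归一化互相关的反余弦近似相位差（度），theta_deg 对应文中的θ。"""
    dot = sum(l * r for l, r in zip(L, R))
    nl = math.sqrt(sum(l * l for l in L))
    nr = math.sqrt(sum(r * r for r in R))
    if nl == 0.0 or nr == 0.0:
        return "unknown"
    cos_phi = max(-1.0, min(1.0, dot / (nl * nr)))  # 数值截断，避免越界
    phi = math.degrees(math.acos(cos_phi))
    if phi <= theta_deg:
        return "near_in_phase"       # 相位差属于[-θ,θ]：类正相信号
    if phi >= 180.0 - theta_deg:
        return "near_out_of_phase"   # 相位差属于[180-θ,180+θ]：类反相信号
    return "other"
```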
当左右声道信号为类正相信号时,时域下混处理生成的主要声道信号能量往往明显大于次要声道信号的能量。若用较多的比特数对主要声道信号进行编码,同时用较少的比特数对次要声道信号进行编码,那么有利于获得较好的编码效果。但是,当左右声道信号为类反相信号时,如果采用相同的时域下混处理方法,则生成的主要声道信号能量会出现特别小甚至能量缺失的现象,进而导致最终编码质量下降。
下面继续探讨一些有利于提升立体声编解码质量的技术方案。
本申请实施例提及的编码装置和解码装置可为具有采集、存储、向外传输话音信号等功能的装置,具体的,编码装置和解码装置例如可为手机、服务器、平板电脑、个人电脑或笔记本电脑等等。
可以理解,本申请方案中,左右声道信号是指立体声信号的左右声道信号。立体声信号可以是原始的立体声信号,也可以是多声道信号中包含的两路信号组成的立体声信号,还可以是由多声道信号中包含的多路信号联合产生的两路信号组成的立体声信号。其中,立体声编码方法,也可以是多声道编码中使用的立体声编码方法。立体声编码装置,也可以是多声道编码装置中使用的立体声编码装置。立体声解码方法,也可以是多声道解码中使用的立体声解码方法。立体声解码装置,也可以是多声道解码装置中使用的立体声解码装置。本申请实施例中的音频编码方法例如针对的是立体声编码场景,本申请实施例中的音频解码方法例如针对的是立体声解码场景。
下面首先提供一种音频编码模式确定方法,可包括:确定当前帧的声道组合方案,基于前一帧和当前帧的声道组合方案确定当前帧的编码模式。
参见图2,图2是本申请实施例提供的一种音频编码方法的流程示意图。一种音频编码方法的相关步骤可由编码装置来实施,例如可包括如下步骤:
201、确定当前帧的声道组合方案。
其中,所述当前帧的声道组合方案为多种声道组合方案中的其中一种。例如所述多种声道组合方案包括非相关性信号声道组合方案(anticorrelated signal Channel Combination Scheme)和相关性信号声道组合方案(correlated signal Channel Combination Scheme)。其中,所述相关性信号声道组合方案为类正相信号对应的声道组合方案。所述非相关性信号声道组合方案为类反相信号对应的声道组合方案。可以理解,类正相信号对应的声道组合方案适用于类正相信号,类反相信号对应的声道组合方案适用于类反相信号。
202、基于前一帧和当前帧的声道组合方案确定当前帧的编码模式。
此外,若当前帧为第一帧(即不存在当前帧的前一帧)的情况下,可以基于当前帧的声道组合方案确定当前帧的编码模式。或者,也可以将默认的某种编码模式作为当前帧的编码模式。
其中，所述当前帧的编码模式为多种编码模式中的其中一种。例如所述多种编码模式可包括：相关性信号到非相关性信号编码模式(correlated-to-anticorrelated signal coding switching mode)、非相关性信号到相关性信号编码模式(anticorrelated-to-correlated signal coding switching mode)、相关性信号编码模式(correlated signal coding mode)和非相关性信号编码模式(anticorrelated signal coding mode)等。
其中,相关性信号到非相关性信号编码模式对应的时域下混模式例如可称为“相关性信号到非相关性信号下混模式”(correlated-to-anticorrelated signal downmix switching mode)。非相关性信号到相关性信号编码模式对应的时域下混模式例如可称为“非相关性信号到相关性信号下混模式”(anticorrelated-to-correlated signal downmix switching mode)。相关性信号编码模式对应的时域下混模式例如可称为“相关性信号下混模式”(correlated signal downmix mode)。非相关性信号编码模式对应的时域下混模式例如可称为“非相关性信号下混模式”(anticorrelated signal downmix mode)。
可以理解,本申请实施例中对编码模式、解码模式和声道组合方案等对象的命名都是示意性的,在实际应用中也可能选用其他名称。
203、基于当前帧的编码模式所对应的时域下混处理对当前帧的左右声道信号进行时域下混处理,以得到当前帧的主次声道信号。
其中,对当前帧的左右声道信号进行时域下混处理可得到当前帧的主次声道信号,通过进一步对主次声道信号进行编码以得到码流。可进一步将当前帧的声道组合方案标识(当前帧的声道组合方案标识用于指示当前帧的声道组合方案)写入码流,以便于解码装置基于码流中包含的当前帧的声道组合方案标识来确定当前帧的声道组合方案。
其中，根据前一帧的声道组合方案和所述当前帧的声道组合方案确定所述当前帧的编码模式的具体实现方式可以是多种多样的。
具体例如,在一些可能的实施方式中,根据前一帧的声道组合方案和所述当前帧的声道组合方案确定所述当前帧的编码模式,可包括:
在前一帧的声道组合方案为相关性信号声道组合方案,并且当前帧的声道组合方案为非相关性信号声道组合方案的情况下,确定所述当前帧的编码模式为相关性信号到非相关性信号编码模式,其中,相关性信号到非相关性信号编码模式采用从相关性信号声道组合方案过渡到非相关性信号声道组合方案 对应的下混处理方法进行时域下混处理。
或者,在前一帧的声道组合方案为非相关性信号声道组合方案,并且所述当前帧的声道组合方案为非相关性信号声道组合方案的情况下,确定所述当前帧的编码模式为非相关性信号编码模式,所述非相关性信号编码模式采用非相关性信号声道组合方案对应的下混处理方法进行时域下混处理。
或者，在前一帧的声道组合方案为非相关性信号声道组合方案，并且当前帧的声道组合方案为相关性信号声道组合方案的情况下，确定所述当前帧的编码模式为非相关性信号到相关性信号编码模式，所述非相关性信号到相关性信号编码模式采用从非相关性信号声道组合方案过渡到相关性信号声道组合方案对应的下混处理方法进行时域下混处理。其中，非相关性信号到相关性信号编码模式对应的时域下混处理方式具体可为分段时域下混方式，具体可以根据所述当前帧和前一帧的声道组合方案对所述当前帧的左右声道信号进行分段时域下混处理。
或者，当前一帧的声道组合方案为相关性信号声道组合方案，当前帧的声道组合方案为相关性信号声道组合方案，确定所述当前帧的编码模式为相关性信号编码模式，所述相关性信号编码模式采用相关性信号声道组合方案对应的下混处理方法进行时域下混处理。
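上述四种判决分支可用如下示意性草图概括（"correlated"/"anticorrelated"等字符串取值为本示例假设的方案标识，并非专利方案规定的具体取值）：

```python
def coding_mode(prev_scheme, cur_scheme):
    """依据前一帧和当前帧的声道组合方案确定当前帧的编码模式（示意）。"""
    if prev_scheme == "correlated" and cur_scheme == "anticorrelated":
        return "correlated_to_anticorrelated"  # 相关性信号到非相关性信号编码模式
    if prev_scheme == "anticorrelated" and cur_scheme == "correlated":
        return "anticorrelated_to_correlated"  # 非相关性信号到相关性信号编码模式
    if prev_scheme == "anticorrelated" and cur_scheme == "anticorrelated":
        return "anticorrelated"                # 非相关性信号编码模式
    return "correlated"                        # 相关性信号编码模式
```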
可以理解,不同的编码模式所对应的时域下混处理方式通常不同。并且每种编码模式也可能对应一种或多种时域下混处理方式。
例如,在一些可能实施方式中,在确定所述当前帧的编码模式为相关性信号编码模式的情况下,采用所述相关性信号编码模式对应的时域下混处理方式,对所述当前帧的左右声道信号进行时域下混处理以得到所述当前帧的主次声道信号,所述相关性信号编码模式对应的时域下混处理方式为相关性信号声道组合方案对应的时域下混处理方式。
又例如,在一些可能实施方式中,在确定所述当前帧的编码模式为非相关性信号编码模式的情况下,采用所述非相关性信号编码模式对应的时域下混处理方式,对所述当前帧的左右声道信号进行时域下混处理以得到所述当前帧的主次声道信号。所述非相关性信号编码模式对应的时域下混处理方式为非相关性信号声道组合方案对应的时域下混处理方式。
又例如，在一些可能实施方式中，在确定所述当前帧的编码模式为相关性到非相关性信号编码模式的情况下，采用相关性到非相关性信号编码模式对应的时域下混处理方式，对所述当前帧的左右声道信号进行时域下混处理以得到所述当前帧的主次声道信号，所述相关性到非相关性信号编码模式对应的时域下混处理方式为从相关性信号声道组合方案过渡到非相关性信号声道组合方案对应的时域下混处理方式。其中，所述相关性信号到非相关性信号编码模式对应的时域下混处理方式具体可为分段时域下混方式，具体可根据所述当前帧和前一帧的声道组合方案对所述当前帧的左右声道信号进行分段时域下混处理。
又例如，在一些可能实施方式中，在确定所述当前帧的编码模式为非相关性到相关性信号编码模式的情况下，采用所述非相关性到相关性信号编码模式对应的时域下混处理方式，对所述当前帧的左右声道信号进行时域下混处理以得到所述当前帧的主次声道信号，所述非相关性到相关性信号编码模式对应的时域下混处理方式为从非相关性信号声道组合方案过渡到相关性信号声道组合方案对应的时域下混处理方式。
可以理解,不同的编码模式所对应的时域下混处理方式通常不同。并且每种编码模式也可能对应一种或多种时域下混处理方式。
举例来说,在一些可能的实施方式之中,采用所述非相关性信号编码模式对应的时域下混处理方式,对所述当前帧的左右声道信号进行时域下混处理以得到所述当前帧的主次声道信号,可包括:根据所述当前帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号;或者根据所述当前帧和前一帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号。
可以理解,上述方案中需确定当前帧的声道组合方案,这就表示当前帧的声道组合方案存在多种可能,这相对于只有唯一一种声道组合方案的传统方案而言,多种可能的声道组合方案和多种可能场景之间有利于获得更好的兼容匹配效果。上述方案中需基于前一帧的声道组合方案和所述当前帧的声道组合方案来确定当前帧的编码模式,当前帧的编码模式存在多种可能,而这相对于只有唯一一种编码模式的传统方案而言,多种可能的编码模式和多种可能场景之间有利于获得更好的兼容匹配效果。
具体例如，在所述当前帧和前一帧的声道组合方案不同的情况下，可确定当前帧的编码模式，例如可能为相关性信号到非相关性信号编码模式或非相关性信号到相关性信号编码模式，那么，可根据所述当前帧和前一帧的声道组合方案对所述当前帧的左右声道信号进行分段时域下混处理。
由于在所述当前帧和前一帧的声道组合方案不同的情况下引入了对所述当前帧的左右声道信号进行分段时域下混处理的机制，分段时域下混处理机制有利于实现声道组合方案的平滑过渡，进而有利于提高编码质量。
相应的,下面针对时域立体声的解码场景进行举例说明。
参见图3,下面还提供一种音频解码模式确定方法,音频解码模式确定方法的相关步骤可由解码装置来实施,方法具体可包括:
301、基于码流中的当前帧的声道组合方案标识确定当前帧的声道组合方案。
302、根据前一帧的声道组合方案和所述当前帧的声道组合方案,确定所述当前帧的解码模式。
其中，所述当前帧的解码模式为多种解码模式中的其中一种。例如所述多种解码模式可包括：相关性信号到非相关性信号解码模式(correlated-to-anticorrelated signal decoding switching mode)、非相关性信号到相关性信号解码模式(anticorrelated-to-correlated signal decoding switching mode)、相关性信号解码模式(correlated signal decoding mode)和非相关性信号解码模式(anticorrelated signal decoding mode)等。
其中,相关性信号到非相关性信号解码模式对应的时域上混模式例如可称为“相关性信号到非相关性信号上混模式”(correlated-to-anticorrelated signal upmix switching mode)。非相关性信号到相关性信号解码模式对应的时域上混模式例如可称为“非相关性信号到相关性信号上混模式”(anticorrelated-to-correlated signal upmix switching mode)。相关性信号解码模式对应的时域上混模式例如可称为“相关性信号上混模式”(correlated signal upmix mode)。非相关性信号解码模式对应的时域上混模式例如可称为“非相关性信号上混模式”(anticorrelated signal upmix mode)。
可以理解,本申请实施例中对编码模式、解码模式和声道组合方案等对象的命名都是示意性的,在实际应用中也可能选用其他名称。
在一些可能的实施方式中,根据前一帧的声道组合方案和所述当前帧的声道组合方案确定所述当前帧的解码模式,包括:
在前一帧的声道组合方案为相关性信号声道组合方案,并且当前帧的声道组合方案为非相关性信号声道组合方案的情况下,确定所述当前帧的解码模式为相关性信号到非相关性信号解码模式,其中,相关性信号到非相关性信号解码模式采用从相关性信号声道组合方案过渡到非相关性信号声道组合方案对应的上混处理方法进行时域上混处理。
或者,
在前一帧的声道组合方案为非相关性信号声道组合方案,并且所述当前帧的声道组合方案为非相关性信号声道组合方案的情况下,确定所述当前帧的解码模式为非相关性信号解码模式,所述非相关性信号解码模式采用非相关性信号声道组合方案对应的上混处理方法进行时域上混处理。
或者,
在前一帧的声道组合方案为非相关性信号声道组合方案，并且当前帧的声道组合方案为相关性信号声道组合方案的情况下，确定所述当前帧的解码模式为非相关性信号到相关性信号解码模式，所述非相关性信号到相关性信号解码模式采用从非相关性信号声道组合方案过渡到相关性信号声道组合方案对应的上混处理方法进行时域上混处理。
或者,
当前一帧的声道组合方案为相关性信号声道组合方案，当前帧的声道组合方案为相关性信号声道组合方案，确定所述当前帧的解码模式为相关性信号解码模式，所述相关性信号解码模式采用相关性信号声道组合方案对应的上混处理方法进行时域上混处理。
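解码端的判决逻辑与编码端对称，可示意如下。其中标识取值约定（声道组合方案标识取1表示非相关性信号声道组合方案、取0表示相关性信号声道组合方案）以及各字符串命名均为本示例的假设：

```python
def parse_scheme(flag_bit):
    """由码流中的声道组合方案标识得到当前帧的声道组合方案（示意约定）。"""
    return "anticorrelated" if flag_bit == 1 else "correlated"


def decoding_mode(prev_scheme, flag_bit):
    """结合前一帧的声道组合方案与当前帧的方案标识确定解码模式（示意）。"""
    cur_scheme = parse_scheme(flag_bit)
    if prev_scheme == "correlated" and cur_scheme == "anticorrelated":
        mode = "correlated_to_anticorrelated_decoding"   # 相关性信号到非相关性信号解码模式
    elif prev_scheme == "anticorrelated" and cur_scheme == "correlated":
        mode = "anticorrelated_to_correlated_decoding"   # 非相关性信号到相关性信号解码模式
    elif cur_scheme == "anticorrelated":
        mode = "anticorrelated_decoding"                 # 非相关性信号解码模式
    else:
        mode = "correlated_decoding"                     # 相关性信号解码模式
    return cur_scheme, mode
```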
例如解码装置在确定所述当前帧的解码模式为非相关性信号解码模式的情况下,采用所述非相关性信号解码模式对应的时域上混处理方式,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号。
其中,左右声道重建信号可为左右声道解码信号,或可通过将左右声道重建信号进行时延调整处理和/或时域后处理以得到左右声道解码信号。
其中,所述非相关性信号解码模式对应的时域上混处理方式为非相关性信号声道组合方案对应的时域上混处理方式,所述非相关性信号声道组合方案为类反相信号对应的声道组合方案。
其中,当前帧的解码模式可为多种解码模式中的其中一种。例如当前帧的解码模式可能是如下解码模式中的其中一种:相关性信号解码模式、非相关性信号解码模式、相关性到非相关性信号解码模式、非相关性到相关性信号解码模式。
可以理解,上述方案中需确定当前帧的解码模式,这就表示当前帧的解码模式存在多种可能,这相对于只有唯一一种解码模式的传统方案而言,多种可能的解码模式和多种可能场景之间有利于获得更好的兼容匹配效果。并且,由于引入了针对类反相信号对应的声道组合方案,这使得对于当前帧的立体声信号为类反相信号的情况下,有了针对性相对更强的声道组合方案和解码模式,进而有利于提高解码质量。
又例如，解码装置在确定所述当前帧的解码模式为相关性信号解码模式的情况下，采用所述相关性信号解码模式对应的时域上混处理方式，对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号，所述相关性信号解码模式对应的时域上混处理方式为相关性信号声道组合方案对应的时域上混处理方式，所述相关性信号声道组合方案为类正相信号对应的声道组合方案。
又例如，解码装置在确定所述当前帧的解码模式为相关性到非相关性信号解码模式的情况下，采用所述相关性到非相关性信号解码模式对应的时域上混处理方式，对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号，所述相关性到非相关性信号解码模式对应的时域上混处理方式为从相关性信号声道组合方案过渡到非相关性信号声道组合方案对应的时域上混处理方式。
又例如，解码装置在确定所述当前帧的解码模式为非相关性到相关性信号解码模式的情况下，采用所述非相关性到相关性信号解码模式对应的时域上混处理方式，对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号，所述非相关性到相关性信号解码模式对应的时域上混处理方式为从非相关性信号声道组合方案过渡到相关性信号声道组合方案对应的时域上混处理方式。
可以理解,不同的解码模式所对应的时域上混处理方式通常不同。并且每种解码模式也可能对应一种或多种时域上混处理方式。
可以理解,上述方案中需确定当前帧的声道组合方案,这就表示当前帧的声道组合方案存在多种可能,这相对于只有唯一一种声道组合方案的传统方案而言,多种可能的声道组合方案和多种可能场景之间有利于获得更好的兼容匹配效果。上述方案中需基于前一帧的声道组合方案和所述当前帧的声道组合方案来确定当前帧的解码模式,当前帧的解码模式存在多种可能,而这相对于只有唯一一种解码模式的传统方案而言,多种可能的解码模式和多种可能场景之间有利于获得更好的兼容匹配效果。
进一步的,解码装置基于当前帧的解码模式所对应的时域上混处理对当前帧的主次声道解码信号进行时域上混处理,以得到当前帧的左右声道重建信号。
下面举例编码装置确定当前帧的声道组合方案的一些具体实现方式。编码装置确定当前帧的声道组合方案的具体实现方式是多种多样的。
举例来说,在一些可能实施方式中,确定当前帧的声道组合方案可包括:通过对所述当前帧进行至少一次声道组合方案判决,确定当前帧的声道组合方案。
具体例如,所述确定当前帧的声道组合方案包括:对所述当前帧进行声道组合方案初始判决,以确定所述当前帧的初始声道组合方案。基于所述当前帧的初始声道组合方案对所述当前帧进行声道组合方案修正判决,以确定所述当前帧的声道组合方案。此外,也可直接将所述当前帧的初始声道组合方案作为所述当前帧的声道组合方案,即所述当前帧的声道组合方案可为:通过对所述当前帧进行声道组合方案初始判决而确定的所述当前帧的初始声道组合方案。
例如,对所述当前帧进行声道组合方案初始判决可包括:利用所述当前帧的左右声道信号确定所述当前帧的立体声信号的信号正反相类型;利用所述当前帧的立体声信号的信号正反相类型和前一帧的声道组合方案确定所述当前帧的初始声道组合方案。其中,所述当前帧的立体声信号的信号正反相类型可以是类正相信号或类反相信号。所述当前帧的立体声信号的信号正反相类型可通过所述当前帧的信号正反相类型标识(信号正反相类型标识例如用tmp_SM_flag表示)来指示。具体例如,当所述当前帧的信号正反相类型标识取值为“1”时,指示所述当前帧的立体声信号的信号正反相类型为类正相信号,当所述当前帧的信号正反相类型标识取值为“0”时,指示所述当前帧的立体声信号的信号正反相类型为 类反相信号,反之亦可。
音频帧(例如前一帧或当前帧)的声道组合方案可通过所述音频帧的声道组合方案标识来指示。例如当音频帧的声道组合方案标识取值为“0”时,指示该音频帧的声道组合方案为相关性信号声道组合方案。当音频帧的声道组合方案标识取值为“1”时,指示该音频帧的声道组合方案为非相关性信号声道组合方案,反之亦可。
类似的,音频帧(例如前一帧或当前帧)的初始声道组合方案可通过所述音频帧的初始声道组合方案标识(初始声道组合方案标识例如用tdm_SM_flag_loc表示)来指示。例如当音频帧的初始声道组合方案标识取值为“0”时,指示该音频帧的初始声道组合方案为相关性信号声道组合方案。又例如当音频帧的初始声道组合方案标识取值为“1”时,指示该音频帧的初始声道组合方案为非相关性信号声道组合方案,反之亦可。
其中，利用所述当前帧的左右声道信号确定所述当前帧的立体声信号的信号正反相类型可包括：计算所述当前帧的左右声道信号之间的相关性值xorr，在所述xorr小于或者等于第一阈值的情况下确定所述当前帧的立体声信号的信号正反相类型为类正相信号，在所述xorr大于第一阈值的情况下确定所述当前帧的立体声信号的信号正反相类型为类反相信号。进一步的，若利用所述当前帧的信号正反相类型标识来指示所述当前帧的立体声信号的信号正反相类型，则在确定所述当前帧的立体声信号的信号正反相类型为类正相信号的情况下，可置所述当前帧的信号正反相类型标识的取值指示出所述当前帧的立体声信号的信号正反相类型为类正相信号；相应的，在确定所述当前帧的立体声信号的信号正反相类型为类反相信号的情况下，可置所述当前帧的信号正反相类型标识的取值指示出所述当前帧的立体声信号的信号正反相类型为类反相信号。
其中，第一阈值的取值范围例如可为[0.5,1.0)，例如可等于0.5、0.85、0.75、0.65或0.81等。
具体例如,音频帧(例如前一帧或当前帧)的信号正反相类型标识取值为“0”时,指示该音频帧的立体声信号的信号正反相类型为类正相信号;音频帧(例如前一帧或当前帧)的信号正反相类型标识取值为“1”时,指示该音频帧的立体声信号的信号正反相类型为类反相信号,以此类推。
其中,利用所述当前帧的立体声信号的信号正反相类型和前一帧的声道组合方案确定所述当前帧的初始声道组合方案,例如可包括:
在所述当前帧的立体声信号的信号正反相类型为类正相信号,且前一帧的声道组合方案为相关性信号声道组合方案的情况下,确定所述当前帧的初始声道组合方案为相关性信号声道组合方案;在所述当前帧的立体声信号的信号正反相类型为类反相信号,且前一帧的声道组合方案为非相关性信号声道组合方案的情况下,确定所述当前帧的初始声道组合方案为非相关性信号声道组合方案。
或者,
在所述当前帧的立体声信号的信号正反相类型为类正相信号,并且前一帧的声道组合方案为非相关性信号声道组合方案的情况下,如果所述当前帧的左右声道信号的信噪比均小于第二阈值,确定所述当前帧的初始声道组合方案为相关性信号声道组合方案;如果所述当前帧的左声道信号和/或右声道信号的信噪比大于或等于第二阈值,确定所述当前帧的初始声道组合方案为非相关性信号声道组合方案。
或者,
在所述当前帧的立体声信号的信号正反相类型为类反相信号，并且前一帧的声道组合方案为相关性信号声道组合方案的情况下，如果所述当前帧的左右声道信号的信噪比均小于第二阈值，确定所述当前帧的初始声道组合方案为非相关性信号声道组合方案；如果所述当前帧的左声道信号和/或右声道信号的信噪比大于或等于第二阈值，确定所述当前帧的初始声道组合方案为相关性信号声道组合方案。
其中,第二阈值的取值范围例如可为[0.8,1.2],例如可等于0.8、0.85、0.9、1、1.1或1.18等。
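上述初始判决逻辑可用如下草图示意。信噪比的具体计算方式此处抽象为输入；阈值0.9仅为第二阈值取值范围[0.8,1.2]内的一个示例，函数与取值命名为本示例的假设：

```python
def initial_scheme(phase_type, prev_scheme, snr_l, snr_r, snr_thresh=0.9):
    """由信号正反相类型、前一帧声道组合方案和左右声道信噪比
    做声道组合方案初始判决（示意）。"""
    if phase_type == "near_in_phase":        # 类正相信号
        if prev_scheme == "correlated":
            return "correlated"
        # 前一帧为非相关性方案：仅当两路信噪比都小于第二阈值时取相关性方案
        if snr_l < snr_thresh and snr_r < snr_thresh:
            return "correlated"
        return "anticorrelated"
    else:                                    # 类反相信号
        if prev_scheme == "anticorrelated":
            return "anticorrelated"
        # 前一帧为相关性方案：仅当两路信噪比都小于第二阈值时取非相关性方案
        if snr_l < snr_thresh and snr_r < snr_thresh:
            return "anticorrelated"
        return "correlated"
```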
其中,基于所述当前帧的初始声道组合方案对所述当前帧进行声道组合方案修正判决可以包括:根据前一帧的声道组合比例因子修正标识、所述当前帧的立体声信号的信号正反相类型和所述当前帧的初始声道组合方案,确定所述当前帧的声道组合方案。
其中,当前帧的声道组合方案标识可记作tdm_SM_flag,当前帧的声道组合比例因子修正标识记作tdm_SM_modi_flag。例如声道组合比例因子修正标识取值为0,表示无需进行声道组合比例因子的修正,声道组合比例因子修正标识取值为1,表示需进行声道组合比例因子的修正。当然,声道组合比例因子修正标识也可选用其它不同的取值来表示是否需进行声道组合比例因子的修正。
具体例如,基于所述当前帧的声道组合方案初始判决结果对所述当前帧进行声道组合方案修正判决,可包括:
如果前一帧的声道组合比例因子修正标识指示需修正声道组合比例因子,将非相关性信号声道组合方案作为所述当前帧的声道组合方案;如果前一帧的声道组合比例因子修正标识指示无需修正声道组合比例因子,判决当前帧是否满足切换条件,基于当前帧是否满足切换条件的判决结果确定当前帧的声道组合方案。
其中,所述基于当前帧是否满足切换条件的判决结果确定当前帧的声道组合方案,可以包括:
在前一帧的声道组合方案与所述当前帧的初始声道组合方案不同，并且所述当前帧满足切换条件，且所述当前帧的初始声道组合方案为相关性信号声道组合方案，且前一帧的声道组合方案为非相关性信号声道组合方案的情况下，确定所述当前帧的声道组合方案为非相关性信号声道组合方案。
或者,
在前一帧的声道组合方案与所述当前帧的初始声道组合方案不同,并且所述当前帧满足切换条件,且所述当前帧的初始声道组合方案为非相关性信号声道组合方案,且前一帧的声道组合方案为相关性信号声道组合方案,并且所述前一帧的声道组合比例因子小于第一比例因子阈值的情况下,确定所述当前帧的声道组合方案为相关性信号声道组合方案。
或者,
在前一帧的声道组合方案与所述当前帧的初始声道组合方案不同,并且所述当前帧满足切换条件,并且所述当前帧的初始声道组合方案为非相关性信号声道组合方案,并且前一帧的声道组合方案为相关性信号声道组合方案,并且所述前一帧的声道组合比例因子大于或者等于第一比例因子阈值的情况下,确定所述当前帧的声道组合方案为非相关性信号声道组合方案。
或者,
在第前P-1帧的声道组合方案与第前P帧的初始声道组合方案不同，且所述第前P帧不满足切换条件，且所述当前帧满足切换条件，并且所述当前帧的立体声信号的信号正反相类型为类正相信号，并且所述当前帧的初始声道组合方案为相关性信号声道组合方案，并且前一帧的声道组合方案为非相关性信号声道组合方案，确定所述当前帧的声道组合方案为相关性信号声道组合方案。
或者,
在第前P-1帧的声道组合方案与第前P帧的初始声道组合方案不同，且所述第前P帧不满足切换条件，且所述当前帧满足切换条件，且当前帧的立体声信号的信号正反相类型为类反相信号，且所述当前帧的初始声道组合方案为非相关性信号声道组合方案，且前一帧的声道组合方案为相关性信号声道组合方案，并且所述前一帧的声道组合比例因子小于第二比例因子阈值的情况下，确定所述当前帧的声道组合方案为相关性信号声道组合方案。
或者,
在第前P-1帧的声道组合方案与第前P帧的初始声道组合方案不同，且所述第前P帧不满足切换条件，且所述当前帧满足切换条件，且当前帧的立体声信号的信号正反相类型为类反相信号，且所述当前帧的初始声道组合方案为非相关性信号声道组合方案，且前一帧的声道组合方案为相关性信号声道组合方案，并且所述前一帧的声道组合比例因子大于或等于第二比例因子阈值的情况下，确定所述当前帧的声道组合方案为非相关性信号声道组合方案。
其中,P可为大于1的整数,例如P可等于2、3、4、5、6或其他值。
其中,第一比例因子阈值的取值范围例如可为[0.4,0.6],例如可等于0.4、0.45、0.5、0.55或0.6等。
其中,第二比例因子阈值的取值范围例如可为[0.4,0.6],例如可等于0.4、0.46、0.5、0.56或0.6等。
在一些可能实施方式中,判决当前帧是否满足切换条件可包括:根据前一帧的主要声道信号帧类型和/或次要声道信号帧类型判决当前帧是否满足切换条件。
在一些可能的实施方式中,判决当前帧是否满足切换条件可包括:
在第一条件、第二条件和第三条件都满足的情况下判决当前帧满足切换条件;或者在第二条件、第三条件、第四条件和第五条件都满足的情况下判决当前帧满足切换条件;或者在第六条件满足的情况下判决当前帧满足切换条件;
其中,
第一条件：前一帧的前一帧的主要声道信号帧类型为下列中的任意一种：VOICED_CLAS frame(浊音特性帧，其之前的帧为浊音帧或浊音开始帧)、ONSET frame(浊音开始帧)、SIN_ONSET frame(谐波和噪声混合的开始帧)、INACTIVE_CLAS frame(非活动特性帧)、AUDIO_CLAS frame(音频帧)，且前一帧的主要声道信号帧类型为UNVOICED_CLAS frame(清音、静音、噪声或浊音结尾等几种特性之一的帧)或VOICED_TRANSITION frame(浊音之后的过渡，浊音特性已经很弱的帧)；或者，前一帧的前一帧的次要声道信号帧类型为下列中的任意一种：VOICED_CLAS frame、ONSET frame、SIN_ONSET frame、INACTIVE_CLAS frame和AUDIO_CLAS frame，且前一帧的次要声道信号帧类型为UNVOICED_CLAS frame或者VOICED_TRANSITION frame。
第二条件:前一帧的主要声道信号和次要声道信号的初始编码类型(raw coding mode)都不为VOICED(浊音帧对应的编码类型)。
第三条件:截至前一帧,已持续使用前一帧所使用的声道组合方案的帧数大于预设帧数阈值。帧数阈值的取值范围例如可为[3,10],例如帧数阈值可等于3、4、5、6、7、8、9或其他值。
第四条件:前一帧的主要声道信号帧类型为UNVOICED_CLAS,或前一帧的次要声道信号帧类型为UNVOICED_CLAS。
第五条件：当前帧的左右声道信号长时均方根能量值小于能量阈值。这个能量阈值的取值范围例如可为[300,500]，例如能量阈值可等于300、400、410、451、482、500、415或其他值。
第六条件:前一帧的主要声道信号帧类型为音乐信号,且前一帧的主要声道信号的低频段与高频段的能量比大于第一能量比阈值,且前一帧的次要声道信号的低频段与高频段的能量比大于第二能量比阈值。
其中，第一能量比阈值的取值范围例如可为[4000,6000]，例如第一能量比阈值可等于4000、4500、5000、5105、5200、6000、5800或其他值。
其中，第二能量比阈值的取值范围例如可为[4000,6000]，例如第二能量比阈值可等于4000、4501、5000、5105、5200、6000、5800或其他值。
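第一至第六条件之间的组合关系可用如下草图示意（各条件本身涉及的帧类型、能量等判断此处抽象为布尔输入，函数命名为本示例的假设）：

```python
def switch_allowed(cond1, cond2, cond3, cond4, cond5, cond6):
    """当前帧是否满足切换条件（示意）：
    第一、第二和第三条件都满足，或第二、第三、第四和第五条件都满足，
    或第六条件满足时，判决当前帧满足切换条件。"""
    return (cond1 and cond2 and cond3) or (cond2 and cond3 and cond4 and cond5) or cond6
```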
可以理解,判决当前帧是否满足切换条件的实施方式可以是多种多样的,不限于上述举例的方式。
可以理解,上述举例中给出了确定当前帧的声道组合方案的一些实施方式,但实际应用中也可能不限于上述举例方式。
下面进一步针对非相关性信号编码模式场景进行举例说明。
参见图4、本申请实施例提供了一种音频编码方法,音频编码方法的相关步骤可由编码装置来实施,方法具体可以包括:
401、确定当前帧的编码模式。
402、在确定所述当前帧的编码模式为非相关性信号编码模式的情况下,采用所述非相关性信号编码模式对应的时域下混处理方式,对所述当前帧的左右声道信号进行时域下混处理以得到所述当前帧的主次声道信号。
403、对得到的所述当前帧的主次声道信号进行编码。
其中,所述非相关性信号编码模式对应的时域下混处理方式为非相关性信号声道组合方案对应的时域下混处理方式,所述非相关性信号声道组合方案为类反相信号对应的声道组合方案。
举例来说,在一些可能的实施方式之中,采用所述非相关性信号编码模式对应的时域下混处理方式,对所述当前帧的左右声道信号进行时域下混处理以得到所述当前帧的主次声道信号,可包括:根据所述当前帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号;或者根据所述当前帧和前一帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号。
可以理解，音频帧(例如当前帧或前一帧)的声道组合方案(例如相关性信号声道组合方案或非相关性信号声道组合方案)的声道组合比例因子可以是预设的固定值。当然也可根据音频帧的声道组合方案来确定这个音频帧的声道组合比例因子。
在一些可能实施方式中,可基于音频帧的声道组合比例因子构建相应的下混矩阵,利用声道组合方案对应的下混矩阵来对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号。
例如,在根据所述当前帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号的情况下,
Figure PCTCN2018100060-appb-000069
又举例来说,在根据所述当前帧和前一帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号的情况下,
Figure PCTCN2018100060-appb-000070
Figure PCTCN2018100060-appb-000071
其中,所述delay_com表示编码时延补偿。
又举例来说,在根据所述当前帧和前一帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号的情况下,
Figure PCTCN2018100060-appb-000072
Figure PCTCN2018100060-appb-000073
Figure PCTCN2018100060-appb-000074
其中,fade_in(n)表示淡入因子。例如
Figure PCTCN2018100060-appb-000075
当然fade_in(n)也可以是基于n的其它函数关系的淡入因子。
fade_out(n)表示淡出因子。例如
Figure PCTCN2018100060-appb-000076
当然，fade_out(n)也可以是基于n的其它函数关系的淡出因子。
其中，NOVA_1表示过渡处理长度。NOVA_1取值可根据具体场景需要设定。NOVA_1例如可等于N/3，或者NOVA_1可为小于N的其它值。
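作为一个满足 fade_in(n)+fade_out(n)=1 的假设形式（文中具体公式见相应附图，此处取线性淡入/淡出仅作示意），淡入、淡出因子可写成：

```python
def fade_in(n, nova_1):
    """线性淡入因子（示意）：n 为过渡段内的样点序号，nova_1 为过渡处理长度。"""
    return n / nova_1


def fade_out(n, nova_1):
    """线性淡出因子（示意），与同一样点的 fade_in 之和恒为1。"""
    return 1.0 - n / nova_1
```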
又举例来说,在采用所述相关性信号编码模式对应的时域下混处理方式,对所述当前帧的左右声道信号进行时域下混处理,以得到所述当前帧的主次声道信号的情况下,
Figure PCTCN2018100060-appb-000077
在上述举例中,所述X L(n)表示所述当前帧的左声道信号。所述X R(n)表示所述当前帧的右声道信号。所述Y(n)表示经时域下混处理而得到的所述当前帧的主要声道信号;所述X(n)表示经时域下混处理而得到的所述当前帧的次要声道信号。
其中,在上述举例中,所述n表示样点序号。例如n=0,1,…,N-1。
其中,在上述举例中,delay_com表示编码时延补偿。
M 11表示所述前一帧的相关性信号声道组合方案对应的下混矩阵,M 11基于所述前一帧的相关性信号声道组合方案对应的声道组合比例因子构建。
所述M 12表示所述前一帧的非相关性信号声道组合方案对应的下混矩阵,所述M 12基于所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子构建。
所述M 22表示所述当前帧的非相关性信号声道组合方案对应的下混矩阵,所述M 22基于所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子构建。
所述M 21表示所述当前帧的相关性信号声道组合方案对应的下混矩阵,所述M 21基于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子构建。
其中,所述M 21可能存在多种形式,例如:
Figure PCTCN2018100060-appb-000078
Figure PCTCN2018100060-appb-000079
其中,所述ratio表示当前帧的相关性信号声道组合方案对应的声道组合比例因子。
其中,所述M 22可能存在多种形式,例如:
Figure PCTCN2018100060-appb-000080
Figure PCTCN2018100060-appb-000081
Figure PCTCN2018100060-appb-000082
Figure PCTCN2018100060-appb-000083
Figure PCTCN2018100060-appb-000084
Figure PCTCN2018100060-appb-000085
其中,α 1=ratio_SM;α 2=1-ratio_SM。所述ratio_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
其中,所述M 12可能存在多种形式,例如:
Figure PCTCN2018100060-appb-000086
Figure PCTCN2018100060-appb-000087
Figure PCTCN2018100060-appb-000088
Figure PCTCN2018100060-appb-000089
Figure PCTCN2018100060-appb-000090
Figure PCTCN2018100060-appb-000091
其中,α 1_pre=tdm_last_ratio_SM;α 2_pre=1-tdm_last_ratio_SM。tdm_last_ratio_SM表示前一帧的非相关性信号声道组合方案对应的声道组合比例因子。
其中，当前帧的左右声道信号具体可以是所述当前帧的原始左右声道信号(原始左右声道信号是未经时域预处理的左右声道信号，例如可以是采样得到的左右声道信号)，或者可以是所述当前帧的经时域预处理的左右声道信号；或者可以是当前帧的经时延对齐处理的左右声道信号。
具体例如,
Figure PCTCN2018100060-appb-000092
Figure PCTCN2018100060-appb-000093
Figure PCTCN2018100060-appb-000094
其中,所述
Figure PCTCN2018100060-appb-000095
表示所述当前帧的原始左右声道信号。所述
Figure PCTCN2018100060-appb-000096
表示所述当前帧的经时域预处理的左右声道信号。所述
Figure PCTCN2018100060-appb-000097
表示所述当前帧的经时延对齐处理的左右声道信号。
相应的,下面针对非相关性信号解码模式场景进行举例说明。
参见图5,本申请实施例还提供一种音频解码方法,音频解码方法的相关步骤可由解码装置来实施,方法具体可以包括:
501、根据码流进行解码以得到当前帧的主次声道解码信号。
502、确定所述当前帧的解码模式。
可以理解,步骤501和步骤502的执行没有必然的先后顺序。
503、在确定所述当前帧的解码模式为非相关性信号解码模式的情况下,采用所述非相关性信号解码模式对应的时域上混处理方式,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号。
其中,左右声道重建信号可为左右声道解码信号,或可通过将左右声道重建信号进行时延调整处理和/或时域后处理以得到左右声道解码信号。
其中,所述非相关性信号解码模式对应的时域上混处理方式为非相关性信号声道组合方案对应的时域上混处理方式,所述非相关性信号声道组合方案为类反相信号对应的声道组合方案。
其中,当前帧的解码模式可为多种解码模式中的其中一种。例如当前帧的解码模式可能是如下解码模式中的其中一种:相关性信号解码模式、非相关性信号解码模式、相关性到非相关性信号解码模式、非相关性到相关性信号解码模式。
可以理解,上述方案中需确定当前帧的解码模式,这就表示当前帧的解码模式存在多种可能,这相对于只有唯一一种解码模式的传统方案而言,多种可能的解码模式和多种可能场景之间有利于获得更好的兼容匹配效果。并且,由于引入了针对类反相信号对应的声道组合方案,这使得对于当前帧的立体声信号为类反相信号的情况下,有了针对性相对更强的声道组合方案和解码模式,进而有利于提高解码质量。
在一些可能实施方式中,所述方法还可包括:
在确定所述当前帧的解码模式为相关性信号解码模式的情况下,采用所述相关性信号解码模式对应的时域上混处理方式,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号,所述相关性信号解码模式对应的时域上混处理方式为相关性信号声道组合方案对应的时域上混处理方式,所述相关性信号声道组合方案为类正相信号对应的声道组合方案。
在一些可能实施方式中，所述方法还可包括：在确定所述当前帧的解码模式为相关性到非相关性信号解码模式的情况下，采用所述相关性到非相关性信号解码模式对应的时域上混处理方式，对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号，所述相关性到非相关性信号解码模式对应的时域上混处理方式为从相关性信号声道组合方案过渡到非相关性信号声道组合方案对应的时域上混处理方式。
在一些可能实施方式中，所述方法还可包括：在确定所述当前帧的解码模式为非相关性到相关性信号解码模式的情况下，采用所述非相关性到相关性信号解码模式对应的时域上混处理方式，对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号，所述非相关性到相关性信号解码模式对应的时域上混处理方式为从非相关性信号声道组合方案过渡到相关性信号声道组合方案对应的时域上混处理方式。
可以理解,不同的解码模式所对应的时域上混处理方式通常不同。并且每种解码模式也可能对应一种或多种时域上混处理方式。
举例来说,在一些可能的实施方式中,所述采用所述非相关性信号解码模式对应的时域上混处理方式,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号,包括:
根据所述当前帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号;或者根据所述当前帧和前一帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号。
在一些可能实施方式中,可基于音频帧的声道组合比例因子构建相应的上混矩阵,利用声道组合方案对应的上混矩阵,来对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号。
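若时域下混按前文的 Y=alpha*L+beta*R、X=alpha*L-beta*R 构造，则对应的时域上混可取其逆变换：L=(Y+X)/(2*alpha)、R=(Y-X)/(2*beta)。下面的草图仅用于说明"上混矩阵基于声道组合比例因子构建、且与下混互逆"的思路，并非专利方案中上混矩阵的具体形式：

```python
def td_downmix(L, R, alpha, beta):
    """时域下混（示意）：Y = alpha*L + beta*R，X = alpha*L - beta*R。"""
    Y = [alpha * l + beta * r for l, r in zip(L, R)]
    X = [alpha * l - beta * r for l, r in zip(L, R)]
    return Y, X


def td_upmix(Y, X, alpha, beta):
    """时域上混（示意）：对上述下混求逆，重建左右声道信号。"""
    L = [(y + x) / (2.0 * alpha) for y, x in zip(Y, X)]
    R = [(y - x) / (2.0 * beta) for y, x in zip(Y, X)]
    return L, R
```

下混后再上混即可（在不计量化误差的理想情况下）还原左右声道信号。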
举例来说,在根据所述当前帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号的情况下,
Figure PCTCN2018100060-appb-000098
又举例来说,在根据所述当前帧和前一帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号的情况下,
Figure PCTCN2018100060-appb-000099
Figure PCTCN2018100060-appb-000100
其中,所述delay_com表示编码时延补偿。
又举例来说,在根据所述当前帧和前一帧的非相关性信号声道组合方案的声道组合比例因子,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号的情况下,
Figure PCTCN2018100060-appb-000101
Figure PCTCN2018100060-appb-000102
Figure PCTCN2018100060-appb-000103
其中,所述
Figure PCTCN2018100060-appb-000104
表示所述当前帧的左声道重建信号，所述
Figure PCTCN2018100060-appb-000105
表示所述当前帧的右声道重建信号,所述
Figure PCTCN2018100060-appb-000106
表示所述当前帧的主要声道解码信号,所述
Figure PCTCN2018100060-appb-000107
表示所述当前帧的次要声道解码信号;
其中,所述NOVA_1表示过渡处理长度。
其中,fade_in(n)表示淡入因子。例如
Figure PCTCN2018100060-appb-000108
当然fade_in(n)也可以是基于n的其它函数关系的淡入因子。
其中,fade_out(n)表示淡出因子。例如
Figure PCTCN2018100060-appb-000109
当然fade_out(n)也可以是基于n的其它函数关系的淡出因子。
其中，NOVA_1表示过渡处理长度。NOVA_1取值可根据具体场景需要设定。NOVA_1例如可等于N/3，或者NOVA_1可为小于N的其它值。
又举例来说,在根据所述当前帧的相关性信号声道组合方案的声道组合比例因子,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号的情况下,
Figure PCTCN2018100060-appb-000110
在上述举例中,所述
Figure PCTCN2018100060-appb-000111
表示所述当前帧的左声道重建信号。所述
Figure PCTCN2018100060-appb-000112
表示所述当前帧的右声道重建信号。所述
Figure PCTCN2018100060-appb-000113
表示所述当前帧的主要声道解码信号。所述
Figure PCTCN2018100060-appb-000114
表示所述当前帧的次要声道解码信号。
其中,在上述举例中,所述n表示样点序号。例如n=0,1,…,N-1。
其中,在上述举例中,所述upmixing_delay表示解码时延补偿;
Figure PCTCN2018100060-appb-000115
表示所述前一帧的相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000116
基于所述前一帧的相关性信号声道组合方案对应的声道组合比例因子构建。
所述
Figure PCTCN2018100060-appb-000117
表示所述当前帧的非相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000118
基于所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子构建。
所述
Figure PCTCN2018100060-appb-000119
表示所述前一帧的非相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000120
基于所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子构建。
所述
Figure PCTCN2018100060-appb-000121
表示所述当前帧的相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000122
基于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子构建。
其中,所述
Figure PCTCN2018100060-appb-000123
可能存在多种形式,例如:
Figure PCTCN2018100060-appb-000124
Figure PCTCN2018100060-appb-000125
Figure PCTCN2018100060-appb-000126
Figure PCTCN2018100060-appb-000127
Figure PCTCN2018100060-appb-000128
Figure PCTCN2018100060-appb-000129
其中,α 1=ratio_SM;α 2=1-ratio_SM;所述ratio_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
其中,所述
Figure PCTCN2018100060-appb-000130
可能存在多种形式,例如:
Figure PCTCN2018100060-appb-000131
Figure PCTCN2018100060-appb-000132
Figure PCTCN2018100060-appb-000133
Figure PCTCN2018100060-appb-000134
Figure PCTCN2018100060-appb-000135
Figure PCTCN2018100060-appb-000136
其中,α 1_pre=tdm_last_ratio_SM;α 2_pre=1-tdm_last_ratio_SM。
其中,tdm_last_ratio_SM表示前一帧的非相关性信号声道组合方案对应的声道组合比例因子。
其中,所述
Figure PCTCN2018100060-appb-000137
可能存在多种形式,例如:
Figure PCTCN2018100060-appb-000138
Figure PCTCN2018100060-appb-000139
其中,所述ratio表示当前帧的相关性信号声道组合方案对应的声道组合比例因子。
下面针对相关性信号到非相关性信号编码模式和非相关性信号到相关性信号编码模式场景进行举例说明。相关性信号到非相关性信号编码模式和非相关性信号到相关性信号编码模式对应的时域下混处理方式例如为分段时域下混处理方式。
参见图6、本申请实施例提供了一种音频编码方法,音频编码方法的相关步骤可由编码装置来实施,方法具体可以包括:
601、确定当前帧的声道组合方案。
602、在所述当前帧和前一帧的声道组合方案不同的情况下,根据所述当前帧和前一帧的声道组合方案对所述当前帧的左右声道信号进行分段时域下混处理,以得到所述当前帧的主要声道信号和次要声 道信号。
603、对得到的所述当前帧的主要声道信号和次要声道信号进行编码。
其中，在所述当前帧和前一帧的声道组合方案不同的情况下，可确定当前帧的编码模式为相关性信号到非相关性信号编码模式或非相关性信号到相关性信号编码模式，而如果当前帧的编码模式为相关性信号到非相关性信号编码模式或非相关性信号到相关性信号编码模式，那么例如可根据所述当前帧和前一帧的声道组合方案对所述当前帧的左右声道信号进行分段时域下混处理。
具体例如,当前一帧的声道组合方案为相关性信号声道组合方案,且当前帧的声道组合方案为非相关性信号声道组合方案,可确定当前帧的编码模式为相关性信号到非相关性信号编码模式。又例如,当前一帧的声道组合方案为非相关性信号声道组合方案,且当前帧的声道组合方案为相关性信号声道组合方案,可确定当前帧的编码模式为非相关性信号到相关性信号编码模式。以此类推。
其中，分段时域下混处理可以理解为是当前帧的左右声道信号被分为至少两段，针对每段采用不同的时域下混处理方式进行时域下混处理。可以理解，相对于非分段时域下混处理而言，分段时域下混处理使得在相邻帧的声道组合方案发生变化时获得更好的平滑过渡变得更有可能。
可以理解，上述方案中需确定当前帧的声道组合方案，这就表示当前帧的声道组合方案存在多种可能，这相对于只有唯一一种声道组合方案的传统方案而言，多种可能的声道组合方案和多种可能场景之间有利于获得更好的兼容匹配效果。并且，由于在所述当前帧和前一帧的声道组合方案不同的情况下引入了对所述当前帧的左右声道信号进行分段时域下混处理的机制，分段时域下混处理机制有利于实现声道组合方案的平滑过渡，进而有利于提高编码质量。
并且,由于引入了针对类反相信号对应的声道组合方案,这使得对于当前帧的立体声信号为类反相信号的情况下,有了针对性相对更强的声道组合方案和编码模式,进而有利于提高编码质量。
举例来说,前一帧的声道组合方案例如可能为相关性信号声道组合方案或非相关性信号声道组合方案。当前帧的声道组合方案可能为相关性信号声道组合方案或非相关性信号声道组合方案。那么当前帧和前一帧的声道组合方案不同也存在好几种可能情况。
具体例如,当所述前一帧的声道组合方案为相关性信号声道组合方案且所述当前帧的声道组合方案为非相关性信号声道组合方案,所述当前帧的左右声道信号包括左右声道信号起始段、左右声道信号中间段和左右声道信号结尾段;所述当前帧的主次声道信号包括主次声道信号起始段、主次声道信号中间段和主次声道信号结尾段。那么,根据所述当前帧和前一帧的声道组合方案对所述当前帧的左右声道信号进行分段时域下混处理,以得到所述当前帧的主要声道信号和次要声道信号,可以包括:
使用所述前一帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域下混处理方式,对所述当前帧的左右声道信号起始段进行时域下混处理,以得到所述当前帧的主次声道信号起始段;
使用所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组合方案对应的时域下混处理方式,对所述当前帧的左右声道信号结尾段进行时域下混处理,以得到所述当前帧的主次声道信号结尾段;
使用所述前一帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域下混处理方式,对所述当前帧的左右声道信号中间段进行时域下混处理以得到第一主次声道信号中间段;使用当前帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组 合方案对应的时域下混处理方式,对所述当前帧的左右声道信号中间段进行时域下混处理以得到第二主次声道信号中间段;将所述第一主次声道信号中间段和所述第二主次声道信号中间段进行加权求和处理以得到所述当前帧的主次声道信号中间段。
其中,所述当前帧的左右声道信号起始段、左右声道信号中间段和左右声道信号结尾段的长度可根据需要进行设定。所述当前帧的左右声道信号起始段、左右声道信号中间段和左右声道信号结尾段的长度可以相等、部分相等或互不相等。
其中,所述当前帧的主次声道信号起始段、主次声道信号中间段和主次声道信号结尾段的长度可根据需要进行设定。所述当前帧的主次声道信号起始段、主次声道信号中间段和主次声道信号结尾段的长度可以相等、部分相等或互不相等。
其中,将所述第一主次声道信号中间段和所述第二主次声道信号中间段进行加权求和处理时,所述第一主次声道信号中间段对应的加权系数,可等于或不等于所述第二主次声道信号中间段对应的加权系数。
举例来说,将所述第一主次声道信号中间段和所述第二主次声道信号中间段进行加权求和处理时,所述第一主次声道信号中间段对应的加权系数为淡出因子,所述第二主次声道信号中间段对应的加权系数为淡入因子。
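上述"起始段沿用前一帧方案、结尾段使用当前帧方案、中间段按淡出/淡入因子加权求和"的分段时域下混可用如下草图示意。其中下混矩阵简化为由两个比例因子(a, b)刻画的2x2形式，分段边界n1、n2及各命名均为本示例的假设，并非专利方案的具体取值：

```python
def downmix_with(L, R, a, b):
    """某一声道组合方案（由比例因子 a、b 刻画）下的简化2x2下混（示意）。"""
    Y = [a * l + b * r for l, r in zip(L, R)]
    X = [a * l - b * r for l, r in zip(L, R)]
    return Y, X


def segmented_downmix(L, R, prev_ab, cur_ab, n1, n2):
    """分段时域下混（示意）：起始段[0,n1)沿用前一帧方案，
    结尾段[n2,N)使用当前帧方案，中间段[n1,n2)对两种下混结果
    按淡出/淡入因子加权求和。"""
    Yp, Xp = downmix_with(L, R, *prev_ab)   # 前一帧声道组合方案的下混结果
    Yc, Xc = downmix_with(L, R, *cur_ab)    # 当前帧声道组合方案的下混结果
    Y, X = [], []
    for i in range(len(L)):
        if i < n1:                          # 起始段
            Y.append(Yp[i]); X.append(Xp[i])
        elif i >= n2:                       # 结尾段
            Y.append(Yc[i]); X.append(Xc[i])
        else:                               # 中间段：加权求和
            w = (i - n1) / (n2 - n1)        # 淡入因子，对应淡出因子为 1-w
            Y.append((1 - w) * Yp[i] + w * Yc[i])
            X.append((1 - w) * Xp[i] + w * Xc[i])
    return Y, X
```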
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000140
其中,X 11(n)表示所述当前帧的主要声道信号起始段。Y 11(n)表示所述当前帧的次要声道信号起始段。X 31(n)表示所述当前帧的主要声道信号结尾段。Y 31(n)表示所述当前帧的次要声道信号结尾段。X 21(n)表示所述当前帧的主要声道信号中间段。Y 21(n)表示所述当前帧的次要声道信号中间段;
其中,X(n)表示所述当前帧的主要声道信号。
其中,Y(n)表示所述当前帧的次要声道信号。
例如,
Figure PCTCN2018100060-appb-000141
例如,fade_in(n)表示淡入因子,fade_out(n)表示淡出因子。例如,fade_in(n)和fade_out(n)之和为1。
具体例如,
Figure PCTCN2018100060-appb-000142
当然，fade_in(n)也可以是基于n的其它函数关系的淡入因子。当然，fade_out(n)也可以是基于n的其它函数关系的淡出因子。
其中,n表示样点序号,n=0,1,…,N-1。0<N 1<N 2<N-1。
例如N 1等于100,107、120、150或其他值。
例如N 2等于180,187、200、203或其他值。
其中,所述X 211(n)表示所述当前帧的第一主要声道信号中间段,所述Y 211(n)表示所述当前帧的第一次要声道信号中间段。其中,所述X 212(n)表示所述当前帧的第二主要声道信号中间段,所述Y 212(n)表示所述当前帧的第二次要声道信号中间段。
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000143
Figure PCTCN2018100060-appb-000144
Figure PCTCN2018100060-appb-000145
Figure PCTCN2018100060-appb-000146
其中,所述X L(n)表示所述当前帧的左声道信号。所述X R(n)表示所述当前帧的右声道信号。
所述M 11表示所述前一帧的相关性信号声道组合方案对应的下混矩阵,所述M 11基于所述前一帧的相关性信号声道组合方案对应的声道组合比例因子构建。所述M 22表示所述当前帧的非相关性信号声道组合方案对应的下混矩阵,所述M 22基于所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子构建。
所述M 22可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000147
Figure PCTCN2018100060-appb-000148
Figure PCTCN2018100060-appb-000149
Figure PCTCN2018100060-appb-000150
Figure PCTCN2018100060-appb-000151
Figure PCTCN2018100060-appb-000152
其中,所述α 1=ratio_SM,所述α 2=1-ratio_SM,所述ratio_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
所述M 11可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000153
Figure PCTCN2018100060-appb-000154
其中,所述tdm_last_ratio表示所述前一帧的相关性信号声道组合方案对应的声道组合比例因子。
又具体例如,当所述前一帧的声道组合方案为非相关性信号声道组合方案且所述当前帧的声道组合方案为相关性信号声道组合方案,其中,所述当前帧的左右声道信号包括左右声道信号起始段、左右声道信号中间段和左右声道信号结尾段;所述当前帧的主次声道信号包括主次声道信号起始段、主次声道信号中间段和主次声道信号结尾段。那么,所述根据所述当前帧和前一帧的声道组合方案对所述当前帧的左右声道信号进行分段时域下混处理,以得到所述当前帧的主要声道信号和次要声道信号,可以包括:
使用所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组合方案对应的时域下混处理方式,对所述当前帧的左右声道信号起始段进行时域下混处理,以得到所述当前帧的主次声道信号起始段;
使用所述当前帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域下混处理方式,对所述当前帧的左右声道信号结尾段进行时域下混处理,以得到所述当前帧的主次声道信号结尾段;
使用所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组合方案对应的时域下混处理方式,对所述当前帧的左右声道信号中间段进行时域下混处理以得到第三主次声道信号中间段;使用当前帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域下混处理方式,对所述当前帧的左右声道信号中间段进行时域下混处理以得到第四主次声道信号中间段;将所述第三主次声道信号中间段和所述第四主次声道信号中间段进行加权求和处理以得到所述当前帧的主次声道信号中间段。
其中,将所述第三主次声道信号中间段和所述第四主次声道信号中间段进行加权求和处理时,所述第三主次声道信号中间段对应的加权系数,可等于或不等于所述第四主次声道信号中间段对应的加权系数。
例如,将所述第三主次声道信号中间段和所述第四主次声道信号中间段进行加权求和处理时,所述第三主次声道信号中间段对应的加权系数为淡出因子,所述第四主次声道信号中间段对应的加权系数为淡入因子。
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000155
其中,X 12(n)表示所述当前帧的主要声道信号起始段,Y 12(n)表示所述当前帧的次要声道信号起始段。X 32(n)表示所述当前帧的主要声道信号结尾段,Y 32(n)表示所述当前帧的次要声道信号结尾段。X 22(n)表示所述当前帧的主要声道信号中间段,Y 22(n)表示所述当前帧的次要声道信号中间段。
其中,X(n)表示所述当前帧的主要声道信号。
其中,Y(n)表示所述当前帧的次要声道信号。
例如,
Figure PCTCN2018100060-appb-000156
其中，fade_in(n)表示淡入因子，fade_out(n)表示淡出因子，fade_in(n)和fade_out(n)之和为1。
具体例如,
Figure PCTCN2018100060-appb-000157
当然，fade_in(n)也可以是基于n的其它函数关系的淡入因子。当然，fade_out(n)也可以是基于n的其它函数关系的淡出因子。
其中,n表示样点序号,例如n=0,1,…,N-1。
其中,0<N 3<N 4<N-1。
例如N 3等于101,107、120、150或其他值。
例如N 4等于181,187、200、205或其他值。
其中,所述X 221(n)表示所述当前帧的第三主要声道信号中间段,所述Y 221(n)表示所述当前帧的第三次要声道信号中间段。其中,所述X 222(n)表示所述当前帧的第四主要声道信号中间段,所述Y 222(n)表示所述当前帧的第四次要声道信号中间段。
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000158
Figure PCTCN2018100060-appb-000159
Figure PCTCN2018100060-appb-000160
Figure PCTCN2018100060-appb-000161
其中,所述X L(n)表示所述当前帧的左声道信号,所述X R(n)表示所述当前帧的右声道信号。
所述M 12表示所述前一帧的非相关性信号声道组合方案对应的下混矩阵，所述M 12基于所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子构建。所述M 21表示所述当前帧的相关性信号声道组合方案对应的下混矩阵，所述M 21基于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子构建。
所述M 12可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000162
Figure PCTCN2018100060-appb-000163
Figure PCTCN2018100060-appb-000164
Figure PCTCN2018100060-appb-000165
Figure PCTCN2018100060-appb-000166
Figure PCTCN2018100060-appb-000167
其中,α 1_pre=tdm_last_ratio_SM;α 2_pre=1-tdm_last_ratio_SM。
其中,tdm_last_ratio_SM表示前一帧的非相关性信号声道组合方案对应的声道组合比例因子。
所述M 21可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000168
Figure PCTCN2018100060-appb-000169
其中,所述ratio表示所述当前帧的相关性信号声道组合方案对应的声道组合比例因子。
在一些可能实施方式中,所述当前帧的左右声道信号例如可以为当前帧的原始左右声道信号,经时域预处理的左右声道信号或经时延对齐处理的左右声道信号。
具体例如:
Figure PCTCN2018100060-appb-000170
Figure PCTCN2018100060-appb-000171
Figure PCTCN2018100060-appb-000172
其中,所述x L(n)表示所述当前帧的原始左声道信号(原始左声道信号是未经时域预处理的左声道信号),所述x R(n)表示所述当前帧的原始右声道信号(原始右声道信号是未经时域预处理的右声道信号)。
所述x L_HP(n)表示所述当前帧的经时域预处理的左声道信号,所述x R_HP(n)表示所述当前帧的经时域预处理的右声道信号。所述x′ L(n)表示所述当前帧的经时延对齐处理的左声道信号,所述x′ R(n)表示所述当前帧的经时延对齐处理的右声道信号。
可以理解,上述举例的分段时域下混处理方式并不一定是全部的可能实施方式,在实际应用中也可能采用其他分段时域下混处理方式。
相应的，下面针对相关性信号到非相关性信号解码模式和非相关性信号到相关性信号解码模式场景进行举例说明。相关性信号到非相关性信号解码模式和非相关性信号到相关性信号解码模式对应的时域上混处理方式例如为分段时域上混处理方式。
参见图7,本申请实施例提供一种音频解码方法,音频解码方法的相关步骤可由解码装置来实施,方法具体可包括:
701、根据码流进行解码以得到当前帧的主次声道解码信号。
702、确定当前帧的声道组合方案。
可以理解,步骤701和步骤702的执行没有必然的先后顺序。
703、在所述当前帧和前一帧的声道组合方案不同的情况下,根据所述当前帧和前一帧的声道组合方案对所述当前帧的主次声道解码信号进行分段时域上混处理,以得到所述当前帧的左右声道重建信号。
其中,所述当前帧的声道组合方案为多种声道组合方案中的其中一种。
其中,例如所述多种声道组合方案包括非相关性信号声道组合方案和相关性信号声道组合方案。其中,所述相关性信号声道组合方案为类正相信号对应的声道组合方案。所述非相关性信号声道组合方案为类反相信号对应的声道组合方案。可以理解,类正相信号对应的声道组合方案适用于类正相信号,类反相信号对应的声道组合方案适用于类反相信号。
其中，分段时域上混处理可以理解为是当前帧的主次声道解码信号被分为至少两段，针对每段采用不同的时域上混处理方式进行时域上混处理。可以理解，相对于非分段时域上混处理而言，分段时域上混处理使得在相邻帧的声道组合方案发生变化时获得更好的平滑过渡变得更有可能。
可以理解，上述方案中需确定当前帧的声道组合方案，这就表示当前帧的声道组合方案存在多种可能，这相对于只有唯一一种声道组合方案的传统方案而言，多种可能的声道组合方案和多种可能场景之间有利于获得更好的兼容匹配效果。并且，由于在所述当前帧和前一帧的声道组合方案不同的情况下引入了对所述当前帧的主次声道解码信号进行分段时域上混处理的机制，分段时域上混处理机制有利于实现声道组合方案的平滑过渡，进而有利于提高解码质量。
并且，由于引入了针对类反相信号对应的声道组合方案，这使得对于当前帧的立体声信号为类反相信号的情况下，有了针对性相对更强的声道组合方案和解码模式，进而有利于提高解码质量。
举例来说,前一帧的声道组合方案例如可能为相关性信号声道组合方案或非相关性信号声道组合方案。当前帧的声道组合方案可能为相关性信号声道组合方案或非相关性信号声道组合方案。那么当前帧和前一帧的声道组合方案不同也存在好几种可能情况。
具体例如,当所述前一帧的声道组合方案为相关性信号声道组合方案且所述当前帧的声道组合方案为非相关性信号声道组合方案。其中,所述当前帧的左右声道重建信号包括左右声道重建信号起始段、左右声道重建信号中间段和左右声道重建信号结尾段;所述当前帧的主次声道解码信号包括主次声道解码信号起始段、主次声道解码信号中间段和主次声道解码信号结尾段。那么,所述根据所述当前帧和前一帧的声道组合方案对所述当前帧的主次声道解码信号进行分段时域上混处理,以得到所述当前帧的左右声道重建信号,包括:使用所述前一帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号起始段进行时域上混处理,以得到所述当前帧的左右声道重建信号起始段;
使用所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组合方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号结尾段进行时域上混处理,以得到所述当前帧的左右声道重建信号结尾段;
使用所述前一帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号中间段进行时域上混处理以得到第一左右声道重建信号中间段;使用当前帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组合方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号中间段进行时域上混处理以得到第二左右声道重建信号中间段;将所述第一左右声道重建信号中间段和所述第二左右声道重建信号中间段进行加权求和处理以得到所述当前帧的左右声道重建信号中间段。
其中,所述当前帧的左右声道重建信号起始段、左右声道重建信号中间段和左右声道重建信号结尾段的长度可根据需要进行设定。所述当前帧的左右声道重建信号起始段、左右声道重建信号中间段和左右声道重建信号结尾段的长度可以相等、部分相等或互不相等。
其中,所述当前帧的主次声道解码信号起始段、主次声道解码信号中间段和主次声道解码信号结尾段的长度可根据需要进行设定。所述当前帧的主次声道解码信号起始段、主次声道解码信号中间段和主次声道解码信号结尾段的长度可以相等、部分相等或互不相等。
其中,左右声道重建信号可为左右声道解码信号,或可通过将左右声道重建信号进行时延调整处理 和/或时域后处理以得到左右声道解码信号。
其中,将所述第一左右声道重建信号中间段和所述第二左右声道重建信号中间段进行加权求和处理时,所述第一左右声道重建信号中间段对应的加权系数,可等于或不等于第二左右声道重建信号中间段对应的加权系数。
举例来说,将所述第一左右声道重建信号中间段和所述第二左右声道重建信号中间段进行加权求和处理时,所述第一左右声道重建信号中间段对应的加权系数为淡出因子,所述第二左右声道重建信号中间段对应的加权系数为淡入因子。
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000173
其中,
Figure PCTCN2018100060-appb-000174
表示所述当前帧的左声道重建信号起始段,
Figure PCTCN2018100060-appb-000175
表示所述当前帧的右声道重建信号起始段。
Figure PCTCN2018100060-appb-000176
表示所述当前帧的左声道重建信号结尾段,
Figure PCTCN2018100060-appb-000177
表示所述当前帧的右声道重建信号结尾段。其中,
Figure PCTCN2018100060-appb-000178
表示所述当前帧的左声道重建信号中间段,
Figure PCTCN2018100060-appb-000179
表示所述当前帧的右声道重建信号中间段。
其中,
Figure PCTCN2018100060-appb-000180
表示所述当前帧的左声道重建信号。
其中,
Figure PCTCN2018100060-appb-000181
表示所述当前帧的右声道重建信号。
例如,
Figure PCTCN2018100060-appb-000182
例如,fade_in(n)表示淡入因子,fade_out(n)表示淡出因子。例如,fade_in(n)和fade_out(n)之和为1。
具体例如,
Figure PCTCN2018100060-appb-000183
当然，fade_in(n)也可以是基于n的其它函数关系的淡入因子。当然，fade_out(n)也可以是基于n的其它函数关系的淡出因子。
其中,n表示样点序号,n=0,1,…,N-1。其中,0<N 1<N 2<N-1。
其中,所述
Figure PCTCN2018100060-appb-000184
表示所述当前帧的第一左声道重建信号中间段,所述
Figure PCTCN2018100060-appb-000185
表示所述当前帧的第一右声道重建信号中间段。所述
Figure PCTCN2018100060-appb-000186
表示所述当前帧的第二左声道重建信号中间段,所述
Figure PCTCN2018100060-appb-000187
表示所述当前帧的第二右声道重建信号中间段。
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000188
Figure PCTCN2018100060-appb-000189
Figure PCTCN2018100060-appb-000190
Figure PCTCN2018100060-appb-000191
其中,
Figure PCTCN2018100060-appb-000192
表示所述当前帧的主要声道解码信号;
Figure PCTCN2018100060-appb-000193
表示所述当前帧的次要声道解码信号。
所述
Figure PCTCN2018100060-appb-000194
表示所述前一帧的相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000195
基于所述前一帧的相关性信号声道组合方案对应的声道组合比例因子构建。所述
Figure PCTCN2018100060-appb-000196
表示所述当前帧的非相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000197
基于所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子构建。
所述
Figure PCTCN2018100060-appb-000198
可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000199
Figure PCTCN2018100060-appb-000200
Figure PCTCN2018100060-appb-000201
Figure PCTCN2018100060-appb-000202
Figure PCTCN2018100060-appb-000203
Figure PCTCN2018100060-appb-000204
其中,α 1=ratio_SM;α 2=1-ratio_SM;所述ratio_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
所述
Figure PCTCN2018100060-appb-000205
可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000206
Figure PCTCN2018100060-appb-000207
其中,所述tdm_last_ratio表示所述前一帧的相关性信号声道组合方案对应的声道组合比例因子。
又具体例如,当所述前一帧的声道组合方案为非相关性信号声道组合方案且所述当前帧的声道组合方案为相关性信号声道组合方案时,所述当前帧的左右声道重建信号包括左右声道重建信号起始段、左右声道重建信号中间段和左右声道重建信号结尾段;所述当前帧的主次声道解码信号包括主次声道解码信号起始段、主次声道解码信号中间段和主次声道解码信号结尾段。那么,所述根据所述当前帧和前一帧的声道组合方案对所述当前帧的主次声道解码信号进行分段时域上混处理,以得到所述当前帧的左右声道重建信号,包括:
使用所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组合方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号起始段进行时域上混处理,以得到所述当前帧的左右声道重建信号起始段;
使用所述当前帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号结尾段进行时域上混处理,以得到所述当前帧的左右声道重建信号结尾段;
使用所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子和非相关性信号声道组合 方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号中间段进行时域上混处理以得到第三左右声道重建信号中间段;使用当前帧的相关性信号声道组合方案对应的声道组合比例因子和相关性信号声道组合方案对应的时域上混处理方式,对所述当前帧的主次声道解码信号中间段进行时域上混处理以得到第四左右声道重建信号中间段;将所述第三左右声道重建信号中间段和所述第四左右声道重建信号中间段进行加权求和处理以得到所述当前帧的左右声道重建信号中间段。
其中,将所述第三左右声道重建信号中间段和所述第四左右声道重建信号中间段进行加权求和处理时,所述第三左右声道重建信号中间段对应的加权系数,可等于或不等于所述第四左右声道重建信号中间段对应的加权系数。
例如,将所述第三左右声道重建信号中间段和所述第四左右声道重建信号中间段进行加权求和处理时,所述第三左右声道重建信号中间段对应的加权系数为淡出因子,所述第四左右声道重建信号中间段对应的加权系数为淡入因子。
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000208
其中,
Figure PCTCN2018100060-appb-000209
表示所述当前帧的左声道重建信号起始段,
Figure PCTCN2018100060-appb-000210
表示所述当前帧的右声道重建信号起始段。
Figure PCTCN2018100060-appb-000211
表示所述当前帧的左声道重建信号结尾段,
Figure PCTCN2018100060-appb-000212
表示所述当前帧的右声道重建信号结尾段。其中,
Figure PCTCN2018100060-appb-000213
表示所述当前帧的左声道重建信号中间段,
Figure PCTCN2018100060-appb-000214
表示所述当前帧的右声道重建信号中间段;
其中,
Figure PCTCN2018100060-appb-000215
表示所述当前帧的左声道重建信号。
其中,
Figure PCTCN2018100060-appb-000216
表示所述当前帧的右声道重建信号。
例如,
Figure PCTCN2018100060-appb-000217
其中,fade_in(n)表示淡入因子,fade_out(n)表示淡出因子,fade_in(n)和fade_out(n)之和为1。
具体例如,
fade_in(n)=(n-N 3)/(N 4-N 3),fade_out(n)=1-fade_in(n),其中N 3≤n<N 4。
当然,fade_in(n)也可以是基于n的其它函数关系的淡入因子。当然,fade_out(n)也可以是基于n的其它函数关系的淡出因子。
其中,n表示样点序号,例如n=0,1,…,N-1。
其中,0<N 3<N 4<N-1。
例如N 3等于101、107、120、150或其他值。
例如N 4等于181、187、200、205或其他值。
其中,所述
Figure PCTCN2018100060-appb-000219
表示所述当前帧的第三左声道重建信号中间段,所述
Figure PCTCN2018100060-appb-000220
表示所述当前帧的第三右声道重建信号中间段;所述
Figure PCTCN2018100060-appb-000221
表示所述当前帧的第四左声道重建信号中间段,所述
Figure PCTCN2018100060-appb-000222
表示所述当前帧的第四右声道重建信号中间段。
在一些可能实施方式中,
Figure PCTCN2018100060-appb-000223
Figure PCTCN2018100060-appb-000224
Figure PCTCN2018100060-appb-000225
Figure PCTCN2018100060-appb-000226
其中,
Figure PCTCN2018100060-appb-000227
表示所述当前帧的主要声道解码信号;
Figure PCTCN2018100060-appb-000228
表示所述当前帧的次要声道解码信号。
所述
Figure PCTCN2018100060-appb-000229
表示所述前一帧的非相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000230
基于所述前一帧的非相关性信号声道组合方案对应的声道组合比例因子构建;所述
Figure PCTCN2018100060-appb-000231
表示所述当前帧的相关性信号声道组合方案对应的上混矩阵,所述
Figure PCTCN2018100060-appb-000232
基于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子构建。
所述
Figure PCTCN2018100060-appb-000233
可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000234
Figure PCTCN2018100060-appb-000235
Figure PCTCN2018100060-appb-000236
Figure PCTCN2018100060-appb-000237
Figure PCTCN2018100060-appb-000238
Figure PCTCN2018100060-appb-000239
其中,α 1_pre=tdm_last_ratio_SM;α 2_pre=1-tdm_last_ratio_SM;
其中,tdm_last_ratio_SM表示前一帧的非相关性信号声道组合方案对应的声道组合比例因子。
所述
Figure PCTCN2018100060-appb-000240
可以有多种可能的形式,具体例如:
Figure PCTCN2018100060-appb-000241
Figure PCTCN2018100060-appb-000242
其中,所述ratio表示所述当前帧的相关性信号声道组合方案对应的声道组合比例因子。
本申请实施例中,当前帧的立体声参数(例如声道组合比例因子和/或声道间时延差)可为固定值,也可基于当前帧的声道组合方案(例如相关性信号声道组合方案或非相关性信号声道组合方案)来确定。
参见图8,下面举例一种时域立体声参数的确定方法,时域立体声参数的确定方法的相关步骤可由编码装置来实施,方法具体可以包括:
801、确定当前帧的声道组合方案。
802、根据所述当前帧的声道组合方案确定所述当前帧的时域立体声参数,所述时域立体声参数包括声道组合比例因子和声道间时延差中的至少一种。
其中,所述当前帧的声道组合方案为多种声道组合方案中的其中一种。
其中,例如所述多种声道组合方案包括非相关性信号声道组合方案和相关性信号声道组合方案。
其中,所述相关性信号声道组合方案为类正相信号对应的声道组合方案。所述非相关性信号声道组合方案为类反相信号对应的声道组合方案。可以理解,类正相信号对应的声道组合方案适用于类正相信号,类反相信号对应的声道组合方案适用于类反相信号。
在确定所述当前帧的声道组合方案为相关性信号声道组合方案的情况下,所述当前帧的时域立体声参数为所述当前帧的相关性信号声道组合方案对应的时域立体声参数;在确定所述当前帧的声道组合方案为非相关性信号声道组合方案的情况下,所述当前帧的时域立体声参数为所述当前帧的非相关性信号声道组合方案对应的时域立体声参数。
可以理解,上述方案中需确定当前帧的声道组合方案,这就表示当前帧的声道组合方案存在多种可能,这相对于只有唯一一种声道组合方案的传统方案而言,多种可能的声道组合方案和多种可能场景之间有利于获得更好的兼容匹配效果。由于是根据所述当前帧的声道组合方案确定所述当前帧的时域立体声参数,这使得时域立体声参数和多种可能场景之间有利于获得更好的兼容匹配效果,进而有利于提升编解码质量。
在一些可能实施方式中,可以先分别计算出当前帧的非相关性信号声道组合方案对应的声道组合比例因子和当前帧的相关性信号声道组合方案对应的声道组合比例因子。而后在确定当前帧的声道组合方案为相关性信号声道组合方案的情况下,确定当前帧的时域立体声参数为所述当前帧的相关性信号声道组合方案对应的时域立体声参数;或者,在确定当前帧的声道组合方案为非相关性信号声道组合方案的情况下,确定当前帧的时域立体声参数为所述当前帧的非相关性信号声道组合方案对应的时域立体声参数。或者,也可先计算出当前帧的相关性信号声道组合方案对应的时域立体声参数,在确定当前帧的声道组合方案为相关性信号声道组合方案的情况下,确定当前帧的时域立体声参数为所述当前帧的相关性信号声道组合方案对应的时域立体声参数;而在确定当前帧的声道组合方案为非相关性信号声道组合方案的情况下,再计算所述当前帧的非相关性信号声道组合方案对应的时域立体声参数,将计算出的所述当前帧的非相关性信号声道组合方案对应的时域立体声参数,确认为当前帧的时域立体声参数。
或者,也可先确定当前帧的声道组合方案,在确定所述当前帧的声道组合方案为相关性信号声道组合方案的情况下,计算所述当前帧的相关性信号声道组合方案对应的时域立体声参数,那么,当前帧的时域立体声参数为当前帧的相关性信号声道组合方案对应的时域立体声参数。而在确定当前帧的声道组合方案为非相关性信号声道组合方案的情况下,计算所述当前帧的非相关性信号声道组合方案对应的时域立体声参数,那么,当前帧的时域立体声参数为当前帧的非相关性信号声道组合方案对应的时域立体声参数。
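上述"先判决声道组合方案、再计算被选中方案对应参数"的控制流程可用如下 Python 片段示意。其中 compute_corr_params、compute_uncorr_params 为假想的回调函数,仅用于说明分支逻辑,并非实际实现:

```python
def determine_td_stereo_params(scheme, compute_corr_params, compute_uncorr_params):
    """根据当前帧的声道组合方案选择时域立体声参数的计算分支。

    scheme: 'correlated' 表示相关性信号声道组合方案,
            'uncorrelated' 表示非相关性信号声道组合方案。
    两个 compute_* 参数为回调函数, 分别计算两种方案对应的参数,
    只有被选中的方案才会真正执行计算。
    """
    if scheme == 'correlated':
        return compute_corr_params()
    elif scheme == 'uncorrelated':
        return compute_uncorr_params()
    raise ValueError('unknown channel combination scheme: %s' % scheme)
```

这样组织的好处是未被选中的方案对应的参数计算可以完全跳过,节省编码端运算量。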
在一些可能实施方式中,根据所述当前帧的声道组合方案确定所述当前帧的时域立体声参数包括:根据所述当前帧的声道组合方案,确定所述当前帧的声道组合方案所对应的声道组合比例因子初始值。在无需对所述当前帧的声道组合方案(相关性信号声道组合方案或非相关性信号声道组合方案)对应的声道组合比例因子的初始值进行修正的情况之下,所述当前帧的声道组合方案对应的声道组合比例因子,等于所述当前帧的声道组合方案对应的声道组合比例因子的初始值。在需对所述当前帧的声道组合方案(相关性信号声道组合方案或非相关性信号声道组合方案)对应的声道组合比例因子的初始值进行修正的情况之下,对所述当前帧的声道组合方案对应的声道组合比例因子的初始值进行修正,以得到所述当前帧的声道组合方案对应的声道组合比例因子的修正值,所述当前帧的声道组合方案对应的声道组合比例因子,等于所述当前帧的声道组合方案对应的声道组合比例因子的修正值。
举例来说,所述根据所述当前帧的声道组合方案确定所述当前帧的时域立体声参数可以包括:根据所述当前帧左声道信号计算所述当前帧的左声道信号的帧能量;根据所述当前帧右声道信号计算所述当前帧的右声道信号的帧能量;根据所述当前帧左声道信号的帧能量和右声道信号的帧能量,计算所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的初始值。
其中,在无需对所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正的情况下,所述当前帧的相关性信号声道组合方案对应的声道组合比例因子等于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子初始值,所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的编码索引等于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的初始值的编码索引;
在需对所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正的情况下,对所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的初始值及其编码索引进行修正,以得到所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的修正值及其编码索引,所述当前帧的相关性信号声道组合方案对应的声道组合比例因子等于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的修正值;所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的编码索引等于所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的修正值的编码索引。
具体例如,在对所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的初始值及其编码索引进行修正的情况下,
ratio_idx_mod=0.5*(tdm_last_ratio_idx+16);
ratio_mod qua=ratio_tabl[ratio_idx_mod];
其中,所述tdm_last_ratio_idx表示前一帧的相关性信号声道组合方案对应的声道组合比例因子的编码索引,所述ratio_idx_mod表示所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的修正值对应的编码索引,所述ratio_mod qua表示所述当前帧的相关性信号声道组合方案对应的声道组合比例因子的修正值。
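上述索引修正公式可直接写成如下 Python 片段(假设编码索引为整数、码书为 5 比特共 32 个码字的标量量化码书;示例中的均匀码书 ratio_tabl 仅为演示假设,并非实际采用的码书):

```python
# 5 比特标量量化的示例码书: 在 [0, 1] 上均匀取 32 个码字(仅为演示假设)
ratio_tabl = [i / 31.0 for i in range(32)]

def modify_ratio_index(tdm_last_ratio_idx):
    """按 ratio_idx_mod = 0.5 * (tdm_last_ratio_idx + 16) 修正编码索引。

    tdm_last_ratio_idx: 前一帧相关性信号声道组合方案对应的
    声道组合比例因子编码索引; 结果向下取整以保证仍是合法的整数索引。
    """
    ratio_idx_mod = int(0.5 * (tdm_last_ratio_idx + 16))
    ratio_mod_qua = ratio_tabl[ratio_idx_mod]   # 查码书得到修正后的比例因子
    return ratio_idx_mod, ratio_mod_qua
```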
又例如,根据所述当前帧的声道组合方案确定所述当前帧的时域立体声参数包括:根据所述当前帧的左声道信号和右声道信号获得所述当前帧的参考声道信号;计算所述当前帧的左声道信号与参考声道信号之间的幅度相关性参数;计算所述当前帧的右声道信号与参考声道信号之间的幅度相关性参数;根据所述当前帧的左右声道信号与参考声道信号之间的幅度相关性参数,计算所述当前帧的左右声道信号之间的幅度相关性差异参数;根据所述当前帧的左右声道信号之间的幅度相关性差异参数,计算所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
其中,根据所述当前帧的左右声道信号之间的幅度相关性差异参数,计算所述当前帧的非相关性信 号声道组合方案对应的声道组合比例因子,例如可包括:根据所述当前帧的左右声道信号之间的幅度相关性差异参数,计算所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子初始值;对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子初始值进行修正,以得到所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子。可以理解,当无需对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子初始值进行修正时,那么,所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子,等于所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子初始值。
在一些可能的实施方式中,
Figure PCTCN2018100060-appb-000243
Figure PCTCN2018100060-appb-000244
其中,
Figure PCTCN2018100060-appb-000245
其中,所述mono_i(n)表示所述当前帧的参考声道信号。
其中,所述x′ L(n)表示所述当前帧经时延对齐处理的左声道信号;所述x′ R(n)表示所述当前帧经时延对齐处理的右声道信号。所述corr_LM表示所述当前帧的左声道信号与参考声道信号之间的幅度相关性参数,所述corr_RM表示所述当前帧的右声道信号与参考声道信号之间的幅度相关性参数。
在一些可能的实施方式中,所述根据所述当前帧的左右声道信号与参考声道信号之间的幅度相关性参数,计算所述当前帧的左右声道信号之间的幅度相关性差异参数,包括:根据当前帧经时延对齐处理的左声道信号与参考声道信号之间的幅度相关性参数,计算当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数;根据当前帧经时延对齐处理的右声道信号与参考声道信号之间的幅度相关性参数,计算当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数;根据当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数,计算当前帧左右声道之间的幅度相关性差异参数。
其中,平滑处理的方式可以是多种多样的,举例来说:
tdm_lt_corr_LM_SM cur=α*tdm_lt_corr_LM_SM pre+(1-α)corr_LM;
其中,tdm_lt_rms_L_SM cur=(1-A)*tdm_lt_rms_L_SM pre+A*rms_L,所述A表示所述当前帧的左声道信号的长时平滑帧能量的更新因子。所述tdm_lt_rms_L_SM cur表示所述当前帧的左声道信号的长时平滑帧能量;其中,所述rms_L表示所述当前帧左声道信号的帧能量。tdm_lt_corr_LM_SM cur表示当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数。tdm_lt_corr_LM_SM pre表示前一帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数。α表示左声道平滑因子。
举例来说,
tdm_lt_corr_RM_SM cur=β*tdm_lt_corr_RM_SM pre+(1-β)corr_RM。
其中,tdm_lt_rms_R_SM cur=(1-B)*tdm_lt_rms_R_SM pre+B*rms_R;所述B表示所述当前帧的右声道信号的长时平滑帧能量的更新因子。所述tdm_lt_rms_R_SM cur表示所述当前帧的右声道信号的长时平滑帧能量。其中,所述rms_R表示所述当前帧右声道信号的帧能量。其中,tdm_lt_corr_RM_SM cur表示所述当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数。tdm_lt_corr_RM_SM pre表示前一帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数。β表示右声道平滑因子。
在一些可能的实施方式中,
diff_lt_corr=tdm_lt_corr_LM_SM-tdm_lt_corr_RM_SM;
其中,tdm_lt_corr_LM_SM表示所述当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数,tdm_lt_corr_RM_SM表示所述当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数,所述diff_lt_corr表示所述当前帧左右声道信号之间的幅度相关性差异参数。
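上述长时平滑与差异参数的计算流程可用如下 Python 片段示意(平滑因子取 0.5 仅为演示,实际可为 0 到 1 之间的任意实数或自适应取值):

```python
def smooth_corr(prev_sm, corr, factor):
    """一阶递归长时平滑: cur = factor * prev + (1 - factor) * corr。"""
    return factor * prev_sm + (1.0 - factor) * corr

def amp_corr_diff(lm_sm_prev, rm_sm_prev, corr_LM, corr_RM, alpha=0.5, beta=0.5):
    """先分别平滑左/右声道与参考声道的幅度相关性参数, 再求其差。

    返回当前帧的 tdm_lt_corr_LM_SM、tdm_lt_corr_RM_SM 和 diff_lt_corr。
    """
    tdm_lt_corr_LM_SM = smooth_corr(lm_sm_prev, corr_LM, alpha)
    tdm_lt_corr_RM_SM = smooth_corr(rm_sm_prev, corr_RM, beta)
    diff_lt_corr = tdm_lt_corr_LM_SM - tdm_lt_corr_RM_SM
    return tdm_lt_corr_LM_SM, tdm_lt_corr_RM_SM, diff_lt_corr
```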
在一些可能的实施方式中,所述根据所述当前帧的左右声道信号之间的幅度相关性差异参数,计算所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子包括:对当前帧的左右声道信号之间的幅度相关性差异参数进行映射处理,使映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的取值范围在[MAP_MIN,MAP_MAX]之间;将映射处理后的左右声道信号之间的幅度相关性差异参数转换为声道组合比例因子。
在一些可能的实施方式中,对所述当前帧的左右声道之间的幅度相关性差异参数进行映射处理包括:对所述当前帧的左右声道信号之间的幅度相关性差异参数进行限幅处理;对经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数进行映射处理。
其中,限幅处理的方式可以是多种多样的,具体例如:
当diff_lt_corr>RATIO_MAX时,diff_lt_corr_limit=RATIO_MAX;当RATIO_MIN≤diff_lt_corr≤RATIO_MAX时,diff_lt_corr_limit=diff_lt_corr;当diff_lt_corr<RATIO_MIN时,diff_lt_corr_limit=RATIO_MIN。
其中,RATIO_MAX表示经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的最大值,RATIO_MIN表示经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的最小值,RATIO_MAX>RATIO_MIN。
其中,映射处理的方式可以是多种多样的,具体例如:
Figure PCTCN2018100060-appb-000247
Figure PCTCN2018100060-appb-000248
B 1=MAP_MAX-RATIO_MAX*A 1,或B 1=MAP_HIGH-RATIO_HIGH*A 1
Figure PCTCN2018100060-appb-000249
B 2=MAP_LOW-RATIO_LOW*A 2,或B 2=MAP_MIN-RATIO_MIN*A 2
Figure PCTCN2018100060-appb-000250
B 3=MAP_HIGH-RATIO_HIGH*A 3,或B 3=MAP_LOW-RATIO_LOW*A 3
其中,所述diff_lt_corr_map表示经映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数;
其中,MAP_MAX表示经映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的最大值;MAP_HIGH表示经映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的高门限;MAP_LOW表示经映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的低门限;MAP_MIN表示经映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的最小值;
其中,MAP_MAX>MAP_HIGH>MAP_LOW>MAP_MIN;
RATIO_MAX表示经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的最大值,RATIO_HIGH表示经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的高门限,RATIO_LOW表示经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的低门限,RATIO_MIN表示经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数的最小值;
其中,RATIO_MAX>RATIO_HIGH>RATIO_LOW>RATIO_MIN。
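按上述 A 1/B 1、A 2/B 2、A 3/B 3 的构造方式,映射处理等价于把限幅后的参数分三段线性映射到 [MAP_MIN,MAP_MAX]。下面的 Python 片段按这一理解给出示意实现,各门限取正文给出的示例值:

```python
MAP_MAX, MAP_HIGH, MAP_LOW, MAP_MIN = 2.0, 1.2, 0.8, 0.0
RATIO_MAX, RATIO_HIGH, RATIO_LOW, RATIO_MIN = 1.5, 0.75, -0.75, -1.5

def map_diff_lt_corr(diff_lt_corr_limit):
    """把限幅后的幅度相关性差异参数分三段线性映射到 [MAP_MIN, MAP_MAX]。"""
    x = diff_lt_corr_limit
    if x > RATIO_HIGH:    # 高段: [RATIO_HIGH, RATIO_MAX] -> [MAP_HIGH, MAP_MAX]
        A = (MAP_MAX - MAP_HIGH) / (RATIO_MAX - RATIO_HIGH)
        B = MAP_MAX - RATIO_MAX * A
    elif x < RATIO_LOW:   # 低段: [RATIO_MIN, RATIO_LOW] -> [MAP_MIN, MAP_LOW]
        A = (MAP_LOW - MAP_MIN) / (RATIO_LOW - RATIO_MIN)
        B = MAP_MIN - RATIO_MIN * A
    else:                 # 中段: [RATIO_LOW, RATIO_HIGH] -> [MAP_LOW, MAP_HIGH]
        A = (MAP_HIGH - MAP_LOW) / (RATIO_HIGH - RATIO_LOW)
        B = MAP_HIGH - RATIO_HIGH * A
    return A * x + B
```

三段映射在分段点处连续,即限幅区间的端点与门限分别映射到 MAP_MIN、MAP_LOW、MAP_HIGH、MAP_MAX。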
又例如,
Figure PCTCN2018100060-appb-000251
其中,diff_lt_corr_limit表示经限幅处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数;diff_lt_corr_map表示经映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数。
其中,
当diff_lt_corr>RATIO_MAX时,diff_lt_corr_limit=RATIO_MAX;当-RATIO_MAX≤diff_lt_corr≤RATIO_MAX时,diff_lt_corr_limit=diff_lt_corr;当diff_lt_corr<-RATIO_MAX时,diff_lt_corr_limit=-RATIO_MAX。
其中,所述RATIO_MAX表示所述当前帧的左右声道信号之间的幅度相关性差异参数的最大幅度,所述-RATIO_MAX表示所述当前帧的左右声道信号之间的幅度相关性差异参数的最小幅度。
在一些可能的实施方式中,
ratio_SM=(1-cos((π/2)*diff_lt_corr_map))/2。
其中,所述diff_lt_corr_map表示经映射处理后的所述当前帧的左右声道信号之间的幅度相关性差异参数。所述ratio_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子,或所述ratio_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值。
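将映射后的差异参数转换为声道组合比例因子的一种可能形式是余弦映射(该具体函数形式为示例假设,与后文步骤90852中对 cos(·) 的使用相呼应);它把取值范围 [0,2] 单调地映射到 [0,1]:

```python
import math

def diff_to_ratio_sm(diff_lt_corr_map):
    """把映射后的幅度相关性差异参数转换为声道组合比例因子。

    假设采用余弦映射 ratio_SM = (1 - cos(pi/2 * x)) / 2,
    输入范围 [0, 2] 被单调映射到输出范围 [0, 1]。
    """
    return (1.0 - math.cos(math.pi / 2.0 * diff_lt_corr_map)) / 2.0
```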
在本申请一些实施方式,在需进行声道组合比例因子修正的场景,修正可以在编码声道组合比例因子之前或之后。具体例如,可先计算得到当前帧的声道组合比例因子(例如非相关性信号声道组合方案对应的声道组合比例因子或者相关性信号声道组合方案对应的声道组合比例因子)的初始值,而后对声道组合比例因子的初始值进行编码,进而得到当前帧的声道组合比例因子的初始编码索引,而后再对得到的当前帧的声道组合比例因子的初始编码索引进行修正,进而得到当前帧的声道组合比例因子的编码索引(得到当前帧的声道组合比例因子的编码索引,也就相当于也得到了当前帧的声道组合比例因子)。或者,也可以先计算得到当前帧的声道组合比例因子的初始值,而后对计算得到当前帧的声道组合比例因子的初始值进行修正,进而得到当前帧的声道组合比例因子,而后在对得到的当前帧的声道组合比例因子进行编码,以得到当前帧的声道组合比例因子的编码索引。
其中,对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正的方式可以是多种多样的,例如,在需要通过对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正,来得到所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的情况下,例如可以基于前一帧的声道组合比例因子和所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值,来对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正;或者,也可基于所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值,对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正。
例如,首先,根据当前帧的左声道信号的长时平滑帧能量、当前帧的右声道信号的长时平滑帧能量、当前帧的左声道信号的帧间能量差异、历史缓存中的缓存前一帧的编码参数(例如主要声道信号的帧间相关性、次要声道信号的帧间相关性)、当前帧以及前一帧的声道组合方案标识、前一帧的非相关性信号声道组合方案对应的声道组合比例因子以及当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值,确定是否需要对当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正。若是,则将前一帧的非相关性信号声道组合方案对应的声道组合比例因子作为当前帧的非 相关性信号声道组合方案对应的声道组合比例因子;否则,将当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值作为当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
当然,通过对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正,来得到所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的具体实现方式并不限于上述举例。
803、对确定的所述当前帧的时域立体声参数进行编码。
在一些可能的实施方式中,对确定的当前帧的非相关性信号声道组合方案对应的声道组合比例因子进行量化编码,
ratio_init_SM qua=ratio_tabl_SM[ratio_idx_init_SM]。
其中,所述ratio_tabl_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子标量量化的码书,所述ratio_idx_init_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引,所述ratio_init_SM qua表示当前帧的非相关性信号声道组合方案对应的声道组合比例因子的量化编码初始值。
在一些可能的实施方式中,
ratio_idx_SM=ratio_idx_init_SM。
ratio_SM=ratio_tabl[ratio_idx_SM]。
其中,所述ratio_SM表示所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子。ratio_idx_SM表示当前帧的非相关性信号声道组合方案对应的声道组合比例因子的编码索引;
或者,
ratio_idx_SM=φ*ratio_idx_init_SM+(1-φ)*tdm_last_ratio_idx_SM
ratio_SM=ratio_tabl[ratio_idx_SM]
其中,ratio_idx_init_SM表示所述当前帧的非相关性信号声道组合方案对应的初始编码索引,tdm_last_ratio_idx_SM表示前一帧的非相关性信号声道组合方案对应的声道组合比例因子的最终编码索引,其中,
Figure PCTCN2018100060-appb-000254
为非相关性信号声道组合方案对应的声道组合比例因子的修正因子。其中,所述ratio_SM表示当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
在一些可能的实施方式中,在需要通过对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正,来得到所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的情况下,还可以先对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行量化编码,得到所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引,然后可以基于前一帧的声道组合比例因子的编码索引和所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引,来对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引进行修正;或者,也可基于所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引,对所述当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引进行修正。
例如,可以是先将当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行量化编码,得到当前帧的非相关性信号声道组合方案对应的初始编码索引。然后在需要对当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始值进行修正时,将前一帧的非相关性信号声道组合方案对应的声道组合比例因子的编码索引作为当前帧的非相关性信号声道组合方案对应的声道组合比例因子的编码索引;否则,将当前帧的非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引作为当前帧的非相关性信号声道组合方案对应的声道组合比例因子的编码索引。最后,将当前帧的非相关性信号声道组合方案对应的声道组合比例因子的编码索引对应的量化编码值作为当前帧的非相关性信号声道组合方案对应的声道组合比例因子。
此外,在时域立体声参数包括声道间时间差的情况下,根据所述当前帧的声道组合方案确定所述当前帧的时域立体声参数可包括:在所述当前帧的声道组合方案为相关性信号声道组合方案的情况下,计算所述当前帧的声道间时间差。并且可将计算得到的所述当前帧的声道间时间差写入码流。在所述当前帧的声道组合方案为非相关性信号声道组合方案的情况下使用默认的声道间时间差(例如0)作为所述当前帧的声道间时间差。并且可不将默认的声道间时间差写入码流,解码装置也使用默认的声道间时间差。
下面还举例提供一种时域立体声参数的编码方法,例如可以包括:确定当前帧的声道组合方案;根据所述当前帧的声道组合方案确定所述当前帧的时域立体声参数;对确定的所述当前帧的时域立体声参数进行编码,所述时域立体声参数包括声道组合比例因子和声道间时延差中的至少一种。
相应的,解码装置可从码流中获得当前帧的时域立体声参数,进而基于从码流中获得的当前帧的时域立体声参数来进行相关解码。
下面通过一个更为具体的应用场景进行举例说明。
参见图9-A,图9-A是本申请实施例提供的一种音频编码方法的流程示意图。本申请实施例提供的一种音频编码方法可由编码装置来实施,方法具体可包括:
901、对当前帧的原始左右声道信号进行时域预处理。
例如若立体声音频信号的采样率为16KHz,一帧信号为20ms,帧长记作N,当N=320时表示帧长为320个样点。其中,当前帧的立体声信号包括当前帧的左声道信号和当前帧的右声道信号。其中,当前帧的原始左声道信号记作x L(n),当前帧的原始右声道信号记作x R(n),n为样点序号,n=0,1,…,N-1。
例如,对当前帧的原始左右声道信号进行时域预处理可包括:对当前帧的原始左右声道信号进行高通滤波处理,得到当前帧经时域预处理的左右声道信号,当前帧经时域预处理的左声道信号记作x L_HP(n),当前帧经时域预处理的右声道信号记作x R_HP(n)。其中,n为样点序号,n=0,1,…,N-1。其中,高通滤波处理采用的滤波器例如可为截止频率为20Hz的无限脉冲响应滤波器(英文:Infinite Impulse Response,缩写:IIR)滤波器,也可采用其他类型的滤波器。
例如采样率为16KHz且对应截止频率为20Hz的高通滤波器的传递函数可为:
Figure PCTCN2018100060-appb-000255
其中,b 0=0.994461788958195,b 1=-1.988923577916390,b 2=0.994461788958195,a 1=1.988892905899653,a 2=-0.988954249933127,z为Z变换的变换因子。
其中,相应的时域滤波过程可表示为:
x L_HP(n)=b 0*x L(n)+b 1*x L(n-1)+b 2*x L(n-2)-a 1*x L_HP(n-1)-a 2*x L_HP(n-2)
x R_HP(n)=b 0*x R(n)+b 1*x R(n-1)+b 2*x R(n-2)-a 1*x R_HP(n-1)-a 2*x R_HP(n-2)
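上述 20Hz 高通滤波可用如下 Python 片段按差分方程逐样点示意。注意符号约定:此处假设反馈项以相加方式进入差分方程,即分母为 1-a 1*z^-1-a 2*z^-2,该约定与上文给出的 a 1、a 2 数值相匹配;具体符号约定应以编解码器规范为准,本片段仅为示例实现:

```python
# 20 Hz 高通滤波器系数(采样率 16 kHz), 数值取自上文
b = [0.994461788958195, -1.988923577916390, 0.994461788958195]
a = [1.988892905899653, -0.988954249933127]

def highpass_20hz(x):
    """逐样点二阶 IIR 高通滤波。

    这里采用反馈项相加的约定:
        y(n) = b0*x(n) + b1*x(n-1) + b2*x(n-2) + a1*y(n-1) + a2*y(n-2)
    即 H(z) = B(z) / (1 - a1*z^-1 - a2*z^-2)。
    """
    y = []
    x1 = x2 = y1 = y2 = 0.0   # 滤波器状态, 初始为零
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 + a[0] * y1 + a[1] * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y
```

对恒定(直流)输入,高通滤波输出趋于 0;由于 b 0+b 1+b 2 恰为 0,z=1 处的增益为零。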
902、对当前帧经时域预处理的左右声道信号进行时延对齐处理,得到当前帧经时延对齐处理的左右声道信号。
其中,经时延对齐处理的信号可简称"时延对齐的信号"。例如经时延对齐处理的左声道信号可简称"时延对齐的左声道信号",经时延对齐处理的右声道信号可简称"时延对齐的右声道信号",以此类推。
具体地,可根据当前帧预处理后的左右声道信号提取声道间时延参数并编码,根据编码后的声道间时延参数对左右声道信号进行时延对齐处理,得到当前帧经时延对齐处理的左右声道信号。其中,当前帧经时延对齐处理的左声道信号记作x′ L(n),当前帧经时延对齐处理的右声道信号记作x′ R(n),其中,n为样点序号,n=0,1,…,N-1。
具体例如,编码装置可根据当前帧预处理后的左右声道信号计算左右声道间的时域互相关函数。搜索左右声道间的时域互相关函数的最大值(或其它值)以确定左右声道信号间的时延差。对确定的左右声道间的时延差进行量化编码。根据量化编码后的左右声道间时延差,以左右声道中选定的一个声道的信号为基准,对另一个声道的信号进行时延调整,从而获得当前帧经时延对齐处理的左右声道信号。
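步骤902中"搜索互相关最大值确定时延差并对齐"的基本思路,可用如下简化的 Python 片段示意(仅演示整数时延的互相关搜索与平移对齐,未包含量化编码;max_lag 搜索范围为示例假设):

```python
def estimate_delay(xl, xr, max_lag):
    """在 [-max_lag, max_lag] 内搜索使左右声道互相关最大的时延差。

    返回的 lag > 0 表示右声道滞后于左声道 lag 个样点。
    """
    best_lag, best_corr = 0, float('-inf')
    n = len(xl)
    for lag in range(-max_lag, max_lag + 1):
        c = 0.0
        for i in range(n):
            j = i - lag
            if 0 <= j < n:
                c += xr[i] * xl[j]   # 右声道与平移后的左声道相关
        if c > best_corr:
            best_corr, best_lag = c, lag
    return best_lag

def align_right_channel(xr, lag):
    """以左声道为基准, 把右声道前移 lag 个样点实现时延对齐(越界补零)。"""
    n = len(xr)
    out = [0.0] * n
    for i in range(n):
        j = i + lag
        if 0 <= j < n:
            out[i] = xr[j]
    return out
```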
值得注意的是,时延对齐处理的具体实现方法有很多种,本实施例中对具体时延对齐处理方法不做限定。
903、对当前帧经时延对齐处理的左右声道信号进行时域分析。
具体地,时域分析可以包括瞬态检测等。其中,瞬态检测可以是分别对当前帧经时延对齐处理的左右声道信号进行能量检测(具体可检测当前帧是否发生能量突变)。例如,当前帧经时延对齐处理的左声道信号的能量表示为E cur_L,前一帧时延对齐后的左声道信号的能量表示为E pre_L,那么可根据E pre_L和E cur_L之间的差值的绝对值来进行瞬态检测,得到当前帧经时延对齐处理的左声道信号的瞬态检测结果。同理,可以用同样的方法对当前帧经时延对齐处理的右声道信号进行瞬态检测。时域分析也可以包括除瞬态检测之外的其他传统方式的时域分析,例如可包括频带扩展预处理等。
可以理解,步骤903可在步骤902之后,在对当前帧的主要声道信号编码和次要声道信号编码之前的任意位置执行。
904、根据当前帧经时延对齐处理的左右声道信号进行当前帧的声道组合方案判决以确定当前帧的声道组合方案。
本实施例中举例两种可能的声道组合方案,以下描述中分别称为相关性信号声道组合方案和非相关性信号声道组合方案。本实施例中,相关性信号声道组合方案对应了当前帧(时延对齐后的)左右声道信号为类正相信号的情况,而非相关性信号声道组合方案对应了当前帧(时延对齐后的)左右声道信号为类反相信号的情况。当然,除了用"相关性信号声道组合方案"和"非相关性信号声道组合方案"来表征这两种可能的声道组合方案之外,在实际应用中也可用其他的名称命名这两种不同的声道组合方案。
本实施例一些方案中,声道组合方案判决可分为声道组合方案初始判决和声道组合方案修正判决。可以理解,通过进行当前帧的声道组合方案判决,进而确定所述当前帧的声道组合方案。其中,确定当前帧的声道组合方案的一些举例实施方式,可参考上述实施例的相关描述,此处不再赘述。
905、根据当前帧经时延对齐处理的左右声道信号和当前帧的声道组合方案标识,计算当前帧相关性信号声道组合方案对应的声道组合比例因子并编码,得到当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值及其编码索引。
具体例如,首先根据当前帧经时延对齐处理的左右声道信号计算当前帧的左右声道信号的帧能量。
其中,当前帧左声道信号的帧能量rms_L满足:
rms_L=sqrt((1/N)*Σ x′ L(n)*x′ L(n)),其中求和遍历n=0,1,…,N-1。
其中,当前帧右声道信号的帧能量rms_R满足:
rms_R=sqrt((1/N)*Σ x′ R(n)*x′ R(n)),其中求和遍历n=0,1,…,N-1。
其中,x′ L(n)表示当前帧经时延对齐处理的左声道信号。
其中,x′ R(n)表示当前帧经时延对齐处理的右声道信号。
然后,根据当前帧左声道的帧能量和右声道的帧能量,计算当前帧相关性信号声道组合方案对应的声道组合比例因子。其中,计算得到的当前帧相关性信号声道组合方案对应的声道组合比例因子ratio_init满足:
ratio_init=rms_L/(rms_L+rms_R)。
然后,对计算得到的当前帧相关性信号声道组合方案对应的声道组合比例因子ratio_init进行量化编码,得到对应的编码索引ratio_idx_init,及量化编码后的当前帧相关性信号声道组合方案对应的声道组合比例因子ratio_init qua
ratio_init qua=ratio_tabl[ratio_idx_init]
其中,ratio_tabl为标量量化的码书。其中,量化编码可以采用传统的任何一种标量量化方法,例如均匀标量量化,也可以是非均匀标量量化,编码比特数例如为5比特,这里对标量量化的具体方法不再赘述。
量化编码后的当前帧相关性信号声道组合方案对应的声道组合比例因子ratio_init qua即为得到的当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值,编码索引ratio_idx_init即为当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值对应的编码索引。
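对声道组合比例因子做 5 比特标量量化的过程可示意如下(示例中的均匀码书仅为演示假设,实际码书由编解码器约定;非均匀码书只需替换 ratio_tabl 即可):

```python
# 5 比特(32 码字)均匀标量量化码书, 定义在 [0, 1] 上(演示假设)
ratio_tabl = [i / 31.0 for i in range(32)]

def quantize_ratio(ratio):
    """返回最近码字的编码索引 ratio_idx_init 及量化后的比例因子。"""
    ratio_idx_init = min(range(len(ratio_tabl)),
                         key=lambda i: abs(ratio_tabl[i] - ratio))
    ratio_init_qua = ratio_tabl[ratio_idx_init]
    return ratio_idx_init, ratio_init_qua
```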
另外,还可根据当前帧的声道组合方案标识tdm_SM_flag的值,对当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值对应的编码索引进行修正。
例如,量化编码为5比特的标量量化,则当tdm_SM_flag=1时,将当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值对应的编码索引ratio_idx_init修正为某一预先设定值(例如15或其他取值);并且,可将当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值修正为ratio_init qua=ratio_tabl[15]。
值得注意的是,除了上述计算方法,还可根据时域立体声编码传统技术中任何一种计算声道组合方案对应的声道组合比例因子的方法,计算当前帧相关性信号声道组合方案对应的声道组合比例因子。也可直接将当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值设置为固定值(例如0.5或其他值)。
906、可根据声道组合比例因子修正标识来判决是否需对声道组合比例因子进行修正。
若是,则修正当前帧相关性信号声道组合方案对应的声道组合比例因子及其编码索引,得到当前帧相关性信号声道组合方案对应的声道组合比例因子的修正值及其编码索引。
其中,当前帧的声道组合比例因子修正标识记作tdm_SM_modi_flag。例如声道组合比例因子修正标识取值为0,表示无需进行声道组合比例因子的修正,声道组合比例因子修正标识取值为1,表示需进行声道组合比例因子的修正。当然声道组合比例因子修正标识也可选用其它不同的取值来表示是否需进行声道组合比例因子的修正。
例如,根据声道组合比例因子修正标识判决是否需对声道组合比例因子进行修正具体可包括:例如若声道组合比例因子修正标识tdm_SM_modi_flag=1,则判决需对声道组合比例因子进行修正。又例如若声道组合比例因子修正标识tdm_SM_modi_flag=0,则判决无需对声道组合比例因子进行修正。
其中,修正当前帧相关性信号声道组合方案对应的声道组合比例因子及其编码索引具体可以包括:
例如当前帧相关性信号声道组合方案对应的声道组合比例因子的修正值对应的编码索引满足:ratio_idx_mod=0.5*(tdm_last_ratio_idx+16),其中,tdm_last_ratio_idx为上一帧相关性信号声道组合方案对应的声道组合比例因子的编码索引。
那么,当前帧相关性信号声道组合方案对应的声道组合比例因子的修正值ratio_mod qua满足:ratio_mod qua=ratio_tabl[ratio_idx_mod]。
907、根据当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值及其编码索引、当前帧相关性信号声道组合方案对应的声道组合比例因子的修正值及其编码索引、以及声道组合比例因子修正标识,确定当前帧相关性信号声道组合方案对应的声道组合比例因子ratio和编码索引ratio_idx。
具体例如,确定的相关性信号声道组合方案对应的声道组合比例因子ratio满足:
Figure PCTCN2018100060-appb-000259
其中,上述ratio_init qua表示当前帧的相关性信号声道组合方案对应的声道组合比例因子的初始值,上述ratio_mod qua表示当前帧的相关性信号声道组合方案对应的声道组合比例因子的修正值,上述tdm_SM_modi_flag表示当前帧的声道组合比例因子修正标识。
其中,确定的相关性信号声道组合方案对应的声道组合比例因子对应的编码索引ratio_idx满足:
Figure PCTCN2018100060-appb-000260
其中,ratio_idx_init表示当前帧相关性信号声道组合方案对应的声道组合比例因子的初始值对应的编码索引,ratio_idx_mod表示当前帧相关性信号声道组合方案对应的声道组合比例因子的修正值对应的编码索引。
908、判断当前帧的声道组合方案标识是否对应非相关性信号声道组合方案,若是则计算当前帧非相关性信号声道组合方案对应的声道组合比例因子并编码,得到非相关性信号声道组合方案对应的声道组合比例因子和编码索引。
首先,可判断是否需要对计算当前帧非相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存进行重置。
例如若当前帧的声道组合方案标识tdm_SM_flag等于1(例如tdm_SM_flag等于1表示当前帧的声道组合方案标识对应非相关性信号声道组合方案),而前一帧的声道组合方案标识tdm_last_SM_flag等于0(例如tdm_last_SM_flag等于0表示当前帧的声道组合方案标识对应相关性信号声道组合方案),则表示需要对计算当前帧非相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存进行重置。
值得注意的是,判断是否需要对计算当前帧非相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存进行重置,也可以通过在声道组合方案初始判决和声道组合方案修正判决的过程中确定历史缓存重置标识tdm_SM_reset_flag,然后,通过判断历史缓存重置标识的取值来实现。例如tdm_SM_reset_flag为1,表示当前帧的声道组合方案标识对应了非相关性信号声道组合方案而前一帧的声道组合方案标识对应了相关性信号声道组合方案。例如历史缓存重置标识tdm_SM_reset_flag等于1,表示需要对计算当前帧非相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存进行重置。具体的重置方法有很多种,可以是将计算当前帧非相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存中的所有参数均按照预先设定的初始值进行重置;或者也可以是将计算当前帧非相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存中的部分参数均按照预先设定的初始值进行重置;或者还可将计算当前帧非相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存中的部分参数均按照预先设定的初始值进行重置,而另一部分参数按照计算相关性信号声道组合方案对应的声道组合比例因子用到的历史缓存中对应的参数值进行重置。
接下来,进一步判断当前帧的声道组合方案标识tdm_SM_flag是否对应非相关性信号声道组合方案。其中,非相关性信号声道组合方案是一种更加适合于对类反相立体声信号进行时域下混的声道组合方案。其中,在本实施例中,在当前帧的声道组合方案标识tdm_SM_flag=1时,表征当前帧的声道组合方案标识对应了非相关性信号声道组合方案;在当前帧的声道组合方案标识tdm_SM_flag=0时,表征当前帧的声道组合方案标识对应了相关性信号声道组合方案。
判断当前帧的声道组合方案标识是否对应非相关性信号声道组合方案具体可包括:
判断当前帧的声道组合方案标识的值是否为1。若当前帧的声道组合方案标识tdm_SM_flag=1,表示当前帧的声道组合方案标识对应非相关性信号声道组合方案。在这种情况下,可计算当前帧非相关性信号声道组合方案对应的声道组合比例因子并编码。
参见图9-B,计算当前帧非相关性信号声道组合方案对应的声道组合比例因子并编码例如可包括如下的步骤9081-9085。
9081、对当前帧经时延对齐处理的左右声道信号进行信号能量分析。
分别得到当前帧左声道信号的帧能量、当前帧右声道信号的帧能量、当前帧左声道的长时平滑帧能量、当前帧右声道的长时平滑帧能量、当前帧左声道的帧间能量差异和当前帧右声道的帧间能量差异。
例如当前帧左声道信号的帧能量rms_L满足:
rms_L=sqrt((1/N)*Σ x′ L(n)*x′ L(n)),其中求和遍历n=0,1,…,N-1。
其中,当前帧右声道信号的帧能量rms_R满足:
rms_R=sqrt((1/N)*Σ x′ R(n)*x′ R(n)),其中求和遍历n=0,1,…,N-1。
其中,x′ L(n)表示当前帧经时延对齐处理的左声道信号。
其中,x′ R(n)表示当前帧经时延对齐处理的右声道信号。
例如当前帧左声道的长时平滑帧能量tdm_lt_rms_L_SM cur满足:
tdm_lt_rms_L_SM cur=(1-A)*tdm_lt_rms_L_SM pre+A*rms_L
其中,tdm_lt_rms_L_SM pre表示前一帧左声道的长时平滑帧能量,A表示左声道长时平滑帧能量的更新因子,A例如可以取0到1之间的实数,A例如可等于0.4。
例如当前帧右声道的长时平滑帧能量tdm_lt_rms_R_SM cur满足:
tdm_lt_rms_R_SM cur=(1-B)*tdm_lt_rms_R_SM pre+B*rms_R
其中,tdm_lt_rms_R_SM pre表示前一帧右声道的长时平滑帧能量,B表示右声道长时平滑帧能量的更新因子,B例如可以取0到1之间的实数,B例如可以和左声道长时平滑帧能量的更新因子取相同或不同的数值,B例如也可等于0.4。
例如当前帧左声道的帧间能量差异ener_L_dt满足:
ener_L_dt=tdm_lt_rms_L_SM cur-tdm_lt_rms_L_SM pre
例如当前帧右声道的帧间能量差异ener_R_dt满足:
ener_R_dt=tdm_lt_rms_R_SM cur-tdm_lt_rms_R_SM pre
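步骤9081中各能量量之间的关系可用如下 Python 片段示意(帧能量按均方根形式计算,更新因子 A=B=0.4 取自上文示例;这些具体取值与形式仅为示意):

```python
import math

def energy_analysis(xl, xr, prev_lt_L, prev_lt_R, A=0.4, B=0.4):
    """信号能量分析: 帧能量、长时平滑帧能量与帧间能量差异。

    xl, xr: 当前帧经时延对齐处理的左右声道信号
    prev_lt_L, prev_lt_R: 前一帧左右声道的长时平滑帧能量
    """
    N = len(xl)
    rms_L = math.sqrt(sum(v * v for v in xl) / N)   # 左声道帧能量(假设按 RMS 计算)
    rms_R = math.sqrt(sum(v * v for v in xr) / N)   # 右声道帧能量
    lt_L = (1.0 - A) * prev_lt_L + A * rms_L        # 左声道长时平滑帧能量
    lt_R = (1.0 - B) * prev_lt_R + B * rms_R        # 右声道长时平滑帧能量
    ener_L_dt = lt_L - prev_lt_L                    # 左声道帧间能量差异
    ener_R_dt = lt_R - prev_lt_R                    # 右声道帧间能量差异
    return rms_L, rms_R, lt_L, lt_R, ener_L_dt, ener_R_dt
```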
9082、根据当前帧经时延对齐处理的左右声道信号确定当前帧的参考声道信号。参考声道信号也可被称作单声道信号,若将参考声道信号称作单声道信号,则后续所有与参考声道相关的描述和参数命名, 则可以统一将参考声道信号替换为单声道信号。
例如参考声道信号mono_i(n)满足:
mono_i(n)=(x′ L(n)+x′ R(n))/2。
其中,x′ L(n)为当前帧经时延对齐处理的左声道信号,其中,x′ R(n)为当前帧经时延对齐处理的右声道信号。
9083、分别计算当前帧经时延对齐处理的左右声道信号与参考声道信号之间的幅度相关性参数。
例如,当前帧经时延对齐处理的左声道信号与参考声道信号之间的幅度相关性参数corr_LM例如满足:
Figure PCTCN2018100060-appb-000264
例如当前帧经时延对齐处理的右声道信号与参考声道信号之间的幅度相关性参数corr_RM例如满足:
Figure PCTCN2018100060-appb-000265
其中,x′ L(n)表示当前帧经时延对齐处理的左声道信号。其中,x′ R(n)表示当前帧经时延对齐处理的右声道信号。mono_i(n)表示当前帧的参考声道信号。|·|表示取绝对值。
9084、根据当前帧经时延对齐处理的左声道信号与参考声道信号之间的幅度相关性参数及当前帧经时延对齐处理的右声道信号与参考声道信号之间的幅度相关性参数,计算当前帧左右声道之间的幅度相关性差异参数diff_lt_corr。
可以理解,步骤9081可在步骤9082、9083之前执行,或者也可以在步骤9082、9083之后且在步骤9084之前执行。
参见图9-C,例如,计算当前帧左右声道之间的幅度相关性差异参数diff_lt_corr具体可包括如下步骤90841-90842。
90841、根据当前帧经时延对齐处理的左声道信号与参考声道信号之间的幅度相关性参数,以及当前帧经时延对齐处理的右声道信号与参考声道信号之间的幅度相关性参数,计算当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数,及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数。
例如一种计算当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数,可包括:当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数tdm_lt_corr_LM_SM满足:
tdm_lt_corr_LM_SM cur=α*tdm_lt_corr_LM_SM pre+(1-α)corr_LM。
其中,tdm_lt_corr_LM_SM cur表示当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数,tdm_lt_corr_LM_SM pre表示前一帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数,α表示左声道平滑因子,其中,α可以是预先设定的0到1之间的实数,如0.2、0.5、0.8。或者,α的取值也可以通过自适应计算得到。
例如当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数tdm_lt_corr_RM_SM满足:
tdm_lt_corr_RM_SM cur=β*tdm_lt_corr_RM_SM pre+(1-β)corr_RM
其中,tdm_lt_corr_RM_SM cur表示当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数,tdm_lt_corr_RM_SM pre表示前一帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数,β表示右声道平滑因子,其中,β可以是预先设定的0到1之间的实数,β可以和左声道平滑因子α取值相同或不同,例如β可等于0.2、0.5、0.8。或者β的取值也可以通过自适应计算得到。
另一种计算当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数的方法,可包括:
首先,对当前帧经时延对齐处理的左声道信号与参考声道信号之间的幅度相关性参数corr_LM进行修正,得到修正后的当前帧左声道信号与参考声道信号之间的幅度相关性参数corr_LM_mod;对当前帧经时延对齐处理的右声道信号与参考声道信号之间的幅度相关性参数corr_RM进行修正,得到修正后的当前帧右声道信号与参考声道信号之间的幅度相关性参数corr_RM_mod。
然后,根据修正后的当前帧左声道信号与参考声道信号之间的幅度相关性参数corr_LM_mod和修正后的当前帧右声道信号与参考声道信号之间的幅度相关性参数corr_RM_mod,以及前一帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数tdm_lt_corr_LM_SM pre和前一帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数tdm_lt_corr_RM_SM pre,确定当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数diff_lt_corr_LM_tmp及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数diff_lt_corr_RM_tmp。
接下来,根据当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数diff_lt_corr_LM_tmp及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数diff_lt_corr_RM_tmp,获得当前帧的左右声道之间的幅度相关性差异参数的初始值diff_lt_corr_SM;并根据获得的当前帧的左右声道之间的幅度相关性差异参数的初始值diff_lt_corr_SM以及前一帧的左右声道之间的幅度相关性差异参数tdm_last_diff_lt_corr_SM,确定当前帧的左右声道之间的幅度相关性差异的帧间变化参数d_lt_corr。
最后,根据信号能量分析而获得的当前帧左声道信号的帧能量、当前帧右声道信号的帧能量、当前帧左声道的长时平滑帧能量、当前帧右声道的长时平滑帧能量、当前帧左声道的帧间能量差异、当前帧右声道的帧间能量差异以及当前帧的左右声道之间的幅度相关性差异的帧间变化参数,自适应选择不同的左声道平滑因子、右声道平滑因子,并计算当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数tdm_lt_corr_LM_SM以及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数tdm_lt_corr_RM_SM。
除以上举例的两种方法,还可以有很多种计算当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数的方法,本申请对此不作限定。
90842、根据当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数及当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数,计算当前帧左右声道之间的幅度相关性差异参数diff_lt_corr。
例如当前帧左右声道之间的幅度相关性差异参数diff_lt_corr满足:
diff_lt_corr=tdm_lt_corr_LM_SM-tdm_lt_corr_RM_SM
其中,tdm_lt_corr_LM_SM表示当前帧长时平滑后的左声道信号与参考声道信号之间的幅度相关性参数,tdm_lt_corr_RM_SM表示当前帧长时平滑后的右声道信号与参考声道信号之间的幅度相关性参数。
9085、将当前帧左右声道之间的幅度相关性差异参数diff_lt_corr转换为声道组合比例因子并进行编码量化,以确定当前帧非相关性信号声道组合方案对应的声道组合比例因子及其编码索引。
参见图9-D,将当前帧左右声道之间的幅度相关性差异参数转换为声道组合比例因子的一种可能方法具体可以包括步骤90851-90853。
90851、对左右声道之间的幅度相关性差异参数进行映射处理,使映射处理后的左右声道之间的幅度相关性差异参数的取值范围在[MAP_MIN,MAP_MAX]之间。
对左右声道之间的幅度相关性差异参数进行映射处理的一种方法可包括:
首先,对左右声道之间的幅度相关性差异参数进行限幅处理,例如经限幅处理后的左右声道之间的幅度相关性差异参数diff_lt_corr_limit满足:
当diff_lt_corr>RATIO_MAX时,diff_lt_corr_limit=RATIO_MAX;当RATIO_MIN≤diff_lt_corr≤RATIO_MAX时,diff_lt_corr_limit=diff_lt_corr;当diff_lt_corr<RATIO_MIN时,diff_lt_corr_limit=RATIO_MIN。
RATIO_MAX表示限幅后左右声道之间的幅度相关性差异参数的最大值,RATIO_MIN表示限幅后左右声道之间的幅度相关性差异参数的最小值。其中,RATIO_MAX例如为预先设定的经验值,RATIO_MAX例如为1.5、3.0或其他值。其中,RATIO_MIN例如为预先设定的经验值,RATIO_MIN例如为-1.5、-3.0或其他值。其中,RATIO_MAX>RATIO_MIN。
然后,对限幅处理后的左右声道之间的幅度相关性差异参数进行映射处理。映射处理后的左右声道之间的幅度相关性差异参数diff_lt_corr_map满足:
Figure PCTCN2018100060-appb-000267
其中,
Figure PCTCN2018100060-appb-000268
B 1=MAP_MAX-RATIO_MAX*A 1,或者B 1=MAP_HIGH-RATIO_HIGH*A 1
Figure PCTCN2018100060-appb-000269
B 2=MAP_LOW-RATIO_LOW*A 2,或者B 2=MAP_MIN-RATIO_MIN*A 2
Figure PCTCN2018100060-appb-000270
B 3=MAP_HIGH-RATIO_HIGH*A 3,或者B 3=MAP_LOW-RATIO_LOW*A 3
其中,MAP_MAX表示映射处理后的左右声道之间的幅度相关性差异参数取值的最大值,MAP_HIGH表示映射处理后的左右声道之间的幅度相关性差异参数取值的高门限,MAP_LOW表示映射处理后的左右声道之间的幅度相关性差异参数取值的低门限。MAP_MIN表示映射处理后的左右声道之间的幅度相关性差异参数取值的最小值。
其中,MAP_MAX>MAP_HIGH>MAP_LOW>MAP_MIN。
例如在本申请的一些实施例中,MAP_MAX可为2.0,MAP_HIGH可为1.2,MAP_LOW可为0.8,MAP_MIN可为0.0。当然实际应用中不限于这样的取值举例。
RATIO_MAX表示限幅后左右声道之间的幅度相关性差异参数的最大值,RATIO_HIGH表示限幅后左右声道之间的幅度相关性差异参数取值的高门限,RATIO_LOW表示限幅后左右声道之间的幅度相关性差异参数取值的低门限,RATIO_MIN表示限幅后左右声道之间的幅度相关性差异参数的最小值。
其中,RATIO_MAX>RATIO_HIGH>RATIO_LOW>RATIO_MIN。
例如在本申请一些实施例中,RATIO_MAX为1.5,RATIO_HIGH为0.75,RATIO_LOW为-0.75,RATIO_MIN为-1.5。当然实际应用中不限于这样的取值举例。
本申请的一些实施例的另一种方法是:映射处理后的左右声道之间的幅度相关性差异参数diff_lt_corr_map满足:
Figure PCTCN2018100060-appb-000271
其中,diff_lt_corr_limit表示经过限幅处理后的左右声道之间的幅度相关性差异参数。
其中,
当diff_lt_corr>RATIO_MAX时,diff_lt_corr_limit=RATIO_MAX;当-RATIO_MAX≤diff_lt_corr≤RATIO_MAX时,diff_lt_corr_limit=diff_lt_corr;当diff_lt_corr<-RATIO_MAX时,diff_lt_corr_limit=-RATIO_MAX。
其中,RATIO_MAX表示左右声道之间的幅度相关性差异参数的最大幅度,-RATIO_MAX表示左右声道之间的幅度相关性差异参数的最小幅度。其中,RATIO_MAX可以为预先设定的经验值,RATIO_MAX例如可为1.5、3.0或其他大于0的实数。
90852、将映射处理后的左右声道之间的幅度相关性差异参数转换为声道组合比例因子。
声道组合比例因子ratio_SM满足:
ratio_SM=(1-cos((π/2)*diff_lt_corr_map))/2。
其中,cos(·)表示余弦运算。
除了上述方法之外,还可以通过其他方法将左右声道之间的幅度相关性差异参数转换为声道组合比例因子,例如:
根据信号能量分析而获得的当前帧左声道的长时平滑帧能量、当前帧右声道的长时平滑帧能量、当前帧左声道的帧间能量差异、编码器历史缓存中的缓存前一帧的编码参数(例如主要声道信号的帧间相关性参数、次要声道信号的帧间相关性参数)、当前帧以及前一帧的声道组合方案标识、当前帧以及前一帧的非相关性信号声道组合方案对应的声道组合比例因子,确定是否对非相关性信号声道组合方案对应的声道组合比例因子进行更新。
若需要对非相关性信号声道组合方案对应的声道组合比例因子进行更新,则使用上述举例方法将左右声道之间的幅度相关性差异参数转换为声道组合比例因子;否则,直接将前一帧的非相关性信号声道组合方案对应的声道组合比例因子及其编码索引,作为当前帧的非相关性信号声道组合方案对应的声道组合比例因子及其编码索引。
90853、对转换后得到的声道组合比例因子进行量化编码,确定当前帧非相关性信号声道组合方案对应的声道组合比例因子。
具体例如,对转换后得到的声道组合比例因子进行量化编码,得到当前帧非相关性信号声道组合方案对应的初始编码索引ratio_idx_init_SM,及量化编码后的当前帧非相关性信号声道组合方案对应的声道组合比例因子的初始值ratio_init_SM qua
其中,ratio_init_SM qua=ratio_tabl_SM[ratio_idx_init_SM]。
其中,ratio_tabl_SM表示非相关性信号声道组合方案对应的声道组合比例因子标量量化的码书。量化编码可以采用传统技术中的任何一种标量量化方法,如均匀标量量化,也可以是非均匀标量量化,编码比特数可以是5比特,这里对具体方法不再赘述。非相关性信号声道组合方案对应的声道组合比例因子标量量化的码书可以采用和相关性信号声道组合方案对应的声道组合比例因子标量量化的码书相同或不同的码书。其中,当码书相同时,只需要存储一个用于声道组合比例因子标量量化的码书即可。此时,量化编码后的当前帧非相关性信号声道组合方案对应的声道组合比例因子的初始值ratio_init_SM qua满足:
其中,ratio_init_SM qua=ratio_tabl[ratio_idx_init_SM]。
例如,一种方法是将量化编码后的当前帧非相关性信号声道组合方案对应的声道组合比例因子的初始值直接作为当前帧非相关性信号声道组合方案对应的声道组合比例因子,并将当前帧非相关性信号声道组合方案对应的声道组合比例因子的初始编码索引直接作为当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引,即:
其中,当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引ratio_idx_SM满足:ratio_idx_SM=ratio_idx_init_SM。
其中,当前帧非相关性信号声道组合方案对应的声道组合比例因子满足:
ratio_SM=ratio_tabl[ratio_idx_SM]
另一种方法可以是:根据前一帧的非相关性信号声道组合方案对应的声道组合比例因子的编码索引或者前一帧的非相关性信号声道组合方案对应的声道组合比例因子,对量化编码后的当前帧非相关性信号声道组合方案对应的声道组合比例因子的初始值以及当前帧非相关性信号声道组合方案对应的初始编码索引进行修正,将修正后的当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引作为当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引,将修正后的非相关性信号声道组合方案对应的声道组合比例因子作为当前帧非相关性信号声道组合方案对应的声道组合比例因子。
其中,当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引ratio_idx_SM满足:ratio_idx_SM=φ*ratio_idx_init_SM+(1-φ)*tdm_last_ratio_idx_SM。
其中,ratio_idx_init_SM表示当前帧非相关性信号声道组合方案对应的初始编码索引,tdm_last_ratio_idx_SM为前一帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引,
Figure PCTCN2018100060-appb-000274
为非相关性信号声道组合方案对应的声道组合比例因子的修正因子。
Figure PCTCN2018100060-appb-000275
的取值可为经验值,例如
Figure PCTCN2018100060-appb-000276
可等于0.8。
则当前帧非相关性信号声道组合方案对应的声道组合比例因子满足:
ratio_SM=ratio_tabl[ratio_idx_SM]
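利用前一帧编码索引对当前帧初始编码索引加权修正的过程可示意如下(修正因子 φ 取上文经验值 0.8;加权结果四舍五入取整以及示例中的均匀码书均为演示假设):

```python
ratio_tabl = [i / 31.0 for i in range(32)]   # 演示用 5 比特均匀码书(假设)

def modify_ratio_idx_sm(ratio_idx_init_SM, tdm_last_ratio_idx_SM, phi=0.8):
    """ratio_idx_SM = phi * init + (1 - phi) * last, 取整后查码书。

    ratio_idx_init_SM: 当前帧非相关性信号声道组合方案对应的初始编码索引
    tdm_last_ratio_idx_SM: 前一帧对应的声道组合比例因子编码索引
    """
    ratio_idx_SM = int(round(phi * ratio_idx_init_SM
                             + (1.0 - phi) * tdm_last_ratio_idx_SM))
    ratio_SM = ratio_tabl[ratio_idx_SM]
    return ratio_idx_SM, ratio_SM
```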
还有一种方法是:将未量化的非相关性信号声道组合方案对应的声道组合比例因子,作为当前帧非相关性信号声道组合方案对应的声道组合比例因子,即当前帧非相关性信号声道组合方案对应的声道组合比例因子ratio_SM满足:
Figure PCTCN2018100060-appb-000277
此外,第四种方法是:根据前一帧的非相关性信号声道组合方案对应的声道组合比例因子,对未量化的当前帧非相关性信号声道组合方案对应的声道组合比例因子进行修正,将修正后的非相关性信号声道组合方案对应的声道组合比例因子,作为当前帧非相关性信号声道组合方案对应的声道组合比例因子,并对其进行量化编码,得到当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引。
除以上方法外,还可以有很多种方法来将左右声道之间的幅度相关性差异参数转换为声道组合比例因子并进行编码量化,同样也有很多不同的方法来确定当前帧非相关性信号声道组合方案对应的声道组合比例因子及其编码索引,本申请对此不作限定。
909、根据前一帧的声道组合方案标识和当前帧的声道组合方案标识进行编码模式判决,以确定当前帧的编码模式。
其中,当前帧的声道组合方案标识记作tdm_SM_flag,前一帧的声道组合方案标识记作tdm_last_SM_flag,前一帧的声道组合方案标识和当前帧的声道组合方案标识的联合标识可以表示为(tdm_last_SM_flag,tdm_SM_flag),可根据此联合标识来进行编码模式判决,具体例如:
假设相关性信号声道组合方案用0表示,非相关性信号声道组合方案用1表示,则前一帧和当前帧的声道组合方案标识的联合标识有以下四种情况(00),(11),(01),(10),则当前帧的编码模式分别判决为:相关性信号编码模式,非相关性信号编码模式,相关性信号到非相关性信号编码模式,非相关性信号到相关性信号编码模式。例如:当前帧的声道组合方案标识的联合标识为(00),则表示当前帧的编码模式为相关性信号编码模式;当前帧的声道组合方案标识的联合标识为(11)则表示当前帧的编码模式为非相关性信号编码模式;当前帧的声道组合方案标识的联合标识为(01)则表示当前帧的编码模式为相关性信号到非相关性信号编码模式;当前帧的声道组合方案标识的联合标识为(10)则表示当前帧的编码模式为非相关性信号到相关性信号编码模式。
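步骤909中由联合标识到编码模式的判决可直接写成一个查表函数(标识取值约定与正文一致,0 表示相关性信号声道组合方案,1 表示非相关性信号声道组合方案):

```python
def decide_coder_type(tdm_last_SM_flag, tdm_SM_flag):
    """由(前一帧, 当前帧)声道组合方案标识的联合标识判决编码模式。"""
    modes = {
        (0, 0): '相关性信号编码模式',
        (1, 1): '非相关性信号编码模式',
        (0, 1): '相关性信号到非相关性信号编码模式',
        (1, 0): '非相关性信号到相关性信号编码模式',
    }
    return modes[(tdm_last_SM_flag, tdm_SM_flag)]
```

解码端的解码模式判决与之一一对应,只需把模式名称换成相应的解码模式即可。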
910、在获得当前帧的编码模式stereo_tdm_coder_type之后,编码装置根据当前帧的编码模式采用对应的时域下混处理方法对当前帧的左右声道信号进行时域下混处理,以得到当前帧的主要声道信号和次要声道信号。
其中,所述当前帧的编码模式为多种编码模式中的其中一种。例如所述多种编码模式可包括:相关性信号到非相关性信号编码模式、非相关性信号到相关性信号编码模式、相关性信号编码模式和非相关性信号编码模式等。其中,不同编码模式进行时域下混处理的实施方式,可参考上述实施例中的相关举例描述,此处不再赘述。
911、编码装置对主要声道信号和次要声道信号分别进行编码,得到主要声道编码信号和次要声道编码信号。
具体地,可以先根据前一帧的主要声道信号和/或次要声道信号编码中得到的参数信息以及主要声道信号编码和次要声道信号编码的总比特数,对主要声道信号编码和次要声道信号编码进行比特分配。然后根据比特分配的结果,分别对主要声道信号和次要声道信号进行编码,得到主要声道编码的编码索引、次要声道编码的编码索引。主要声道编码和次要声道编码,可以采用任何一种单声道音频编码技术,这里不再赘述。
912、编码装置根据声道组合方案标识选择相应的声道组合比例因子编码索引写入码流,并将主要声道编码信号、次要声道编码信号以及当前帧的声道组合方案标识写入码流。
具体例如,若当前帧的声道组合方案标识tdm_SM_flag对应了相关性信号声道组合方案,则将 当前帧相关性信号声道组合方案对应的声道组合比例因子的编码索引ratio_idx写入码流;若当前帧的声道组合方案标识tdm_SM_flag对应了非相关性信号声道组合方案,则将当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引ratio_idx_SM写入码流。例如,tdm_SM_flag=0,则将当前帧相关性信号声道组合方案对应的声道组合比例因子的编码索引ratio_idx写入码流;tdm_SM_flag=1,则将当前帧非相关性信号声道组合方案对应的声道组合比例因子的编码索引ratio_idx_SM写入码流。
并且,将主要声道编码信号、次要声道编码信号以及当前帧的声道组合方案标识写入比特流。可以理解,写码流操作无先后顺序。
相应的,下面针对时域立体声的解码场景进行举例说明。
参见图10,下面还提供一种音频解码方法,音频解码方法的相关步骤可由解码装置来具体实施,具体可包括:
1001、根据码流进行解码以得到当前帧的主次声道解码信号。
1002、根据码流进行解码以得到当前帧的时域立体声参数。
其中,当前帧的时域立体声参数包括当前帧的声道组合比例因子(码流包含的是当前帧的声道组合比例因子的编码索引,基于当前帧的声道组合比例因子的编码索引进行解码可以得到当前帧的声道组合比例因子),还可包括当前帧的声道间时间差(例如,码流包含的是当前帧的声道间时间差的编码索引,基于当前帧的声道间时间差的编码索引进行解码可以得到当前帧的声道间时间差;或者码流包含的是当前帧的声道间时间差的绝对值得编码索引,基于当前帧的声道间时间差的绝对值的编码索引进行解码可以得到当前帧的声道间时间差的绝对值)等。
1003、基于码流得到所述码流中包含的当前帧的声道组合方案标识,确定所述当前帧的声道组合方案。
1004、基于所述当前帧的声道组合方案和前一帧的声道组合方案确定当前帧的解码模式。
其中,基于所述当前帧的声道组合方案和前一帧的声道组合方案确定当前帧的解码模式,可参考步骤909中确定当前帧的编码模式的方法,根据所述当前帧的声道组合方案和前一帧的声道组合方案确定当前帧的解码模式。其中,所述当前帧的解码模式为多种解码模式中的其中一种。例如所述多种解码模式可包括:相关性信号到非相关性信号解码模式、非相关性信号到相关性信号解码模式、相关性信号解码模式和非相关性信号解码模式等。编码模式和解码模式是一一对应的。
例如,当前帧的声道组合方案标识的联合标识为(00)则表示当前帧的解码模式也为相关性信号解码模式;当前帧的声道组合方案标识的联合标识为(11)则表示当前帧的解码模式为非相关性信号解码模式;当前帧的声道组合方案标识的联合标识为(01)则表示当前帧的解码模式为相关性信号到非相关性信号解码模式;当前帧的声道组合方案标识的联合标识为(10)则表示当前帧的解码模式为非相关性信号到相关性信号解码模式。
可以理解,步骤1001、步骤1002、步骤1003-1004的执行没有必然的先后顺序。
1005、采用确定的当前帧的解码模式对应的时域上混处理方式,对所述当前帧的主次声道解码信号进行时域上混处理以得到所述当前帧的左右声道重建信号。
其中,不同解码模式进行时域上混处理的相关实施方式,可参考上述实施例中的相关举例描述,此处不再赘述。
其中,时域上混处理所使用的上混矩阵基于得到的当前帧的声道组合比例因子构建。
其中,当前帧的左右声道重建信号可作为所述当前帧的左右声道解码信号。
或者,进一步的,还可基于当前帧的声道间时间差对所述当前帧的左右声道重建信号进行时延调整,得到当前帧经时延调整的左右声道重建信号,当前帧经时延调整的左右声道重建信号可作为当前帧的左右声道解码信号。或者,进一步的,还可对当前帧经时延调整的左右声道重建信号进行时域后处理,其中,当前帧经时域后处理的左右声道重建信号可作为所述当前帧的左右声道解码信号。
上述详细阐述了本申请实施例的方法,下面提供了本申请实施例的装置。
参见图11-A,本申请实施例还提供一种装置1100,可包括:
相互耦合的处理器1110和存储器1120。所述处理器1110可用于执行本申请实施例提供的任意一种方法的部分或全部步骤。
存储器1120包括但不限于是随机存储记忆体(英文:Random Access Memory,简称:RAM)、只读存储器(英文:Read-Only Memory,简称:ROM)、可擦除可编程只读存储器(英文:Erasable Programmable Read Only Memory,简称:EPROM)、或便携式只读存储器(英文:Compact Disc Read-Only Memory,简称:CD-ROM),该存储器1120用于存储相关指令及数据。
当然,装置1100还可包括用于接收和发送数据的收发器1130。
处理器1110可以是一个或多个中央处理器(英文:Central Processing Unit,简称:CPU),在处理器1110是一个CPU的情况下,该CPU可以是单核CPU,也可以是多核CPU。处理器1110具体可以是数字信号处理器。
在实现过程中,上述方法的各步骤可通过处理器1110中的硬件的集成逻辑电路或者软件形式的指令完成。上述处理器1110可以是通用处理器、数字信号处理器、专用集成电路、现成可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。处理器1110可以实现或者执行本发明实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本发明实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。
软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等等本领域成熟的存储介质之中。该存储介质位于存储器1120,例如处理器1110可读取存储器1120中的信息,结合其硬件完成上述方法的步骤。
进一步的,装置1100还可包括收发器1130,收发器1130例如可用于相关数据(例如指令或声道信号或码流)的收发。
举例来说,装置1100可执行上述图2至图9-D任意一附图所示实施例中对应的方法的部分或全部步骤。
具体例如,当装置1100执行上述编码的相关步骤时,装置1100可称为编码装置(或音频编码装置)。当装置1100执行上述解码的相关步骤时,装置1100可称为解码装置(或音频解码装置)。
参见图11-B,在装置1100为编码装置的情况下,装置1100例如还可进一步包括:麦克风1140和模数转换器1150等。
其中,麦克风1140例如可用于采样得到模拟音频信号。
模数转换器1150例如可用于将模拟音频信号转换为数字音频信号。
参见图11-C,在装置1100为解码装置的情况下,装置1100例如还可进一步包括:扬声器1160和数模转换器1170等。
数模转换器1170例如可用于将数字音频信号转换为模拟音频信号。
其中,扬声器1160例如可用于播放模拟音频信号。
此外,参见图12-A,本申请实施例提供一种装置1200,包括用于实施本申请实施例提供的任意一种方法的若干个功能单元。
例如,当装置1200执行图2所示实施例中对应的方法时,装置1200可包括:
第一确定单元1210,用于确定当前帧的声道组合方案,基于前一帧和当前帧的声道组合方案确定当前帧的编码模式。
编码单元1220,用于基于当前帧的编码模式所对应的时域下混处理对当前帧的左右声道信号进行时域下混处理,以得到当前帧的主次声道信号。
此外,参见图12-B,装置1200还可包括第二确定单元1230,用于确定当前帧的时域立体声参数。编码单元1220还可用于对当前帧的时域立体声参数进行编码。
又例如,参见图12-C,当装置1200执行图3所示实施例中对应的方法时,装置1200可包括:
第三确定单元1240,用于基于码流中的当前帧的声道组合方案标识确定当前帧的声道组合方案;根据前一帧的声道组合方案和所述当前帧的声道组合方案,确定所述当前帧的解码模式。
解码单元1250,用于基于码流解码得到当前帧的主次声道解码信号;基于当前帧的解码模式所对应的时域上混处理对当前帧的主次声道解码信号进行时域上混处理,以得到当前帧的左右声道重建信号。
这个装置执行其他方法时的情况以此类推。
An embodiment of this application provides a computer-readable storage medium. The computer-readable storage medium stores program code, and the program code includes instructions for performing some or all steps of any method provided in the embodiments of this application.
An embodiment of this application provides a computer program product. When the computer program product runs on a computer, the computer is enabled to perform some or all steps of any method provided in the embodiments of this application.
In the foregoing embodiments, the description of each embodiment has its own focus. For a part that is not described in detail in one embodiment, refer to the related descriptions in other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the described apparatus embodiments are merely examples. The unit division is merely logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings, direct couplings, or communication connections may be implemented through some interfaces; the indirect couplings or communication connections between apparatuses or units may be electrical or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or may be implemented in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and is sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, or an optical disc.

Claims (32)

  1. An audio encoding method, comprising:
    determining an encoding mode of a current frame;
    when it is determined that the encoding mode of the current frame is a non-correlated signal encoding mode, performing, by using a time-domain downmix processing manner corresponding to the non-correlated signal encoding mode, time-domain downmix processing on left and right channel signals of the current frame, to obtain primary and secondary channel signals of the current frame, wherein the time-domain downmix processing manner corresponding to the non-correlated signal encoding mode is a time-domain downmix processing manner corresponding to a non-correlated signal channel combination scheme, and the non-correlated signal channel combination scheme is a channel combination scheme corresponding to near out-of-phase signals; and
    encoding the obtained primary and secondary channel signals of the current frame.
  2. The method according to claim 1, wherein performing, by using the time-domain downmix processing manner corresponding to the non-correlated signal encoding mode, time-domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame comprises:
    performing time-domain downmix processing on the left and right channel signals of the current frame based on a channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame; or performing time-domain downmix processing on the left and right channel signals of the current frame based on channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  3. The method according to claim 2, wherein, when time-domain downmix processing is performed on the left and right channel signals of the current frame based on the channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame:
    [Y(n), X(n)]^T = M_22 * [X_L(n), X_R(n)]^T
    wherein X_L(n) is the left channel signal of the current frame, X_R(n) is the right channel signal of the current frame, Y(n) is the primary channel signal of the current frame obtained through time-domain downmix processing, and X(n) is the secondary channel signal of the current frame obtained through time-domain downmix processing; and
    wherein n is a sample index, and M_22 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  4. The method according to claim 2, wherein, when time-domain downmix processing is performed on the left and right channel signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame:
    [Y(n), X(n)]^T = M_12 * [X_L(n), X_R(n)]^T, for n in [0, N - delay_com)
    [Y(n), X(n)]^T = M_22 * [X_L(n), X_R(n)]^T, for n in [N - delay_com, N)
    wherein X_L(n) is the left channel signal of the current frame, X_R(n) is the right channel signal of the current frame, Y(n) is the primary channel signal of the current frame obtained through time-domain downmix processing, and X(n) is the secondary channel signal of the current frame obtained through time-domain downmix processing; and
    wherein n is a sample index, N is the frame length, and delay_com is the encoding delay compensation; M_22 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M_12 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  5. The method according to claim 2, wherein, when time-domain downmix processing is performed on the left and right channel signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame:
    [Y(n), X(n)]^T = M_12 * [X_L(n), X_R(n)]^T, for n in [0, N - delay_com - NOVA_1)
    [Y(n), X(n)]^T = fade_out(n) * M_12 * [X_L(n), X_R(n)]^T + fade_in(n) * M_22 * [X_L(n), X_R(n)]^T, for n in [N - delay_com - NOVA_1, N - delay_com)
    [Y(n), X(n)]^T = M_22 * [X_L(n), X_R(n)]^T, for n in [N - delay_com, N)
    wherein fade_in(n) is a fade-in factor, fade_in(n) = (n - (N - delay_com - NOVA_1)) / NOVA_1;
    wherein fade_out(n) is a fade-out factor, fade_out(n) = (N - delay_com - n) / NOVA_1;
    wherein NOVA_1 is a transition processing length;
    wherein n is a sample index, n = 0, 1, ..., N - 1;
    X_L(n) is the left channel signal of the current frame, X_R(n) is the right channel signal of the current frame, Y(n) is the primary channel signal of the current frame obtained through time-domain downmix processing, and X(n) is the secondary channel signal of the current frame obtained through time-domain downmix processing; and
    wherein delay_com is the encoding delay compensation; M_22 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M_12 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
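The segmented downmix of claims 4 and 5 switches from the previous frame's matrix M_12 to the current frame's matrix M_22 within the frame, optionally crossfading over NOVA_1 samples ending delay_com samples before the frame end. The sketch below assumes linear fade factors and concrete example matrices; because the downmix is linear, fading the blended matrix is equivalent to fading the two downmixed signals as the claim states.

```python
# Sketch of a segmented, crossfaded time-domain downmix; linear fade
# factors and the example matrices are illustrative assumptions.
import numpy as np

def segmented_downmix(xl, xr, m12, m22, delay_com, nova_1):
    """Downmix L/R into primary/secondary channels, transitioning from
    the previous frame's matrix m12 to the current frame's matrix m22
    over a crossfade of nova_1 samples ending delay_com samples before
    the end of the frame."""
    n_len = len(xl)
    start = n_len - delay_com - nova_1   # first sample of the crossfade
    y = np.empty(n_len)
    x = np.empty(n_len)
    for n in range(n_len):
        if n < start:
            m = m12                              # old scheme only
        elif n < n_len - delay_com:
            fade_in = (n - start) / nova_1       # ramps 0 -> 1
            fade_out = 1.0 - fade_in             # ramps 1 -> 0
            m = fade_out * m12 + fade_in * m22   # blended matrix
        else:
            m = m22                              # new scheme only
        y[n], x[n] = m @ np.array([xl[n], xr[n]])
    return y, x
```

With m12 the identity and m22 a channel swap, a constant-left input yields a primary channel that holds at 1, passes through 0.5 mid-crossfade, and settles at 0, which makes the transition region easy to verify.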
  6. The method according to any one of claims 3 to 5, wherein M_22 is any one of several candidate downmix matrices constructed from α_1 and α_2 (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1 = ratio_SM; α_2 = 1 - ratio_SM; and ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  7. The method according to any one of claims 3 to 6, wherein M_12 is any one of several candidate downmix matrices constructed from α_1_pre and α_2_pre (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1_pre = tdm_last_ratio_SM; α_2_pre = 1 - tdm_last_ratio_SM; and tdm_last_ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  8. The method according to any one of claims 3 to 7, wherein:
    [X_L(n), X_R(n)]^T = [x_L(n), x_R(n)]^T; or
    [X_L(n), X_R(n)]^T = [x_L_HP(n), x_R_HP(n)]^T; or
    [X_L(n), X_R(n)]^T = [x'_L(n), x'_R(n)]^T;
    wherein x_L(n) is the original left channel signal of the current frame, and x_R(n) is the original right channel signal of the current frame; x_L_HP(n) is the time-domain preprocessed left channel signal of the current frame, and x_R_HP(n) is the time-domain preprocessed right channel signal of the current frame; and x'_L(n) is the delay-aligned left channel signal of the current frame, and x'_R(n) is the delay-aligned right channel signal of the current frame.
  9. A stereo decoding method, comprising:
    decoding a bitstream to obtain primary and secondary channel decoded signals of a current frame;
    determining a decoding mode of the current frame; and
    when it is determined that the decoding mode of the current frame is a non-correlated signal decoding mode, performing, by using a time-domain upmix processing manner corresponding to the non-correlated signal decoding mode, time-domain upmix processing on the primary and secondary channel decoded signals of the current frame, to obtain left and right channel reconstructed signals of the current frame, wherein the time-domain upmix processing manner corresponding to the non-correlated signal decoding mode is a time-domain upmix processing manner corresponding to a non-correlated signal channel combination scheme, and the non-correlated signal channel combination scheme is a channel combination scheme corresponding to near out-of-phase signals.
  10. The method according to claim 9, wherein performing, by using the time-domain upmix processing manner corresponding to the non-correlated signal decoding mode, time-domain upmix processing on the primary and secondary channel decoded signals of the current frame to obtain the left and right channel reconstructed signals of the current frame comprises: performing time-domain upmix processing on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the left and right channel reconstructed signals of the current frame; or performing time-domain upmix processing on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstructed signals of the current frame.
  11. The method according to claim 10, wherein, when time-domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the left and right channel reconstructed signals of the current frame:
    [x̂_L(n), x̂_R(n)]^T = M̂_22 * [Ŷ(n), X̂(n)]^T
    wherein x̂_L(n) is the left channel reconstructed signal of the current frame, x̂_R(n) is the right channel reconstructed signal of the current frame, Ŷ(n) is the primary channel decoded signal of the current frame, and X̂(n) is the secondary channel decoded signal of the current frame; and
    wherein n is a sample index, and M̂_22 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  12. The method according to claim 10, wherein, when time-domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstructed signals of the current frame:
    [x̂_L(n), x̂_R(n)]^T = M̂_12 * [Ŷ(n), X̂(n)]^T, for n in [0, N - upmixing_delay)
    [x̂_L(n), x̂_R(n)]^T = M̂_22 * [Ŷ(n), X̂(n)]^T, for n in [N - upmixing_delay, N)
    wherein x̂_L(n) is the left channel reconstructed signal of the current frame, x̂_R(n) is the right channel reconstructed signal of the current frame, Ŷ(n) is the primary channel decoded signal of the current frame, and X̂(n) is the secondary channel decoded signal of the current frame;
    wherein n is a sample index, n = 0, 1, ..., N - 1;
    wherein upmixing_delay is the decoding delay compensation; and
    wherein M̂_22 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M̂_12 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  13. The method according to claim 10, wherein, when time-domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstructed signals of the current frame:
    [x̂_L(n), x̂_R(n)]^T = M̂_12 * [Ŷ(n), X̂(n)]^T, for n in [0, N - upmixing_delay - NOVA_1)
    [x̂_L(n), x̂_R(n)]^T = fade_out(n) * M̂_12 * [Ŷ(n), X̂(n)]^T + fade_in(n) * M̂_22 * [Ŷ(n), X̂(n)]^T, for n in [N - upmixing_delay - NOVA_1, N - upmixing_delay)
    [x̂_L(n), x̂_R(n)]^T = M̂_22 * [Ŷ(n), X̂(n)]^T, for n in [N - upmixing_delay, N)
    wherein x̂_L(n) is the left channel reconstructed signal of the current frame, x̂_R(n) is the right channel reconstructed signal of the current frame, Ŷ(n) is the primary channel decoded signal of the current frame, and X̂(n) is the secondary channel decoded signal of the current frame;
    wherein NOVA_1 is a transition processing length;
    wherein fade_in(n) is a fade-in factor, fade_in(n) = (n - (N - upmixing_delay - NOVA_1)) / NOVA_1;
    wherein fade_out(n) is a fade-out factor, fade_out(n) = (N - upmixing_delay - n) / NOVA_1;
    wherein n is a sample index, n = 0, 1, ..., N - 1;
    wherein upmixing_delay is the decoding delay compensation; and
    wherein M̂_22 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M̂_12 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
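On the decoder side, the upmix reverses the encoder's downmix. The claims define the upmix matrices by explicit formulas (given as images in the published application); the sketch below instead uses the matrix inverse of an example downmix matrix as an illustrative stand-in, which is sufficient to show the round-trip reconstruction of the left and right channels.

```python
# Sketch of time-domain upmix; using the inverse of the downmix matrix
# is an illustrative assumption, not the claims' formula.
import numpy as np

def upmix(y, x, m22):
    """Reconstruct left/right channel signals from the primary (y) and
    secondary (x) decoded signals by applying an upmix matrix derived
    from the encoder's downmix matrix m22 (here: its inverse)."""
    m_up = np.linalg.inv(m22)
    left = np.empty(len(y))
    right = np.empty(len(y))
    for n in range(len(y)):
        left[n], right[n] = m_up @ np.array([y[n], x[n]])
    return left, right
```

Downmixing a sample pair (L, R) = (1, 0) with an invertible 2x2 matrix and then upmixing with its inverse recovers (1, 0), which is the property the claimed upmix matrices are constructed to provide.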
  14. The method according to any one of claims 11 to 13, wherein M̂_22 is any one of several candidate upmix matrices constructed from α_1 and α_2 (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1 = ratio_SM; α_2 = 1 - ratio_SM; and ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  15. The method according to any one of claims 11 to 14, wherein M̂_12 is any one of several candidate upmix matrices constructed from α_1_pre and α_2_pre (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1_pre = tdm_last_ratio_SM; α_2_pre = 1 - tdm_last_ratio_SM; and tdm_last_ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  16. A time-domain stereo encoding apparatus, comprising a processor and a memory coupled to each other, wherein the processor is configured to perform the following steps:
    determining an encoding mode of a current frame;
    when it is determined that the encoding mode of the current frame is a non-correlated signal encoding mode, performing, by using a time-domain downmix processing manner corresponding to the non-correlated signal encoding mode, time-domain downmix processing on left and right channel signals of the current frame, to obtain primary and secondary channel signals of the current frame, wherein the time-domain downmix processing manner corresponding to the non-correlated signal encoding mode is a time-domain downmix processing manner corresponding to a non-correlated signal channel combination scheme, and the non-correlated signal channel combination scheme is a channel combination scheme corresponding to near out-of-phase signals; and
    encoding the obtained primary and secondary channel signals of the current frame.
  17. The apparatus according to claim 16, wherein, in the aspect of performing, by using the time-domain downmix processing manner corresponding to the non-correlated signal encoding mode, time-domain downmix processing on the left and right channel signals of the current frame to obtain the primary and secondary channel signals of the current frame, the processor is specifically configured to:
    perform time-domain downmix processing on the left and right channel signals of the current frame based on a channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame; or perform time-domain downmix processing on the left and right channel signals of the current frame based on channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame.
  18. The apparatus according to claim 17, wherein, when time-domain downmix processing is performed on the left and right channel signals of the current frame based on the channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the primary and secondary channel signals of the current frame:
    [Y(n), X(n)]^T = M_22 * [X_L(n), X_R(n)]^T
    wherein X_L(n) is the left channel signal of the current frame, X_R(n) is the right channel signal of the current frame, Y(n) is the primary channel signal of the current frame obtained through time-domain downmix processing, and X(n) is the secondary channel signal of the current frame obtained through time-domain downmix processing; and
    wherein n is a sample index, and M_22 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  19. The apparatus according to claim 17, wherein, when time-domain downmix processing is performed on the left and right channel signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame:
    [Y(n), X(n)]^T = M_12 * [X_L(n), X_R(n)]^T, for n in [0, N - delay_com)
    [Y(n), X(n)]^T = M_22 * [X_L(n), X_R(n)]^T, for n in [N - delay_com, N)
    wherein X_L(n) is the left channel signal of the current frame, X_R(n) is the right channel signal of the current frame, Y(n) is the primary channel signal of the current frame obtained through time-domain downmix processing, and X(n) is the secondary channel signal of the current frame obtained through time-domain downmix processing; and
    wherein n is a sample index, N is the frame length, and delay_com is the encoding delay compensation; M_22 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M_12 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  20. The apparatus according to claim 17, wherein, when time-domain downmix processing is performed on the left and right channel signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the primary and secondary channel signals of the current frame:
    [Y(n), X(n)]^T = M_12 * [X_L(n), X_R(n)]^T, for n in [0, N - delay_com - NOVA_1)
    [Y(n), X(n)]^T = fade_out(n) * M_12 * [X_L(n), X_R(n)]^T + fade_in(n) * M_22 * [X_L(n), X_R(n)]^T, for n in [N - delay_com - NOVA_1, N - delay_com)
    [Y(n), X(n)]^T = M_22 * [X_L(n), X_R(n)]^T, for n in [N - delay_com, N)
    wherein fade_in(n) is a fade-in factor, fade_in(n) = (n - (N - delay_com - NOVA_1)) / NOVA_1;
    wherein fade_out(n) is a fade-out factor, fade_out(n) = (N - delay_com - n) / NOVA_1;
    wherein NOVA_1 is a transition processing length;
    wherein n is a sample index, n = 0, 1, ..., N - 1;
    X_L(n) is the left channel signal of the current frame, X_R(n) is the right channel signal of the current frame, Y(n) is the primary channel signal of the current frame obtained through time-domain downmix processing, and X(n) is the secondary channel signal of the current frame obtained through time-domain downmix processing; and
    wherein delay_com is the encoding delay compensation; M_22 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M_12 is the downmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  21. The apparatus according to any one of claims 18 to 20, wherein M_22 is any one of several candidate downmix matrices constructed from α_1 and α_2 (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1 = ratio_SM; α_2 = 1 - ratio_SM; and ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  22. The apparatus according to any one of claims 18 to 21, wherein M_12 is any one of several candidate downmix matrices constructed from α_1_pre and α_2_pre (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1_pre = tdm_last_ratio_SM; α_2_pre = 1 - tdm_last_ratio_SM; and tdm_last_ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  23. The apparatus according to any one of claims 18 to 22, wherein:
    [X_L(n), X_R(n)]^T = [x_L(n), x_R(n)]^T; or
    [X_L(n), X_R(n)]^T = [x_L_HP(n), x_R_HP(n)]^T; or
    [X_L(n), X_R(n)]^T = [x'_L(n), x'_R(n)]^T;
    wherein x_L(n) is the original left channel signal of the current frame, and x_R(n) is the original right channel signal of the current frame; x_L_HP(n) is the time-domain preprocessed left channel signal of the current frame, and x_R_HP(n) is the time-domain preprocessed right channel signal of the current frame; and x'_L(n) is the delay-aligned left channel signal of the current frame, and x'_R(n) is the delay-aligned right channel signal of the current frame.
  24. A stereo decoding apparatus, comprising a processor and a memory coupled to each other, wherein the processor is configured to perform the following steps:
    decoding a bitstream to obtain primary and secondary channel decoded signals of a current frame;
    determining a decoding mode of the current frame; and
    when it is determined that the decoding mode of the current frame is a non-correlated signal decoding mode, performing, by using a time-domain upmix processing manner corresponding to the non-correlated signal decoding mode, time-domain upmix processing on the primary and secondary channel decoded signals of the current frame, to obtain left and right channel reconstructed signals of the current frame, wherein the time-domain upmix processing manner corresponding to the non-correlated signal decoding mode is a time-domain upmix processing manner corresponding to a non-correlated signal channel combination scheme, and the non-correlated signal channel combination scheme is a channel combination scheme corresponding to near out-of-phase signals.
  25. The apparatus according to claim 24, wherein, in the aspect of performing, by using the time-domain upmix processing manner corresponding to the non-correlated signal decoding mode, time-domain upmix processing on the primary and secondary channel decoded signals of the current frame to obtain the left and right channel reconstructed signals of the current frame, the processor is configured to: perform time-domain upmix processing on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the left and right channel reconstructed signals of the current frame; or perform time-domain upmix processing on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstructed signals of the current frame.
  26. The apparatus according to claim 25, wherein, when time-domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factor of the non-correlated signal channel combination scheme of the current frame, to obtain the left and right channel reconstructed signals of the current frame:
    [x̂_L(n), x̂_R(n)]^T = M̂_22 * [Ŷ(n), X̂(n)]^T
    wherein x̂_L(n) is the left channel reconstructed signal of the current frame, x̂_R(n) is the right channel reconstructed signal of the current frame, Ŷ(n) is the primary channel decoded signal of the current frame, and X̂(n) is the secondary channel decoded signal of the current frame; and
    wherein n is a sample index, and M̂_22 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  27. The apparatus according to claim 25, wherein, when time-domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstructed signals of the current frame:
    [x̂_L(n), x̂_R(n)]^T = M̂_12 * [Ŷ(n), X̂(n)]^T, for n in [0, N - upmixing_delay)
    [x̂_L(n), x̂_R(n)]^T = M̂_22 * [Ŷ(n), X̂(n)]^T, for n in [N - upmixing_delay, N)
    wherein x̂_L(n) is the left channel reconstructed signal of the current frame, x̂_R(n) is the right channel reconstructed signal of the current frame, Ŷ(n) is the primary channel decoded signal of the current frame, and X̂(n) is the secondary channel decoded signal of the current frame;
    wherein n is a sample index, n = 0, 1, ..., N - 1;
    wherein upmixing_delay is the decoding delay compensation; and
    wherein M̂_22 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M̂_12 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  28. The apparatus according to claim 25, wherein, when time-domain upmix processing is performed on the primary and secondary channel decoded signals of the current frame based on the channel combination ratio factors of the non-correlated signal channel combination schemes of the current frame and the previous frame, to obtain the left and right channel reconstructed signals of the current frame:
    [x̂_L(n), x̂_R(n)]^T = M̂_12 * [Ŷ(n), X̂(n)]^T, for n in [0, N - upmixing_delay - NOVA_1)
    [x̂_L(n), x̂_R(n)]^T = fade_out(n) * M̂_12 * [Ŷ(n), X̂(n)]^T + fade_in(n) * M̂_22 * [Ŷ(n), X̂(n)]^T, for n in [N - upmixing_delay - NOVA_1, N - upmixing_delay)
    [x̂_L(n), x̂_R(n)]^T = M̂_22 * [Ŷ(n), X̂(n)]^T, for n in [N - upmixing_delay, N)
    wherein x̂_L(n) is the left channel reconstructed signal of the current frame, x̂_R(n) is the right channel reconstructed signal of the current frame, Ŷ(n) is the primary channel decoded signal of the current frame, and X̂(n) is the secondary channel decoded signal of the current frame;
    wherein NOVA_1 is a transition processing length;
    wherein fade_in(n) is a fade-in factor, fade_in(n) = (n - (N - upmixing_delay - NOVA_1)) / NOVA_1;
    wherein fade_out(n) is a fade-out factor, fade_out(n) = (N - upmixing_delay - n) / NOVA_1;
    wherein n is a sample index, n = 0, 1, ..., N - 1;
    wherein upmixing_delay is the decoding delay compensation; and
    wherein M̂_22 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the current frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame; and M̂_12 is the upmix matrix corresponding to the non-correlated signal channel combination scheme of the previous frame, constructed based on the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  29. The apparatus according to any one of claims 26 to 28, wherein M̂_22 is any one of several candidate upmix matrices constructed from α_1 and α_2 (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1 = ratio_SM; α_2 = 1 - ratio_SM; and ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the current frame.
  30. The apparatus according to any one of claims 26 to 29, wherein M̂_12 is any one of several candidate upmix matrices constructed from α_1_pre and α_2_pre (the six candidate matrix formulas appear as images in the published application and are not reproduced here);
    wherein α_1_pre = tdm_last_ratio_SM; α_2_pre = 1 - tdm_last_ratio_SM; and tdm_last_ratio_SM is the channel combination ratio factor corresponding to the non-correlated signal channel combination scheme of the previous frame.
  31. A computer-readable storage medium, wherein the computer-readable storage medium stores program code, and the program code includes instructions for performing the method according to any one of claims 1 to 8.
  32. A computer-readable storage medium, wherein the computer-readable storage medium stores program code, and the program code includes instructions for performing the method according to any one of claims 9 to 15.
PCT/CN2018/100060, priority date 2017-08-10, filing date 2018-08-10: Time-domain stereo encoding and decoding method and related products (WO2019029724A1)


Publication: WO2019029724A1, published 2019-02-14.
