WO2012163304A1 - Audio decoding method and device - Google Patents

Audio decoding method and device

Info

Publication number
WO2012163304A1
WO2012163304A1 (PCT/CN2012/076435)
Authority
WO
WIPO (PCT)
Prior art keywords
channel
audio
audio data
lost
channels
Prior art date
Application number
PCT/CN2012/076435
Other languages
English (en)
French (fr)
Inventor
赵云轩
刘智辉
Original Assignee
华为终端有限公司 (Huawei Device Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为终端有限公司 (Huawei Device Co., Ltd.)
Priority to EP12792712.7A (EP2654039B1)
Priority to AU2012265335A (AU2012265335B2)
Publication of WO2012163304A1
Priority to US14/090,216 (US20140088976A1)

Classifications

    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/005Correction of errors induced by the transmission channel, if related to the coding algorithm
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems

Definitions

  • The present invention relates to the field of data processing, and in particular to an audio decoding method and apparatus.
  • Background art: Video communication technology enables voice, images, data and other information to be transmitted together over long distances, so that people communicating remotely can hear the other party's voice while watching the other party's moving images and shared film content, greatly enhancing the sense of intimacy and presence of remote communication.
  • A video conferencing system is generally composed of a video conferencing server (taking a multipoint control unit (MCU) as an example) and terminals.
  • In a video conference, each terminal corresponds to a conference site; each terminal collects the sound and images of its site, encodes them, and sends them to the MCU.
  • The MCU processes the sound and images in a certain way (sound mixing, image forwarding, or composing a multi-picture layout) and sends the processed sound and images to each terminal.
  • The terminals decode and output the sound and images of the remote sites, achieving the purpose of remote communication.
  • Existing video conferencing systems generally use the User Datagram Protocol (UDP) to transmit audio and image data. Since UDP provides a transaction-oriented, simple, unreliable message transfer service, packet loss during the transmission of audio and image data is unavoidable.
  • In the prior art, after the encoding terminal completes audio encoding, it sends the encoded data packets to the decoding terminal.
  • After receiving a data packet, the decoding terminal strips off the Real-time Transport Protocol (RTP) header, deinterleaves the payload according to the multi-channel bit-stream format, and decodes the audio data of each channel.
  • If packet loss occurs, the decoding terminal can determine the channels to which the lost audio data belongs, and for each channel it independently conceals the lost audio data based on the decoded audio data within that channel, i.e. intra-channel packet loss concealment, to obtain the final output signal. For details, refer to FIG. 1.
  • In the prior-art solution, if data packet P2 is lost while data packets P1 and P3 are not, the decoding terminal can determine that the lost audio data belongs to the left channel (L) and the right channel (R). For the left channel, it uses audio data L1 in packet P1 and/or audio data L3 in packet P3 to conceal the loss of audio data L2 in packet P2, and it uses audio data R1 in packet P1 and/or audio data R3 in packet P3 to conceal the loss of audio data R2 in packet P2.
  • However, when the decoding terminal performs packet loss concealment in this way, the concealment is performed only within each channel; for a multi-channel system, such processing limits the effectiveness of the packet loss concealment.
  • Summary of the invention: Embodiments of the present invention provide an audio decoding method and apparatus that can improve the effect of packet loss concealment in a decoding system with N channels (N being greater than or equal to 2).
  • The audio decoding method provided by the embodiment of the present invention is applied to an audio decoding system that includes N channels, N being an integer greater than or equal to 2, and includes: receiving data packets; when it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to a certain audio frame is lost, if the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, decoding the not-lost audio data of the N-M channels corresponding to the audio frame, M being an integer greater than 0 and less than N; extracting, from the decoded data, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame; determining whether there is correlation between a first channel and a second channel, the first channel being any one of the M channels whose audio data in the audio frame is lost and the second channel being any one of the N-M channels whose audio data in the audio frame is not lost; if so, performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame; if not, performing intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
  • The audio decoding device provided by the embodiment of the present invention is configured to decode the audio data of N channels, N being an integer greater than or equal to 2, and includes: a receiving unit, configured to receive data packets; a decoding unit, configured to decode the not-lost audio data of the N-M channels corresponding to an audio frame when it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to that audio frame is lost, provided that the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, M being an integer greater than 0 and less than N; an extracting unit, configured to extract the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame obtained by the decoding unit; a correlation determining unit, configured to determine whether there is correlation between a first channel and a second channel, the first channel being any one of the M channels whose audio data in the audio frame is lost and the second channel being any one of the N-M channels whose audio data in the audio frame is not lost, and to trigger a first packet loss concealment unit if so and a second packet loss concealment unit if not; the first packet loss concealment unit, configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter, extracted by the extracting unit, of the not-lost audio data of the second channel corresponding to the audio frame; and the second packet loss concealment unit, configured to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
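  • For illustration only, the following sketch shows the decision flow described in the two preceding paragraphs. The helper callables extract_features, inter_channel_concealment and intra_channel_concealment are hypothetical placeholders, not part of the patent; the sketch only reflects the branching between inter-channel and intra-channel concealment.

```python
def conceal_frame(frame, correlated, extract_features,
                  inter_channel_concealment, intra_channel_concealment):
    """Decision-flow sketch for one audio frame of an N-channel decoder.

    frame: dict mapping channel id -> decoded audio data, or None if lost.
    correlated(a, b): hypothetical test for correlation between two channels.
    The three feature/concealment helpers are assumed callables, not patent APIs.
    """
    lost = [ch for ch, data in frame.items() if data is None]
    received = [ch for ch, data in frame.items() if data is not None]

    if not lost:
        return frame                                   # nothing to conceal
    if not received:                                   # the whole frame was lost
        return {ch: intra_channel_concealment(ch) for ch in frame}

    # Signal characteristic parameters of the N-M channels that were not lost.
    features = {ch: extract_features(frame[ch]) for ch in received}

    for first in lost:                                 # "first channel"
        partners = [ch for ch in received if correlated(first, ch)]
        if partners:                                   # exploit inter-channel correlation
            frame[first] = inter_channel_concealment(
                first, {ch: features[ch] for ch in partners})
        else:                                          # fall back to intra-channel concealment
            frame[first] = intra_channel_concealment(first)
    return frame
```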
  • It can be seen from the above technical solutions that the embodiments of the present invention have the following advantages: when it is detected that packet loss has occurred and the audio data of M channels (M being an integer greater than 0 and less than N) among the N channels (N being an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if the audio data of the other channels belonging to the same audio frame as the lost audio data is not lost, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame can be acquired, and when there is correlation between the first channel and the second channel, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame. Because the not-lost audio data belongs to the same audio frame but to different channels, the correlation between different channels can be exploited when the audio decoding device performs packet loss concealment, thereby improving the effect of packet loss concealment in an N-channel system.
  • FIG. 1 is a schematic structural diagram of a data packet in the prior art;
  • FIG. 2 is a schematic diagram of an embodiment of an audio decoding method according to the present invention;
  • FIG. 3 is a schematic diagram of audio data division according to the present invention;
  • FIG. 4 is a schematic diagram of another embodiment of an audio decoding method according to the present invention;
  • FIG. 5 is a schematic diagram of packet loss during transmission of audio data according to the present invention;
  • FIG. 6 is a schematic diagram of a data flow of audio decoding according to the present invention;
  • FIG. 7 is a schematic diagram of an embodiment of an audio decoding device according to the present invention;
  • FIG. 8 is a schematic diagram of another embodiment of an audio decoding device according to the present invention.
  • Detailed description:
  • Embodiments of the present invention provide an audio decoding method and apparatus, which can improve the effect of packet loss concealing processing in an N-channel (N is an integer greater than or equal to 2) audio decoding system.
  • an embodiment of the audio decoding method of the present invention includes:
  • the audio decoding device may be configured to decode audio data of N channels, where N is an integer greater than or equal to 2.
  • the data packet is sent to the audio decoding device via the network.
  • the transmitting process may be that the audio encoding device directly transmits the data packet to the audio decoding device, or the audio encoding device may transmit the data packet to the forwarding device, and then the forwarding device transmits the data packet to the audio decoding device.
  • For ease of understanding, some parameters used in the audio encoding and decoding process are briefly explained first. Referring to FIG. 3, which shows the audio data streams of N channels, the audio data of the N channels belonging to the same unit duration (for example C1i, C2i, ..., CNi) can be regarded as one audio frame, i.e. audio frame 1, audio frame 2, ..., audio frame i as divided in FIG. 3, where i is the sequence number of the audio frame and the value of i is related to the duration of the audio data.
  • For convenience of the following description, the audio data of one unit length may be called a segment of audio data. The unit length can be determined according to the actual application environment and also represents the length of one audio frame, for example 5 milliseconds or 10 milliseconds. Each audio frame can be regarded as the collection of the audio data of the different channels belonging to the same unit duration.
  • It should be noted that the audio data of the N channels is divided into different audio frames in time order; each audio frame has a fixed length and contains N segments of audio data, each segment corresponding to one channel, where N is the number of channels and N is an integer greater than or equal to 2.
  • For example, for a 3-channel system, each audio frame contains 3 segments of audio data, corresponding to the left channel, the center channel, and the right channel respectively.
  • Since UDP provides a transaction-oriented, simple, unreliable message transfer service, packet loss during the transmission of audio and image data is unavoidable. When packet loss is detected, the audio decoding device can determine, for each channel, which audio data has been lost.
  • If the audio data of M channels corresponding to a certain audio frame is lost, and the audio data of the other channels belonging to the same audio frame as the lost audio data is not lost, the audio data of that audio frame has not been completely lost; in that case the audio decoding device can decode the not-lost audio data of the N-M channels corresponding to the audio frame.
  • In this embodiment, M is an integer greater than 0 and less than N.
  • After decoding the not-lost audio data of the N-M channels corresponding to the audio frame, the audio decoding device can obtain the signal characteristic parameters of that audio data.
  • In this embodiment, the signal characteristic parameter may specifically be the signal pitch period and/or the signal energy. It can be understood that in practical applications the signal characteristic parameter may also be represented by other parameters, for example the signal tone, which is not limited herein.
  • The manner in which the audio decoding device extracts the signal characteristic parameters of the not-lost audio data of the audio frame in the N-M channels obtained after decoding is known in the prior art and is not described here again.
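  • As an illustration of the kind of signal characteristic parameters mentioned above, the following sketch estimates a pitch period by autocorrelation and computes the frame energy. The 50-400 Hz search range and the use of NumPy are assumptions made for this example, not details taken from the patent.

```python
import numpy as np

def frame_features(samples, sample_rate, min_f0=50.0, max_f0=400.0):
    """Return a pitch-period estimate (in Hz) and the energy of one audio frame."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    energy = float(np.sum(x * x))

    # Autocorrelation over lags corresponding to the assumed pitch range.
    min_lag = max(1, int(sample_rate / max_f0))
    max_lag = min(int(sample_rate / min_f0), len(x) - 1)
    corrs = [float(np.dot(x[:-lag], x[lag:])) for lag in range(min_lag, max_lag + 1)]
    best_lag = min_lag + int(np.argmax(corrs))

    return {"pitch_hz": sample_rate / best_lag, "energy": energy}

# Example: a 30 ms frame of a 100 Hz tone sampled at 8 kHz.
t = np.arange(int(0.030 * 8000)) / 8000.0
print(frame_features(np.sin(2 * np.pi * 100 * t), 8000))
```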
  • Step 204: determine whether there is correlation between the first channel and the second channel; if so, execute step 205, otherwise execute step 206.
  • In this embodiment, the first channel is any one of the M channels whose audio data in the audio frame is lost, and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost.
  • Step 205: if there is correlation between the first channel and the second channel, the audio data transmitted by the first channel and the audio data transmitted by the second channel come from the same sound source, so there is also a strong correlation between the lost audio data of the first channel and the not-lost audio data of the second channel.
  • In this embodiment, when performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame, the audio decoding device may refer to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame, i.e. it uses that signal characteristic parameter to conceal the lost audio data of the first channel corresponding to the audio frame.
  • It should be noted that if, besides the second channel, the channels corresponding to other not-lost audio data also have correlation with the first channel, the audio decoding device may also perform packet loss concealment on the lost audio data of the first channel according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame together with the signal characteristic parameters of the not-lost audio data of at least one related channel corresponding to the audio frame.
  • A related channel is a channel, among the N-M channels corresponding to the not-lost audio data and other than the second channel, that has correlation with the first channel.
  • Step 206: if there is no correlation between the first channel and the second channel, the audio data they transmit does not come from the same sound source, so there is essentially no correlation between the lost audio data of the first channel and the not-lost audio data of the second channel.
  • In this embodiment, the audio decoding device may use a preset packet loss concealment algorithm to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame; the specific process is similar to conventional packet loss concealment and is not described here again.
  • In the embodiment of the present invention, when it is detected that packet loss has occurred and the audio data of M channels (M being an integer greater than 0 and less than N) among the N channels (N being an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if the audio data of the other channels belonging to the same audio frame as the lost audio data is not lost, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame can be acquired, and when there is correlation between the first channel and the second channel, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel. Because the not-lost audio data belongs to the same audio frame but to different channels, the correlation between different channels can be exploited during packet loss concealment, thereby improving its effect in an N-channel system.
  • For ease of understanding, the audio decoding method of the present invention is described in detail below with a specific example. Referring to FIG. 4, another embodiment of the audio decoding method of the present invention includes:
  • Step 401 in this embodiment is similar to the content of step 201 in the foregoing embodiment shown in FIG. 2, and details are not described herein again.
  • Since UDP provides a transaction-oriented, simple, unreliable message transfer service, packet loss during the transmission of audio and image data is unavoidable. When packet loss is detected, the audio decoding device can determine, for each channel, which audio data has been lost.
  • Each data packet has a corresponding identifier. For example, the first data packet sent by the audio encoding device is data packet 1 with identifier 000, the second is data packet 2 with identifier 001, the third is data packet 3 with identifier 010, and so on.
  • The audio decoding device can determine whether packet loss has occurred according to the identifiers of the received data packets. For example, the audio encoding device numbers the data packets sequentially, starting from 000 and continuing with 001, 010, 011, and so on. Suppose the identifier of the first packet received by the audio decoding device is 000 and the identifier of the second is 010; considering that different packets may take different routes, the device waits for a period of time, and if the packet with identifier 001 still cannot be received, the audio decoding device can determine by detection that packet loss has occurred and that the lost packet is data packet 2.
  • It can be understood that in practical applications the audio decoding device may also use other methods to determine whether packet loss has occurred and which packets were lost, which is not limited here. A sketch of this identifier-based detection follows.
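```python
def find_lost_packets(received_ids, id_bits=3):
    """Return the identifiers missing between consecutive received packets (sketch).
    The 3-bit numbering (000, 001, 010, ...) follows the example above; the
    wrap-around handling is a simplifying assumption added here."""
    modulus = 1 << id_bits
    ordered = sorted(set(received_ids))
    lost = []
    for prev, curr in zip(ordered, ordered[1:]):
        gap = (curr - prev) % modulus
        lost.extend((prev + k) % modulus for k in range(1, gap))
    return lost

# Example from the text: packets 000 and 010 arrive, 001 never does.
print(find_lost_packets([0b000, 0b010]))   # -> [1], i.e. data packet 2 is lost
```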
  • The audio data of unit length of the different channels belonging to the same unit duration forms one audio frame, so after detecting that packet loss has occurred the audio decoding device can first query which channels lost packets. If all channels lost audio data in the same audio frame, an entire audio frame has been lost, and the audio decoding device may perform intra-channel packet loss concealment on the lost audio data of each channel according to a preset packet loss concealment algorithm; the specific process is similar to conventional packet loss concealment and is not described here again.
  • If the audio decoding device learns that not all of the N channels lost audio data in the audio frame, but only the audio data of M of the channels was lost, and the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, the audio decoding device can decode the not-lost audio data of that audio frame.
  • In this embodiment, M is an integer greater than 0 and less than N.
  • After decoding the not-lost audio data of the N-M channels corresponding to the audio frame, the audio decoding device can obtain the signal characteristic parameters of that audio data by extraction.
  • In this embodiment, the signal characteristic parameter may specifically be the signal pitch period and/or the signal energy. It can be understood that in practical applications the signal characteristic parameter may also be represented by other parameters, for example the signal tone, which is not limited herein.
  • Step 404: determine whether there is correlation between the first channel and the second channel; if so, execute step 405, otherwise execute step 408.
  • The first channel is any one of the M channels whose audio data in the audio frame has been lost, and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost. In this embodiment, in order to determine whether there is correlation between channels, the audio decoding device may analyze the historical audio data of each channel. The specific analysis may include:
  • (1) Analysis using the audio data: the audio decoding device may use a correlation function to calculate the correlation value between the audio data already received on the first channel and the audio data already received on the second channel that belongs to the same audio frames as the already received audio data of the first channel.
  • The audio decoding device determines whether there is correlation between the first channel and the second channel according to this correlation value. Specifically, if the correlation value approaches 1, there is correlation between the first channel and the second channel; if the correlation value approaches 0, there is no correlation between them.
  • (2) Analysis using the signal characteristic parameters of the audio data: the audio decoding device may acquire the signal characteristic parameters of the audio data already received on the first channel and of the audio data already received on the second channel that belongs to the same audio frames as the audio data already received on the first channel.
  • After obtaining these signal characteristic parameters, the audio decoding device may determine the correlation between the first channel and the second channel according to them. Specifically, it may determine whether the two sets of signal characteristic parameters satisfy a preset correlation condition; if they do, it determines that there is correlation between the first channel and the second channel, and if not, that there is no correlation.
  • In this embodiment, the preset correlation condition may be that the difference between the signal characteristic parameters of the audio data already received on the first channel and the signal characteristic parameters of the audio data already received on the second channel belonging to the same audio frames is smaller than a preset value; if the difference is smaller than the preset value, the condition is satisfied, and vice versa.
  • The above are only some of the ways in which the audio decoding device of this embodiment may determine the correlation between the first channel and the second channel. It can be understood that in practical applications the audio decoding device may also use other ways, for example the audio encoding device may notify the audio decoding device of the correlation between channels before or while sending the data packets, or the correlation between channels may be preset directly in the audio decoding device, which is not described here again. A sketch of the two analysis approaches follows.
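```python
import numpy as np

def channels_correlated(hist_first, hist_second, feat_first=None, feat_second=None,
                        corr_threshold=0.5, feat_threshold=0.2):
    """Decide whether the first and second channels are correlated (sketch).
    The numeric thresholds are assumptions; the patent only says the correlation
    value should approach 1 (correlated) or 0 (uncorrelated)."""
    a = np.asarray(hist_first, dtype=float)
    b = np.asarray(hist_second, dtype=float)

    # (1) Correlation value between audio data belonging to the same audio frames.
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    corr_value = float(np.dot(a, b) / denom) if denom > 0 else 0.0
    if corr_value >= corr_threshold:        # "approaches 1"
        return True

    # (2) Difference between signal characteristic parameters (e.g. pitch periods).
    if feat_first is not None and feat_second is not None:
        scale = max(abs(feat_first), abs(feat_second), 1e-9)
        return abs(feat_first - feat_second) / scale < feat_threshold
    return False
```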
  • In this embodiment, the audio decoding device can determine the correlation between channels in the above ways. For example, suppose there are four channels: channel 1, channel 2, channel 3, and channel 4. The correlation determined by the audio decoding device may be: "channel 1, channel 2, and channel 3 are correlated with one another; channel 1 and channel 4 are not correlated; channel 2 and channel 4 are not correlated; channel 3 and channel 4 are not correlated";
  • or it may be: "channel 1 and channel 3 are correlated; channel 2 and channel 4 are correlated; channel 1 and channel 2 are not correlated; channel 1 and channel 4 are not correlated; channel 3 and channel 2 are not correlated; channel 3 and channel 4 are not correlated".
  • It should be noted that step 404 in this embodiment is the process by which the audio decoding device determines the correlation between the first channel and the second channel. This process is not limited to being executed after step 403; it may be executed periodically, for example every 10 seconds, 20 seconds, or some other duration, so that the correlation between channels can be updated in real time.
  • Step 405: if the audio decoding device determines that there is correlation between the first channel and the second channel, it may first calculate, according to the intra-channel packet loss concealment algorithm, the time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame. Specifically:
  • Suppose channel 3 is the channel whose audio data corresponding to the audio frame has been lost (i.e. the first channel). The audio decoding device can obtain from channel 3 the signal characteristic parameter of the most recently successfully received audio data before the current audio frame and perform a time weighting operation on it to obtain the time compensation parameter. The specific weighting operation may be:
  • time compensation parameter = (a * length / (delta * length)) * fc1;
  • where a is the time weighting coefficient, length is the length of one audio frame, delta is the difference between the frame number of the not-lost audio data being used and the frame number of the lost audio data, and fc1 is the signal characteristic parameter of the not-lost audio data within the channel.
  • For example, suppose the audio decoding device determines that the current audio frame of channel 3 is audio frame 3, the audio decoding device received the audio data of channel 3 in audio frame 1, the signal pitch period of that audio data is 100 Hz, and the length of each audio frame is 30 milliseconds; then the time compensation parameter can be calculated as:
  • time compensation parameter = (a * 30 / (30 + 30 + 30)) * 100;
  • where a is the time weighting coefficient, which is related to parameters such as the signal pitch period and the audio frame length.
  • This time compensation parameter represents the compensation of the signal pitch period of the lost audio data within the channel.
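  • The worked example above, written out as a sketch. The value chosen for the time weighting coefficient a is purely illustrative; the patent only states that a depends on the signal pitch period and the audio frame length.

```python
def time_compensation(a, frame_length_ms, delta, fc1):
    """time compensation parameter = (a * length / (delta * length)) * fc1"""
    return (a * frame_length_ms / (delta * frame_length_ms)) * fc1

# Channel 3's last good audio data was in frame 1 with a 100 Hz pitch period,
# the current frame is frame 3, and each frame is 30 ms long.
a = 0.9                                   # assumed value for illustration only
print((a * 30 / (30 + 30 + 30)) * 100)    # the expression as written in the text
print(time_compensation(a, 30, 3, 100))   # same value, taking delta as the example's denominator implies
```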
  • Step 406: correct the time compensation parameter by using the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter. Suppose the channel whose audio data is not lost in audio frame 3 is channel 1 (i.e. the second channel).
  • The process of correcting the time compensation parameter to obtain the comprehensive compensation parameter may be as follows, where the spatial weighting coefficient b is related to the degree of correlation between the channels. It should be noted that in practical applications the audio decoding device may also use other methods of correcting the time compensation parameter with the signal characteristic parameter of the audio data of channel 1 that is not lost in audio frame 3, which is not limited here.
  • In this correction, x is the time compensation weight, y is the spatial compensation weight, b is the spatial weighting coefficient, and fc2 is the signal characteristic parameter of the not-lost inter-channel audio data.
  • The comprehensive compensation parameter in this embodiment may then be:
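  • The exact expression for the comprehensive compensation parameter does not survive in the extracted text, so the weighted combination below is only an assumption built from the quantities the text does name (x, y, b and fc2).

```python
def comprehensive_compensation(time_comp, fc2, x, y, b):
    """Correct the intra-channel time compensation with the inter-channel
    signal characteristic parameter fc2 (assumed weighted-sum form)."""
    return x * time_comp + y * b * fc2

# e.g. time compensation 30, inter-channel pitch period fc2 = 150 Hz
print(comprehensive_compensation(30.0, 150.0, x=0.6, y=0.4, b=1.0))
```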
  • The above describes correcting the time compensation parameter with the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame to obtain the comprehensive compensation parameter. It should be noted that if, besides the second channel, other channels also have correlation with the first channel, the time compensation parameter may also be corrected according to the signal characteristic parameters of the not-lost audio data of those related channels.
  • A related channel is a channel, among the N-M channels corresponding to the not-lost audio data and other than the second channel, that has correlation with the first channel. The specific process of correcting to obtain the comprehensive compensation parameter may be:
  • where i is the number of channels that have correlation with the first channel and participate in the correction of the comprehensive compensation parameter, j denotes the j-th of those i channels, mj is the correlation weighting coefficient of the j-th channel, and b is the spatial weighting coefficient; i is an integer greater than or equal to 1 and less than or equal to N-M, j is an integer greater than or equal to 1, and mj and the spatial weighting coefficient b are related to the degree of correlation between the channels.
  • For example, m1 is the correlation weighting coefficient of channel 1, fc01 is the signal characteristic parameter of the audio data of channel 1 that is not lost in audio frame 3, m2 is the correlation weighting coefficient of channel 2, and fc02 is the signal characteristic parameter of the audio data of channel 2 that is not lost in audio frame 3.
  • The specific values of m1 and m2 are related to the degree of correlation between the channels. For example, if the distance between the audio collection device corresponding to channel 1 and the audio collection device corresponding to channel 3 is smaller than the distance between the audio collection device corresponding to channel 2 and the audio collection device corresponding to channel 3, the correlation between channel 1 and channel 3 is stronger, and m1 can be set to be greater than m2, and vice versa. It can be understood that in practical applications there are more ways and rules for setting the correlation weighting coefficients, which are not limited here.
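  • A sketch of the multi-channel correction using the correlation weighting coefficients mj described above. The normalised weighted sum is an assumed form (the patent's exact expression is not reproduced in this text); the 150 Hz and 170 Hz pitch periods in the usage line are the values the text gives for channel 1 and channel 2.

```python
def comprehensive_compensation_multi(time_comp, related_feats, related_weights, x, y, b):
    """Correct the time compensation parameter using several correlated channels.

    related_feats[j]   - signal characteristic parameter fc0j of the j-th related channel
    related_weights[j] - its correlation weighting coefficient mj
    x, y, b            - time weight, spatial weight, spatial weighting coefficient
    """
    if not related_feats:
        return x * time_comp
    weighted = sum(m * f for m, f in zip(related_weights, related_feats))
    weighted /= sum(related_weights)            # normalise by the mj (an assumption)
    return x * time_comp + y * b * weighted

# Channel 1 is closer to channel 3 than channel 2 is, so m1 > m2.
print(comprehensive_compensation_multi(30.0, [150.0, 170.0], [0.7, 0.3],
                                       x=0.6, y=0.4, b=1.0))
```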
  • The above describes step 406 correcting the time compensation parameter with the signal characteristic parameters of the audio data of channel 1 and channel 2 that is not lost in audio frame 3. It should be noted that in practical applications the audio decoding device can also directly perform a weighted operation over the intra-channel and inter-channel quantities to obtain the comprehensive compensation parameter. For example, suppose the signal pitch period of the not-lost audio data of channel 1 is 150 Hz and the signal pitch period of the not-lost audio data of channel 2 is 170 Hz, where x is the time compensation weight, y is the spatial compensation weight, and b is the spatial weighting coefficient; the comprehensive compensation parameter in this embodiment may then be:
  • Step 407: after the audio decoding device calculates the comprehensive compensation parameter, it can recover the audio data that channel 3 lost in audio frame 3 according to the comprehensive compensation parameter.
  • For example, the signal characteristic parameter of the audio data that channel 3 lost in audio frame 3 can be set as: comprehensive compensation parameter + (signal characteristic parameter of the not-lost intra-channel audio data + signal characteristic parameter of the not-lost inter-channel audio data) / 2, after which the audio decoding device can determine the lost audio data of channel 3.
  • The process of recovering the lost audio data according to the comprehensive compensation parameter is illustrated here with only a few examples; it can be understood that in practical applications there may be more ways of using the comprehensive compensation parameter to recover the lost audio data, which is not limited here.
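  • The recovery rule quoted above, as a one-line sketch; reading the trailing "II" of the extracted text as "/ 2" is an assumption about the garbled original.

```python
def recovered_feature(comprehensive, fc_intra, fc_inter):
    """comprehensive compensation parameter + (intra-channel feature + inter-channel feature) / 2"""
    return comprehensive + (fc_intra + fc_inter) / 2.0
```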
  • Step 408: if there is no correlation between channel 3 and any channel corresponding to the not-lost audio data, the audio data transmitted by channel 3 and the audio data transmitted by all of those channels come from different sound sources, so there is essentially no correlation between the lost audio data of channel 3 and any of the not-lost audio data.
  • In this embodiment, the audio decoding device may use a preset packet loss concealment algorithm to perform intra-channel packet loss concealment on the audio data that channel 3 lost in audio frame 3; the specific process is similar to conventional packet loss concealment and is not described here again.
  • In the embodiment of the present invention, when it is detected that packet loss has occurred and the audio data of M channels (M being an integer greater than 0 and less than N) among the N channels (N being an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if the audio data of the other channels belonging to the same audio frame as the lost audio data is not lost, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame can be acquired, and when there is correlation between the first channel and the second channel, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel. Because the not-lost audio data belongs to the same audio frame but to different channels, the correlation between different channels can be exploited during packet loss concealment, thereby improving its effect in an N-channel system.
  • For ease of understanding, the above decoding process is described below with the data flow shown in FIG. 6. Referring to FIG. 5, this example is applied to a 2-channel system in which the audio data of the left channel is Li and the audio data of the right channel is Ri.
  • The audio encoding device may combine the audio data Li of the left channel of the i-th audio frame and the audio data Ri+1 of the right channel of the (i+1)-th audio frame into one data packet, and combine the audio data Li+1 of the left channel of the (i+1)-th audio frame and the audio data Ri of the right channel of the i-th audio frame into another data packet.
  • As shown in FIG. 5, the audio encoding device packs the left-channel audio data L1 of the first audio frame with the right-channel audio data R2 of the second audio frame to obtain data packet 1, packs the left-channel audio data L2 of the second audio frame with the right-channel audio data R1 of the first audio frame to obtain data packet 2, and so on: L3 and R4 are packed into data packet 3, and L4 and R3 into data packet 4.
  • The audio encoding device can assign a unique identifier to each data packet, for example 00 for packet 1, 01 for packet 2, 10 for packet 3, and 11 for packet 4.
  • After packing, the data packets are transmitted to the audio decoding device. Suppose data packet 3 is lost during transmission; the audio data decoded by the audio decoding device is then as shown in FIG. 5, with L3 and R4 missing. A sketch of this packing scheme follows.
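```python
def interleave_pack(left, right):
    """Interleaved packing of FIG. 5 (sketch): packet k carries L_i with R_(i+1),
    the next packet carries L_(i+1) with R_i, so losing one packet never removes
    both halves of the same audio frame.  The dict layout and integer identifiers
    are illustrative choices, not the patent's bit-stream format."""
    packets = []
    for i in range(0, len(left) - 1, 2):
        packets.append({"id": len(packets), "L": left[i],     "R": right[i + 1]})
        packets.append({"id": len(packets), "L": left[i + 1], "R": right[i]})
    return packets

# L1..L4 and R1..R4 as in FIG. 5; losing the packet with id 2 loses only L3 and R4.
pkts = interleave_pack(["L1", "L2", "L3", "L4"], ["R1", "R2", "R3", "R4"])
print([(p["id"], p["L"], p["R"]) for p in pkts])
# -> [(0, 'L1', 'R2'), (1, 'L2', 'R1'), (2, 'L3', 'R4'), (3, 'L4', 'R3')]
```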
  • Suppose the identifier of the first data packet received by the audio decoding device is 00. The audio decoding device deinterleaves the received data packet into the left and right channels and decodes them separately: decoding the left channel of this packet yields audio data L1 and decoding the right channel yields audio data R2, and the audio decoding device can buffer L1 and R2.
  • Suppose the identifier of the second data packet received by the audio decoding device is 01. The audio decoding device again deinterleaves the received data packet into the left and right channels and decodes them separately: decoding the left channel of this packet yields L2 and decoding the right channel yields R1. Combining these with the previously buffered L1 and R2, the audio decoding device obtains the audio data of two audio frames, audio frame 1 (L1 and R1) and audio frame 2 (L2 and R2).
  • The audio decoding device may extract the signal characteristic parameters of L1; the specific process is similar to that described in the foregoing method embodiments and is not described here again.
  • The audio decoding device can likewise make the corresponding determination according to the signal characteristic parameters of L2; the specific process is similar to that described in the foregoing method embodiment and is not described here again.
  • Suppose the identifier of the third data packet received by the audio decoding device is 11. The audio decoding device deinterleaves the received data packet into the left and right channels and decodes them separately: decoding the left channel of this packet yields L4 and decoding the right channel yields R3, and the audio decoding device can buffer L4 and R3.
  • According to the identifiers of the data packets, the audio decoding device can learn that the data packet with identifier 10 has been lost, and according to the audio data obtained after decoding, that the lost audio data is L3 and R4.
  • Taking L3 as an example, the audio decoding device can acquire the audio data R3 of the right channel that belongs to the same audio frame as L3, obtain the signal characteristic parameters of R3, and then determine whether the left channel and the right channel are correlated.
  • If they are correlated, the signal characteristic parameters of R3 are used, combined with the signal characteristic parameters of L2 and L4, to perform packet loss concealment on L3; the specific process is similar to that described in the foregoing method embodiments and is not described here again.
  • If they are not correlated, the signal characteristic parameters of L2 and L4 are used to perform intra-channel packet loss concealment on L3; the specific process is similar to that described in the foregoing method embodiments and is not described here again.
  • The audio decoding device can perform packet loss concealment on R4 in a similar manner, and the specific process is not described here.
  • Referring to FIG. 7, an embodiment of the audio decoding device of the present invention includes:
  • the receiving unit 701 is configured to receive a data packet.
  • the decoding unit 702 is configured to decode the not-lost audio data of the N-M channels corresponding to an audio frame when it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to that audio frame is lost, provided that the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, where M is an integer greater than 0 and less than N;
  • the extracting unit 703 is configured to extract the signal characteristic parameters of the not-lost audio data, corresponding to the audio frame, of the N-M channels obtained by the decoding unit 702;
  • the correlation determining unit 704 is configured to determine whether there is correlation between the first channel and the second channel, where the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost; if so, the first packet loss concealment unit 705 is triggered to perform the corresponding operation, and if not, the second packet loss concealment unit 706 is triggered to perform the corresponding operation;
  • the first packet loss concealment unit 705 is configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter, extracted by the extracting unit 703, of the not-lost audio data of the second channel corresponding to the audio frame;
  • the second packet loss concealment unit 706 is configured to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
  • Referring to FIG. 8, another embodiment of the audio decoding device of the present invention includes:
  • a receiving unit 801, configured to receive data packets;
  • a decoding unit 802, configured to decode the not-lost audio data of the N-M channels corresponding to an audio frame when it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to that audio frame is lost, provided that the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, where M is an integer greater than 0 and less than N;
  • an extracting unit 803, configured to extract the signal characteristic parameters of the not-lost audio data, corresponding to the audio frame, of the N-M channels obtained by the decoding unit 802;
  • a correlation determining unit 804, configured to determine whether there is correlation between the first channel and the second channel, where the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost; if so, the first packet loss concealment unit 805 is triggered to perform the corresponding operation, and if not, the second packet loss concealment unit 806 is triggered to perform the corresponding operation;
  • a first packet loss concealment unit 805, configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter, extracted by the extracting unit 803, of the not-lost audio data of the second channel corresponding to the audio frame;
  • a second packet loss concealment unit 806, configured to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
  • The correlation determining unit 804 in this embodiment may further include:
  • a numerical calculation module 8041, configured to use a correlation function to calculate the correlation value between the audio data already received on the first channel and the audio data already received on the second channel that belongs to the same audio frames; and
  • a determining module 8042, configured to determine whether there is correlation between the first channel and the second channel according to the correlation value calculated by the numerical calculation module.
  • Alternatively, the correlation determining unit 804 in this embodiment may further include:
  • an obtaining module 8043, configured to acquire the signal characteristic parameters of the audio data already received on the first channel and of the audio data already received on the second channel that belongs to the same audio frames as the audio data already received on the first channel; and
  • a determining module 8044, configured to determine whether the difference between the signal characteristic parameters of the audio data already received on the first channel and the signal characteristic parameters of the audio data already received on the second channel belonging to the same audio frames is smaller than a preset value; if so, to determine that there is correlation between the first channel and the second channel, and if not, to determine that there is no correlation between the first channel and the second channel.
  • The first packet loss concealment unit 805 in this embodiment may further include:
  • a calculation module 8051, configured to calculate, according to the intra-channel packet loss concealment algorithm, the time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame;
  • a correction module 8052, configured to correct the time compensation parameter calculated by the calculation module 8051 by using the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter; and
  • a recovery module 8053, configured to recover the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter obtained by the correction module 8052.
  • The first packet loss concealment unit 805 in this embodiment may also be specifically configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame and the signal characteristic parameters of the not-lost audio data of at least one related channel corresponding to the audio frame, where a related channel is a channel, among the N-M channels corresponding to the not-lost audio data and other than the second channel, that has correlation with the first channel.
  • In this embodiment, the receiving unit 801 can receive data packets from the audio encoding device. After the audio encoding device completes encoding, the data packets are sent to the audio decoding device. The sending process may be that the audio encoding device sends the data packets directly to the audio decoding device, or that the audio encoding device sends the data packets to a forwarding device, which then sends them to the audio decoding device.
  • Since UDP provides a transaction-oriented, simple, unreliable message transfer service, packet loss during the transmission of audio and image data is unavoidable. When packet loss is detected, the audio decoding device can determine the audio data loss situation of each channel.
  • If the audio decoding device learns that not all of the N channels lost audio data in the same audio frame, but only the audio data of M of the channels in that audio frame was lost, and the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, the decoding unit 802 can decode the not-lost audio data of that audio frame, and the extracting unit 803 can then obtain the signal characteristic parameters of that audio data.
  • In this embodiment, the signal characteristic parameter may specifically be the signal pitch period and/or the signal energy. It can be understood that in practical applications the signal characteristic parameter may also be represented by other parameters, for example the signal tone, which is not limited herein.
  • The correlation determining unit 804 may determine whether there is correlation between the first channel and the second channel, where the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost. The specific determination manner of the correlation determining unit 804 is similar to that described in step 404 of the embodiment shown in FIG. 4 and is not described here again.
  • If there is correlation, the calculation module 8051 in the first packet loss concealment unit 805 may first calculate, according to the intra-channel packet loss concealment algorithm, the time compensation parameter of the lost audio data of the first channel corresponding to the audio frame; the correction module 8052 may then correct the time compensation parameter using the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame to obtain the comprehensive compensation parameter; and the recovery module 8053 can recover the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter.
  • The above describes the first packet loss concealment unit 805 performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame using the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame. It should be noted that the first packet loss concealment unit 805 can also perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameters of the not-lost audio data, corresponding to the audio frame, of several channels that have correlation with the first channel; the specific process is similar to that described in steps 405 to 407 of the embodiment shown in FIG. 4 and is not described here again.
  • If there is no correlation, the second packet loss concealment unit 806 can use a preset packet loss concealment algorithm to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame; the specific process is similar to conventional packet loss concealment and is not described here again.
  • In the embodiment of the present invention, when it is detected that packet loss has occurred and the audio data of M channels (M being an integer greater than 0 and less than N) among the N channels (N being an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if the audio data of the other channels belonging to the same audio frame as the lost audio data is not lost, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame can be acquired, and when there is correlation between the first channel and the second channel, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel. Because the not-lost audio data belongs to the same audio frame but to different channels, the correlation between different channels can be exploited during packet loss concealment, thereby improving its effect in an N-channel system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)
  • Stereophonic System (AREA)

Abstract

An audio decoding method and a corresponding audio decoding device. The audio decoding method includes: receiving data packets; when it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to a certain audio frame is lost, if the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, decoding the not-lost audio data and extracting the signal characteristic parameters obtained after decoding; determining whether there is correlation between a first channel and a second channel; and if so, performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the second channel, so as to improve the effect of packet loss concealment during audio decoding.

Description

Audio decoding method and device

This application claims priority to Chinese Patent Application No. 201110147225.6, filed with the Chinese Patent Office on June 2, 2011 and entitled "Audio decoding method and device", the entire contents of which are incorporated herein by reference.

Technical Field

The present invention relates to the field of data processing, and in particular to an audio decoding method and device.

Background Art

Video communication technology enables voice, images, data and other information to be transmitted together over long distances, so that people communicating remotely can hear the other party's voice while watching the other party's moving images and shared film content, greatly enhancing the sense of intimacy and presence of remote communication.

A video conferencing system generally consists of a video conferencing server (taking a multipoint control unit (MCU) as an example) and terminals. In a video conference, each terminal corresponds to a conference site; each terminal collects the sound and images of its site, encodes them, and sends them to the MCU. The MCU processes the sound and images in a certain way (sound mixing, image forwarding, or composing a multi-picture layout) and sends the processed sound and images to each terminal; the terminals decode and output the sound and images of the remote sites, achieving the purpose of remote communication.

Existing video conferencing systems generally use the User Datagram Protocol (UDP) to transmit audio and image data. Since UDP provides a transaction-oriented, simple, unreliable message transfer service, packet loss during the transmission of audio and image data is unavoidable.

In the prior art, after the encoding terminal completes audio encoding, it sends the encoded data packets to the decoding terminal.

After receiving a data packet, the decoding terminal strips off the Real-time Transport Protocol (RTP) header, deinterleaves the payload according to the multi-channel bit-stream format, and decodes the audio data of each channel.

If packet loss occurs, the decoding terminal can determine the channels to which the lost audio data belongs, and for each channel it performs independent packet loss concealment on the lost audio data of that channel according to the decoded audio data within the channel, i.e. intra-channel packet loss concealment, to obtain the final output signal. Referring to FIG. 1, in the prior-art solution, if data packet P2 is lost while data packets P1 and P3 are not, the decoding terminal can determine that the lost audio data belongs to the left channel (L) and the right channel (R); for the left channel it uses audio data L1 in packet P1 and/or audio data L3 in packet P3 to conceal the loss of audio data L2 in packet P2, and it uses audio data R1 in packet P1 and/or audio data R3 in packet P3 to conceal the loss of audio data R2 in packet P2.

However, in the above prior-art solution, when the decoding terminal performs packet loss concealment it conceals the loss only within each channel; for a multi-channel system, such processing limits the effectiveness of the packet loss concealment.
Summary of the Invention

Embodiments of the present invention provide an audio decoding method and device that can improve the effect of packet loss concealment in a decoding system with N channels (N being greater than or equal to 2).

The audio decoding method provided by the embodiment of the present invention is applied to an audio decoding system that includes N channels, N being an integer greater than or equal to 2, and includes:

receiving data packets; when it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to a certain audio frame is lost, if the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, decoding the not-lost audio data of the N-M channels corresponding to the audio frame, M being an integer greater than 0 and less than N; extracting, from the decoded data, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame; determining whether there is correlation between a first channel and a second channel, the first channel being any one of the M channels whose audio data in the audio frame is lost and the second channel being any one of the N-M channels whose audio data in the audio frame is not lost; if so, performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame; if not, performing intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.

The audio decoding device provided by the embodiment of the present invention is configured to decode the audio data of N channels, N being an integer greater than or equal to 2, and includes: a receiving unit, configured to receive data packets; a decoding unit, configured to decode the not-lost audio data of the N-M channels corresponding to an audio frame when it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to that audio frame is lost, provided that the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, M being an integer greater than 0 and less than N; an extracting unit, configured to extract the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame obtained by the decoding unit; a correlation determining unit, configured to determine whether there is correlation between a first channel and a second channel, the first channel being any one of the M channels whose audio data in the audio frame is lost and the second channel being any one of the N-M channels whose audio data in the audio frame is not lost, and if so, to trigger a first packet loss concealment unit to perform the corresponding operation, and if not, to trigger a second packet loss concealment unit to perform the corresponding operation; the first packet loss concealment unit, configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter, extracted by the extracting unit, of the not-lost audio data of the second channel corresponding to the audio frame; and the second packet loss concealment unit, configured to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.

It can be seen from the above technical solutions that the embodiments of the present invention have the following advantages:

In the embodiments of the present invention, when it is detected that packet loss has occurred and the audio data of M channels (M being an integer greater than 0 and less than N) among the N channels (N being an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame can be acquired, and when there is correlation between the first channel and the second channel, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame. Because the not-lost audio data belongs to the same audio frame but to different channels, the correlation between different channels can be exploited when the audio decoding device performs packet loss concealment, thereby improving the effect of packet loss concealment in an N-channel system.

Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a data packet in the prior art;

FIG. 2 is a schematic diagram of an embodiment of an audio decoding method according to the present invention;

FIG. 3 is a schematic diagram of audio data division according to the present invention;

FIG. 4 is a schematic diagram of another embodiment of an audio decoding method according to the present invention;

FIG. 5 is a schematic diagram of packet loss during transmission of audio data according to the present invention;

FIG. 6 is a schematic diagram of a data flow of audio decoding according to the present invention;

FIG. 7 is a schematic diagram of an embodiment of an audio decoding device according to the present invention;

FIG. 8 is a schematic diagram of another embodiment of an audio decoding device according to the present invention.

Detailed Description

Embodiments of the present invention provide an audio decoding method and device that can improve the effect of packet loss concealment in an audio decoding system with N channels (N being an integer greater than or equal to 2).

Referring to FIG. 2, an embodiment of the audio decoding method of the present invention includes:
201. Receive data packets.

In this embodiment, the audio decoding device may be configured to decode the audio data of N channels, N being an integer greater than or equal to 2.

After the audio encoding device completes encoding, the data packets are sent over the network to the audio decoding device.

The sending process may be that the audio encoding device sends the data packets directly to the audio decoding device, or that the audio encoding device sends the data packets to a forwarding device, which then sends them to the audio decoding device.

For ease of understanding, some parameters used in the audio encoding and decoding process are briefly explained first. Referring to FIG. 3, which shows the audio data streams of N channels, the audio data of the N channels belonging to the same unit duration (for example C1i, C2i, ..., CNi) can be regarded as one audio frame, i.e. audio frame 1, audio frame 2, ..., audio frame i as divided in FIG. 3, where i is the sequence number of the audio frame and the value of i is related to the duration of the audio data.

For convenience of the following description, the audio data of one unit length may be called a segment of audio data. The unit length can be determined according to the actual application environment and also represents the length of one audio frame, for example 5 milliseconds or 10 milliseconds. Each audio frame can be regarded as the collection of the audio data of the different channels belonging to the same unit duration.

It should be noted that the audio data of the N channels is divided into different audio frames in time order; each audio frame has a fixed length and contains N segments of audio data, each segment corresponding to one channel, where N is the number of channels and N is an integer greater than or equal to 2. For example, for a 3-channel system, each audio frame contains 3 segments of audio data, corresponding to the left channel, the center channel, and the right channel respectively.
202. When it is detected that packet loss has occurred and the audio data of M of the N channels corresponding to a certain audio frame is lost, if the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, decode the not-lost audio data of the N-M channels corresponding to the audio frame.

Since UDP provides a transaction-oriented, simple, unreliable message transfer service, packet loss during the transmission of audio and image data is unavoidable. When packet loss is detected, the audio decoding device can determine the audio data loss situation of each channel.

If the audio data of M channels corresponding to a certain audio frame is lost, and the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, the audio data belonging to that audio frame has not been completely lost; in this case, the audio decoding device can decode the not-lost audio data of the N-M channels corresponding to the audio frame.

In this embodiment, M is an integer greater than 0 and less than N.

203. Extract the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame obtained after decoding.

After decoding the not-lost audio data of the N-M channels corresponding to the audio frame, the audio decoding device can obtain the signal characteristic parameters of that audio data.

In this embodiment, the signal characteristic parameter may specifically be the signal pitch period and/or the signal energy. It can be understood that in practical applications the signal characteristic parameter may also be represented by other parameters, for example the signal tone, which is not limited herein.

In this embodiment, the manner in which the audio decoding device extracts the signal characteristic parameters of the not-lost audio data of the audio frame in the N-M channels obtained after decoding is known in the prior art and is not described here again.
204. Determine whether there is correlation between the first channel and the second channel; if so, execute step 205, otherwise execute step 206.

In this embodiment, the first channel is any one of the M channels whose audio data in the audio frame is lost, and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost.

It should be noted that when the audio data transmitted by different channels comes from the same sound source, those channels are correlated; when the audio data transmitted by different channels comes from different sound sources, those channels are not correlated.

205. Perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame.

If there is correlation between the first channel and the second channel, the audio data transmitted by the first channel and the audio data transmitted by the second channel come from the same sound source, so there is also a strong correlation between the lost audio data of the first channel and the not-lost audio data of the second channel.

In this embodiment, when performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame, the audio decoding device may refer to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame, i.e. it uses that signal characteristic parameter to conceal the lost audio data of the first channel corresponding to the audio frame.

It should be noted that if, besides the second channel, the channels corresponding to other not-lost audio data also have correlation with the first channel, the audio decoding device may also perform packet loss concealment on the lost audio data of the first channel according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame together with the signal characteristic parameters of the not-lost audio data of at least one related channel corresponding to the audio frame.

A related channel is a channel, among the N-M channels corresponding to the not-lost audio data and other than the second channel, that has correlation with the first channel.

206. Perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.

If there is no correlation between the first channel and the second channel, the audio data transmitted by the first channel and the audio data transmitted by the second channel do not come from the same sound source, so there is essentially no correlation between the lost audio data of the first channel and the not-lost audio data of the second channel.

In this embodiment, the audio decoding device may use a preset packet loss concealment algorithm to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame; the specific process is similar to conventional packet loss concealment and is not described here again.

In the embodiment of the present invention, when it is detected that packet loss has occurred and the audio data of M channels (M being an integer greater than 0 and less than N) among the N channels (N being an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if the audio data of the channels other than the M channels, belonging to the same audio frame as the lost audio data, is not lost, the signal characteristic parameters of the not-lost audio data of the N-M channels corresponding to the audio frame can be acquired, and when there is correlation between the first channel and the second channel, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal characteristic parameter of the not-lost audio data of the second channel corresponding to the audio frame. Because the not-lost audio data belongs to the same audio frame but to different channels, the correlation between different channels can be exploited when the audio decoding device performs packet loss concealment, thereby improving the effect of packet loss concealment in an N-channel system.
For ease of understanding, the audio decoding method of the present invention is described in detail below with a specific example. Referring to FIG. 4, another embodiment of the audio decoding method of the present invention includes:
401. Receive data packets.
Step 401 in this embodiment is similar to step 201 in the embodiment shown in FIG. 2, and is not described here again.
402. When packet loss is detected and the audio data of M of the N channels corresponding to a certain audio frame is lost, if the audio data of the other channels that belongs to the same audio frame as the lost audio data is not lost, decode the un-lost audio data corresponding to the audio frame on the N-M channels.
Because UDP provides a transaction-oriented, simple and unreliable message delivery service, packet loss during the transmission of audio and image data is inevitable. When packet loss is detected, the audio decoding apparatus can determine the audio data loss situation of each channel.
Each data packet has a corresponding identifier. For example, the first data packet sent by the audio encoding apparatus is data packet 1 with identifier 000, the second is data packet 2 with identifier 001, the third is data packet 3 with identifier 010, and so on.
The audio decoding apparatus may determine whether packet loss has occurred according to the identifiers of the received data packets. For example, the audio encoding apparatus numbers the data packets sequentially, starting from 000 and followed by 001, 010, 011, and so on. Assume that the identifier of the first data packet received by the audio decoding apparatus is 000 and the identifier of the second is 010. Considering that different data packets may take different routes, if the data packet with identifier 001 is still not received after waiting for a period of time, the audio decoding apparatus can determine through detection that packet loss has occurred and that the lost data packet is data packet 2.
It is understandable that, in practical applications, the audio decoding apparatus may also use manners other than the foregoing to determine whether packet loss has occurred and which data packets are lost, which is not limited herein.
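A minimal sketch of this sequence-number-based detection is shown below; the helper name and the assumption that the wait period has already elapsed are simplifications for illustration only:

```python
def detect_lost_packets(received_ids, expected_count):
    """Return the sequence numbers that were never received.

    received_ids: sequence numbers of the packets that arrived (possibly out of order).
    expected_count: how many packets the encoder sent in this window.
    """
    expected = set(range(expected_count))
    return sorted(expected - set(received_ids))

# The example from the text: packets 000 and 010 arrive, 001 never does,
# so data packet 2 (sequence number 1) is declared lost after the wait period.
assert detect_lost_packets([0b000, 0b010], expected_count=3) == [1]
```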
Pieces of audio data of unit length on different channels within the same unit of time form one audio frame. Therefore, after detecting that packet loss has occurred, the audio decoding apparatus may first query which channels have lost packets. If all channels have lost audio data in the same audio frame, a certain audio frame has been completely lost, and the audio decoding apparatus may perform intra-channel packet loss concealment on the lost audio data of each channel according to a preset packet loss concealment algorithm. The specific concealment process is similar to the conventional packet loss concealment process and is not described here again.
If the audio decoding apparatus learns that not all of the N channels have lost audio data in a certain audio frame, but only the audio data of M of the channels is lost, and the audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, the audio decoding apparatus may decode the un-lost audio data of the audio frame.
In this embodiment, M is an integer greater than 0 and smaller than N.
403. Extract signal feature parameters of the un-lost audio data corresponding to the audio frame on the N-M channels obtained after decoding.
After decoding the un-lost audio data corresponding to the audio frame on the N-M channels, the audio decoding apparatus can obtain the signal feature parameters of this audio data through extraction.
In this embodiment, the specific signal feature parameter may be the signal pitch period and/or the signal energy. It is understandable that, in practical applications, the signal feature parameter may also be represented by other parameters besides the foregoing two, for example, signal tone, which is not limited herein.
404. Judge whether the first channel and the second channel are correlated; if so, perform step 405; if not, perform step 408.
The first channel is any one of the M channels whose audio data in the audio frame is lost, and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost. In this embodiment, to determine whether channels are correlated, the audio decoding apparatus may analyze the historical audio data of each channel. The specific analysis manners may include:
(1) Analysis using the audio data:
The audio decoding apparatus may use a correlation function to calculate the correlation value between the audio data already received on the first channel and the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel.
The audio decoding apparatus judges, according to the correlation value, whether the first channel and the second channel are correlated. Specifically, if the correlation value approaches 1, the first channel and the second channel are correlated; if the correlation value approaches 0, the first channel and the second channel are not correlated.
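One common way to obtain such a correlation value is a normalized cross-correlation over previously received segments of the two channels. The following sketch is only an assumed illustration of this step (the 0.8 threshold is an assumption), not the patent's prescribed correlation function:

```python
import math

def correlation_value(seg_a, seg_b):
    """Normalized correlation of two equal-length sample sequences, in [-1, 1]."""
    num = sum(a * b for a, b in zip(seg_a, seg_b))
    den = math.sqrt(sum(a * a for a in seg_a) * sum(b * b for b in seg_b))
    return num / den if den else 0.0

def channels_correlated(history_a, history_b, threshold=0.8):
    """Treat the channels as correlated when the value is close to 1 (threshold assumed)."""
    return correlation_value(history_a, history_b) >= threshold

# Two channels carrying the same source (one simply attenuated) give a value near 1.
left = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
right = [0.8 * x for x in left]
assert channels_correlated(left, right)
```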
(2) Analysis using the signal feature parameters of the audio data:
The audio decoding apparatus may obtain the signal feature parameter of the audio data already received on the first channel and the signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel.
After obtaining the signal feature parameters of the audio data, the audio decoding apparatus may determine the correlation between the first channel and the second channel according to the signal feature parameters. Specifically:
The audio decoding apparatus may judge whether the signal feature parameter of the audio data already received on the first channel and the signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel satisfy a preset correlation condition. If they do, it is determined that the first channel and the second channel are correlated; if they do not, it is determined that the first channel and the second channel are not correlated.
In this embodiment, the preset correlation condition may mean that the difference between the signal feature parameter of the audio data already received on the first channel and the signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel is smaller than a preset value. If the difference is smaller than the preset value, the two signal feature parameters satisfy the preset correlation condition; otherwise, they do not.
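As a short illustration of this threshold test (the pitch-period values and the preset value below are assumed for the example only):

```python
def satisfies_correlation_condition(param_first, param_second, preset_value):
    """Preset correlation condition: the parameter difference is smaller than a preset value."""
    return abs(param_first - param_second) < preset_value

# E.g. pitch periods of 100 Hz and 110 Hz with a preset value of 20 Hz count as correlated,
# while 100 Hz against 170 Hz does not.
assert satisfies_correlation_condition(100, 110, preset_value=20)
assert not satisfies_correlation_condition(100, 170, preset_value=20)
```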
The foregoing are only several manners in which the audio decoding apparatus determines the correlation between the first channel and the second channel in this embodiment. It is understandable that, in practical applications, the audio decoding apparatus may determine the correlation between the first channel and the second channel in other manners. For example, the audio encoding apparatus may notify the audio decoding apparatus of the correlation between channels before or while sending the data packets, or the correlation between channels may be preset directly in the audio decoding apparatus. The specific manners are not described here again.
In this embodiment, the audio decoding apparatus may determine the correlation between channels in the foregoing manners. For example, assume there are 4 channels in total, channel 1, channel 2, channel 3, and channel 4. The correlation between the channels determined by the audio decoding apparatus may be: "channel 1, channel 2, and channel 3 are correlated with one another; channel 1 and channel 4 are not correlated; channel 2 and channel 4 are not correlated; channel 3 and channel 4 are not correlated";
or may be: "channel 1 and channel 3 are correlated; channel 2 and channel 4 are correlated; channel 1 and channel 2 are not correlated; channel 1 and channel 4 are not correlated; channel 3 and channel 2 are not correlated; channel 3 and channel 4 are not correlated".
It should be noted that step 404 in this embodiment is the process in which the audio decoding apparatus determines the correlation between the first channel and the second channel. This process is not limited to being performed after step 403; it may be performed periodically, for example, every 10 seconds, every 20 seconds, or at another interval, so that the correlation between channels can be updated in real time.
405. Calculate, according to an intra-channel packet loss concealment algorithm, the time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame.
If the audio decoding apparatus determines that the first channel and the second channel are correlated, the audio decoding apparatus may first calculate, according to the intra-channel packet loss concealment algorithm, the time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame. Specifically:
Channel 3 is the channel corresponding to the lost audio data of the audio frame (that is, the first channel). The audio decoding apparatus may obtain from channel 3 the signal feature parameter of the audio data most recently received successfully before the current audio frame, and perform a time-weighted operation on that signal feature parameter to obtain the time compensation parameter. A specific weighting operation may be:
time compensation parameter = (a * length / (delta * length)) * fc1;
where a is the temporal weighting coefficient, length is the length of one audio frame, delta is the difference between the sequence number of the audio frame of the used un-lost audio data and the sequence number of the audio frame of the lost audio data, and fc1 is the signal feature parameter of the un-lost audio data within the channel.
For example, the audio decoding apparatus determines that the current audio frame of channel 3 is audio frame 3, the audio decoding apparatus received the audio data of channel 3 in audio frame 1, the signal pitch period of that audio data is 100 Hz, and the length of each audio frame is 30 milliseconds. The time compensation parameter can then be calculated as:
(a * 30 / (30 + 30 + 30)) * 100;
where a is the temporal weighting coefficient, which is related to parameters such as the signal pitch period and the audio frame length. The time compensation parameter represents the intra-channel compensation of the signal pitch period of the lost audio data.
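The worked numbers above can be reproduced directly. The sketch below only restates the formula from the text; the value 0.5 for the temporal weighting coefficient a is the assumption made later in this embodiment, not a prescribed constant:

```python
def time_compensation(a, frame_length_ms, delta, fc1):
    """Intra-channel time compensation parameter: (a * length / (delta * length)) * fc1."""
    return (a * frame_length_ms / (delta * frame_length_ms)) * fc1

# Channel 3, current frame 3, last good data in frame 1 with a 100 Hz pitch period,
# 30 ms frames, and a = 0.5 as assumed later in the text:
t_comp = time_compensation(a=0.5, frame_length_ms=30, delta=3, fc1=100)
print(round(t_comp, 2))  # 16.67
```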
It should be noted that this embodiment uses only one example to describe the process of calculating the time compensation parameter corresponding to the lost audio data according to the intra-channel packet loss concealment algorithm. It is understandable that, in practical applications, there may be more manners of calculating the time compensation parameter, which are common knowledge of persons skilled in the art and are not limited herein.
406. Correct the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter.
After the time compensation parameter is calculated, the audio decoding apparatus may correct the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter. Assume that the channel that has not lost audio data in audio frame 3 is channel 1 (that is, the second channel), and it is known from step 405 that channel 1 and channel 3 are correlated, that is, the first channel and the second channel are correlated. The specific correction to obtain the comprehensive compensation parameter may be:
comprehensive compensation parameter = signal feature parameter of the un-lost audio data * spatial weighting coefficient b * time compensation parameter.
The spatial weighting coefficient b is related to the degree of correlation between the channels. It should be noted that, in practical applications, the audio decoding apparatus may also correct the time compensation parameter in other manners by using the signal feature parameter of the audio data of channel 1 that is not lost in audio frame 3, which is not limited herein.
In step 406 of this embodiment, the time compensation parameter may be corrected by using the signal feature parameter of the audio data of channel 1 that is not lost in audio frame 3. It is understandable that the audio decoding apparatus may also directly perform intra-channel and inter-channel weighting to obtain the comprehensive compensation parameter. For example, assuming that the signal pitch period of the un-lost audio data is 150 Hz, the comprehensive compensation parameter may be:
comprehensive compensation parameter = x * time compensation parameter + y * (b * fc2)
where x is the time compensation weight, y is the spatial compensation weight, b is the spatial weighting coefficient, and fc2 is the signal feature parameter of the un-lost inter-channel audio data.
With reference to the foregoing example, the comprehensive compensation parameter in this embodiment may be:
x * ((a * 30 / (30 + 30 + 30)) * 100) + y * (b * 150).
Assuming x = 0.3, y = 0.7, a = 0.5, and b = 0.1, the comprehensive compensation parameter is 5 + 10.5 = 15.5.
It should be noted that the foregoing describes the process of correcting the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame to obtain the comprehensive compensation parameter. In practical applications, if, besides the second channel, other channels corresponding to un-lost audio data are also correlated with the first channel, the time compensation parameter may also be corrected by using the signal feature parameters of the un-lost audio data, corresponding to the audio frame, of multiple channels correlated with the first channel, to obtain the comprehensive compensation parameter. The specific process may be:
Correct the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame and the signal feature parameter of the un-lost audio data of at least one correlated channel corresponding to the audio frame, to obtain the comprehensive compensation parameter.
The correlated channel is a channel, other than the second channel, that is correlated with the first channel among the N-M channels corresponding to the un-lost audio data.
The specific correction to obtain the comprehensive compensation parameter may be:
comprehensive compensation parameter = Σj (mj * signal feature parameter of the un-lost audio data of the j-th channel * b * time compensation parameter),
where i is the number of channels that are correlated with the first channel and participate in correcting the comprehensive compensation parameter, j denotes the j-th channel among these i channels, mj is the correlation weighting coefficient of the j-th channel, and b is the spatial weighting coefficient.
Here, i is an integer greater than or equal to 1 and smaller than or equal to N-M, j is an integer greater than or equal to 1 and smaller than or equal to i, and mj and the spatial weighting coefficient b are related to the degree of correlation between the channels.
Assume that the channels that have not lost audio data in audio frame 3 are channel 1 and channel 2, and it is known from step 405 that channel 1, channel 2, and channel 3 are pairwise correlated. The comprehensive compensation parameter may then be:
m1 * fc01 * b * time compensation parameter + m2 * fc02 * b * time compensation parameter,
where m1 is the correlation weighting coefficient of channel 1, fc01 is the signal feature parameter of the audio data of channel 1 that is not lost in audio frame 3, m2 is the correlation weighting coefficient of channel 2, and fc02 is the signal feature parameter of the audio data of channel 2 that is not lost in audio frame 3.
The specific values of m1 and m2 are related to the degree of correlation between the channels. For example, if the distance between the audio capture device corresponding to channel 1 and the audio capture device corresponding to channel 3 is smaller than the distance between the audio capture device corresponding to channel 2 and the audio capture device corresponding to channel 3, the correlation between channel 1 and channel 3 is stronger, and m1 may be set greater than m2, and vice versa. It is understandable that, in practical applications, there are more manners and rules for setting the correlation weighting coefficients, which are not limited herein.
This embodiment uses only the signal feature parameters of the un-lost audio data of two channels as an example for description. It is understandable that the signal feature parameters of the un-lost audio data of more correlated channels that are correlated with the first channel may also be used for corresponding processing in a similar manner, which is not described here again.
In step 406 of this embodiment, the time compensation parameter may be corrected by using the signal feature parameters of the audio data of channel 1 and channel 2 that is not lost in audio frame 3. It is understandable that the audio decoding apparatus may also directly perform intra-channel and inter-channel weighting to obtain the comprehensive compensation parameter. For example, assuming that the signal pitch period of the un-lost audio data of channel 1 is 150 Hz and the signal pitch period of the un-lost audio data of channel 2 is 170 Hz, the comprehensive compensation parameter may be:
comprehensive compensation parameter = x * time compensation parameter + y * Σj (mj * signal feature parameter of the un-lost audio data of the j-th channel * b),
where x is the time compensation weight, y is the spatial compensation weight, and b is the spatial weighting coefficient.
With reference to the foregoing example, the comprehensive compensation parameter in this embodiment may be:
x * ((a * 30 / (30 + 30 + 30)) * 100) + y * (m1 * b * 150 + m2 * b * 170).
Assuming x = 0.3, y = 0.7, a = 0.5, b = 0.1, m1 = 0.6, and m2 = 0.4, the comprehensive compensation parameter is 5 + 11.06 = 16.06.
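The two weighted forms above (one correlated channel and several correlated channels) and their worked results, 15.5 and 16.06, can be checked with the following sketch; the weights x, y, a, b, m1, m2 are simply the values assumed in the text:

```python
def comprehensive_single(x, y, b, time_comp, fc2):
    """One correlated channel: x * time_comp + y * (b * fc2)."""
    return x * time_comp + y * (b * fc2)

def comprehensive_multi(x, y, b, time_comp, feature_params, weights):
    """Several correlated channels: x * time_comp + y * sum_j(m_j * fc_j * b)."""
    return x * time_comp + y * sum(m * fc * b for m, fc in zip(weights, feature_params))

t_comp = (0.5 * 30 / (30 + 30 + 30)) * 100  # time compensation parameter from step 405
print(round(comprehensive_single(0.3, 0.7, 0.1, t_comp, fc2=150), 2))                # 15.5
print(round(comprehensive_multi(0.3, 0.7, 0.1, t_comp, [150, 170], [0.6, 0.4]), 2))  # 16.06
```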
407. Recover the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter.
After calculating the comprehensive compensation parameter, the audio decoding apparatus may recover the audio data of channel 3 that has been lost in audio frame 3 according to the comprehensive compensation parameter.
Specifically, the signal feature parameter of the audio data of channel 3 that has been lost in audio frame 3 may be set to: comprehensive compensation parameter + (signal feature parameter of the un-lost intra-channel audio data + signal feature parameter of the un-lost inter-channel audio data) / 2.
If the comprehensive compensation parameter is 15.5, the signal pitch period of the audio data most recently received on channel 3 is 100 Hz, and the signal pitch period of the un-lost audio data on channel 1 is 150 Hz, the audio decoding apparatus can determine that the signal pitch period of the audio data of channel 3 lost in audio frame 3 is 15.5 + ((100 + 150) / 2) = 140.5 Hz.
After the signal pitch period of the audio data of channel 3 lost in audio frame 3 is calculated, the audio data of channel 1 that is not lost in audio frame 3 may be copied to audio frame 3 of channel 3, and the signal pitch period of the copied audio data is modified to 140.5 Hz while the other parameters remain unchanged, so that the audio data of channel 3 lost in audio frame 3 can be recovered.
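A short sketch of this recovery step follows; the copy-and-retune operation is abstracted away and the function name is illustrative only:

```python
def recovered_pitch(comp, intra_fc, inter_fc):
    """Lost-segment pitch = comprehensive parameter + (intra-channel fc + inter-channel fc) / 2."""
    return comp + (intra_fc + inter_fc) / 2

pitch = recovered_pitch(comp=15.5, intra_fc=100, inter_fc=150)
print(pitch)  # 140.5

# The decoder would then copy channel 1's segment of frame 3 into channel 3
# and re-synthesize it with the 140.5 Hz pitch period, leaving other parameters unchanged.
```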
It should be noted that when the signal feature parameters of the un-lost audio data of multiple correlated channels that are correlated with the first channel are used to recover the lost audio data of the first channel corresponding to the audio frame, the specific processing is similar and is not described here again.
It should also be noted that this embodiment uses only several examples to describe the process of recovering the lost audio data according to the comprehensive compensation parameter. It is understandable that, in practical applications, there may be more manners of recovering the lost audio data according to the comprehensive compensation parameter, which are not limited herein.
408. Perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the preset packet loss concealment algorithm.
If channel 3 is not correlated with any channel corresponding to un-lost audio data, the audio data transmitted on channel 3 and the audio data transmitted on all the channels corresponding to the un-lost audio data come from different sound sources, so there is basically no correlation between channel 3 and any of the un-lost audio data.
In this embodiment, the audio decoding apparatus may use the preset packet loss concealment algorithm to perform intra-channel packet loss concealment on the audio data of channel 3 that has been lost in audio frame 3. The specific process is similar to the conventional packet loss concealment process and is not described here again.
In the embodiment of the present invention, when packet loss is detected and audio data of M channels (M is an integer greater than 0 and smaller than N) among N channels (N is an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, signal feature parameters of the un-lost audio data corresponding to the audio frame on the N-M channels can be obtained; and when a first channel and a second channel are correlated, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame. Because the un-lost audio data belongs to the same audio frame but to different channels, the audio decoding apparatus can exploit the correlation between different channels during packet loss concealment, thereby improving the effect of packet loss concealment in an N-channel system.
For ease of understanding, the following uses a specific application scenario for description: Referring to FIG. 5, this embodiment is applied to a 2-channel system, where the audio data of the left channel is Li and the audio data of the right channel is Ri.
The audio encoding apparatus may pack the audio data Li of the left channel of the i-th audio frame and the audio data Ri+1 of the right channel of the (i+1)-th audio frame into one data packet;
and pack the audio data Li+1 of the left channel of the (i+1)-th audio frame and the audio data Ri of the right channel of the i-th audio frame into another data packet.
This embodiment uses 4 audio frames as an example for description. It is understandable that, in practical applications, there may be more audio frames, which is not limited herein.
The audio encoding apparatus packs the left-channel audio data L1 of the 1st audio frame and the right-channel audio data R2 of the 2nd audio frame to obtain data packet 1, and packs the left-channel audio data L2 of the 2nd audio frame and the right-channel audio data R1 of the 1st audio frame to obtain data packet 2; by analogy, the audio encoding apparatus packs L3 and R4 to obtain data packet 3, and packs L4 and R3 to obtain data packet 4.
The audio encoding apparatus may allocate a unique identifier to each data packet, for example, 00 to data packet 1, 01 to data packet 2, 10 to data packet 3, and 11 to data packet 4.
After packing, the audio encoding apparatus may send these data packets to the audio decoding apparatus. Assume that data packet 3 is lost during sending; the audio data obtained by the audio decoding apparatus through decoding is then as shown in FIG. 5, where L3 and R4 are lost.
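The interleaved packing used in this 2-channel example can be sketched as follows; the pairing rule follows FIG. 5, while the packet container itself (simple tuples with integer identifiers instead of the 2-bit codes in the text) is a hypothetical simplification:

```python
def pack_two_channel(left, right):
    """Interleave a 2-channel stream as in FIG. 5: one packet carries (Li, Ri+1),
    the next carries (Li+1, Ri). Returns (identifier, left_segment, right_segment) tuples."""
    packets = []
    for k in range(0, len(left), 2):
        packets.append((len(packets), left[k], right[k + 1]))   # e.g. data packet 1 = (L1, R2)
        packets.append((len(packets), left[k + 1], right[k]))   # e.g. data packet 2 = (L2, R1)
    return packets

L = ["L1", "L2", "L3", "L4"]
R = ["R1", "R2", "R3", "R4"]
print(pack_two_channel(L, R))
# [(0, 'L1', 'R2'), (1, 'L2', 'R1'), (2, 'L3', 'R4'), (3, 'L4', 'R3')]
# Losing the third packet (identifier 10 in the text) removes L3 and R4,
# while R3 and L4 still arrive in the fourth packet.
```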
For the specific decoding process of the audio decoding apparatus, refer to FIG. 6. As shown in FIG. 6, the identifier of the first data packet received by the audio decoding apparatus is 00.
The audio decoding apparatus de-interleaves the received data packet into the left and right channels and decodes the left and right channels separately, where the audio data obtained after the first data packet is decoded for the left channel is L1 and the audio data obtained after decoding for the right channel is R2. The audio decoding apparatus may buffer L1 and R2.
The identifier of the second data packet received by the audio decoding apparatus is 01.
The audio decoding apparatus de-interleaves the received data packet into the left and right channels and decodes the left and right channels separately, where the audio data obtained after this data packet is decoded for the left channel is L2 and the audio data obtained after decoding for the right channel is R1. Combining the previously buffered L1 and R2, the audio decoding apparatus can obtain the audio data of two audio frames, namely audio frame 1 (corresponding to L1 and R1) and audio frame 2 (corresponding to L2 and R2).
Because L1 and R1 are both received successfully, no packet loss concealment is needed and they are output directly; because L1 and R1 are both not lost, the audio decoding apparatus may also judge the correlation between the left channel and the right channel according to the signal feature parameters of L1 and R1, in a process similar to that described in the foregoing method embodiment, which is not described here again.
In addition, because L2 and R2 are both received successfully, no packet loss concealment is needed and they are output directly; because L2 and R2 are both not lost, the audio decoding apparatus may also judge the correlation between the left channel and the right channel according to the signal feature parameters of L2 and R2, in a process similar to that described in the foregoing method embodiment, which is not described here again.
The identifier of the third data packet received by the audio decoding apparatus is 11.
The audio decoding apparatus de-interleaves the received data packet into the left and right channels and decodes the left and right channels separately, where the audio data obtained after this data packet is decoded for the left channel is L4 and the audio data obtained after decoding for the right channel is R3. The audio decoding apparatus may buffer L4 and R3.
According to the identifiers of the data packets, the audio decoding apparatus learns that the data packet whose identifier is 10 is lost, and according to the audio data obtained after decoding, it learns that the audio data L3 and R4 are lost.
The audio decoding apparatus may then obtain the right-channel audio data R3 that belongs to the same audio frame as L3, obtain the signal feature parameter of R3, and then judge whether the left channel and the right channel are correlated.
If they are correlated, the signal feature parameter of R3 is used, in combination with the signal feature parameters of L2 and L4, to perform packet loss concealment on L3. The specific process is similar to that described in the foregoing method embodiment and is not described here again.
If they are not correlated, the signal feature parameters of L2 and L4 are used to perform packet loss concealment on L3. The specific process is similar to that described in the foregoing method embodiment and is not described here again.
Similarly, the audio decoding apparatus may also perform packet loss concealment on R4 in a similar manner, and the specific process is not described here again.
The foregoing describes the embodiments of the audio decoding method of the present invention. The following describes embodiments of the audio decoding apparatus of the present invention. Referring to FIG. 7, an embodiment of the audio decoding apparatus of the present invention includes:
a receiving unit 701, configured to receive data packets;
a decoding unit 702, configured to: when packet loss is detected and the audio data of M of the N channels corresponding to a certain audio frame is lost, if the audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, decode the un-lost audio data corresponding to the audio frame on the N-M channels, where M is an integer greater than 0 and smaller than N;
an extracting unit 703, configured to extract the signal feature parameters of the un-lost audio data corresponding to the audio frame on the N-M channels obtained after decoding by the decoding unit 702;
a correlation judging unit 704, configured to judge whether a first channel and a second channel are correlated, where the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost, and if they are correlated, trigger a first packet loss concealment unit 705 to perform a corresponding operation, or if they are not correlated, trigger a second packet loss concealment unit 706 to perform a corresponding operation;
the first packet loss concealment unit 705, configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter, extracted by the extracting unit 703, of the un-lost audio data of the second channel corresponding to the audio frame; and
the second packet loss concealment unit 706, configured to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
The following describes the audio decoding apparatus of the present invention in detail with a specific example. Referring to FIG. 8, another embodiment of the audio decoding apparatus of the present invention includes:
a receiving unit 801, configured to receive data packets;
a decoding unit 802, configured to: when packet loss is detected and the audio data of M of the N channels corresponding to a certain audio frame is lost, if the audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, decode the un-lost audio data corresponding to the audio frame on the N-M channels, where M is an integer greater than 0 and smaller than N;
an extracting unit 803, configured to extract the signal feature parameters of the un-lost audio data corresponding to the audio frame on the N-M channels obtained after decoding by the decoding unit 802;
a correlation judging unit 804, configured to judge whether a first channel and a second channel are correlated, where the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost, and if they are correlated, trigger a first packet loss concealment unit 805 to perform a corresponding operation, or if they are not correlated, trigger a second packet loss concealment unit 806 to perform a corresponding operation;
the first packet loss concealment unit 805, configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter, extracted by the extracting unit 803, of the un-lost audio data of the second channel corresponding to the audio frame; and
the second packet loss concealment unit 806, configured to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
In this embodiment, the correlation judging unit 804 may further include:
a value calculating module 8041, configured to use a correlation function to calculate the correlation value between the audio data already received on the first channel and the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel; and
a judging module 8042, configured to judge, according to the correlation value calculated by the value calculating module, whether the first channel and the second channel are correlated.
Alternatively,
the correlation judging unit 804 in this embodiment may further include:
an obtaining module 8043, configured to obtain the signal feature parameter of the audio data already received on the first channel and the signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel; and
a determining module 8044, configured to judge whether the difference between the signal feature parameter of the audio data already received on the first channel and the signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel is smaller than a preset value, and if so, determine that the first channel and the second channel are correlated, or if not, determine that the first channel and the second channel are not correlated.
In this embodiment, the first packet loss concealment unit 805 may further include:
a calculating module 8051, configured to calculate, according to an intra-channel packet loss concealment algorithm, the time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame;
a correcting module 8052, configured to correct the time compensation parameter calculated by the calculating module 8051 by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter; and
a recovering module 8053, configured to recover the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter obtained by the correcting module 8052.
Alternatively,
the first packet loss concealment unit 805 in this embodiment may be specifically configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame and the signal feature parameter of the un-lost audio data of at least one correlated channel corresponding to the audio frame.
The correlated channel is a channel, other than the second channel, that is correlated with the first channel among the N-M channels corresponding to the un-lost audio data.
For ease of understanding, the following describes in detail the relations between the units of the audio decoding apparatus of this embodiment in a specific application scenario:
In this embodiment, the receiving unit 801 may receive data packets from the audio encoding apparatus.
After the audio encoding apparatus completes encoding, the data packets are sent to the audio decoding apparatus. The sending process may be that the audio encoding apparatus sends the data packets directly to the audio decoding apparatus, or that the audio encoding apparatus sends the data packets to a forwarding device, which then forwards them to the audio decoding apparatus.
Because UDP provides a transaction-oriented, simple and unreliable message delivery service, packet loss during the transmission of audio and image data is inevitable. When packet loss occurs, the audio decoding apparatus can determine the audio data loss situation of each channel.
If the audio decoding apparatus learns that not all of the N channels have lost audio data in the same audio frame, but only the audio data of M of the channels in that audio frame is lost, and the audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, the decoding unit 802 may decode the un-lost audio data of the audio frame.
After the decoding unit 802 decodes the un-lost audio data of the N-M channels corresponding to the audio frame, the extracting unit 803 can obtain the signal feature parameters of this audio data.
In this embodiment, the specific signal feature parameter may be the signal pitch period and/or the signal energy. It is understandable that, in practical applications, the signal feature parameter may also be represented by other parameters besides the foregoing two, for example, signal tone, which is not limited herein.
In this embodiment, the correlation judging unit 804 may determine whether the first channel and the second channel are correlated, where the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost.
The specific determining manner of the correlation judging unit 804 is similar to that described in step 404 of the embodiment shown in FIG. 4, and is not described here again. If the correlation judging unit 804 determines that the first channel and the second channel are correlated, the calculating module 8051 in the first packet loss concealment unit 805 may first calculate, according to the intra-channel packet loss concealment algorithm, the time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame.
After the calculating module 8051 calculates the time compensation parameter, the correcting module 8052 may correct the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter.
After the correcting module 8052 calculates the comprehensive compensation parameter, the recovering module 8053 may recover the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter.
It should be noted that the foregoing describes the process in which the first packet loss concealment unit 805 performs packet loss concealment on the lost audio data of the first channel corresponding to the audio frame by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame. In practical applications, if, besides the second channel, other channels are also correlated with the first channel, the first packet loss concealment unit 805 may also perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameters of the un-lost audio data, corresponding to the audio frame, of multiple channels correlated with the first channel. The specific process is similar to that described in steps 405 to 407 of the embodiment shown in FIG. 4, and is not described here again.
If the correlation judging unit 804 determines that the first channel and the second channel are not correlated, the second packet loss concealment unit 806 may use the preset packet loss concealment algorithm to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame. The specific process is similar to the conventional packet loss concealment process and is not described here again.
In the embodiment of the present invention, when packet loss is detected and audio data of M channels (M is an integer greater than 0 and smaller than N) among N channels (N is an integer greater than or equal to 2) corresponding to a certain audio frame is lost, if audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, signal feature parameters of the un-lost audio data corresponding to the audio frame on the N-M channels can be obtained; and when a first channel and a second channel are correlated, packet loss concealment is performed on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame. Because the un-lost audio data belongs to the same audio frame but to different channels, the audio decoding apparatus can exploit the correlation between different channels during packet loss concealment, thereby improving the effect of packet loss concealment in an N-channel system.
Persons of ordinary skill in the art may understand that all or part of the steps of the methods in the foregoing embodiments may be implemented by a program instructing relevant hardware. The program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing describes in detail the audio decoding method and apparatus provided by the present invention. Persons of ordinary skill in the art may make modifications to the specific implementation manners and application scope according to the ideas of the embodiments of the present invention. Therefore, the content of this specification shall not be construed as a limitation on the present invention.

Claims

1. An audio decoding method, applied to an audio decoding system, wherein the audio decoding system comprises N channels and N is an integer greater than or equal to 2, the method comprising:
receiving data packets;
when packet loss is detected and audio data of M of the N channels corresponding to a certain audio frame is lost, if audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, decoding the un-lost audio data corresponding to the audio frame on the N-M channels, wherein M is an integer greater than 0 and smaller than N;
extracting signal feature parameters of the un-lost audio data corresponding to the audio frame on the N-M channels obtained after decoding;
judging whether a first channel and a second channel are correlated, wherein the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost;
if they are correlated, performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame; and
if they are not correlated, performing intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
2. The method according to claim 1, wherein the judging whether a first channel and a second channel are correlated comprises:
using a correlation function to calculate a correlation value between the audio data already received on the first channel and the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel; and
judging, according to the correlation value, whether the first channel and the second channel are correlated.
3. The method according to claim 1, wherein the judging whether a first channel and a second channel are correlated comprises:
obtaining a signal feature parameter of the audio data already received on the first channel and a signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel; and
judging whether the difference between the signal feature parameter of the audio data already received on the first channel and the signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel is smaller than a preset value; if so, determining that the first channel and the second channel are correlated; if not, determining that the first channel and the second channel are not correlated.
4. The method according to any one of claims 1 to 3, wherein the performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame comprises:
calculating, according to an intra-channel packet loss concealment algorithm, a time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame;
correcting the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter; and
recovering the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter.
5. The method according to claim 4, wherein the correcting the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame to obtain a comprehensive compensation parameter comprises:
performing, according to a preset weighting algorithm, a weighting operation on the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame and the time compensation parameter, to obtain the comprehensive compensation parameter.
6. The method according to any one of claims 1 to 3, wherein the performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame comprises:
performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame and a signal feature parameter of un-lost audio data of at least one correlated channel corresponding to the audio frame,
wherein the correlated channel is a channel, other than the second channel, that is correlated with the first channel among the N-M channels corresponding to the un-lost audio data.
7. The method according to claim 6, wherein the performing packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame and the signal feature parameter of the un-lost audio data of the at least one correlated channel corresponding to the audio frame comprises:
calculating, according to an intra-channel packet loss concealment algorithm, a time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame;
correcting the time compensation parameter by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame and the signal feature parameter of the un-lost audio data of the at least one correlated channel corresponding to the audio frame, to obtain a comprehensive compensation parameter; and
recovering the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter.
8. The method according to any one of claims 1 to 7, wherein
the signal feature parameter comprises a signal pitch period and/or a signal energy.
9. The method according to any one of claims 1 to 7, wherein
when packet loss is detected and it is judged that the audio data of all N channels in the same audio frame is lost, intra-channel packet loss concealment is performed on the lost audio data of the N channels according to the preset packet loss concealment algorithm.
10. An audio decoding apparatus, configured to decode audio data of N channels, wherein N is an integer greater than or equal to 2, comprising:
a receiving unit, configured to receive data packets;
a decoding unit, configured to: when packet loss is detected and audio data of M of the N channels corresponding to a certain audio frame is lost, if audio data that belongs to the same audio frame as the lost audio data and that is on the other channels of the N channels except the M channels is not lost, decode the un-lost audio data corresponding to the audio frame on the N-M channels, wherein M is an integer greater than 0 and smaller than N;
an extracting unit, configured to extract signal feature parameters of the un-lost audio data corresponding to the audio frame on the N-M channels obtained after decoding by the decoding unit;
a correlation judging unit, configured to judge whether a first channel and a second channel are correlated, wherein the first channel is any one of the M channels whose audio data in the audio frame is lost and the second channel is any one of the N-M channels whose audio data in the audio frame is not lost, and if they are correlated, trigger a first packet loss concealment unit to perform a corresponding operation, or if they are not correlated, trigger a second packet loss concealment unit to perform a corresponding operation;
the first packet loss concealment unit, configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter, extracted by the extracting unit, of the un-lost audio data of the second channel corresponding to the audio frame; and
the second packet loss concealment unit, configured to perform intra-channel packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to a preset packet loss concealment algorithm.
11. The audio decoding apparatus according to claim 10, wherein the correlation judging unit comprises:
a value calculating module, configured to use a correlation function to calculate a correlation value between the audio data already received on the first channel and the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel; and
a judging module, configured to judge, according to the correlation value calculated by the value calculating module, whether the first channel and the second channel are correlated.
12. The audio decoding apparatus according to claim 10, wherein the correlation judging unit comprises:
an obtaining module, configured to obtain a signal feature parameter of the audio data already received on the first channel and a signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel; and
a determining module, configured to judge whether the difference between the signal feature parameter of the audio data already received on the first channel and the signal feature parameter of the audio data already received on the second channel that belongs to the same audio frame as the audio data already received on the first channel is smaller than a preset value, and if so, determine that the first channel and the second channel are correlated, or if not, determine that the first channel and the second channel are not correlated.
13. The audio decoding apparatus according to any one of claims 10 to 12, wherein the first packet loss concealment unit comprises:
a calculating module, configured to calculate, according to an intra-channel packet loss concealment algorithm, a time compensation parameter corresponding to the lost audio data of the first channel corresponding to the audio frame;
a correcting module, configured to correct the time compensation parameter calculated by the calculating module by using the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame, to obtain a comprehensive compensation parameter; and
a recovering module, configured to recover the lost audio data of the first channel corresponding to the audio frame according to the comprehensive compensation parameter obtained by the correcting module.
14. The audio decoding apparatus according to any one of claims 10 to 12, wherein the first packet loss concealment unit is configured to perform packet loss concealment on the lost audio data of the first channel corresponding to the audio frame according to the signal feature parameter of the un-lost audio data of the second channel corresponding to the audio frame and a signal feature parameter of un-lost audio data of at least one correlated channel corresponding to the audio frame,
wherein the correlated channel is a channel, other than the second channel, that is correlated with the first channel among the N-M channels corresponding to the un-lost audio data.
PCT/CN2012/076435 2011-06-02 2012-06-04 音频解码方法及装置 WO2012163304A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP12792712.7A EP2654039B1 (en) 2011-06-02 2012-06-04 Audio decoding method and apparatus
AU2012265335A AU2012265335B2 (en) 2011-06-02 2012-06-04 Audio decoding method and device
US14/090,216 US20140088976A1 (en) 2011-06-02 2013-11-26 Audio decoding method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110147225.6A CN102810313B (zh) 2011-06-02 2011-06-02 音频解码方法及装置
CN201110147225.6 2011-06-02

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/090,216 Continuation US20140088976A1 (en) 2011-06-02 2013-11-26 Audio decoding method and apparatus

Publications (1)

Publication Number Publication Date
WO2012163304A1 true WO2012163304A1 (zh) 2012-12-06

Family

ID=47234008

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2012/076435 WO2012163304A1 (zh) 2011-06-02 2012-06-04 音频解码方法及装置

Country Status (5)

Country Link
US (1) US20140088976A1 (zh)
EP (1) EP2654039B1 (zh)
CN (1) CN102810313B (zh)
AU (1) AU2012265335B2 (zh)
WO (1) WO2012163304A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112437315A (zh) * 2020-09-02 2021-03-02 上海幻电信息科技有限公司 适应多系统版本的音频适配方法及系统

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
NO2780522T3 (zh) 2014-05-15 2018-06-09
US10547329B2 (en) * 2015-03-02 2020-01-28 Samsung Electronics Co., Ltd. Transmitter and puncturing method thereof
KR101800420B1 (ko) * 2015-03-02 2017-11-23 삼성전자주식회사 송신 장치 및 그의 펑처링 방법
US20160323425A1 (en) * 2015-04-29 2016-11-03 Qualcomm Incorporated Enhanced voice services (evs) in 3gpp2 network
US10224045B2 (en) 2017-05-11 2019-03-05 Qualcomm Incorporated Stereo parameters for stereo decoding
CN107294655B (zh) * 2017-05-31 2019-12-20 珠海市杰理科技股份有限公司 蓝牙通话信号恢复方法、装置、存储介质和计算机设备
US10043523B1 (en) * 2017-06-16 2018-08-07 Cypress Semiconductor Corporation Advanced packet-based sample audio concealment
CN107293303A (zh) * 2017-06-16 2017-10-24 苏州蜗牛数字科技股份有限公司 一种多声道语音丢包补偿方法
CN107360166A (zh) * 2017-07-15 2017-11-17 深圳市华琥技术有限公司 一种音频数据处理方法及其相关设备
CN111402905B (zh) * 2018-12-28 2023-05-26 南京中感微电子有限公司 音频数据恢复方法、装置及蓝牙设备
CN111866668B (zh) * 2020-07-17 2021-10-15 头领科技(昆山)有限公司 一种带有耳机放大器的多声道蓝牙耳机

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1445941A (zh) * 2000-09-30 2003-10-01 华为技术有限公司 一种在网络上传输ip语音包的丢包恢复方法
WO2003107591A1 (en) * 2002-06-14 2003-12-24 Nokia Corporation Enhanced error concealment for spatial audio
US20070094009A1 (en) * 2005-10-26 2007-04-26 Ryu Sang-Uk Encoder-assisted frame loss concealment techniques for audio coding
CN101221765A (zh) * 2008-01-29 2008-07-16 北京理工大学 一种基于语音前向包络预测的差错隐藏方法
US20090279615A1 (en) * 2008-05-07 2009-11-12 The Hong Kong University Of Science And Technology Error concealment for frame loss in multiple description coding
US7805297B2 (en) * 2005-11-23 2010-09-28 Broadcom Corporation Classification-based frame loss concealment for audio signals

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7379865B2 (en) * 2001-10-26 2008-05-27 At&T Corp. System and methods for concealing errors in data transmission
US7047187B2 (en) * 2002-02-27 2006-05-16 Matsushita Electric Industrial Co., Ltd. Method and apparatus for audio error concealment using data hiding
US7835916B2 (en) * 2003-12-19 2010-11-16 Telefonaktiebolaget Lm Ericsson (Publ) Channel signal concealment in multi-channel audio systems
US7627467B2 (en) * 2005-03-01 2009-12-01 Microsoft Corporation Packet loss concealment for overlapped transform codecs
US7464029B2 (en) * 2005-07-22 2008-12-09 Qualcomm Incorporated Robust separation of speech signals in a noisy environment
US8027485B2 (en) * 2005-11-21 2011-09-27 Broadcom Corporation Multiple channel audio system supporting data channel replacement
CN101030951B (zh) * 2007-02-08 2010-11-24 华为技术有限公司 一种丢包补偿方法及装置
CN100550712C (zh) * 2007-11-05 2009-10-14 华为技术有限公司 一种信号处理方法和处理装置
WO2009084226A1 (ja) * 2007-12-28 2009-07-09 Panasonic Corporation ステレオ音声復号装置、ステレオ音声符号化装置、および消失フレーム補償方法
CN101261833B (zh) * 2008-01-24 2011-04-27 清华大学 一种使用正弦模型进行音频错误隐藏处理的方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1445941A (zh) * 2000-09-30 2003-10-01 华为技术有限公司 一种在网络上传输ip语音包的丢包恢复方法
WO2003107591A1 (en) * 2002-06-14 2003-12-24 Nokia Corporation Enhanced error concealment for spatial audio
US20070094009A1 (en) * 2005-10-26 2007-04-26 Ryu Sang-Uk Encoder-assisted frame loss concealment techniques for audio coding
US7805297B2 (en) * 2005-11-23 2010-09-28 Broadcom Corporation Classification-based frame loss concealment for audio signals
CN101221765A (zh) * 2008-01-29 2008-07-16 北京理工大学 一种基于语音前向包络预测的差错隐藏方法
US20090279615A1 (en) * 2008-05-07 2009-11-12 The Hong Kong University Of Science And Technology Error concealment for frame loss in multiple description coding

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112437315A (zh) * 2020-09-02 2021-03-02 上海幻电信息科技有限公司 适应多系统版本的音频适配方法及系统

Also Published As

Publication number Publication date
EP2654039A4 (en) 2014-03-05
CN102810313B (zh) 2014-01-01
AU2012265335B2 (en) 2015-01-29
AU2012265335A1 (en) 2013-08-15
EP2654039A1 (en) 2013-10-23
US20140088976A1 (en) 2014-03-27
EP2654039B1 (en) 2015-04-01
CN102810313A (zh) 2012-12-05

Similar Documents

Publication Publication Date Title
WO2012163304A1 (zh) 音频解码方法及装置
WO2012163303A1 (zh) 音频编码方法及装置、音频解码方法及装置、编解码系统
US10930262B2 (en) Artificially generated speech for a communication session
KR102277438B1 (ko) 단말 장치들 간의 멀티미디어 통신에 있어서, 오디오 신호를 송신하고 수신된 오디오 신호를 출력하는 방법 및 이를 수행하는 단말 장치
JP6893237B2 (ja) データストリーミングの前方誤り訂正
US9641588B2 (en) Packets recovery system and method
US8831001B2 (en) Device, system, and method of voice-over-IP communication
JP5412917B2 (ja) 誤り訂正制御装置、誤り訂正制御方法およびメディアデータ配信システム
CN102226944A (zh) 混音方法及设备
JP2009512265A (ja) ネットワーク上の動画データ伝送制御システムとその方法
CN1745526A (zh) 用于同步音频和视频流的设备和方法
JP2011130065A5 (zh)
US20100125768A1 (en) Error resilience in video communication by retransmission of packets of designated reference frames
US9246631B2 (en) Communication devices that encode and transmit data, methods of controlling such communication devices, and computer-readable storage media storing instructions for controlling such communication devices
JP2001007786A (ja) データ通信方法およびシステム
JP2016178549A (ja) 送信装置、受信装置、方法及びプログラム
EP2654311A1 (en) Synchronization method and synchronization apparatus for multicast group quick access, and terminal
JP2002152181A (ja) マルチメディアデータ通信方法およびマルチメディアデータ通信装置
TW201429229A (zh) 影片傳輸系統及方法
CN114554198B (zh) 基于纠删码的视频关键帧冗余传输方法和系统
JP2005159679A (ja) 映像音声通信システム
JP4952636B2 (ja) 映像通信装置および映像通信方法
JP4311176B2 (ja) 映像音声通信システム
US20130243086A1 (en) Wireless transmission terminal and wireless transmission method, encoder and encoding method therefor, and computer programs
JP6396780B2 (ja) エコーキャンセル装置、拠点装置、通信システム、及びエコーキャンセル方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12792712

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2012792712

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2012265335

Country of ref document: AU

Date of ref document: 20120604

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE