WO2018077083A1 - Audio frame loss recovery method and apparatus - Google Patents

Audio frame loss recovery method and apparatus

Info

Publication number
WO2018077083A1
WO2018077083A1 · PCT/CN2017/106640 · CN2017106640W
Authority
WO
WIPO (PCT)
Prior art keywords
frame
audio
frames
data
audio frame
Prior art date
Application number
PCT/CN2017/106640
Other languages
English (en)
French (fr)
Inventor
梁俊斌
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2018077083A1 publication Critical patent/WO2018077083A1/zh
Priority to US16/286,928 priority Critical patent/US11227612B2/en

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/004: Arrangements for detecting or preventing errors in the information received by using forward error control
    • H04L 1/0045: Arrangements at the receiver end
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/005: Correction of errors induced by the transmission channel, if related to the coding algorithm
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 1/00: Arrangements for detecting or preventing errors in the information received
    • H04L 1/0078: Avoidance of errors by organising the transmitted data in a format specifically designed to deal with errors, e.g. location
    • H04L 1/0083: Formatting with frames or packets; Protocol or part of protocol for error control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60: Network streaming of media packets
    • H04L 65/70: Media network packetisation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60: Network streaming of media packets
    • H04L 65/75: Media network packet handling
    • H04L 65/764: Media network packet handling at the destination
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00: Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/80: Responding to QoS
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/04: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L 19/08: Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters
    • G10L 19/12: Determination or coding of the excitation function; Determination or coding of the long-term prediction parameters, the excitation function being a code excitation, e.g. in code excited linear prediction [CELP] vocoders
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00: Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 2019/0001: Codebooks

Definitions

  • the present application relates to the field of Internet technologies, and in particular, to an audio frame loss recovery method and apparatus.
  • An audio frame (also referred to as an information-encoded frame, an audio data frame, etc.) is obtained by segmenting an audio signal and encoding the sampled values of each segment according to an audio coding mechanism.
  • the data format and size of the audio frame are related to the audio coding mechanism used.
  • the framing of the audio signal facilitates the transmission and processing of the audio data stream. Therefore, the audio frame is widely used in services such as network streaming media and VoIP (Voice over Internet Protocol).
  • to keep transmission real-time, audio frames are usually transmitted over UDP (User Datagram Protocol), an unreliable connectionless protocol.
  • the purpose of the present application is to provide an audio frame loss recovery method and apparatus that recover the data of lost audio frames with high quality while occupying limited bandwidth.
  • the plurality of audio frames including at least one first audio frame and a plurality of second audio frames, the redundant frame including data extracted from the plurality of second audio frames and excluding data of the at least one first audio frame;
  • the redundant frame includes data of the lost frame
  • acquiring data of the lost audio frame from the redundant frame and recovering the lost audio frame by using data of the lost audio frame
  • An audio frame loss recovery apparatus of an embodiment of the present application includes a processor and a memory, the memory storing computer readable instructions that cause the processor to:
  • the redundant frame comprising data extracted from the plurality of second audio frames and not including data of the at least one first audio frame;
  • the redundant frame includes the data of the lost audio frame, acquiring data of the lost audio frame from the redundant frame, and recovering the lost audio frame by using data of the lost audio frame;
  • An audio encoding apparatus includes a processor and a memory, the memory storing computer readable instructions that cause the processor to:
  • a computer readable storage medium of an embodiment of the present application stores computer readable instructions that, when executed, can cause the processor to:
  • the redundant frame comprising data extracted from the plurality of second audio frames and not including data of the at least one first audio frame;
  • the redundant frame includes the data of the lost audio frame, acquiring data of the lost audio frame from the redundant frame, and recovering the lost audio frame by using data of the lost audio frame;
  • a computer readable storage medium of an embodiment of the present application stores computer readable instructions that, when executed, cause the processor to: encode the audio signal to generate a plurality of audio data frames;
  • the audio frame repair methods of the embodiments of the present application use part of the historical frame coding information as redundant information, which reduces the amount of redundant data required and improves the transmission efficiency of the audio data. At the same time, not only the redundant frames but also the neighbors of a lost frame are used to recover it, so a certain frame loss recovery effect can be achieved with less redundant information, reducing stuttering caused by packet loss.
  • FIG. 1 is a flowchart of an audio frame loss recovery method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a principle of an audio frame loss recovery method according to an embodiment of the present application
  • FIG. 3 is a block diagram of an audio frame loss recovery apparatus according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a terminal corresponding to an audio frame loss recovery apparatus according to an embodiment of the present application.
  • 5a, 5b are schematic diagrams of an audio transmission system according to an embodiment of the present application.
  • FIG. 6 is a flowchart of an audio frame loss recovery method according to an embodiment of the present application.
  • FIG. 7 is a flowchart of an audio encoding method according to an embodiment of the present application.
  • FIG. 5a is a schematic diagram of an audio transmission system according to an embodiment of the present application. As shown in FIG. 5a, the system includes a server 50, a terminal device 52, and a network 53.
  • Server 50 is used to provide audio frames to terminal device 52 over network 53.
  • the terminal device 52 performs audio decoding on the received audio frame, and outputs the decoded audio signal.
  • the terminal device 52 can include a processor 521 and a memory 522.
  • the memory 522 stores an operating system 524, a network communication module 526 for transmitting and receiving data through a network, and an audio decoding module 528.
  • Audio decoding module 528 can be stored in memory 522 in the form of computer executable instructions. In other embodiments, the audio decoding module 528 can also be implemented in the form of hardware or hardware plus software.
  • FIG. 6 is a flowchart of an audio frame loss recovery method according to an embodiment of the present application. As shown in FIG. 6, the method can include the following steps.
  • Step S61 the terminal device receives a plurality of audio frames and at least one redundant frame.
  • the plurality of audio frames includes at least one first audio frame and a plurality of second audio frames.
  • the redundant frame includes data extracted from the plurality of second audio frames, and does not include data of the at least one first audio frame. That is, the redundant frame is generated using only the data of the plurality of second audio frames, and the redundant data is not provided for the first audio frame.
  • step S62 the lost audio frame is detected.
  • Step S63 when the redundant frame includes data of the lost frame, acquire data of the lost audio frame from the redundant frame, and recover the lost audio frame by using data of the lost audio frame.
  • Step S64 when the plurality of audio frames and the redundant frame include data of an audio frame adjacent to the lost audio frame, acquire the data of the adjacent audio frame from the plurality of audio frames and the redundant frame, and recover the lost audio frame using the data of the adjacent audio frame.
  • the amount of data of the required redundant information is reduced, and the transmission efficiency of the audio data is improved.
  • not only are the redundant frames used to recover lost frames, but the neighbors of a lost frame are used as well. Therefore, a certain frame loss recovery effect can be achieved with less redundant information, reducing stuttering and similar artifacts caused by packet loss.
  • in some examples, among the received audio frames there are at most two first audio frames between two adjacent second audio frames. This design of the position and number of the first audio frames avoids situations in which a lost frame cannot be recovered or is recovered poorly, so that less redundant data achieves a better frame loss recovery effect.
  • the adjacent audio frame includes a previous frame and a subsequent frame of the lost audio frame; recovering the lost audio frame using data of the adjacent audio frame includes:
  • the encoding parameter is at least one of a line spectrum pair, a pitch period, or a gain.
  • the adjacent audio frame is a previous frame or a subsequent frame of the lost audio frame; recovering the lost audio frame using data of the adjacent audio frame includes: setting the value of an encoding parameter of the lost audio frame to the value of that encoding parameter in the adjacent audio frame or to a preset value.
  • FIG. 5b is a schematic diagram of an audio transmission system according to another embodiment of the present application.
  • the system can be applied to network voice communication services, such as instant messaging and VoIP.
  • the system includes a first terminal device 51, a second terminal device 52, and a network 53.
  • the first terminal device 51 can collect the voice signal input by the user, encode the collected voice signal to obtain a voice frame, and send the voice frame to the second terminal device 52.
  • the second terminal device 52 can perform the audio frame loss recovery method of the embodiments of the present application to decode the voice data from the first terminal device 51.
  • the second terminal device 52 is similar to the terminal device 52 in the embodiment shown in FIG. 5a, except that in the present embodiment, the audio decoding module 528 of FIG. 5a is embodied as a speech decoding module 529.
  • FIG. 7 is a flowchart of an audio encoding method according to an embodiment of the present application. This audio encoding method can be performed by the first terminal device 51. As shown in FIG. 7, the method may include the following steps.
  • Step S71 encoding the audio signal to generate a plurality of audio data frames.
  • Step S72 in the plurality of audio frames, determining at least one first audio frame as an audio frame that does not provide redundant data.
  • Step S73 generating at least one redundant frame by using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame.
  • Step S74 the multiple audio frames and the at least one redundant frame are sent to a decoding device.
  • at the audio encoding end, by providing redundant data for only a part of the audio frames, the amount of redundant data can be reduced and the transmission efficiency of the audio data improved.
  • At least one audio frame may be selected from the plurality of audio frames as the first audio frame, selecting at most two consecutive frames after every at least one audio frame. That is, after every at least one second audio frame, at most two consecutive frames are selected as first audio frames that carry no redundant data. Controlling the position and number of the first audio frames in this way avoids situations in which a lost frame cannot be recovered or is recovered poorly, so that less redundant data achieves a better frame loss recovery effect. A minimal sketch of this selection follows.
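  • The following Python sketch illustrates one way the selection and redundant-frame packing of steps S72-S73 could be realized. The helper names (split_frames, make_redundant_frame), the frame layout, and the choice to copy the second frames' payloads verbatim into the redundant frame are illustrative assumptions, not details fixed by the patent.

```python
def split_frames(frames, max_skip=2, keep=1):
    """Partition encoded frames into first frames (no redundant data)
    and second frames (whose data will be backed up).

    After every `keep` second frames, up to `max_skip` consecutive frames
    are marked as first frames, matching the rule of selecting at most
    two consecutive first frames after every at least one second frame.
    """
    first, second = [], []
    i = 0
    while i < len(frames):
        for _ in range(keep):          # frames whose data is backed up
            if i < len(frames):
                second.append(frames[i])
                i += 1
        for _ in range(max_skip):      # frames that carry no redundancy
            if i < len(frames):
                first.append(frames[i])
                i += 1
    return first, second

def make_redundant_frame(second_frames):
    # One possible layout: the redundant frame carries copies of the
    # second frames' payloads, keyed by sequence number; a real codec
    # could instead carry a lower-rate re-encoding of the same frames.
    return {"seqs": [f["seq"] for f in second_frames],
            "data": [f["data"] for f in second_frames]}
```

  • For example, with frames numbered 0 to 5 and the defaults above, frames 0 and 3 become second frames whose data enters the redundant frame, while frames 1-2 and 4-5 carry no redundancy; losing frame 0 or 3 is repaired from the redundant frame, and losing one of the others is repaired from its neighbors.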
  • FIG. 1 illustrates an audio frame loss recovery method according to an embodiment of the present application.
  • the technical solution of this embodiment can be implemented in any terminal or server.
  • the terminal can be a mobile terminal, such as a mobile phone, a tablet, or the like.
  • the technical solution of this embodiment may be used to restore audio data or video data and the like transmitted in real time on a client or a relay server of a social application.
  • the method of this embodiment may include the following steps.
  • Step S110 receiving a plurality of audio frames and redundant frames, the redundant frames including data of at least one audio frame extracted from the plurality of audio frames according to a preset interval.
  • the audio frame may include audio stream data, or other types of data, which is not limited in this embodiment.
  • the preset interval is not limited, and may specifically be an interval of one or several frames.
  • the preset interval is one frame or two frames.
  • at least one adjacent frame can be found from the redundant frame for data recovery, and the data of the adjacent frame can provide a higher recovery effect.
  • the transmitting end of the audio data caches the historical multi-frame audio stream data before the current frame, and extracts the corresponding historical frame code stream as a redundant frame according to a certain frame interval.
  • the audio frames are not continuously acquired, but are acquired at intervals, which reduces the size of the redundant frames, thereby effectively reducing the bandwidth.
  • Step S120 detecting a lost audio frame before the current audio frame.
  • the relay server of the social application, or the client installed on the terminal, may determine the previously lost audio frames based on the received current audio frame; for example, it may determine that the frame or the two frames immediately before the current frame were lost.
  • Step S130 when the lost audio frame is not included in the redundant frame, acquire the data of the frames adjacent to the lost audio frame from the current audio frame and the redundant frame.
  • when the data of the lost audio frame is included in the redundant frame, that data can be used directly for recovery.
  • otherwise, an adjacent frame can be acquired; the data of the adjacent frame is located in the redundant frame or is the current audio frame itself.
  • the decoder can be used to decode the redundant frame to obtain the encoded information for generating the recovery packet (ie, the recovered data).
  • Step S140 recovering data of the lost audio frame according to data of the adjacent frame.
  • the decoder may be used to decode the redundant frame of the frame loss position or the data of the current frame to recover the lost frame.
  • the technical solution of the embodiment takes into consideration the bandwidth occupation and the recovery effect, and improves the transmission efficiency while ensuring a certain recovery effect.
  • step S140 may include:
  • the first coefficient is not limited, and the first coefficient may include two values, which respectively correspond to the previous frame and the subsequent frame.
  • Taking the most common speech codec model, CELP (Code-Excited Linear Prediction), as an example: in the CELP coding model a frame of speech signal is represented by four groups of compressed encoding parameters, LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook), and these parameters can be parsed from the current encoded frame and the redundant frame bitstreams.
  • the embodiment implements an "interpolation" recovery method, and obtains the ith LSP/Pitch/Gain parameter of the lost nth frame by interpolating as follows:
  • First, the LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook) parameters are parsed from the current encoded frame and the FEC redundant frame bitstreams.
  • LSP_int(i)(n) = a × LSP(i)(n+1) + (1-a) × LSP(i)(n-1), where n is the frame number, a is a weighting coefficient less than 1, and i is the LSP index;
  • Pitch_int(n) = 0.5 × (Pitch(n+1) + Pitch(n-1)), where n is the frame number;
  • Gain_int(i)(n) = b × Gain(i)(n+1) + (1-b) × Gain(i)(n-1), where n is the frame number, b is a weighting coefficient less than 1, and i is the Gain index;
  • the values of the coefficients a and b are not limited, and 0.5 may be replaced with other values. Based on the data of the previous frame and the subsequent frame, a lost frame can be recovered with high quality. A sketch of this interpolation follows.
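  • The three interpolation formulas translate directly into code. Below is a minimal Python sketch, assuming each frame's parameters have already been parsed into a dict holding an "lsp" vector, a scalar "pitch", and a "gain" vector (an illustrative layout, not one prescribed by the patent):

```python
def interpolate_lost_frame(prev, nxt, a=0.5, b=0.5):
    """Recover frame n from its neighbours n-1 (prev) and n+1 (nxt),
    implementing LSP_int, Pitch_int, and Gain_int from the text.
    a and b are the weighting coefficients (both less than 1)."""
    return {
        "lsp": [a * l_next + (1 - a) * l_prev
                for l_prev, l_next in zip(prev["lsp"], nxt["lsp"])],
        "pitch": 0.5 * (prev["pitch"] + nxt["pitch"]),
        "gain": [b * g_next + (1 - b) * g_prev
                 for g_prev, g_next in zip(prev["gain"], nxt["gain"])],
    }
```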
  • step S140 may include:
  • based on a preset second coefficient and the ith line spectrum pair or gain in the previous frame, compute the ith line spectrum pair or gain of the lost audio frame; take the larger of a preset minimum allowable pitch period and the ith pitch period in the previous frame as the ith pitch period of the lost audio frame, where i is a positive integer.
  • the second coefficient is not limited.
  • this embodiment implements an "extrapolation" recovery method, obtaining the ith LSP/Pitch/Gain parameters of the lost nth frame as follows:
  • LSP_ext(i)(n) = LSP(i)(n-1), where n is the frame number and i is the LSP index;
  • Pitch_ext(n) = Max(Tlow, Pitch(n-1) - 1), where n is the frame number and Tlow is the minimum allowable pitch period;
  • Gain_ext(i)(n) = c × Gain(i)(n-1), where n is the frame number, c is a weighting coefficient less than 1, and i is the Gain index;
  • the second coefficient is 1 when restoring the LSP, but other values may be used; this embodiment does not limit this. Based on the previous frame alone, lost frames can be recovered effectively even when many consecutive frames are lost.
  • step S140 may further include: taking a random value as the codebook of the lost frame.
  • the ith Code (codebook) parameter of the lost nth frame is obtained by taking a random value: Code_comp(i)(n) = Random();
  • the codebook can simply take a random value, which is simple and fast; a combined extrapolation sketch follows.
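  • The extrapolation formulas and the random codebook fill combine into a second sketch, under the same assumed parameter layout as above; the concrete values of c, Tlow, and the codebook size are placeholders, since the text leaves them open:

```python
import random

def extrapolate_lost_frame(prev, c=0.9, t_low=20, codebook_size=512):
    """Recover frame n from frame n-1 alone (the extrapolation mode)."""
    return {
        "lsp": list(prev["lsp"]),                 # second coefficient = 1
        "pitch": max(t_low, prev["pitch"] - 1),   # Pitch_ext = Max(Tlow, Pitch(n-1)-1)
        "gain": [c * g for g in prev["gain"]],    # Gain_ext = c * Gain(n-1)
        "code": [random.randrange(codebook_size)  # Code_comp = Random()
                 for _ in prev["code"]],
    }
```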
  • the two recovery modes, interpolation and extrapolation, are summarized in FIG. 2: the current frame and the redundant frames can provide two adjacent frames for interpolation recovery, or one adjacent frame for extrapolation recovery, as combined in the dispatch sketch below.
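  • Putting the modes together, a hypothetical dispatcher (reusing the two helpers sketched above; the preference order is an assumption consistent with the text) would try the redundant frame's exact data first, then interpolation, then extrapolation:

```python
def recover(n, frames, redundant):
    """Pick a recovery mode for lost frame n; frames and redundant map
    frame numbers to parsed parameter dicts."""
    if n in redundant:                       # the lost frame's own data was backed up
        return redundant[n]
    prev = frames.get(n - 1) or redundant.get(n - 1)
    nxt = frames.get(n + 1) or redundant.get(n + 1)
    if prev and nxt:                         # both neighbours known: interpolate
        return interpolate_lost_frame(prev, nxt)
    if prev:                                 # previous frame only: extrapolate
        return extrapolate_lost_frame(prev)
    return None                              # not recoverable by this scheme
```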
  • an embodiment of the present application provides an audio frame loss recovery apparatus.
  • the technical solution of this embodiment can be implemented in any terminal or server.
  • the terminal can be a mobile terminal such as a mobile phone, a tablet, or the like.
  • the technical solution of this embodiment may be used to restore audio data or video data and the like transmitted in real time on a client or a relay server of a social application.
  • the device of this embodiment includes:
  • the receiving module 310 receives a plurality of audio frames and redundant frames, and the redundant frames include data of at least one audio frame extracted from the plurality of audio frames according to a preset interval.
  • the audio frame may include audio stream data, and may be other types of data. This embodiment does not limit this.
  • the preset interval is not limited. Can be an interval of one or several frames.
  • the preset interval may be one frame or two frames.
  • At least one adjacent frame can be found from the redundant frame for data recovery, and the data of the adjacent frame can provide a higher recovery effect.
  • the transmitting end of the audio data caches the historical multi-frame audio stream data before the current frame, and extracts the corresponding historical frame code stream as a redundant frame according to a certain frame interval.
  • an audio frame is acquired at intervals to generate a redundant frame, which reduces the size of the redundant frame, thereby effectively reducing the bandwidth.
  • the detecting module 320 detects the lost audio frame before the current audio frame.
  • the relay server of the social application, or the client installed on the terminal, may determine the previously lost audio frames based on the received current audio frame; for example, it may determine that the frame or the two frames immediately before the current frame were lost.
  • the obtaining module 330 acquires data of adjacent frames of the lost audio frame from the current audio frame and the redundant frame when the lost audio frame is not included in the redundant frame.
  • when the data of the lost audio frame is included in the redundant frame, that data can be used directly for recovery.
  • the adjacent frame may be acquired, and the data of the adjacent frame is located in the redundant frame or is the current audio frame.
  • the decoder can be used to decode the redundant frame to obtain the encoded information for generating the restored frame.
  • the recovery module 340 recovers the data of the lost audio frame according to the data of the adjacent frame.
  • the decoder may be used to decode the redundant frame of the frame loss position or the data of the current frame to recover the lost frame.
  • the technical solution of the embodiment takes into consideration the bandwidth occupation and the recovery effect, and improves the transmission efficiency while ensuring a certain recovery effect.
  • the recovery module 340 may, based on a preset first coefficient, the ith line spectrum pair, pitch period, or gain in the previous frame, and the ith line spectrum pair, pitch period, or gain of the subsequent frame, calculate the ith line spectrum pair, pitch period, or gain of the lost audio frame, where i is a positive integer.
  • the first coefficient is not limited, and the first coefficient may include two values, which respectively correspond to the previous frame and the subsequent frame.
  • Taking the most common speech codec model, CELP (Code-Excited Linear Prediction), as an example: in the CELP coding model a frame of speech signal is represented by four groups of compressed encoding parameters, LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook), and these parameters can be parsed from the current encoded frame and the redundant frame bitstreams.
  • the embodiment implements an "interpolation" recovery method, and obtains the ith LSP/Pitch/Gain parameter of the lost nth frame by interpolating as follows:
  • First, the LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook) parameters are parsed from the current encoded frame and the FEC redundant frame bitstreams.
  • LSP_int(i)(n) = a × LSP(i)(n+1) + (1-a) × LSP(i)(n-1), where n is the frame number, a is a weighting coefficient less than 1, and i is the LSP index;
  • Pitch_int(n) = 0.5 × (Pitch(n+1) + Pitch(n-1)), where n is the frame number;
  • Gain_int(i)(n) = b × Gain(i)(n+1) + (1-b) × Gain(i)(n-1), where n is the frame number, b is a weighting coefficient less than 1, and i is the Gain index;
  • the values of the above coefficients a and b are not limited, and 0.5 may be replaced with other values. Based on the data of the previous frame and the next frame, the lost frame can be recovered with high quality.
  • the recovery module 340 may, based on a preset second coefficient and the ith line spectrum pair or gain in the previous frame, obtain the ith line spectrum pair or gain of the lost audio frame, and take the larger of a preset minimum allowable pitch period and the ith pitch period in the previous frame as the ith pitch period of the lost audio frame, where i is a positive integer.
  • the second coefficient is not limited.
  • an "extrapolation" recovery method is implemented, obtaining the ith LSP/Pitch/Gain parameters of the lost nth frame by the following extrapolation:
  • LSP_ext(i)(n) = LSP(i)(n-1), where n is the frame number and i is the LSP index;
  • Pitch_ext(n) = Max(Tlow, Pitch(n-1) - 1), where n is the frame number and Tlow is the minimum allowable pitch period;
  • Gain_ext(i)(n) = c × Gain(i)(n-1), where n is the frame number, c is a weighting coefficient less than 1, and i is the Gain index;
  • the second coefficient is 1 when the LSP is restored, and other values may be used. This embodiment does not limit this. Based on the previous frame, the lost frame can be effectively recovered when there are many frames lost.
  • the recovery module 340 takes a random value as the codebook of the lost frame.
  • the ith Code (codebook) parameter of the lost nth frame is obtained by taking a random value: Code_comp(i)(n) = Random();
  • the codebook can take a random value, which is simple and fast.
  • the two recovery modes, interpolation and extrapolation, are summarized in FIG. 2: the current frame and the redundant frames can provide two adjacent frames for interpolation recovery, or one adjacent frame for extrapolation recovery.
  • the embodiments of the present application further provide a terminal for implementing the audio frame loss recovery apparatus of the embodiments of the present application, as shown in FIG. 4.
  • the terminal can be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), an in-vehicle computer, and the like.
  • FIG. 4 is a block diagram of a partial structure of a mobile phone related to a terminal provided by an embodiment of the present application. Referring to FIG. 4,
  • the mobile phone includes: a radio frequency (RF) circuit 410, a memory 420, an input unit 430, a display unit 440, a sensor 450, an audio circuit 460, a wireless fidelity (WiFi) module 470, a processor 480, and a power supply 490, among other components.
  • the structure of the handset shown in FIG. 4 does not limit the handset; it may include more or fewer components than illustrated, combine some components, or arrange components differently.
  • the memory 420 can be used to store software programs and modules, and the processor 480 executes various functional applications and data processing of the mobile phone by running software programs and modules stored in the memory 420.
  • the memory 420 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and the applications required for at least one function (such as a sound playing function, an image playing function, etc.), while the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.).
  • memory 420 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
  • the processor 480 is the control center of the handset: it connects the various parts of the whole handset using various interfaces and lines, and, by running or executing the software programs and/or modules stored in the memory 420 and invoking the data stored in the memory 420, performs the various functions of the handset and processes data, thereby monitoring the handset as a whole.
  • the processor 480 included in the terminal further has the function of executing computer readable instructions in the memory 420 to:
  • the redundant frames including data of at least one audio frame extracted from the plurality of audio frames according to a preset interval
  • the data of the adjacent frame of the lost audio frame is obtained from the current audio frame and the redundant frame;
  • the data of the lost audio frame is restored according to the data of the adjacent frame.
  • a frame loss repair method, apparatus, and terminal for audio frames are thus implemented, using part of the historical frame coding information as redundant information, which can reduce the required redundant information by more than half. Since speech has short-term correlation and stationarity, the data of the interval-sampled audio frames in the redundant frames can recover lost frames well; that is, the technical solution of the present application retains a relatively good frame loss repair capability while using less redundant bandwidth.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Detection And Prevention Of Errors In Transmission (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Mobile Radio Communication Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present application discloses an audio frame loss recovery method and apparatus. The method includes: receiving a plurality of audio frames and at least one redundant frame, the plurality of audio frames including at least one first audio frame and a plurality of second audio frames, the redundant frame including data extracted from the plurality of second audio frames and not including data of the at least one first audio frame; detecting a lost audio frame; when the redundant frame includes the data of the lost audio frame, acquiring the data of the lost audio frame from the redundant frame and recovering the lost audio frame using the data of the lost audio frame; and when the plurality of audio frames and the redundant frame include data of an audio frame adjacent to the lost audio frame, acquiring the data of the adjacent audio frame from the plurality of audio frames and the redundant frame and recovering the lost audio frame using the data of the adjacent audio frame.

Description

Audio frame loss recovery method and apparatus
Related Applications
This application claims priority to Chinese Patent Application No. 201610931391.8, entitled "Information-encoded frame loss recovery method and apparatus", filed with the Chinese Patent Office on October 31, 2016, which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of Internet technologies, and in particular to an audio frame loss recovery method and apparatus.
Background
An audio frame (also called an information-encoded frame, an audio data frame, etc.) is data obtained by segmenting an audio signal and encoding the sampled values of each segment according to some audio coding mechanism. The data format and size of an audio frame depend on the audio coding mechanism used. Framing the audio signal facilitates the transmission and processing of the audio data stream, so audio frames are widely used in services such as network streaming media and VoIP (Voice over Internet Protocol). To keep audio transmission real-time, audio frames are usually transmitted over UDP (User Datagram Protocol), an unreliable connectionless protocol.
Summary
In view of this, the purpose of the present application is to provide an audio frame loss recovery method and apparatus that recover the data of lost audio frames with high quality while occupying limited bandwidth.
An audio frame loss recovery method of an embodiment of the present application includes:
receiving a plurality of audio frames and at least one redundant frame, the plurality of audio frames including at least one first audio frame and a plurality of second audio frames, the redundant frame including data extracted from the plurality of second audio frames and not including data of the at least one first audio frame;
detecting a lost audio frame;
when the redundant frame includes data of the lost frame, acquiring the data of the lost audio frame from the redundant frame, and recovering the lost audio frame using the data of the lost audio frame;
when the plurality of audio frames and the redundant frame include data of an audio frame adjacent to the lost audio frame, acquiring the data of the adjacent audio frame from the plurality of audio frames and the redundant frame, and recovering the lost audio frame using the data of the adjacent audio frame.
An audio frame encoding method of an embodiment of the present application includes:
encoding an audio signal to generate a plurality of audio data frames;
determining, among the plurality of audio frames, at least one first audio frame as an audio frame for which no redundant data is provided;
generating at least one redundant frame using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame;
sending the plurality of audio frames and the at least one redundant frame to a decoding device.
An audio frame loss recovery apparatus of an embodiment of the present application includes a processor and a memory, the memory storing computer readable instructions that cause the processor to:
receive a plurality of audio frames and at least one redundant frame, the plurality of audio frames including at least one first audio frame and a plurality of second audio frames, the redundant frame including data extracted from the plurality of second audio frames and not including data of the at least one first audio frame;
detect a lost audio frame;
when the redundant frame includes the data of the lost audio frame, acquire the data of the lost audio frame from the redundant frame, and recover the lost audio frame using the data of the lost audio frame;
when the plurality of audio frames and the redundant frame include data of an audio frame adjacent to the lost audio frame, acquire the data of the adjacent audio frame from the plurality of audio frames and the redundant frame, and recover the lost audio frame using the data of the adjacent audio frame.
An audio encoding apparatus of an embodiment of the present application includes a processor and a memory, the memory storing computer readable instructions that cause the processor to:
encode an audio signal to generate a plurality of audio data frames;
determine, among the plurality of audio frames, at least one first audio frame as an audio frame for which no redundant data is provided;
generate at least one redundant frame using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame;
send the plurality of audio frames and the at least one redundant frame to a decoding device.
A computer readable storage medium of an embodiment of the present application stores computer readable instructions that, when executed, cause a processor to:
receive a plurality of audio frames and at least one redundant frame, the plurality of audio frames including at least one first audio frame and a plurality of second audio frames, the redundant frame including data extracted from the plurality of second audio frames and not including data of the at least one first audio frame;
detect a lost audio frame;
when the redundant frame includes the data of the lost audio frame, acquire the data of the lost audio frame from the redundant frame, and recover the lost audio frame using the data of the lost audio frame;
when the plurality of audio frames and the redundant frame include data of an audio frame adjacent to the lost audio frame, acquire the data of the adjacent audio frame from the plurality of audio frames and the redundant frame, and recover the lost audio frame using the data of the adjacent audio frame.
A computer readable storage medium of an embodiment of the present application stores computer readable instructions that, when executed, cause a processor to: encode an audio signal to generate a plurality of audio data frames;
determine, among the plurality of audio frames, at least one first audio frame as an audio frame for which no redundant data is provided;
generate at least one redundant frame using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame;
send the plurality of audio frames and the at least one redundant frame to a decoding device.
The audio frame repair methods of the embodiments of the present application use part of the historical frame coding information as redundant information, which reduces the amount of redundant data required and improves the transmission efficiency of the audio data. At the same time, not only the redundant frames but also the neighbors of a lost frame are used to recover it, so a certain frame loss recovery effect can be achieved with less redundant information, reducing stuttering and similar artifacts caused by packet loss.
Brief Description of the Drawings
FIG. 1 is a flowchart of an audio frame loss recovery method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of the principle of an audio frame loss recovery method according to an embodiment of the present application;
FIG. 3 is a block diagram of an audio frame loss recovery apparatus according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a terminal corresponding to an audio frame loss recovery apparatus according to an embodiment of the present application;
FIGS. 5a and 5b are schematic diagrams of audio transmission systems according to embodiments of the present application;
FIG. 6 is a flowchart of an audio frame loss recovery method according to an embodiment of the present application;
FIG. 7 is a flowchart of an audio encoding method according to an embodiment of the present application.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the beneficial effects of the embodiments of the present application clearer, the embodiments of the present application are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely explain the present application and do not limit it.
FIG. 5a is a schematic diagram of an audio transmission system according to an embodiment of the present application. As shown in FIG. 5a, the system includes a server 50, a terminal device 52, and a network 53.
The server 50 provides audio frames to the terminal device 52 over the network 53.
The terminal device 52 performs audio decoding on the received audio frames and outputs the decoded audio signal. The terminal device 52 may include a processor 521 and a memory 522. The memory 522 stores an operating system 524, a network communication module 526 for sending and receiving data over a network, and an audio decoding module 528. The audio decoding module 528 may be stored in the memory 522 in the form of computer executable instructions. In other embodiments, the audio decoding module 528 may also be implemented in hardware or in a combination of hardware and software.
The terminal device 52 may perform the audio frame loss recovery methods of the embodiments of the present application to decode the audio data from the server 50. FIG. 6 is a flowchart of an audio frame loss recovery method according to an embodiment of the present application. As shown in FIG. 6, the method may include the following steps.
Step S61: the terminal device receives a plurality of audio frames and at least one redundant frame.
The plurality of audio frames includes at least one first audio frame and a plurality of second audio frames. The redundant frame includes data extracted from the plurality of second audio frames and does not include data of the at least one first audio frame. That is, the redundant frame is generated using only the data of the second audio frames, and no redundant data is provided for the first audio frames.
Step S62: a lost audio frame is detected.
Step S63: when the redundant frame includes the data of the lost frame, acquire the data of the lost audio frame from the redundant frame and recover the lost audio frame using the data of the lost audio frame.
Step S64: when the plurality of audio frames and the redundant frame include data of an audio frame adjacent to the lost audio frame, acquire the data of the adjacent audio frame from the plurality of audio frames and the redundant frame and recover the lost audio frame using the data of the adjacent audio frame.
In this way, by using only part of the historical frame coding information as redundant information, the amount of redundant data required is reduced and the transmission efficiency of the audio data is improved. At the same time, not only the redundant frames but also the neighbors of a lost frame are used to recover it, so a certain frame loss recovery effect can be achieved with less redundant information, reducing stuttering and similar artifacts caused by packet loss.
In some examples, among the received audio frames there are at most two first audio frames between two adjacent second audio frames. That is, after every at least one audio frame (these being second audio frames), at most two consecutive frames are first audio frames that carry no redundant data. This design of the position and number of the first audio frames avoids situations in which a lost frame cannot be recovered or is recovered poorly, so that less redundant data achieves a better frame loss recovery effect.
In some examples, the adjacent audio frames include the previous frame and the subsequent frame of the lost audio frame, and recovering the lost audio frame using the data of the adjacent audio frames includes:
setting the value of an encoding parameter of the lost audio frame to a value between the value of that encoding parameter in the previous frame and its value in the subsequent frame.
In some examples, the encoding parameter is at least one of a line spectrum pair, a pitch period, or a gain.
In some examples, the adjacent audio frame is the previous frame or the subsequent frame of the lost audio frame, and recovering the lost audio frame using the data of the adjacent audio frame includes: setting the value of an encoding parameter of the lost audio frame to the value of that encoding parameter in the adjacent audio frame or to a preset value.
FIG. 5b is a schematic diagram of an audio transmission system according to another embodiment of the present application. The system can be applied to network voice communication services, such as instant messaging and VoIP. As shown in FIG. 5b, the system includes a first terminal device 51, a second terminal device 52, and a network 53.
The first terminal device 51 may collect a voice signal input by the user, encode the collected voice signal to obtain voice frames, and send the voice frames to the second terminal device 52.
The second terminal device 52 may perform the audio frame loss recovery methods of the embodiments of the present application to decode the voice data from the first terminal device 51. The second terminal device 52 is similar to the terminal device 52 of the embodiment shown in FIG. 5a, except that in this embodiment the audio decoding module 528 of FIG. 5a is embodied as a speech decoding module 529.
FIG. 7 is a flowchart of an audio encoding method according to an embodiment of the present application. The audio encoding method may be performed by the first terminal device 51. As shown in FIG. 7, the method may include the following steps.
Step S71: encode the audio signal to generate a plurality of audio data frames.
Step S72: among the plurality of audio frames, determine at least one first audio frame as an audio frame for which no redundant data is provided.
Step S73: generate at least one redundant frame using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame.
Step S74: send the plurality of audio frames and the at least one redundant frame to a decoding device.
In this way, at the audio encoding end, by providing redundant data for only a part of the audio frames, the amount of redundant data can be reduced and the transmission efficiency of the audio data improved.
In some embodiments, at least one audio frame may be selected from the plurality of audio frames as the first audio frame, selecting at most two consecutive frames after every at least one audio frame. That is, after every at least one audio frame (these being second audio frames), at most two consecutive frames are selected as first audio frames that carry no redundant data. By controlling the position and number of the first audio frames that carry no redundant data in this way, situations in which a lost frame cannot be recovered or is recovered poorly are avoided, so that less redundant data achieves a better frame loss recovery effect.
FIG. 1 shows an audio frame loss recovery method according to an embodiment of the present application. The technical solution of this embodiment can be implemented in any terminal or server. The terminal may be a mobile terminal, such as a mobile phone or a tablet. The technical solution of this embodiment may be used to recover audio data, video data, and the like transmitted in real time on a client or a relay server of a social application. The method of this embodiment may include the following steps.
Step S110: receive a plurality of audio frames and redundant frames, the redundant frames including data of at least one audio frame extracted from the plurality of audio frames according to a preset interval.
In this embodiment, an audio frame may contain audio bitstream data or other types of data; this embodiment does not limit this. The preset interval is not limited either; it may specifically be an interval of one or several frames.
In this embodiment, the preset interval is one frame or two frames. In this way, for a lost frame, at least one adjacent frame can be found in the redundant frames for data recovery, and the data of adjacent frames provides a better recovery effect.
In this embodiment, when sending data, the sender of the audio data buffers the historical multi-frame audio bitstream data preceding the current frame and extracts the corresponding historical frame bitstreams at a certain frame interval as redundant frames. This embodiment does not acquire audio frames continuously but at intervals, which reduces the size of the redundant frames and thus effectively reduces the bandwidth.
Step S120: detect a lost audio frame before the current audio frame.
In this embodiment, taking a social application as an example, the relay server of the social application, or the client installed on the terminal, may determine the previously lost audio frames based on the received current audio frame; for example, it may determine that the frame or the two frames immediately before the current frame were lost.
Step S130: when the redundant frames do not contain the lost audio frame, acquire the data of the frames adjacent to the lost audio frame from the current audio frame and the redundant frames.
In this embodiment, when the redundant frames contain the data of the lost audio frame, that data can be used directly for recovery. When the redundant frames do not contain the data of the lost audio frame, adjacent frames can be acquired; the data of an adjacent frame is located in the redundant frames or is the current audio frame itself. In this embodiment, a decoder can be used to decode the redundant frames to obtain the encoded information used to generate the recovery packet (i.e., the recovered data).
Step S140: recover the data of the lost audio frame from the data of the adjacent frames. In this embodiment, a decoder may be used to decode the redundant frame at the frame loss position or the data of the current frame, so as to recover the lost frame.
In this embodiment, taking voice transmitted by a social application as an example, since speech has short-term correlation and stationarity, the data of the interval-sampled audio frames in the redundant frames can recover lost frames well. The technical solution of this embodiment thus balances bandwidth occupation against recovery effect, improving transmission efficiency while guaranteeing a certain recovery effect.
An embodiment of the present application provides another audio frame loss recovery method. In the method of this embodiment, step S140 may include:
based on a preset first coefficient, the ith line spectrum pair, pitch period, or gain in the previous frame, and the ith line spectrum pair, pitch period, or gain of the subsequent frame, calculating the ith line spectrum pair, pitch period, or gain of the lost audio frame, where i is a positive integer.
In the technical solution of this embodiment, the first coefficient is not limited; the first coefficient may include two values, corresponding respectively to the previous frame and the subsequent frame.
In this embodiment, taking the most common speech codec model, CELP (Code-Excited Linear Prediction), as an example: in the CELP coding model a frame of speech signal is represented by four groups of compressed encoding parameters, LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook), and these parameters can be parsed from the current encoded frame and the redundant frame bitstreams.
This embodiment implements an "interpolation" recovery method, obtaining the ith LSP/Pitch/Gain parameters of the lost nth frame by the following interpolation:
First, the LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook) parameters are parsed from the current encoded frame and the FEC redundant frame bitstreams.
LSP_int(i)(n) = a × LSP(i)(n+1) + (1-a) × LSP(i)(n-1), where n is the frame number, a is a weighting coefficient less than 1, and i is the LSP index;
Pitch_int(n) = 0.5 × (Pitch(n+1) + Pitch(n-1)), where n is the frame number;
Gain_int(i)(n) = b × Gain(i)(n+1) + (1-b) × Gain(i)(n-1), where n is the frame number, b is a weighting coefficient less than 1, and i is the Gain index;
In this embodiment, the values of the coefficients a and b are not limited, and 0.5 may be replaced with other values. Based on the data of the previous frame and the subsequent frame, a lost frame can be recovered with high quality.
An embodiment of the present application provides another audio frame loss recovery method. In the method of this embodiment, step S140 may include:
based on a preset second coefficient and the ith line spectrum pair or gain in the previous frame, obtaining the ith line spectrum pair or gain of the lost audio frame; and taking the larger of a preset minimum allowable pitch period and the ith pitch period in the previous frame as the ith pitch period of the lost audio frame, where i is a positive integer.
In the technical solution of this embodiment, the second coefficient is not limited.
This embodiment implements an "extrapolation" recovery method, obtaining the ith LSP/Pitch/Gain parameters of the lost nth frame by the following extrapolation:
LSP_ext(i)(n) = LSP(i)(n-1), where n is the frame number and i is the LSP index;
Pitch_ext(n) = Max(Tlow, Pitch(n-1) - 1), where n is the frame number and Tlow is the minimum allowable pitch period;
Gain_ext(i)(n) = c × Gain(i)(n-1), where n is the frame number, c is a weighting coefficient less than 1, and i is the Gain index;
In this embodiment, the second coefficient is 1 when restoring the LSP, but other values may be used; this embodiment does not limit this. Based on the previous frame, lost frames can be recovered effectively even when many frames are lost.
Based on the foregoing embodiments, step S140 may further include: taking a random value as the codebook of the lost frame. In this embodiment, the ith Code (codebook) parameter of the lost nth frame is obtained by taking a random value:
Code_comp(i)(n) = Random();
In this embodiment, the codebook can simply take a random value, which is simple and fast.
The above embodiments combine the two recovery modes, interpolation and extrapolation; the recovery is illustrated in FIG. 2: the current frame and the redundant frames can provide two adjacent frames for interpolation recovery, or one adjacent frame for extrapolation recovery.
As shown in FIG. 3, an embodiment of the present application provides an audio frame loss recovery apparatus. The technical solution of this embodiment can be implemented in any terminal or server. The terminal may be a mobile terminal, such as a mobile phone or a tablet. The technical solution of this embodiment may be used to recover audio data, video data, and the like transmitted in real time on a client or a relay server of a social application. The apparatus of this embodiment includes:
a receiving module 310, which receives a plurality of audio frames and redundant frames, the redundant frames including data of at least one audio frame extracted from the plurality of audio frames according to a preset interval. In this embodiment, an audio frame may contain audio bitstream data or other types of data; this embodiment does not limit this. The preset interval is not limited either; it may specifically be an interval of one or several frames.
In this embodiment, the preset interval may be one frame or two frames. In this way, for a lost frame, at least one adjacent frame can be found in the redundant frames for data recovery, and the data of adjacent frames provides a better recovery effect.
In this embodiment, when sending data, the sender of the audio data buffers the historical multi-frame audio bitstream data preceding the current frame and extracts the corresponding historical frame bitstreams at a certain frame interval as redundant frames. This embodiment acquires audio frames at intervals to generate the redundant frames, which reduces the size of the redundant frames and thus effectively reduces the bandwidth.
a detecting module 320, which detects a lost audio frame before the current audio frame.
In this embodiment, taking a social application as an example, the relay server of the social application, or the client installed on the terminal, may determine the previously lost audio frames based on the received current audio frame; for example, it may determine that the frame or the two frames immediately before the current frame were lost.
an acquiring module 330, which, when the redundant frames do not contain the lost audio frame, acquires the data of the frames adjacent to the lost audio frame from the current audio frame and the redundant frames.
In this embodiment, when the redundant frames contain the data of the lost audio frame, that data can be used directly for recovery. When the redundant frames do not contain the data of the lost audio frame, adjacent frames can be acquired; the data of an adjacent frame is located in the redundant frames or is the current audio frame itself. In this embodiment, a decoder can be used to decode the redundant frames to obtain the encoded information used to generate the recovered frame.
a recovery module 340, which recovers the data of the lost audio frame from the data of the adjacent frames. In this embodiment, a decoder may be used to decode the redundant frame at the frame loss position or the data of the current frame, so as to recover the lost frame.
In this embodiment, taking voice transmitted by a social application as an example, since speech has short-term correlation and stationarity, the data of the interval-sampled audio frames in the redundant frames can recover lost frames well. The technical solution of this embodiment thus balances bandwidth occupation against recovery effect, improving transmission efficiency while guaranteeing a certain recovery effect.
An embodiment of the present application provides another audio frame loss recovery apparatus. In the apparatus of this embodiment, the recovery module 340 may, based on a preset first coefficient, the ith line spectrum pair, pitch period, or gain in the previous frame, and the ith line spectrum pair, pitch period, or gain of the subsequent frame, calculate the ith line spectrum pair, pitch period, or gain of the lost audio frame, where i is a positive integer.
In the technical solution of this embodiment, the first coefficient is not limited; the first coefficient may include two values, corresponding respectively to the previous frame and the subsequent frame.
In this embodiment, taking the most common speech codec model, CELP (Code-Excited Linear Prediction), as an example: in the CELP coding model a frame of speech signal is represented by four groups of compressed encoding parameters, LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook), and these parameters can be parsed from the current encoded frame and the redundant frame bitstreams.
This embodiment implements an "interpolation" recovery method, obtaining the ith LSP/Pitch/Gain parameters of the lost nth frame by the following interpolation:
First, the LSP (line spectrum pair), Pitch (pitch period), Gain, and Code (codebook) parameters are parsed from the current encoded frame and the FEC redundant frame bitstreams.
LSP_int(i)(n) = a × LSP(i)(n+1) + (1-a) × LSP(i)(n-1), where n is the frame number, a is a weighting coefficient less than 1, and i is the LSP index;
Pitch_int(n) = 0.5 × (Pitch(n+1) + Pitch(n-1)), where n is the frame number;
Gain_int(i)(n) = b × Gain(i)(n+1) + (1-b) × Gain(i)(n-1), where n is the frame number, b is a weighting coefficient less than 1, and i is the Gain index;
In this embodiment, the values of the coefficients a and b are not limited, and 0.5 may be replaced with other values. Based on the data of the previous frame and the subsequent frame, a lost frame can be recovered with high quality.
An embodiment of the present application provides another audio frame loss recovery apparatus. In the apparatus of this embodiment, the recovery module 340 may, based on a preset second coefficient and the ith line spectrum pair or gain in the previous frame, obtain the ith line spectrum pair or gain of the lost audio frame, and take the larger of a preset minimum allowable pitch period and the ith pitch period in the previous frame as the ith pitch period of the lost audio frame, where i is a positive integer.
In the technical solution of this embodiment, the second coefficient is not limited.
This embodiment implements an "extrapolation" recovery method, obtaining the ith LSP/Pitch/Gain parameters of the lost nth frame by the following extrapolation:
LSP_ext(i)(n) = LSP(i)(n-1), where n is the frame number and i is the LSP index;
Pitch_ext(n) = Max(Tlow, Pitch(n-1) - 1), where n is the frame number and Tlow is the minimum allowable pitch period;
Gain_ext(i)(n) = c × Gain(i)(n-1), where n is the frame number, c is a weighting coefficient less than 1, and i is the Gain index;
In this embodiment, the second coefficient is 1 when restoring the LSP, but other values may be used; this embodiment does not limit this. Based on the previous frame, lost frames can be recovered effectively even when many frames are lost.
Based on the foregoing embodiments, further, the recovery module 340 takes a random value as the codebook of the lost frame. In this embodiment, the ith Code (codebook) parameter of the lost nth frame is obtained by taking a random value:
Code_comp(i)(n) = Random();
In this embodiment, the codebook can simply take a random value, which is simple and fast.
The above embodiments combine the two recovery modes, interpolation and extrapolation; the recovery is illustrated in FIG. 2: the current frame and the redundant frames can provide two adjacent frames for interpolation recovery, or one adjacent frame for extrapolation recovery.
The embodiments of the present application further provide a terminal for implementing the audio frame loss recovery apparatus of the embodiments of the present application, as shown in FIG. 4. For ease of description, only the parts related to the embodiments of the present application are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, an in-vehicle computer, and the like. Taking a mobile phone as the terminal, FIG. 4 is a block diagram of a partial structure of a mobile phone related to the terminal provided by an embodiment of the present application. Referring to FIG. 4, the mobile phone includes: a radio frequency (RF) circuit 410, a memory 420, an input unit 430, a display unit 440, a sensor 450, an audio circuit 460, a wireless fidelity (WiFi) module 470, a processor 480, and a power supply 490, among other components. Those skilled in the art will understand that the structure of the mobile phone shown in FIG. 4 does not limit the mobile phone; it may include more or fewer components than illustrated, combine some components, or arrange components differently.
The memory 420 may be used to store software programs and modules, and the processor 480 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 420. The memory 420 may mainly include a program storage area and a data storage area; the program storage area may store an operating system and the applications required for at least one function (such as a sound playing function, an image playing function, etc.), while the data storage area may store data created according to the use of the mobile phone (such as audio data, a phone book, etc.). In addition, the memory 420 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other solid-state storage devices.
The processor 480 is the control center of the mobile phone: it connects the various parts of the whole mobile phone using various interfaces and lines, and, by running or executing the software programs and/or modules stored in the memory 420 and invoking the data stored in the memory 420, performs the various functions of the mobile phone and processes data, thereby monitoring the mobile phone as a whole.
In the embodiments of the present application, the processor 480 included in the terminal further has the function of executing the computer readable instructions in the memory 420 to:
receive a plurality of audio frames and redundant frames, the redundant frames including data of at least one audio frame extracted from the plurality of audio frames according to a preset interval;
detect a lost audio frame before the current audio frame;
when the redundant frames do not contain the lost audio frame, acquire the data of the frames adjacent to the lost audio frame from the current audio frame and the redundant frames;
recover the data of the lost audio frame from the data of the adjacent frames.
What is actually implemented is a frame loss repair method, apparatus, and terminal for audio frames that use part of the historical frame coding information as redundant information; the required redundant information can be reduced by more than half. Since speech has short-term correlation and stationarity, the data of the interval-sampled audio frames in the redundant frames can recover lost frames well; that is, the technical solution of the present application retains a relatively good frame loss repair capability while using less redundant bandwidth.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, which does not limit the scope of rights of the present application. Those skilled in the art may implement the present application in various variants without departing from the scope and essence of the present application; for example, a feature of one embodiment may be used in another embodiment to obtain yet another embodiment. Any modification, equivalent replacement, or improvement made within the technical concept of the present application shall fall within the scope of rights of the present application.

Claims (21)

  1. An audio frame loss recovery method, applied to a terminal device, comprising:
    receiving a plurality of audio frames and at least one redundant frame, the plurality of audio frames comprising at least one first audio frame and a plurality of second audio frames, the redundant frame comprising data extracted from the plurality of second audio frames and not comprising data of the at least one first audio frame;
    detecting a lost audio frame;
    when the redundant frame comprises data of the lost frame, acquiring the data of the lost audio frame from the redundant frame, and recovering the lost audio frame using the data of the lost audio frame;
    when the plurality of audio frames and the redundant frame comprise data of an audio frame adjacent to the lost audio frame, acquiring the data of the adjacent audio frame from the plurality of audio frames and the redundant frame, and recovering the lost audio frame using the data of the adjacent audio frame.
  2. The method according to claim 1, wherein,
    among the received plurality of audio frames, there are at most two first audio frames between two adjacent second audio frames.
  3. The method according to claim 1, wherein the adjacent audio frames comprise a previous frame and a subsequent frame of the lost audio frame, and recovering the lost audio frame using the data of the adjacent audio frames comprises:
    setting the value of an encoding parameter of the lost audio frame to a value between the value of the encoding parameter of the previous frame and the value of the encoding parameter of the subsequent frame.
  4. The method according to claim 3, wherein the encoding parameter is at least one of a line spectrum pair, a pitch period, or a gain.
  5. The method according to claim 3, wherein setting the value of the encoding parameter of the lost audio frame to a value between the value of the encoding parameter of the previous frame and the value of the encoding parameter of the subsequent frame comprises:
    based on a preset first coefficient, the ith line spectrum pair, pitch period, or gain in the previous frame, and the ith line spectrum pair, pitch period, or gain of the subsequent frame, calculating the ith line spectrum pair, pitch period, or gain of the lost audio frame, where i is a positive integer.
  6. The method according to claim 1, wherein the adjacent audio frame is a previous frame or a subsequent frame of the lost audio frame, and recovering the lost audio frame using the data of the adjacent audio frame comprises:
    setting the value of an encoding parameter of the lost audio frame to the value of the encoding parameter of the adjacent audio frame or to a preset value.
  7. The method according to claim 6, wherein setting the value of the encoding parameter of the lost audio frame to the value of the encoding parameter of the adjacent audio frame or to a preset value comprises:
    based on a preset second coefficient and the ith line spectrum pair or gain in the adjacent audio frame, obtaining the ith line spectrum pair or gain of the lost audio frame;
    taking the larger of a preset minimum allowable pitch period and the ith pitch period in the adjacent audio frame as the ith pitch period of the lost audio frame, where i is a positive integer.
  8. The method according to claim 1, wherein recovering the lost audio frame using the data of the adjacent audio frame comprises:
    taking a random value as the codebook of the lost audio frame.
  9. An audio encoding method, applied to an audio encoding device, comprising:
    encoding an audio signal to generate a plurality of audio frames;
    determining, among the plurality of audio frames, at least one first audio frame as an audio frame for which no redundant data is provided;
    generating at least one redundant frame using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame;
    sending the plurality of audio frames and the at least one redundant frame to a decoding device.
  10. The method according to claim 9, wherein determining at least one first audio frame among the plurality of audio frames comprises:
    selecting at least one audio frame from the plurality of audio frames as the first audio frame, selecting at most two consecutive frames after every at least one audio frame.
  11. An audio frame loss recovery apparatus, comprising a processor and a memory, the memory storing computer readable instructions that cause the processor to:
    receive a plurality of audio frames and at least one redundant frame, the plurality of audio frames comprising at least one first audio frame and a plurality of second audio frames, the redundant frame comprising data extracted from the plurality of second audio frames and not comprising data of the at least one first audio frame;
    detect a lost audio frame;
    when the redundant frame comprises the data of the lost audio frame, acquire the data of the lost audio frame from the redundant frame, and recover the lost audio frame using the data of the lost audio frame;
    when the plurality of audio frames and the redundant frame comprise data of an audio frame adjacent to the lost audio frame, acquire the data of the adjacent audio frame from the plurality of audio frames and the redundant frame, and recover the lost audio frame using the data of the adjacent audio frame.
  12. The apparatus according to claim 11, wherein the computer readable instructions cause the processor to:
    acquire the data of a previous frame and the data of a subsequent frame of the lost audio frame from the plurality of audio frames and the redundant frame as the data of the adjacent audio frames;
    set the value of an encoding parameter of the lost audio frame to a value between the value of the encoding parameter of the previous frame and the value of the encoding parameter of the subsequent frame.
  13. The apparatus according to claim 11, wherein the computer readable instructions cause the processor to:
    acquire the data of a previous frame or the data of a subsequent frame of the lost audio frame from the plurality of audio frames and the redundant frame as the data of the adjacent audio frame;
    set the value of an encoding parameter of the lost audio frame to the value of the encoding parameter of the adjacent audio frame or to a preset value.
  14. The apparatus according to claim 11, wherein the computer readable instructions cause the processor to:
    take a random value as the codebook of the lost audio frame.
  15. An audio encoding apparatus, comprising a processor and a memory, the memory storing computer readable instructions that cause the processor to:
    encode an audio signal to generate a plurality of audio data frames;
    determine, among the plurality of audio frames, at least one first audio frame as an audio frame for which no redundant data is provided;
    generate at least one redundant frame using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame;
    send the plurality of audio frames and the at least one redundant frame to a decoding device.
  16. The apparatus according to claim 15, wherein the computer readable instructions cause the processor to:
    select at least one audio frame from the plurality of audio frames as the first audio frame, selecting at most two consecutive frames after every at least one audio frame.
  17. A computer readable storage medium storing computer readable instructions that, when executed, cause a processor to:
    receive a plurality of audio frames and at least one redundant frame, the plurality of audio frames comprising at least one first audio frame and a plurality of second audio frames, the redundant frame comprising data extracted from the plurality of second audio frames and not comprising data of the at least one first audio frame;
    detect a lost audio frame;
    when the redundant frame comprises the data of the lost audio frame, acquire the data of the lost audio frame from the redundant frame, and recover the lost audio frame using the data of the lost audio frame;
    when the plurality of audio frames and the redundant frame comprise data of an audio frame adjacent to the lost audio frame, acquire the data of the adjacent audio frame from the plurality of audio frames and the redundant frame, and recover the lost audio frame using the data of the adjacent audio frame.
  18. The storage medium according to claim 17, wherein the computer readable instructions, when executed, cause the processor to:
    acquire the data of a previous frame and the data of a subsequent frame of the lost audio frame from the plurality of audio frames and the redundant frame as the data of the adjacent audio frames;
    set the value of an encoding parameter of the lost audio frame to a value between the value of the encoding parameter of the previous frame and the value of the encoding parameter of the subsequent frame.
  19. The storage medium according to claim 17, wherein the computer readable instructions cause the processor to:
    acquire the data of a previous frame or the data of a subsequent frame of the lost audio frame from the plurality of audio frames and the redundant frame as the data of the adjacent audio frame;
    set the value of an encoding parameter of the lost audio frame to the value of the encoding parameter of the adjacent audio frame or to a preset value.
  20. A computer readable storage medium storing computer readable instructions that, when executed, cause a processor to: encode an audio signal to generate a plurality of audio data frames;
    determine, among the plurality of audio frames, at least one first audio frame as an audio frame for which no redundant data is provided;
    generate at least one redundant frame using data of a plurality of second audio frames of the plurality of audio frames, wherein the second audio frames are the audio frames of the plurality of audio frames other than the first audio frame;
    send the plurality of audio frames and the at least one redundant frame to a decoding device.
  21. The storage medium according to claim 20, wherein the computer readable instructions cause the processor to:
    select at least one audio frame from the plurality of audio frames as the first audio frame, selecting at most two consecutive frames after every at least one audio frame.
PCT/CN2017/106640 2016-10-31 2017-10-18 Audio frame loss recovery method and apparatus WO2018077083A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/286,928 US11227612B2 (en) 2016-10-31 2019-02-27 Audio frame loss and recovery with redundant frames

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610931391.8A CN108011686B (zh) 2016-10-31 2016-10-31 Information-encoded frame loss recovery method and apparatus
CN201610931391.8 2016-10-31

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/286,928 Continuation US11227612B2 (en) 2016-10-31 2019-02-27 Audio frame loss and recovery with redundant frames

Publications (1)

Publication Number Publication Date
WO2018077083A1 (zh) 2018-05-03

Family

ID=62023146

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/106640 WO2018077083A1 (zh) 2016-10-31 2017-10-18 音频帧丢失恢复方法和装置

Country Status (3)

Country Link
US (1) US11227612B2 (zh)
CN (1) CN108011686B (zh)
WO (1) WO2018077083A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112802485A * 2021-04-12 2021-05-14 腾讯科技(深圳)有限公司 Speech data processing method and apparatus, computer device, and storage medium
CN112908346A * 2019-11-19 2021-06-04 中国移动通信集团山东有限公司 Packet loss recovery method and apparatus, electronic device, and computer-readable storage medium

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112669858A 2019-10-14 2021-04-16 上海华为技术有限公司 Data processing method and related apparatus
CN110943800B * 2019-11-06 2023-04-28 厦门亿联网络技术股份有限公司 Data packet sending method, apparatus and system, storage medium, and electronic apparatus
CN110992963B * 2019-12-10 2023-09-29 腾讯科技(深圳)有限公司 Network call method and apparatus, computer device, and storage medium
CN111245566B * 2020-01-09 2021-09-21 北京创世云科技股份有限公司 Packet loss resistance method and apparatus for unreliable networks, storage medium, and electronic device
CN111292768B * 2020-02-07 2023-06-02 腾讯科技(深圳)有限公司 Packet loss concealment method and apparatus, storage medium, and computer device
CN112820306B * 2020-02-20 2023-08-15 腾讯科技(深圳)有限公司 Voice transmission method, system, apparatus, computer-readable storage medium, and device
CN111326166B * 2020-02-25 2023-04-14 网易(杭州)网络有限公司 Speech processing method and apparatus, computer-readable storage medium, and electronic device
CN111883172B * 2020-03-20 2023-11-28 珠海市杰理科技股份有限公司 Neural network training method, apparatus, and system for audio packet loss repair
CN111626155B * 2020-05-14 2023-08-01 新华智云科技有限公司 Method and device for generating basketball position points
CN111371534B * 2020-06-01 2020-09-18 腾讯科技(深圳)有限公司 Data retransmission method and apparatus, electronic device, and storage medium
CN114079534B * 2020-08-20 2023-03-28 腾讯科技(深圳)有限公司 Encoding and decoding methods, apparatus, medium, and electronic device
CN112489665B * 2020-11-11 2024-02-23 北京融讯科创技术有限公司 Speech processing method and apparatus, and electronic device
CN112532349B * 2020-11-24 2022-02-18 广州技象科技有限公司 Data processing method and apparatus based on decoding anomalies
CN112634912B * 2020-12-18 2024-04-09 北京猿力未来科技有限公司 Packet loss compensation method and apparatus
CN112738442B * 2020-12-24 2021-10-08 中标慧安信息技术股份有限公司 Intelligent surveillance video storage method and system
CN113096670B * 2021-03-30 2024-05-14 北京字节跳动网络技术有限公司 Audio data processing method, apparatus, device, and storage medium
CN113192519B * 2021-04-29 2023-05-23 北京达佳互联信息技术有限公司 Audio encoding method and apparatus, and audio decoding method and apparatus
CN117097705B * 2023-10-21 2024-01-16 北京蔚领时代科技有限公司 WebTransport-based audio and video transmission method and system
CN117135148A * 2023-10-21 2023-11-28 北京蔚领时代科技有限公司 WebRTC-based audio and video transmission method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102810314A * 2011-06-02 2012-12-05 华为终端有限公司 Audio encoding method and apparatus, audio decoding method and apparatus, and encoding/decoding system
US20150106106A1 * 2013-10-11 2015-04-16 Qualcomm Incorporated Systems and methods of communicating redundant frame information
CN104917671A * 2015-06-10 2015-09-16 腾讯科技(深圳)有限公司 Audio processing method and apparatus based on mobile terminal
CN105741843A * 2014-12-10 2016-07-06 联芯科技有限公司 Packet loss compensation method and system based on delay jitter

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5917835A (en) * 1996-04-12 1999-06-29 Progressive Networks, Inc. Error mitigation and correction in the delivery of on demand audio
US6810377B1 (en) * 1998-06-19 2004-10-26 Comsat Corporation Lost frame recovery techniques for parametric, LPC-based speech coding systems
JP4249821B2 (ja) * 1998-08-31 2009-04-08 富士通株式会社 Digital audio playback device
US7117156B1 (en) * 1999-04-19 2006-10-03 At&T Corp. Method and apparatus for performing packet loss or frame erasure concealment
US6597961B1 (en) * 1999-04-27 2003-07-22 Realnetworks, Inc. System and method for concealing errors in an audio transmission
US7031926B2 (en) * 2000-10-23 2006-04-18 Nokia Corporation Spectral parameter substitution for the frame error concealment in a speech decoder
ATE439666T1 (de) * 2001-02-27 2009-08-15 Texas Instruments Inc Concealment method for loss of speech frames and decoder therefor
EP1449305B1 (en) * 2001-11-30 2006-04-05 Telefonaktiebolaget LM Ericsson (publ) Method for replacing corrupted audio data
CA2388439A1 (en) * 2002-05-31 2003-11-30 Voiceage Corporation A method and device for efficient frame erasure concealment in linear predictive based speech codecs
US20050049853A1 (en) * 2003-09-01 2005-03-03 Mi-Suk Lee Frame loss concealment method and device for VoIP system
US7146309B1 (en) * 2003-09-02 2006-12-05 Mindspeed Technologies, Inc. Deriving seed values to generate excitation values in a speech coder
KR100612889B1 (ko) * 2005-02-05 2006-08-14 삼성전자주식회사 Method and apparatus for restoring line spectrum pair parameters, and speech decoding apparatus therefor
US7930176B2 (en) * 2005-05-20 2011-04-19 Broadcom Corporation Packet loss concealment for block-independent speech codecs
US8255207B2 (en) * 2005-12-28 2012-08-28 Voiceage Corporation Method and device for efficient frame erasure concealment in speech codecs
WO2008007700A1 (fr) * 2006-07-12 2008-01-17 Panasonic Corporation Sound decoding device, sound encoding device, and lost frame compensation method
EP2054878B1 (en) * 2006-08-15 2012-03-28 Broadcom Corporation Constrained and controlled decoding after packet loss
CN101325631B * 2007-06-14 2010-10-20 华为技术有限公司 Method and apparatus for estimating pitch period
CN100524462C * 2007-09-15 2009-08-05 华为技术有限公司 Method and apparatus for frame error concealment of high-band signals
WO2010031049A1 (en) * 2008-09-15 2010-03-18 GH Innovation, Inc. Improving celp post-processing for music signals
US8428938B2 (en) * 2009-06-04 2013-04-23 Qualcomm Incorporated Systems and methods for reconstructing an erased speech frame
US20110196673A1 (en) * 2010-02-11 2011-08-11 Qualcomm Incorporated Concealing lost packets in a sub-band coding decoder
US8321216B2 (en) * 2010-02-23 2012-11-27 Broadcom Corporation Time-warping of audio signals for packet loss concealment avoiding audible artifacts
US8613038B2 (en) * 2010-10-22 2013-12-17 Stmicroelectronics International N.V. Methods and apparatus for decoding multiple independent audio streams using a single audio decoder
US9026434B2 (en) * 2011-04-11 2015-05-05 Samsung Electronic Co., Ltd. Frame erasure concealment for a multi rate speech and audio codec
US9741350B2 (en) * 2013-02-08 2017-08-22 Qualcomm Incorporated Systems and methods of performing gain control
CN103280222B * 2013-06-03 2014-08-06 腾讯科技(深圳)有限公司 Audio encoding and decoding method and system
CN104282309A * 2013-07-05 2015-01-14 杜比实验室特许公司 Packet loss concealment apparatus and method, and audio processing system
CN107369454B * 2014-03-21 2020-10-27 华为技术有限公司 Method and apparatus for decoding speech/audio bitstreams
CN105100508B * 2014-05-05 2018-03-09 华为技术有限公司 Network voice quality assessment method, apparatus, and system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102810314A * 2011-06-02 2012-12-05 华为终端有限公司 Audio encoding method and apparatus, audio decoding method and apparatus, and encoding/decoding system
US20150106106A1 * 2013-10-11 2015-04-16 Qualcomm Incorporated Systems and methods of communicating redundant frame information
CN105741843A * 2014-12-10 2016-07-06 联芯科技有限公司 Packet loss compensation method and system based on delay jitter
CN104917671A * 2015-06-10 2015-09-16 腾讯科技(深圳)有限公司 Audio processing method and apparatus based on mobile terminal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112908346A * 2019-11-19 2021-06-04 中国移动通信集团山东有限公司 Packet loss recovery method and apparatus, electronic device, and computer-readable storage medium
CN112802485A * 2021-04-12 2021-05-14 腾讯科技(深圳)有限公司 Speech data processing method and apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
US20190198027A1 (en) 2019-06-27
US11227612B2 (en) 2022-01-18
CN108011686B (zh) 2020-07-14
CN108011686A (zh) 2018-05-08

Similar Documents

Publication Publication Date Title
WO2018077083A1 (zh) Audio frame loss recovery method and apparatus
CN110620892B (zh) Video annotation techniques in video communication
CN111314335B (zh) Data transmission method, apparatus, terminal, storage medium, and system
JP2012521718A5 (zh)
CN110838894B (zh) Speech processing method and apparatus, computer-readable storage medium, and computer device
JP6072068B2 (ja) Method, apparatus, and system for processing audio data
KR101924767B1 (ko) Voice frequency code stream decoding method and device
CN107276777A (zh) Audio processing method and apparatus for a conference system
WO2019119950A1 (zh) Video encoding processing method and apparatus, and application having a video encoding function
US20150036679A1 (en) Methods and apparatuses for transmitting and receiving audio signals
US10784988B2 (en) Conditional forward error correction for network data
JP2010050634A (ja) Encoding device, decoding device, and encoding system
JP2022552382A (ja) Voice transmission method and system, apparatus, computer program, and computer device
WO2021057697A1 (zh) Video encoding and decoding method and apparatus, storage medium, and electronic apparatus
CN110557226A (zh) Audio transmission method and apparatus
US11646042B2 (en) Digital voice packet loss concealment using deep learning
CN112769524B (zh) Voice transmission method and apparatus, computer device, and storage medium
JP2007324876A (ja) Data transmitting device, data receiving device, data transmitting method, data receiving method, and program
US20210409736A1 (en) Video encoding method and apparatus, video decoding method and apparatus, electronic device and readable storage medium
CN117640015B (zh) Speech encoding and decoding method and apparatus, electronic device, and storage medium
US11489620B1 (en) Loss recovery using streaming codes in forward error correction
CN114079535B (zh) Transcoding method, apparatus, medium, and electronic device
CN114079534B (zh) Encoding and decoding methods, apparatus, medium, and electronic device
CN114448957B (zh) Audio data transmission method and apparatus
EP4210332A1 (en) Method and system for live video streaming with integrated encoding and transmission semantics

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17865114

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17865114

Country of ref document: EP

Kind code of ref document: A1