CN117413465A - Wireless transmission and reception of packetized audio data incorporating forward error correction - Google Patents

Info

Publication number: CN117413465A
Application number: CN202280039648.2A
Authority: CN (China)
Legal status: Pending
Inventor: G·拉苏尔
Current assignee: Dolby International AB
Original assignee: Dolby International AB
Application filed by Dolby International AB
Priority claimed from PCT/EP2022/064541 (WO2022253732A1)
Other languages: Chinese (zh)
Prior art keywords: blocks, source, frame, parity, audio

Landscapes

  • Detection And Prevention Of Errors In Transmission (AREA)

Abstract

Methods for transmitting and receiving an audio stream are provided. For transmission, the method involves obtaining a frame of an audio signal and determining a number of source blocks into which the frame is to be divided and a number of parity blocks to be generated for forward error correction, wherein the number of source blocks and the number of parity blocks are determined based on characteristics of the wireless communication protocol to be used. The wireless communication protocol may be Bluetooth. The parity blocks, which may be obtained by means of Reed-Solomon encoding, may be used by a decoder to reconstruct one or more corrupted or lost source blocks.

Description

Wireless transmission and reception of packetized audio data incorporating forward error correction
Cross Reference to Related Applications
The present application claims priority to U.S. Provisional Application No. 63/195,781 (reference: D21054USP1), filed on month 02 of 2021, and U.S. Provisional Application No. 63/363,855 (reference: D21054USP2), filed on month 29 of 2022, both of which are incorporated herein by reference.
Technical Field
The present disclosure relates to systems, methods, and media for wireless audio streaming.
Background
With the increasing use of networked home speaker devices, home theaters, and wireless speakers and headphones, users increasingly play media content over wireless communication channels. The reliability of audio transmission over a wireless communication channel is therefore important. In many wireless communication protocols, a transmitter device retransmits corrupted or discarded audio packets to a receiver device. However, retransmitting corrupted or discarded audio packets can be detrimental. For example, retransmission of packets may cause the audio content to fall out of synchronization with accompanying video content. As another example, retransmission of packets may cause distortion or delay in real-time audio content (e.g., a telephone or videoconference session). Accordingly, improved systems, methods, and media for wireless audio streaming are desired.
Symbols and terms
Throughout this disclosure, including in the claims, the terms "speaker", "loudspeaker", and "audio reproduction transducer" are used synonymously to denote any sound-producing transducer (or set of transducers). A typical set of headphones includes two speakers. A speaker may be implemented to include multiple transducers (e.g., a woofer and a tweeter) that may be driven by a single common speaker feed or by multiple speaker feeds. In some examples, the speaker feeds may undergo different processing in different circuit branches coupled to the different transducers.
Throughout this disclosure, including in the claims, the expression "performing an operation on" a signal or data (e.g., filtering, scaling, transforming, or applying gain to a signal or data) is used in a broad sense to mean performing an operation directly on a signal or data or on a processed version of a signal or data (e.g., a version of a signal that has undergone preliminary filtering or preprocessing prior to performing an operation thereon).
Throughout this disclosure, including in the claims, the expression "system" is used in a broad sense to denote a device, system, or subsystem. For example, a subsystem implementing a decoder may be referred to as a decoder system, and a system including such a subsystem (e.g., a system that generates X output signals in response to multiple inputs, in which the subsystem generates M of the inputs and the other X - M inputs are received from external sources) may also be referred to as a decoder system.
Throughout this disclosure, including in the claims, the term "processor" is used in a broad sense to refer to a system or device that is programmable or otherwise configurable (e.g., with software or firmware) to perform operations on data (e.g., audio or video or other image data). Examples of processors include field programmable gate arrays (or other configurable integrated circuits or chip sets), digital signal processors programmed and/or otherwise configured to perform pipelined processing of audio or other sound data, programmable general purpose processors or computers, and programmable microprocessor chips or chip sets.
Disclosure of Invention
At least some aspects of the present disclosure may be implemented via a method. Some methods may involve obtaining frames of an audio signal. Some methods may involve determining a number of source blocks into which frames of the audio signal are to be divided and a number of parity blocks to be generated for forward error correction, wherein the number of source blocks and the number of parity blocks are determined based at least in part on characteristics of a wireless communication protocol to be used to transmit the audio stream. Some methods may involve dividing a frame of the audio signal into the number of source blocks. Some methods may involve generating the number of parity blocks using the source block. Some methods may involve transmitting the source blocks and the parity blocks, wherein the parity blocks are usable by a decoder to reconstruct one or more corrupted or lost source blocks.
In some examples, the parity blocks are generated using a Reed-Solomon encoder.
In some examples, the characteristics of the wireless communication protocol include timing information indicating a packet schedule of the wireless communication protocol. In some examples, the number of parity blocks is determined by: determining a total number of blocks to be used for encoding frames of the audio signal based on a duration of the frames of the audio signal and timing information indicating a packet schedule; determining the number of source blocks; and determining the number of parity blocks by determining a difference between the total number of blocks and the number of source blocks.
In some examples, the characteristics of the wireless communication protocol include a frame size of a frame of the audio signal and/or a packet size of a packet to be used to transmit a source block of the source blocks or a parity block of the parity blocks. In some examples, the number of source blocks to use is determined by determining a number of packets each having the packet size to transmit a frame having the frame size.
In some examples, the number of parity blocks identified for a frame of the audio signal is different than the number of parity blocks generated for a previous frame of the audio signal.
In some examples, the number of parity blocks generated for a frame of the audio signal varies in a repeatable manner, the repeatable manner being determined based on timing information indicative of a packet schedule of the wireless communication protocol.
In some examples, the frame is not retransmitted in response to a portion of the frame of the audio signal being discarded or corrupted.
In some examples, the wireless communication protocol is a Bluetooth protocol.
Some methods may involve receiving, using a wireless communication protocol, a set of source blocks and a set of parity blocks corresponding to frames of an audio signal, wherein the set of source blocks is at least a subset of source blocks generated by an encoder and the set of parity blocks is at least a subset of parity blocks generated by the encoder, and wherein a number of source blocks and a number of parity blocks generated and transmitted by the encoder are determined based at least in part on characteristics of the wireless communication protocol. Some methods may involve determining a number of corrupted source blocks in the set of source blocks. Some methods may involve determining whether to reconstruct the corrupted source block in response to determining that the number of corrupted source blocks is greater than zero. Some methods may involve reconstructing the corrupted source block in response to determining to reconstruct the corrupted source block. Some methods may involve causing a version of an audio frame to be presented that includes the reconstructed corrupted source block.
In some examples, determining whether to reconstruct the corrupted source blocks includes determining whether the number of corrupted source blocks is less than the number of uncorrupted parity blocks in the received set of parity blocks.
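The acceptance test described above can be sketched in Python as follows. This is an illustrative sketch only; the function name and signature are assumptions, not part of the disclosure.

```python
def can_reconstruct(num_corrupted_source: int, num_good_parity: int) -> bool:
    """Decide whether corrupted source blocks should be reconstructed.

    Mirrors the criterion above: reconstruction proceeds only when the
    number of corrupted source blocks is less than the number of
    uncorrupted parity blocks received.
    """
    if num_corrupted_source == 0:
        return False  # nothing to reconstruct
    return num_corrupted_source < num_good_parity
```

For instance, two corrupted source blocks could be reconstructed when five good parity blocks arrived, but five corrupted source blocks could not.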
In some examples, some methods may further involve: receiving a second set of source blocks and a second set of parity blocks corresponding to a second frame of the audio signal; determining a second number of corrupted source blocks in the second set of source blocks; generating a replacement audio frame in response to determining that a second number of the corrupted source blocks in the second set of source blocks is greater than a number of parity blocks in the second set of parity blocks; and causing the replacement audio frame to be presented. In some examples, the replacement audio frame includes a decrease in an output level of the audio signal.
In some examples, reconstructing the corrupted source blocks includes providing the set of source blocks and the set of parity blocks to a Reed-Solomon decoder.
In some examples, some methods may further involve storing the set of source blocks and the set of parity blocks in a buffer prior to reconstructing the corrupted source block, and wherein an amount of audio data stored in the buffer varies over time based at least in part on a packet schedule associated with the wireless communication protocol.
In some examples, the wireless communication protocol is a Bluetooth protocol.
In some examples, the version of the audio frame that includes the reconstructed corrupted source block is presented via a loudspeaker.
Some or all of the operations, functions, and/or methods described herein may be performed by one or more devices in accordance with instructions (e.g., software) stored on one or more non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to Random Access Memory (RAM) devices, Read-Only Memory (ROM) devices, and the like. Thus, some innovative aspects of the subject matter described in this disclosure can be implemented via one or more non-transitory media having software stored thereon.
At least some aspects of the present disclosure may be implemented via an apparatus. For example, one or more devices may be capable of performing, at least in part, the methods disclosed herein. In some embodiments, the apparatus is or includes an audio processing system having an interface system and a control system. The control system may include one or more general-purpose single- or multi-chip processors, Digital Signal Processors (DSPs), Application-Specific Integrated Circuits (ASICs), Field-Programmable Gate Arrays (FPGAs) or other programmable logic devices, discrete gate or transistor logic, discrete hardware components, or a combination thereof.
The details of one or more embodiments of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale.
Drawings
Fig. 1 is a schematic diagram of an example system for wireless audio streaming, according to some embodiments.
Fig. 2 is a flowchart of an example process for transmitting an audio stream, according to some embodiments.
Fig. 3 is a schematic diagram of an example packet for transmitting an audio stream, according to some embodiments.
Fig. 4 is a flowchart of an example process for receiving and presenting an audio stream, according to some embodiments.
Fig. 5 illustrates example data showing time delays of an audio stream transmitted using the techniques described herein, in accordance with some embodiments.
Fig. 6 shows a block diagram illustrating an example of components of an apparatus capable of implementing various aspects of the disclosure.
Like reference numbers and designations in the various drawings indicate like elements.
Detailed Description
Audio streaming over wireless communication links (such as Bluetooth, Wi-Fi, etc.) may suffer from packets being dropped or corrupted. In general, dropped or corrupted packets may result in the entire frame of audio data being retransmitted. Retransmission of frames can lead to latency problems, such as pausing audio playback while waiting for the frame to be retransmitted. This can be particularly problematic during playback of audio content that includes real-time audio (e.g., telephone or teleconferencing conversations) and/or audio content that is synchronized with video content. Furthermore, retransmitting the entire frame is an inefficient use of the wireless channel when only a portion of the frame may be corrupted or dropped (e.g., where the frame is divided into multiple packets and only a subset of the packets are dropped or corrupted).
Systems, methods, and media for wireless audio streaming are described herein. The techniques described herein allow for more efficient utilization of wireless channel capacity. In particular, the techniques described herein utilize forward error correction to reconstruct a partially dropped or corrupted frame, thereby reducing the need to retransmit the dropped or corrupted frame. Furthermore, the techniques described herein may reduce playback latency and reduce latency differences, thereby reducing the number of pauses in the playback of audio content.
In some implementations, a transmitting device, which may include an encoder, may divide a frame of an audio signal into source blocks. The transmitting device may then generate a set of parity blocks that can be used to reconstruct corrupted or discarded source blocks. In particular, each parity block may include a mathematically derived version of the audio data of the source blocks. The transmitting device may then transmit the source blocks and the parity blocks to a receiving device. The receiving device, which may include a decoder, may determine whether any source blocks have been discarded or corrupted and, if so, may use the parity blocks to reconstruct the discarded or corrupted source blocks.
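As an illustration of the split-and-parity idea, the sketch below splits a frame into source blocks and generates a single XOR parity block, which can recover exactly one lost source block. This is a deliberately simplified stand-in for the Reed-Solomon coding the disclosure contemplates (which can recover multiple lost blocks); all function names here are illustrative only.

```python
def split_into_blocks(frame: bytes, num_source_blocks: int) -> list[bytes]:
    """Split an audio frame into equal-size source blocks, zero-padding the last."""
    block_size = -(-len(frame) // num_source_blocks)  # ceiling division
    padded = frame.ljust(num_source_blocks * block_size, b"\x00")
    return [padded[i * block_size:(i + 1) * block_size]
            for i in range(num_source_blocks)]

def xor_parity(blocks: list[bytes]) -> bytes:
    """Single XOR parity block over equal-size blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def recover_one(surviving: list[bytes], parity: bytes) -> bytes:
    """Recover the one missing block: XOR of survivors and parity."""
    return xor_parity(surviving + [parity])
```

Because the parity is the XOR of all source blocks, XOR-ing it with the surviving blocks yields the missing one; Reed-Solomon generalizes this to P parity blocks recovering up to P erasures.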
In some embodiments, the number of source blocks and/or the number of parity blocks may be selected such that the channel capacity of the wireless communication protocol is optimized. For example, the number of parity blocks may be selected to maximize the number of source blocks that can be reconstructed based on the packet size specified by the wireless communication protocol and the maximum number of blocks that can be transmitted by the packet scheduling constraint. A more detailed technique for determining the number of source blocks and/or the number of parity blocks is shown and described below in connection with fig. 2. Furthermore, the total number of blocks generated and transmitted per frame may vary in a manner that optimizes channel capacity by allowing more parity blocks to be transmitted per frame, as shown and described in more detail below in connection with fig. 2 and 3.
It should be noted that various forward error correction techniques or algorithms may be utilized in connection with the techniques described herein. Examples include Reed-Solomon coding, Hamming coding, binary convolutional coding, low-density parity-check coding, and the like. Forward error correction techniques are further described in Watson, M., Begen, A., and Roca, V., "Forward Error Correction (FEC) Framework", RFC 6363, 2011, the entire contents of which are incorporated herein by reference.
Fig. 1 illustrates an example of a system 100 for compressed audio delivery according to some embodiments. As shown, the system 100 includes a transmitting device 102 that packetizes audio frames and transmits the packets, and a receiving device 104 that receives the packets and reconstructs the audio frames based on the received packets.
In some embodiments, the encoder 108 of the transmitting device 102 may obtain the input audio frame 106. In some embodiments, the input audio frame 106 may have any suitable duration (e.g., 10 milliseconds, 32 milliseconds, 40 milliseconds, etc.). In some embodiments, the encoder 108 may obtain the input audio frames 106 in any suitable manner, for example, by retrieving the input audio frames 106 from a memory storing the audio stream, by identifying the next audio frame of the audio stream, and so forth.
In some embodiments, the encoder 108 may divide the input audio frame 106 into a set of blocks. The set of blocks may include a set of source blocks 110. Note that herein, a set of source blocks is denoted generally as having K source blocks. The set of blocks may additionally include a set of parity blocks 112. Note that herein, a set of parity blocks is generally represented as having P parity blocks. As shown in fig. 1, the set of blocks is generally represented as having N blocks, where N = K + P (the number of source blocks plus the number of parity blocks). In some embodiments, the number of source blocks and the number of parity blocks may depend on the characteristics of the wireless transmission protocol used by the transmitting device 102 and the receiving device 104, as shown and described in more detail below in connection with fig. 2.
In some embodiments, the transmitter 114 may transmit a set of source blocks 110 and a set of parity blocks 112. For example, the transmitter 114 may utilize one or more antennas to wirelessly transmit data represented in a set of source blocks 110 and a set of parity blocks 112. In some embodiments, the transmitter 114 may transmit the set of source blocks 110 and the set of parity blocks 112 as a sequence of packets 116, wherein each packet in the sequence of packets 116 corresponds to a block of the set of source blocks 110 or the set of parity blocks 112. In some embodiments, the transmitter 114 may transmit the sequence of packets 116 according to a scheduling protocol or scheduling constraint specified by or associated with the wireless communication protocol used by the transmitter 114.
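For a concrete notion of such a schedule, the nominal transmit times of one frame's packets can be laid out as follows. This is a sketch under stated assumptions (the helper name and zero-based timing origin are inventions here; a real scheduler is driven by the protocol stack):

```python
def packet_send_times_ms(num_blocks: int, packet_interval_ms: float,
                         frame_start_ms: float = 0.0) -> list[float]:
    """Nominal transmit time of each of a frame's num_blocks packets,
    spaced at the protocol's packet scheduling interval."""
    return [frame_start_ms + i * packet_interval_ms for i in range(num_blocks)]
```

For example, 13 packets spaced 2.5 milliseconds apart span 30 milliseconds, fitting within a 32-millisecond frame.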
Turning to the receiving device 104, the receiver 118 of the receiving device 104 may receive the sequence of packets 120. For example, the receiver 118 may utilize one or more antennas to wirelessly receive packets of the sequence of packets 116 transmitted by the transmitter 114. As shown in fig. 1, in some embodiments, the received sequence of packets 120 may include only a subset of the packets transmitted by the transmitter 114 of the transmitting device 102. For example, in the example depicted in fig. 1, two packets corresponding to two source blocks are discarded during transmission, and packets "5" and "K" are missing from the received sequence of packets 120. It should be noted that the received sequence of packets 120 is depicted as an example only. In some cases, one or more packets may include a corrupted block, or a discarded packet may correspond to a parity block. In other words, the received packet sequence 120 may differ from the transmitted packet sequence 116 in any suitable manner. Alternatively, in some cases, the received packet sequence 120 may correspond entirely to the transmitted packet sequence 116, indicating that the transmission of the input audio frame 106 is complete and lossless. In this case, forward error correction using the received parity blocks is not required.
The decoder 122 of the receiving device 104 may generate the reconstructed audio frame 124. For example, in some embodiments, decoder 122 may utilize forward error correction techniques (such as Reed-Solomon encoding, Hamming encoding, etc.) to reconstruct lost and/or corrupted source blocks in the received packet sequence 120 using the parity blocks included in the packet sequence 120, as shown and described in more detail below in connection with fig. 4.
In some implementations, the reconstructed audio frame 124 can then be rendered and/or played back. For example, in some embodiments, rendering the reconstructed audio frame 124 may involve using various algorithms to distribute one or more audio signals over speakers and/or headphones to obtain a particular perceived impression (e.g., reflecting spatial characteristics of the audio stream specified by the content creator, etc.). As another example, in some embodiments, the reconstructed audio frame 124 may be played back by causing it to be presented via one or more loudspeakers, headphones, or the like. In some implementations, the reconstructed audio frame 124 can be presented in a manner that is substantially synchronized with the presentation of video content.
It should be noted that in some embodiments, packets corresponding to a compressed audio stream may be transmitted in a unicast network topology in which one transmitting device transmits packets received by one receiving device, or in a multicast network topology in which one transmitting device transmits packets to multiple receiving devices (e.g., two, three, five, ten, twenty, etc.).
In some embodiments, an encoder (e.g., of a transmitting device) may determine the number of source blocks and the number of parity blocks to generate in connection with a frame of an audio signal. In some implementations, the number of source blocks and the number of parity blocks may be determined such that use of wireless communication protocol channel capacity is optimized. For example, the number of source blocks and the number of parity blocks selected may allow for the use of parity blocks to reconstruct a quantity of corrupted or discarded source blocks such that few or no frames need to be replaced with virtual frames or retransmitted. In one example, the number of source blocks and the number of parity blocks may be selected such that up to 30% of the source blocks in a given frame may be reconstructed. In some implementations, the number of source blocks and/or the number of parity blocks may be selected based on timing information associated with a wireless communication protocol used to transmit the audio stream. In some implementations, the timing information can include packet scheduling constraints associated with the wireless communication protocol. In some embodiments, the number of source blocks and/or the number of parity blocks may be determined based on a frame size of a frame of the audio signal and/or a packet size of a packet to be used for the transport block. A more detailed technique for selecting the number of source blocks and/or the number of parity blocks in a manner that optimizes channel capacity is shown and described below in connection with fig. 2.
Fig. 2 illustrates an example of a process 200 for transmitting a compressed audio stream in accordance with some embodiments. In some examples, blocks of process 200 may be performed by a control system (such as control system 610 of fig. 6). According to some examples, blocks of process 200 may be performed by a transmitting device (such as transmitting device 102 shown in fig. 1). In some implementations, the blocks of process 200 may be performed by an encoder and/or a transmitter of a transmitting device. In some embodiments, the blocks of process 200 may be performed in a different order than shown in fig. 2. In some implementations, two or more blocks of process 200 may be performed substantially in parallel. In some embodiments, one or more blocks of process 200 may be omitted.
Process 200 may begin at 202 with obtaining a frame of an audio signal. For example, in some implementations, the process 200 may obtain the frame from a memory or other storage location. As another example, in some embodiments, the process 200 may identify the frame as a next or subsequent frame of the audio signal to be transmitted.
At 204, process 200 may identify a number of source blocks (denoted generally herein as K) into which frames of an audio signal are to be divided and a number of parity blocks (denoted generally herein as P) to generate for forward error correction based at least in part on characteristics of a wireless communication protocol to be used.
In some implementations, the number of source blocks and/or the number of parity blocks can be determined based on the timing information. In some embodiments, the timing information may include packet scheduling specifications or constraints associated with the wireless communication protocol. In some embodiments, the total number N of blocks (e.g., source blocks plus parity blocks) may be determined based on the timing information. For example, in some embodiments, N may be determined as the number of blocks that may be transmitted, given a particular packet schedule associated with the wireless communication protocol (e.g., a timing interval for a transmitter to transmit a packet), over the duration of a frame. For example, N may be determined as follows:

N = ceil(frame_duration / packet_interval)
In the above equation, "frame_duration" represents the time interval corresponding to a frame (e.g., in milliseconds or any other suitable unit), and "packet_interval" represents the timing interval between transmitted packets (e.g., the number of milliseconds between consecutive transmitted packets). In some implementations, the number of source blocks (e.g., K) can be determined based on the size of a frame of the audio signal (e.g., in bytes) and the size of a packet associated with the wireless communication scheme (e.g., in bytes). For example, the number of source blocks may be determined as the number of blocks required to divide a frame having a particular frame size into a set of packets, each packet having the packet size. For example, K may be determined as follows:

K = ceil(frame_size / packet_size)
In the above equation, "frame_size" represents the size of an audio frame (e.g., in bytes or any other suitable unit), and "packet_size" represents the size of a packet to be transmitted (e.g., in bytes or any other suitable unit). In some implementations, the number of parity blocks (e.g., P) can be determined as the difference between the total number of blocks and the number of source blocks. For example, P may be determined as follows:
P = N - K
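Putting the three formulas together (with fractional results rounded up, as in the examples below), a sketch in Python might look like this; the function name is an assumption for illustration:

```python
import math

def compute_block_counts(frame_duration_ms, packet_interval_ms,
                         frame_size_bytes, packet_size_bytes):
    """Total (N), source (K), and parity (P) block counts for one frame."""
    n = math.ceil(frame_duration_ms / packet_interval_ms)  # blocks the schedule allows
    k = math.ceil(frame_size_bytes / packet_size_bytes)    # blocks needed for the frame
    return n, k, n - k                                     # P = N - K
```

For instance, a 32-millisecond, 1792-byte frame sent in 224-byte packets every 2.5 milliseconds yields N = 13, K = 8, P = 5, matching the Bluetooth example below.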
The following are examples for determining the total number of blocks, the number of source blocks, and the number of parity blocks for different audio stream bit rates and different wireless communication protocols. In some embodiments, the wireless communication protocol may be such that audio packets are transmitted at a predetermined packet size and using a predetermined packet scheduling interval. In some implementations, the wireless communication protocol may be a protocol that enables transmission of uncompressed audio, and its effective bandwidth may also be used to reliably transmit compressed audio. For example, in some embodiments, the packets may include uncompressed audio data using, for example, 24-bit (or 16-bit, etc.) pulse-code modulation (PCM). The examples given below assume a frame duration of 32 milliseconds. However, this is merely an example, and the techniques described herein may be applied to frames having different durations. Furthermore, the techniques described herein may be applied to various audio stream bit rates, to wireless communication protocols that utilize other packet sizes and/or packet scheduling specifications, and/or to other wireless communication protocols (e.g., Wi-Fi) than those described in the examples below.
The following table indicates the frame sizes (in bytes) that can be used for audio streams having various bit rates (in kbps). The frame size may be used to determine the number of source blocks.
Bit rate (kbps)    Frame size (bytes)
 128                 512
 384                1536
 448                1792
 768                3072
1355                5420
1664                6656
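Each row of the table follows from frame size = bit rate x frame duration, as the sketch below shows for the assumed 32-millisecond frame (the helper name is an invention for illustration):

```python
def frame_size_bytes(bit_rate_kbps: int, frame_duration_ms: float = 32.0) -> int:
    """Frame size in bytes for a given bit rate and frame duration.

    1 kbps is 1 bit per millisecond, so kbps x ms gives bits; divide by 8 for bytes.
    """
    return int(bit_rate_kbps * frame_duration_ms / 8)
```

For example, 1355 kbps over 32 milliseconds gives 43,360 bits, or 5420 bytes, as in the last-but-one row.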
In a first example, the wireless communication link may transmit audio packets each having a packet size of 288 bytes, using a packet scheduling interval of 1 packet every 1 millisecond. Thus, for a frame size of 3072 bytes (corresponding to a bit rate of 768 kbps), the total number of blocks N may be determined as the frame duration (32 milliseconds) divided by the packet interval (1 millisecond). That is, N may be a total of 32 blocks in this case. Continuing with the first example, the number of source blocks K may be determined as the frame size (3072 bytes) divided by the packet size (288 bytes). That is, in this case, K may be 10.67, rounded up to 11 source blocks. Correspondingly, the number of parity blocks P may be 32 - 11, or 21 parity blocks.
In a second example, using a packet size of 288 bytes and a packet scheduling interval of 1 packet every 1 millisecond, for a frame size of 5420 bytes (corresponding to a bit rate of 1355 kbps), the total number of blocks N may be determined as the frame duration (32 milliseconds) divided by the packet interval (1 millisecond). That is, N may be a total of 32 blocks. Continuing with this second example, the number of source blocks K may be determined as the frame size (5420 bytes) divided by the packet size (288 bytes). That is, in this case, K may be 18.82, rounded up to 19 source blocks. It should be noted that in some embodiments, a portion of the last source block may be used to encode bytes of the next frame. In some such embodiments, these additional bytes may be used to correct any corruption in the first source block of the next frame. Additionally or alternatively, in some embodiments, a copy of the first byte of the frame may be used. Correspondingly, the number of parity blocks P may be 32 - 19, or 13 parity blocks. In some implementations, such as in some cases where the frame duration is an integer number of milliseconds, the total number of blocks used for frames of the audio signal may be the same (e.g., a total of 32 blocks per frame) regardless of the bit rate and/or frame size. In other words, in some embodiments, the total number of blocks used for frames of the audio signal may depend on the packet scheduling interval rather than on the bit rate and/or the frame size. In other embodiments, the total number of blocks per frame may vary based on the bit rate and/or frame size.
In examples using bluetooth as the wireless communication protocol, the size of the packet may be up to 240 bytes (e.g., 200 bytes, 224 bytes, 230 bytes, 236 bytes, 240 bytes, etc.). Bluetooth packets may be transmitted using packet scheduling intervals that are multiples of 1.25 milliseconds. For example, bluetooth packets may be scheduled to be transmitted at 1.25 millisecond slots, 2.5 millisecond slots, etc. In a first example, for a frame size of 1792 bytes (corresponding to a bit rate of 448 kbps), a packet size of 224 bytes, and for a packet scheduling interval of 2.5 milliseconds, the total number of blocks, N, may be determined as the frame duration (32 milliseconds) divided by the packet scheduling interval (2.5 milliseconds). That is, in this case, N may be 12.8 or rounded up to 13 blocks in total. Continuing with the first example, the number of source blocks K may be determined as the frame size (1792 bytes) divided by the packet size (224 bytes). That is, in this case, K may be 8 source blocks. Correspondingly, the number of parity blocks P may be 13-8, or 5 parity blocks.
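The block-count arithmetic walked through in the three examples above can be sketched as a small helper. This is an illustrative function, not from the patent; the name and signature are assumptions, and the rounding (ceiling for both N and K) follows the "rounded up" language of the examples.

```python
import math

def fec_block_counts(frame_bytes, frame_ms, packet_bytes, packet_interval_ms):
    """Derive (total, source, parity) block counts from link characteristics.

    Illustrative sketch of the arithmetic in the examples above:
      N = ceil(frame duration / packet scheduling interval)
      K = ceil(frame size / packet size)
      P = N - K
    """
    n_total = math.ceil(frame_ms / packet_interval_ms)   # total blocks per frame
    k_source = math.ceil(frame_bytes / packet_bytes)     # source blocks, rounded up
    p_parity = n_total - k_source                        # remainder carries parity
    return n_total, k_source, p_parity

# Bluetooth example: 1792-byte frame, 224-byte packets, 2.5 ms interval
# -> (13, 8, 5), i.e., N = 13 blocks, K = 8 source blocks, P = 5 parity blocks
print(fec_block_counts(1792, 32, 224, 2.5))
```

Note that the headroom for parity blocks comes entirely from the gap between the packet budget N (fixed by timing) and the packet count K actually needed for the payload.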
It should be noted that in some embodiments, the number of parity blocks may effectively depend on the bit rate of the audio signal. For example, since the number of source blocks is proportional to the frame size, which in turn is proportional to the bit rate, the number of parity blocks may be inversely related to the bit rate when the total number of blocks is fixed. In other words, a lower bit rate signal may use more parity blocks than a higher bit rate signal. It should be noted that in some embodiments, the bit rate may be adjusted to strike a balance between audio quality and forward error correction capability. For example, in some embodiments, the receiver device may transmit a message to the sender device indicating that the quality of the received audio signal is degraded. Continuing with this example, the transmitter may select a lower bit rate for transmitting the audio signal, which may reduce audio quality but leaves more headroom for transmitting parity blocks, thereby enabling the receiver device to perform forward error correction.
Referring back to fig. 2, at 206, process 200 may subdivide the frame into K source blocks. Subdividing the frame may involve dividing the data of the frame into K blocks such that the data is substantially equally distributed among the K source blocks.
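The subdivision at block 206 can be sketched as follows. This is a hypothetical helper, not from the patent: it gives the first K-1 blocks a common size and lets the last block carry the remainder, so the data is substantially equally divided.

```python
def subdivide_frame(frame: bytes, k: int) -> list:
    """Split a frame's data into k source blocks of (nearly) equal size.

    Illustrative sketch: all blocks share a common ceiling-division size,
    with the final block absorbing any shortfall.
    """
    block_size = -(-len(frame) // k)  # ceiling division without math.ceil
    return [frame[i * block_size:(i + 1) * block_size] for i in range(k)]
```

For a 3072-byte frame split into K = 11 source blocks, this yields ten 280-byte blocks and one 272-byte block, matching the first example above.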
At 208, process 200 may generate P parity blocks using the K source blocks. For example, in some embodiments, process 200 may use a particular forward error correction technique or algorithm to generate the P parity blocks from the K source blocks. Examples of forward error correction techniques that may be used include Reed-Solomon encoding, Hamming encoding, and the like. In an example using Reed-Solomon encoding with N=32, K=19, and P=13, a Reed-Solomon scheme of (32, 19) may be used. Similarly, in an example using Reed-Solomon encoding with N=13, K=8, and P=5, a Reed-Solomon scheme of (13, 8) may be used. It should be appreciated that other forward error correction techniques may be used, such as Hamming codes, binary convolutional codes, low-density parity-check codes, and the like. In some embodiments, the forward error correction technique used may be selected based on the correction efficiency of the technique and/or any other considerations.
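A full GF(256) Reed-Solomon implementation is beyond a short sketch, so the illustration below uses the simplest member of the erasure-code family instead: a single byte-wise XOR parity block, which lets a decoder rebuild exactly one lost source block. It conveys the structure described at block 208 (parity generated from the source blocks, then used later to reconstruct an erasure) under the assumption of equal-length blocks; an (N, K) Reed-Solomon code generalizes this to N - K recoverable erasures. All function names here are illustrative.

```python
def make_xor_parity(source_blocks):
    """Generate one parity block as the byte-wise XOR of equal-length source blocks.

    Stand-in for the Reed-Solomon parity generation named in the text; a single
    XOR parity block can repair exactly one erased source block.
    """
    parity = bytearray(len(source_blocks[0]))
    for block in source_blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def reconstruct_missing(received, parity):
    """Rebuild the single block marked None by XORing the parity with the survivors."""
    missing = [i for i, b in enumerate(received) if b is None]
    assert len(missing) == 1, "one XOR parity block repairs exactly one erasure"
    repaired = bytearray(parity)
    for block in received:
        if block is not None:
            for i, byte in enumerate(block):
                repaired[i] ^= byte
    return missing[0], bytes(repaired)
```

Because XOR is its own inverse, XORing the parity with every surviving block cancels their contributions and leaves the missing block, which is the same algebraic idea (in a one-dimensional form) that Reed-Solomon decoding applies per symbol position.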
At 210, process 200 may transmit K source blocks and P parity blocks using a wireless communication protocol. For example, in some implementations, the source block and the parity block may be transmitted as a sequence of packets, where each packet is transmitted according to a packet scheduling constraint associated with a wireless communication protocol. For example, for some communication protocols, the sequence of packets may be transmitted in the form of one packet per millisecond. As another example, where the wireless communication protocol is bluetooth, the sequence of packets may be transmitted in the form of one packet every 2.5 milliseconds (or any other suitable multiple of 1.25 milliseconds).
It should be noted that in some implementations, a header may be added to each packet. For example, the header may indicate the forward error correction scheme (e.g., used at block 208) used to generate the parity blocks. As a more specific example, the header may indicate the type of algorithm used, the total number of blocks, the number of source blocks, and/or the number of parity blocks. As another example, in some embodiments, the header may include a checksum, such as a Cyclic Redundancy Check (CRC), which may be used to determine whether the packet has been corrupted. As yet another example, in some embodiments, the header may include a sequence counter that may be used to detect discarded packets.
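A header carrying the fields just listed might be packed as follows. The patent does not fix a wire layout, so the 10-byte format here (scheme identifier, N, K, P, a 16-bit sequence counter, and a CRC-32 over the payload) is purely an assumption for illustration.

```python
import struct
import zlib

# Hypothetical header layout, not specified by the patent:
# scheme id, total blocks N, source blocks K, parity blocks P,
# 16-bit sequence counter, 32-bit CRC over the payload (big-endian).
HEADER = struct.Struct(">BBBBHI")

def pack_packet(scheme, n, k, p, seq, payload):
    """Prepend a header carrying the FEC parameters, a sequence counter, and a CRC."""
    return HEADER.pack(scheme, n, k, p, seq, zlib.crc32(payload)) + payload

def unpack_packet(packet):
    """Parse the header and report whether the payload survived transmission intact."""
    scheme, n, k, p, seq, crc = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:]
    return (scheme, n, k, p, seq), payload, zlib.crc32(payload) == crc
```

A receiver can use the CRC flag to mark a block as corrupted and a gap in the sequence counter to mark blocks as discarded, feeding both counts into the decision logic of process 400 described below in unchanged form.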
Process 200 may then loop back to block 202 and obtain another frame of the audio signal. Process 200 may loop through blocks 202 through 210 until the audio signal is completely transmitted. Additionally or alternatively, in some implementations, the process 200 may loop through blocks 202 through 210 until an instruction to cease transmitting the audio signal is received (e.g., a signal received from a user device, from a remote control device, from a keyboard or other user input device, a voice signal, a gesture associated with an instruction to cease transmitting the audio signal, etc.).
It should be noted that the transmission of parity blocks may increase the overall transmission bit rate associated with the audio stream. In one example, an audio stream transmitted at a bit rate of 448kbps without a parity block may be transmitted at a bit rate of 653kbps due to the presence of additional data associated with the parity block. However, using parity blocks to reconstruct discarded or corrupted source blocks (using forward error correction) reduces or eliminates retransmission of frames altogether, thereby improving the overall efficiency of the wireless channel.
In some implementations, the total number of blocks used to encode a frame may vary from frame to frame. In some embodiments, the number of parity blocks may vary from frame to frame based on the total number of blocks used. For example, the number of source blocks may remain fixed between frames even though the total number of blocks used varies. Thus, the number of parity blocks may vary from frame to frame to absorb this variation in the total number of blocks while the number of source blocks is fixed. In some implementations, the variation between frames may occur in a repeatable manner that depends on timing information (e.g., packet scheduling constraints) associated with a particular wireless communication protocol.
For example, where bluetooth is used as the wireless communication protocol, packets (one packet for each block) may be transmitted at intervals of 2.5 milliseconds (or any other suitable multiple of 1.25 milliseconds). Thus, in the case of subdividing a frame into 9 source blocks, the first frame may be associated with 4 parity blocks, for a total of 13 blocks. Since the transmission time of these 13 blocks (corresponding to one frame of the audio signal) is 32.5 milliseconds (e.g., 13 × 2.5 milliseconds), which is longer than the 32-millisecond duration of the frame, the transmitting device would gradually lag behind the audio stream. Thus, after a predetermined number of frames, the total number of blocks may be reduced (e.g., by one) to account for the additional time required to transmit the blocks of the previous frames. For example, after four frames are each transmitted using a total of 13 blocks (e.g., 9 source blocks and 4 parity blocks), a fifth frame may be transmitted using a total of 12 blocks (e.g., 9 source blocks and 3 parity blocks). Note that in this scheme, in which 13 blocks are used for each of the first four frames and 12 blocks for the fifth frame, a total of 64 blocks (e.g., each block as a packet) are transmitted, spanning 160 milliseconds (e.g., 64 × 2.5 milliseconds), which corresponds to the duration spanned by the five frames (e.g., 5 × 32 milliseconds).
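The realignment arithmetic above can be sketched as follows; the helper and its argument names are illustrative, not from the patent. Over a five-frame cycle of 32-millisecond frames at a 2.5-millisecond packet interval, exactly 64 packets fit, and the remainder of dividing that budget by the frame count decides which frames get the extra parity block.

```python
def block_schedule(frame_ms=32.0, interval_ms=2.5, k_source=9, cycle_frames=5):
    """Distribute one cycle's packet budget across its frames as (source, parity) pairs.

    Illustrative sketch of the repeatable per-frame variation described above.
    """
    total_packets = round(cycle_frames * frame_ms / interval_ms)  # 64 packets per cycle
    base = total_packets // cycle_frames                          # 12 blocks per frame
    extra = total_packets % cycle_frames                          # 4 frames get one more
    return [(k_source, base + (1 if f < extra else 0) - k_source)
            for f in range(cycle_frames)]

# -> [(9, 4), (9, 4), (9, 4), (9, 4), (9, 3)]: four 13-block frames, one 12-block frame
print(block_schedule())
```

Because the schedule depends only on the frame duration and the packet interval, it repeats identically every cycle, matching the "repeatable manner" the text attributes to packet scheduling constraints.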
Fig. 3 illustrates an example grouping scheme in which the number of blocks used per frame varies from frame to frame, according to some embodiments. As shown, for frame 1, 9 source blocks and 4 parity blocks are transmitted. The blocks are transmitted in groups of 4, wherein each block is transmitted in association with a packet, and wherein the packets are transmitted at intervals of 2.5 milliseconds. Thus, 4 blocks (whether source or parity) are transmitted within each 10-millisecond interval. Similarly, frames 2, 3, and 4 are associated with 9 source blocks and 4 parity blocks. However, due to the additional time required to transmit 13 blocks per frame, for frame 5, 9 source blocks and only 3 parity blocks (i.e., one fewer parity block) are transmitted, allowing the encoding scheme to be realigned with the frames of the audio signal.
In some implementations, a receiving device (e.g., a bluetooth or Wi-Fi connected speaker or headset, etc.) may receive a sequence of packets, each packet corresponding to one block of the transmission (e.g., one source block or one parity block). In some embodiments, a decoder of a receiving device may determine the number of corrupted or discarded blocks in a set of source blocks associated with a received sequence of packets. In some implementations, in response to determining that at least one source block is corrupted or discarded in transmission, the decoder may determine whether the corrupted or discarded block can be reconstructed using the parity block. For example, the decoder may determine whether the number of corrupted blocks is less than the number of parity blocks. In some embodiments, in response to determining that the corrupted block can be reconstructed, the decoder can reconstruct the corrupted block and cause an audio frame to be presented that includes the reconstructed source block. Conversely, in response to determining that the corrupted block cannot be reconstructed, the decoder may generate a replacement frame or "virtual" frame, and may cause the replacement frame or virtual frame to be presented.
Fig. 4 illustrates an example of a process 400 for reconstructing a source block and rendering an audio signal, in accordance with some embodiments. In some examples, blocks of process 400 may be performed by a control system (such as control system 610 of fig. 6). In some embodiments, the blocks of process 400 may be performed on a receiving device (such as receiving device 104 of fig. 1). In some embodiments, the blocks of process 400 may be performed by a decoder and/or a receiver of a receiving device. In some implementations, the blocks of process 400 may be performed in a different order than shown in fig. 4. In some embodiments, two or more blocks of process 400 may be performed substantially in parallel. In some embodiments, one or more blocks of process 400 may be omitted.
The process 400 may begin at 402 with receiving a set of source blocks and a set of parity blocks corresponding to a frame of an audio signal. In some embodiments, each block may be associated with a received packet in a sequence of received packets. In some implementations, the process 400 may obtain a block (e.g., a source block or a parity block) from the packet. In some implementations, the process 400 may use information stored in the header of the packet to obtain the block.
At 404, the process 400 may determine the number of corrupted or discarded source blocks in the set of source blocks. In some implementations, the process 400 may identify corrupted source blocks of the set of source blocks based at least in part on a checksum (e.g., a CRC value) included in the header of the packet corresponding to each source block. Any suitable forward error correction technique, such as Reed-Solomon encoding or the like, may be used to identify corrupted source blocks. In some implementations, the process 400 may identify the number of discarded source blocks by determining the difference between the expected number of source blocks (e.g., as specified by the forward error correction scheme being used) and the number of received source blocks.
At 406, the process 400 may determine whether the number of corrupted or discarded source blocks is zero. If at 406 the process 400 determines that the number of corrupted or discarded source blocks is zero ("yes" at 406), the process 400 may proceed to 408, where the set of source blocks may be used to generate an audio frame. For example, process 400 may reconstruct a full-length audio frame using data included in the source blocks in the set of source blocks. Process 400 may then proceed to block 416, where the audio frame may be presented (e.g., via one or more speakers, one or more headphones, etc.).
Conversely, if at 406 the process 400 determines that the number of corrupted or discarded blocks is not zero (no at 406), the process 400 may proceed to block 410, i.e., may determine whether the number of corrupted or discarded source blocks is less than or equal to the number of parity blocks. For example, in the case where the number of received parity blocks is three, if the number of corrupted or discarded blocks is one, two, or three, the process 400 may determine that the number of corrupted or discarded blocks is less than or equal to the number of parity blocks. It should be noted that at 402, the number of received parity blocks may be less than the number of parity blocks transmitted by the transmitting device (e.g., where the parity blocks are discarded). Thus, in some embodiments, the process 400 may compare the number of corrupted or discarded source blocks to the number of received parity blocks (e.g., rather than to the number of expected or transmitted parity blocks). Additionally or alternatively, in some implementations, one or more of the received parity blocks may be corrupted. Thus, in some implementations, the process 400 may compare the number of corrupted or discarded source blocks to the number of received uncorrupted parity blocks (e.g., parity blocks available for forward error correction).
If at 410 the process 400 determines that the number of corrupted or discarded source blocks is less than or equal to the number of parity blocks ("yes" at 410), the process 400 may proceed to 412, where an audio frame may be generated by reconstructing the corrupted or discarded source blocks. For example, process 400 may use a forward error correction scheme to reconstruct corrupted or discarded source blocks using the received parity blocks. As described above, the forward error correction scheme (e.g., Reed-Solomon encoding, Hamming encoding, etc.) used to generate the parity blocks may similarly be used to reconstruct the corrupted or discarded source blocks. In some implementations, the forward error correction scheme to be used may be specified in one or more packet headers of packets corresponding to the received source blocks and the received parity blocks (e.g., as described above in connection with block 402).
Conversely, if at 410 the process 400 determines that the number of corrupted or discarded source blocks exceeds the number of parity blocks ("no" at 410), the process 400 may proceed to 414, i.e., an alternate or virtual audio frame may be generated. In some embodiments, the replacement audio frame or virtual audio frame may be an audio frame that soft mutes the output of the decoder, for example, by reducing the overall sound level during the audio frame presentation duration. In some implementations, the replacement frame may be generated using audio data from a previous frame and/or a next frame. For example, in some implementations, interpolation may be used to generate data corresponding to the replacement frame. In one example, interpolation may be performed between audio data from a previous frame and audio data of a next frame.
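The branch structure of blocks 406 through 414 can be condensed into a small dispatcher. This sketch assumes (as the discussion of block 410 suggests) that lost or CRC-failing blocks arrive represented as None, and it counts only usable parity blocks; the function name and return labels are illustrative.

```python
def choose_action(source_blocks, parity_blocks):
    """Mirror the decision flow of process 400: decode, repair, or substitute.

    Assumes blocks that were discarded or failed their CRC are given as None.
    """
    bad_sources = sum(1 for b in source_blocks if b is None)
    usable_parity = sum(1 for b in parity_blocks if b is not None)
    if bad_sources == 0:
        return "decode"        # block 408: use the source blocks directly
    if bad_sources <= usable_parity:
        return "repair"        # block 412: FEC reconstruction is possible
    return "substitute"        # block 414: fall back to a replacement/virtual frame
```

Comparing against the count of usable parity blocks, rather than the count transmitted, captures the point made above that parity blocks themselves may be discarded or corrupted in transit.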
Regardless of whether block 412 or 414 is performed, process 400 may cause the presentation of audio frames at 416. For example, process 400 may cause audio frames (whether audio frames including reconstructed corrupted or discarded source blocks, or replacement audio frames/virtual audio frames) to be presented via one or more speakers, one or more headphones, or the like. It should be noted that prior to causing the audio frame to be presented, in some embodiments, the process 400 may render the audio frame, e.g., distribute the audio signal to one or more speakers, headphones, etc., to create a particular perceived impression.
In some implementations, the techniques described above may generate a number of source blocks and a number of parity blocks that are optimized for the channel capacity of a particular type of wireless link. More particularly, the number of source blocks and the number of parity blocks may be selected (using the techniques described herein) such that relatively few frames need to be retransmitted or replaced with virtual frames because there are insufficient parity blocks to reconstruct the corrupted or discarded source blocks. Further, by determining the number of source blocks and the number of parity blocks based on packet scheduling constraints, the techniques described herein may allow a buffer associated with a decoder (e.g., of a receiving device) to generally maintain a non-zero amount of data or a non-zero duration of audio data, thereby reducing overall system latency. For example, in the case where the transmission duration of the total number of blocks (e.g., source blocks and parity blocks) associated with a frame exceeds the duration of the frame, there may be an initial delay before the decoder receives all blocks corresponding to the frame. However, due to this initial delay, the decoder may then be able to maintain a buffer occupancy that generally stays above 0 milliseconds. This may allow the receiving device to play the audio stream continuously, without pausing or interrupting to allow the buffer to catch up, which may be particularly advantageous when presenting real-time audio data and/or audio content synchronized with video content.
Fig. 5 illustrates an example of buffer delay using the source block and parity block scheme illustrated and described above in connection with fig. 3. Fig. 3 depicts an example scheme of source blocks and parity blocks associated with bluetooth communications, wherein packets (one for each block) are transmitted at 2.5-millisecond intervals, and wherein 4 blocks are transmitted in every 10-millisecond isochronous interval. In the example shown and described above in fig. 3, 9 source blocks and 4 parity blocks are transmitted for each of the first four frames, and 9 source blocks and 3 parity blocks are transmitted for the fifth frame.
Referring to fig. 5, a graph 502 shows the duration of audio data stored in a buffer of a decoder in units of milliseconds. Each step up in curve 502 corresponds to a block received within an isochronous interval of 10 milliseconds (e.g., where 4 blocks are received at a packet interval time of 2.5 milliseconds). Because 13 blocks are used for the first four frames, the decoder receives all of the audio data for one frame (which has a duration of 32 milliseconds) in 40 milliseconds (e.g., in four isochronous frames). In other words, the 13 th block is received during the fourth isochronous interval (spanning 30 to 40 milliseconds), and thus the decoder is then ready to render the first audio frame after 40 milliseconds. Thus, as shown by curve 502 in fig. 5, after 40 milliseconds have elapsed, the duration of the audio data stored in the buffer drops from 40 milliseconds to 8 milliseconds because one frame of audio data (e.g., 32 milliseconds) is presented. Curve 504 illustrates the duration of unplayed audio data stored in the buffer. As shown, the buffer always has at least part of the data of the next frame, since there is a mismatch between the duration required to receive a 32 millisecond frame of audio data relative to the duration of the audio frame. Further, due to forward error correction (e.g., parity blocks), the data in the buffer may generally correspond to available source blocks (e.g., may be used as is and/or may be reconstructed using parity blocks), allowing for more robust audio playback. This may allow the decoder to achieve continuous or near continuous audio playback without pausing in order to have time to buffer more data.
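The timing claim behind fig. 5 can be checked with a small simulation. This is a simplified model, with all names assumed for illustration: one block arrives every 2.5 milliseconds, playback of the first frame starts at 40 milliseconds (after the fourth isochronous interval), and each subsequent frame is due 32 milliseconds after its predecessor.

```python
def frame_arrival_vs_deadline(blocks_per_frame=(13, 13, 13, 13, 12),
                              interval_ms=2.5, start_ms=40.0, frame_ms=32.0):
    """For each frame, pair the arrival time of its last block with its playback deadline.

    Simplified model of the fig. 5 scenario: blocks arrive back to back at the
    packet interval, and playback begins after the initial buffering delay.
    """
    timeline = []
    received = 0
    for f, n_blocks in enumerate(blocks_per_frame):
        received += n_blocks
        arrival = received * interval_ms       # when this frame's last block lands
        deadline = start_ms + f * frame_ms     # when the decoder must present it
        timeline.append((arrival, deadline))
    return timeline
```

Under this model every arrival (32.5, 65, 97.5, 130, 160 milliseconds) precedes its deadline (40, 72, 104, 136, 168 milliseconds), so the buffer never empties and playback never stalls, which is the behavior curve 504 depicts.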
Fig. 6 is a block diagram illustrating an example of components of an apparatus capable of implementing various aspects of the disclosure. As with the other figures provided herein, the types and numbers of elements shown in fig. 6 are provided by way of example only. Other embodiments may include more, fewer, and/or different types and numbers of elements. According to some examples, the apparatus 600 may be configured to perform at least some of the methods disclosed herein. In some implementations, the apparatus 600 may be or include one or more components of a television, an audio system, a mobile device (e.g., a cellular telephone), a laptop computer, a tablet device, a smart speaker, or another type of device.
According to some alternative embodiments, the apparatus 600 may be or may include a server. In some such examples, apparatus 600 may be or may include an encoder. Thus, in some cases, the apparatus 600 may be a device configured for use within an audio environment, such as a home audio environment, while in other cases, the apparatus 600 may be a device configured for use in a "cloud", such as a server.
In this example, the apparatus 600 includes an interface system 605 and a control system 610. In some implementations, the interface system 605 may be configured to communicate with one or more other devices in the audio environment. In some examples, the audio environment may be a home audio environment. In other examples, the audio environment may be another type of environment, such as an office environment, an automobile environment, a train environment, a street or sidewalk environment, a park environment, and so forth. In some implementations, the interface system 605 may be configured to exchange control information and associated data with an audio device of an audio environment. In some examples, the control information and associated data may relate to one or more software applications being executed by the apparatus 600.
In some implementations, the interface system 605 may be configured to receive a content stream or to provide a content stream. The content stream may include audio data. The audio data may include, but may not be limited to, audio signals. In some cases, the audio data may include spatial data such as channel data and/or spatial metadata. In some examples, the content stream may include video data and audio data corresponding to the video data.
The interface system 605 may include one or more network interfaces and/or one or more external device interfaces (e.g., one or more Universal Serial Bus (USB) interfaces). According to some embodiments, the interface system 605 may include one or more wireless interfaces. The interface system 605 may include one or more devices for implementing a user interface, such as one or more microphones, one or more speakers, a display system, a touch sensor system, and/or a gesture sensor system. In some examples, interface system 605 may include one or more interfaces between control system 610 and a memory system (such as optional memory system 615 shown in fig. 6). However, in some cases, control system 610 may include a memory system. In some implementations, the interface system 605 may be configured to receive input from one or more microphones in an environment.
For example, control system 610 may include a general purpose single or multi-chip processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, and/or discrete hardware components.
In some implementations, the control system 610 may reside in more than one device. For example, in some implementations, a portion of the control system 610 may reside in a device within one of the environments depicted herein, and another portion of the control system 610 may reside in a device outside of the environment, such as a server, mobile device (e.g., smart phone or tablet), or the like. In other examples, a portion of control system 610 may reside in a device within an environment, and another portion of control system 610 may reside in one or more other devices within the environment. For example, a portion of the control system 610 may reside in a device (such as a server) implementing a cloud-based service, and another portion of the control system 610 may reside in another device (such as another server, a memory device, etc.) implementing a cloud-based service. In some examples, the interface system 605 may also reside in more than one device.
In some implementations, the control system 610 may be configured to at least partially perform the methods disclosed herein. According to some examples, the control system 610 may be configured to implement the following methods: determining a number of source blocks and/or a number of parity blocks for a particular wireless communication protocol; generating one or more parity blocks; transmitting and/or receiving packets; reconstructing the corrupted source block; such that reconstructed audio frames, etc., are presented.
Some or all of the methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on one or more non-transitory media. Such non-transitory media may include, for example, those memory devices described herein, including but not limited to Random Access Memory (RAM) devices, read Only Memory (ROM) devices, and the like. One or more non-transitory media may reside, for example, in the optional memory system 615 and/or the control system 610 shown in fig. 6. Accordingly, various innovative aspects of the subject matter described in this disclosure can be implemented in one or more non-transitory media having software stored thereon. For example, the software may include instructions for: determining a number of source blocks and/or a number of parity blocks for a particular wireless communication protocol; generating one or more parity blocks; transmitting and/or receiving packets; reconstructing the corrupted source block; such that reconstructed audio frames, etc., are presented. The software may be executable by one or more components of a control system, such as control system 610 of fig. 6, for example.
In some examples, the apparatus 600 may include an optional microphone system 620 shown in fig. 6. Optional microphone system 620 may include one or more microphones. In some implementations, one or more microphones may be part of or associated with another device (such as a speaker of a speaker system, a smart audio device, etc.). In some examples, the apparatus 600 may not include the microphone system 620. However, in some such embodiments, the apparatus 600 may still be configured to receive microphone data for one or more microphones in an audio environment via the interface system 605. In some such implementations, a cloud-based implementation of the apparatus 600 may be configured to receive microphone data, or noise indicia corresponding at least in part to microphone data, from one or more microphones in an audio environment via the interface system 605.
According to some embodiments, the apparatus 600 may comprise an optional loudspeaker system 625 shown in fig. 6. The optional loudspeaker system 625 may include one or more loudspeakers, which may also be referred to herein as "speakers" or, more generally, as "audio reproduction transducers." In some examples (e.g., cloud-based implementations), the apparatus 600 may not include the loudspeaker system 625. In some embodiments, the apparatus 600 may comprise a headset. Headphones may be connected or coupled to the apparatus 600 via a headphone jack or via a wireless connection (e.g., bluetooth).
Aspects of the present disclosure include a system or device configured (e.g., programmed) to perform one or more examples of the disclosed methods, and a tangible computer-readable medium (e.g., disk) storing code for implementing one or more examples of the disclosed methods or steps thereof. For example, some disclosed systems may be or include a programmable general purpose processor, digital signal processor, or microprocessor programmed with software or firmware and/or otherwise configured to perform any of a variety of operations on data, including embodiments of the disclosed methods or steps thereof. Such a general purpose processor may be or include a computer system including an input device, memory, and a processing subsystem programmed (and/or otherwise configured) to perform one or more examples of the disclosed methods (or steps thereof) in response to data asserted thereto.
Some embodiments may be implemented as a configurable (e.g., programmable) Digital Signal Processor (DSP) configured (e.g., programmed and otherwise configured) to perform the required processing on the audio signals, including execution of one or more examples of the disclosed methods. Alternatively, embodiments of the disclosed systems (or elements thereof) may be implemented as a general-purpose processor (e.g., a Personal Computer (PC) or other computer system or microprocessor, which may include an input device and memory) programmed and/or otherwise configured with software or firmware to perform any of a variety of operations, including one or more examples of the disclosed methods. Alternatively, elements of some embodiments of the inventive system are implemented as a general-purpose processor or DSP configured (e.g., programmed) to perform one or more examples of the disclosed methods, and the system also includes other elements (e.g., one or more microphones and/or one or more loudspeakers). A general-purpose processor configured to perform one or more examples of the disclosed methods may be coupled to an input device (e.g., a mouse and/or keyboard), memory, and a display device.
Another aspect of the disclosure is a computer-readable medium (e.g., a disk or other tangible storage medium) storing code (e.g., an encoder executable to perform one or more examples of the disclosed methods or steps thereof) for performing one or more examples of the disclosed methods or steps thereof.
While specific embodiments of, and applications for, the present disclosure have been described herein, it will be apparent to those of ordinary skill in the art that many more modifications than mentioned herein are possible without departing from the scope of the disclosure described and claimed herein. It is to be understood that while certain forms of the disclosure have been illustrated and described, the disclosure is not to be limited to the specific embodiments described and illustrated or to the specific methods described.

Claims (20)

1. A method for transmitting an audio stream, the method comprising:
obtaining a frame of an audio signal;
determining a number of source blocks into which frames of the audio signal are to be divided and a number of parity blocks to be generated for forward error correction, wherein the number of source blocks and the number of parity blocks are determined based at least in part on characteristics of a wireless communication protocol to be used to transmit the audio stream;
dividing the frame of the audio signal into the number of source blocks;
generating a number of parity blocks using the source block; and
transmitting the source blocks and the parity blocks, wherein the parity blocks are usable by a decoder to reconstruct one or more corrupted or lost source blocks.
2. The method of claim 1, wherein the parity block is generated using a reed-solomon encoder.
3. The method of any of claims 1 or 2, wherein the characteristics of the wireless communication protocol include timing information indicating a packet schedule of the wireless communication protocol.
4. A method according to claim 3, wherein the number of parity blocks is determined by:
determining a total number of blocks to be used for encoding the frame of the audio signal based on a duration of the frame of the audio signal and the timing information indicating the packet schedule;
determining the number of source blocks; and
determining the number of parity blocks as the difference between the total number of blocks and the number of source blocks.
5. The method of any of claims 1 to 4, wherein the characteristics of the wireless communication protocol include a frame size of a frame of the audio signal and/or a packet size of a packet to be used for transmitting a source block of the source blocks or a parity block of the parity blocks.
6. The method of claim 5, wherein the number of source blocks to be used is determined by determining a number of packets each having the packet size to transmit a frame having the frame size.
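As an illustration of the computations in claims 4 and 6, the following sketch derives the source and parity block counts from the packet schedule and frame/packet sizes. This is a minimal, hypothetical implementation; the function name, parameter names, and the example numbers are assumptions for illustration, not taken from the claims.

```python
import math

def block_counts(frame_duration_ms, packet_interval_ms,
                 frame_size_bytes, packet_payload_bytes):
    """Sketch of claims 4 and 6: total blocks come from the packet
    schedule, source blocks from the frame/packet sizes, and parity
    blocks are the difference."""
    # Claim 4: transmission opportunities available during one frame.
    total_blocks = int(frame_duration_ms // packet_interval_ms)
    # Claim 6: packets of the given payload size needed to carry the frame.
    source_blocks = math.ceil(frame_size_bytes / packet_payload_bytes)
    parity_blocks = total_blocks - source_blocks
    if parity_blocks < 0:
        raise ValueError("packet schedule cannot carry the frame")
    return source_blocks, parity_blocks

# e.g. a 10 ms frame of 400 bytes with a 100-byte packet every 2 ms:
# 5 transmission slots, 4 source blocks, so 1 parity block.
print(block_counts(10, 2, 400, 100))
```

The surplus of scheduled packets over the packets strictly needed for the payload is what funds the forward error correction.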
7. The method of any of claims 1 to 6, wherein the number of parity blocks generated for a frame of the audio signal is different from the number of parity blocks generated for a previous frame of the audio signal.
8. The method of any of claims 1-7, wherein the number of parity blocks generated for a frame of the audio signal varies in a repeatable manner, the repeatable manner being determined based on timing information indicative of packet scheduling of the wireless communication protocol.
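One way the repeatable variation of claim 8 can arise: when the packet interval does not divide the frame duration evenly, the number of transmission slots falling within successive frames cycles, and the parity count cycles with it. The sketch below is hypothetical; the 10 ms frame, 3 ms slot interval, and 2 source blocks are illustrative assumptions, not values from the claims.

```python
def parity_schedule(frame_ms, slot_ms, n_source, n_frames):
    """Parity-block count per frame when the slot interval does not
    divide the frame duration evenly (sketch of claim 8)."""
    counts = []
    for f in range(n_frames):
        start, end = f * frame_ms, (f + 1) * frame_ms
        first = -(-start // slot_ms) * slot_ms  # first slot >= start
        slots = len(range(first, end, slot_ms))
        counts.append(slots - n_source)
    return counts

# 10 ms frames, one packet slot every 3 ms, 2 source blocks per frame:
# the parity count repeats with a period of 3 frames.
print(parity_schedule(10, 3, 2, 6))  # [2, 1, 1, 2, 1, 1]
```

The receiver can rely on this repeating pattern, since it is fixed by the protocol's packet schedule rather than by per-frame signaling.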
9. The method of any of claims 1 to 8, wherein frames of the audio signal are not retransmitted in response to portions of the frames being discarded or corrupted.
10. The method of any of claims 1-9, wherein the wireless communication protocol is a bluetooth protocol.
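The transmit side of claims 1 and 2 can be sketched as follows. As a dependency-free stand-in for the Reed-Solomon encoder of claim 2, this sketch appends a single XOR parity block, which lets a decoder rebuild exactly one lost source block; a real Reed-Solomon erasure code would generate N parity blocks tolerating up to N losses. Function names and the padding scheme are assumptions for illustration.

```python
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_frame(frame: bytes, n_source: int) -> list:
    """Split one audio frame into n_source equal blocks and append one
    XOR parity block (stand-in for Reed-Solomon parity)."""
    block_len = -(-len(frame) // n_source)              # ceil division
    padded = frame.ljust(n_source * block_len, b"\x00")  # zero-pad tail
    source = [padded[i * block_len:(i + 1) * block_len]
              for i in range(n_source)]
    parity = reduce(xor_blocks, source)
    return source + [parity]   # blocks to transmit, in order

blocks = encode_frame(b"abcdefgh", 4)
print(blocks)  # [b'ab', b'cd', b'ef', b'gh', parity]
```

Each returned block would be carried in its own packet, so a single packet loss costs one block rather than the whole frame.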
11. A method for receiving an audio stream, the method comprising:
receiving a set of source blocks and a set of parity blocks corresponding to a frame of an audio signal using a wireless communication protocol, wherein the set of source blocks is at least a subset of source blocks generated by an encoder and the set of parity blocks is at least a subset of parity blocks generated by the encoder, and wherein a number of source blocks and a number of parity blocks generated and transmitted by the encoder are determined based at least in part on characteristics of the wireless communication protocol;
determining a number of corrupted source blocks in the set of source blocks;
in response to determining that the number of corrupted source blocks is greater than zero, determining whether to reconstruct the corrupted source blocks;
reconstructing the corrupted source block in response to determining to reconstruct the corrupted source block; and
causing a version of the audio frame including the reconstructed corrupted source block to be presented.
12. The method of claim 11, wherein determining whether to reconstruct the corrupted source block comprises determining whether the number of corrupted source blocks is less than the number of uncorrupted parity blocks in the received set of parity blocks.
13. The method of any one of claims 11 or 12, further comprising:
receiving a second set of source blocks and a second set of parity blocks corresponding to a second frame of the audio signal;
determining a second number of corrupted source blocks in the second set of source blocks;
generating a replacement audio frame in response to determining that the second number of corrupted source blocks in the second set of source blocks is greater than a number of parity blocks in the second set of parity blocks; and
causing the replacement audio frame to be presented.
14. The method of claim 13, wherein the replacement audio frame comprises a decrease in an output level of the audio signal.
15. The method of any of claims 11 to 14, wherein reconstructing the corrupted source block includes providing the set of source blocks and the set of parity blocks to a reed-solomon decoder.
16. The method of any of claims 11-15, further comprising storing the set of source blocks and the set of parity blocks in a buffer prior to reconstructing the corrupted source block, and wherein an amount of audio data stored in the buffer varies over time based at least in part on a packet schedule associated with the wireless communication protocol.
17. The method of any of claims 11 to 16, wherein the wireless communication protocol is a bluetooth protocol.
18. The method of any of claims 11 to 17, wherein the version of the audio frame comprising the reconstructed corrupted source block is presented via a loudspeaker.
19. An apparatus configured to implement the method of any one of claims 1 to 18.
20. One or more non-transitory media having software stored thereon, the software comprising instructions for controlling one or more devices to perform the method of any of claims 1-18.
CN202280039648.2A 2021-06-02 2022-05-30 Wireless transmission and reception of packetized audio data incorporating forward error correction Pending CN117413465A (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US63/195,781 2021-06-02
US202263363855P 2022-04-29 2022-04-29
US63/363,855 2022-04-29
PCT/EP2022/064541 WO2022253732A1 (en) 2021-06-02 2022-05-30 Wireless transmission and reception of packetized audio data in combination with forward error correction

Publications (1)

Publication Number Publication Date
CN117413465A (en) 2024-01-16

Family

ID=89500444

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202280039648.2A Pending CN117413465A (en) 2021-06-02 2022-05-30 Wireless transmission and reception of packetized audio data incorporating forward error correction

Country Status (1)

Country Link
CN (1) CN117413465A (en)

Similar Documents

Publication Publication Date Title
US10014977B2 (en) Systems and methods for transmitting data
CN106656422B (en) Streaming media transmission method for dynamically adjusting FEC redundancy
CN110769347B (en) Synchronous playing method of earphone assembly and earphone assembly
JP5140716B2 (en) Method for streaming media content, decoding method, encoding device, decoding device, and streaming system
GB2533832A (en) Broadcast audio retransmissions
JP4668913B2 (en) Transmission of digital television by error correction
JP2006067072A (en) Generation method, generator, generation program for error correction data, and computer readable recording medium storing the same
JP2013518514A (en) Majority error correction technology
KR20090001370A (en) Method of setting configuration of codec and codec using the same
JP6178308B2 (en) Streaming wireless communication capable of automatic retransmission request and selective packet mass retransmission
US20160192114A1 (en) Time to play
US20170237525A1 (en) Methods and apparatus for maximum utilization of a dynamic varying digital data channel
US10833710B2 (en) Bandwidth efficient FEC scheme supporting uneven levels of protection
JP2021016181A (en) Transmission method and transmission device
CN117413465A (en) Wireless transmission and reception of packetized audio data incorporating forward error correction
JP2024521195A (en) Radio transmission and reception of packetized audio data combined with forward error correction
JP2007324876A (en) Data transmitter, data receiver, data transmitting method, data receiving method, and program
JP2005045741A (en) Device, method and system for voice communication
EP2200025A1 (en) Bandwidth scalable codec and control method thereof
KR101904422B1 (en) Method of Setting Configuration of Codec and Codec using the same
JP4412262B2 (en) COMMUNICATION METHOD, COMMUNICATION SYSTEM, TRANSMISSION DEVICE, AND RECEPTION DEVICE
US20230117443A1 (en) Systems and Methods for Selective Storing of Data Included in a Corrupted Data Packet
US20220416949A1 (en) Reception terminal and method
US20230055690A1 (en) Error correction overwrite for audio artifact reduction
US11438400B2 (en) Content data delivery system, server, and content data delivery method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination