WO2024001405A1 - Audio processing method, device, chip, electronic device and storage medium

Audio processing method, device, chip, electronic device and storage medium

Info

Publication number
WO2024001405A1
Authority: WO, WIPO (PCT)
Prior art keywords: bit width, audio, data, sampling bit, sampling
Application number: PCT/CN2023/087246
Other languages: English (en), French (fr)
Inventors: 颜廷管, 余庆华, 王泷
Original Assignee: 哲库科技(上海)有限公司
Application filed by 哲库科技(上海)有限公司
Publication of WO2024001405A1

Classifications

    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 - Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 - Digital recording or reproducing
    • G11B20/10009 - Improvement or modification of read or write signals
    • G11B20/10037 - A/D conversion, D/A conversion, sampling, slicing and digital quantisation or adjusting parameters thereof
    • G11B20/10018 - Analog processing for digital recording or reproduction
    • G11B20/10268 - Bit detection or demodulation methods

Definitions

  • This application relates to the field of audio and video technology, and specifically to an audio processing method, device, chip, electronic equipment and storage medium.
  • the embodiments of this application disclose an audio processing method, device, chip, electronic device and storage medium.
  • the embodiment of the present application discloses an audio processing method, which is applied to electronic devices.
  • the method includes:
  • the first sampling bit width corresponds to the sampling bit width set by the audio codec
  • the second sampling bit width corresponds to the original sampling bit width of the audio source data
  • the embodiment of the present application discloses a chip, including a processor and a communication unit;
  • the processor is configured to:
  • the first decoded data is encoded based on the second sampling bit width to obtain an audio encoded data packet; wherein the first sampling bit width corresponds to the sampling bit width set by the audio codec, and the second sampling bit width corresponds to the original sampling bit width of the audio source data;
  • the communication unit is configured to:
  • the audio encoded data packet is sent to an audio output device via a wireless communication channel.
  • the embodiment of the present application discloses an audio processing method, which is applied to an audio output device.
  • the method includes:
  • the embodiment of the present application discloses a chip, which includes a processor and a communication unit;
  • the communication unit is configured to:
  • the processor is configured to:
  • the audio encoding data packet is decoded based on the second sampling bit width to obtain second decoded data; the second sampling bit width corresponds to the original sampling bit width of the audio source data.
  • the embodiment of the present application discloses an audio processing device, which is applied to electronic equipment.
  • the device includes:
  • a decoding module used to decode the audio source data based on the first sampling bit width to obtain the first decoded data
  • an encoding module configured to encode the first decoded data based on the second sampling bit width to obtain an audio encoded data packet
  • the first sampling bit width corresponds to the sampling bit width set by the audio codec
  • the second sampling bit width corresponds to the original sampling bit width of the audio source data
  • the embodiment of the present application discloses an audio processing device, which is applied to audio output equipment.
  • the device includes:
  • Acquisition module used to obtain audio encoding data packets
  • a decoding module configured to decode the audio encoding data packet based on a second sampling bit width to obtain second decoded data; the second sampling bit width corresponds to the original sampling bit width of the audio source data;
  • a conversion module configured to convert the second decoded data into analog data.
  • An embodiment of the present application discloses an electronic device, including a memory and a processor.
  • a computer program is stored in the memory.
  • when the computer program is executed by the processor, any one of the methods described above is implemented.
  • An embodiment of the present application discloses a computer-readable storage medium on which a computer program is stored.
  • when the computer program is executed by a processor, any one of the methods described above is implemented.
  • Figure 1A is a schematic diagram of audio data processing in related technologies
  • Figure 1B is an application scenario diagram of the audio processing method in one embodiment
  • Figure 2 is a flow chart of an audio processing method in one embodiment
  • Figure 3A is a schematic diagram of upsampling sound source data in one embodiment
  • Figure 3B is a schematic diagram of downsampling the first decoded data in one embodiment
  • Figure 4 is a schematic flow chart of an audio processing method in one embodiment
  • Figure 5A is a schematic structural diagram of an audio encoding data packet in an embodiment
  • Figure 5B is a schematic structural diagram of an audio encoding data packet in another embodiment
  • Figure 5C is a schematic structural diagram of an audio encoding data packet in another embodiment
  • Figure 6 is a flow chart of an audio processing method in another embodiment
  • FIG. 7 is a block diagram of an audio processing device in one embodiment
  • Figure 8 is a block diagram of an audio processing device in another embodiment
  • Figure 9 is a structural block diagram of an electronic device in an embodiment.
  • first, second, etc. used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another element.
  • first sampling bit width may be referred to as a second sampling bit width
  • second sampling bit width may be referred to as a first sampling bit width. Both the first sampling bit width and the second sampling bit width are sampling bit widths, but they are not the same sampling bit width.
  • the term "plurality" used in this application refers to two or more.
  • the term "and/or" used in this application refers to any one of the associated items, or any combination of them.
  • FIG. 1A is a schematic diagram of audio data processing in the related art. As shown in Figure 1A, taking the electronic device transmitting audio data to the audio output device for playback through Bluetooth wireless communication as an example, the sampling bit width set by the audio codec is 24 bits.
  • the electronic device obtains the audio source data, performs 24-bit PCM (Pulse Code Modulation) decoding on it to obtain PCM data in 24-bit form, and then performs Bluetooth audio encoding such as SBC (Sub Band Coding) or AAC (Advanced Audio Coding) on the 24-bit PCM data to obtain Bluetooth audio encoded data corresponding to the 24-bit parameters.
  • the electronic device can transmit the Bluetooth audio encoding data corresponding to the 24bit parameters to the audio output device through Bluetooth wireless communication.
  • the audio output device performs PCM decoding on the Bluetooth audio encoded data corresponding to the 24-bit parameters to obtain PCM data in 24-bit form, and then uses a DAC (digital-to-analog converter) and an AMP (power amplifier) to perform digital-to-analog conversion and power amplification, obtaining analog data that is then played.
  • the sampling bit width set by the audio codec is usually the upper limit sampling bit width.
  • the entire audio transmission, encoding and decoding process will still be performed and transmitted according to the sampling bit width set by the audio codec.
  • the sampling bit width set by the audio codec is 24 bit
  • the original sampling bit width of the audio data is 16 bit.
  • the entire audio transmission, encoding and decoding process will still be performed at 24 bit, resulting in wasted resources, increased memory usage, increased audio playback delay, increased device power consumption, and increased transmission bandwidth.
  • the embodiments of the present application disclose an audio processing method, device, chip, electronic device and storage medium, which can reduce device power consumption, audio playback delay and memory consumption during audio processing. Because the audio codec is not re-initialized and the sampling bit width set by the audio codec is not changed, the audible stutter caused by re-initializing the audio codec can be avoided and audio playback quality is ensured.
  • FIG. 1B is an application scenario diagram of the audio processing method in one embodiment.
  • the electronic device 110 and the audio output device 120 may establish a communication connection.
  • the electronic device 110 may include but is not limited to a mobile phone, a smart wearable device, a vehicle-mounted terminal, a tablet computer, a PC (Personal Computer), a PDA (Personal Digital Assistant), etc.
  • the audio output device 120 may include but is not limited to headphones, speaker equipment, vehicle-mounted terminals, etc. Further, the audio output device 120 may be a TWS (True Wireless Stereo) headset.
  • Wireless communication connections such as Bluetooth and WiFi can be established between the electronic device 110 and the audio output device 120, or a wired communication connection can be established through a USB (Universal Serial Bus) interface.
  • the embodiment of the present application does not specifically limit the communication connection method between the electronic device 110 and the audio output device 120.
  • the electronic device 110 may decode the audio source data based on the first sampling bit width to obtain the first decoded data, and then The first decoded data is encoded based on the second sampling bit width to obtain an audio encoded data packet.
  • the first sampling bit width corresponds to the sampling bit width set by the audio codec
  • the second sampling bit width corresponds to the original sampling bit width of the audio source data.
  • the electronic device 110 may send the audio encoding data packet to the audio output device 120.
  • the audio output device 120 may decode the audio encoding data packet based on the second sampling bit width to obtain the second decoded data.
  • the second decoded data is then converted into analog data to output the analog data.
  • an audio processing method is provided, which can be applied to the above-mentioned electronic device.
  • the method may include the following steps:
  • Step 210 Decode the sound source data based on the first sampling bit width to obtain first decoded data.
  • the sampling bit width, also called the sampling depth, refers to the number of binary digits in the digital sound signal processed by the sound card; it reflects the processing resolution of the sound card, and the larger the sampling bit width, the higher the resolution.
  • each discrete pulse signal is quantized with a certain quantization accuracy into a binary code stream; the number of bits in this code stream is the sampling bit width.
  • the audio source data may refer to audio data to be played or currently being played.
  • the sound source data can be any kind of audio data including music, video sounds, background sounds of applications running on electronic devices, call sounds, prompt sounds, etc., but is not limited to this.
  • the electronic device can transmit the audio source data to the audio output device, so that the audio source data can be played through the audio output device.
  • before the electronic device transmits the audio source data to the audio output device, the audio source data may first be encoded and decoded.
  • the electronic device can decode the audio source data based on the first sampling bit width, and obtain the first decoded data.
  • the first sampling bit width corresponds to the sampling bit width set by the audio codec.
  • the sampling bit width set by the audio codec can be the upper limit sampling bit width, which is compatible with most audio data.
  • the set sampling bit width of the audio codec can be 24bit, 32bit, etc., but is not limited to this.
  • the data format of the audio source data may include but is not limited to the FLAC (Free Lossless Audio Codec) format, the APE format (produced by Monkey's Audio compression), the ALAC (Apple Lossless Audio Codec) format, the MP3 (MPEG Audio Layer III) format, the RealAudio format, etc.
  • the original sampling bit width of the audio source data may be smaller than the sampling bit width set by the audio codec, and the number of bits in each series of binary encoding streams in the audio source data is the original sampling bit width.
  • the electronic device can decode the audio source data based on the first sampling bit width to obtain first decoded data corresponding to the first sampling bit width, that is, decode the audio source data corresponding to the original sampling bit width into first decoded data in the form of the first sampling bit width. For example, if the original sampling bit width of the audio source data is 16 bits and the first sampling bit width is 24 bits, the 16-bit audio source data can be decoded into first decoded data in 24-bit form.
  • the electronic device can upsample the sound source data corresponding to the original sampling bit width according to a preset upsampling method, and decode the upsampled sound source data to obtain first decoded data corresponding to the first sampling bit width.
  • the preset upsampling method can be to add a preset bit value to the low bits of the audio source data in binary form to obtain the audio source data corresponding to the first sampling bit width.
  • N-bit preset bit values are added, where N can be the difference between the first sampling bit width and the original sampling bit width.
  • the preset bit value can be set according to actual requirements, for example, it can be 0 or 1, but is not limited to this.
  • FIG. 3A is a schematic diagram of upsampling sound source data in one embodiment.
  • the original sampling bit width corresponding to the sound source data is 16 bits
  • the first sampling bit width is 24 bits.
  • 8 bits of 0 can be added to the low bits of the 16-bit sound source data to obtain 24-bit sound source data.
  • the electronic device can then decode the 24-bit audio source data to obtain 24-bit first decoded data.
  • the audio source data corresponding to the original sampling bit width can also be decoded first, and then the decoded data can be up-sampled to obtain the first decoded data.
  • the embodiment of the present application does not limit the order of the up-sampling and decoding processes.
  • the electronic device may perform PCM decoding on the audio source data based on the first sampling bit width to obtain first decoded data in PCM format. Regardless of the original sampling bit width of the audio source data, the audio source data is decoded using the first sampling bit width corresponding to the sampling bit width set by the audio codec, so there is no need to re-initialize the audio codec and the sampling bit width set by the codec is not changed; this avoids the audible stutter caused by re-initializing the audio codec and ensures audio playback quality.
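  • As an illustration only, the following C sketch shows the up-sampling described above (padding the low bits with a preset bit value) for the 16-bit to 24-bit case; the function names and the use of a 32-bit integer as the container for a 24-bit sample are assumptions made for this sketch, not details defined by the application.

        #include <stdint.h>
        #include <stddef.h>

        /* Up-sample a 16-bit PCM sample to a 24-bit sample (held in an int32_t)
         * by appending N = 24 - 16 = 8 low bits with a preset bit value (0 here),
         * as in Figure 3A. Multiplying by 256 is equivalent to padding 8 zero bits
         * at the low end while keeping the sign. */
        static int32_t upsample_16_to_24(int16_t sample)
        {
            return (int32_t)sample * 256;
        }

        /* Apply the up-sampling to a whole buffer of source samples. */
        static void upsample_buffer(const int16_t *src, int32_t *dst, size_t n)
        {
            for (size_t i = 0; i < n; i++) {
                dst[i] = upsample_16_to_24(src[i]);
            }
        }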
  • Step 220 Encode the first decoded data based on the second sampling bit width to obtain an audio encoded data packet.
  • the second sampling bit width may be smaller than the first sampling bit width.
  • the second sampling bit width may be greater than or equal to the original sampling bit width of the audio source data, thereby ensuring the sound quality when the audio source data is played; for example, if the original sampling bit width is 8 bit, the second sampling bit width can be 16 bit, 8 bit, etc.
  • the electronic device can encode the first decoded data based on the second sampling bit width, that is, encode the first decoded data in the form of the first sampling bit width according to the second sampling bit width to obtain audio encoded data corresponding to the second sampling bit width, and then encapsulate the audio encoded data to obtain an audio encoded data packet.
  • the electronic device can down-sample the first decoded data in the form of the first sampling bit width according to a preset down-sampling method, and encode the down-sampled first decoded data to obtain audio encoded data corresponding to the second sampling bit width.
  • the preset down-sampling method may be a sampling method corresponding to the above-mentioned preset up-sampling method.
  • the preset up-sampling method can be to add a preset bit value to the low bits of the audio source data in binary form to obtain the audio source data corresponding to the first sampling bit width.
  • correspondingly, the preset down-sampling method can be to clip the bit values arranged in the last P bits of the first decoded data corresponding to the first sampling bit width.
  • the electronic device can clip the first decoded data in binary form from the low bits so as to retain the part corresponding to the second sampling bit width; that is, the bit values arranged in the last P bits of the first decoded data corresponding to the first sampling bit width are trimmed, the trimmed first decoded data is encoded to obtain audio encoded data corresponding to the second sampling bit width, and the audio encoded data is then encapsulated to obtain an audio encoded data packet.
  • the P is the difference between the first sampling bit width and the second sampling bit width.
  • FIG. 3B is a schematic diagram of downsampling the first decoded data in one embodiment.
  • the first sampling bit width corresponding to the first decoded data is 24 bits
  • the second sampling bit width is 16 bits.
  • the bit values arranged in the lower 8 bits of the 24-bit first decoded data can be cropped, and the high 16 bits of the first decoded data are retained to obtain 16-bit first decoded data.
  • the electronic device can then encode the 16-bit first decoded data to obtain audio coded data corresponding to the 16-bit parameters.
  • other preset up-sampling methods and preset down-sampling methods may also be used, which are not limited by the embodiments of this application.
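  • As a companion sketch (again an assumption about how the clipping of Figure 3B could be written, not the disclosed implementation), the low P = 24 - 16 = 8 bits of each 24-bit sample are discarded and the high 16 bits are kept:

        #include <stdint.h>
        #include <stddef.h>

        /* Down-sample a 24-bit sample (stored in an int32_t) to 16 bits by
         * clipping the bit values in the last P = 8 positions, i.e. keeping
         * only the high 16 bits, as in Figure 3B.  An arithmetic right shift
         * is assumed for negative values, which holds on common platforms. */
        static int16_t downsample_24_to_16(int32_t sample)
        {
            return (int16_t)(sample >> 8);   /* drop the low 8 bits */
        }

        static void downsample_buffer(const int32_t *src, int16_t *dst, size_t n)
        {
            for (size_t i = 0; i < n; i++) {
                dst[i] = downsample_24_to_16(src[i]);
            }
        }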
  • the above-mentioned second sampling bit width corresponds to the original sampling bit width of the audio source data, so that the memory consumption and processing time of the audio encoding process can be reduced as much as possible while ensuring the sound quality when the audio source data is played, thereby reducing audio playback delay and device power consumption.
  • the first sampling bit width is 24 bit
  • the original sampling bit width of the audio source data is 16 bit
  • the second sampling bit width is 16 bit
  • the memory occupied during the audio encoding process can be reduced by 2/3.
  • the encoding processing time and corresponding power consumption can also be reduced by 2/3.
  • encoding the first decoded data may include but is not limited to performing SBC encoding, AAC encoding, etc. on the first decoded data.
  • the electronic device may send the audio encoding data packet to the audio output device via the wireless communication channel.
  • the wireless communication channel may include a Bluetooth communication channel including a broadcast channel and/or a data channel.
  • a Bluetooth connection can be established between the electronic device and the audio output device.
  • the Bluetooth connection can include a classic Bluetooth connection, a BLE (Bluetooth Low Energy, Bluetooth low energy) connection, etc.
  • the classic Bluetooth connection is a Bluetooth communication established based on the classic Bluetooth protocol.
  • BLE connection is a Bluetooth communication connection established based on the BLE protocol.
  • the classic Bluetooth protocol usually refers to Bluetooth protocol versions below 4.0
  • the BLE protocol usually refers to Bluetooth protocol version 4.0 and above.
  • the Bluetooth connection can be a LE Audio Bluetooth connection established based on a BLE connection, which can support the transmission of audio data.
  • the electronic device can send the target audio data packet to the audio output device through the audio service transmission channel of the Bluetooth connection.
  • the audio service transmission channel can be a transmission channel established based on the A2DP (Advanced Audio Distribution Profile) protocol or the HFP (Hands-free Profile) protocol;
  • if the Bluetooth connection is a LE Audio Bluetooth connection, the audio service transmission channel can be a transmission channel such as CIS (Connected Isochronous Streams), but is not limited to this. It should be noted that the embodiments of the present application do not limit the specific Bluetooth connection method and communication channel between the electronic device and the audio output device, which may change with the development of the Bluetooth standard protocol.
  • the electronic device transmits audio encoding data packets with smaller sampling bit width to the audio output device, which can reduce the transmission bandwidth occupied by the audio encoding data packets and reduce the waste of communication transmission resources.
  • the audio output device can decode the audio encoded data packet based on the second sampling bit width to obtain the second decoded data, and then convert the second decoded data into analog data and output it, achieving audio playback.
  • the audio output device may unpack the acquired audio encoding data packet to extract audio encoding data corresponding to the second sampling bit width contained in the audio encoding data packet.
  • the audio output device can perform PCM decoding on the audio encoded data based on the second sampling bit width to obtain second decoded data in the form of the second sampling bit width, and then convert the second decoded data from a digital signal to an analog signal through a digital-to-analog converter.
  • after power amplification, the analog data can be transmitted to the playback unit, and the audio output device outputs the analog data through the playback unit to play the audio.
  • FIG. 4 is a schematic flowchart of an audio processing method in one embodiment.
  • the electronic device can perform PCM decoding on the audio source data based on the first sampling bit width (such as 24 bit) to obtain the first decoded data, then perform Bluetooth audio encoding on the first decoded data based on the second sampling bit width (such as 16 bit) to obtain audio encoded data corresponding to the second sampling bit width (such as 16 bit), and encapsulate that audio encoded data into an audio encoded data packet.
  • the electronic device can transmit the audio encoded data packet to the audio output device via a Bluetooth wireless channel.
  • after the audio output device receives the audio encoded data packet via the Bluetooth wireless channel, it can unpack the packet to extract the audio encoded data corresponding to the second sampling bit width (such as 16 bit), and perform PCM decoding on the audio encoded data based on the second sampling bit width (such as 16 bit) to obtain second decoded data.
  • the audio output device can then perform digital-to-analog conversion and power amplification processing on the second decoded data through DAC and AMP respectively to obtain analog data, and finally output the analog data through the playback unit.
  • this can save the memory consumption and processing time of the electronic device and the audio output device in the encoding and decoding process, effectively reduce audio delay and power consumption, and reduce the transmission bandwidth occupied by the audio encoded data packets.
  • the electronic device encodes the first decoded data through a second sampling bit width corresponding to the original sampling bit width of the audio source data.
  • because the original sampling bit width is smaller than the sampling bit width set by the audio codec, this can reduce memory consumption and processing time during encoding, thereby reducing audio playback delay and device power consumption.
  • the audio output device decodes the audio encoded data packet using the second sampling bit width corresponding to the original sampling bit width of the audio source data, which can reduce memory consumption and processing time during decoding, thereby reducing audio playback delay and device power consumption.
  • the audio codec is not re-initialized, and the sampling bit width set by the audio codec is not changed; this avoids the audible stutter caused by re-initializing the audio codec and ensures audio playback quality.
  • the electronic device encodes the first decoded data based on the second sampling bit width to obtain audio encoding data corresponding to the second sampling bit width.
  • the audio encoded data can be encapsulated according to a preset data packet format to obtain the audio encoded data packet. Several packet formats of the audio encoded data packet are introduced below:
  • the audio encoding data packet includes a packet header and a data part.
  • the packet header includes a first-bit width field and a second-bit width field.
  • the first bit width field represents the sampling bit width set by the audio codec, and the first bit width field can be used to indicate the above-mentioned first sampling bit width.
  • the second bit width field represents the actual sampling bit width used in the encoding process of the audio encoding data packet, and the second bit width field is used to indicate the above-mentioned second sampling bit width.
  • the data part is used to store audio encoding data corresponding to the second sampling bit width.
  • the first bit width field may be stored in the first data segment of the packet header of the audio encoding data packet
  • the second bit width field may be stored in the second data segment of the packet header of the audio encoded data packet.
  • the first data segment may be located before the second data segment.
  • the second bit-width field may be stored in a reserved field in the packet header.
  • a reserved field is a field in the packet header that has not yet been assigned a specific purpose; part of the reserved field may be used to store the second bit width field, so that no major adjustment to the overall structure of the packet header is needed, making the packetization simpler and faster.
  • for example, the first bit width field indicates the 24-bit sampling set by the audio codec (that is, the first sampling bit width is 24 bit), and the second bit width field indicates the 16-bit sampling used for the audio source data (that is, the second sampling bit width is 16 bit).
  • each of the first bit width field and the second bit width field may occupy only 2 bits, identifying 4 cases: for example, binary 00 identifies 8-bit sampling, 01 identifies 16-bit sampling, 10 identifies 24-bit sampling, and 11 identifies 32-bit sampling. The first sampling bit width and the second sampling bit width can therefore be represented by a total of 4 bits. Taking technological development into account, the first bit width field and the second bit width field may each occupy 3 bits instead, identifying 8 cases respectively.
  • the packet header of the audio encoding data packet occupies a total of 64 bits.
  • the first bit width field, the second bit width field and, where present, the judgment field are stored in reserved fields of the packet header. When some reserved fields in the packet header are already occupied, the unoccupied reserved fields are used as much as possible to store the first bit width field and the second bit width field; any part that cannot fit is stored in newly defined bits or bytes after the 64-bit packet header. It should be understood that the 64-bit packet header is only used for illustration and does not limit the size or structure of the packet header.
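  • Purely as an illustration of the header layout discussed above, the C sketch below packs the first and second bit width fields as 2-bit codes (00 = 8 bit, 01 = 16 bit, 10 = 24 bit, 11 = 32 bit) into an assumed 64-bit header; the struct, the field positions and the helper names are assumptions made for this sketch and are not defined by the application.

        #include <stdint.h>

        /* 2-bit sampling bit width codes, as described in the text. */
        enum bw_code { BW_8 = 0x0, BW_16 = 0x1, BW_24 = 0x2, BW_32 = 0x3 };

        /* Hypothetical 64-bit packet header holding a first bit width field
         * (the bit width set by the codec) and a second bit width field (the
         * bit width actually used for encoding); field positions are assumed. */
        typedef struct { uint64_t raw; } audio_pkt_header;

        #define FIRST_BW_SHIFT   62   /* assumed position of the first bit width field  */
        #define SECOND_BW_SHIFT  60   /* assumed position of the second bit width field */

        static void header_set_bw(audio_pkt_header *h, enum bw_code first, enum bw_code second)
        {
            h->raw &= ~(((uint64_t)0x3 << FIRST_BW_SHIFT) | ((uint64_t)0x3 << SECOND_BW_SHIFT));
            h->raw |= ((uint64_t)first << FIRST_BW_SHIFT) | ((uint64_t)second << SECOND_BW_SHIFT);
        }

        static enum bw_code header_get_second_bw(const audio_pkt_header *h)
        {
            return (enum bw_code)((h->raw >> SECOND_BW_SHIFT) & 0x3);
        }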
  • in some embodiments, the first sampling bit width need not be exactly the sampling bit width set by the audio codec, and the second sampling bit width need not be exactly the original sampling bit width of the audio source data; instead, each may correspond to its counterpart through a predetermined relationship.
  • the first sampling bit width has a first proportional relationship with the sampling bit width set by the audio codec, and the second sampling bit width also has a first proportional relationship with the original sampling bit width of the audio source data.
  • the first sampling bit width has a first difference relationship with the sampling bit width set by the audio codec, and the second sampling bit width also has a first difference relationship with the original sampling bit width of the audio source data.
  • the data part may be stored in the third data segment of the audio encoding data packet.
  • the packet header may be located before the third data segment of the audio encoded data packet.
  • FIG. 5A is a schematic structural diagram of an audio encoding data packet in one embodiment.
  • the audio encoding data packet may include a packet header and a data part.
  • the packet header may include first packet header information and a second bit width field.
  • the first packet header information may be stored in the first data segment of the packet header.
  • the first header information may include the first bit width field, and the second bit width field may be located between the first header information and the data part, that is, the first bit width field is located before the second bit width field. Further, the second bit width field can be located at the end of the packet header.
  • the first sampling bit width is 24 bit and the second sampling bit width is 16 bit
  • the first bit width field in the packet header is used to indicate 24 bit
  • the second bit width field is used to indicate 16 bit.
  • the data part may include the audio encoded data corresponding to the 16-bit parameters.
  • the audio output device can unpack the audio encoding data packet to extract the packet header and data part in the audio encoding data packet.
  • the audio output device can decode the audio encoded data stored in the data part based on the second sampling bit width indicated by the second bit width field to obtain the second decoded data.
  • the packet header of the audio encoding data packet may further include a first length field and/or a second length field. Further, the first header information in the packet header may also include a first length field and/or a second length field.
  • the length parameter stored in the first length field is used to indicate the data length of the packet header.
  • the packet header includes first packet header information and a second bit width field.
  • the length parameter stored in the first length field may be the data length of the first packet header information plus the data length of the second bit width field.
  • the data length may refer to the number of bits occupied.
  • the length parameter stored in the first length field may be the sum of the number of bits occupied by the first packet header information and the number of bits occupied by the second bit width field. For example, as shown in Figure 5A, if the first header information in the packet header occupies M bits and the second bit width field occupies 8 bits, the length parameter stored in the first length field can be M+8.
  • when the audio output device unpacks the audio encoded data packet, it can first extract the packet header from the packet to obtain each field contained in the packet header, such as the above-mentioned first bit width field, second bit width field, first length field, etc.
  • the first packet header information can first be extracted from the first data segment of the packet header; the second bit width field is then extracted from the second data segment of the packet header according to the length parameter stored in the first length field of the first packet header information, and the data part is extracted from the third data segment of the audio encoded data packet.
  • by using the length parameter stored in the first length field, the audio output device can accurately extract the second bit width field from the audio encoded data packet, ensuring that subsequent decoding is performed based on the second sampling bit width stored in the second bit width field, which improves processing efficiency and accuracy.
  • the length parameter stored in the second length field is used to indicate the data length of the data part.
  • the length parameter stored in the second length field is the number of bits occupied by the data part.
  • the audio output device can extract the data part of the audio encoded data packet from the third data segment according to the length parameter stored in the second length field in the packet header, which ensures that the audio output device accurately obtains the audio encoded data corresponding to the second sampling bit width.
  • based on the first length field and/or the second length field in the packet header of the audio encoded data packet, the audio output device can accurately identify that the audio encoded data packet was encoded according to the second sampling bit width and unpack it accurately; this allows the audio encoded data packets of this embodiment to remain compatible with the audio encoded data packets of the related art (in which the entire audio transmission, encoding and decoding process is performed and transmitted according to the sampling bit width set by the audio codec), ensuring accurate subsequent audio processing and playback.
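  • A minimal parsing sketch under explicit assumptions: the byte-granular length fields, the offsets and the placement of the second bit width field below are invented for illustration only. Only the idea of using the first length field to locate the end of the header (and thus the second bit width field) and the second length field to bound the data part is taken from the text.

        #include <stdint.h>
        #include <stddef.h>

        typedef struct {
            const uint8_t *data;           /* start of the data part (audio encoded data) */
            size_t         data_len;       /* length of the data part, in bytes           */
            uint8_t        second_bw_code; /* 2-bit code of the second sampling bit width */
        } parsed_packet;

        /* Assumed byte offsets of the two length fields inside the header. */
        #define OFF_FIRST_LEN   4   /* first length field: header length in bytes     */
        #define OFF_SECOND_LEN  5   /* second length field: data-part length in bytes */

        static int unpack_audio_packet(const uint8_t *pkt, size_t pkt_len, parsed_packet *out)
        {
            if (pkt_len < 8)
                return -1;                            /* too short to hold the header  */
            size_t header_len = pkt[OFF_FIRST_LEN];   /* read the first length field   */
            size_t data_len   = pkt[OFF_SECOND_LEN];  /* read the second length field  */
            if (header_len < 8 || header_len + data_len > pkt_len)
                return -1;                            /* malformed packet              */
            /* The second bit width field is assumed to sit at the end of the header. */
            out->second_bw_code = pkt[header_len - 1] & 0x3;
            out->data           = pkt + header_len;   /* data part follows the header  */
            out->data_len       = data_len;
            return 0;
        }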
  • the first header information of the packet header of the audio encoded data packet may also include other fields, such as one or more of the supplier identifier of the audio source data, the encoder identifier of the audio source data, the version identifier of the audio source data, the sampling rate, and the number of channels, but is not limited to these.
  • the supplier identifier can be used to identify the supplier of the audio data
  • the encoder identifier can be used to identify the encoding format of the audio source data
  • the audio source data in different encoding formats can respectively correspond to different encoder identifiers.
  • the audio encoding data packet may include a packet header and a data part.
  • the packet header may include a first bit width field and a second bit width field, which ensures that after unpacking the audio output device decodes the audio encoded data according to the second sampling bit width indicated by the second bit width field; this can reduce the memory consumption and processing time of the audio output device, thereby reducing audio playback delay and device power consumption. Because the first sampling bit width indicated by the first bit width field remains unchanged, the audio output device will not re-initialize the audio codec, which avoids the audible stutter caused by re-initialization of the audio codec.
  • the audio encoding data packet includes a packet header and a data part.
  • the packet header includes a first-bit width field, a judgment field, and a second-bit width field.
  • the packet header of the audio encoded data packet may also include a judgment field, which can be used to characterize whether the first bit width field and the second bit width field in the packet header are consistent; further, the judgment field can be used to characterize whether the sampling bit width set by the audio codec (corresponding to the first sampling bit width) is consistent with the sampling bit width adopted by the electronic device during the encoding process (that is, the second sampling bit width).
  • the judgment field can use different judgment identifiers to indicate whether the first bit width field and the second bit width field in the packet header are consistent. If the first judgment flag is stored in the judgment field, it means that the first bit width field in the packet header is consistent with the second bit width field; if the second judgment flag is stored in the judgment field, it means that the first bit width field in the packet header is inconsistent with the second bit width field.
  • the first judgment flag and the second judgment flag can be set according to actual needs. For example, the first judgment flag can be 0, the second judgment flag can be 1, etc., but is not limited thereto.
  • the judgment field can be stored in the fourth data segment of the packet header of the audio encoding data packet.
  • the position of the fourth data segment in the packet header can be pre-configured.
  • the fourth data segment can be between the first data segment and the second data segment in the packet header, or after the second data segment in the packet header, etc., which is not limited here.
  • the judgment field may be stored in a reserved field in the packet header.
  • FIG. 5B is a schematic structural diagram of an audio encoding data packet in another embodiment.
  • the audio encoded data packet may include a packet header and a data part.
  • the packet header may include first packet header information, a judgment field and a second bit width field.
  • the first packet header information may be stored in the first data segment of the packet header.
  • the first header information may include a first bit width field, the judgment field may be located between the first header information and the second bit width field, and the second bit width field may be located before the data part.
  • the first sampling bit width is 24 bit and the second sampling bit width is 16 bit
  • the first bit width field in the packet header is used to indicate 24 bit
  • the second bit width field is used to indicate 16 bit
  • the judgment field can be 1 (indicating that the first bit width field is inconsistent with the second bit width field).
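  • A small sketch of how the judgment field could be produced and consumed, assuming (as in the text) a first judgment flag of 0 for "consistent" and a second judgment flag of 1 for "inconsistent"; the helper names are illustrative only.

        #include <stdint.h>

        #define JUDGE_CONSISTENT   0  /* first judgment flag: the two bit width fields match   */
        #define JUDGE_INCONSISTENT 1  /* second judgment flag: e.g. 24 bit set, 16 bit actual  */

        /* Writer side: derive the judgment flag from the two bit width codes. */
        static uint8_t make_judgment_flag(uint8_t first_bw_code, uint8_t second_bw_code)
        {
            return (first_bw_code == second_bw_code) ? JUDGE_CONSISTENT : JUDGE_INCONSISTENT;
        }

        /* Reader side: pick the bit width code to decode with.  When the flag says
         * the fields are consistent, the set bit width can be used directly;
         * otherwise the second bit width field gives the actual encoding bit width. */
        static uint8_t decode_bw_code(uint8_t flag, uint8_t first_bw_code, uint8_t second_bw_code)
        {
            return (flag == JUDGE_CONSISTENT) ? first_bw_code : second_bw_code;
        }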
  • FIG. 5C is a schematic structural diagram of an audio encoding data packet in another embodiment.
  • the audio encoding data packet may include a packet header and a data part.
  • the packet header may include first packet header information, a judgment field and a second bit width field.
  • the judgment field may be located after the second bit width field, that is, the judgment field may be located at the end of the packet header, with the first header information preceding the second bit width field.
  • the length parameter stored in the first length field in the packet header of the audio encoded data packet may be the sum of the data lengths of the first packet header information, the judgment field, and the second bit width field.
  • when the audio output device unpacks the audio encoded data packet, it can first extract the first packet header information of the packet header from the packet, and then, according to the length parameter stored in the first length field of the first packet header information, extract the second bit width field and the judgment field from the second data segment and the fourth data segment of the packet header respectively.
  • the audio output device can determine whether the first width field and the second width field are consistent based on the judgment field, thereby improving the accuracy of subsequent audio processing.
  • the packet header of the audio encoding data packet may also include a judgment field.
  • the audio output device can determine whether the first bit width field and the second bit width field are consistent according to the judgment field, thereby improving the accuracy of subsequent audio processing.
  • the data packet format of the target audio data packet is not limited to the above-mentioned data packet formats.
  • the audio encoded data packet may also include other field information, such as a check code; the position of each field within the audio encoded data packet is not limited to the arrangements described in the above embodiments, and the data packet format of the audio encoded data packet can be adjusted according to actual needs.
  • the audio encoded data can be encapsulated into an audio encoded data packet according to the preset data packet format, which ensures that the audio output device unpacks and processes the audio encoded data packet accurately and improves the audio processing performance of the audio output device.
  • a chip configured to perform the steps in the audio processing method applied to electronic devices as described in the above embodiments.
  • the chip may include a processor and a communication module.
  • the processor may be configured to perform steps such as decoding the audio source data based on the first sampling bit width to obtain the first decoded data, and encoding the first decoded data based on the second sampling bit width to obtain the audio encoded data packet.
  • the communication module may be configured to: perform the step of sending the audio encoding data packet to the audio output device via the wireless communication channel.
  • the chip can be installed in electronic devices, such as mobile phones, wearable devices, vehicle-mounted terminals, tablet computers, etc.
  • another audio processing method is provided, which can be applied to the above audio output device.
  • the method may include the following steps:
  • Step 610 Obtain the audio encoding data packet.
  • step 610 includes: obtaining audio encoding data packets via a wireless communication channel; the wireless communication channel includes a Bluetooth communication channel, and the Bluetooth communication channel includes a broadcast channel and/or a data channel.
  • Step 620 Decode the audio encoding data packet based on the second sampling bit width to obtain second decoded data.
  • the second sampling bit width corresponds to the original sampling bit width of the audio source data.
  • the second sampling bit width is smaller than the first sampling bit width, and the first sampling bit width corresponds to the sampling bit width set by the audio codec.
  • the original sample bit width is smaller than the sample bit width set by the audio codec.
  • the step of decoding the audio encoded data packet based on the second sampling bit width includes: unpacking the audio encoded data packet to extract the packet header and the data part; the packet header includes a first bit width field and a second bit width field, where the first bit width field is used to indicate the first sampling bit width and the second bit width field is used to indicate the second sampling bit width; and decoding the audio encoded data stored in the data part based on the second sampling bit width indicated by the second bit width field.
  • the packet header of the audio encoding data packet further includes a judgment field, and the judgment field is used to indicate whether the first bit width field and the second bit width field are consistent.
  • Step 630 Convert the second decoded data into analog data.
  • the audio output device decodes the audio encoded data packet using the second sampling bit width, which corresponds to the original sampling bit width of the audio source data and is smaller than the sampling bit width set by the audio codec; this can reduce memory consumption and processing time during decoding, thereby reducing audio playback delay and device power consumption.
  • the audio codec is not re-initialized, and the sampling bit width set by the audio codec is not changed; this avoids the audible stutter caused by re-initializing the audio codec and ensures audio playback quality.
  • a chip configured to perform the steps in the audio processing method applied to an audio output device as described in the above embodiments.
  • the chip may include a processor and a communication module.
  • the communication module may be configured to: perform the step of obtaining the audio encoding data packet via the wireless communication channel.
  • the processor may be configured to perform steps such as decoding the audio encoded data packet based on the second sampling bit width to obtain the second decoded data.
  • the chip can be set in audio output devices, such as headphones, speakers, car players, etc.
  • an audio processing device 700 is provided, which can be applied to the above-mentioned electronic device.
  • the audio processing device 700 can include a decoding module 710 and an encoding module 720 .
  • the decoding module 710 is configured to decode the sound source data based on the first sampling bit width to obtain first decoded data.
  • the encoding module 720 is configured to encode the first decoded data based on the second sampling bit width to obtain an audio encoded data packet.
  • the first sampling bit width corresponds to the sampling bit width set by the audio codec
  • the second sampling bit width corresponds to the original sampling bit width of the audio source data.
  • the second sampling bit width is smaller than the first sampling bit width.
  • the original sampling bit width of the audio source data is smaller than the sampling bit width set by the audio codec.
  • the encoding module 720 is also configured to clip the first decoded data in binary form from the lower bits to retain the part of the first decoded data corresponding to the second sampling bit width.
  • the audio processing device 700 further includes a sending module.
  • a sending module configured to send the audio encoding data packet to the audio output device via the wireless communication channel.
  • Wireless communication channels include Bluetooth communication channels, and Bluetooth communication channels include broadcast channels and/or data channels.
  • the electronic device encodes the first decoded data using the second sampling bit width, which corresponds to the original sampling bit width of the audio source data and is smaller than the sampling bit width set by the audio codec; this can reduce memory consumption and processing time during encoding, thereby reducing audio playback delay and device power consumption.
  • the audio codec is not re-initialized, and the sampling bit width set by the audio codec is not changed; this avoids the audible stutter caused by re-initializing the audio codec and ensures audio playback quality.
  • the audio encoding data packet includes a packet header and a data part.
  • the packet header includes a first bit width field and a second bit width field; the first bit width field is used to indicate the first sampling bit width, and the second bit width field is used to indicate the second sampling bit width; the data part is used to store the audio encoded data.
  • the first bit width field is stored in the first data segment of the packet header
  • the second bit width field is stored in the second data segment of the packet header
  • the first data segment is located before the second data segment.
  • the packet header of the audio encoding data packet further includes a judgment field, and the judgment field is used to indicate whether the first bit width field and the second bit width field are consistent.
  • the audio encoded data can be encapsulated into an audio encoded data packet according to the preset data packet format, which ensures that the audio output device unpacks and processes the audio encoded data packet accurately and improves the audio processing performance of the audio output device.
  • an audio processing device 800 is provided, which can be applied to the above-mentioned audio output device.
  • the audio processing device 800 can include an acquisition module 810 , a decoding module 820 and a conversion module 830 .
  • Acquisition module 810 is used to acquire audio encoding data packets.
  • the acquisition module 810 is also configured to acquire audio encoding data packets via a wireless communication channel; the wireless communication channel includes a Bluetooth communication channel, and the Bluetooth communication channel includes a broadcast channel and/or a data channel.
  • the decoding module 820 is configured to decode the audio encoded data packet based on the second sampling bit width to obtain second decoded data; the second sampling bit width corresponds to the original sampling bit width of the audio source data.
  • the second sampling bit width is smaller than the first sampling bit width, and the first sampling bit width corresponds to the sampling bit width set by the audio codec.
  • the original sample bitwidth is smaller than the sample bitwidth set by the audio codec.
  • the decoding module 820 includes a depacketizing unit and a decoding unit.
  • the unpacking unit is used to unpack the audio encoding data packet to extract the packet header and data part in the audio encoding data packet;
  • the packet header includes the first bit width field and the second bit width field; the first bit width field is used to indicate the first sampling bit width, and the second bit width field is used to indicate the second sampling bit width.
  • the decoding unit is configured to decode the audio encoded data stored in the data part based on the second sampling bit width indicated by the second bit width field.
  • the packet header of the audio encoding data packet further includes a judgment field, and the judgment field is used to indicate whether the first bit width field and the second bit width field are consistent.
  • the conversion module 830 is used to convert the second decoded data into analog data.
  • the audio output device decodes the audio encoded data packet using the second sampling bit width, which corresponds to the original sampling bit width of the audio source data and is smaller than the sampling bit width set by the audio codec; this can reduce memory consumption and processing time during decoding, thereby reducing audio playback delay and device power consumption.
  • the audio codec is not re-initialized, and the sampling bit width set by the audio codec is not changed; this avoids the audible stutter caused by re-initializing the audio codec and ensures audio playback quality.
  • Figure 9 is a structural block diagram of an electronic device in one embodiment.
  • the electronic device 900 may include one or more of the following components: a processor 910 and a memory 920 coupled to the processor 910, where the memory 920 may store one or more computer programs that may be configured to implement the audio processing method applied to electronic devices as described in the above embodiments when executed by the one or more processors 910.
  • Processor 910 may include one or more processing cores.
  • the processor 910 uses various interfaces and lines to connect the various parts of the entire electronic device 900, and executes various functions of the electronic device 900 and processes data by running or executing the instructions, programs, code sets or instruction sets stored in the memory 920 and by calling the data stored in the memory 920.
  • the processor 910 may be implemented in hardware in the form of at least one of digital signal processing (DSP), a field-programmable gate array (FPGA), and a programmable logic array (PLA).
  • the processor 910 may integrate a central processing unit (CPU), a graphics processing unit (GPU), a modem, etc., or a combination thereof.
  • the CPU mainly handles the operating system, user interface, and applications; the GPU is responsible for rendering and drawing the display content; and the modem is used to handle wireless communications. It can be understood that the above-mentioned modem may not be integrated into the processor 910 and may be implemented solely through a communication chip.
  • the memory 920 may include random access memory (Random Access Memory, RAM) or read-only memory (Read-Only Memory, ROM). Memory 920 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
  • the memory 920 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system and instructions for implementing at least one function (such as a touch function, a sound playback function, an image playback function, etc.) , instructions for implementing each of the above method embodiments, etc.
  • the storage data area can also store data created during use of the electronic device 900 and the like.
  • the electronic device 900 may also include a Bluetooth module, which may be used to provide Bluetooth communication functions, establish a Bluetooth connection with a second electronic device, and perform Bluetooth data transmission.
  • the Bluetooth module can support one or more Bluetooth protocols, such as Classic Bluetooth, BLE, and BLE Audio, but is not limited to these and may change as the Bluetooth protocol evolves.
  • An embodiment of the present application also provides an electronic device, including a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to implement the audio processing method applied to the audio output device as described in the above embodiments.
  • An embodiment of the present application discloses a computer-readable storage medium that stores a computer program, wherein when the computer program is executed by a processor, the audio processing method applied to electronic devices as described in the above embodiments is implemented.
  • An embodiment of the present application discloses a computer-readable storage medium that stores a computer program, wherein when the computer program is executed by a processor, the audio processing method applied to an audio output device as described in the above embodiments is implemented.
  • An embodiment of the present application discloses a computer program product. The computer program product includes a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the audio processing method applied to electronic devices as described in the above embodiments.
  • An embodiment of the present application discloses a computer program product. The computer program product includes a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the audio processing method applied to audio output devices as described in the above embodiments.
  • Those of ordinary skill in the art will understand that all or part of the processes in the above method embodiments can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium, and when executed, it may include the processes of the above method embodiments.
  • the storage medium may be a magnetic disk, an optical disk, a ROM, etc.
  • Non-volatile memory may include ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM) or flash memory.
  • Volatile memory may include random access memory (RAM), which acts as external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM) and direct Rambus DRAM (DRDRAM).
  • the units described above as separate components may or may not be physically separated.
  • the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

本申请实施例公开了一种音频处理方法、装置、芯片、电子设备及存储介质。该方法应用于电子设备,该方法包括:基于第一采样位宽对音源数据进行解码处理,得到第一解码数据;基于第二采样位宽对所述第一解码数据进行编码处理,得到音频编码数据包;其中,所述第一采样位宽对应于音频编解码器设置的采样位宽,所述第二采样位宽对应于音源数据的原始采样位宽。

Description

音频处理方法、装置、芯片、电子设备及存储介质
本申请要求于2022年07月01日提交、申请号为202210773940.9、发明名称为“音频 处理方法、装置、芯片、电子设备及存储介质”的中国专利申请的优先权,其全部内容通 过引用结合在本申请中。
技术领域
本申请涉及音视频技术领域,具体涉及一种音频处理方法、装置、芯片、电子设备及存储介质。
背景技术
随着电子技术的快速发展,用户对于音频的播放效果有着越来越高的要求,特别是对于高保真音源的音乐高品质的体验需求。目前的音频播放,存在功耗高、延迟高的问题,如何降低设备功耗及降低音频播放延迟,成了亟需解决的技术问题。
发明内容
本申请实施例公开了一种音频处理方法、装置、芯片、电子设备及存储介质。
本申请实施例公开了一种音频处理方法,应用于电子设备,所述方法包括:
基于第一采样位宽对音源数据进行解码处理,得到第一解码数据;
基于第二采样位宽对所述第一解码数据进行编码处理,得到音频编码数据包;
其中,所述第一采样位宽对应于音频编解码器所设置的采样位宽,所述第二采样位宽对应于所述音源数据的原始采样位宽。
本申请实施例公开了一种芯片,包括处理器和通信单元;
所述处理器配置成:
基于第一采样位宽对音源数据进行解码处理,得到第一解码数据;
基于第二采样位宽对所述第一解码数据进行编码处理,得到音频编码数据包;其中,所述第一采样位宽对应于音频编解码器所设置的采样位宽,所述第二采样位宽对应于所述音源数据的原始采样位宽;
所述通信单元配置成:
将所述音频编码数据包经由无线通信信道发送至音频输出设备。
本申请实施例公开了一种音频处理方法,应用于音频输出设备,所述方法包括:
获取音频编码数据包;
基于第二采样位宽对所述音频编码数据包进行解码处理,得到第二解码数据;所述第二采样位宽对应于所述音源数据的原始采样位宽;
将所述第二解码数据转换为模拟数据。
本申请实施例公开了一种芯片,包括处理器及通信单元;
所述通信单元,配置成:
经由无线通信信道获取音频编码数据包;
所述处理器配置成:
基于第二采样位宽对所述音频编码数据包进行解码处理,得到第二解码数据;所述第二采样位宽对应于音源数据的原始采样位宽。
本申请实施例公开了一种音频处理装置,应用于电子设备,所述装置包括:
解码模块,用于基于第一采样位宽对音源数据进行解码处理,得到第一解码数据;
编码模块,用于基于第二采样位宽对所述第一解码数据进行编码处理,得到音频编码 数据包;
其中,所述第一采样位宽对应于音频编解码器所设置的采样位宽,所述第二采样位宽对应于所述音源数据的原始采样位宽。
本申请实施例公开了一种音频处理装置,应用于音频输出设备,所述装置包括:
获取模块,用于获取音频编码数据包;
解码模块,用于基于第二采样位宽对所述音频编码数据包进行解码处理,得到第二解码数据;所述第二采样位宽对应于所述音源数据的原始采样位宽;
转换模块,用于将所述第二解码数据转换为模拟数据。
本申请实施例公开了一种电子设备,包括存储器及处理器,所述存储器中存储有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器实现如上任一所述的方法。
本申请实施例公开了一种计算机可读存储介质,其上存储有计算机程序,所述计算机程序被处理器执行时实现如上任一所述的方法。
本申请的一个或多个实施例的细节在下面的附图和描述中提出。本申请的其它特征和有益效果将从说明书、附图以及权利要求书中体现。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1A为相关技术中音频数据处理的示意图;
图1B为一个实施例中音频处理方法的应用场景图;
图2为一个实施例中音频处理方法的流程图;
图3A为一个实施例中对音源数据进行上采样的示意图;
图3B为一个实施例中对第一解码数据进行下采样的示意图;
图4为一个实施例中音频处理方法的流程示意图;
图5A为一个实施例中音频编码数据包的结构示意图;
图5B为另一个实施例中音频编码数据包的结构示意图;
图5C为另一个实施例中音频编码数据包的结构示意图;
图6为另一个实施例中音频处理方法的流程图;
图7为一个实施例中音频处理装置的框图;
图8为另一个实施例中音频处理装置的框图;
图9为一个实施例中电子设备的结构框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
需要说明的是,本申请实施例及附图中的术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
可以理解,本申请所使用的术语“第一”、“第二”等可在本文中用于描述各种元件,但这些元件不受这些术语限制。这些术语仅用于将一个元件与另一个元件区分。举例来说, 在不脱离本申请的范围的情况下,可以将第一采样位宽称为第二采样位宽,且类似地,可将第二采样位宽称为第一采样位宽。第一采样位宽和第二采样位宽两者都是采样位宽,但其不是同一采样位宽。本申请所使用的术语“多个”指的是两个及两个以上。本申请所使用的术语“和/或”指的是其中的一种方案,或是其中多种方案的任意组合。
在相关技术中,电子设备在向音频输出设备传输音频数据,以通过接收端设备进行音频播放时,电子设备会按照音频编解码器(Codec)设置的采样位宽,对音源数据进行一系列的编解码处理。示例性地,图1A为相关技术中音频数据处理的示意图。如图1A所示,以电子设备通过蓝牙无线通信,向音频输出设备传输音频数据进行播放为例,音频编解码器设置的采样位宽为24bit(比特)。电子设备获取音源数据,基于24bit对音源数据进行PCM(Pulse Code Modulation,脉冲编码调制)解码,得到24bit形式的PCM数据,再对24bit形式的PCM数据进行SBC(Sub Band Coding,子带编码)、AAC(Advanced Audio Coding,高级音频编码)等蓝牙音频编码处理,以得到与24bit参数对应的蓝牙音频编码数据。电子设备可将与24bit参数对应的蓝牙音频编码数据通过蓝牙无线通信传输给音频输出设备,音频输出设备对与24bit参数对应的蓝牙音频编码数据进行PCM解码,得到24bit形式的PCM数据,再利用DAC(Digital to Analog converter,数模转换器)及AMP(Amplifier for Power,功率放大器)进行数模转换及功率放大,得到模拟数据,从而对模拟数据进行播放。
为了保证兼容大多数的音源数据,音频编解码器所设置的采样位宽通常是最上限的采样位宽,当音频数据的原始采样位宽为较小的采样位宽时,整个音频传输及编解码过程,依然会按照音频编解码器设置的采样位宽进行处理及传输,比如,音频编解码器设置的采样位宽为24bit,音频数据的原始采样位宽为16bit,整个音频传输及编解码过程依然会按照24bit进行编解码处理及传输,造成资源浪费,导致占用的内存增加,增加了音频播放延迟,增加了设备功耗,还增加了传输所占用的带宽。
本申请实施例公开了一种音频处理方法、装置、芯片、电子设备及存储介质,能够降低设备功耗及音频播放延迟,可降低音频处理过程中的内存消耗,且不对音频编解码器进行初始化,不改变音频编解码器所设置的采样位宽,可避免重新对音频编解码器初始化造成听感卡顿的问题,保证了音频的播放品质。
图1B为一个实施例中音频处理方法的应用场景图。如图1B所示,电子设备110与音频输出设备120可建立通信连接。电子设备110可包括但不限于手机、智能可穿戴设备、车载终端、平板电脑、PC(Personal Computer,个人电脑)、PDA(Personal Digital Assistant,个人数字助理)等。音频输出设备120可包括但不限于耳机、音箱设备、车载终端等,进一步地,该音频输出设备120可以是TWS(True Wireless Stereo,真无线立体声)耳机。
电子设备110与音频输出设备120之间可建立蓝牙、WiFi等无线通信连接,也可通过USB(Universal Serial Bus,通用串行总线)接口建立有线通信连接,本申请实施例对电子设备110与音频输出设备120之间的通信连接方式不作具体限定。
在电子设备110向音频输出设备120传输音频数据,以通过音频输出设备120进行音频播放的过程中,电子设备110可基于第一采样位宽对音源数据进行解码处理,得到第一解码数据,再基于第二采样位宽对第一解码数据进行编码处理,得到音频编码数据包。其中,该第一采样位宽对应于音频编解码器所设置的采样位宽,第二采样位宽对应于音源数据的原始采样位宽。电子设备110可将音频编码数据包发送给音频输出设备120,音频输出设备120获取到音频编码数据包后,可基于第二采样位宽对音频编码数据包进行解码处理,得到第二解码数据,再将第二解码数据转换为模拟数据,以输出模拟数据。
如图2所示,在一个实施例中,提供一种音频处理方法,可应用于上述的电子设备,该方法可包括以下步骤:
步骤210,基于第一采样位宽对音源数据进行解码处理,得到第一解码数据。
采样位宽也可叫采样深度,指的是声卡数字信号的二进制位数,可用于反映声卡处理的解析度,采样位宽越大,解析度就越高。声音信号以连续的模拟信号按一定的采样频率经数码脉冲取样后,每一个离散的脉冲信号被以一定的量化精度量化成一串二进制编码流,这串二进制编码流的位数即为采样位宽。
音源数据可指的是待播放或当前正在播放的音频数据。音源数据可为音乐、视频声音、电子设备运行的应用程序的背景音、通话语音、提示音等中的任一种音频数据,但不限于此。电子设备可将音源数据传输给音频输出设备,以通过音频输出设备对音源数据进行播放。
在电子设备向音频输出设备传输音源数据之前,可先对音源数据进行编解码处理。电子设备可基于第一采样位宽对音源数据进行解码处理,得到第一解码数据,该第一采样位宽对应于音频编解码器所设置的采样位宽,可选地,该音频编解码器所设置的采样位宽可为上限采样位宽,能够兼容大部分的音频数据,例如,该音频编解码器所设置的采样位宽可为24bit、32bit等,但不限于此。
在一些实施例中,音源数据的数据格式可包括但不限于FLAC(Free Lossless Audio Codec,无损音频压缩编码)格式、APE(通过Monkey's Audio压缩得到)格式、ALAC(Apple lossless audio codec,苹果公司研发的无损音频格式)格式、MP3(Moving Picture Experts Group Audio Layer III,动态影像专家压缩标准音频层面3)格式、RealAudio格式等,但不限于此。
进一步地,音源数据的原始采样位宽可小于音频编解码器所设置的采样位宽,音源数据中每一串二进制编码流的位数即为该原始采样位宽。电子设备可基于第一采样位宽对音源数据进行解码处理,以得到对应第一采样位宽的第一解码数据,将与原始采样位宽对应的音源数据解码成第一采样位宽形式的第一解码数据。例如,音源数据对应的原始采样位宽为16bit,第一采样位宽为24bit,则可将与16bit参数对应的音源数据解码成24bit形式的第一解码数据。
电子设备可按照预设上采样方式对与原始采样位宽对应的音源数据进行上采样,并对上采样后的音源数据进行解码处理,以得到对应第一采样位宽的第一解码数据。作为一种具体实施方式,该预设上采样方式可以是在采用二进制形式的音源数据的低位添加预设比特值,以得到与第一采样位宽对应的音源数据,可在对应原始采样位宽的音源数据的末尾,添加N位预设比特值,该N可为第一采样位宽与原始采样位宽之间的差值。该预设比特值可根据实际需求进行设置,例如,可为0或1等,但不限于此。
示例性地,图3A为一个实施例中对音源数据进行上采样的示意图。如图3A所示,音源数据对应的原始采样位宽为16bit,第一采样位宽为24bit,则可在16bit的音源数据的低位添加8位0,得到24bit的音源数据,电子设备可再对该24bit的音源数据进行解码处理,以得到24bit的第一解码数据。需要说明的是,也可先对与原始采样位宽对应的音源数据进行解码处理,再对解码得到的数据进行上采样,得到第一解码数据,本申请实施例对上采样及解码处理的先后顺序不作限定。
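A minimal C sketch of the zero-padding described above, assuming 16-bit source samples widened to a 24-bit value that is kept sign-extended in an int32_t (the container type and function name are illustrative choices of this sketch, not part of the application):

```c
#include <stdint.h>

/* Widen a 16-bit PCM sample to 24 bits by appending eight zero bits at
 * the low end (multiplying by 256 is the shift-left-by-8 of the figure),
 * so 0x1234 becomes 0x123400. */
static int32_t widen_16_to_24(int16_t sample)
{
    return (int32_t)sample * 256;
}
```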
在一些实施例中,电子设备可基于第一采样位宽对音源数据进行PCM解码处理,得到PCM格式的第一解码数据。不管音源数据的原始采样位宽为多少,均利用对应于音频编解码器所设置的采样位宽的第一采样位宽对音源数据进行解码,不需要对音频编解码器进行初始化,不改变音频编解码器所设置的采样位宽,可避免对音频编解码器重新初始化造成听感卡顿的问题,保证了音频的播放品质。
步骤220,基于第二采样位宽对第一解码数据进行编码处理,得到音频编码数据包。
第二采样位宽可小于第一采样位宽,可选地,第二采样位宽可大于或等于音源数据的原始采样位宽,从而可保证音源数据播放时的音乐品质,例如,音源数据的原始采样位宽为8bit,第二采样位宽可为16bit、8bit等。电子设备可基于第二采样位宽对第一解码数据进行编码处理,得到与第二采样位宽对应的音频编码数据,将第一采样位宽形式的第一解码数据,按照第二采样位宽进行编码,得到与第二采样位宽对应的音频编码数据,再对音频编码数据进行封装,得到音频编码数据包。
电子设备可按照预设下采样方式对第一采样位宽形式的第一解码数据进行下采样,并对下采样后的第一解码数据进行编码处理,以得到与第二采样位宽对应的音频编码数据。该预设下采样方式可与上述的预设上采样方式为相互对应匹配的采样方式。作为一种具体实施方式,该预设上采样方式可以是在采用二进制形式的音源数据的低位添加预设比特值,以得到与第一采样位宽对应的音源数据,则预设下采样方式可以是对与第一采样位宽对应的第一解码数据中,排列在后P位的比特值进行裁剪。电子设备可对采用二进制形式的第一解码数据从低位进行裁剪,以保留与第二采样位宽对应的第一解码数据的部分,可对与第一采样位宽对应的第一解码数据中,排列在后P位的比特值进行裁剪,并对裁剪后的第一解码数据进行编码,得到与第二采样位宽对应的音频编码数据,再对音频编码数据进行封装,得到音频编码数据包。该P为第一采样位宽与第二采样位宽之间的差值。
示例性地,图3B为一个实施例中对第一解码数据进行下采样的示意图。如图3B所示,第一解码数据对应的第一采样位宽为24bit,第二采样位宽为16bit,则可对在24bit形式的第一解码数据中,排列在低8位的比特值进行裁剪,保留第一解码数据中的高16位,得到16bit的第一解码数据,电子设备可再对该16bit的第一解码数据进行编码处理,以得到与16bit参数对应的音频编码数据。在其它的实施例中,也可采用其它的预设上采样方式及预设下采样方式,本申请实施例不作限定。
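The reverse step, again only as a sketch: the low 8 bits of each 24-bit sample are cut off before re-encoding, assuming the sample is held sign-extended in an int32_t and that the compiler implements the usual arithmetic right shift (both are assumptions of this sketch):

```c
#include <stdint.h>

/* Keep the high 16 bits of a 24-bit sample by dropping its low 8 bits,
 * so 0x123456 becomes 0x1234; this mirrors the truncation of Figure 3B. */
static int16_t truncate_24_to_16(int32_t sample24)
{
    return (int16_t)(sample24 >> 8);
}
```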
作为一种具体实施方式,上述的第二采样位宽对应于音源数据的原始采样位宽,从而可在保证音源数据播放时的音乐品质的情况下,尽可能降低音频编码过程中的内存消耗及处理时长,从而降低了音频播放延迟及设备功耗。以第一采样位宽为24bit,音源数据的原始采样位宽为16bit,第二采样位宽为16bit为例,音频编码过程中占用的内存可减少1/3,编码处理时长及相应的功耗也可减少1/3;而以第一采样位宽为24bit,音源数据的原始采样位宽为8bit,第二采样位宽为8bit为例,音频编码过程中占用的内存可减少2/3,编码处理时长及相应的功耗也可减少2/3。
可选地,对第一解码数据进行编码处理可包括但不限于对第一解码数据进行SBC、AAC等编码处理。
在一些实施例中,电子设备在得到音频编码数据包后,可将音频编码数据包经由无线通信信道发送至音频输出设备。在一些实施例中,无线通信信道可包括蓝牙通信信道,该蓝牙通信信道包括广播信道和/或数据信道。
电子设备与音频输出设备之间可建立蓝牙连接,该蓝牙连接可包括经典蓝牙连接、BLE(Bluetooth Low Energy,蓝牙低功耗)连接等,其中,经典蓝牙连接为基于经典蓝牙协议建立的蓝牙通信连接,BLE连接为基于BLE协议建立的蓝牙通信连接,经典蓝牙协议通常泛指在蓝牙协议4.0版本以下的蓝牙协议,BLE协议通常泛指在蓝牙协议4.0版本以上的蓝牙协议。进一步地,该蓝牙连接可以是基于BLE连接建立的LE Audio蓝牙连接,能够支持音频数据的传输。
电子设备可通过蓝牙连接的音频业务传输信道,将目标音频数据包发送至音频输出设备。若蓝牙连接为经典蓝牙连接,则音频业务传输信道可为基于A2DP(Advanced Audio Distribution Profile,蓝牙音频传输模型协定)协议或HFP(Hands-free Profile)协议等建立 的传输信道,若蓝牙连接为LE Audio蓝牙连接,则音频业务传输通道可为CIS(Connected Isochronous Streams,基于连接同步数据流)等传输信道,但不限于此。需要说明的是,本申请实施例对于电子设备与音频输出设备之间的具体蓝牙连接方式及通信信道不作限定,可根据蓝牙标准协议的发展进行变化。电子设备向音频输出设备传输更小采样位宽的音频编码数据包,可减少音频编码数据包所占的传输带宽,减少对通信传输资源的浪费。
音频输出设备在获取到音频编码数据包后,可基于第二采样位宽对音频编码数据包进行解码处理,得到第二解码数据,再将第二解码数据转换为模拟数据,以输出该模拟数据,实现音频播放。在一些实施例中,音频输出设备可对获取的音频编码数据包进行解包,以提取音频编码数据包中包含的与第二采样位宽对应的音频编码数据。音频输出设备可基于第二采样位宽对该音频编码数据进行PCM解码,得到第二采样位宽形式的第二解码数据,再通过数模转换器将第二解码数据从数字信号转换为模拟信号,得到第一模拟数据,然后通过功率放大器对第一模拟数据进行功率放大,得到第二模拟数据。功率放大器可将第二模拟数据传输至播放单元,音频输出设备通过播放单元输出第二模拟数据,以实现播放音频的效果。
示例性地,图4为一个实施例中音频处理方法的流程示意图。如图4所示,电子设备可基于第一采样位宽(如24bit)对音源数据进行PCM解码,得到第一解码数据,再基于第二采样位宽(如16bit)对第一解码数据进行蓝牙音频编码,得到与第二采样位宽(如16bit)对应的音频编码数据,并将与第二采样位宽(如16bit)对应的音频编码数据封装成音频编码数据包。电子设备可将该音频编码数据包经由蓝牙无线信道传输给音频输出设备。音频输出设备经由蓝牙无线信道接收到该音频编码数据包后,可对音频编码数据包进行解包,以提取与第二采样位宽(如16bit)对应的音频编码数据,并基于第二采样位宽(如16bit)对该音频编码数据进行PCM解码,得到第二解码数据。音频输出设备可再通过DAC及AMP分别对第二解码数据进行数模转换及功率放大处理,得到模拟数据,最后通过播放单元输出该模拟数据。相较于图1A所示的音频处理方式,能够节省电子设备及音频输出设备在编解码过程中的内存消耗及处理时长,可有效降低音频延时及功耗,且可减少音频编码数据包传输时所占的带宽。
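For orientation only, a high-level C sketch of the sending side of this Figure-4 flow; every function name here (pcm_decode_24, bt_encode_16, bt_send) is a hypothetical placeholder rather than an API defined by this application:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stage functions: prototypes only, no implementations given. */
size_t pcm_decode_24(const uint8_t *src, size_t src_len,
                     int32_t *pcm24, size_t max_samples);
size_t bt_encode_16(const int16_t *pcm16, size_t n,
                    uint8_t *pkt, size_t pkt_cap);
void   bt_send(const uint8_t *pkt, size_t len);

static void send_audio(const uint8_t *src, size_t src_len)
{
    int32_t pcm24[1024];
    int16_t pcm16[1024];
    uint8_t pkt[2048];

    /* 1. PCM-decode the source at the 24-bit width set by the codec. */
    size_t n = pcm_decode_24(src, src_len, pcm24, 1024);

    /* 2. Truncate each sample to the source's original 16-bit width. */
    for (size_t i = 0; i < n; i++)
        pcm16[i] = (int16_t)(pcm24[i] >> 8);

    /* 3. Bluetooth-encode at 16 bits and send the resulting packet. */
    size_t len = bt_encode_16(pcm16, n, pkt, sizeof pkt);
    bt_send(pkt, len);
}
```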
在本申请实施例中,电子设备通过对应于音源数据的原始采样位宽的第二采样位宽,对第一解码数据进行编码处理,该原始采样位宽小于音频编解码器所设置的采样位宽,能够降低编码过程中的内存消耗及处理时长,从而降低了音频播放延迟及设备功耗。音频输出设备通过对应于音源数据的原始采样位宽的第二采样位宽,对音频编码数据包进行解码处理,能够降低解码过程中的内存消耗及处理时长,从而降低了音频播放延迟及设备功耗。而且,本申请实施例中不对音频编解码器进行初始化,不改变音频编解码器所设置的采样位宽,可避免对音频编解码器重新初始化造成听感卡顿的问题,保证了音频的播放品质。
在一些实施例中,电子设备基于第二采样位宽对第一解码数据进行编码处理,得到与第二采样位宽对应的音频编码数据,可按照预设的数据包格式对该音频编码数据进行封装,得到音频编码数据包。下面对音频编码数据包的几种数据包格式进行介绍:
(1)音频编码数据包包括包头部及数据部,包头部包括第一位宽字段及第二位宽字段。
第一位宽字段表征音频编解码器所设置的采样位宽,该第一位宽字段可用于指示上述的第一采样位宽。
第二位宽字段表征音频编码数据包在编码过程中实际采用的采样位宽,该第二位宽字段用于指示上述的第二采样位宽。
数据部用于存储对应第二采样位宽的音频编码数据。
作为一种实施方式,第一位宽字段可存储于音频编码数据包的包头部的第一数据段, 第二位宽字段可存储于音频编码数据包的包头部的第二数据段,可选地,第一数据段可位于第二数据段之前。在一些实施例中,第二位宽字段可存储于包头部的保留字段,该保留字段是包头部中已定义用于特定用途的字段,可使用部分保留字段存储第二位宽字段,从而不需要对包头部的整体结构进行大的调整,编包方式更为简单、快捷。
在本申请的一些实施例中,第一位宽字段指示音频编解码器所设置的24bit采样(即第一采样位宽为24bit),第二位宽字段指示音源数据所采用的16bit采样(即第二采样位宽为16bit),但第一位宽字段和第二位宽字段都只是占据2bit,标识4种情况,例如,二进制00标识8bit采样,01标识16bit采样,10标识24bit采样,二进制11标识32bit采样。由此,通过共计4bit即可表示第一采样位宽和第二采样位宽的信息。考虑到技术发展,第一位宽字段和第二位宽字段可能各占据3bit,从而分别标识8种情况。
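Since each bit-width field needs only 2 bits, the two fields together fit in 4 header bits; a sketch of packing and unpacking them follows (the bit positions chosen here are an assumption, only the code values come from the description):

```c
#include <stdint.h>

/* Pack the two 2-bit codes into one nibble:
 * bits 3..2 = first bit-width field, bits 1..0 = second bit-width field. */
static uint8_t pack_width_fields(uint8_t codec_code, uint8_t actual_code)
{
    return (uint8_t)(((codec_code & 0x3u) << 2) | (actual_code & 0x3u));
}

static void unpack_width_fields(uint8_t packed,
                                uint8_t *codec_code, uint8_t *actual_code)
{
    *codec_code  = (packed >> 2) & 0x3u;
    *actual_code = packed & 0x3u;
}
```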
在本申请的一些实施例中,音频编码数据包的包头部共占据64bit。第一位宽字段、第二位宽字段及可能设置的判断字段均使用包头部中的保留字段来存储。在包头部的部分保留字段被占用的情况下,第一位宽字段、第二位宽字段可尽量利用未被占用的保留字段来存储,未能存储的部分将被存入包头部64bit之后的新设的位或字节中。应理解,64bit的包头部仅用于说明,而非对包头部的尺寸或结构进行限定。
在本申请的一些实施例中,第一采样位宽与音频编解码器所设置的采样位宽并不相同,第二采样位宽与音源数据的原始采样位宽也不相同;而是,第一采样位宽与音频编解码器所设置的采样位宽成第一比例关系,第二采样位宽与音源数据的原始采样位宽同样成第一比例关系。或者,第一采样位宽与音频编解码器所设置的采样位宽成第一差值关系,而第二采样位宽与音源数据的原始采样位宽也同样成第一差值关系。
数据部可存储于音频编码数据包的第三数据段。可选地,包头部可位于该音频编码数据包的第三数据段之前。
示例性地,图5A为一个实施例中音频编码数据包的结构示意图。如图5A所示,音频编码数据包可包括包头部及数据部,包头部可包括第一包头信息及第二位宽字段,第一包头信息可存储于包头部的第一数据段。该第一包头信息中可包括第一位宽字段,第二位宽字段可位于第一包头信息及数据部之间,即第一位宽字段位于第二位宽字段之前,进一步地,第二位宽字段可位于包头部的末尾部分。例如,第一采样位宽为24bit,第二采样位宽为16bit,则该包头部中的第一位宽字段用于指示24bit,第二位宽字段用于指示16bit,数据部可包括与16bit参数对应的音频编码数据。
音频输出设备在获取到音频编码数据包后,可对音频编码数据包进行解包,以提取该音频编码数据包中的包头部及数据部,音频输出设备可基于该第二位宽字段指示的第二采样位宽,对数据部中存储的音频编码数据进行解码处理,得到第二解码数据。
在一些实施例中,音频编码数据包的包头部还可包括第一长度字段和/或第二长度字段。进一步地,包头部中的第一包头信息还可包括第一长度字段和/或第二长度字段。
第一长度字段存储的长度参数用于指示包头部的数据长度,可选地,包头部包括第一包头信息及第二位宽字段,该第一长度字段存储的长度参数可为第一包头信息与第二位宽字段的数据长度之和。该数据长度可指的是所占的比特位数,第一长度字段存储的长度参数可为第一包头信息所占的比特位数与第二位宽字段所占的比特位数,例如,如图5A所示,包头部中的第一包头信息所占的比特位数为M,第二位宽字段所占的比特位数为8,则第一长度字段存储的长度参数可为M+8。
作为一种实施方式,音频输出设备在对音频编码数据包进行解包时,可先从音频编码数据包中提取包头部,以获取包头部中包含的各个字段,如上述的第一位宽字段、第二位宽字段、第一长度字段等。可选地,可先从音频包头部的第一数据段提取包头部中的第一 包头信息,然后根据第一包头信息中第一长度字段存储的长度参数,从包头部的第二数据段提取第二位宽字段,以及从音频编码数据包的第三数据段提取音频编码数据包中的数据部。由于第一长度字段存储的长度参数为包头部中的第一包头信息与第二位宽字段的数据长度之和,音频输出设备利用该第一长度字段存储的长度参数可准确从音频编码数据包中提取第二位宽字段,保证后续准确基于第二位宽字段中存储的第二采样位宽进行解码处理,提高了处理效率及处理准确性。
第二长度字段存储的长度参数用于指示数据部的数据长度。该第二长度字段存储的长度参数为数据部所占的比特位数。作为一种实施方式,音频输出设备可根据音频编码数据包的包头部中第二长度字段存储的长度参数,从音频编码数据包的第三数据段提取音频编码数据包中的数据部,可保证音频输出设备准确获取到对应第二采样位宽的音频编码数据。
在本申请实施例中,音频输出设备根据音频编码数据包的包头部内的第一长度字段和/或第二长度字段,可准确识别出音频编码数据包为按照第二采样位宽进行编码处理得到的音频编码数据包,并准确进行解包,能够有效将本申请实施例的音频编码数据包与相关技术中的音频编码数据包(整个音频传输及编解码过程均按照音频编解码器所设置的采样位宽进行处理及传输)进行区分,从而保证后续音频处理及播放的准确进行。
可选地,上述音频编码数据包的包头部的第一包头信息中还可包括其它字段,例如音源数据的供应商标识符、音源数据的编码器标识符、音源数据的版本标识、采样率、声道数等中的一种或多种,但不限于此。其中,该供应商标识符可用于标识音频数据的供应商,编码器标识符可用于标识音源数据的编码格式,不同编码格式的音源数据可分别对应不同的编码器标识符。
在本申请施例中,音频编码数据包可包括包头部及数据部,包头部可包括第一位宽字段及第二位宽字段,可保证音频输出设备在进行解包后,按照第二位宽字段指示的第二采样位宽对音频编码数据进行解码处理,能够降低音频输出设备的内存消耗,且可降低处理时长,从而降低了音频播放延迟及设备功耗,且由于第一位宽字段指示的第一采样位宽不变,音频输出设备不会对音频编解码器进行初始化,可避免对音频编解码器重新初始化造成听感卡顿的问题。
(2)音频编码数据包包括包头部及数据部,包头部包括第一位宽字段、判断字段及第二位宽字段。
音频编码数据包的包头部除了包括上述数据包格式(1)中所介绍的第一位宽字段、第二位宽字段等字段以外,还可包括判断字段,该判断字段可用于表征包头部内的第一位宽字段与第二位宽字段是否一致,进一步地,该判断字段可用于表征音频编解码器所设置的采样位宽(对应第一采样位宽),与电子设备进行编码处理过程中采用的采样位宽(即第二采样位宽)是否一致。
可选地,判断字段可用不同的判断标识表征包头部内的第一位宽字段与第二位宽字段是否一致。若判断字段中存储第一判断标识,表征包头部内的第一位宽字段与第二位宽字段一致,若判断字段中存储第二判断标识,表征包头部内的第一位宽字段与第二位宽字段不一致。该第一判断标识与第二判断标识可根据实际需求进行设置,例如,第一判断标识可为0,第二判断标识可为1等,但不限于此。
作为一种实施方式,判断字段可存储于音频编码数据包的包头部的第四数据段,该第四数据段在包头部中的位置可预先进行配置,例如,第四数据段可在包头部的第一数据段与包头部的第二数据段之间,或者,第四数据段可在包头部的第二数据段之后等,在此不作限定。进一步地,判断字段可存储于包头部的保留字段。
示例性地,图5B为另一个实施例中音频编码数据包的结构示意图。如图5B所示,音 频编码数据包可包括包头部及数据部,包头部可包括第一包头信息、判断字段及第二位宽字段,第一包头信息可存储于包头部的第一数据段。该第一包头信息中可包括第一位宽字段,判断字段可位于第一包头信息与第二位宽字段之间,第二位宽字段可位于数据部之前。例如,第一采样位宽为24bit,第二采样位宽为16bit,则该包头部中的第一位宽字段用于指示24bit,第二位宽字段用于指示16bit,判断字段可为1(表示第一位宽字段与第二位宽字段不一致)。
示例性地,图5C为另一个实施例中音频编码数据包的结构示意图。如图5C所示,音频编码数据包可包括包头部及数据部,包头部可包括第一包头信息、判断字段及第二位宽字段,判断字段可位于第二位宽字段之后,即,判断字段可位于包头部的末尾,第一包头信息可位于第二位宽字段之前。
作为一种实施方式,音频编码数据包的包头部内第一长度字段存储的长度参数,可为第一包头信息、判断字段以及实际采样位宽的数据长度之和。作为一种实施方式,音频输出设备在对音频编码数据包进行解包时,可先从音频编码数据包中提取包头部的第一包头信息,然后根据第一包头信息中第一长度字段存储的长度参数,分别从包头部的第二数据段及第四数据段中提取第二位宽字段及判断字段。音频输出设备可根据该判断字段确定第一位宽字段与第二位宽字段是否一致,提高了后续音频处理的准确性。
在本申请实施例中,音频编码数据包的包头部还可包括判断字段,音频输出设备可根据该判断字段确定第一位宽字段与第二位宽字段是否一致,提高了后续音频处理的准确性。
需要说明的是,目标音频数据包的数据包格式并不仅限于上述的几种数据包格式,音频编码数据包也可包括其它字段信息,例如还可包括校验码等,各个字段在音频编码数据包中的位置也不仅限于上述实施例中描述的几种方式,音频编码数据包的数据包格式可基于实际需求进行调整。
在本申请实施例中,电子设备在通过对应于音源数据的原始采样位宽的第二采样位宽进行编码处理后,可按照预设的数据包格式将音频编码数据封装成音频编码数据包,能够保证音频输出设备准确对音频编码数据包进行解包及音频处理,提高了音频输出设备的音频处理性能。
在一个实施例中,提供一种芯片,配置成执行如上述各实施例描述的应用于电子设备的音频处理方法中的步骤。
该芯片可包括处理器及通信模块,处理器可配置成:执行基于第一采样位宽对音源数据进行解码处理,得到第一解码数据,以及基于第二采样位宽对第一解码数据进行编码处理,得到音频编码数据包的步骤,通信模块可配置成:执行将音频编码数据包经由无线通信信道发送至音频输出设备的步骤。该芯片可设置在电子设备中,如手机、可穿戴设备、车载终端、平板电脑等。
如图6所示,在一个实施例中,提供另一种音频处理方法,可应用于上述的音频输出设备,该方法可包括以下步骤:
步骤610,获取音频编码数据包。
在一个实施例中,步骤610,包括:经由无线通信信道获取音频编码数据包;无线通信信道包括蓝牙通信信道,蓝牙通信信道包括广播信道和/或数据信道。
步骤620,基于第二采样位宽对音频编码数据包进行解码处理,得到第二解码数据。第二采样位宽对应于音源数据的原始采样位宽。
在一个实施例中,第二采样位宽小于第一采样位宽,第一采样位宽对应于音频编解码器所设置的采样位宽。
在一个实施例中,原始采样位宽小于音频编解码器所设置的采样位宽。
在一个实施例中,步骤基于第二采样位宽对音频编码数据包进行解码处理,包括:对音频编码数据包进行解包,以提取音频编码数据包中的包头部及数据部;包头部包括音频编解码器的第一位宽字段及第二位宽字段,第一位宽字段用于指示第一采样位宽,第二位宽字段用于指示第二采样位宽;基于第二位宽字段中指示的第二采样位宽,对数据部中存储的音频编码数据进行解码处理。
在一个实施例中,音频编码数据包的包头部还包括判断字段,判断字段用于表征第一位宽字段与第二位宽字段是否一致。
步骤630,将第二解码数据转换为模拟数据。
需要说明的是,本申请实施例提供的应用于音频输出设备的音频处理方法的具体描述,可参考上述各实施例中提供的应用于电子设备的音频处理方法的描述,在此不再重复赘述。
在本申请实施例中,音频输出设备通过对应于音源数据的原始采样位宽的第二采样位宽,对音频编码数据包进行解码处理,该音源数据的原始采样位宽小于音频编解码器所设置的采样位宽,能够降低解码过程中的内存消耗及处理时长,从而降低了音频播放延迟及设备功耗。而且,本申请实施例中不对音频编解码器进行初始化,不改变音频编解码器所设置的采样位宽,可避免对音频编解码器重新初始化造成听感卡顿的问题,保证了音频的播放品质。
在一个实施例中,提供一种芯片,配置成执行如上述各实施例描述的应用于音频输出设备的音频处理方法中的步骤。
该芯片可包括处理器及通信模块,通信模块可配置成:执行经由无线通信信道获取音频编码数据包的步骤,处理器可配置成:执行基于第二采样位宽对音频编码数据包进行解码处理,得到第二解码数据等步骤。该芯片可设置在音频输出设备中,如耳机、音箱、车载播放器等。
如图7所示,在一个实施例中,提供一种音频处理装置700,可应用于上述的电子设备,该音频处理装置700可包括解码模块710及编码模块720。
解码模块710,用于基于第一采样位宽对音源数据进行解码处理,得到第一解码数据。
编码模块720,用于基于第二采样位宽对第一解码数据进行编码处理,得到音频编码数据包。其中,第一采样位宽对应于音频编解码器所设置的采样位宽,第二采样位宽对应于音源数据的原始采样位宽。
在一个实施例中,第二采样位宽小于第一采样位宽。
在一个实施例中,音源数据的原始采样位宽小于音频编解码器所设置的采样位宽。
在一个实施例中,编码模块720,还用于对采用二进制形式的第一解码数据从低位进行裁剪,以保留与第二采样位宽对应的第一解码数据的部分。
在一个实施例中,音频处理装置700还包括发送模块。
发送模块,用于将音频编码数据包经由无线通信信道发送至音频输出设备。无线通信信道包括蓝牙通信信道,蓝牙通信信道包括广播信道和/或数据信道。
在本申请实施例中,电子设备通过对应于音源数据的原始采样位宽的第二采样位宽,对第一解码数据进行编码处理,音源数据的原始采样位宽小于音频编解码器所设置的采样位宽,能够降低编码过程中的内存消耗及处理时长,从而降低了音频播放延迟及设备功耗。而且,本申请实施例中不对音频编解码器进行初始化,不改变音频编解码器所设置的采样位宽,可避免对音频编解码器重新初始化造成听感卡顿的问题,保证了音频的播放品质。
在一个实施例中,音频编码数据包包括包头部及数据部,包头部包括第一位宽字段及第二位宽字段;第一位宽字段用于指示第一采样位宽,第二位宽字段用于指示第二采样位宽;数据部用于存储音频编码数据。
在一个实施例中,第一位宽字段存储于包头部的第一数据段,第二位宽字段存储于包头部的第二数据段,第一数据段位于第二数据段之前。
在一个实施例中,音频编码数据包的包头部还包括判断字段,判断字段用于表征第一位宽字段与第二位宽字段是否一致。
在本申请实施例中,电子设备在通过对应于音源数据的原始采样位宽的第二采样位宽进行编码处理后,可按照预设的数据包格式将音频编码数据封装成音频编码数据包,能够保证音频输出设备准确对音频编码数据包进行解包及音频处理,提高了音频输出设备的音频处理性能。
如图8所示,在一个实施例中,提供一种音频处理装置800,可应用于上述的音频输出设备,该音频处理装置800可包括获取模块810、解码模块820及转换模块830。
获取模块810,用于获取音频编码数据包。
在一个实施例中,获取模块810,还用于经由无线通信信道获取音频编码数据包;无线通信信道包括蓝牙通信信道,蓝牙通信信道包括广播信道和/或数据信道。
解码模块820,用于基于第二采样位宽对音频编码数据包进行解码处理,得到第二解码数据;第二采样位宽对应于音源数据的原始采样位宽。
在一个实施例中,第二采样位宽小于第一采样位宽,第一采样位宽对应于音频编解码器所设置的采样位宽。
在一个实施例中,原始采样位宽小于音频编解码器所设置的采样位宽。
在一个实施例中,解码模块820,包括解包单元及解码单元。
解包单元,用于对音频编码数据包进行解包,以提取音频编码数据包中的包头部及数据部;包头部内包括第一位宽字段及第二位宽字段,第一位宽字段用于指示第一采样位宽,第二位宽字段用于指示第二采样位宽。
解码单元,用于基于第二位宽字段指示的第二采样位宽,对数据部中存储的音频编码数据进行解码处理。
在一个实施例中,音频编码数据包的包头部还包括判断字段,判断字段用于表征第一位宽字段与第二位宽字段是否一致。
转换模块830,用于将第二解码数据转换为模拟数据。
在本申请实施例中,音频输出设备通过对应于音源数据的原始采样位宽的第二采样位宽,对音频编码数据包进行解码处理,该音源数据的原始采样位宽小于音频编解码器所设置的采样位宽,能够降低解码过程中的内存消耗及处理时长,从而降低了音频播放延迟及设备功耗。而且,本申请实施例中不对音频编解码器进行初始化,不改变音频编解码器所设置的采样位宽,可避免对音频编解码器重新初始化造成听感卡顿的问题,保证了音频的播放品质。
图9为一个实施例中电子设备的结构框图。如图9所示,电子设备900可以包括一个或多个如下部件:处理器910、与处理器910耦合的存储器920,其中存储器920可存储有一个或多个计算机程序,一个或多个计算机程序可以被配置为由一个或多个处理器910执行时实现如上述各实施例描述的应用于电子设备的音频处理方法。
处理器910可以包括一个或者多个处理核。处理器910利用各种接口和线路连接整个电子设备900内的各个部分,通过运行或执行存储在存储器920内的指令、程序、代码集或指令集,以及调用存储在存储器920内的数据,执行电子设备900的各种功能和处理数据。可选地,处理器910可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器910可集成中央处理器(Central  Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器910中,单独通过一块通信芯片进行实现。
存储器920可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory,ROM)。存储器920可用于存储指令、程序、代码、代码集或指令集。存储器920可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现上述各个方法实施例的指令等。存储数据区还可以存储电子设备900在使用中所创建的数据等。
电子设备900还可包括蓝牙模块,蓝牙模块可用于提供蓝牙通信功能,与第二电子设备建立蓝牙连接,并进行蓝牙数据传输。蓝牙模块可支持一种或多种蓝牙协议,如经典蓝牙、BLE、BLE Audio等),但不限于此,可随着蓝牙协议的发展而变化。
本申请实施例还提供一种电子设备,包括存储器及处理器,该存储器中存储有计算机程序,计算机程序被该处理器执行时,使得处理器实现如上述各实施例描述的应用于音频输出设备的音频处理方法。
本申请实施例公开一种计算机可读存储介质,其存储计算机程序,其中,该计算机程序被处理器执行时实现如上述各实施例描述的应用于电子设备的音频处理方法。
本申请实施例公开一种计算机可读存储介质,其存储计算机程序,其中,该计算机程序被处理器执行时实现如上述各实施例描述的应用于音频输出设备的音频处理方法。
本申请实施例公开一种计算机程序产品,该计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,且该计算机程序可被处理器执行时实现如上述各实施例描述的应用于电子设备的音频处理方法。
本申请实施例公开一种计算机程序产品,该计算机程序产品包括存储了计算机程序的非瞬时性计算机可读存储介质,且该计算机程序可被处理器执行时实现如上述各实施例描述的应用于音频输出设备的音频处理方法。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,所述的程序可存储于一非易失性计算机可读取存储介质中,该程序在执行时,可包括如上述各方法的实施例的流程。其中,所述的存储介质可为磁碟、光盘、ROM等。
如此处所使用的对存储器、存储、数据库或其它介质的任何引用可包括非易失性和/或易失性存储器。合适的非易失性存储器可包括ROM、可编程ROM(Programmable ROM,PROM)、可擦除PROM(Erasable PROM,EPROM)、电可擦除PROM(Electrically Erasable PROM,EEPROM)或闪存。易失性存储器可包括随机存取存储器(random access memory,RAM),它用作外部高速缓冲存储器。作为说明而非局限,RAM可为多种形式,诸如静态RAM(Static RAM,SRAM)、动态RAM(Dynamic Random Access Memory,DRAM)、同步DRAM(synchronous DRAM,SDRAM)、双倍数据率SDRAM(Double Data Rate SDRAM,DDR SDRAM)、增强型SDRAM(Enhanced Synchronous DRAM,ESDRAM)、同步链路DRAM(Synchlink DRAM,SLDRAM)、存储器总线直接RAM(Rambus DRAM,RDRAM)及直接存储器总线动态RAM(Direct Rambus DRAM,DRDRAM)。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定特 征、结构或特性可以以任意适合的方式结合在一个或多个实施例中。本领域技术人员也应该知悉,说明书中所描述的实施例均属于可选实施例,所涉及的动作和模块并不一定是本申请所必须的。需要说明的,本申请中的“多个”包括“两个或两个以上”。
在本申请的各种实施例中,应理解,上述各过程的序号的大小并不意味着执行顺序的必然先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可位于一个地方,或者也可以分布到多个网络单元上。可根据实际的需要选择其中的部分或全部单元来实现本实施例方案的目的。
另外,在本申请各实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
以上所述实施例的各技术特征可以进行任意的组合,为使描述简洁,未对上述实施例中的各个技术特征所有可能的组合都进行描述,然而,只要这些技术特征的组合不存在矛盾,都应当认为是本说明书记载的范围。
以上对本申请实施例公开的一种音频处理方法、装置、芯片、电子设备及存储介质进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想。同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种音频处理方法,其特征在于,应用于电子设备,所述方法包括:
    基于第一采样位宽对音源数据进行解码处理,得到第一解码数据;
    基于第二采样位宽对所述第一解码数据进行编码处理,得到音频编码数据包;
    其中,所述第一采样位宽对应于音频编解码器所设置的采样位宽,所述第二采样位宽对应于所述音源数据的原始采样位宽。
  2. 根据权利要求1所述的方法,其特征在于,所述第二采样位宽小于所述第一采样位宽。
  3. 根据权利要求1所述的方法,其特征在于,所述原始采样位宽小于所述音频编解码器所设置的采样位宽。
  4. 根据权利要求1所述的方法,其特征在于,所述音频编码数据包包括包头部及数据部,所述包头部包括第一位宽字段及第二位宽字段;
    所述第一位宽字段用于指示所述第一采样位宽,所述第二位宽字段用于指示所述第二采样位宽;所述数据部用于存储音频编码数据。
  5. 根据权利要求4所述的方法,其特征在于,所述包头部还包括判断字段,所述判断字段用于表征所述第一位宽字段与所述第二位宽字段是否一致。
  6. 根据权利要求4或5所述的方法,其特征在于,所述第一位宽字段存储于所述包头部的第一数据段,所述第二位宽字段存储于所述包头部的第二数据段,所述第一数据段位于所述第二数据段之前。
  7. 根据权利要求6所述的方法,其特征在于,所述第二位宽字段存储于所述包头部的保留字段。
  8. 根据权利要求1所述的方法,其特征在于,所述基于第二采样位宽对所述第一解码数据进行编码处理,包括:
    对采用二进制形式的所述第一解码数据从低位进行裁剪,以保留与所述第二采样位宽对应的所述第一解码数据的部分。
  9. 根据权利要求1~5、7~8任一所述的方法,其特征在于,所述方法还包括:
    将所述音频编码数据包经由无线通信信道发送至音频输出设备;
    所述无线通信信道包括蓝牙通信信道,所述蓝牙通信信道包括广播信道和/或数据信道。
  10. 一种芯片,其特征在于,包括处理器和通信单元;
    所述处理器配置成:
    基于第一采样位宽对音源数据进行解码处理,得到第一解码数据;
    基于第二采样位宽对所述第一解码数据进行编码处理,得到音频编码数据包;其中,所述第一采样位宽对应于音频编解码器所设置的采样位宽,所述第二采样位宽对应于所述音源数据的原始采样位宽;
    所述通信单元配置成:
    将所述音频编码数据包经由无线通信信道发送至音频输出设备。
  11. 一种音频处理方法,其特征在于,应用于音频输出设备,所述方法包括:
    获取音频编码数据包;
    基于第二采样位宽对所述音频编码数据包进行解码处理,得到第二解码数据;所述第二采样位宽对应于音源数据的原始采样位宽;
    将所述第二解码数据转换为模拟数据。
  12. 根据权利要求11所述的方法,其特征在于,所述第二采样位宽小于第一采样位宽, 所述第一采样位宽对应于音频编解码器所设置的采样位宽。
  13. 根据权利要求11所述的方法,其特征在于,所述原始采样位宽小于音频编解码器所设置的采样位宽。
  14. 根据权利要求11所述的方法,其特征在于,所述基于第二采样位宽对所述音频编码数据包进行解码处理,包括:
    对所述音频编码数据包进行解包,以提取所述音频编码数据包中的包头部及数据部;所述包头部包括第一位宽字段及第二位宽字段,所述第一位宽字段用于指示所述第一采样位宽,所述第二位宽字段用于指示所述第二采样位宽;
    基于所述第二位宽字段指示的所述第二采样位宽,对所述数据部中存储的音频编码数据进行解码处理。
  15. 根据权利要求11~14任一所述的方法,其特征在于,所述获取音频编码数据包,包括:
    经由无线通信信道获取音频编码数据包;
    所述无线通信信道包括蓝牙通信信道,所述蓝牙通信信道包括广播信道和/或数据信道。
  16. 一种芯片,其特征在于,包括处理器及通信单元;
    所述通信单元,配置成:
    经由无线通信信道获取音频编码数据包;
    所述处理器配置成:
    基于第二采样位宽对所述音频编码数据包进行解码处理,得到第二解码数据;所述第二采样位宽对应于音源数据的原始采样位宽。
  17. 一种音频处理装置,其特征在于,应用于电子设备,所述装置包括:
    解码模块,用于基于第一采样位宽对音源数据进行解码处理,得到第一解码数据;
    编码模块,用于基于第二采样位宽对所述第一解码数据进行编码处理,得到音频编码数据包;
    其中,所述第一采样位宽对应于音频编解码器所设置的采样位宽,所述第二采样位宽对应于所述音源数据的原始采样位宽。
  18. 一种音频处理装置,其特征在于,应用于音频输出设备,所述装置包括:
    获取模块,用于获取音频编码数据包;
    解码模块,用于基于第二采样位宽对所述音频编码数据包进行解码处理,得到第二解码数据;所述第二采样位宽对应于音源数据的原始采样位宽;
    转换模块,用于将所述第二解码数据转换为模拟数据。
  19. 一种电子设备,其特征在于,包括存储器及处理器,所述存储器中存储有计算机程序,所述计算机程序被所述处理器执行时,使得所述处理器实现如权利要求1~9或11~15任一项所述的方法。
  20. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1~9或11~15任一项所述的方法。
PCT/CN2023/087246 2022-07-01 2023-04-10 音频处理方法、装置、芯片、电子设备及存储介质 WO2024001405A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210773940.9A CN115206352A (zh) 2022-07-01 2022-07-01 音频处理方法、装置、芯片、电子设备及存储介质
CN202210773940.9 2022-07-01

Publications (1)

Publication Number Publication Date
WO2024001405A1 true WO2024001405A1 (zh) 2024-01-04

Family

ID=83578457

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/087246 WO2024001405A1 (zh) 2022-07-01 2023-04-10 音频处理方法、装置、芯片、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN115206352A (zh)
WO (1) WO2024001405A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115206352A (zh) * 2022-07-01 2022-10-18 哲库科技(上海)有限公司 音频处理方法、装置、芯片、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6208276B1 (en) * 1998-12-30 2001-03-27 At&T Corporation Method and apparatus for sample rate pre- and post-processing to achieve maximal coding gain for transform-based audio encoding and decoding
CN201707924U (zh) * 2010-04-12 2011-01-12 佛山市智邦电子科技有限公司 无损音频播放器
CN110335615A (zh) * 2019-05-05 2019-10-15 北京字节跳动网络技术有限公司 音频数据的处理方法、装置、电子设备及存储介质
CN111402908A (zh) * 2020-03-30 2020-07-10 Oppo广东移动通信有限公司 语音处理方法、装置、电子设备和存储介质
CN115206352A (zh) * 2022-07-01 2022-10-18 哲库科技(上海)有限公司 音频处理方法、装置、芯片、电子设备及存储介质


Also Published As

Publication number Publication date
CN115206352A (zh) 2022-10-18

Similar Documents

Publication Publication Date Title
US11109138B2 (en) Data transmission method and system, and bluetooth headphone
JP7053687B2 (ja) ラストマイル等化
KR101341742B1 (ko) 오디오 프로세싱 성능을 갖는 디바이스의 동적 프러버저닝
KR102569374B1 (ko) 블루투스 장치 동작 방법
WO2024016758A1 (zh) 音频数据传输方法、装置、芯片、电子设备及存储介质
US11595800B2 (en) Bluetooth audio streaming passthrough
WO2024001405A1 (zh) 音频处理方法、装置、芯片、电子设备及存储介质
CN113689864B (zh) 一种音频数据处理方法、装置及存储介质
WO2022022293A1 (zh) 音频信号渲染方法和装置
WO2021160040A1 (zh) 音频传输方法及电子设备
WO2022100414A1 (zh) 音频编解码方法和装置
TW200816655A (en) Method and apparatus for an audio signal processing
CN103237259A (zh) 一种视频声道处理装置及方法
WO2024001398A1 (zh) 音频数据传输方法、装置、芯片、电子设备及存储介质
WO2024001447A1 (zh) 音频处理方法、芯片、装置、设备和计算机可读存储介质
CN110880949A (zh) 一种蓝牙通信方法、装置和系统
CN111385780A (zh) 一种蓝牙音频信号传输方法和装置
CN117062034A (zh) 蓝牙数据的传输方法、装置、设备及存储介质
CN111225102A (zh) 一种蓝牙音频信号传输方法和装置
WO2023124587A1 (zh) 一种媒体文件的传输方法和设备
WO2020232631A1 (zh) 一种语音分频传输方法、源端、播放端、源端电路和播放端电路
CN1738320A (zh) 一种在通话过程中播放音乐的手机及其方法
CN109461451B (zh) 一种基于opus的语音传输方法和设备及系统
CN113347614A (zh) 音频处理设备、系统和方法
CN113838470B (zh) 音频处理方法、装置、电子设备及计算机可读介质及产品

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23829595

Country of ref document: EP

Kind code of ref document: A1