WO2022222713A1 - A codec negotiation and switching method - Google Patents

A codec negotiation and switching method

Info

Publication number
WO2022222713A1
WO2022222713A1 (application PCT/CN2022/083816)
Authority
WO
WIPO (PCT)
Prior art keywords
category
audio
electronic device
encoder
audio data
Prior art date
Application number
PCT/CN2022/083816
Other languages
English (en)
French (fr)
Inventor
提纯利
卢曰万
韦家毅
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to JP2023564200A (publication JP2024515684A)
Priority to EP22790822.5A (publication EP4318467A1)
Publication of WO2022222713A1
Priority to US18/489,217 (publication US20240045643A1)

Classifications

    • G10L19/22 Mode decision, i.e. based on audio signal content versus external parameters
    • G06F3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G10L19/18 Vocoders using multiple modes
    • G10L19/032 Quantisation or dequantisation of spectral components
    • G10L19/167 Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G10L19/24 Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
    • H04W4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • the present application relates to the technical field of audio processing, and in particular, to a codec negotiation and switching method.
  • the electronic device and the audio playback device negotiate the codec type supported by both parties; the electronic device then encodes the audio data with that codec type and sends the encoded audio data to the audio playback device, which receives the encoded audio data and decodes it with the corresponding decoder for playback.
  • when the network of the electronic device becomes poor, or the user selects higher sound quality, the electronic device needs to switch the encoder, and the electronic device and the audio playback device must renegotiate the codec type. During this renegotiation, the electronic device suspends sending audio data to the audio playback device, resulting in interruptions and freezes of the audio and affecting user experience. Therefore, how to switch the codec between the electronic device and the audio playback device rapidly, while keeping audio data flowing during the switch, is an urgent problem to be solved.
  • the present application provides a codec negotiation and switching method in which an electronic device that needs to switch encoders does not re-negotiate the codec type with the audio playback device; instead, it directly selects a codec from one of the previously negotiated categories supported by both parties for audio data transmission. This solves the problem of audio interruption and freezing when switching codecs between the electronic device and the audio playback device, and improves user experience.
  • the present application provides a codec negotiation and switching system comprising an electronic device and an audio playback device. The electronic device is used for: when the first parameter information of the audio data satisfies the first condition, encoding the audio data into first encoded audio data with the first encoder in the first category and sending the first encoded audio data to the audio playback device, where the first category is a codec category that the electronic device determined, before acquiring the audio data, to be shared by the electronic device and the audio playback device; and sending the identification of the first category to the audio playback device. The audio playback device is used for: receiving the identification of the first category sent by the electronic device, and decoding the first encoded audio data into first playback audio data. The electronic device is further configured to: when the second parameter information of the audio data satisfies the second condition, encode the audio data into second encoded audio data with the second encoder in the second category, and send the second encoded audio data to the audio playback device.
  • before transmitting audio data, the electronic device and the audio playback device divide the codecs into a plurality of categories and determine the codec categories shared by both (for example, the first category and the second category). Afterwards, the electronic device acquires the first parameter information of the audio data and, when it satisfies the first condition, selects a codec in the first category from the shared codec categories to transmit the audio data. When the content of the played audio data, the application playing it, a user selection, or network conditions change, the electronic device acquires the second parameter information of the audio data.
  • when the second parameter information of the audio data satisfies the second condition, the electronic device does not need to negotiate the codec with the audio playback device again; it directly selects a codec in the second category from the shared codec categories to transmit the audio data. In this way, when the electronic device needs to switch the encoder, it does not re-negotiate the codec type with the audio playback device but directly selects a codec from one of the previously negotiated categories supported by both parties for audio data transmission. This solves the problem of audio data interruption and freezing when switching codecs, and improves user experience.
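The negotiate-once, switch-without-renegotiating flow described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent: the category names, parameter keys, and the 96 kHz threshold are all hypothetical placeholders.

```python
# Sketch of the negotiation-and-switch flow: the shared codec categories are
# agreed once, before any audio is sent; later switches just pick a codec
# from an already-shared category, with no renegotiation round-trip.

def negotiate_shared_categories(device_categories, sink_categories):
    """Intersect the codec categories supported by both sides (done once)."""
    return device_categories & sink_categories

def select_category(params, shared):
    """Pick a category from the shared set based on the current audio
    parameter information; the threshold here is a placeholder."""
    if params["sample_rate"] >= 96_000 and "hd" in shared:
        return "hd"        # first category: high-definition sound quality
    return "standard"      # second category: standard sound quality

shared = negotiate_shared_categories({"hd", "standard"}, {"hd", "standard", "low"})
assert select_category({"sample_rate": 96_000}, shared) == "hd"
# Later, when conditions change, switching needs no renegotiation:
assert select_category({"sample_rate": 48_000}, shared) == "standard"
```

Because `shared` is computed once up front, a codec switch is a pure local decision on the sender, which is exactly why the audio stream never has to pause for a new capability exchange.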
  • the encoders in the first category include at least the first encoder, and the encoders in the second category include at least the second encoder.
  • the electronic device is further configured to receive the identifier of the first category and the identifier of the second category sent by the audio playback device, where the decoders in the first category include at least the first decoder and the decoders in the second category include at least the second decoder.
  • the audio playback device classifies the decoders into a plurality of categories according to the codec classification criteria.
  • among the multiple categories, the categories containing at least one decoder identifier are the first category and the second category.
  • the decoders classified into the first category include at least the first decoder and may also include other decoders, such as the third decoder; the decoders classified into the second category include at least the second decoder and may also include other decoders, such as the fourth decoder.
  • the electronic device is further used to: confirm that the categories shared by the electronic device and the audio playback device are the first category and the second category, and send the identifier of the first category and the identifier of the second category to the audio playback device; the audio playback device is further used to receive the identifier of the first category and the identifier of the second category sent by the electronic device.
  • the electronic device sends the codec classes supported by both parties to the audio playback device so that the audio playback device knows the codec classes supported by both parties.
  • the electronic device may not need to send the identifier of the first category and the identifier of the second category to the audio playback device.
  • when transmitting audio data, the electronic device only needs to send the audio playback device the identifier of the codec category it is using, selected according to the parameter information of the audio data.
  • when the encoders in the first category include only the first encoder and the decoders in the first category include only the first decoder, the electronic device is further configured to: when the first parameter information satisfies the first condition, encode the audio data into the first encoded audio data using the first encoder in the first category and send the first encoded audio data to the audio playback device; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category.
  • the codec categories supported by both the electronic device and the audio playback device are the first category and the second category. When the first category includes only one encoder and one decoder, the electronic device uses that encoder and decoder as the default encoder and decoder of the category. Afterwards, when the electronic device and the audio playback device use a codec in the first category for audio data transmission, the electronic device encodes the audio data into the first encoded audio data with the default encoder in the first category and sends it to the audio playback device, and the audio playback device decodes the first encoded audio data into the first playback audio data with the default decoder in the first category.
  • when the encoders in the first category further include a third encoder and the decoders in the first category further include a third decoder, the electronic device is further configured to: when the first parameter information satisfies the first condition, encode the audio data into the first encoded audio data using the first encoder in the first category and send the first encoded audio data to the audio playback device, where the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category, where the power consumption of the first decoder is lower than that of the third decoder, or the priority or power of the first decoder is higher than that of the third decoder.
  • the codec categories supported by both the electronic device and the audio playback device are the first category and the second category.
  • when a category contains multiple encoders, the electronic device determines one encoder from them as the default encoder according to a preset rule, and determines one decoder from multiple decoders as the default decoder according to the preset rule. The preset rules can be priority rules, efficiency rules, power consumption rules, and so on. It should be noted that the electronic device may determine both defaults itself, or the roles may be split: the electronic device only needs to determine one encoder from multiple encoders as the default encoder according to the preset rule, while the audio playback device determines one decoder from multiple decoders as the default decoder according to the preset rule. This application is not limited here.
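A preset default-selection rule like those above (priority, efficiency, or power consumption) amounts to picking an extremum over the category's codec list. The sketch below is illustrative only; the codec names, `power_mw` and `priority` fields, and their values are hypothetical.

```python
# Hypothetical sketch: choosing a default encoder inside a category using a
# preset rule. A power-consumption rule takes the lowest-power codec; a
# priority rule takes the highest-priority one.

def default_encoder(encoders, rule="power"):
    if rule == "power":
        return min(encoders, key=lambda e: e["power_mw"])
    if rule == "priority":
        return max(encoders, key=lambda e: e["priority"])
    raise ValueError(f"unknown preset rule: {rule}")

category_hd = [
    {"name": "codec_a", "power_mw": 120, "priority": 3},
    {"name": "codec_b", "power_mw": 80,  "priority": 1},
]
assert default_encoder(category_hd)["name"] == "codec_b"              # power rule
assert default_encoder(category_hd, "priority")["name"] == "codec_a"  # priority rule
```

Since the rule is deterministic over the shared codec list, the sender and the sink can each evaluate it independently and still agree on the same default without any extra signaling.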
  • when the codec category shared by the electronic device and the audio playback device includes only the first category, the electronic device is further configured to: when the second parameter information satisfies the second condition, encode the audio data into third encoded audio data through the first encoder in the first category and send the third encoded audio data to the audio playback device; the audio playback device is further configured to decode the third encoded audio data into third playback audio data through the first decoder in the first category.
  • when the electronic device and the audio playback device support only one category of codec, and the parameter information of the audio data changes from the first parameter information to the second parameter information with the second parameter information satisfying the second condition, the electronic device cannot switch the codec and still uses the default codec in the first category to transmit audio data with the audio playback device.
  • the codec category shared by the electronic device and the audio playback device includes only the first category when: the electronic device has not received the identifier of the second category sent by the audio playback device; or the number of the electronic device's encoders falling into the second category is zero.
  • when the encoders in the first category include only the first encoder and the decoders in the first category include only the first decoder, and the first parameter information satisfies the first condition, the electronic device is further configured to encode the audio data into the first encoded audio data with the first encoder in the first category and send the first encoded audio data to the audio playback device; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category.
  • the codec category supported by both the electronic device and the audio playback device includes only the first category. When the first category includes only one encoder and one decoder, the electronic device uses that encoder and decoder as the default encoder and decoder. Afterwards, when the electronic device and the audio playback device use a codec in the first category for audio data transmission, the electronic device encodes the audio data into the first encoded audio data with the default encoder in the first category and sends it to the audio playback device, and the audio playback device decodes the first encoded audio data into the first playback audio data with the default decoder in the first category.
  • when the encoders in the first category further include a third encoder and the decoders in the first category further include a third decoder, the electronic device is further configured to encode the audio data into the first encoded audio data with the first encoder in the first category and send the first encoded audio data to the audio playback device, where the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category, where the power consumption of the first decoder is lower than that of the third decoder, or the priority or power of the first decoder is higher than that of the third decoder.
  • the codec category supported by both the electronic device and the audio playback device only includes the first category.
  • when a category contains multiple encoders, the electronic device determines one encoder from them as the default encoder according to a preset rule, and determines one decoder from multiple decoders as the default decoder according to the preset rule. The preset rules can be priority rules, efficiency rules, power consumption rules, and so on. It should be noted that the electronic device only needs to determine one encoder from multiple encoders as the default encoder according to the preset rule, while the audio playback device determines one decoder from multiple decoders as the default decoder according to the preset rule. This application is not limited here.
  • the codecs in the first category are high-definition sound quality codecs and the codecs in the second category are standard sound quality codecs; or, the codecs in the first category are standard sound quality codecs and the codecs in the second category are high-definition sound quality codecs.
  • before the electronic device acquires the audio data, the electronic device is further configured to: classify the first encoder into the first category based on the parameter information of the first encoder and the codec classification standard, and classify the second encoder into the second category based on the parameter information of the second encoder and the codec classification standard, where the parameter information of the first encoder and of the second encoder includes one or more of the sampling rate, code rate, quantization bit depth, number of channels, and audio stream format; the audio playback device is further configured to: classify the first decoder into the first category based on the parameter information of the first decoder and the codec classification standard, and classify the second decoder into the second category based on the parameter information of the second decoder and the codec classification standard, where the parameter information of the first decoder and of the second decoder includes one or more of the sampling rate, code rate, quantization bit depth, number of channels, and audio stream format; the codec classification standard includes the mapping relationship between the parameter information of a codec and the codec categories.
  • the sampling rate of the codecs in the first category is greater than or equal to the target sampling rate, and the sampling rate of the codecs in the second category is smaller than the target sampling rate; and/or, the code rate of the codecs in the first category is greater than or equal to the target code rate, and the code rate of the codecs in the second category is less than the target code rate; and/or, the number of channels of the codecs in the first category is greater than or equal to the target number of channels, and the number of channels of the codecs in the second category is less than the target number of channels; and/or, the quantization bit depth of the codecs in the first category is greater than or equal to the target quantization bit depth, and the quantization bit depth of the codecs in the second category is smaller than the target quantization bit depth; and/or, the audio stream format of the codecs in the first category is the target audio stream format, and the audio stream format of the codecs in the second category is not the target audio stream format.
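The classification standard above reduces to a threshold test on each codec's parameters. The following sketch is an assumed illustration of such a mapping; the target values and dictionary keys are placeholders, not figures from the patent.

```python
# Sketch of the codec classification standard: a codec whose parameters all
# meet or exceed the targets falls into the first (e.g. high-definition)
# category, otherwise into the second (e.g. standard) category.

TARGET = {"sample_rate": 96_000, "bit_rate": 900_000,
          "bit_depth": 24, "channels": 2}   # illustrative targets only

def classify(codec_params):
    meets_all = all(codec_params.get(k, 0) >= v for k, v in TARGET.items())
    return "first" if meets_all else "second"

assert classify({"sample_rate": 96_000, "bit_rate": 990_000,
                 "bit_depth": 24, "channels": 2}) == "first"
assert classify({"sample_rate": 44_100, "bit_rate": 328_000,
                 "bit_depth": 16, "channels": 2}) == "second"
```

The same predicate, applied to the parameter information of the audio data instead of a codec, would implement the first/second condition test described in the next paragraph.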
  • the parameter types in the first parameter information, in the parameter information of the first encoder, in the parameter information of the first decoder, in the second parameter information, in the parameter information of the second encoder, and in the parameter information of the second decoder are the same. That the first parameter information satisfies the first condition and the second parameter information satisfies the second condition specifically includes: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or, the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or, the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or, the number of channels in the first parameter information is greater than or equal to the target number of channels, and the number of channels in the second parameter information is less than the target number of channels.
  • the electronic device is further configured to: encode the first audio frame in the audio data into a first encoded audio frame with the first encoder and send the first encoded audio frame to the audio playback device; encode the same first audio frame into a second encoded audio frame with the second encoder and send the second encoded audio frame to the audio playback device; and encode the second audio frame in the audio data into an Nth encoded audio frame with the second encoder and send the Nth encoded audio frame to the audio playback device. The audio playback device is further configured to: decode the first encoded audio frame into a first decoded audio frame with the first decoder, decode the second encoded audio frame into a second decoded audio frame with the second decoder, and decode the Nth encoded audio frame into an Nth playback audio frame with the second decoder; the first decoded audio frame and the second decoded audio frame are smoothed to obtain the first playback audio frame. The audio playback device first plays the first playback audio frame and then plays the Nth playback audio frame.
  • the switching between the first encoder and the second encoder needs to be completed within one frame, this frame being the first audio frame; the audio playback device smooths the first audio frame before playing it, to prevent stuttering when the codec is switched and to achieve a smooth transition. The audio frames after the first audio frame, such as the adjacent second audio frame, do not need to be smoothed and are directly decoded by the second decoder and played.
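The smoothing of the switch frame can be illustrated as a cross-fade between the two decoders' outputs for the same audio frame. This is a hypothetical sketch of one way such smoothing could work (a linear fade over plain sample lists); the patent does not specify the smoothing function.

```python
# Sketch of single-frame smoothing: during the switch frame the sink decodes
# the same audio frame with both decoders and cross-fades between the two
# results, so the codec change is not audible as a discontinuity.

def smooth(old_decoded, new_decoded):
    """Linearly cross-fade from the old decoder's output to the new
    decoder's output across one frame of samples."""
    n = len(old_decoded)
    out = []
    for i, (a, b) in enumerate(zip(old_decoded, new_decoded)):
        w = i / (n - 1) if n > 1 else 1.0   # 0.0 -> old codec, 1.0 -> new codec
        out.append((1.0 - w) * a + w * b)
    return out

frame_old = [1.0, 1.0, 1.0, 1.0, 1.0]   # toy samples from the first decoder
frame_new = [0.0, 0.0, 0.0, 0.0, 0.0]   # toy samples from the second decoder
blended = smooth(frame_old, frame_new)
assert blended[0] == 1.0 and blended[-1] == 0.0   # starts on old, ends on new
```

Because the frame ends fully on the new decoder's output, every subsequent frame can be decoded by the second decoder alone with no further blending, matching the "no smoothing after the switch frame" behavior described above.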
  • the first audio frame to the Dth audio frame are encoded by the first encoder to obtain the third encoded audio frame to the D+2th encoded audio frame; the Dth audio frame in the audio data is also encoded by the second encoder to obtain the D+3th encoded audio frame, and the D+1th audio frame in the audio data is encoded by the second encoder to obtain the Nth encoded audio frame.
  • the switching between the first encoder and the second encoder needs to be completed over multiple frames (D frames), so that during the switching process audio data encoded by the first encoder and audio data encoded by the second encoder both arrive at the audio playback device and are decoded. If the encoder switching needs to be completed within D frames, the audio playback device directly decodes and plays the first audio frame to the D-1th audio frame, and smooths the Dth audio frame before playing it, so that there is no stuttering when the codec is switched and a smooth transition is achieved. The audio frames after the Dth audio frame, such as the adjacent Nth audio frame, do not need smoothing and are directly decoded by the second decoder and played.
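One plausible reading of this multi-frame scheme is a per-frame schedule: frames before the switch frame use the old encoder only, the switch frame D is encoded by both encoders (so the sink can smooth between them), and later frames use the new encoder only. The function and tuple labels below are hypothetical illustration, not terminology from the patent.

```python
# Hypothetical D-frame switch schedule: which encoder(s) produce each frame.

def switch_schedule(total_frames, d):
    """Frames 1..D-1: old encoder only; frame D: encoded by both encoders so
    the sink can smooth between the two decoded results; frames > D: new
    encoder only."""
    plan = {}
    for f in range(1, total_frames + 1):
        if f < d:
            plan[f] = ("old",)
        elif f == d:
            plan[f] = ("old", "new")    # dual-encoded transition frame
        else:
            plan[f] = ("new",)
    return plan

plan = switch_schedule(5, 3)
assert plan[1] == ("old",)
assert plan[3] == ("old", "new")
assert plan[5] == ("new",)
```

Spreading the switch over D frames gives the second encoder's stream time to reach the sink before the first encoder stops, which is what keeps the audio uninterrupted during the handover.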
  • the present application provides another codec negotiation and switching method.
  • the method includes: when the first parameter information of the audio data satisfies the first condition, the electronic device encodes the audio data into the first encoded audio data according to the first encoder in the first category and sends the first encoded audio data to the audio playback device, where the first category is a codec category that the electronic device determined, before acquiring the audio data, to be shared by the electronic device and the audio playback device; when the second parameter information satisfies the second condition, the electronic device encodes the audio data into the second encoded audio data according to the second encoder in the second category and sends the second encoded audio data to the audio playback device, where the second category is likewise a codec category that the electronic device determined, before acquiring the audio data, to be shared by the electronic device and the audio playback device; the first condition is different from the second condition, and the first category is different from the second category.
  • before transmitting the audio data, the electronic device and the audio playback device divide the codecs into multiple categories and determine the codec categories shared by both (for example, the first category and the second category). Afterwards, the electronic device acquires the first parameter information of the audio data and, when it satisfies the first condition, selects a codec in the first category from the shared codec categories to transmit the audio data. When the content of the played audio data, the application playing it, a user selection, or network conditions change, the electronic device acquires the second parameter information of the audio data.
  • when the second parameter information of the audio data satisfies the second condition, the electronic device does not need to negotiate the codec with the audio playback device again; it directly selects a codec in the second category from the shared codec categories to transmit the audio data. In this way, when the electronic device needs to switch the encoder, it does not renegotiate the codec type with the audio playback device but directly selects a codec from one of the previously negotiated categories supported by both parties for audio data transmission. This solves the problem of audio data interruption and freezing when switching codecs between electronic devices and audio playback devices, and improves user experience.
  • the encoders in the first category include at least the first encoder, and the encoders in the second category include at least the second encoder.
  • the method further includes: the electronic device receives the identifier of the first category and the identifier of the second category sent by the audio playback device, where the decoders in the first category include at least the first decoder and the decoders in the second category include at least the second decoder.
  • the audio playback device classifies the decoders into a plurality of categories according to the codec classification criteria.
  • among the multiple categories into which the decoders are divided, the categories whose number of decoder identifiers is greater than or equal to 1 are the first category and the second category.
  • the decoders classified into the first category include at least the first decoder and may also include other decoders, such as a third decoder; the decoders classified into the second category include at least the second decoder and may also include other decoders, such as a fourth decoder.
  • the method further includes: the electronic device confirms that the categories shared by the electronic device and the audio playback device are the first category and the second category; the electronic device sends the identifier of the first category and the identifier of the second category to the audio playback device, so that the audio playback device knows the codec categories supported by both parties.
  • the encoders in the first category include only the first encoder; after the electronic device confirms that the categories shared by the electronic device and the audio playback device are the first category and the second category, the method further includes: when the first parameter information satisfies the first condition, the electronic device encodes the audio data into the first encoded audio data through the first encoder in the first category, and sends the first encoded audio data to the audio playback device.
  • the codec categories supported by both the electronic device and the audio playback device are the first category and the second category.
  • when a category includes only one encoder and one decoder, the electronic device uses the encoder in this category as the default encoder and the decoder in this category as the default decoder.
  • the audio data is encoded into the first encoded audio data by the default encoder in the first category, and then the electronic device sends the first encoded audio data to the audio playback device.
  • the encoders in the first category further include a third encoder; after the electronic device confirms that the categories shared by the electronic device and the audio playback device are the first category and the second category, the method further includes: when the first parameter information satisfies the first condition, the electronic device encodes the audio data into the first encoded audio data through the first encoder in the first category, and sends the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority or efficiency of the first encoder is higher than that of the third encoder.
  • the codec categories supported by both the electronic device and the audio playback device are the first category and the second category.
  • the electronic device determines one encoder from the multiple encoders as the default encoder according to preset rules.
  • the preset rules can be priority rules, efficiency rules, power consumption rules, and so on.
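One way to realize such preset rules is a keyed selection over each encoder's attributes. The encoder descriptors, field names, and values below are assumptions made for illustration:

```python
# Hypothetical encoder descriptors for one category; fields are illustrative.
ENCODERS = [
    {"id": "enc_a", "priority": 2, "efficiency": 0.8, "power_mw": 35},
    {"id": "enc_b", "priority": 1, "efficiency": 0.9, "power_mw": 20},
]

def pick_default(encoders, rule="priority"):
    """Choose the default encoder of a category by a preset rule:
    highest priority, highest efficiency, or lowest power consumption."""
    if rule == "priority":
        return max(encoders, key=lambda e: e["priority"])["id"]
    if rule == "efficiency":
        return max(encoders, key=lambda e: e["efficiency"])["id"]
    if rule == "power":
        return min(encoders, key=lambda e: e["power_mw"])["id"]
    raise ValueError(f"unknown rule: {rule}")

print(pick_default(ENCODERS, "priority"))  # enc_a
print(pick_default(ENCODERS, "power"))     # enc_b
```

Different rules can yield different defaults for the same category, which is why the embodiment leaves the rule choice to the device.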
  • the method further includes: when the second parameter information satisfies the second condition, the electronic device encodes the audio data into third encoded audio data through the first encoder in the first category, and sends the third encoded audio data to the audio playback device.
  • when the electronic device and the audio playback device support only one codec category, and the parameter information of the audio data changes from the first parameter information to the second parameter information with the second parameter information satisfying the second condition, the electronic device cannot switch the codec; the electronic device still uses the default codec in the first category to transmit audio data with the audio playback device.
  • when the electronic device does not receive the identifier of the second category sent by the audio playback device, or the number of encoders that the electronic device classifies into the second category is 0, the codec category shared by the electronic device and the audio playback device includes only the first category.
  • the encoders in the first category include only the first encoder, and the decoders in the first category include only the first decoder; when the first parameter information satisfies the first condition, the electronic device encodes the audio data into first encoded audio data with the first encoder in the first category, and sends the first encoded audio data to the audio playback device.
  • the codec category supported by both the electronic device and the audio playback device only includes the first category, and when the first category includes only one encoder, the electronic device uses one encoder in the category as the default encoder.
  • the audio data is encoded into the first encoded audio data by the default encoder in the first category, and then the electronic device sends the first encoded audio data to the audio playback device.
  • the encoder in the first category further includes a third encoder
  • the decoder in the first category further includes a third decoder
  • the electronic device encodes the audio data into the first encoded audio data with the first encoder in the first category, and sends the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder; alternatively, the first encoder has a higher priority or efficiency than the third encoder.
  • the codec category supported by both the electronic device and the audio playback device only includes the first category.
  • the electronic device determines one encoder from the multiple encoders according to preset rules as the default encoder of the first category.
  • the preset rules can be priority rules, efficiency rules, power consumption rules, and so on.
  • the codecs in the first category are high-definition sound quality codecs, and the codecs in the second category are standard sound quality codecs; alternatively, the codecs in the first category are standard sound quality codecs, and the codecs in the second category are high-definition sound quality codecs.
  • the method further includes: the electronic device classifies the first encoder into the first category based on the parameter information of the first encoder and the codec classification standard, and classifies the second encoder into the second category based on the parameter information of the second encoder and the codec classification standard; wherein the parameter information of the first encoder and the parameter information of the second encoder each include one or more of the sampling rate, code rate, quantization bit depth, number of channels, and audio stream format; the codec classification standard includes the mapping relationship between codec categories and codec parameter information. It should be noted that the parameter types in the parameter information of the first encoder and the parameter information of the second encoder are the same.
  • the sampling rate of the codecs in the first category is greater than or equal to the target sampling rate, and the sampling rate of the codecs in the second category is less than the target sampling rate; and/or, the code rate of the codecs in the first category is greater than or equal to the target code rate, and the code rate of the codecs in the second category is less than the target code rate; and/or, the number of channels of the codecs in the first category is greater than or equal to the target number of channels, and the number of channels of the codecs in the second category is less than the target number of channels; and/or, the quantization bit depth of the codecs in the first category is greater than or equal to the target quantization bit depth, and the quantization bit depth of the codecs in the second category is less than the target quantization bit depth; and/or, the audio stream format of the codecs in the first category is the target audio stream format, and the audio stream format of the codecs in the second category is not the target audio stream format.
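A minimal sketch of such a classification standard, assuming illustrative target values (48 kHz sampling rate, 560 kbps code rate, 24-bit depth, 2 channels) that are not specified by the embodiment:

```python
# Illustrative targets; the embodiment does not fix concrete values.
TARGETS = {"sample_rate": 48000, "bitrate_kbps": 560,
           "bit_depth": 24, "channels": 2}

def classify(codec_params, targets=TARGETS):
    """Map a codec into the first (HD) category when every parameter meets
    its target, otherwise into the second (standard) category."""
    hd = (codec_params["sample_rate"] >= targets["sample_rate"]
          and codec_params["bitrate_kbps"] >= targets["bitrate_kbps"]
          and codec_params["bit_depth"] >= targets["bit_depth"]
          and codec_params["channels"] >= targets["channels"])
    return "first" if hd else "second"

print(classify({"sample_rate": 96000, "bitrate_kbps": 990,
                "bit_depth": 24, "channels": 2}))   # first
print(classify({"sample_rate": 44100, "bitrate_kbps": 328,
                "bit_depth": 16, "channels": 2}))   # second
```

Because both endpoints apply the same mapping before connecting, a category identifier alone is enough to agree on a quality level later.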
  • the parameter types in the first parameter information, the parameter types in the parameter information of the first encoder, the parameter types in the parameter information of the first decoder, the parameter types in the second parameter information, the parameter types in the parameter information of the second encoder, and the parameter types in the parameter information of the second decoder are the same. That the first parameter information satisfies the first condition and the second parameter information satisfies the second condition specifically includes: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or, the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or, the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or, the number of channels in the first parameter information is greater than or equal to the target number of channels, and the number of channels in the second parameter information is less than the target number of channels.
  • the present application provides an electronic device, comprising one or more processors, one or more memories, and one or more encoders; the one or more memories and the one or more encoders are coupled to the one or more processors, and the one or more memories are used to store computer program code, the computer program code including computer instructions; the one or more processors invoke the computer instructions to cause the electronic device to perform a codec negotiation and switching method in any of the possible implementations of the second aspect.
  • the present application provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and the computer-executable instructions, when invoked by a computer, are used to cause the computer to execute a codec negotiation and switching method provided in any one of the possible implementations of the second aspect above.
  • the present application provides a computer program product comprising instructions, which, when the computer program product is run on a computer, causes the computer to execute a codec negotiation and switching method provided in any of the possible implementations of the second aspect above.
  • FIG. 1 is a schematic diagram of a process of transmitting audio data between an electronic device and an audio playback device according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a system provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a system for networking transmission provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another process of transmitting audio data between the electronic device 100 and the audio playback device 200 according to an embodiment of the present application;
  • FIG. 5 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
  • FIG. 6 is a software structural block diagram of an electronic device 100 (eg, a mobile phone) provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the hardware structure of an audio playback device 200 provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a codec category that the electronic device 100 and the audio playback device 200 negotiate and share in common according to an embodiment of the present application;
  • FIG. 9A-9C are a group of UI diagrams of establishing a communication connection between the electronic device 100 and the audio playback device 200 through Bluetooth according to an embodiment of the present application;
  • FIG. 10A-10D are another group of UI diagrams provided by an embodiment of the present application.
  • The terms "first" and "second" are used for descriptive purposes only, and should not be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
  • the term "user interface (UI)" in the description, claims, and drawings of this application is a medium interface for interaction and information exchange between an application program or an operating system and a user; it realizes the conversion between the internal form of information and a form acceptable to the user.
  • the user interface of an application is source code written in a specific computer language, such as Java or extensible markup language (XML).
  • the interface source code is parsed and rendered on the terminal device, and finally presented as content that the user can recognize.
  • Controls also known as widgets, are the basic elements of the user interface. Typical controls include toolbars, menu bars, input boxes, buttons, scroll bars, images and text.
  • the attributes and content of controls in the interface are defined by tags or nodes.
  • XML specifies the controls contained in the interface through nodes such as <Textview>, <ImgView>, and <VideoView>.
  • a node corresponds to a control or property in the interface, and the node is presented as user-visible content after parsing and rendering.
  • applications such as hybrid applications, often contain web pages in their interface.
  • a web page also known as a page, can be understood as a special control embedded in an application program interface.
  • a web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), and JavaScript (JS).
  • the source code of the web page can be loaded and displayed as user-identifiable content by a browser or a web page display component similar in function to a browser.
  • the specific content contained in a web page is also defined by tags or nodes in the source code of the web page. For example, HTML defines the elements and attributes of web pages through <p>, <img>, <video>, and <canvas>.
  • GUI refers to a user interface related to computer operations that is displayed graphically. It can be an interface element such as a window, a control, etc. displayed in the display screen of the electronic device.
  • the electronic device sends audio data to the audio playback device, and the audio playback device plays the audio data sent by the electronic device.
  • after the electronic device negotiates the codec type with the audio playback device, the electronic device selects a codec supported by both parties to encode the audio data, and sends the encoded audio data to the audio playback device.
  • FIG. 1 exemplarily shows a schematic diagram of a process of transmitting audio data between an electronic device and an audio playback device.
  • the electronic device may be an audio signal source end (Source, SRS), and the audio playback device may be an audio signal sink end (Sink, SNK).
  • the electronic device includes an audio data acquisition unit, an audio stream decoding unit, a sound mixing rendering unit, a wireless audio coding unit, a capability negotiation unit and a wireless transmission unit.
  • the audio playback device includes a wireless transmission unit, a wireless audio decoding unit, an audio power amplifier unit, an audio playback unit and a capability negotiation unit.
  • the electronic device and the audio playback device negotiate a type of codec supported by both parties for data transmission. Specifically, after the electronic device establishes a communication connection with the audio playback device, the capability negotiation unit of the audio playback device sends all decoder identifiers and the capabilities of all decoders to the wireless transmission unit on the audio playback device side. A decoder identifier is the number of a decoder: the audio playback device can find the decoder corresponding to a decoder identifier and obtain the capability of that decoder. The wireless transmission unit on the audio playback device side sends all decoder identifiers and the capabilities of all decoders to the wireless transmission unit on the electronic device side, and the wireless transmission unit on the electronic device side forwards them to the capability negotiation unit on the electronic device side.
  • The capability negotiation unit on the electronic device side obtains the identifiers and capabilities of all encoders in the electronic device, where an encoder identifier is the serial number of an encoder: the electronic device can find the encoder corresponding to an encoder identifier and obtain the capability of that encoder.
  • the capability negotiation unit on the electronic device side obtains, according to the capabilities of all codecs, the codecs with one or more capabilities shared by the electronic device and the audio playback device, where the capabilities of a codec include the supported sampling rate values, quantization bit depth values, code rates, number of channels, and so on.
  • the electronic device will determine a codec identifier from the codecs of one or more capabilities shared by the electronic device and the audio playback device as the default codec according to factors such as the type of audio being played. After that, the electronic device and the audio playback device will transmit the audio data according to the default codec identifier.
  • the capability negotiation unit of the electronic device sends the default encoder identification to the wireless encoding unit.
  • the capability negotiation unit of the electronic device sends the default decoder identifier to the wireless transmission unit on the electronic device side, and the wireless transmission unit on the electronic device side sends the default decoder identifier to the wireless transmission unit on the audio playback device side.
  • the wireless transmission unit on the audio playback device side sends the default decoder identifier to the capability negotiation unit of the audio playback device, and the audio playback device sends the default decoder identifier to the wireless audio decoding unit through the capability negotiation unit.
  • the electronic device and the audio playback device may also not include a capability negotiation unit.
  • the wireless transmission unit in the electronic device and the audio playback device can implement the function of codec capability negotiation.
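The exchange between the two capability negotiation units described above can be sketched as a simple message flow. The message shapes and the codec identifiers (`codec_x`, `codec_y`, `codec_z`) are assumptions for illustration:

```python
# Sketch of the one-shot capability exchange between the two capability
# negotiation units. Codec identifiers and message shapes are illustrative.

def sink_advertise(decoders):
    """Audio playback device side: advertise all decoder identifiers
    together with each decoder's capabilities."""
    return {"type": "decoder_caps", "decoders": decoders}

def source_negotiate(encoder_ids, msg):
    """Electronic device side: intersect local encoder identifiers with the
    advertised decoder identifiers, then pick a default codec identifier."""
    shared = [i for i in encoder_ids if i in msg["decoders"]]
    default = shared[0] if shared else None
    return shared, {"type": "default_codec", "id": default}

advert = sink_advertise({"codec_x": {"sample_rate": 48000},
                         "codec_y": {"sample_rate": 96000}})
shared, reply = source_negotiate(["codec_y", "codec_z"], advert)
print(shared, reply["id"])   # ['codec_y'] codec_y
```

The `default_codec` reply corresponds to the default decoder identifier that the electronic device sends back through the wireless transmission units.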
  • the audio data acquisition unit is used to acquire an audio code stream, which can be a network audio code stream acquired by the electronic device in real time or an audio code stream buffered in the electronic device. After the audio data acquisition unit acquires the audio code stream, it sends the audio code stream to the audio content decoding unit.
  • the audio content decoding unit receives the audio code stream sent by the audio data acquisition unit, decodes the audio code stream, and obtains an uncompressed audio code stream. After that, the audio content decoding unit sends the uncompressed audio stream to the audio mixing and rendering unit.
  • the audio mixing and rendering unit receives the uncompressed audio code stream sent by the audio content decoding unit, mixes and renders the uncompressed audio code stream, and calls the mixed and rendered audio code stream audio data.
  • Mixing combines the uncompressed audio code stream with audio data that carries ambient color, so that the mixed and rendered audio code stream has ambient color. It is understandable that the electronic device can provide multiple channels of audio data for mixing and rendering. For example, in the dubbing and narration of a documentary, the voice actors record the audio code stream of the narration; to make the audio code stream match the pictures of the documentary, the ambient color of the audio code stream needs to be rendered to heighten the mysterious atmosphere.
  • Rendering is the rendering adjustment of the sampling rate, sampling bit depth, and number of channels of audio data.
  • the electronic device may also not include an audio mixing and rendering unit, that is, the electronic device does not need to perform audio mixing and rendering processing on the audio stream. This application is not limited here.
  • the audio mixing and rendering unit sends the audio data to the wireless audio coding unit.
  • the wireless audio encoding unit receives the audio data sent by the audio mixing and rendering unit, and encodes the audio data according to the default encoder identifier, and then the wireless audio encoding unit sends the encoded audio data to the wireless transmission unit.
  • the wireless transmission unit receives the encoded audio data sent by the wireless audio encoding unit, and sends the encoded audio data to the wireless transmission unit of the audio playback device through the transmission channel between the electronic device and the audio playback device.
  • the wireless transmission unit of the audio playback device receives the encoded audio data sent by the wireless transmission unit of the electronic device, and the wireless transmission unit of the audio playback device sends the encoded audio data to the wireless audio decoding unit of the audio playback device.
  • the wireless audio decoding unit of the audio playback device receives the encoded audio data sent by the wireless transmission unit, and performs audio decoding on the encoded audio data according to the default decoder identifier to obtain uncompressed audio data.
  • the wireless audio decoding unit of the audio playback device sends the uncompressed audio data to the audio power amplifier unit of the audio playback device.
  • the audio power amplifier unit of the audio playback device receives the uncompressed audio data, performs digital-to-analog conversion, power amplification and other operations on the uncompressed audio data, and then plays the audio data through the audio playback unit.
  • When the electronic device and the audio playback device initially establish a communication connection, they transmit audio data according to the default codec identifier obtained through negotiation. However, when the network becomes poor, or when the electronic device needs to transmit with higher-definition sound quality, the electronic device needs to switch to an encoder suited to the network conditions or an encoder with higher-definition sound quality. When the electronic device switches the codec, it needs to re-negotiate codec capabilities with the audio playback device; during the renegotiation, the electronic device suspends sending audio data to the audio playback device, so the audio data is interrupted and playback stutters while the codec is being switched, which degrades the user experience.
  • the present application provides a codec negotiation and switching method.
  • the method includes: before the electronic device establishes a communication connection with the audio playback device, the electronic device and the audio playback device divide one or more codecs into multiple categories according to parameters such as the sampling rate, quantization bit depth, code rate, and number of channels.
  • after the electronic device establishes a communication connection with the audio playback device, and before the electronic device sends audio data to the audio playback device, the audio playback device sends to the electronic device the identifiers of the categories in which the number of decoder identifiers is greater than or equal to 1.
  • the electronic device obtains the shared categories from the categories in which the number of decoder identifiers is greater than or equal to 1 and the categories in which the number of encoder identifiers is greater than or equal to 1.
  • the electronic device selects a default codec under one of the categories for audio data transmission according to conditions such as the user's selection, the characteristics of the audio data being played, whether the audio rendering capability of the electronic device is enabled, and the application type.
  • when conditions such as the user's selection, the characteristics of the audio data being played, whether the audio rendering capability of the electronic device is enabled, or the application type change, the electronic device reselects the default encoder under another category for encoding and transmission, and sends the identifier of that category to the audio playback device; the audio playback device then uses the default decoder under that category to decode and play the audio data.
  • when the electronic device needs to switch the encoder, it does not need to re-negotiate the codec type with the audio playback device, which solves the problem of audio data interruption and stuttering when the electronic device and the audio playback device switch codecs, and improves the user experience.
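The renegotiation-free switch can be sketched as a sender that merely tags outgoing frames with a pre-negotiated category identifier. The class, category names, and encoder identifiers below are illustrative assumptions:

```python
class AudioSender:
    """Sketch: after a single negotiation, switching categories only changes
    the category identifier sent with the audio data -- no renegotiation."""

    def __init__(self, shared_categories, default_encoders):
        self.shared = set(shared_categories)
        self.defaults = default_encoders     # category -> default encoder id
        # start in the HD category when available (illustrative policy)
        self.current = "hd" if "hd" in self.shared else next(iter(self.shared))

    def switch(self, category):
        if category not in self.shared:
            raise ValueError("category was not negotiated with the sink")
        self.current = category              # immediate, no handshake round-trip
        return {"type": "category_switch", "category": category}

    def send(self, pcm_frame):
        return {"category": self.current,
                "encoder": self.defaults[self.current],
                "payload": pcm_frame}

s = AudioSender(["hd", "standard"], {"hd": "enc_hd", "standard": "enc_std"})
s.switch("standard")                          # e.g. network degraded
print(s.send(b"\x00\x01")["encoder"])         # enc_std
```

Only categories in the negotiated set are reachable, which is exactly what lets the sink decode every frame without a new capability exchange.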
  • This technical solution is applicable to a point-to-point connection between a mobile phone and a wireless headset for wireless audio playback; it is also applicable to a point-to-point connection between a wireless headset and a tablet, a PC, or a wearable device such as a smart watch.
  • the audio playback device may also be one or more of a speaker, a sound bar, or a smart TV.
  • FIG. 2 is a schematic diagram of a system provided by an embodiment of the present application.
  • the electronic device 100 establishes a communication connection with the audio playback device 200, the electronic device 100 can send audio data to the audio playback device, and the audio playback device plays the audio data.
  • the electronic device 100 may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular telephone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device, and/or a smart city device; the embodiments of the present application do not limit the specific type of the electronic device 100.
  • the software system of the electronic device 100 includes but is not limited to Linux or other operating systems, for example Huawei's Hongmeng (HarmonyOS) system.
  • the audio playback device 200 refers to a device with audio playback capability, and the audio playback device may be, but is not limited to, headphones, speakers, TVs, AR/VR glasses devices, tablet/PC/smart watches and other wearable devices.
  • the electronic device 100 as a mobile phone and the audio playback device 200 as a Bluetooth headset as an example.
  • the electronic device 100 and the audio playback device 200 may be connected and communicated through wireless communication technology.
  • the wireless communication technologies here include but are not limited to: wireless local area network (WLAN) technology, bluetooth (bluetooth), infrared, near field communication (NFC), ZigBee, wireless fidelity direct , Wi-Fi direct) (also known as wireless fidelity peer-to-peer (Wi-Fi P2P)) and other wireless communication technologies that appear in subsequent development.
  • When the audio playback device 200 is connected to the electronic device 100 through Bluetooth technology, the electronic device 100 sends synchronization information (eg, handshake information) to the audio playback device 200 for network synchronization. After successful networking and synchronization, the audio playback device 200 plays audio under the control of the electronic device 100. That is, the electronic device 100 sends audio data to the audio playback device 200 through the established Bluetooth channel, and the audio playback device 200 plays the audio data sent by the electronic device 100.
  • the schematic diagram of the system shown in FIG. 2 merely exemplarily shows a system.
  • the electronic device 100 can also establish communication connections with multiple audio playback devices 200 at the same time.
  • the following embodiments of the present application are described by using the electronic device 100 to establish a connection with an audio playback device 200 . It should be noted that this application does not limit the number of audio playback devices 200 .
  • the electronic device 100 establishes communication connections with multiple audio playback devices 200 at the same time.
  • the audio playback device 200 refers to a device with audio playback capability; the audio playback device may be, but is not limited to, headphones, speakers, TVs, AR/VR glasses, and wearable devices such as tablets/PCs/smart watches.
  • the embodiments of the present application are described by taking the electronic device 100 as a mobile phone and the plurality of audio playback devices 200 as earphones and speakers as examples. That is, the mobile phone establishes a connection with the headset and the speaker at the same time, and the mobile phone can simultaneously send multimedia content (such as audio data) to the headset and the speaker.
  • the connections between the electronic device 100 and the multiple audio playback devices can be regarded as multiple independent systems. That is, the electronic device 100 negotiates with the headset the shared codec categories and the default encoder identifier and default decoder identifier under each category, and the electronic device 100 likewise negotiates with the speaker the shared codec categories and the default encoder identifier and default decoder identifier under each category. The electronic device 100 and the headset can independently select one of the codec categories shared by both parties to transmit audio data.
  • the electronic device 100 and the speaker can independently select one of the codec classification categories shared by both parties to transmit audio data.
• the codec classification category selected by the electronic device 100 and the earphone may be the same as, or different from, the codec classification category selected by the electronic device 100 and the speaker.
  • the electronic device 100 and the earphone or the electronic device 100 and the speaker switch codec classification categories do not affect each other.
  • the method for selecting and switching the codec classification between the electronic device 100 and the headset or between the electronic device 100 and the speaker is the same as the method for selecting and switching the codec classification between the electronic device 100 and the audio playback device 200 described in the following embodiments. This will not be repeated here.
  • the connection between the electronic device 100 and the multiple audio playback devices is regarded as a complete system.
• the headset sends all of its codec classification categories and the decoder identifiers under each category to the electronic device 100, and the speaker likewise sends all of its codec classification categories and the decoder identifiers under each category to the electronic device 100.
  • the electronic device 100 acquires all the codec classification categories in the electronic device 100 and the encoder identifiers under each category.
• the electronic device 100 confirms the common codec classification categories from its own codec classification categories and those sent by the earphone and the speaker, that is, the codec classification categories supported by the electronic device 100, the earphone and the speaker.
• the method by which the electronic device 100 confirms the codec classification categories supported by the electronic device 100, the earphone and the speaker is the same as the method, described in the following embodiments, by which the electronic device 100 and the audio playback device 200 confirm the codec classification categories supported by both, and will not be repeated here.
  • the electronic device 100 determines a default encoder and a default decoder in the codec classification category supported by the electronic device 100, the earphone and the speaker.
• the electronic device 100 sends the codec classification categories supported by the electronic device 100, the earphone and the speaker, and a default encoder identifier and a default decoder identifier under each category, to the earphone and the speaker. It should be noted that the method by which the electronic device 100, the earphone and the speaker select and switch the codec classification category is the same as the method by which the electronic device 100 and the audio playback device 200 select and switch the codec classification category described in the following embodiments, and will not be repeated here.
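The "complete system" negotiation described above can be sketched as follows. This is an illustrative sketch only: the data shapes, category names and codec identifiers are assumptions, and the default-selection policy (the first encoder listed under a category) is one possible choice, not the one mandated by the embodiment.

```python
# Hypothetical sketch of the "complete system" negotiation: the phone
# intersects its own encoder categories with the decoder categories
# reported by the headset and the speaker, then picks one default
# encoder per common category. All names and identifiers are illustrative.

def negotiate_common_categories(phone_encoders, headset_decoders, speaker_decoders):
    """Each argument maps category id -> list of codec identifiers."""
    common = set(phone_encoders) & set(headset_decoders) & set(speaker_decoders)
    # Keep only categories that actually contain at least one codec on every side.
    common = {c for c in common
              if phone_encoders[c] and headset_decoders[c] and speaker_decoders[c]}
    # One simple default policy: the first encoder listed under the category.
    return {c: phone_encoders[c][0] for c in sorted(common)}

phone   = {"low_latency": ["LC3"], "high_quality": ["LDAC", "L2HC"], "standard": ["SBC", "AAC"]}
headset = {"low_latency": ["LC3"], "standard": ["SBC"]}
speaker = {"standard": ["SBC", "AAC"], "high_quality": []}

print(negotiate_common_categories(phone, headset, speaker))
# {'standard': 'SBC'}
```

Only "standard" survives here: "low_latency" is missing on the speaker, and the speaker's "high_quality" category is empty, so it is filtered out even though the category identifier matches.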
  • the following describes a codec negotiation and switching method provided by this embodiment by taking an example of establishing a connection between the electronic device 100 and an audio playback device 200 .
• the principle of establishing connections between the electronic device 100 and multiple audio playback devices 200 is the same as that of establishing a connection between the electronic device 100 and one audio playback device 200, and details are not described herein again.
  • FIG. 4 exemplarily shows a schematic diagram of another process of transmitting audio data between the electronic device 100 and the audio playback device 200 .
  • the electronic device 100 includes an audio data acquisition unit, an audio stream decoding unit, a sound mixing rendering unit, a wireless audio encoding unit, a capability negotiation unit, an encoding control unit, and a wireless transmission unit.
  • the audio playback device 200 includes a wireless transmission unit, a wireless audio decoding unit, an audio power amplifier unit, an audio playback unit, a capability negotiation unit and a decoding control unit.
• the functions of the audio data acquisition unit, audio stream decoding unit, sound mixing rendering unit, wireless audio encoding unit and wireless transmission unit in the electronic device 100 are the same as those of the corresponding units shown in FIG. 1, and will not be repeated in this application.
• the functions of the wireless transmission unit, wireless audio decoding unit, audio power amplifier unit and audio playback unit in the audio playback device 200 are the same as those of the corresponding units shown in FIG. 1, and will not be repeated in this application.
  • the capability negotiation unit is specifically configured to obtain the codec classification standard, the identifiers of all encoders in the electronic device 100, and all the encoder capabilities.
• the encoder capabilities include parameter information such as the sampling rate, quantization bit depth, bit rate and number of channels. According to the codec classification standard and the capabilities of all the encoders in the electronic device 100, all the encoders in the electronic device 100 are divided into multiple categories; an encoder can belong to one or more categories. It should be noted that the codec classification standard is preset in the electronic device 100.
• the capability negotiation unit is specifically configured to obtain the codec classification standard, the identifiers of all the decoders in the audio playback device 200, and the capabilities of all the decoders. According to the codec classification standard and the capabilities of all the decoders in the audio playback device 200, all the decoders in the audio playback device 200 are divided into multiple categories; a decoder can belong to one or more categories. It should be noted that the codec classification standard is preset in the audio playback device 200.
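As an illustration of how encoders or decoders might be divided into categories according to a preset classification standard, the following sketch uses invented category names and capability thresholds; the actual classification standard preset in the devices is not limited to this.

```python
# Illustrative classification of codecs into categories according to a
# preset classification standard. Thresholds and category names are
# assumptions for demonstration only.

CLASSIFICATION_STANDARD = {
    # category id -> predicate over a codec's capability parameters
    "high_quality": lambda c: c["sample_rate"] >= 96000 and c["bit_depth"] >= 24,
    "standard":     lambda c: c["sample_rate"] >= 44100,
    "voice":        lambda c: c["channels"] == 1,
}

def classify(codecs):
    """codecs: mapping of codec id -> capability dict.
    A codec may land in one or more categories."""
    categories = {name: [] for name in CLASSIFICATION_STANDARD}
    for codec_id, caps in codecs.items():
        for name, matches in CLASSIFICATION_STANDARD.items():
            if matches(caps):
                categories[name].append(codec_id)
    return categories

encoders = {
    "L2HC": {"sample_rate": 96000, "bit_depth": 24, "bitrate": 960, "channels": 2},
    "SBC":  {"sample_rate": 44100, "bit_depth": 16, "bitrate": 328, "channels": 2},
}
print(classify(encoders))
```

Note that "L2HC" lands in both "high_quality" and "standard", matching the statement that one encoder can belong to one or more categories.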
  • the audio playback device 200 sends the category identifiers whose number of decoder identifiers in the audio playback device 200 is greater than or equal to 1 to the wireless transmission unit in the audio playback device 200 through the capability negotiation unit.
  • the wireless transmission unit in the audio playback device 200 sends the identifiers of the categories whose number of decoder identifiers is greater than or equal to 1 to the wireless transmission unit in the electronic device 100 .
• a category in which the number of decoder identifiers is greater than or equal to 1 can be understood as a category that contains at least one decoder identifier.
  • the wireless transmission unit in the electronic device 100 receives and sends the class identifiers whose number of decoder identifiers is greater than or equal to 1 to the capability negotiation unit in the electronic device 100 .
  • the capability negotiation unit in the electronic device 100 receives the identifiers of the categories in which the number of decoder identifiers is greater than or equal to 1.
  • the capability negotiation unit in the electronic device 100 also acquires the category identifiers in which the number of encoder identifiers in the electronic device 100 is greater than or equal to 1.
• the capability negotiation unit in the electronic device 100 sends the category identifiers with the number of decoder identifiers greater than or equal to 1 and the category identifiers with the number of encoder identifiers greater than or equal to 1 to the encoding control unit in the electronic device 100.
• the encoding control unit in the electronic device 100 confirms the common category identifiers from the category identifiers with the number of decoder identifiers greater than or equal to 1 and the category identifiers with the number of encoder identifiers greater than or equal to 1, and determines one default encoder for each common category.
  • how the encoding control unit negotiates a default encoder in each common category of the electronic device 100 and the audio playback device 200 will be described in detail in subsequent embodiments, which is not limited in this application.
• after the encoding control unit confirms the common categories and the default encoder under each common category, the encoding control unit selects an appropriate category (e.g., the first category) from the common categories according to the application type, the playback audio characteristics (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on, and performs audio data transmission with the default encoder in that category. How the encoding control unit selects the first category from the common categories according to these factors will be described in detail in subsequent embodiments, which is not limited in this application.
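The category-selection step can be illustrated with a toy policy. The rules below (favouring latency for games, dropping to a lower class on a poor link, and so on) are assumptions for demonstration; the embodiment defers the concrete policy to later description.

```python
# A hedged sketch of how an encoding control unit might pick the "first
# category" from the common categories based on runtime factors.
# The priority rules are invented for illustration only.

def select_category(common, app_type, sample_rate, rendering_on, link_quality):
    """common: set of category ids available on both sides."""
    if app_type == "game" and "low_latency" in common:
        return "low_latency"          # games favour latency over fidelity
    if link_quality == "poor" and "standard" in common:
        return "standard"             # congested channel: lower bit-rate class
    if sample_rate >= 96000 and not rendering_on and "high_quality" in common:
        return "high_quality"         # hi-res source, no extra rendering load
    return "standard" if "standard" in common else sorted(common)[0]

common = {"low_latency", "standard", "high_quality"}
print(select_category(common, "music", 96000, False, "good"))   # high_quality
print(select_category(common, "game", 48000, False, "good"))    # low_latency
```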
• the encoding control unit sends the identifier of the first category to the wireless audio encoding unit, and the wireless audio encoding unit encodes the audio data according to the encoder corresponding to the default encoder identifier in the first category.
• the encoding control unit sends the identifier of the first category to the wireless transmission unit, and the electronic device 100 sends the identifier of the first category to the wireless transmission unit of the audio playback device 200 through its wireless transmission unit.
• the wireless transmission unit of the audio playback device 200 sends the identifier of the first category to the capability negotiation unit in the audio playback device 200.
• the capability negotiation unit of the audio playback device 200 sends the identifier of the first category to the decoding control unit in the audio playback device 200.
• the decoding control unit in the audio playback device 200 sends the identifier of the first category to the wireless audio decoding unit.
• the wireless audio decoding unit decodes the encoded audio data sent by the electronic device 100 according to the decoder corresponding to the default decoder identifier in the first category.
• after the encoding control unit selects an appropriate category (such as the first category) from the common categories and uses the default codec in the first category for audio data transmission, if factors change, such as a change in the application type, a change in the playback audio characteristics (sampling rate, quantization bit depth, number of channels), the audio rendering capability of the electronic device being turned on, or a change in the network conditions of the channel, the electronic device 100 re-selects another category through the encoding control unit, and informs the decoding control unit in the audio playback device 200 of the identifier of that category.
  • the electronic device 100 and the audio playback device 200 use the default codec in another category to transmit audio data.
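The switch flow between the encoding control unit and the decoding control unit can be sketched as a simple message exchange. The class and message names here are assumptions, not the embodiment's actual protocol.

```python
# Illustrative sketch of the category switch: when runtime conditions
# change, the phone's encoding control unit re-selects a category and
# notifies the sink's decoding control unit before both sides swap codecs.

class EncodingControlUnit:
    def __init__(self, common, current):
        self.common, self.current = common, current

    def on_condition_change(self, new_category, send):
        # Only switch to a category both sides negotiated earlier.
        if new_category not in self.common or new_category == self.current:
            return False
        send({"msg": "SWITCH_CATEGORY", "category": new_category})
        self.current = new_category   # encoder now uses the new default
        return True

class DecodingControlUnit:
    def __init__(self, current):
        self.current = current

    def on_message(self, message):
        if message["msg"] == "SWITCH_CATEGORY":
            self.current = message["category"]   # decoder follows the phone

sink = DecodingControlUnit("standard")
phone = EncodingControlUnit({"standard", "high_quality"}, "standard")
phone.on_condition_change("high_quality", sink.on_message)
print(phone.current, sink.current)   # high_quality high_quality
```

A switch to a category that was never negotiated is simply rejected, which keeps both ends on a codec the other side is guaranteed to support.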
  • FIG. 5 exemplarily shows a schematic structural diagram of the electronic device 100 .
• the following description takes the electronic device 100 being a mobile phone as an example. It should be understood that the electronic device 100 shown in FIG. 5 is only an example; the electronic device 100 may have more or fewer components than those shown in FIG. 5, two or more components may be combined, or different component configurations are possible.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
• the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
• the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
• the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
  • the processor 110 may include one or more interfaces.
• the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may contain multiple sets of I2C buses.
  • the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • the processor 110 may contain multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
• the PCM interface can also be used for audio communication, to sample, quantize and encode an analog signal.
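As background on what sampling, quantizing and encoding mean for PCM, here is a minimal didactic sketch; it illustrates the signal representation only and is unrelated to the PCM bus interface hardware itself.

```python
# Minimal illustration of PCM: a continuous signal is sampled at discrete
# times and each sample is quantized to a signed integer (16-bit here).
# Purely didactic; all parameter values are illustrative.

import math

def pcm_encode(signal, sample_rate, duration_s, bit_depth=16):
    full_scale = 2 ** (bit_depth - 1) - 1        # 32767 for 16-bit audio
    n = int(sample_rate * duration_s)
    return [round(signal(t / sample_rate) * full_scale) for t in range(n)]

# 1 kHz sine sampled at 8 kHz for 1 ms -> 8 samples
samples = pcm_encode(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001)
print(len(samples), max(samples) <= 32767)
```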
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
  • the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
  • the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
• the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
• the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division-synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
• the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • Display screen 194 is used to display images, videos, and the like.
  • Display screen 194 includes a display panel.
• the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
  • the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193 .
• when the shutter is opened, light is transmitted to the camera photosensitive element through the lens, and the optical signal is converted into an electrical signal; the camera photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193 .
  • the camera 193 is used to capture still images or video.
  • the object is projected through the lens to generate an optical image onto the photosensitive element.
• the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
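The Fourier transform on frequency-point energy mentioned above can be illustrated with a toy discrete Fourier transform; a real DSP would use an optimized FFT, so this pure-Python version is didactic only.

```python
# Toy discrete Fourier transform illustrating the kind of frequency-point
# energy computation delegated to a DSP. Didactic only.

import cmath

def dft_bin_energy(samples, k):
    """Energy at frequency bin k of an N-point DFT."""
    n = len(samples)
    x_k = sum(s * cmath.exp(-2j * cmath.pi * k * i / n) for i, s in enumerate(samples))
    return abs(x_k) ** 2

# A pure tone completing exactly 2 cycles over 8 samples concentrates
# its energy in bin 2.
tone = [cmath.cos(2 * cmath.pi * 2 * i / 8).real for i in range(8)]
print(dft_bin_energy(tone, 2) > dft_bin_energy(tone, 1))
```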
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
• the electronic device 100 can play or record videos in various encoding formats, such as Moving Picture Experts Group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
• the external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
  • the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
  • the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • The speaker 170A, also referred to as a "loudspeaker", is used to convert an audio electrical signal into a sound signal.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • The receiver 170B, also referred to as an "earpiece", is used to convert an audio electrical signal into a sound signal.
  • When the electronic device 100 answers a call or a voice message, the receiver 170B can be placed close to the human ear to listen to the voice.
  • The microphone 170C, also called a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal.
  • When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the earphone jack 170D is used to connect wired earphones.
  • The earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
  • the pressure sensor 180A may be provided on the display screen 194 .
  • the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
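As a sketch, the pressure-threshold dispatch described above can be expressed as follows; the threshold value and the instruction names are illustrative assumptions, not values from this application:

```python
# Hypothetical threshold; the application only defines "first pressure
# threshold" abstractly, not a concrete value.
FIRST_PRESSURE_THRESHOLD = 0.5  # normalized touch intensity, assumed

def dispatch_touch(intensity: float, target: str) -> str:
    """Map a touch on the short-message icon to an operation instruction."""
    if target != "sms_icon":
        return "ignore"
    if intensity < FIRST_PRESSURE_THRESHOLD:
        # below the first pressure threshold: view the short message
        return "view_short_message"
    # at or above the first pressure threshold: create a new short message
    return "create_new_short_message"
```

The key point is that the same touch position yields different instructions purely from the intensity reported by the pressure sensor 180A.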
  • The gyro sensor 180B may be used to determine the motion attitude of the electronic device 100.
  • In some embodiments, the angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyroscope sensor 180B detects the shaking angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • The electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D, and further set features such as automatic unlocking of the flip cover according to the detected opening or closing state of the holster or of the flip cover.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
  • The magnitude and direction of gravity can be detected when the electronic device 100 is stationary. The acceleration sensor 180E can also be used to identify the posture of the electronic device, and can be applied to horizontal/vertical screen switching, pedometers, and the like.
  • the electronic device 100 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
  • Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
  • the light emitting diodes may be infrared light emitting diodes.
  • the electronic device 100 emits infrared light to the outside through the light emitting diode.
  • Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
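The reflected-light decision described above can be sketched as follows; the threshold value and function names are hypothetical:

```python
# Hypothetical detection threshold; the application only says
# "sufficient" vs. "insufficient" reflected light.
REFLECTED_LIGHT_THRESHOLD = 100  # sensor counts, assumed

def object_nearby(reflected_light: int) -> bool:
    """Sufficient reflected infrared light means an object is near."""
    return reflected_light >= REFLECTED_LIGHT_THRESHOLD

def screen_should_turn_off(in_call: bool, reflected_light: int) -> bool:
    """Turn the screen off when the user holds the device to the ear
    during a call, to save power."""
    return in_call and object_nearby(reflected_light)
```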
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
  • When the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by the low temperature.
  • When the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by the low temperature.
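The temperature processing strategy above can be sketched as follows, assuming hypothetical threshold values (the application does not give concrete temperatures):

```python
# Assumed thresholds, for illustration only.
HIGH_TEMP_THRESHOLD = 45.0  # degrees Celsius, hypothetical
LOW_TEMP_THRESHOLD = 0.0    # degrees Celsius, hypothetical

def thermal_policy(temp_c: float) -> list:
    """Return the actions the device takes for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_THRESHOLD:
        # reduce performance of the processor near the sensor to cut
        # power consumption and implement thermal protection
        actions.append("throttle_cpu")
    elif temp_c < LOW_TEMP_THRESHOLD:
        # heat the battery and boost its output voltage to avoid an
        # abnormal low-temperature shutdown
        actions.append("heat_battery")
        actions.append("boost_battery_voltage")
    return actions
```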
  • The touch sensor 180K is also called a "touch panel".
  • The touch sensor 180K may be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to touch operations may be provided through display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • The bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human vocal part.
  • The bone conduction sensor 180M can also contact the human pulse to receive a blood pressure pulsation signal.
  • the bone conduction sensor 180M can also be disposed in the earphone, combined with the bone conduction earphone.
  • the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
  • The application processor can parse heart rate information based on the blood pressure pulsation signal obtained by the bone conduction sensor 180M, to realize a heart rate detection function.
  • The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys.
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Motor 191 can generate vibrating cues.
  • the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
  • touch operations acting on different applications can correspond to different vibration feedback effects.
  • the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
  • Touch operations in different application scenarios (for example, time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • A SIM card can be brought into contact with or separated from the electronic device 100 by being inserted into or pulled out of the SIM card interface 195.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
  • In some embodiments, the electronic device 100 employs an eSIM, i.e., an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • The embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
  • FIG. 6 is a block diagram of a software structure of an electronic device 100 (eg, a mobile phone) according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, which are, from top to bottom, an application layer, an application framework layer, an Android runtime (Android runtime) and a system library, and a kernel layer.
  • the application layer can include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
  • Content providers are used to store and retrieve data and make these data accessible to applications.
  • the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
  • a display interface can consist of one or more views.
  • the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 100 .
  • For example, the management of call statuses (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
  • the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
  • The core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
  • a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
  • When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
  • The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking an example in which the touch operation is a touch click operation and the control corresponding to the click operation is the camera application icon: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer.
  • the camera 193 captures still images or video.
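The event flow described above (the kernel layer packaging a hardware interrupt into a raw input event, the application framework layer resolving the touched control) can be sketched as follows; all names and the hit-map representation are illustrative assumptions:

```python
import time

def kernel_make_raw_event(x: int, y: int) -> dict:
    """Kernel layer: package touch coordinates with a timestamp
    into a raw input event."""
    return {"x": x, "y": y, "timestamp": time.time()}

def framework_resolve_control(event: dict, hit_map: dict) -> str:
    """Framework layer: identify the control at the touch position."""
    return hit_map.get((event["x"], event["y"]), "none")

# Illustrative layout: the camera application icon sits at (10, 20).
hit_map = {(10, 20): "camera_icon"}
event = kernel_make_raw_event(10, 20)
control = framework_resolve_control(event, hit_map)
if control == "camera_icon":
    # the camera application would then call the framework interface,
    # which starts the camera driver via the kernel layer
    launched = "camera_app"
```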
  • FIG. 7 exemplarily shows a schematic structural diagram of an audio playback device 200 (eg, a Bluetooth device) provided by an embodiment of the present application.
  • The following takes the audio playback device 200 being a Bluetooth device as an example. It should be understood that the audio playback device 200 shown in FIG. 7 is only an example; the audio playback device 200 may have more or fewer components than those shown in FIG. 7, two or more components may be combined, or it may have a different component configuration.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • The audio playback device 200 may include: a processor 201, a memory 202, a Bluetooth communication module 203, an antenna 204, a power switch 205, a USB communication processing module 206, and an audio module 207. Among them:
  • the processor 201 may be used to read and execute computer readable instructions.
  • the processor 201 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operations corresponding to the instructions.
  • The register is mainly responsible for saving register operands and intermediate operation results temporarily stored during instruction execution.
  • the hardware architecture of the processor 201 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
  • the processor 201 may be configured to parse a signal received by the Bluetooth communication module 203, such as a pairing mode modification request sent by the terminal 100, and so on.
  • the processor 201 may be configured to perform corresponding processing operations according to the parsing result, such as generating a pairing mode modification response, and the like.
  • Memory 202 is coupled to processor 201 for storing various software programs and/or sets of instructions.
  • memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 202 can store operating systems, such as embedded operating systems such as uCOS, VxWorks, RTLinux, and the like.
  • Memory 202 may also store communication programs that may be used to communicate with terminal 100, one or more servers, or other devices.
  • the Bluetooth communication module 203 may include a Classic Bluetooth (BT) module and a Bluetooth Low Energy (BLE) module.
  • The Bluetooth communication module 203 can monitor signals transmitted by other devices (such as the terminal 100), for example probe requests and scan signals, and can send response signals, scan responses, etc., so that other devices (such as the terminal 100) can discover the audio playback device 200, establish a wireless communication connection with the audio playback device 200, and communicate with the audio playback device 200 through Bluetooth.
  • the Bluetooth communication module 203 can also transmit signals, such as broadcasting BLE signals, so that other devices (such as the terminal 100 ) can discover the audio playback device 200 and establish a wireless communication connection with other devices (such as the terminal 100 ), Communicate with other devices (such as the terminal 100 ) through Bluetooth.
  • the wireless communication function of the audio playback device 200 may be implemented by an antenna 204, a Bluetooth communication module 203, a modem processor, and the like.
  • Antenna 204 may be used to transmit and receive electromagnetic wave signals. Each antenna in audio playback device 200 may be used to cover a single or multiple communication frequency bands.
  • the Bluetooth communication module 203 may have one or more antennas.
  • the power switch 205 may be used to control the power supplied by the power source to the audio playback device 200 .
  • the USB communication processing module 206 may be used to communicate with other devices through a USB interface (not shown). In some embodiments, the audio playback device 200 may also not include the USB communication processing module 206 .
  • the audio module 207 can be used to output audio signals through the audio output interface, so that the audio playback device 200 can support audio playback.
  • the audio module can also be used to receive audio data through the audio input interface.
  • the audio playback device 200 may be a media playback device such as a Bluetooth headset.
  • the audio playback device 200 may further include a display screen (not shown), wherein the display screen may be used to display images, prompt information, and the like.
  • The display screen can be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (QLED) display, and so on.
  • the audio playback device 200 may also include a serial interface such as an RS-232 interface.
  • the serial interface can be connected to other devices, such as audio external devices such as speakers, so that the audio playback device 200 and the audio external device can cooperate to play audio and video.
  • the structure shown in FIG. 7 does not constitute a specific limitation on the audio playback device 200 .
  • the audio playback device 200 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • The electronic device 100 classifies all encoders in the electronic device 100 into multiple categories according to the codec classification standard, and the audio playback device 200 classifies all decoders in the audio playback device 200 into multiple categories.
  • The electronic device 100 and the audio playback device 200 may also classify one or more codecs of the electronic device 100 and the audio playback device 200 into multiple categories after establishing a communication connection. This application does not limit the time at which the electronic device 100 and the audio playback device 200 classify the one or more codecs.
  • The following describes the codec classification standard in detail, and how the electronic device 100 and the audio playback device 200 classify the codecs in the electronic device 100 and the audio playback device 200 into multiple categories according to the codec classification standard.
  • the codec classification standard can be obtained according to one or a combination of two or more parameters, such as sampling rate, quantization bit depth, code rate, and number of channels.
  • The codec classification standard may be obtained according to one parameter among the sampling rate, quantization bit depth, code rate, number of channels, and the like; for example, the one parameter may be the sampling rate.
  • The codec classification standard may also be obtained according to two parameters; for example, the two parameters may be the sampling rate and the quantization bit depth.
  • The codec classification standard may also be obtained according to three parameters; for example, the three parameters may be the sampling rate, the quantization bit depth, and the code rate.
  • The codec classification standard may also be obtained according to four parameters; for example, the four parameters may be the sampling rate, the quantization bit depth, the code rate, and the number of channels.
  • the codec classification standard may also refer to other parameters, such as audio formats, etc., which are not limited in this application.
  • the codec classification criteria are pre-existing in the electronic device 100 and the audio playback device 200 .
  • the sampling rate is the number of times the sound signal is sampled in a unit time (for example, one second). The higher the sampling rate, the more realistic the restoration of the sound and the better the sound quality.
  • The quantization bit depth is the quantization precision, which determines the dynamic range of digital audio. During sampling, a higher quantization bit depth provides more possible amplitude values, resulting in a larger dynamic range, a higher signal-to-noise ratio, and improved fidelity.
  • The code rate refers to the bit rate, i.e., the number of bits transmitted per unit time, in bits per second (bps) or kilobits per second (kbps). The higher the bit rate, the more audio data is transmitted per second, and the clearer the sound quality.
  • the number of channels is the number of speakers that support different sounds.
  • the number of channels includes mono, dual, 2.1, 5.1, 7.1 and so on.
  • the format of audio data is generally PCM data format.
  • The PCM (pulse code modulation) data format is an uncompressed audio data stream: standard digital audio data converted from an analog signal through sampling, quantization, and encoding.
  • the format of the audio data also includes MP3 data format, MPEG data format, MPEG-4 data format, WAVE data format, CD data format and the like.
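The parameters defined above combine directly into the raw data rate of a PCM stream, which is one way to see why a higher sampling rate and quantization bit depth improve quality at the cost of more data:

```python
def pcm_data_rate(sample_rate_hz: int, bit_depth: int, channels: int) -> int:
    """Raw (uncompressed) PCM stream rate in bytes per second:
    samples per second x bits per sample x channels, divided by 8."""
    return sample_rate_hz * bit_depth * channels // 8

# CD-quality audio: 44.1 kHz sampling rate, 16-bit quantization, stereo
rate = pcm_data_rate(44100, 16, 2)  # 176400 bytes per second
```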
  • The codec classification standard classifies codecs according to the value ranges of one or more parameters.
  • For example, the sampling rate may be divided into multiple segments according to the lowest and highest sampling rates of codecs commonly used in the device, and the sampling rate ranges of the segments are different.
  • Specifically, when the sampling rate of a codec is greater than or equal to the first sampling rate, the codec is classified into category one; when the sampling rate is less than the first sampling rate but greater than or equal to the second sampling rate, the codec is classified into category two; when the sampling rate is less than the second sampling rate, the codec is classified into category three. The first sampling rate is greater than the second sampling rate.
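A minimal sketch of this single-parameter classification, assuming hypothetical values for the first and second sampling rates (the application only requires the first to be greater than the second):

```python
# Hypothetical thresholds; only FIRST > SECOND is required.
FIRST_SAMPLING_RATE = 96_000   # Hz, assumed
SECOND_SAMPLING_RATE = 44_100  # Hz, assumed

def classify_by_sampling_rate(sample_rate_hz: int) -> int:
    """Return category 1, 2, or 3 per the sampling-rate ranges above."""
    if sample_rate_hz >= FIRST_SAMPLING_RATE:
        return 1
    if sample_rate_hz >= SECOND_SAMPLING_RATE:
        return 2
    return 3
```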
  • For another example, the sampling rate and the code rate may each be divided into multiple segments according to the lowest and highest sampling rates and the lowest and highest code rates of codecs commonly used in the device.
  • Specifically, when the sampling rate of a codec is greater than or equal to the first sampling rate and its code rate is greater than or equal to the first code rate, the codec is classified into category one; when the sampling rate is less than the first sampling rate but greater than or equal to the second sampling rate, and the code rate is less than the first code rate but greater than or equal to the second code rate, the codec is classified into category two; when the sampling rate is less than the second sampling rate and the code rate is less than the second code rate, the codec is classified into category three. The first sampling rate is greater than the second sampling rate, and the first code rate is greater than the second code rate.
  • For another example, the sampling rate, the code rate, and the quantization bit depth may each be divided into multiple segments according to the lowest and highest sampling rates, the lowest and highest code rates, and the lowest and highest quantization bit depths of codecs commonly used in the device.
  • Specifically, when the sampling rate of a codec is greater than or equal to the first sampling rate, its code rate is greater than or equal to the first code rate, and its quantization bit depth is greater than or equal to the first quantization bit depth, the codec is classified into category one; when the sampling rate is less than the first sampling rate but greater than or equal to the second sampling rate, the code rate is less than the first code rate but greater than or equal to the second code rate, and the quantization bit depth is less than the first quantization bit depth but greater than or equal to the second quantization bit depth, the codec is classified into category two; when the sampling rate is less than the second sampling rate, the code rate is less than the second code rate, and the quantization bit depth is less than the second quantization bit depth, the codec is classified into category three.
  • the sample rate, code rate, quantization bit depth, and number of channels may be divided into segments according to the lowest and highest sampling rates, the lowest and highest code rates, the lowest and highest quantization bit depths, and the lowest number of channels most commonly used (for example, two channels) of the codecs commonly used in the device.
  • when the sampling rate of the codec is greater than or equal to the first sampling rate, the code rate of the codec is greater than or equal to the first code rate, the quantization bit depth of the codec is greater than or equal to the first quantization bit depth, and the number of channels of the codec is greater than or equal to the first number of channels, the codec is divided into category one; when the sampling rate of the codec is less than the first sampling rate and greater than or equal to the second sampling rate, the code rate of the codec is less than the first code rate and greater than or equal to the second code rate, the quantization bit depth of the codec is less than the first quantization bit depth and greater than or equal to the second quantization bit depth, and the number of channels of the codec is greater than or equal to the first number of channels, the codec is divided into category two; when the sampling rate of the codec is less than the second sampling rate, the code rate of the codec is less than the second code rate, the quantization bit depth of the codec is less than the second quantization bit depth, and the number of channels of the codec is greater than or equal to the first number of channels, the codec is divided into category three.
  • the first sampling rate is greater than the second sampling rate, the first code rate is greater than the second code rate, and the first quantization bit depth is greater than the second quantization bit depth.
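The three-way threshold rule above can be sketched as a small function. This is an illustrative sketch only: the concrete threshold values (`FIRST_SAMPLE_RATE` and so on) are assumptions for demonstration, not values fixed by this application.

```python
# Illustrative sketch of the three-segment codec classification described above.
# Threshold values are assumed examples, not values fixed by the application.

FIRST_SAMPLE_RATE, SECOND_SAMPLE_RATE = 48_000, 24_000   # Hz
FIRST_CODE_RATE, SECOND_CODE_RATE = 600, 300             # kbps
FIRST_BIT_DEPTH, SECOND_BIT_DEPTH = 24, 16               # bits

def classify(sample_rate, code_rate, bit_depth):
    """Return the category per the threshold rules, or None if the
    parameters straddle two segments."""
    if (sample_rate >= FIRST_SAMPLE_RATE and code_rate >= FIRST_CODE_RATE
            and bit_depth >= FIRST_BIT_DEPTH):
        return "category one"
    if (SECOND_SAMPLE_RATE <= sample_rate < FIRST_SAMPLE_RATE
            and SECOND_CODE_RATE <= code_rate < FIRST_CODE_RATE
            and SECOND_BIT_DEPTH <= bit_depth < FIRST_BIT_DEPTH):
        return "category two"
    if (sample_rate < SECOND_SAMPLE_RATE and code_rate < SECOND_CODE_RATE
            and bit_depth < SECOND_BIT_DEPTH):
        return "category three"
    return None
```

A codec whose parameters fall in different segments for different parameters matches none of the three rules, which is why the sketch returns `None` in that case.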
  • the codec classification standard can be set according to different requirements, which are not listed one by one in this application.
  • the electronic device 100 and the audio playback device 200 classify the encoder in the electronic device 100 and the decoder in the audio playback device 200 into multiple categories according to the codec classification standard.
  • the electronic device 100 classifies all the encoders into multiple classes, and the audio playback device 200 classifies all the decoders into multiple classes.
  • the electronic device 100 will obtain the classification standard of the codec.
  • the classification standard of the codec can be used to classify the codec into multiple categories according to the information of one or more parameters of the codec.
  • the electronic device 100 will acquire the values of one or more parameters corresponding to all encoders in the electronic device 100 .
  • when the values of the one or more parameters of an encoder fall within the value ranges of the one or more parameters adopted by a category of the codec classification standard, the electronic device 100 divides the encoder under that category.
  • the electronic device 100 divides all encoders into one or more categories according to the above method. It should be noted that an encoder can be divided into multiple categories.
  • the electronic device 100 records the identifier of the corresponding encoder under each category. For example, when the category of the codec classification standard is category 1, the identifiers of the encoders corresponding to category 1 include encoder 1 and encoder 2.
  • the audio playback device 200 divides all the decoders into multiple categories, which is consistent with the method for the electronic device 100 to divide all the encoders into multiple categories, and details are not described herein again.
  • the audio playback device 200 records the identifier of the corresponding decoder under each category. For example, when the category of the codec classification standard is category 1, the identifiers of decoders corresponding to category 1 include decoder 1 and decoder 2.
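The per-category bookkeeping described above, in which each device records under each category the identifiers of the codecs that fall into it, and one codec may appear under several categories, can be sketched as follows. The category ranges and the codec capability lists are hypothetical examples, not values taken from the application.

```python
# Illustrative sketch: build a table mapping each category to the identifiers
# of the codecs whose supported sampling rates fall into that category's range.
# Ranges and codec capabilities are assumed example values.

CATEGORY_RANGES = {
    "category one":   (48_000, float("inf")),  # rate >= first sampling rate
    "category two":   (24_000, 48_000),        # second <= rate < first
    "category three": (0, 24_000),             # rate < second sampling rate
}

def record_identifiers(codecs):
    """codecs: {identifier: list of supported sampling rates in Hz}."""
    table = {category: [] for category in CATEGORY_RANGES}
    for identifier, rates in codecs.items():
        for category, (low, high) in CATEGORY_RANGES.items():
            # a codec is recorded under every category that any of its
            # supported rates falls into, so it may appear more than once
            if any(low <= rate < high for rate in rates):
                table[category].append(identifier)
    return table
```

The same helper works for encoders on the electronic device and decoders on the audio playback device, since both sides apply the same classification standard.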
  • the codec classification criteria can be obtained according to the sampling rate.
  • Table 1 exemplarily shows the codec classification standards obtained according to the sampling rate.
  • when the value of one or more sampling rates supported by the codec is greater than or equal to the first sampling rate, the codec belongs to category one; when the value of one or more sampling rates supported by the codec is less than the first sampling rate and greater than or equal to the second sampling rate, the codec belongs to category two; when the value of one or more sampling rates supported by the codec is less than the second sampling rate, the codec belongs to category three. Here, the second sampling rate is smaller than the first sampling rate.
  • the electronic device 100 obtains the values of the sampling rates supported by all encoders in the electronic device 100 .
  • the electronic device 100 divides the encoders into the categories shown in Table 1 according to the values of the sampling rates supported by all the encoders in the electronic device 100 . It should be noted that the same encoder can be divided into multiple categories.
  • Table 2 exemplarily shows that the electronic device 100 and the audio playback device 200 classify the codecs in the electronic device 100 and the audio playback device 200 into a plurality of categories according to the sampling rate.
  • the identifiers of the codecs shown in the embodiments of the present application may also be expressed in binary; for example, encoder one may also be expressed as e001, encoder two as e010, and encoder three as e011; decoder one may also be expressed as d001, decoder two as d010, and decoder three as d011.
  • the first sampling rate is 48 kHz
  • the second sampling rate is 24 kHz.
  • the encoders classified into category one in the electronic device 100 can be called high-definition sound quality encoders; the encoders classified into category two can be called standard-definition sound quality encoders; and the encoders classified into category three can be called basic sound quality encoders.
  • the decoders classified into category one in the audio playback device 200 can be called high-definition sound quality decoders; the decoders classified into category two can be called standard-definition sound quality decoders; and the decoders classified into category three can be called basic sound quality decoders.
  • the values of the sampling rate supported by the encoder 1 are 8kHz, 16kHz, 24kHz, 32kHz, 48kHz and 96kHz.
  • the sample rate values supported by encoder two are 32kHz and 48kHz.
  • the sample rate values supported by encoder three are 8kHz and 16kHz.
  • according to the codec classification criteria shown in Table 2, encoder one belongs to category one, and encoder one also belongs to category two and category three; encoder two belongs to category two; encoder three belongs to category three.
  • the audio playback device 200 also has a decoder 1, a decoder 2, and a decoder 3.
  • the audio playback device 200 acquires the values of the sampling rates supported by all the decoders in the audio playback device 200 .
  • the audio playback device 200 classifies the decoders into the categories of the codec classification standards shown in Table 2 according to the values of the sampling rates supported by all the decoders in the audio playback device 200 .
  • the method by which the audio playback device 200 divides the decoders into the categories of the codec classification standard shown in Table 1 according to the values of the sampling rates supported by all the decoders in the audio playback device 200 is the same as the method by which the electronic device 100 divides the encoders into those categories according to the values of the sampling rates supported by the encoders, and the description is not repeated here.
  • the codec classification standard can be obtained according to the sampling rate, quantization bit depth, code rate and number of channels.
  • Table 3 exemplarily shows the codec classification standards obtained according to the sampling rate, the quantization bit depth, the code rate and the number of channels.
  • when the value of one or more sampling rates supported by the codec is greater than or equal to the first sampling rate, the value of one or more quantization bit depths supported by the codec is greater than or equal to the first quantization bit depth, the value of one or more code rates supported by the codec is greater than or equal to the first code rate, and the number of channels supported by the codec is greater than or equal to the first number of channels, the codec belongs to category one.
  • when the value of one or more sampling rates supported by the codec is less than the first sampling rate and greater than or equal to the second sampling rate, the value of one or more quantization bit depths supported by the codec is less than the first quantization bit depth and greater than or equal to the second quantization bit depth, the value of one or more code rates supported by the codec is less than the first code rate and greater than or equal to the second code rate, and the number of channels supported by the codec is greater than or equal to the first number of channels, the codec belongs to category two; when the value of one or more sampling rates supported by the codec is less than the second sampling rate, the value of one or more quantization bit depths supported by the codec is less than the second quantization bit depth, the value of one or more code rates supported by the codec is less than the second code rate, and the number of channels supported by the codec is greater than or equal to the first number of channels, the codec belongs to category three.
  • the electronic device 100 obtains the values of the sampling rates supported by all the encoders in the electronic device 100, the values of the code rates supported by all the encoders, the values of the quantization bit depths supported by all the encoders, and the numbers of channels supported by all the encoders.
  • the electronic device 100 divides all encoders in the electronic device 100 into the categories shown in Table 3. It should be noted that the same encoder can be divided into multiple categories.
  • Table 4 exemplarily shows that the codecs in the electronic device 100 and the audio playback device 200 are classified into a plurality of categories according to the sampling rate, the quantization bit depth, the code rate, and the number of channels.
  • the encoders classified into category one in the electronic device 100 can be called high-definition sound quality encoders; the encoders classified into category two can be called standard-definition sound quality encoders; and the encoders classified into category three can be called basic sound quality encoders.
  • the sampling rate values supported by encoder one are 8kHz, 16kHz, 24kHz, 32kHz, 48kHz and 96kHz; the quantization bit depth values supported by encoder one are 16 bits, 24 bits and 32 bits; the code rate values supported by encoder one are 600kbps, 900kbps and 1200kbps; and the numbers of channels supported by encoder one are mono, two-channel, 2.1-channel and 5.1-channel.
  • the sampling rate values supported by encoder two are 16kHz, 32kHz and 48kHz; the quantization bit depth values supported by encoder two are 8 bits, 16 bits and 24 bits; the code rate values supported by encoder two are 200kbps, 300kbps, 400kbps and 600kbps; and the numbers of channels supported by encoder two are mono and two-channel.
  • the sample rate values supported by encoder three are 8kHz and 16kHz.
  • the quantization bit depth values supported by encoder three are 8 bits and 16 bits; the code rate values supported by encoder three are 200kbps and 300kbps; and the numbers of channels supported by encoder three are mono, two-channel, 2.1-channel, 5.1-channel and 7.1-channel.
  • according to the codec classification standard shown in Table 3, encoder one belongs to category one; encoder two belongs to category two and also to category three; and encoder three belongs to category three.
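The four-parameter category-one test of Table 3 can be sketched as a predicate that encoder one's capabilities above satisfy. The threshold values in the `FIRST` dict are assumptions chosen for illustration, not values fixed by this application.

```python
# Illustrative sketch of the Table 3 category-one membership test: each of the
# four parameters must have at least one supported value reaching the
# corresponding "first" threshold. Threshold values are assumed for this demo.

FIRST = {"sample_rate": 48_000, "bit_depth": 24, "code_rate": 600, "channels": 2}

def in_category_one(caps, first=FIRST):
    """caps: dict with lists 'sample_rates', 'bit_depths', 'code_rates', 'channels'."""
    return (max(caps["sample_rates"]) >= first["sample_rate"]
            and max(caps["bit_depths"]) >= first["bit_depth"]
            and max(caps["code_rates"]) >= first["code_rate"]
            and max(caps["channels"]) >= first["channels"])

# encoder one's capabilities as listed above (channel layouts given as counts)
encoder_one = {
    "sample_rates": [8_000, 16_000, 24_000, 32_000, 48_000, 96_000],
    "bit_depths": [16, 24, 32],
    "code_rates": [600, 900, 1200],   # kbps
    "channels": [1, 2, 3, 6],         # mono, two-channel, 2.1, 5.1
}
```

Analogous predicates with the second-threshold bounds give the category-two and category-three tests.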
  • the audio playback device 200 also has a decoder 1, a decoder 2, and a decoder 3.
  • the audio playback device 200 obtains the values of the sampling rates supported by all the decoders in the audio playback device 200, the values of the code rates supported by all the decoders, the values of the quantization bit depths supported by all the decoders, and the numbers of channels supported by all the decoders, and divides the decoders into the categories of the codec classification standard shown in Table 4. It should be noted that the same decoder can belong to multiple categories of the codec classification standard.
  • the method by which the audio playback device 200 divides the decoders into the categories shown in Table 4 according to the values of the sampling rates, code rates, quantization bit depths, and numbers of channels supported by all the decoders in the audio playback device 200 is the same as the method by which the electronic device 100 divides the encoders into the categories shown in Table 4 according to the values of the sampling rates, code rates, quantization bit depths, and numbers of channels supported by all the encoders in the electronic device 100, which will not be repeated in this application.
  • after the electronic device 100 divides its encoders into multiple categories and the audio playback device 200 divides its decoders into multiple categories, the electronic device 100 negotiates with the audio playback device 200 to obtain the common categories. After that, when the electronic device 100 and the audio playback device 200 transmit audio data, the electronic device 100 uses the default encoder of a category among the common categories to encode the audio data and sends the encoded audio data to the audio playback device 200, and the audio playback device 200 uses the default decoder of that category to decode the encoded audio data and then plays the audio data.
  • FIG. 8 exemplarily shows a schematic diagram of the electronic device 100 negotiating a common category with the audio playback device 200 .
  • the electronic device 100 establishes a communication connection with the audio playback device 200 .
  • the electronic device 100 may establish a communication connection with the audio playback device 200 through any one of Bluetooth, Wi-Fi Direct, local area network, and the like. How to establish a communication connection between the electronic device 100 and the audio playback device 200 will be described in detail later, and will not be repeated in this application. The embodiments of the present application are described by taking an example of establishing a communication connection between the electronic device 100 and the audio playback device 200 through the Bluetooth technology.
  • the audio playback device 200 classifies all the codecs into multiple categories according to the codec classification standard.
  • the audio playback device 200 first acquires the codec classification standard. It will be appreciated that the codec category standard is pre-existing in the audio playback device 200 .
  • the audio playback device 200 acquires the values of one or more parameters of all the decoders in the audio playback device 200 .
  • when the audio playback device 200 determines that the values of the one or more parameters of a decoder fall within the value ranges of the one or more parameters adopted by a category of the codec classification standard, the audio playback device 200 divides the decoder under that category.
  • the audio playback device 200 divides all the decoders in the audio playback device 200 into a plurality of categories. It should be noted that a decoder can be divided into multiple categories.
  • the audio playback device 200 records the identifiers of the decoders under each category. Exemplarily, when the category of the codec classification standard is category one, the identifiers of the decoders included in category one include decoder one and decoder two; when the category of the codec classification standard is category two, the identifiers of the decoders included in category two include decoder two and decoder three; when the category of the codec classification standard is category three, the identifiers of the decoders included in category three include decoder three; when the category of the codec classification standard is category four, the identifiers of the decoders included in category four are empty.
  • the audio playback device 200 acquires the decoder identifier under each category.
  • the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100 .
  • the audio playback device 200 After the audio playback device 200 obtains the decoder identifiers under each category, the audio playback device 200 only needs to send the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100 .
  • the audio playback device 200 may also send the category identifiers whose number of decoder identifiers is greater than or equal to 1, and the corresponding decoder identifiers under each category to the electronic device 100 .
  • the audio playback device 200 may also only send all the decoder identifiers and the values of one or more parameters corresponding to each decoder to the electronic device 100 .
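The sending variants above (category identifiers only, or category identifiers together with the decoder identifiers under each category) can be sketched as one helper. The message shape and field handling are assumptions for demonstration, not the application's wire format.

```python
# Illustrative sketch of what the audio playback device sends in this step.
# Only categories whose number of decoder identifiers is >= 1 are reported.

def categories_message(category_table, include_identifiers=False):
    """category_table: {category: [decoder identifiers]}."""
    nonempty = {c: ids for c, ids in category_table.items() if len(ids) >= 1}
    if include_identifiers:
        return nonempty        # categories plus their decoder identifiers
    return sorted(nonempty)    # category identifiers only
```

The third variant described above, sending every decoder identifier with its raw parameter values, would simply skip this filtering and let the electronic device classify the decoders itself.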
  • the electronic device 100 divides all the decoders in the audio playback device 200 into multiple categories according to the codec classification standard, that is, the electronic device 100 obtains the corresponding decoder identifiers under each category. Specifically, for the electronic device 100, the electronic device 100 will acquire the codec classification standard. It will be appreciated that the codec class standard is pre-existing in the electronic device 100.
  • the electronic device 100 determines that the value of one or more parameters of the decoder is within the value range of one or more parameters adopted by the codec classification standard, and the electronic device 100 classifies the decoder into one or more categories Down.
  • the electronic device 100 divides all the decoders in the audio playback device 200 into a plurality of categories. It should be noted that a decoder can be divided into multiple categories.
  • the electronic device 100 records the identifiers of the decoders under each category. Exemplarily, when the category of the codec classification standard is category one, the identifiers of the decoders included in category one include decoder one and decoder two; when the category of the codec classification standard is category two, the identifiers of the decoders included in category two include decoder two and decoder three; when the category of the codec classification standard is category three, the identifiers of the decoders included in category three include decoder three; when the category of the codec classification standard is category four, the identifiers of the decoders included in category four are empty.
  • the electronic device 100 can then acquire the category identifiers in which the number of decoder identifiers is greater than or equal to 1.
  • the electronic device 100 classifies all encoders into multiple categories according to the codec classification standard.
  • the electronic device 100 first obtains the codec classification standard. It will be appreciated that the codec classification standard is pre-stored in the electronic device 100.
  • the electronic device 100 acquires the values of one or more parameters of all encoders in the electronic device 100 .
  • when the electronic device 100 determines that the values of the one or more parameters of an encoder fall within the value ranges of the one or more parameters adopted by a category of the codec classification standard, the electronic device 100 divides the encoder under that category.
  • the electronic device 100 divides all encoders into multiple categories according to the above method. It should be noted that an encoder can be divided into multiple categories.
  • the electronic device 100 records the identifiers of the encoders under each category. Exemplarily, when the category of the codec classification standard is category one, the identifiers of the encoders included in category one include encoder one and encoder two; when the category of the codec classification standard is category two, the identifiers of the encoders included in category two include encoder two and encoder three; when the category of the codec classification standard is category three, the identifiers of the encoders included in category three include encoder three; when the category of the codec classification standard is category four, the identifiers of the encoders included in category four include encoder one and encoder four.
  • the electronic device 100 acquires the encoder identifier under each category.
  • S705-S706 may be executed before S702, which is not limited in this application.
  • the electronic device 100 confirms the common categories from the categories in which the number of encoder identifiers is greater than or equal to 1 and the categories in which the number of decoder identifiers is greater than or equal to 1.
  • the electronic device 100 receives the categories, sent by the audio playback device 200, in which the number of decoder identifiers is greater than or equal to 1.
  • the categories sent by the audio playback device 200 in which the number of decoder identifiers is greater than or equal to 1 may be category one, category two, category three, and category four.
  • the electronic device 100 After acquiring the encoder identifiers under each category, the electronic device 100 confirms the categories in which the number of encoder identifiers is greater than or equal to 1. Exemplarily, the categories for which the electronic device 100 determines that the number of encoder identifiers is greater than or equal to 1 may be category one, category two, and category four.
  • the electronic device 100 confirms the shared category from the categories with the number of encoder identifiers equal to or greater than 1 and the category with the number of decoder identifiers greater than or equal to 1.
  • the common categories are the intersection of the categories in which the number of encoder identifiers is greater than or equal to 1 and the categories in which the number of decoder identifiers is greater than or equal to 1.
  • a common category means that the electronic device 100 can transmit audio data with the audio playback device 200 through the encoder and decoder under this category.
  • the common categories may be category one, category two, and category four.
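The "common category" computation above is a set intersection; a minimal sketch, using the example categories from this paragraph:

```python
# Minimal sketch: the common categories are the intersection of the categories
# with >= 1 encoder identifier (on the electronic device) and the categories
# with >= 1 decoder identifier (reported by the audio playback device).

def common_categories(encoder_categories, decoder_categories):
    return sorted(set(encoder_categories) & set(decoder_categories))
```

With encoder categories {one, two, four} and decoder categories {one, two, three, four}, the intersection is {one, two, four}, matching the example above.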
  • the electronic device 100 determines the default encoder identifier in the shared category.
  • when the number of encoder identifiers under a common category is 1, the electronic device 100 confirms that the encoder identifier under this category is the default encoder identifier; when the number of decoder identifiers under a common category is 1, the electronic device 100 confirms that the decoder identifier under this category is the default decoder identifier.
  • exemplarily, when the category of the codec classification standard is category three, the identifiers of the encoders included in category three include encoder three, and the identifiers of the decoders included in category three include decoder three. Because category three includes only one encoder identifier, the electronic device 100 determines that encoder three is the default encoder under category three. Because category three includes only one decoder identifier, the electronic device 100 determines that decoder three is the default decoder under category three.
  • when the number of encoder identifiers or decoder identifiers under a common category is greater than 1, the electronic device 100 will confirm a default encoder identifier from the more than one encoder identifiers according to a preset rule, or the electronic device 100 will confirm a default decoder identifier from the more than one decoder identifiers according to a preset rule.
  • the preset rules may be priority rules, low power rules, high efficiency rules, and so on.
  • the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the priority rule, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the priority rule.
  • Table 5 exemplarily shows the priority ranking of encoders and decoders.
  • the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the priority ranking of the codecs shown in Table 5, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to that priority ranking.
  • the identifiers of the decoders included in category one include decoder one and decoder two, and the identifiers of the encoders included in category one include encoder one and encoder two. Since the priority ranking of encoder one is higher than that of encoder two, the electronic device 100 determines that encoder one is the default encoder of category one. Since the priority ranking of decoder one is higher than that of decoder two, the electronic device 100 determines that decoder one is the default decoder of category one.
  • the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the low power rule, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the low power rule.
  • Table 6 exemplarily shows the power ranking of the encoder and the decoder.
  • the power rankings of the encoders and decoders may be industry-common values, or may be set by developers; the power ranking of the encoders and decoders is not limited in this application.
  • the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the power of the encoders shown in Table 6 ranked from low to high, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the power of the decoders ranked from low to high.
  • the identifiers of the decoders included in category one include decoder one and decoder two, and the identifiers of the encoders included in category one include encoder one and encoder two. Since the power of encoder one is lower than the power of encoder two, the electronic device 100 determines that encoder one is the default encoder of category one. Since the power of decoder one is lower than the power of decoder two, the electronic device 100 determines that decoder one is the default decoder of category one.
  • the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the high efficiency rule, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the high efficiency rule.
  • Table 7 exemplarily shows the efficiency ranking of the encoder and the decoder.
  • the efficiency rankings of the encoders and decoders may be industry-common values, or may be set by the developer; the present application does not limit the efficiency of the codecs.
  • the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the efficiency ranking of the encoders shown in Table 7, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the efficiency ranking of the decoders.
  • the identifiers of the decoders included in category one include decoder one and decoder two, and the identifiers of the encoders included in category one include encoder one and encoder two. Since the efficiency of encoder one is higher than that of encoder two, the electronic device 100 determines that encoder one is the default encoder of category one. Since the efficiency of decoder one is higher than that of decoder two, the electronic device 100 determines that decoder one is the default decoder of category one.
  • the electronic device 100 can also confirm a default encoder identifier from more than one encoder identifiers according to other rules, or confirm a default decoder identifier from more than one decoder identifiers, which is not limited in this application.
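The default-selection logic described above (a single identifier in a category is the default; otherwise pick by priority, lowest power, or highest efficiency per Tables 5 to 7) can be sketched with one helper. The score tables below are hypothetical example values, not taken from the tables.

```python
# Illustrative sketch of choosing a default codec identifier within a category.
# The score dicts stand in for Tables 5-7; their values are assumed examples.

def pick_default(identifiers, score, prefer_low=True):
    """Return the only identifier, or the best one under the given ranking."""
    ids = list(identifiers)
    if len(ids) == 1:          # only one codec in the category: it is the default
        return ids[0]
    best = min if prefer_low else max
    return best(ids, key=lambda i: score[i])

PRIORITY = {"encoder one": 1, "encoder two": 2}        # lower rank = higher priority
POWER = {"encoder one": 10, "encoder two": 15}         # e.g. milliwatts; pick lowest
EFFICIENCY = {"encoder one": 0.9, "encoder two": 0.7}  # pick highest
```

The same helper applies to decoder identifiers on either device, since the priority, low power, and high efficiency rules are described identically for both sides.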
  • the electronic device 100 determines the default encoder identifier in the common category, and the electronic device 100 also needs to determine the default decoder identifier in the common category.
  • in some embodiments, the electronic device 100 divides all the decoders in the audio playback device 200 into a plurality of categories according to the codec classification standard, and then the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200. Or, the audio playback device 200 divides all the decoders in the audio playback device 200 into multiple categories according to the codec classification standard and sends the categories in which the number of decoder identifiers is greater than or equal to 1, together with the corresponding decoder identifiers under each category, to the electronic device 100; the electronic device 100 then confirms the categories shared by the electronic device 100 and the audio playback device 200. After the electronic device 100 confirms the common categories, it confirms a default encoder identifier under each common category; the electronic device 100 also needs to confirm a default decoder identifier under each common category. Specifically, when the number of corresponding decoder identifiers in a shared category is 1, the electronic device 100 confirms that the corresponding decoder identifier under this category is the default decoder identifier. When the number of corresponding decoder identifiers in a shared category is greater than 1, the electronic device 100 can adopt the embodiments shown in Table 5 to Table 7 to confirm the default decoder identifier under this category, which this application will not repeat here.
  • the electronic device 100 sends the shared category identifier to the audio playback device 200 .
  • the electronic device 100 After the electronic device 100 confirms the shared category, the electronic device 100 sends the shared category identifier to the audio playback device 200 .
  • the audio playback device 200 receives the shared category identifier sent by the electronic device 100 .
  • the audio playback device 200 also needs to confirm the default decoder identifier in each common category.
  • when the number of decoder identifiers under a common category is 1, the audio playback device 200 confirms that the decoder identifier in this category is the default decoder identifier.
  • exemplarily, the identifiers of the decoders included in category three include decoder three. Because category three includes only one decoder identifier, the audio playback device 200 confirms that decoder three is the default decoder under category three.
  • when the number of decoder identifiers under a common category is greater than 1, the audio playback device 200 will confirm a default decoder identifier from the more than one decoder identifiers according to a preset rule.
  • the preset rules may be priority rules, low power rules, high efficiency rules, and so on.
  • The method for the audio playback device 200 to confirm a default decoder identifier from multiple decoder identifiers according to the priority rule, the low-power rule, or the high-efficiency rule is the same as the method used by the aforementioned electronic device 100, and is not repeated in this application.
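  • The single-identifier case and the priority rule described above can be sketched as follows; the decoder names and the priority table are hypothetical, not taken from this application.

```python
# Hypothetical sketch of confirming a default decoder identifier under one
# shared category: a single identifier is the default outright; otherwise a
# preset priority table decides (smaller rank number = higher priority).

def pick_default(decoder_ids, priority):
    """Return the default decoder id for one category."""
    if len(decoder_ids) == 1:
        return decoder_ids[0]                    # only one candidate: it wins
    return min(decoder_ids, key=lambda d: priority.get(d, float("inf")))

priority = {"decoder_two": 1, "decoder_three": 2}    # assumed priority table
print(pick_default(["decoder_two", "decoder_three"], priority))
print(pick_default(["decoder_one"], priority))
```

A low-power or high-efficiency rule would only swap the key function; the surrounding selection logic stays the same.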
  • S809 may also be executed after S802, which is not limited in this application.
  • Optionally, when the electronic device 100 confirms the default decoder identifier under each shared category, the electronic device 100 also sends the default decoder identifier under each shared category to the audio playback device 200 when sending the shared category identifiers.
  • the electronic device 100 may also send the common category identifier and the default decoder identifier and default encoder identifier under each category to the audio playback device 200 .
  • After the electronic device 100 and the audio playback device 200 have confirmed the shared categories and the default codec under each shared category, the electronic device 100 selects an appropriate category from the shared categories according to the application type, the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on, and performs audio data transmission with the default codec under that category.
  • The following describes how the electronic device 100 selects an appropriate category from the shared categories according to the application type, the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on.
  • Application type: in the electronic device 100, different types of application programs that play audio have different requirements for the characteristics of the audio being played.
  • The electronic device 100 obtains the minimum sampling rate, the minimum quantization bit depth, and the number of channels of the audio data required by the application program that plays the audio, and selects an appropriate category from the shared categories according to these requirements. For example, some applications have relatively high requirements on sound quality, that is, relatively high requirements on the values of the sampling rate and the quantization bit depth of the audio data. For example, a general application program that plays audio sets the sampling rate of the audio data at 32 kHz and sets the quantization bit depth of the audio data at 16 bits.
  • However, some preset applications with relatively high sound quality requirements require that the sampling rate of the audio data be at least 48 kHz and that the quantization bit depth of the audio data be at least 24 bits.
  • In this case, the electronic device 100 may select a category whose sampling rate values include 48 kHz and whose quantization bit depth values include 24 bits, and use the default codec under this category for audio data transmission.
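  • The selection by minimum sampling rate and minimum quantization bit depth can be sketched as follows; the category capability data here is assumed for illustration.

```python
# Illustrative sketch (category data assumed, not from this application) of
# picking a shared category that covers an application's minimum sampling
# rate and minimum quantization bit depth.

categories = {
    "category_one": {"sample_rates": {48000, 96000}, "bit_depths": {24, 32}},
    "category_two": {"sample_rates": {32000, 44100}, "bit_depths": {16}},
}

def select_category(min_rate_hz, min_depth_bits):
    """Return the first category able to satisfy both minimum requirements."""
    for name, caps in categories.items():
        if max(caps["sample_rates"]) >= min_rate_hz and \
           max(caps["bit_depths"]) >= min_depth_bits:
            return name
    return None                      # no shared category meets the requirements

# an app demanding at least 48 kHz / 24-bit audio gets the high-quality category
print(select_category(48000, 24))
```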
  • Playing audio characteristics: in the electronic device 100, different audio data that can be played by an application program may have different characteristics.
  • The electronic device 100 obtains the minimum sampling rate, the minimum quantization bit depth, and the number of channels of the audio data being played, and selects an appropriate category from the shared categories accordingly. For example, some audio data have relatively high requirements on sound quality, that is, relatively high values of the sampling rate and the quantization bit depth. For example, the sampling rate of general audio data is 32 kHz, and the quantization bit depth is 16 bits.
  • the minimum value of the sampling rate of some preset audio data with relatively high sound quality is 48 kHz
  • the minimum value of the quantization bit depth of the audio data is 24 bits.
  • In this case, the electronic device 100 may select a category whose sampling rate values include 48 kHz and whose quantization bit depth values include 24 bits, and use the default codec under this category for audio data transmission.
  • Audio rendering capability of the electronic device 100: suppose the value of the sampling rate of the audio currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
  • If the audio rendering capability of the electronic device 100 is enabled, the rendering unit of the electronic device 100 can increase the value of the sampling rate of the audio data from sampling rate one to sampling rate two, increase the value of the quantization bit depth of the audio data from quantization bit depth one to quantization bit depth two, and increase the value of the number of channels of the audio data from channel number one to channel number two.
  • Here, sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and channel number two is greater than channel number one.
  • In this case, the value of the sampling rate of the audio data of the electronic device 100 is sampling rate two, the value of the quantization bit depth is quantization bit depth two, the value of the code rate is code rate one, and the value of the number of channels is channel number two. The electronic device 100 selects, from the shared categories, a category whose sampling rate values include sampling rate two, whose quantization bit depth values include quantization bit depth two, whose code rate values include code rate one, and whose channel number values include channel number two, and uses the default codec under this category for audio data transmission.
  • Network conditions of the channel: suppose the value of the sampling rate of the audio data currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the number of channels is channel number one, and the value of the code rate is code rate one. According to these values, the electronic device 100 selects, from the shared categories, a category whose sampling rate values include sampling rate one, whose quantization bit depth values include quantization bit depth one, whose channel number values include channel number one, and whose code rate values include code rate one, and uses the default codec under this category for audio data transmission.
  • It should be noted that the electronic device 100 may also select an appropriate category from the shared categories according to other parameters, and is not limited to those listed in the foregoing embodiment, namely the application type, the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on, which is not repeated in this application.
  • After the electronic device 100 and the audio playback device 200 have selected an appropriate category from the shared categories and used the default codec under that category for audio data transmission, factors such as a change of the application type, a change of the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), the audio rendering capability of the electronic device being turned on, or a change of the network conditions of the channel may cause the electronic device 100 to re-select another category and notify the audio playback device 200 of the identifier of that category.
  • the electronic device 100 and the audio playback device 200 use the default codec under another category to transmit audio data.
  • The application type is changed from application type one to application type two: in the electronic device 100, different types of application programs that play audio have different requirements for the characteristics of the audio being played.
  • Suppose that when application type one plays audio data, the value of the sampling rate of the audio data is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
  • When the electronic device 100 switches the application program that plays audio data from application program one to application program two, and application program two has relatively high requirements on the sound quality of the audio data, then when application type two plays audio data, the value of the sampling rate of the audio data is sampling rate two, the value of the quantization bit depth is quantization bit depth two, the value of the code rate is code rate two, and the value of the number of channels is channel number one, where sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and code rate two is greater than code rate one. Due to this parameter change, the electronic device 100 will re-select the codec category.
  • The electronic device 100 selects the default codec under a category whose sampling rate values include sampling rate two, whose quantization bit depth values include quantization bit depth two, whose code rate values include code rate two, and whose channel number values include channel number one, and uses it with the audio playback device for the transmission of audio data.
  • The audio content is switched from audio data one to audio data two: in the electronic device 100, different audio data that can be played by an application program may have different characteristics.
  • Suppose the value of the sampling rate of audio data one is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
  • When the electronic device 100 switches the audio content being played from audio data one to audio data two, and the sound quality requirement of audio data two is relatively high, then when the electronic device 100 plays audio data two, the value of the sampling rate of audio data two is sampling rate two, the value of the quantization bit depth is quantization bit depth two, the value of the code rate is code rate two, and the value of the number of channels is channel number one, where sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and code rate two is greater than code rate one. Due to this parameter change, the electronic device 100 will re-select the codec category.
  • The electronic device 100 selects the default codec under a category whose sampling rate values include sampling rate two, whose quantization bit depth values include quantization bit depth two, whose code rate values include code rate two, and whose channel number values include channel number one, and uses it with the audio playback device for the transmission of audio data.
  • The audio rendering capability of the electronic device is turned from off to on: suppose the value of the sampling rate of the audio currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
  • When the audio rendering capability is turned on, the rendering unit of the electronic device 100 can increase the value of the sampling rate of the audio data from sampling rate one to sampling rate two, increase the value of the quantization bit depth of the audio data from quantization bit depth one to quantization bit depth two, and increase the value of the number of channels of the audio data from channel number one to channel number two.
  • Here, sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and channel number two is greater than channel number one. Due to this parameter change, the electronic device 100 will re-select the codec category.
  • The electronic device 100 selects a category whose sampling rate values include sampling rate two, whose quantization bit depth values include quantization bit depth two, whose code rate values include code rate one, and whose channel number values include channel number two, and uses the default codec under this category for the transmission of audio data.
  • The network conditions of the channel change: suppose the value of the sampling rate of the audio data currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the number of channels is channel number one, and the value of the code rate is code rate one. Due to strong interference and attenuation on the wireless transmission channel, the code rate supported by the wireless transmission channel is reduced from code rate one to code rate two, where code rate two is smaller than code rate one. Due to this parameter change, the electronic device 100 will re-select the codec category. The electronic device 100 selects a category whose sampling rate values include sampling rate one, whose quantization bit depth values include quantization bit depth one, whose code rate values include code rate two, and whose channel number values include channel number one, and uses the default codec under this category for the transmission of audio data.
  • After the electronic device 100 re-selects the default codec under another category, the electronic device 100 and the audio playback device 200 use the default codec under the other category to transmit audio data.
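  • The re-selection triggered by a change of conditions can be sketched as follows; the 600 kbps threshold and the category names are assumptions for illustration.

```python
# Minimal sketch (threshold and names assumed) of re-selecting the category
# when the code rate supported by the wireless transmission channel changes.

def reselect_category(current, channel_kbps, hq_min_kbps=600):
    """Return the category to use given the channel-supported code rate."""
    if current == "category_one" and channel_kbps < hq_min_kbps:
        return "category_two"        # fall back to the standard-quality category
    if current == "category_two" and channel_kbps >= hq_min_kbps:
        return "category_one"        # channel recovered: return to high quality
    return current                   # no change of conditions, keep the category

print(reselect_category("category_one", 400))   # channel degraded
print(reselect_category("category_two", 800))   # channel recovered
```

In the embodiment, the same kind of check would run whenever the application type, the playing audio characteristics, or the rendering capability changes, not only the channel code rate.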
  • the embodiments of the present application may adopt the following methods to achieve smooth transition during codec switching.
  • Suppose category one corresponds to default encoder one and default decoder one, and category two corresponds to default encoder two and default decoder two. Then, when switching from category one to category two, the electronic device 100 will switch from encoder one to encoder two, and the audio playback device 200 will switch from decoder one to decoder two.
  • The electronic device 100 needs to complete the switch from encoder one to encoder two within one frame of audio data.
  • The frame of audio data during the transition from encoder one to encoder two is referred to as the i-th frame of audio data.
  • Encoder one encodes the audio data of the i-th frame to obtain packet A (data packet A).
  • Encoder two encodes the audio data of the i-th frame to obtain packet B (data packet B).
  • the electronic device 100 transmits packet A and packet B to the audio playback device 200.
  • The audio playback device 200 uses decoder one to decode packet A to obtain the audio data pcmA; the audio playback device 200 also uses decoder two to decode packet B to obtain the audio data pcmB. Then, the audio playback device 200 performs smoothing processing on the i-th frame of audio data, and the smoothing process is shown in formula (1): Pcm(i) = wi × pcmA(i) + (1 − wi) × pcmB(i).
  • Pcm(i) represents the ith frame of audio data after smoothing
  • wi represents a smoothing coefficient
  • wi can be linear smoothing or cos smoothing, and so on.
  • the value range of wi is between 0 and 1. The smaller the smoothing coefficient wi, the stronger the smoothing effect, and the smaller the adjustment to the prediction result; the larger the smoothing coefficient wi, the weaker the smoothing effect, and the greater the adjustment to the prediction result.
  • pcmA(i) represents the audio data obtained by the decoder 1 decoding packet A
  • pcmB(i) represents the audio data obtained by the decoder 2 decoding packet B.
  • the audio playback device 200 can obtain the audio data frame Pcm(i) after smoothing the audio data of the ith frame. In this way, the audio playback device 200 plays the audio data frame after the audio data of the ith frame is smoothed, so that the audio data frame in the codec switching process can be smoothly transitioned.
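  • The frame smoothing of formula (1) can be sketched as follows; the exact shape of the weight ramp wi is an assumption consistent with the linear and cosine smoothing options mentioned above.

```python
import math

# Sketch of the per-frame smoothing of formula (1): the i-th frame is a
# weighted mix of the old decoder's output pcmA and the new decoder's
# output pcmB, with the weight wi ramping from 1 down to 0 across the frame.

def smooth_frame(pcmA, pcmB, mode="linear"):
    """Cross-mix two decoded versions of the same frame, sample by sample."""
    n = len(pcmA)
    out = []
    for k in range(n):
        t = k / (n - 1)                                   # 0 .. 1 across frame
        w = 1 - t if mode == "linear" else 0.5 * (1 + math.cos(math.pi * t))
        out.append(w * pcmA[k] + (1 - w) * pcmB[k])       # Pcm = w*A + (1-w)*B
    return out

mixed = smooth_frame([1.0] * 5, [0.0] * 5)
print(mixed)   # fades from the old decoder's samples to the new decoder's
```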
  • Here, max indicates the operation of taking the maximum value, % indicates the operation of taking the remainder, and the frame length indicates the duration of audio data that the encoder encodes into one frame; a frame covering this duration of audio data has the frame length.
  • Each frame of audio data yields two data packets, namely packet A (data packet A) and packet B (data packet B).
  • the electronic device 100 transmits packet A and packet B to the audio playback device 200.
  • The audio playback device receives packet A and packet B, uses decoder one to decode packet A to obtain audio data pcmA, and uses decoder two to decode packet B to obtain audio data pcmB.
  • The first D-1 audio data frames are still the audio data decoded by decoder one. The audio playback device 200 performs smoothing on the D-th audio data frame, and the smoothing process is shown in formula (3): Pcm(i) = wi × pcmA(i) + (1 − wi) × pcmB(i).
  • Pcm(i) represents the D-th audio data frame after smoothing
  • wi represents a smoothing coefficient
  • wi can be linear smoothing or cos smoothing, and so on.
  • the value range of wi is between 0 and 1. The smaller the smoothing coefficient wi, the stronger the smoothing effect, and the smaller the adjustment to the prediction result; the larger the smoothing coefficient wi, the weaker the smoothing effect, and the greater the adjustment to the prediction result.
  • pcmA(i) represents the audio data obtained by the decoder 1 decoding the D th audio data frame
  • pcmB(i) represents the audio data decoded by the decoder 2 on the D th audio data frame.
  • the audio playback device 200 can obtain the audio data frame Pcm(i) after smoothing the D-th audio data frame. In this way, the audio playback device 200 plays the first D-1 audio data frames and the audio data frames after the audio data of the D th frame is smoothed, so that the audio data frames in the codec switching process can be smoothly transitioned.
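  • The delayed switch with smoothing of the D-th frame can be sketched as follows; the 50/50 mixing weight stands in for the smoothing coefficient wi of formula (3) and is an assumption.

```python
# Illustrative sketch of the second switching method: frames 1..D-1 keep
# decoder one's output, frame D is cross-mixed with the new decoder's
# output, and frames after D come from decoder two.

def switch_with_delay(frames_a, frames_b, d, w=0.5):
    """frames_a: frames decoded by decoder one; frames_b: by decoder two."""
    out = list(frames_a[:d - 1])                  # frames 1 .. D-1: old decoder
    mixed = [w * a + (1 - w) * b                  # frame D: weighted mix
             for a, b in zip(frames_a[d - 1], frames_b[d - 1])]
    out.append(mixed)
    out.extend(frames_b[d:])                      # after frame D: new decoder
    return out

a = [[1.0, 1.0]] * 4                              # decoder one's frames
b = [[0.0, 0.0]] * 4                              # decoder two's frames
print(switch_with_delay(a, b, d=3))
```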
  • FIG. 9 is a flowchart of a codec negotiation and switching method provided by an embodiment of the present application.
  • the electronic device 100 establishes a communication connection with the audio playback device 200 .
  • the electronic device 100 may establish a communication connection with the audio playback device 200 through one or more of Bluetooth, Wi-Fi Direct, and NFC.
  • the embodiments of the present application are described by taking an example of establishing a communication connection between the electronic device 100 and the audio playback device 200 through the Bluetooth technology.
  • the following describes how to establish a communication connection between the electronic device 100 and the audio playback device 200 in detail with reference to the UI diagram.
  • FIGS. 9A-9C exemplarily show UI diagrams for establishing a communication connection between the electronic device 100 and the audio playback device 200 through Bluetooth.
  • the electronic device 100 may also establish a communication connection with the audio playback device 200 through one or more of Wi-Fi Direct and NFC.
  • FIG. 9A shows an example audio playback user interface 600 on electronic device 100 .
  • the audio playback interface 600 includes a music name 601, a playback control 602, a previous control 603, a next control 604, a playback progress bar 605, a download control 606, a sharing control 607, a more button 608, and so on.
  • the music name 601 may be "Dream it possible".
  • the play control 602 is used to trigger the terminal 100 to play the audio data corresponding to the music name 601 .
  • the previous control 603 can be used to trigger the electronic device 100 to switch to the previous audio data in the playlist for playback.
  • the next track control 604 can be used to trigger the electronic device 100 to switch to the next audio data in the playlist to play.
  • the playback progress bar 605 can be used to indicate the playback progress of the current audio data.
  • the download control 606 can be used to trigger the electronic device 100 to download and save the audio data of the music title 601 to a local storage medium.
  • the sharing control 607 can be used to trigger the electronic device 100 to share the playback link of the audio data corresponding to the music name 601 to other applications.
  • the more controls 608 may be used to trigger the electronic device 100 to display more functional controls related to music playback.
  • the electronic device 100 can also play audio data played by video applications, audio data played by game applications, and audio data of real-time calls, etc.
  • the source of the audio data played by the electronic device 100 is not limited in this application.
  • the electronic device 100 when the electronic device 100 detects the downward swipe gesture on the display screen, in response to the swipe gesture, the electronic device 100 displays the window 610 shown in FIG. 9C on the user interface 20 .
  • a Bluetooth control 611 may be displayed in the window 610 , and the Bluetooth control 611 may receive an operation (eg, touch operation, click operation) for turning on/off the Bluetooth function of the electronic device 100 .
  • the representation of the Bluetooth control 611 may include icons and/or text (eg, the text "Co-cast").
  • the window 610 can also display switch controls for other functions such as Wi-Fi, hotspot, flashlight, ringing, auto-rotate, instant sharing, airplane mode, mobile data, location information, screen capture, eye protection mode, screen recording, collaborative screencasting, and NFC.
  • the electronic device 100 can change the display form of the Bluetooth control 611, such as adding a shadow to the Bluetooth control 611.
  • the user may also input a downward swipe gesture on other interfaces to trigger the electronic device 100 to display the window 610 .
  • the user operation of enabling the Bluetooth function can also be implemented in other forms, which is not limited in the embodiments of the present application.
  • For example, the electronic device 100 may also display a setting interface provided by a settings application, the setting interface may include a control for turning on/off the Bluetooth function of the electronic device 100, and the user can input a user operation on the control to turn on the Bluetooth function of the electronic device 100.
  • the electronic device 100 After detecting the user operation to enable the Bluetooth function, the electronic device 100 discovers other electronic devices with the Bluetooth function enabled near the electronic device 100 through Bluetooth. For example, the electronic device 100 can discover and connect to the nearby audio playback device 200 and other electronic devices through Bluetooth.
  • the electronic device 100 determines whether a connection is established with the audio playback device 200 for the first time. If the electronic device 100 establishes a connection with the audio playback device 200 for the first time, the electronic device 100 executes S903; otherwise, the electronic device 100 executes S907.
  • the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100 .
  • the audio playback device 200 Before the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100 , the audio playback device 200 divides all the decoders into multiple categories according to the codec classification standard. How the audio playback device 200 classifies all decoders into multiple categories according to the codec classification standard has been described in detail in the embodiment shown in S702, and will not be repeated in this application.
  • the audio playback device 200 sends the identification of the first category and the identification of the second category to the electronic device 100, and the electronic device 100 receives the identification of the first category and the identification of the second category sent by the audio playback device 200; or, The audio playback device 200 sends the identifier of the first category to the electronic device 100 , and the electronic device 100 receives the identifier of the first category sent by the audio playback device 200 .
  • the decoders in the first category include at least the first decoder
  • the decoders in the second category include at least the second decoder.
  • Before the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100, the electronic device 100 classifies the first encoder into the first category based on the parameter information of the first encoder and the codec classification standard, and classifies the second encoder into the second category based on the parameter information of the second encoder and the codec classification standard, where the parameter information of the first encoder and the parameter information of the second encoder include one or more of the sampling rate, the code rate, the quantization bit depth, the number of channels, and the audio stream format. The audio playback device is also used to classify the first decoder into the first category based on the parameter information of the first decoder and the codec classification standard, and to classify the second decoder into the second category based on the parameter information of the second decoder and the codec classification standard, where the parameter information of the first decoder and the parameter information of the second decoder include one or more of the sampling rate, the code rate, the quantization bit depth, the number of channels, and the audio stream format. Here, the codec classification standard includes the mapping relationship between codec categories and codec parameter information. It should be noted that the parameter information of the first encoder, the parameter information of the second encoder, the parameter information of the first decoder, and the parameter information of the second decoder all refer to the same kinds of parameters.
  • the electronic device 100 confirms a common category from the categories with the number of encoder identifiers greater than or equal to 1 and the category with the number of decoder identifiers greater than or equal to 1.
  • the electronic device 100 receives the category identifiers with the number of decoder identifiers greater than or equal to 1 sent by the audio playback device 200.
  • the electronic device 100 confirms that the shared categories of the electronic device and the audio playback device are the first category and the second category.
  • When the electronic device has not received the identifier of the second category sent by the audio playback device, or the number of encoders classified into the second category by the electronic device is 0, the electronic device 100 confirms that the shared category of the electronic device and the audio playback device is the first category.
  • the electronic device 100 can perform audio data transmission with the audio playback device 200 through the codec under this category.
  • the electronic device 100 may also classify all encoders in the electronic device 100 into multiple categories according to a preset codec classification standard after the connection is established.
  • Optionally, the audio playback device 200 may also classify all the decoders in the audio playback device 200 into multiple categories according to the preset codec classification standard, and confirm the categories whose number of decoder identifiers is greater than or equal to 1. This is not limited in this application.
  • Optionally, the audio playback device 200 sends all the decoder identifiers in the audio playback device 200 and the values of one or more parameters corresponding to each decoder to the electronic device 100.
  • In this case, the electronic device 100 divides all the encoders in the electronic device 100 and all the decoders in the audio playback device 200 into multiple categories according to the codec classification standard, and confirms the categories whose number of encoder identifiers is greater than or equal to 1 and the categories whose number of decoder identifiers is greater than or equal to 1. This is not limited in this application.
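  • Confirming the shared categories amounts to intersecting the categories populated on both sides, which can be sketched as follows with illustrative identifiers.

```python
# Sketch (all identifiers illustrative) of confirming the shared categories:
# keep every category that has at least one encoder on the electronic device
# and at least one decoder on the audio playback device.

encoder_categories = {"category_one": ["encoder_one"],
                      "category_two": ["encoder_two", "encoder_three"]}
decoder_categories = {"category_one": ["decoder_one"],
                      "category_two": ["decoder_two", "decoder_three"],
                      "category_three": ["decoder_four"]}

shared = sorted(c for c in encoder_categories
                if c in decoder_categories and decoder_categories[c])
print(shared)   # both sides support category one and category two
```

category_three drops out because the electronic device has no encoder in it, mirroring the zero-encoder case described above.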
  • the codec classification standard can be obtained according to one or a combination of two or more parameters such as sampling rate, quantization bit depth, code rate, number of channels, and audio stream format.
  • For example, when the codec classification standard is obtained according to the sampling rate, the quantization bit depth, the code rate, the number of channels, and the audio stream format, the codecs can be divided into two categories.
  • When the value of the sampling rate of the codec is greater than or equal to the first sampling rate (target sampling rate), the value of the code rate of the codec is greater than or equal to the first code rate (target code rate), the value of the quantization bit depth of the codec is greater than or equal to the first quantization bit depth (target quantization bit depth), the value of the number of channels of the codec is greater than or equal to the first channel number (target channel number), and the audio stream format is PCM (target audio stream format), the codec is divided into category one. When the value of the sampling rate of the codec is less than the first sampling rate, the value of the code rate of the codec is less than the first code rate, the value of the quantization bit depth of the codec is less than the first quantization bit depth, the value of the number of channels of the codec is greater than or equal to the first channel number, and the audio stream format is PCM, the codec is divided into category two.
  • the first sampling rate is 48 kHz
  • the first code rate is 600 kbps
  • the first quantization bit depth is 24 bits
  • the first channel number is 2
  • the audio stream format is PCM.
  • Then, the codec classification criteria for category one are: the sampling rate of the codec is greater than or equal to 48 kHz, the code rate of the codec is greater than or equal to 600 kbps, the quantization bit depth of the codec is greater than or equal to 24 bits, the number of channels of the codec is greater than or equal to 2, and the audio stream format is PCM.
  • The codec classification criteria for category two are: the sampling rate of the codec is less than 48 kHz, the code rate of the codec is less than 600 kbps, the quantization bit depth of the codec is less than 24 bits, the number of channels of the codec is greater than or equal to 2, and the audio stream format is PCM.
  • Codecs classified into category one may be referred to as high-definition sound quality codecs, and codecs classified into category two may be referred to as standard sound quality codecs.
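The classification criteria above can be sketched as a small predicate. This is an illustrative sketch only, not the patent's implementation: the function name, field names, and the strict "less than" reading of the category-two thresholds are assumptions taken from the criteria listed above.

```python
# Hypothetical sketch of the preset codec classification standard described
# above; all names and thresholds mirror the text, not any real product API.
CATEGORY_ONE_THRESHOLDS = {
    "sampling_rate_hz": 48_000,   # first sampling rate
    "code_rate_kbps": 600,        # first code rate
    "bit_depth": 24,              # first quantization bit depth
    "channels": 2,                # first channel number
}

def classify_codec(sampling_rate_hz, code_rate_kbps, bit_depth, channels, stream_format):
    """Return 1 (high-definition sound quality) or 2 (standard sound quality),
    or None if the codec matches neither preset category."""
    t = CATEGORY_ONE_THRESHOLDS
    # Both categories require PCM stream format and at least 2 channels.
    if stream_format != "PCM" or channels < t["channels"]:
        return None
    if (sampling_rate_hz >= t["sampling_rate_hz"]
            and code_rate_kbps >= t["code_rate_kbps"]
            and bit_depth >= t["bit_depth"]):
        return 1
    if (sampling_rate_hz < t["sampling_rate_hz"]
            and code_rate_kbps < t["code_rate_kbps"]
            and bit_depth < t["bit_depth"]):
        return 2
    # Mixed parameters (e.g. high sampling rate but low bit depth) fall
    # outside both preset categories under this reading of the criteria.
    return None
```

A codec at exactly 48kHz/600kbps/24-bit/2-channel PCM lands in category one; one at 44.1kHz/320kbps/16-bit lands in category two.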
  • the audio playback device 200 includes decoder one, decoder two, and decoder three, and decoder one belongs to category one, and decoder two and decoder three belong to category two.
  • the electronic device 100 includes encoder one, encoder two, and encoder three, and encoder one belongs to category one, and encoder two and encoder three belong to category two.
  • the categories shared by the electronic device 100 and the audio playback device 200 include category one and category two.
  • the electronic device 100 confirms a default encoder identifier and a default decoder identifier in each common category.
  • the electronic device 100 and the audio playback device 200 can transmit audio data through the codecs classified in the shared category.
  • the electronic device 100 needs to confirm a default encoder ID and a default decoder ID under each shared category. After that, the electronic device 100 and the audio playback device 200 will use a default encoder and a default decoder under each common category to transmit audio data.
  • when a shared category includes only one encoder and one decoder, the electronic device 100 confirms that the encoder under this category is the default encoder, and that the identifier of the decoder under this category is the default decoder identifier.
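The negotiation step above, where the shared categories are the intersection of the sender's encoder categories and the receiver's decoder categories and a single-codec category needs no further choice, can be sketched as follows. All names here are hypothetical; the dict shapes are assumptions for illustration.

```python
# Illustrative sketch: the shared categories are the intersection of the
# categories of the electronic device's encoders and the playback device's
# decoders; a category with exactly one codec on each side is its own default.
def shared_categories(encoder_categories, decoder_categories):
    """Each argument maps a category id to the list of codec identifiers
    that the respective device holds in that category."""
    return sorted(set(encoder_categories) & set(decoder_categories))

def single_codec_defaults(encoder_categories, decoder_categories):
    """For every shared category holding exactly one encoder and one
    decoder, confirm that pair as the default without applying any rule."""
    defaults = {}
    for cat in shared_categories(encoder_categories, decoder_categories):
        encs = encoder_categories[cat]
        decs = decoder_categories[cat]
        if len(encs) == 1 and len(decs) == 1:
            defaults[cat] = (encs[0], decs[0])
    return defaults
```

With encoder one in category one and encoders two and three in category two on the sender, and decoders one and two on the receiver, only category one resolves automatically; category two still needs a preset rule.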
  • the codecs classified into category 1 are high-definition audio codecs.
  • the encoders in category one include encoder one (first encoder), and the decoders in category one include decoder one (first decoder). Because category one includes only one encoder and one decoder, the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder under this category. When the electronic device 100 confirms that the codec in category one is used to transmit the audio data, the electronic device 100 sends the adopted category identifier (the identifier of category one) to the audio playback device 200.
  • the electronic device 100 uses the encoder in category one to encode the audio data into first encoded audio data and sends it to the audio playback device 200, and the audio playback device 200 uses the decoder in category one to decode the first encoded audio data into the first playback audio data, and plays the first playback audio data.
  • the codecs classified into category 2 are basic audio codecs.
  • the encoders in category two include encoder two (second encoder), and the decoders in category two include decoder two (second decoder). Because category two includes only one encoder and one decoder, the electronic device 100 confirms that encoder two and decoder two are the default encoder and the default decoder under category two. When the electronic device 100 confirms that the codec in category two is used to transmit the audio data, the electronic device 100 sends the adopted category identifier (the identifier of category two) to the audio playback device 200.
  • the electronic device 100 uses the second encoder in category two to compress the audio data and sends it to the audio playback device 200, and the audio playback device 200 uses the second decoder in category two to decompress the compressed audio data and play the audio data.
  • when a shared category includes multiple encoders or multiple decoders, the electronic device 100 needs to identify one of the multiple encoders in the category as the default encoder, and one of the multiple decoders in the category as the default decoder.
  • the codecs classified into category 1 are high-definition audio codecs.
  • the encoders in category one include encoder one (first encoder) and encoder three (third encoder), and the decoders in category one include decoder one (first decoder) and decoder three (third decoder). Because category one includes multiple encoders and multiple decoders, the electronic device 100 determines one default encoder and one default decoder for category one.
  • the electronic device 100 can identify a default encoder and a default decoder from the multiple encoders and multiple decoders according to preset rules; the preset rules can be priority rules, low-power rules, high-efficiency rules, and so on.
  • for the method by which the electronic device 100 identifies a default encoder (first encoder) and a default decoder (first decoder) from the multiple encoders and multiple decoders according to the priority rule, the low-power rule, or the high-efficiency rule, please refer to the embodiment shown in S808, which is not repeated in this application.
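The text leaves the preset rules abstract (priority, low power, high efficiency). As one possible reading, each codec could carry a few metrics and the rule picks the extremum; the field names `priority`, `power_mw`, and `throughput` below are invented for illustration and do not come from the patent.

```python
# Hedged sketch of a preset-rule selector. The rule names follow the text;
# the per-codec metric fields are hypothetical.
def pick_default(codecs, rule="priority"):
    """codecs: list of dicts with hypothetical 'id', 'priority',
    'power_mw', and 'throughput' fields. Returns the chosen codec id."""
    if rule == "priority":
        best = max(codecs, key=lambda c: c["priority"])       # highest priority wins
    elif rule == "low_power":
        best = min(codecs, key=lambda c: c["power_mw"])       # lowest power draw wins
    elif rule == "high_efficiency":
        best = max(codecs, key=lambda c: c["throughput"])     # highest throughput wins
    else:
        raise ValueError(f"unknown preset rule: {rule}")
    return best["id"]
```

Different rules can select different defaults from the same candidate set, which is why the rule itself must be fixed before negotiation.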
  • after the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder under the category, the electronic device 100 sends the default decoder identifier and/or encoder identifier under the category to the audio playback device 200.
  • the electronic device 100 when the electronic device 100 confirms that the codec in the category 1 is used to transmit the audio data, the electronic device 100 sends the adopted category identifier (the identifier of the category 1) to the audio playback device 200 .
  • the electronic device 100 uses the encoder in category one to encode the audio data into the first encoded audio data and sends it to the audio playback device 200, and the audio playback device 200 uses the decoder in category one to decode the first encoded audio data into the first playback audio data, and plays the first playback audio data.
  • the codecs classified into class 1 are high-definition audio codecs
  • the codecs classified into class 2 are basic audio codecs.
  • the encoder in category one includes encoder one
  • the decoder in category one includes decoder one
  • the encoder in category two includes encoder two
  • the decoders in category two include decoder two. Because category one includes only one encoder and one decoder, and category two also includes only one encoder and one decoder, the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder under category one, and that encoder two and decoder two are the default encoder and the default decoder under category two.
  • the electronic device 100 can use the codec in class 1 or class 2 to transmit audio data
  • the electronic device 100 sends the adopted class identifier (the identifier of class 1 or the identifier of class 2) to the audio playback device 200 .
  • the codecs classified into category 1 are high-definition audio codecs
  • the codecs classified into category 2 are basic audio codecs.
  • the encoder in category one includes encoder one and encoder three
  • the decoder in category one includes decoder one and decoder three
  • the encoder in category two includes encoder two and encoder four
  • the decoders in category two include decoder two and decoder four. Category one includes multiple encoders and multiple decoders, and category two also includes multiple encoders and multiple decoders.
  • the electronic device 100 may identify a default encoder and a default decoder under category 1 and category 2 from a plurality of encoders and a plurality of decoders according to a preset rule.
  • the preset rules may be priority rules, low power rules, high efficiency rules, and so on.
  • for the method by which the electronic device 100 determines a default encoder and a default decoder under category one and category two from the multiple encoders and multiple decoders according to the priority rule, the low-power rule, or the high-efficiency rule, please refer to the embodiment shown in S808, which will not be repeated in this application.
  • after the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder of category one, and that encoder two and decoder two are the default encoder and the default decoder of category two, the electronic device 100 sends the default decoder identifier and/or encoder identifier under category one and the default decoder identifier and/or encoder identifier under category two to the audio playback device 200.
  • the electronic device 100 may use the default codec in category 1 or category 2 to transmit audio data.
  • the electronic device 100 may use a codec in category 1 to transmit audio data, and the codec in category 1 is a high-definition audio codec.
  • the electronic device 100 can switch from the codec in category one to the codec in category two, and use the codec in category two to transmit audio data.
  • after the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200, in an optional implementation manner, the electronic device 100 confirms the default encoder identifier and the default decoder identifier in each shared category. The method for the electronic device 100 to confirm the default encoder identifier and the default decoder identifier in each category shared by the electronic device 100 and the audio playback device 200 has been described in detail in the embodiment shown in FIG. and will not be repeated here.
  • the electronic device 100 only needs to confirm the default encoder identifier in the category shared by the electronic device 100 and the audio playback device 200 . After that, the electronic device 100 sends the shared category of the electronic device 100 and the audio playback device 200 to the audio playback device 200, and the audio playback device 200 confirms the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200.
  • the preset codec classification criteria may be updated every fixed period (e.g., one month). Therefore, when the codec classification standard is periodically updated, the electronic device 100 also needs to periodically (e.g., monthly) divide its codecs into multiple categories, and the audio playback device 200 also needs to periodically (e.g., monthly) divide its codecs into multiple categories.
  • the electronic device 100 and the audio playback device 200 may classify the codecs into multiple categories at an appropriate time according to the collected user behavior habits.
  • the electronic device 100 and the audio playback device 200 may classify the codecs into multiple categories during the time period "24:00-7:00", because the user typically rests at home during this period; performing the codec classification in this time period therefore does not affect the user's experience of using the device.
  • the electronic device 100 sends the shared category identifier and a default decoder identifier in each shared category to the audio playback device 200 .
  • the electronic device 100 After the electronic device 100 confirms the category shared by the electronic device 100 and the audio playback device 200, the electronic device 100 identifies the category shared by the electronic device 100 and the audio playback device 200 (the identification of the first category and the identification of the second category, or the identification of the first category. category identifier) and in each common category, a default decoder identifier is sent to the audio playback device 200.
  • the audio playback device 200 receives the category identifier shared by the electronic device 100 and the audio playback device 200 sent by the electronic device 100 and a default decoder identifier in each shared category. After that, the electronic device 100 and the audio playback device 200 will use the category shared by the electronic device 100 and the audio playback device 200 to transmit audio data.
  • in another optional implementation manner, before the electronic device 100 negotiates the shared categories between the electronic device 100 and the audio playback device 200, the audio playback device 200 sends each category identifier and the corresponding decoder identifiers under each category to the electronic device 100. Then, after the electronic device 100 confirms the shared categories, the electronic device 100 only needs to send the categories shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200.
  • after the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200, the audio playback device 200 confirms the default decoder identifier in each shared category. In this case, the electronic device 100 only needs to send the shared category identifiers to the audio playback device 200.
  • when the electronic device 100 negotiates the common categories, the audio playback device 200 sends all decoder identifiers and the values of one or more parameters corresponding to each decoder to the electronic device 100.
  • the electronic device 100 divides all the encoders in the electronic device 100 and all the decoders in the audio playback device 200 into a plurality of categories according to the codec classification standard. After that, the electronic device 100 confirms the type shared by the electronic device 100 and the audio playback device 200 .
  • in addition to sending the categories shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200, the electronic device 100 also needs to send the decoder identifiers under the shared categories to the audio playback device 200.
  • the electronic device 100 can identify the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200 .
  • the electronic device 100 confirms the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200 .
  • when the electronic device 100 sends the category identifier shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200, it also needs to send the default decoder identifier in the shared category to the audio playback device 200.
  • the electronic device 100 confirms the type shared by the electronic device 100 and the audio playback device 200 .
  • the audio playback device 200 confirms the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200 .
  • the electronic device 100 sends the category shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200, it also needs to send the decoder identifier under the category shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200. .
  • the electronic device 100 selects a default codec in the first category from the common categories to transmit audio data.
  • the electronic device 100 acquires the first parameter information of the audio data; when the first parameter information of the audio data satisfies the first condition, the electronic device 100 encodes the audio data into the first encoded audio data with the first encoder in the first category, and sends the first encoded audio data to the audio playback device.
  • the electronic device 100 can select, from the categories shared by the electronic device 100 and the audio playback device 200, the default codec in one category (e.g., the first category) according to the application type, the playback audio characteristics (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, the audio stream format, and so on.
  • the default encoder in the first category is the first encoder
  • the default decoder in the first category is the first decoder.
  • the electronic device 100 determines that the first parameter information is: sampling rate is the first sampling rate, the quantization bit depth is the first quantization bit depth, the code rate is the first code rate, and the number of channels is the first number of channels.
  • the electronic device 100 selects, for the transmission of audio data, the default codec in the category whose sampling rate includes the first sampling rate, whose quantization bit depth includes the first quantization bit depth, whose code rate includes the first code rate, whose number of channels includes the first channel number, and whose audio stream format includes PCM.
  • if, according to the application type, the playback audio characteristics (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, the audio stream format, and so on, the electronic device 100 selects two or more categories from the categories shared by the electronic device 100 and the audio playback device 200, the electronic device 100 may select the category with the highest priority as the first category, and use the default codec in that category for audio data transmission.
  • it can be understood that the higher the sampling rate, the code rate, and the quantization bit depth specified in the codec classification standard, the better the sound quality of the codecs classified into that category; the better the sound quality of a codec, the higher the priority of the category the codec is in.
  • for example, the categories shared by the electronic device 100 and the audio playback device 200 include category one and category two. If the electronic device 100, according to the application type, the playback audio characteristics (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, the audio stream format, and so on, selects both category one and category two, then, since the codecs classified into category one are high-definition sound quality codecs and the codecs classified into category two are standard sound quality codecs, category one has a higher priority than category two. Therefore, the electronic device 100 will preferentially select the default codec in category one to transmit audio data.
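The selection step above, keeping only the shared categories whose capability covers the playback parameters and taking the highest-priority match, can be sketched as follows. This is an assumption-laden illustration: the capability dict shape is invented, and a lower category number standing for higher priority mirrors the text's category one (HD) outranking category two (standard).

```python
# Illustrative sketch of category selection; field names are hypothetical.
def select_category(shared, params):
    """shared: dict mapping category id -> capability dict giving the
    maximum sampling rate, code rate, bit depth, and channel count the
    category's codecs support. params: the values the playing audio needs.
    Returns the highest-priority matching category id, or None."""
    matches = [
        cat for cat, cap in shared.items()
        if cap["sampling_rate_hz"] >= params["sampling_rate_hz"]
        and cap["code_rate_kbps"] >= params["code_rate_kbps"]
        and cap["bit_depth"] >= params["bit_depth"]
        and cap["channels"] >= params["channels"]
    ]
    # Category one (HD sound quality) outranks category two (standard),
    # so the smallest matching category id wins.
    return min(matches) if matches else None
```

When both categories cover the playback parameters, category one is chosen; when only category two's codecs would suffice numerically, category one still wins if it also covers the parameters, matching the "preferentially select category one" behaviour.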
  • the electronic device 100 sends the first category identifier to the audio playback device 200 .
  • the audio playback device 200 receives the first category identifier sent by the electronic device 100, and the electronic device 100 and the audio playback device 200 use the default codec in the first category to transmit audio data.
  • the electronic device 100 acquires audio data, and encodes the audio data by using an encoder corresponding to the first encoder identifier to obtain encoded audio data.
  • the first encoder identifier is the default encoder identifier in the first category.
  • the electronic device 100 sends the encoded audio data (the first encoded audio data) to the audio playback device 200 .
  • the electronic device 100 may acquire the currently playing audio data by recording or other means, and then compress the acquired audio data and send it to the audio playback device 200 through a communication connection with the audio playback device 200 .
  • for example, the electronic device 100 collects the audio played by the electronic device 100 and compresses it using an advanced audio coding (AAC) algorithm; the compressed audio data is then encapsulated into a transport stream (TS), the TS stream is encoded according to the real-time transport protocol (RTP), and the encoded data is sent to the audio playback device 200 through the Bluetooth channel connection.
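As a minimal sketch of the last step of that pipeline, the encoded TS bytes can be wrapped in the 12-byte RTP fixed header defined by RFC 3550 before being sent over the channel. The AAC/TS stages are represented by an opaque payload; payload type 33 is the conventional static value for MPEG-2 TS over RTP (RFC 3551), and the remaining field values are illustrative.

```python
import struct

def rtp_packet(payload, seq, timestamp, ssrc, payload_type=33):
    """Prepend an RFC 3550 fixed header (version 2, no padding, no
    extension, no CSRC entries, marker bit clear) to an opaque payload."""
    header = struct.pack(
        "!BBHII",
        0x80,                  # V=2, P=0, X=0, CC=0
        payload_type & 0x7F,   # M=0, 7-bit payload type
        seq & 0xFFFF,          # 16-bit sequence number
        timestamp & 0xFFFFFFFF,
        ssrc & 0xFFFFFFFF,
    )
    return header + payload
```

Each successive packet would increment `seq` and advance `timestamp` by the frame duration in RTP clock units; the receiver uses both to reorder and pace playback.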
  • the audio playback device 200 receives the encoded audio data sent by the electronic device 100, and uses a decoder corresponding to the first decoder identifier to decode the encoded audio data to obtain audio data (first playback audio data).
  • the first decoder identifier is the default decoder identifier in the first category.
  • the audio playback device 200 uses a decoder corresponding to the first decoder identifier to decode the encoded audio data, obtains unencoded audio data, and plays the audio data.
  • the electronic device 100 switches the first category to the second category.
  • when the sampling rate and/or code rate and/or quantization bit depth and/or number of channels of the audio data of the electronic device 100 change, the electronic device 100 will reselect a codec in another category (i.e., the second category) for transmitting the audio data, and the electronic device 100 informs the audio playback device 200 of the identifier of that category. The electronic device 100 and the audio playback device 200 then use the codec in the second category to transmit audio data. This part of the content has been described in detail in the foregoing embodiments and will not be repeated in this application.
  • when the application type changes, the audio content changes, the audio rendering capability of the electronic device is turned on, the network conditions of the channel become worse, and so on, the sampling rate and/or code rate and/or quantization bit depth and/or number of channels of the audio data may change, and the electronic device 100 switches the first category to the second category.
  • for example, when the electronic device 100 receives the user's selection of the high-quality sound mode, the electronic device 100 switches the first category to the second category, wherein the audio quality of the codec in the second category is higher than that in the first category.
  • specifically, the sampling rate, code rate, and quantization bit depth of the codec in the second category are all greater than the sampling rate, code rate, and quantization bit depth of the codec in the first category.
  • the electronic device 100 receives an operation of the user clicking the more control 608 , and in response to the user operation, the electronic device 100 will display a prompt box 900 as shown in FIG. 10B .
  • the prompt box 900 includes a high-quality sound mode control 901 , a stable transmission mode control 902 , and an audio rendering mode enable control 903 .
  • the high-quality sound mode control 901 in the prompt box 900 can receive the user's click operation, and in response to the user's click operation, the electronic device 100 switches the first category to the second category, wherein the audio quality of the codecs in the second category is higher than the audio quality of the codecs in the first category.
  • when the electronic device 100 receives a user operation to enable the audio rendering capability, the electronic device 100 increases the sampling rate of the playing audio data from the first sampling rate to the second sampling rate, the rendering unit of the electronic device 100 can increase the quantization bit depth of the audio data from the first quantization bit depth to the second quantization bit depth, and the rendering unit of the electronic device 100 may increase the number of channels of the audio data from the first channel number to the second channel number.
  • the second sampling rate is greater than the first sampling rate
  • the second quantization bit depth is greater than the first quantization bit depth
  • the second channel number is greater than the first channel number.
  • the enable audio rendering mode control 903 in the prompt box 900 can receive the user's click operation, and in response to the user's click operation, the electronic device 100 switches the first category to the second category, wherein the sampling rate, code rate, quantization bit depth, and number of channels of the codec in the second category are all greater than the sampling rate, code rate, quantization bit depth, and number of channels of the codec in the first category.
  • the electronic device 100 may receive a user operation to switch the current audio data transmission mode to the stable transmission mode.
  • the electronic device 100 switches the first category to the second category, wherein the code rate of the codec in the second category is lower than the code rate of the codec in the first category.
  • the electronic device 100 may automatically switch to the stable transmission mode. This application is not limited here.
  • the stable transmission mode control 902 in the prompt box 900 can receive the user's click operation, and in response to the user's click operation, the electronic device 100 switches the first category to the second category category, where the codec in the second category has a lower code rate than the codec in the first category.
  • the electronic device 100 acquires the second parameter information of the audio data; when the second parameter information of the audio data satisfies the second condition, the electronic device 100 switches the first category to the second category, encodes the audio data into second encoded audio data with the second encoder in the second category, and sends the second encoded audio data to the audio playback device.
  • the electronic device 100 sends the identification of the second category to the audio playback device 200 .
  • for example, the second parameter information is: the sampling rate of the audio data played by the electronic device 100 changes from the first sampling rate to the second sampling rate. The electronic device 100 then selects, from the categories shared by the electronic device 100 and the audio playback device 200, the default codec in the category whose sampling rate includes the second sampling rate, whose quantization bit depth includes the first quantization bit depth, whose code rate includes the first code rate, whose number of channels includes the first channel number, and whose audio stream format includes PCM, for the transmission of audio data.
  • the parameter types in the first parameter information are the same as the parameter types in the second parameter information. That the first parameter information satisfies the first condition and the second parameter information satisfies the second condition specifically includes: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or the number of channels in the first parameter information is greater than or equal to the target channel number, and the number of channels in the second parameter information is less than the target channel number.
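The first and second conditions above can be sketched as simple threshold checks against the target values. This is an illustrative reading only: the "and/or" chain is interpreted as the first parameter information meeting every target while the second falls below at least one, and the target values reuse the classification thresholds stated earlier.

```python
# Hedged sketch of the first/second condition tests; targets mirror the
# thresholds quoted earlier in the text, field names are assumptions.
TARGETS = {
    "sampling_rate_hz": 48_000,  # target sampling rate
    "code_rate_kbps": 600,       # target code rate
    "bit_depth": 24,             # target quantization bit depth
    "channels": 2,               # target channel number
}

def meets_first_condition(info):
    """Every parameter meets or exceeds its target value."""
    return all(info[k] >= v for k, v in TARGETS.items())

def meets_second_condition(info):
    """At least one parameter has dropped below its target value."""
    return any(info[k] < v for k, v in TARGETS.items())

def should_switch_category(current_info):
    """Switch from the first (HD) category to the second (standard)
    category once the playing audio no longer needs HD parameters."""
    return meets_second_condition(current_info)
```

Under this reading, dropping only the sampling rate (say, to 44.1kHz) is already enough to trigger the switch, because the conditions are joined by "and/or".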
  • the electronic device 100 sends the identifier of the second category to the audio playback device 200 .
  • the audio playback device 200 receives the second category identifier sent by the electronic device 100 .
  • the electronic device 100 and the audio playback device 200 will transmit audio data through the default codec in the second category.
  • the electronic device 100 collects audio data and encodes the audio data with the encoder corresponding to the second encoder identifier to obtain encoded audio data (second encoded audio data); the electronic device 100 sends the encoded audio data to the audio playback device 200, and the audio playback device 200 decodes it with the decoder corresponding to the second decoder identifier to obtain unencoded audio data (the second playback audio data), and plays the unencoded audio data (second playback audio data).
  • the second encoder identifier is the default encoder identifier in the second category; the second decoder identifier is the default decoder identifier in the second category.
  • the audio data during the switching between the electronic device 100 and the audio playback device 200 will be smoothly transitioned to improve user experience.
  • the electronic device encodes the first audio frame in the audio data into the first encoded audio frame through the first encoder, and sends the first encoded audio frame to the audio playback device; encodes the first audio frame in the audio data into the second encoded audio frame through the second encoder, and sends the second encoded audio frame to the audio playback device; and encodes the Nth audio frame in the audio data into the Nth encoded audio frame through the second encoder, and sends the Nth encoded audio frame to the audio playback device. The audio playback device decodes the first encoded audio frame into the first decoded audio frame through the first decoder, and decodes the second encoded audio frame into the second decoded audio frame through the second decoder; the first decoded audio frame and the second decoded audio frame are smoothed to obtain the first playback audio frame, and the Nth encoded audio frame is decoded into the Nth playback audio frame through the second decoder.
  • the audio playback device first plays the first playback audio frame, and then plays the Nth playback audio frame.
  • in this embodiment, the switching between the first encoder and the second encoder needs to be completed within one frame, and this frame is the first audio frame; the audio playback device smooths the first audio frame before playing it, to prevent a freeze when the codec is switched and to achieve a smooth transition.
  • the audio frames adjacent to and after the first audio frame, such as the second audio frame, do not need to be smoothed, and are directly decoded by the second decoder and played.
  • the electronic device encodes the Nth audio frame through the second encoder to obtain the Nth encoded audio frame, and sends the third encoded audio frame through the D+2th encoded audio frame, the D+3th encoded audio frame, and the Nth encoded audio frame to the audio playback device. The audio playback device decodes the third encoded audio frame through the D+2th encoded audio frame into the second playback audio frame through the D+1th playback audio frame with the first decoder, and decodes the D+3th encoded audio frame into the third decoded audio frame with the second decoder; it plays the second playback audio frame through the Dth playback audio frame; it smooths the D+1th playback audio frame and the third decoded audio frame to obtain the target playback audio frame, and plays the target playback audio frame; and it decodes the Nth encoded audio frame into the Nth decoded audio frame with the second decoder, and plays the Nth decoded audio frame.
  • in this embodiment, the switching between the first encoder and the second encoder needs to be completed over multiple frames (D frames), so that during the switching process between the first encoder and the second encoder, the audio data encoded by the first encoder and the audio data encoded by the second encoder arrive at the audio playback device and are decoded at the same time. If the encoder switching needs to be completed within D frames, the audio playback device directly decodes and plays the first audio frame through the D-1th audio frame, preventing a freeze when the codec is switched and achieving a smooth transition. The audio frames after the Dth audio frame, such as the Nth audio frame, do not need smoothing processing and are directly decoded by the second decoder and played.
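The smoothing step described above can be sketched as a cross-fade: during the switch the playback device holds the same audio frame decoded by both the outgoing and the incoming decoder and blends them sample by sample. A linear ramp is assumed here; the text does not specify the exact smoothing curve.

```python
# Illustrative sketch of frame smoothing during a codec switch.
def crossfade(old_frame, new_frame):
    """old_frame/new_frame: equal-length lists of PCM samples for the same
    audio frame, decoded by the outgoing and incoming decoders. Returns a
    blended frame that starts on the old decoder's output and ends on the
    new decoder's output."""
    n = len(old_frame)
    out = []
    for i, (a, b) in enumerate(zip(old_frame, new_frame)):
        w = i / (n - 1) if n > 1 else 1.0  # weight ramps 0 -> 1 across the frame
        out.append((1.0 - w) * a + w * b)
    return out
```

Any residual waveform discontinuity between the two decoders is spread across the whole frame instead of appearing as an audible click at the switch boundary.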
  • S912-S913 can also be replaced by: when the second parameter information satisfies the second condition, the audio data is encoded into third encoded audio data by the first encoder in the first category, and the third encoded audio data is sent to the audio playback device; the audio playback device is further configured to decode the third encoded audio data into third playback audio data through the first decoder in the first category.
  • the electronic device and the audio playback device support only one codec category (the first category); in this case, when the parameter information of the audio data changes from the first parameter information to the second parameter information and the second parameter information satisfies the second condition, the electronic device cannot switch the codec, and still uses the default codec in the first category to transmit audio data with the audio playback device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Acoustics & Sound (AREA)
  • Theoretical Computer Science (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

一种编解码器协商与切换方法、系统、电子设备、计算机可读存储介质和计算机程序产品,该系统包括电子设备(100)和音频播放设备(200)。电子设备(100)与音频播放设备(200)在传输音频数据之前,将编解码器划分到多个类别,并确定出电子设备(100)与音频播放设备(200)共有的编解码器类别,例如第一类别和第二类别。电子设备(100)根据第一类别中的编解码器与音频播放设备(200)进行音频数据的传输,当需要切换编解码器时,电子设备(100)不需要与音频播放设备(200)重新协商编解码器的类型,直接根据第二类别中的编解码器进行音频数据的传输。该方法解决了电子设备与音频播放设备切换编解码器时音频数据中断和卡顿的问题,提高了用户体验。

Description

一种编解码器协商与切换方法
本申请要求于2021年04月20日提交中国专利局、申请号为202110423987.8、申请名称为“一种编解码器协商与切换方法”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及音频处理技术领域,尤其涉及一种编解码器协商与切换方法。
背景技术
随着用户对音频体验追求的不断提升,支持高清音质、多声道、3D Audio等特性的音频内容日渐丰富,相应的编解码技术也迅速涌现,配备多个音频编解码器(Coder-Decoder,Codec)成为无线音频设备的常态。
在电子设备与音频播放设备每次建立连接时,电子设备和音频播放设备会协商并得到双方都支持的编解码器类型,之后,电子设备根据双方都支持的编解码器类型对音频数据进行处理,然后电子设备将编码后的音频数据发送至音频播放设备,音频播放设备接收到编码后的音频数据,并采用对应的解码器解码出来进行播放。
当电子设备的网络变差或者用户选择了更高质量音质时,电子设备需要切换编码器。这时,电子设备与音频播放设备需要重新协商编解码器的类型。在电子设备与音频播放设备重新协商编解码器的类型过程中,电子设备会暂停发送音频数据至音频播放设备,导致出现音频数据断流、卡顿的情况,影响用户体验。因此,如何实现电子设备与音频播放设备编解码器快速切换和切换过程中音频数据无断流是亟待解决的问题。
发明内容
本申请提供了一种编解码器协商与切换方法,实现了当电子设备需要切换编码器时,不需要再和音频播放设备重新协商编解码的类型,直接从之前协商好的双方均支持的编解码器类别中选择一个类别中的编解码器进行音频数据传输,解决了电子设备与音频播放设备切换编解码器时音频数据中断和卡顿的问题,提高了用户体验。
第一方面,本申请提供了一种编解码器协商与切换系统,系统包括电子设备和音频播放设备,其中:电子设备用于:当音频数据的第一参数信息满足第一条件时,根据第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;其中,第一类别为电子设备在获取音频数据之前,确定出电子设备与音频播放设备共有的编解码器类别;将第一类别的标识发送至音频播放设备;音频播放设备用于:接收电子设备发送的第一类别的标识;通过第一类别中的第一解码器将第一编码音频数据解码成第一播放音频数据;电子设备还用于:当音频数据的第二参数信息满足第二条件时,根据第二类别中的第二编码器将音频数据编码成第二编码音频数据,并将第二编码音频数据发送至音频播放设备;其中,第二类别为电子设备在获取音频数据之前,确定出电子设备与音频播放设备共有的编解码器类别;将第二类别的标识发送至音频播放设备;音频播放设备还用于:接收电子设备发送的第二类别的标识;通过第二类别中的第二解码器将第二编码音频数据解码成第二播放音频数据;其中,第一条件与第二条件不同,第一类别与第二类别不同。
通过第一方面的系统,电子设备与音频播放设备在传输音频数据之前,将编解码器划分到多个类别中,并确定出电子设备与音频播放设备共有的编解码器类别(例如第一类别和第二类别)。之后,电子设备获取音频数据的第一参数信息,当音频数据的第一参数信息满足第一条件时,从共有的编解码器类别中选用第一类别中的编解码器进行音频数据的传输。当播放的音频数据内容、播放的音频数据的应用、用户选择、或者网络条件变化之后,电子设备获取音频数据的第二参数,当音频数据的第二参数满足第二条件时,电子设备不需要再次和音频播放设备协商编解码器,直接选用共有的编解码器类别中的第二类别中的编解码器进行音频数据的传输。这样,当电子设备需要切换编码器时,不需要再和音频播放设备重新协商编解码器的类型,直接从之前协商好的双方均支持的编解码器类别中选择一个类别中的编解码器进行音频数据传输,解决了电子设备与音频播放设备切换编解码器时音频数据中断和卡顿的问题,提高了用户体验。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器至少包括第一编码器,第二类别中的编码器至少包括第二编码器。
结合第一方面,在一种可能的实现方式中,电子设备还用于:接收音频播放设备发送的第一类别的标识和第二类别的标识;其中,第一类别中的解码器至少包括第一解码器,第二类别中的解码器的至少包括第二解码器。这样,音频播放设备根据编解码器分类标准将解码器划分到多个类别中。示例性的,划分到多个类别中的解码器标识大于等于1的编解码器类别为第一类别和第二类别。划分到第一类别中的解码器至少包括第一解码器,还可以包括其他的解码器,例如第三解码器;划分到第二类别中的解码器至少包括第二解码器,还可以包括其他的解码器,例如第四解码器。
结合第一方面,在一种可能的实现方式中,电子设备还用于:确认出电子设备与音频播放设备的共有类别为第一类别和第二类别;将第一类别的标识和第二类别的标识发送至音频播放设备;音频播放设备,还用于:接收电子设备发送的第一类别的标识和第二类别的标识。电子设备将双方均支持的编解码器类别发送至音频播放设备,使得音频播放设备知道双方均支持的编解码器类别。
在另一种可能的实现方式中,电子设备也可以不需要将第一类别的标识和第二类别的标识发送至音频播放设备。在传输音频数据时,电子设备只需将根据音频数据的参数信息采用的编解码器类别的标识发送至音频播放设备。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器只包括第一编码器,第一类别中的解码器只包括第一解码器;在电子设备确认出电子设备与音频播放设备的共有类别为第一类别和第二类别之后,电子设备还用于:当第一参数信息满足第一条件时,通过第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;音频播放设备,还用于通过第一类别中的第一解码器将第一编码音频数据解码成第一播放音频数据。电子设备与音频播放设备均支持的编解码器类别为第一类别和第二类别,当第一类别中只包括一个编码器和一个解码器时,电子设备将该类别中的一个编码器和一个解码器作为默认的编码器和解码器。之后,当电子设备与音频播放设备采用第一类别中的编解码器进行音频数据传输时,根据第一类别中默认的编码器将音频数据编码为第一编码音频数据,之后,电子设备将第一编码音频数据发送至音频播放设备,音频播放设备采用第一类别中默认的解码器将第一编码音频数据解码为第一播放音频数据。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器还包括第三编码器,第 一类别中的解码器还包括第三解码器;在电子设备确认出电子设备与音频播放设备的共有类别为第一类别和第二类别之后,电子设备还用于:当第一参数信息满足第一条件时,通过第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;其中,第一编码器的功耗低于第三编码器,或者,第一编码器的优先级或功率高于第三编码器;音频播放设备,还用于通过第一类别中的第一解码器将第一编码音频数据解码成第一播放音频数据;其中,第一解码器的功耗低于第二解码器,或者,第一解码器的优先级或功率高于第二解码器。电子设备与音频播放设备均支持的编解码器类别为第一类别和第二类别,当第一类别中包括多个编码器和多个解码器时,电子设备将从多个编码器根据预设的规则确定出一个编码器作为默认的编码器,根据预设规则从多个解码器确定出一个解码器作为默认的解码器。默认的规则可以是优先级规则、效率高低规则和功耗高低规则等等。需要说明的是,电子设备可以从多个编码器根据预设的规则确定出一个编码器作为默认的编码器,并根据预设规则从多个解码器确定出一个解码器作为默认的解码器。在另一种可能的实现方式中,电子设备只需从多个编码器根据预设的规则确定出一个编码器作为默认的编码器,音频播放设备根据预设规则从多个解码器确定出一个解码器作为默认的解码器。本申请在此不做限定。
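上述“按预设规则从多个编解码器中确定默认编解码器”的过程可用如下示意性代码草图表达(以功耗最低规则为例;其中的功耗数值为假设,优先级规则、效率高低规则可类似实现):

```python
def pick_default(codec_ids, power_of):
    """从一个类别包含的多个编解码器标识中,按功耗最低的预设规则选出默认编解码器(示意)。
    power_of:编解码器标识到功耗估计值的映射(示例数据)。"""
    return min(codec_ids, key=lambda cid: power_of[cid])
```

例如,某类别下包含标识为1和3的两个编码器,功耗估计分别为2.0和5.0时,标识为1的编码器被选为该类别默认的编码器。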
结合第一方面,在一种可能的实现方式中,电子设备与音频播放设备共有的编码器类别只包括第一类别时,电子设备还用于:当第二参数信息满足第二条件时,通过第一类别中的第一编码器将音频数据编码成第三编码音频数据,并将第三编码音频数据发送至音频播放设备;音频播放设备,还用于通过第一类别中的第一解码器将第三编码音频数据解码成第三播放音频数据。当电子设备与音频播放设备只支持一种编解码器类别时,这种情况下,当音频数据的参数信息由第一参数信息变化为第二参数信息时,并且第二参数信息满足第二条件,电子设备无法切换编解码器,电子设备还是采用第一类别中默认的编解码器与音频播放设备进行音频数据的传输。
结合第一方面,在一种可能的实现方式中,电子设备与音频播放设备共有的编码器类别只包括第一类别,包括:电子设备未收到音频播放设备发送的第二类别的标识;或电子设备划分到第二类别中的编码器的数量为0。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器只包括第一编码器,第一类别中的解码器只包括第一解码器;当第一参数信息满足第一条件时,电子设备还用于:根据第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;音频播放设备,还用于通过第一类别中的第一解码器将第一编码音频数据解码成第一播放音频数据。电子设备与音频播放设备均支持的编解码器类别只包括第一类别,当第一类别中只包括一个编码器和一个解码器时,电子设备将该类别中的一个编码器和一个解码器作为默认的编码器和解码器。之后,当电子设备与音频播放设备采用第一类别中的编解码器进行音频数据传输时,根据第一类别中默认的编码器将音频数据编码为第一编码音频数据,之后,电子设备将第一编码音频数据发送至音频播放设备,音频播放设备采用第一类别中默认的解码器将第一编码音频数据解码为第一播放音频数据。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器还包括第三编码器,第一类别中的解码器还包括第三解码器;当第一参数信息满足第一条件时,电子设备还用于:第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;其中,第一编码器的功耗低于第三编码器,或者,第一编码器的优先级或功率高于第三编码器;音频播放设备,还用于通过第一类别中的第一解码器将第一编码音频 数据解码成第一播放音频数据;其中,第一解码器的功耗低于第三解码器,或者,第一解码器的优先级或功率高于第三解码器。电子设备与音频播放设备均支持的编解码器类别只包括第一类别,当第一类别中包括多个编码器和多个解码器时,电子设备将从多个编码器根据预设的规则确定出一个编码器作为默认的编码器,根据预设规则从多个解码器确定出一个解码器作为默认的解码器。默认的规则可以是优先级规则、效率高低规则和功耗高低规则等等。需要说明的是,电子设备可以从多个编码器根据预设的规则确定出一个编码器作为默认的编码器,并根据预设规则从多个解码器确定出一个解码器作为默认的解码器。在另一种可能的实现方式中,电子设备只需从多个编码器根据预设的规则确定出一个编码器作为默认的编码器,音频播放设备根据预设规则从多个解码器确定出一个解码器作为默认的解码器。本申请在此不做限定。
结合第一方面,在一种可能的实现方式中,第一类别中的编解码器为高清音质编解码器,第二类别中的编解码器为标准音质编解码器;或第一类别中的编解码器为标准音质编解码器,第二类别中的编解码器为高清音质编解码器。
结合第一方面,在一种可能的实现方式中,在电子设备获取音频数据之前,电子设备还用于:基于第一编码器的参数信息以及编解码器分类标准将第一编码器划到第一类别中,基于第二编码器的参数信息以及编解码器分类标准将第二编码器划分到第二类别中;其中,第一编码器的参数信息和第二编码器的参数信息包括采样率、码率、量化位深、声道数和音频流格式中的一个或多个;音频播放设备还用于:基于第一解码器的参数信息以及编解码器分类标准将第一解码器划到第一类别中,基于第二解码器的参数信息以及编解码器分类标准将第二解码器划分到第二类别中;其中,第一解码器的参数信息和第二解码器的参数信息包括采样率、码率、量化位深、声道数和音频流格式中的一个或多个;其中,编解码器分类标准包括编解码器类别与编解码器的参数信息的映射关系。需要说明的是,第一编码器的参数信息、第二编码器的参数信息、第一解码器的参数信息和第二解码器的参数信息均相同。
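上述根据编解码器分类标准进行划分的过程,可用如下示意性代码草图表达(仅为一种假设的实现,其中的类别名称、参数名与阈值均为示例,并非本申请限定的分类标准):

```python
# 编解码器分类标准:编解码器类别与编解码器参数信息门限的映射关系(阈值仅为示例)
CLASSIFICATION_STANDARD = {
    "第一类别": {"min_sample_rate": 96000, "min_bit_depth": 24},  # 例如高清音质
    "第二类别": {"min_sample_rate": 0, "min_bit_depth": 16},      # 例如标准音质
}

def classify_codecs(codecs):
    """根据每个编解码器的参数信息与分类标准,将编解码器标识划分到多个类别中;
    一个编解码器可以同时属于一个或多个类别。"""
    categories = {name: [] for name in CLASSIFICATION_STANDARD}
    for codec in codecs:
        for name, rule in CLASSIFICATION_STANDARD.items():
            if (codec["sample_rate"] >= rule["min_sample_rate"]
                    and codec["bit_depth"] >= rule["min_bit_depth"]):
                categories[name].append(codec["id"])
    return categories
```

按上面的示例阈值,一个采样率96kHz、量化位深24bit的编解码器会同时被划分到第一类别和第二类别中。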
结合第一方面,在一种可能的实现方式中,第一类别中的编解码器的采样率大于等于目标采样率,第二类别中的编解码器的采样率小于目标采样率;和/或,第一类别中的编解码器的码率大于等于目标码率,第二类别中的编解码器的码率小于目标码率;和/或,第一类别中的编解码器的声道数大于等于目标声道数,第二类别中的编解码器的声道数小于目标声道数;和/或,第一类别中的编解码器的量化位深大于等于目标量化位深,第二类别中的编解码器的量化位深小于目标量化位深;和/或,第一类别中的编解码器的音频流格式为目标音频流格式,第二类别中的编解码器的音频流格式为目标音频流格式。
结合第一方面,在一种可能的实现方式中,第一参数信息中的参数种类、第一编码器的参数信息中的参数种类、第一解码器的参数信息中的参数种类、第二参数信息中的参数种类、第二编码器的参数信息中的参数种类、第二解码器的参数信息中的参数种类相同;第一参数信息满足第一条件,第二参数信息满足第二条件,具体包括:第一参数信息中的采样率大于等于目标采样率,第二参数信息中的采样率小于目标采样率;和/或,第一参数信息中的码率大于等于目标码率,第二参数信息中的码率小于目标码率;和/或,第一参数信息中的量化位深大于等于目标量化位深,第二参数信息中的量化位深小于目标量化位深;和/或,第一参数信息中的声道数大于等于目标声道数,第二参数信息中的声道数小于于目标声道数;和/或,第一参数信息中的音频流格式为目标音频流格式,第二参数信息中的音频流格式为目标音频流格式。
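上述“第一参数信息满足第一条件、第二参数信息满足第二条件”的判断可示意如下(此处仅以采样率一个参数为例,目标采样率数值为假设;码率、量化位深、声道数等条件可按同样方式叠加):

```python
TARGET_SAMPLE_RATE = 96000  # 目标采样率(示例值)

def select_category(param_info):
    """根据音频数据的参数信息选择编解码器类别(示意):
    采样率大于等于目标采样率时满足第一条件,选第一类别;否则满足第二条件,选第二类别。"""
    if param_info["sample_rate"] >= TARGET_SAMPLE_RATE:
        return "第一类别"
    return "第二类别"
```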
结合第一方面,在一种可能的实现方式中,当第一编码器与第二编码器的时延相同时, 电子设备还用于:通过第一编码器将音频数据中的第一音频帧编码成第一编码音频帧,并将第一编码音频帧发送给音频播放设备;通过第二编码器将音频数据中的第一音频帧编码成第二编码音频帧,并将第二编码音频帧发送至音频播放设备,通过第二编码器将音频数据中的第二音频帧编码成第N编码音频帧,并将第N编码音频帧发送至音频播放设备;音频播放设备,还用于:通过第一解码器将第一编码音频帧解码为第一解码音频帧,通过第二解码器将第二编码音频帧解码为第二解码音频帧,通过第二解码器将第N编码音频帧解码为第N播放音频帧;对第一解码音频帧和第二解码音频帧进行平滑处理,得到第一播放音频帧。电子设备100首先播放第一播放音频帧,之后,电子设备100播放第N播放音频帧。这样,当第一编码器与第二编码器的时延相同时,第一编码器与第二编码器的切换需要在一帧内完成,该一帧即为第一音频帧,音频播放设备对第一音频帧进行平滑处理之后在播放,防止编解码器切换时出现卡顿的情况,实现平滑过渡。对第一音频帧之后相邻的音频帧,例如第二音频帧,不需要平滑处理,直接将第二解码器进行解码并播放出来。
结合第一方面,在一种可能的实现方式中,当第一编码器与第二编码器的时延不同时,电子设备还用于:通过公式D=取整((max(编码器一的总时延,编码器二的总时延)+(帧长-编码器一的总时延%帧长)+帧长-1)/帧长)获取D帧音频数据帧;其中,D表示第一编码器与第二编码器切换过程中的总音频数据帧数,max表示取最大值操作,%表示取余操作,帧长表示一帧音频数据的时长;通过第一编码器将音频数据中的第一音频帧至第D音频帧进行编码,得到第三编码音频帧至第D+2编码音频帧;通过第二编码器将音频数据中的第D音频帧进行编码,得到第D+3编码音频帧,通过第二编码器将音频数据中的第D+1音频帧进行编码,得到第N编码音频帧;将第三编码音频帧至第D+2编码音频帧、第D+3编码音频帧、第N编码音频帧发送至音频播放设备;音频播放设备,还用于:通过第一解码器将第三编码音频帧至第D+2编码音频帧解码为第二播放音频帧至第D+1播放音频帧,通过第二解码器将第D+3编码音频帧解码为第三解码音频帧;播放第二播放音频帧至第D播放音频帧;对第D+1播放音频帧和第三解码音频帧进行平滑处理,得到目标播放音频帧,播放目标播放音频帧;通过第二解码器将第N编码音频帧解码为第N解码音频帧,播放第N解码音频帧。这样,当第一编码器与第二编码器的时延不同时,第一编码器与第二编码器的切换需要在多帧(D帧)内完成,使得第一编码器与第二编码器切换过程中,由第一编码器编码的音频数据到达音频播放设备并解码出来的时刻,与由第二编码器编码的音频数据到达音频播放设备并解码出来的时刻是一样的。若编码器切换需要在D帧内完成,音频播放设备直接将第一音频帧至第D-1音频帧解码并播放出来,音频播放设备对第D音频帧进行平滑处理之后再播放,防止编解码器切换时出现卡顿的情况,实现平滑过渡。对第D音频帧之后相邻的音频帧,例如第N音频帧,不需要平滑处理,直接通过第二解码器进行解码并播放出来。
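上述D帧数的计算公式可用如下代码草图表达(时延与帧长须使用同一单位,例如毫秒;“取整”按向下取整理解,对应整数除法;具体数值仅为示例):

```python
def switch_total_frames(total_delay_a, total_delay_b, frame_len):
    """D = 取整((max(编码器一的总时延, 编码器二的总时延)
              + (帧长 - 编码器一的总时延 % 帧长) + 帧长 - 1) / 帧长)
    返回第一编码器与第二编码器切换过程中的总音频数据帧数D。"""
    return (max(total_delay_a, total_delay_b)
            + (frame_len - total_delay_a % frame_len)
            + frame_len - 1) // frame_len
```

例如,两个编码器的总时延分别为25ms和40ms、帧长为20ms时,D=3,即切换需在3帧内完成。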
结合第一方面,在一种可能的实现方式中,音频播放设备,还用于:通过公式Pcm=wi*pcmA+(1-wi)*pcmB得到第一播放音频帧;其中,Pcm为第一播放音频帧,wi为平滑系数,wi大于0小于1,pcmA为第一解码音频帧,pcmB为第二解码音频帧。
结合第一方面,在一种可能的实现方式中,音频播放设备,还用于:通过公式Pcm=wi*pcmA+(1-wi)*pcmB得到目标播放音频帧;其中,Pcm为目标播放音频帧,wi为平滑系数,wi大于0小于1,pcmA为第D+1播放音频帧,pcmB为第三解码音频帧。
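上述平滑处理公式Pcm=wi*pcmA+(1-wi)*pcmB可用如下代码草图示意(逐采样点加权叠加;平滑系数序列wi的具体取法,例如在一帧内从接近1递减到接近0以实现两路解码结果的淡入淡出,为此处的假设):

```python
def smooth_frames(pcm_a, pcm_b, weights):
    """按 Pcm = wi*pcmA + (1-wi)*pcmB 对两路解码结果做平滑,其中 0 < wi < 1。
    pcm_a、pcm_b 为两个解码器解码出的同一帧PCM采样序列,weights 为逐采样点的平滑系数。"""
    assert len(pcm_a) == len(pcm_b) == len(weights)
    return [w * a + (1 - w) * b for w, a, b in zip(weights, pcm_a, pcm_b)]
```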
第二方面,本申请提供了另一种编解码器协商与切换方法,方法包括:当音频数据的第一参数信息满足第一条件时,电子设备根据第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;其中,第一类别为电子设备在获取音频数据之前,确定出电子设备与音频播放设备共有的编解码器类别;当第二参数信息满足第二条件时,电子设备根据第二类别中的第二编码器将音频数据编码成第二编码音频数据,并将第二编码音频数据发送至音频播放设备;其中,第二类别为电子设备在获取音频数据之前,确定出电子设备与音频播放设备共有的编解码器类别;第一条件与第二条件不同,第一类别与第二类别不同。
通过第二方面的方法,电子设备与音频播放设备在传输音频数据之前,将编解码器划分到多个类别中,并确定出电子设备与音频播放设备共有的编解码器类别(例如第一类别和第二类别)。之后,电子设备获取音频数据的第一参数信息,当音频数据的第一参数信息满足第一条件时,从共有的编解码器类别中选用第一类别中的编解码器进行音频数据的传输。当播放的音频数据内容、播放的音频数据的应用、用户选择、或者网络条件变化之后,电子设备获取音频数据的第二参数,当音频数据的第二参数满足第二条件时,电子设备不需要再次和音频播放设备协商编解码器,直接选用共有的编解码器类别中的第二类别中的编解码器进行音频数据的传输。这样,当电子设备需要切换编码器时,不需要再和音频播放设备重新协商编解码器的类型,直接从之前协商好的双方均支持的编解码器类别中选择一个类别中的编解码器进行音频数据传输,解决了电子设备与音频播放设备切换编解码器时音频数据中断和卡顿的问题,提高了用户体验。
结合第二方面,在一种可能的实现方式中,第一类别中的编码器至少包括第一编码器,第二类别中的编码器至少包括第二编码器。
结合第一方面,在一种可能的实现方式中,方法还包括:电子设备接收音频播放设备发送的第一类别的标识和第二类别的标识;其中,第一类别中的解码器至少包括第一解码器,第二类别中的解码器的至少包括第二解码器。这样,音频播放设备根据编解码器分类标准将解码器划分到多个类别中。示例性的,划分到多个类别中的解码器标识大于等于1的编解码器类别为第一类别和第二类别。划分到第一类别中的解码器至少包括第一解码器,还可以包括其他的解码器,例如第三解码器;划分到第二类别中的解码器至少包括第二解码器,还可以包括其他的解码器,例如第四解码器。
结合第一方面,在一种可能的实现方式中,方法还包括:电子设备确认出电子设备与音频播放设备的共有类别为第一类别和第二类别;电子设备将第一类别的标识和第二类别的标识发送至音频播放设备。电子设备将双方均支持的编解码器类别发送至音频播放设备,使得音频播放设备知道双方均支持的编解码器类别。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器只包括第一编码器;在电子设备确认出电子设备与音频播放设备的共有类别为第一类别和第二类别之后,方法还包括:当第一参数信息满足第一条件时,电子设备通过第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备。电子设备与音频播放设备均支持的编解码器类别为第一类别和第二类别,当第一类别中只包括一个编码器时,电子设备将该类别中的一个编码器作为默认的编码器和解码器。之后,当电子设备与音频播放设备采用第一类别中的编解码器进行音频数据传输时,根据第一类别中默认的编码器将音频 数据编码为第一编码音频数据,之后,电子设备将第一编码音频数据发送至音频播放设备。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器还包括第三编码器;在电子设备确认出电子设备与音频播放设备的共有类别为第一类别和第二类别之后,方法还包括:当第一参数信息满足第一条件时,电子设备通过第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;其中,第一编码器的功耗低于第三编码器,或者,第一编码器的优先级或功率高于第三编码器。电子设备与音频播放设备均支持的编解码器类别为第一类别和第二类别,当第一类别中包括多个编码器时,电子设备将从多个编码器根据预设的规则确定出一个编码器作为默认的编码器。默认的规则可以是优先级规则、效率高低规则和功耗高低规则等等。
结合第一方面,在一种可能的实现方式中,当电子设备与音频播放设备共有的编码器类别只包括第一类别时,方法还包括:当第二参数信息满足第二条件时,电子设备通过第一类别中的第一编码器将音频数据编码成第三编码音频数据,并将第三编码音频数据发送至音频播放设备。当电子设备与音频播放设备只支持一种编解码器类别时,这种情况下,当音频数据的参数信息由第一参数信息变化为第二参数信息时,并且第二参数信息满足第二条件,电子设备无法切换编解码器,电子设备还是采用第一类别中默认的编解码器与音频播放设备进行音频数据的传输。
结合第二方面,在一种可能的实现方式中,当电子设备未收到音频播放设备发送的第二类别的标识或电子设备划分到第二类别中的编码器的数量为0时,电子设备与音频播放设备共有的编码器类别只包括第一类别。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器只包括第一编码器,第一类别中的解码器只包括第一解码器;当第一参数信息满足第一条件时,电子设备根据第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备。电子设备与音频播放设备均支持的编解码器类别只包括第一类别,当第一类别中只包括一个编码器时,电子设备将该类别中的一个编码器作为默认的编码器。之后,当电子设备与音频播放设备采用第一类别中的编解码器进行音频数据传输时,根据第一类别中默认的编码器将音频数据编码为第一编码音频数据,之后,电子设备将第一编码音频数据发送至音频播放设备。
结合第一方面,在一种可能的实现方式中,第一类别中的编码器还包括第三编码器,第一类别中的解码器还包括第三解码器;当第一参数信息满足第一条件时,电子设备根据第一类别中的第一编码器将音频数据编码成第一编码音频数据,并将第一编码音频数据发送至音频播放设备;其中,第一编码器的功耗低于第三编码器,或者,第一编码器的优先级或功率高于第三编码器。电子设备与音频播放设备均支持的编解码器类别只包括第一类别,当第一类别中包括多个编码器时,电子设备将从多个编码器根据预设的规则确定出一个编码器作为默认的编码器。默认的规则可以是优先级规则、效率高低规则和功耗高低规则等等。
结合第二方面,在一种可能的实现方式中,第一类别中的编解码器为高清音质编解码器,第二类别中的编解码器为标准音质编解码器;或第一类别中的编解码器为标准音质编解码器,第二类别中的编解码器为高清音质编解码器。
结合第一方面,在一种可能的实现方式中,在电子设备获取音频数据之前,方法还包括:电子设备基于第一编码器的参数信息以及编解码器分类标准将第一编码器划到第一类别中,基于第二编码器的参数信息以及编解码器分类标准将第二编码器划分到第二类别中;其中,第一编码器的参数信息和第二编码器的参数信息包括采样率、码率、量化位深、声道数和音 频流格式中的一个或多个;编解码器分类标准包括编解码器类别与编解码器的参数信息的映射关系。需要说明的是,第一编码器的参数信息、第二编码器的参数信息均相同。
结合第一方面,在一种可能的实现方式中,第一类别中的编解码器的采样率大于等于目标采样率,第二类别中的编解码器的采样率小于目标采样率;和/或,第一类别中的编解码器的码率大于等于目标码率,第二类别中的编解码器的码率小于目标码率;和/或,第一类别中的编解码器的声道数大于等于目标声道数,第二类别中的编解码器的声道数小于目标声道数;和/或,第一类别中的编解码器的量化位深大于等于目标量化位深,第二类别中的编解码器的量化位深小于目标量化位深;和/或,第一类别中的编解码器的音频流格式为目标音频流格式,第二类别中的编解码器的音频流格式为目标音频流格式。
结合第一方面,在一种可能的实现方式中,第一参数信息中的参数种类、第一编码器的参数信息中的参数种类、第一解码器的参数信息中的参数种类、第二参数信息中的参数种类、第二编码器的参数信息中的参数种类、第二解码器的参数信息中的参数种类相同;第一参数信息满足第一条件,第二参数信息满足第二条件,具体包括:第一参数信息中的采样率大于等于目标采样率,第二参数信息中的采样率小于目标采样率;和/或,第一参数信息中的码率大于等于目标码率,第二参数信息中的码率小于目标码率;和/或,第一参数信息中的量化位深大于等于目标量化位深,第二参数信息中的量化位深小于目标量化位深;和/或,第一参数信息中的声道数大于等于目标声道数,第二参数信息中的声道数小于于目标声道数;和/或,第一参数信息中的音频流格式为目标音频流格式,第二参数信息中的音频流格式为目标音频流格式。
第三方面,本申请提供了一种电子设备,包括一个或多个处理器、一个或多个存储器,一个或多个编码器;一个或多个存储器、一个或多个编码器与一个或多个处理器耦合,一个或多个存储器用于存储计算机程序代码,计算机程序代码包括计算机指令,一个或多个处理器调用计算机指令以使得电子设备执行如第二方面中任一种可能的实现方式中的一种编解码器协商与切换方法。
第四方面,本申请提供了一种计算机可读存储介质,计算机可读存储介质中存储有计算机可执行指令,计算机可执行指令在被计算机调用时用于使计算机执行上述第二方面中任一种可能的实现方式中提供的一种编解码器协商与切换方法。
第五方面,本申请提供了一种包含指令的计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述第二方面中任一种可能的实现方式中提供的一种编解码器协商与切换方法。
附图说明
图1为本申请实施例提供的一种电子设备与音频播放设备传输音频数据的过程示意图;
图2为本申请实施例提供的一种系统示意图;
图3为本申请实施例提供的一种组网传输的系统示意图;
图4为本申请实施例提供的另一种电子设备100与音频播放设备200传输音频数据的过程示意图;
图5为本申请实施例提供的一种电子设备100的结构示意图;
图6为本申请实施例提供的一种电子设备100(例如手机)的软件结构框图;
图7为本申请实施例提供的一种音频播放设备200的硬件结构示意图;
图8为本申请实施例提供的一种电子设备100与音频播放设备200协商共有的编解码器类别的示意图;
图9为本申请实施例提供的一种编解码器协商与切换方法的流程图;
图9A-图9C为本申请实施例提供的一组电子设备100与音频播放设备200通过蓝牙建立通信连接的UI图;
图10A-图10D为本申请实施例提供的另一组UI图。
具体实施方式
下面将结合附图对本申请实施例中的技术方案进行清楚、详尽地描述。其中,在本申请实施例的描述中,除非另有说明,“/”表示或的意思,例如,A/B可以表示A或B;文本中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,另外,在本申请实施例的描述中,“多个”是指两个或多于两个。
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为暗示或暗示相对重要性或者隐含指明所指示的技术特征的数量。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征,在本申请实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请的说明书和权利要求书及附图中的术语“用户界面(user interface,UI)”,是应用程序或操作系统与用户之间进行交互和信息交换的介质接口,它实现信息的内部形式与用户可以接受形式之间的转换。应用程序的用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在终端设备上经过解析,渲染,最终呈现为用户可以识别的内容,比如图像、文本、按钮等控件。控件(control)也称为部件(widget),是用户界面的基本元素,典型的控件有工具栏(toolbar)、菜单栏(menu bar)、输入框、按钮(button)、滚动条(scrollbar)、图像和文本。界面中的控件的属性和内容是通过标签或者节点来定义的,比如XML通过<Textview>、<ImgView>、<VideoView>等节点来规定界面所包含的控件。一个节点对应界面中一个控件或属性,节点经过解析和渲染之后呈现为用户可视的内容。此外,很多应用程序,比如混合应用(hybrid application)的界面中通常还包含有网页。网页,也称为页面,可以理解为内嵌在应用程序界面中的一个特殊的控件,网页是通过特定计算机语言编写的源代码,例如超文本标记语言(hyper text markup language,HTML),层叠样式表(cascading style sheets,CSS),java脚本(JavaScript,JS)等,网页源代码可以由浏览器或与浏览器功能类似的网页显示组件加载和显示为用户可识别的内容。网页所包含的具体内容也是通过网页源代码中的标签或者节点来定义的,比如HTML通过<p>、<img>、<video>、<canvas>来定义网页的元素和属性。
用户界面常用的表现形式是图形用户界面(graphic user interface,GUI),是指采用图形方式显示的与计算机操作相关的用户界面。它可以是在电子设备的显示屏中显示的一个窗口、控件等界面元素。
目前,电子设备(例如手机)与音频播放设备(例如耳机)建立通信连接之后,电子设备将音频数据发送至音频播放设备,音频播放设备将播放电子设备发送的音频数据。在电子设备将音频数据发送至音频播放设备之前,电子设备与音频播放设备协商编解码器类型,电子设备将选择双方都支持的编解码器对音频数据进行编码,并将编码后的音频数据发送至音频播放设备。
如图1所示,图1示例性示出了一种电子设备与音频播放设备传输音频数据的过程示意图。
其中,电子设备可以是音频信号源端(Source,SRS),音频播放设备可以是音频信号宿端(Sink,SNK)。
电子设备包括音频数据获取单元、音频流解码单元、混音渲染单元、无线音频编码单元、能力协商单元和无线传输单元。
音频播放设备包括无线传输单元、无线音频解码单元、音频功放单元、音频播放单元和能力协商单元。
在电子设备将获取的音频数据发送给音频播放设备之前,电子设备与音频播放设备将协商得到双方均支持的一种类型的编解码器进行数据传输。具体的,在电子设备与音频播放设备建立通信连接之后,音频播放设备通过能力协商单元将所有的解码器标识和所有解码器的能力发送至音频播放设备侧的无线传输单元,其中,解码器的标识为解码器的编号,音频播放设备可以根据解码器的标识找到解码器的标识对应的解码器,并获取到解码器的标识对应的解码器的能力;音频播放设备侧的无线传输单元将所有的解码器标识和所有解码器的能力发送至电子设备侧的无线传输单元,电子设备侧的无线传输单元将所有的解码器标识和所有解码器的能力发送至电子设备侧的能力协商单元。电子设备侧的能力协商单元获取电子设备中所有编码器的标识和所有编码器的能力,其中,编码器的标识为编码器的编号,电子设备可以根据编码器的标识找到编码器的标识对应的编码器,并获取到编码器的标识对应的编码器的能力。电子设备侧的能力协商单元根据所有编解码器的能力,得到电子设备和音频播放设备共有的一种或多种能力的编解码器,其中,编解码器的能力包括编解码器的支持的采样率数值、量化位深的数值、码率、声道数等等。电子设备将根据播放的音频类型等因素从电子设备和音频播放设备共有的一种或多种能力的编解码器中确定出一个编解码器标识,作为默认的编解码器。之后,电子设备和音频播放设备将根据该默认的编解码器标识进行音频数据的传输。电子设备的能力协商单元将默认的编码器标识发送至无线编码单元。同时,电子设备的能力协商单元将默认的解码器标识发送至电子设备侧的无线传输单元,电子设备侧的无线传输单元将默认的解码器标识发送至音频播放设备侧的无线传输单元,音频播放设备侧的无线传输单元将默认的解码器标识发送至音频播放设备的能力协商单元,音频播放设备通过能力协商单元将默认的解码器标识发送至无线音频解码单元。
在一些实施例中,电子设备和音频播放设备也可以不包括能力协商单元。当电子设备和音频播放设备中没有能力协商单元时,电子设备和音频播放设备中的无线传输单元可以实现编解码器能力协商的功能。
对于电子设备,音频数据获取单元用于获取音频码流,该音频码流可以是电子设备实时获取的网络音频码流,也可以是电子设备设备中缓存的音频码流。音频数据获取音频码流之后,将音频码流发送至音频内容解码单元。
音频内容解码单元接收音频数据获取单元发送的音频码流,将音频码流解码出来,得到未压缩的音频码流。之后,音频内容解码单元将未压缩的音频码流发送至混音渲染单元。
混音渲染单元接收音频内容解码单元发送的未压缩的音频码流,对未压缩的音频码流进行混音和渲染,将混音和渲染后的音频码流称为音频数据。混音即将未压缩的音频码流与具有环境色彩的音频数据进行混音,使得混音渲染后的音频码流具有环境色彩,可以理解的是,电子设备可以提供多路用于混音渲染的音频数据。例如,在纪录片的配音解说中,配音员录制好解说的音频码流,为了使得音频码流与纪录片的画面相符,需要渲染音频码流的环境色彩,增加神秘气氛。渲染是对音频数据的采样率、采样位深、声道数等进行渲染调整。
需要说明的是,在一些实施例中,电子设备也可以不包括混音渲染单元,即电子设备不需要对音频码流进行混音渲染处理。本申请在此不做限定。之后,混音渲染单元将音频数据发送至无线音频编码单元。
无线音频编码单元接收混音渲染单元发送的音频数据,并根据默认的编码器标识将音频数据进行编码,之后,无线音频编码单元将编码后的音频数据发送至无线传输单元。
无线传输单元接收无线音频编码单元发送的编码后的音频数据,并将编码后的音频数据通过电子设备与音频播放设备之间的传输通道发送至音频播放设备的无线传输单元。
音频播放设备的无线传输单元接收电子设备的无线传输单元发送的编码后的音频数据,音频播放设备的无线传输单元将编码后的音频数据发送至音频播放设备的无线音频解码单元。
音频播放设备的无线音频解码单元接收无线传输单元发送的编码后的音频数据,并根据默认的解码器标识将编码后的音频数据进行音频解码,得到未压缩的音频数据。音频播放设备的无线音频解码单元将未压缩的音频数据发送至音频播放设备的音频功放单元。
音频播放设备的音频功放单元接收未压缩的音频数据,并将未压缩的音频数据进行数模转换、功率放大等操作,再通过音频播放单元将音频数据播放出去。
电子设备与音频播放设备在初始建立通信连接时,将根据协商得到的默认的编解码标识进行音频数据传输。但是当网络变差,或者电子设备使用更高清音质进行传输时,电子设备需要切换至适应于网络传输的编码器或者更高清音质的编码器。但是,电子设备切换编码器时,电子设备需要重新与音频播放设备协商编解码器的能力。在电子设备重新与音频播放设备协商编解码器的能力时,电子设备会暂停发送音频数据至音频播放设备,导致电子设备与音频播放设备在切换编解码器的过程中导致音频数据中断和播放卡顿的问题,影响用户体验。
因此,本申请提供了一种编解码器协商与切换方法。方法包括:在电子设备与音频播放设备建立通信连接之前,电子设备与音频播放设备根据采样率、量化位深、码率、声道数等参数将一个或多个编解码器分为多个类别。在电子设备与音频播放设备建立通信连接之后,在电子设备将音频数据发送至音频播放设备之前,音频播放设备将解码器标识数量大于等于1的类别标识发送至电子设备。电子设备根据解码器的标识数量大于等于1的类别和编码器的标识数量大于等于1的类别,得到共有的类别。之后,电子设备根据用户选择、播放音频数据特性、电子设备音频渲染能力是否开启、应用类型等条件选择其中一个类别下默认的一个编解码器进行音频数据传输。当用户选择、播放音频数据特性、电子设备音频渲染能力是否开启、应用类型等条件发生改变时,电子设备将重新选择另一个类别下默认的编码器进行编码传输,并将重新选择的另一个类别的标识发送至音频播放设备,音频播放设备采用该类别下默认的解码器进行解码并播放音频数据。这样,当电子设备需要切换编码器时,不需要再和音频播放设备重新协商编解码的类型,解决了电子设备与音频播放设备切换编解码器时音频数据中断和卡顿的问题,提高了用户体验。
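上述“得到共有的类别”一步可用如下代码草图示意(类别名称与各类别下的标识均为假设;只有双方各自在某类别下的标识数量大于等于1时,该类别才参与求交):

```python
def common_categories(encoder_ids_by_category, decoder_ids_by_category):
    """电子设备取本端编码器标识数量>=1的类别,与音频播放设备发来的
    解码器标识数量>=1的类别求交集,得到双方共有的编解码器类别(示意)。"""
    local = {c for c, ids in encoder_ids_by_category.items() if len(ids) >= 1}
    remote = {c for c, ids in decoder_ids_by_category.items() if len(ids) >= 1}
    return sorted(local & remote)
```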
本技术方案适用于以手机与无线耳机点对点连接的无线音频播放场景;也适用于以平板 /PC/智能手表等穿戴设备等与无线耳机点对点连接的场景;也适用于以手机/PC/平板/智能电视/机顶盒/智能音箱/智能路由为中心进行组网的智能家庭场景,音频播放设备可以为音箱/回音壁/智能电视中的一类或多类进行组网播放。
接下来介绍本申请实施例提供的一种无线音频播放系统架构。
如图2所示,图2为本申请实施例提供的一种系统示意图。
首先,电子设备100与音频播放设备200建立通信连接,电子设备100可以将音频数据发送至音频播放设备,音频播放设备播放音频数据。
电子设备100可以是手机、平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备、智能家居设备和/或智慧城市设备,本申请实施例对电子设备100的具体类型不作特殊限制。电子设备100的软件系统包括但不限于
Figure PCTCN2022083816-appb-000001、Figure PCTCN2022083816-appb-000002、Linux或者其它操作系统。Figure PCTCN2022083816-appb-000003为华为的鸿蒙系统。
音频播放设备200是指具备音频播放能力的设备,音频播放设备可以是但不仅限于耳机、音箱、电视、AR/VR眼镜设备、平板/PC/智能手表等穿戴设备等。
本申请以下实施例以电子设备100为手机,音频播放设备200为蓝牙耳机为例进行说明。
电子设备100与音频播放设备200之间可以通过无线通信技术连接并进行通信。这里的无线通信技术包括但不仅限于:无线局域网(wireless local area network,WLAN)技术、蓝牙(bluetooth)、红外线、近场通信(near field communication,NFC)、ZigBee、无线保真直连(wireless fidelity direct,Wi-Fi direct)(又称为无线保真点对点(wirelessfidelity peer-to-peer,Wi-Fi P2P))以及后续发展中出现的其他无线通信技术等。为了描述方便,以下实施例将以电子设备100与音频播放设备200之间通过蓝牙(bluetooth)无线通信技术通信为例进行说明。
当音频播放设备200通过蓝牙技术连接到电子设备100时,之后,电子设备100向音频播放设备200发送同步对时信息(例如握手信息)来进行网络同步。在组网成功并完成同步之后,音频播放设备200在电子设备100的控制下播放音频。即电子设备100通过建立的蓝牙通道将音频数据发送给该音频播放设备200,音频播放设备200播放电子设备100发送的音频数据。
图2所示的系统示意图只是示例性的示出了一种系统,在一些实施例中,电子设备100也可以同时与多个音频播放设备200建立通信连接。本申请以下实施例以电子设备100与一个音频播放设备200建立连接进行说明。需要说明的是,本申请对于音频播放设备200的数量不做限定。
接下来介绍本申请实施例提供的另一种组网连接的无线音频播放系统架构。
如图3所示,电子设备100同时与多个音频播放设备200建立通信连接。
电子设备100可以是手机、平板电脑、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本,以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)设备、虚拟现实(virtual reality,VR)设备、人工智能(artificial intelligence,AI)设备、可穿戴式设备、车载设备、智能家居设备和/或智慧城市设备,本申请实施例对电子设备100的具体类型不作特殊限制。电子设备100的软件系统包括但不限于
Figure PCTCN2022083816-appb-000004、Figure PCTCN2022083816-appb-000005、Linux或者其它操作系统。Figure PCTCN2022083816-appb-000006为华为的鸿蒙系统。
音频播放设备200是指具备音频播放能力的设备,音频播放设备可以是但不仅限于耳机、音箱、电视、ARVR眼镜设备、平板/PC/智能手表等穿戴设备等。
本申请实施例以电子设备100为手机,多个音频播放设备200分别为耳机、音箱为例进行说明。即手机同时与耳机、音箱建立了连接,手机可以同时将多媒体内容(例如音频数据)发送至耳机、音箱。
当多个音频播放设备200(例如耳机和音箱)对电子设备100传输的同一音频数据无同步时延要求时,这种情况下,可以将电子设备100与多个音频播放设备的连接视为多个独立的系统的组成。即电子设备100与耳机协商共有的编解码器分类类别,以及每个类别下默认的一个编码器标识和一个默认的解码器标识。电子设备100与音箱协商共有的编解码器分类类别,以及每个类别下默认的一个编码器标识和一个默认的解码器标识。电子设备100与耳机可以独立的选择双方共有的编解码器分类类别中的一个类别进行音频数据的传输。电子设备100与音箱可以独立的选择双方共有的编解码器分类类别中的一个类别进行音频数据的传输。电子设备100与耳机选择的编解码器分类类别和电子设备100与音响选择的编解码器分类类别可以相同也可以不相同。并且,电子设备100与耳机或电子设备100与音箱切换编解码器分类类别互不影响。电子设备100与耳机或电子设备100与音箱选择与切换编解码器分类类别的方法与以下实施例介绍的电子设备100与音频播放设备200选择和切换编解码器分类类别的方法一致,本申请在此不再赘述。
当多个音频播放设备200(例如耳机和音箱)对电子设备100传输的同一音频数据有同步时延要求时,这种情况下,电子设备100与多个音频播放设备的连接视为一个完整的系统。具体的,当电子设备100与多个音频播放设备协商得到多方均支持的编解码器分类类别时,耳机将所有编解码器分类类别以及每个类别下的解码器标识发送至电子设备100,音箱将所有编解码器分类类别以及每个类别下的解码器标识发送至电子设备100。电子设备100获取电子设备100中所有编解码器分类类别以及每个类别下的编码器标识。之后,电子设备100从电子设备100中编解码器分类类别、耳机和音箱发送的编解码器分类类别中,确认出共有的编解码器分类类别,即电子设备100、耳机和音箱均支持的编解码器分类类别,电子设备100确认出电子设备100、耳机和音箱均支持的编解码器分类类别的方法与以下实施例介绍的电子设备100与音频播放设备200确认双方均支持的编解码器分类类别的方法一致,本身请在此不再赘述。然后,电子设备100确定出电子设备100、耳机和音箱均支持的编解码器分类类别中一个默认的编码器和一个默认的解码器。电子设备100将电子设备100、耳机和音箱均支持的编解码器分类类别、以及每个类别下默认的一个编码器标识和默认的一个解码器标识发送至耳机和音箱。需要说明的是,电子设备100选择和切换编解码器分类类别与以下实施例介绍的电子设备100与音频播放设备200选择和切换编解码器分类类别的方法一致,本身请在此不再赘述。
以下以电子设备100与一个音频播放设备200建立连接为例,对本实施例提供的一种编解码器协商与切换方法进行说明。电子设备100与多个音频播放设备200建立连接的情况,与电子设备100与一个音频播放设备200建立连接的原理相同,本申请在此不再赘述。
如图4所示,图4示例性示出了另一种电子设备100与音频播放设备200传输音频数据的过程示意图。
电子设备100包括音频数据获取单元、音频流解码单元、混音渲染单元、无线音频编码 单元、能力协商单元、编码控制单元和无线传输单元。
音频播放设备200包括无线传输单元、无线音频解码单元、音频功放单元、音频播放单元和能力协商单元和解码控制单元。
其中,电子设备100中的音频数据获取单元、音频流解码单元、混音渲染单元、无线音频编码单元和无线传输单元与图1中所示的音频数据获取单元、音频流解码单元、混音渲染单元、无线音频编码单元和无线传输单元的功能相同,本申请在此不再赘述。
同理,音频播放设备200中的无线传输单元、无线音频解码单元、音频功放单元、音频播放单元与图1中所示的无线传输单元、无线音频解码单元、音频功放单元、音频播放单元的功能相同,本申请在此不再赘述。
对于电子设备100,能力协商单元具体用于获取编解码器分类标准、电子设备100中所有的编码器的标识和所有的编码器能力,编码器能力包括编解码器的采样率、量化位深、码率、声道数等参数信息。并根据编解码器分类标准和电子设备100中所有的编码器的能力,将电子设备100中所有的编码器划分到多个类别。一个编码器可以属于一个或多个类别。需要说明的是,编解码器分类标准是电子设备100中预置的。
同理,对于音频播放设备200,能力协商单元具体用于获取编解码器分类标准、音频播放设备200中所有的解码器的标识和所有的解码器能力。并根据编解码器分类标准以及电子设备100中所有的解码器的能力,将音频播放设备200中所有的解码器分为多个类别。一个解码器可以属于一个或多个类别。需要说明的是,编解码器分类标准是音频播放设备200中预置的。
之后,音频播放设备200通过能力协商单元将音频播放设备200中解码器标识的数量大于等于1的类别标识发送至音频播放设备200中的无线传输单元。音频播放设备200中的无线传输单元将解码器标识的数量大于等于1的类别的标识发送至电子设备100中的无线传输单元。解码器标识的数量大于等于1的类别可以理解为,该类别下包括的解码器标识的数量大于等于1。
电子设备100中的无线传输单元接收并将解码器标识的数量大于等于1的类别标识发送至电子设备100中的能力协商单元。
电子设备100中的能力协商单元接收解码器标识的数量大于等于1的类别的标识。
电子设备100中的能力协商单元也获取到电子设备100中编码器标识的数量大于等于1的类别标识。
电子设备100中的能力协商单元将解码器标识的数量大于等于1的类别标识和编码器标识的数量大于等于1的类别标识发送至电子设备100中的编码控制单元,电子设备100中的编码控制单元从解码器标识的数量大于等于1的类别标识和编码器标识的数量大于等于1的类别标识,确认出共同的类别标识。并确定出每一个共同的类别中,默认的一个编码器。这里,编码控制单元如何协商电子设备100与音频播放设备200每一个共同的类别中中,默认的一个编码器,将在后续实施例详细介绍,本申请在此不做限定。编码控制单元确认出共同的类别以及每个共同的类别下默认的编码器之后,编码控制单元将根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件等等从共同的类别中选择适当的一个类别(例如第一类别),和该类别中默认的编码器进行音频数据传输。编码控制单元如何根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件等等从共同的类别中选择第一类别,将在后续实施例详细介绍,本申请在此不做限定。
之后,编码控制单元将第一类别的标识发送至无线音频编码单元,无线音频编码单元将根据第一类别中默认的编码器标识对应的编码器对音频数据进行编码。
同时,编码控制单元将第一类别的标识发送至无线传输单元,电子设备100通过无线传输单元将第一类别的标识发送至音频播放设备200的无线传输单元,音频播放设备200的无线传输单元将第一类别的标识发送至音频播放设备200中的能力协商单元,音频播放设备200中的能力协商单元将第一类别的标识发送至音频播放设备200中的解码控制单元,音频播放设备200中的解码控制单元将第一类别的标识发送至无线音频解码单元,无线音频解码单元将根据第一类别中默认的解码器标识对应的解码器对电子设备100发送的已编码的音频数据进行解码。
当编码控制单元从共同的类别中选择适当的一个类别(例如第一类别),并采用该第一类别中默认的编解码器进行音频数据传输之后,由于应用类型变化、播放音频特性(采样率、量化位深、声道数)变化、电子设备音频渲染能力开启、信道的网络条件变化等因素,电子设备100将通过编码控制单元重新选择另一个类别,并将该类别的标识告知音频播放设备200中的解码控制单元。电子设备100与音频播放设备200采用另一个类别中默认的编解码器进行音频数据的传输。
如图5所示,图5示例性示出了电子设备100的结构示意图。
下面以电子设备100为手机为例对实施例进行具体说明。应该理解的是,图5所示电子设备100仅是一个范例,并且电子设备100可以具有比图5中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备100可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
可以理解的是,本发明实施例示意的结构并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器110可以包含多组I2C总线。处理器110可以通过不同的I2C总线接口分别耦合触摸传感器180K,充电器,闪光灯,摄像头193等。例如:处理器110可以通过I2C接口耦合触摸传感器180K,使处理器110与触摸传感器180K通过I2C总线接口通信,实现电子设备100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器110可以包含多组I2S总线。处理器110可以通过I2S总线与音频模块170耦合,实现处理器110与音频模块170之间的通信。在一些实施例中,音频模块170可以通过I2S接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块170与无线通信模块160可以通过PCM总线接口耦合。在一些实施例中,音频模块170也可以通过PCM接口向无线通信模块160传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信。
UART接口是一种通用串行数据总线,用于异步通信。该总线可以为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器110与无线通信模块160。例如:处理器110通过UART接口与无线通信模块160中的蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块170可以通过UART接口向无线通信模块160传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器110与显示屏194,摄像头193等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI)等。在一些实施例中,处理器110和摄像头193通过CSI接口通信,实现电子设备100的拍摄功能。处理器110和显示屏194通过DSI接口通信,实现电子设备100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以被配置为控制信号,也可被配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器110与摄像头193,显示屏194,无线通信模块160,音频模块170,传感器模块180等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
可以理解的是,本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不 构成对电子设备100的结构限定。在本申请另一些实施例中,电子设备100也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。电源管理模块141还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在其他一些实施例中,电源管理模块141也可以设置于处理器110中。在另一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。移动通信模块150可以由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调处理器进行解调。移动通信模块150还可以对经调制解调处理器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。在一些实施例中,移动通信模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调处理器可以包括调制器和解调器。其中,调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器170A,受话器170B等)输出声音信号,或通过显示屏194显示图像或视频。在一些实施例中,调制解调处理器可以是独立的器件。在另一些实施例中,调制解调处理器可以独立于处理器110,与移动通信模块150或其他功能模块设置在同一个器件中。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器110。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信 模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS)和/或星基增强系统(satellite based augmentation systems,SBAS)。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备 100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口170D可以是USB接口130,也可以是3.5mm的开放移动电子设备平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器180A可以设置于显示屏194。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器180A,电极之间的电容改变。电子设备100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏194,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪传感器180B确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器180B可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器180B检测电子设备100 抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消电子设备100的抖动,实现防抖。陀螺仪传感器180B还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,电子设备100通过气压传感器180C测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。电子设备100可以利用磁传感器180D检测翻盖皮套的开合。在一些实施例中,当电子设备100是翻盖机时,电子设备100可以根据磁传感器180D检测翻盖的开合。进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。电子设备100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备100可以利用距离传感器180F测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备100通过发光二极管向外发射红外光。电子设备100使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备100附近有物体。当检测到不充分的反射光时,电子设备100可以确定电子设备100附近没有物体。电子设备100可以利用接近光传感器180G检测用户手持电子设备100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。电子设备100可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。电子设备100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,电子设备100利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备100执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备100对电池142加热,以避免低温导致电子设备100异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备100对电池142的输出电压执行升压,以避免低温导致的异常关机。
触摸传感器180K,也称“触控面板”。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备100的表面,与显示屏194所处的位置不同。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解 析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图6是本发明实施例的电子设备100(例如手机)的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图6所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图6所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合捕获拍照场景,示例性说明电子设备100软件以及硬件的工作流程。
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头193捕获静态图像或视频。
如图7所示,图7示例性示出了音频播放设备200的硬件结构示意图。
图7示例性的示出了本申请实施例提供的音频播放设备200(例如蓝牙设备)的结构示意图。
下面以音频播放设备200为蓝牙设备为例对实施例进行具体说明。应该理解的是,图7所示音频播放设备200仅是一个范例,并且音频播放设备200可以具有比图7中所示的更多或更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
如图7所示,音频播放设备200可以包括:处理器201,存储器202,蓝牙通信模块203,天线204,电源开关205,USB通信处理模块206,音频模块207。其中:
处理器201可用于读取和执行计算机可读指令。具体实现中,处理器201可主要包括控制器、运算器和寄存器。其中,控制器主要负责指令译码,并为指令对应的操作发出控制信号。运算器主要负责保存指令执行过程中临时存放的寄存器操作数和中间操作结果等。具体实现中,处理器201的硬件架构可以是专用集成电路(ASIC)架构、MIPS架构、ARM架构或者NP架构等等。
在一些实施例中,处理器201可以用于解析蓝牙通信模块203接收到的信号,如终端100发送的配对模式修改请求,等等。处理器201可以用于根据解析结果进行相应的处理操作,如生成配对模式修改响应,等等。
存储器202与处理器201耦合,用于存储各种软件程序和/或多组指令。具体实现中,存储器202可包括高速随机存取的存储器,并且也可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。存储器202可以存储操作系统,例如uCOS,VxWorks、RTLinux等嵌入式操作系统。存储器202还可以存储通信程序,该通信程序可用于与终端100,一个或多个服务器,或其他设备进行通信。
蓝牙通信模块203可以包括经典蓝牙(BT)模块和低功耗蓝牙(BLE)模块。
在一些实施例中,蓝牙通信模块203可以监听到其他设备(如终端100)发射的信号,如探测请求、扫描信号等等,并可以发送响应信号、扫描响应等,使得其他设备(如终端100)可以发现音频播放设备200,并与其他设备(如终端100)建立无线通信连接,通过蓝牙与其他设备(如终端100)进行通信。
在另一些实施例中,蓝牙通信模块203也可以发射信号,如广播BLE信号,使得其他设备(如终端100)可以发现音频播放设备200,并与其他设备(如终端100)建立无线通信连接,通过蓝牙与其他设备(如终端100)进行通信。
音频播放设备200的无线通信功能可以通过天线204,蓝牙通信模块203,调制解调处理器等实现。
天线204可用于发射和接收电磁波信号。音频播放设备200中的每个天线可用于覆盖单个或多个通信频带。
在一些实施例中蓝牙通信模块203的天线可以有一个或多个。
电源开关205可用于控制电源向音频播放设备200的供电。
USB通信处理模块206可用于通过USB接口(未示出)与其他设备进行通信。在一些实施例中,音频播放设备200也可以不包括USB通信处理模块206。
音频模块207可用于通过音频输出接口输出音频信号,这样可使得音频播放设备200支持音频播放。音频模块还可用于通过音频输入接口接收音频数据。音频播放设备200可以为蓝牙耳机等媒体播放设备。
在一些实施例中,音频播放设备200还可以包括显示屏(未示出),其中,该显示屏可用于显示图像,提示信息等。显示屏可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED)显示屏,有源矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED)显示屏,柔性发光二极管(flexible light-emitting diode,FLED)显示屏,量子点发光二极管(quantum dot light emitting diodes,QLED)显示屏等等。
在一些实施例中,音频播放设备200还可以包括RS-232接口等串行接口。该串行接口可连接至其他设备,如音箱等音频外放设备,使得音频播放设备200和音频外放设备协作播放音视频。
可以理解的是图7示意的结构并不构成对音频播放设备200的具体限定。在本申请另一 些实施例中,音频播放设备200可以包括比图示更多或更少的部件,或组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
在电子设备100与音频播放设备200建立通信连接之前,电子设备100根据编解码器分类标准,将电子设备100中的所有编码器归类到多个类别中,音频播放设备200将音频播放设备200中的所有解码器归类到多个类别中。
在一些实施例中,电子设备100与音频播放设备200也可以在建立通信连接之后,再分别将电子设备100和音频播放设备200中的一个或多个编解码器归类到多个类别中。本申请对于电子设备100与音频播放设备200将一个或多个编解码器进行分类的时间不做限定。
接下来将详细介绍编解码器分类标准,以及电子设备100和音频播放设备200如何根据编解码器分类标准将电子设备100和音频播放设备200中的编码器归类到多个类别下的。
编解码器分类标准。
编解码器分类标准可以根据采样率、量化位深、码率、声道数等中的一个或两个及以上参数的组合得到。示例性的,编解码器分类标准可以根据采样率、量化位深、码率、声道数等中的一个参数得到,例如,一个参数可以是采样率;编解码器分类标准也可以根据两个参数得到,例如,两个参数可以是采样率和量化位深;编解码器分类标准也可以根据三个参数得到,例如,三个参数可以是采样率、量化位深与码率;编解码器分类标准也可以根据四个参数得到,例如,四个参数可以是采样率、量化位深、码率、声道数。本申请对于编解码器分类标准使用的参数种类不做限定。编解码器分类标准还可以参考其他的参数,例如音频格式等等,本申请在此不做限定。示例性地,编解码器分类标准是电子设备100和音频播放设备200中预先存在的。
其中,采样率为单位时间(例如一秒钟)内对声音信号的采样次数,采样率越高,声音的还原就越真实,音质越好。
量化位深为量化精度,它决定数字音频的动态范围。当进行频率采样时,较高的量化位深可以提供更多可能的振幅值,从而产生更大的动态范围,更高的信噪比,提高保真度。
码率是指比特率,表示单位时间内传送的比特数目,单位为比特每秒或者千比特每秒。码率越高,每秒传送的音频数据越多,音质就越清晰。
声道数为支持的能发出不同声音的音响的个数。声道数包括单声道、双声道、2.1声道、5.1声道、7.1声道等等。
音频数据的格式目前使用的一般为PCM数据格式,PCM(pulse code modulation,脉冲编码调制)数据格式是未经压缩的音频数据流,它是由模拟信号经过采样、量化、编码转换成的标准数字音频数据。音频数据的格式还包括MP3数据格式、MPEG数据格式、MPEG-4数据格式、WAVE数据格式、CD数据格式等等。
如上所述,编解码器分类标准是根据一个或多个参数的取值范围对编解码器进行分类的。
示例性的,当编解码器分类标准是根据采样率得到时,可以将采样率按照设备中经常使用的编解码的最低采样率和最高采样率将采样率分为多个段,每一个段的采样率的数值不同。具体的,当编解码器的采样率的数值大于等于第一采样率时,则将该编解码器划分为类别一;当编解码器的采样率的数值小于第一采样率,大于等于第二采样率时,则将该编解码器划分为类别二;当编解码器的采样率的数值小于第二采样率时,则将该编解码器划分为类别三。其中,第一采样率大于第二采样率。
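上述按采样率划分类别的规则可以用一段示意性的Python代码表示。其中的函数名与阈值数值均为本示例假设,并非本申请限定的实现:

```python
# 示意:按采样率阈值将编解码器划分类别(阈值数值为本示例假设)
FIRST_SAMPLE_RATE = 48000   # 第一采样率
SECOND_SAMPLE_RATE = 24000  # 第二采样率

def classify_by_sample_rate(supported_rates):
    """根据编解码器支持的一组采样率,返回其所属类别的集合。

    只要某一个支持的采样率落入某类别的取值范围,
    该编解码器即划分到该类别,因此一个编解码器可以同时属于多个类别。
    """
    categories = set()
    for rate in supported_rates:
        if rate >= FIRST_SAMPLE_RATE:
            categories.add("类别一")
        elif rate >= SECOND_SAMPLE_RATE:
            categories.add("类别二")
        else:
            categories.add("类别三")
    return categories
```

例如,一个仅支持8kHz和16kHz的编码器将只划分到类别三;支持从8kHz到96kHz多个采样率的编码器则可同时划分到类别一、类别二和类别三。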
示例性的,当编解码器分类标准是根据采样率、码率得到时,可以将采样率按照设备中经常使用的编解码的最低采样率和最高采样率、最低码率和最高码率将采样率和码率分为多个段。具体的,当编解码器的采样率的数值大于等于第一采样率,并且编解码器的码率的数值大于等于第一码率时,则将该编解码器划分为类别一;当编解码器的采样率的数值小于第一采样率大于等于第二采样率,并且编解码器的码率的数值小于第一码率大于等于第二码率时,则将该编解码器划分为类别二;当编解码器的采样率的数值小于第二采样率,并且编解码器的码率的数值小于第二码率时,则将该编解码器划分为类别三。其中,第一采样率大于第二采样率,第一码率大于第二码率。
示例性的,当编解码器分类标准是根据采样率、码率、量化位深得到时,可以将采样率按照设备中经常使用的编解码的最低采样率和最高采样率、最低码率和最高码率、最低量化位深和最高量化位深将采样率、码率和量化位深分为多个段。具体的,当编解码器的采样率的数值大于等于第一采样率,编解码器的码率的数值大于等于第一码率,并且编解码器的量化位深的数值大于等于第一量化位深时,则将该编解码器划分为类别一;当编解码器的采样率的数值小于第一采样率大于等于第二采样率,编解码器的码率的数值小于第一码率大于等于第二码率时,并且编解码器的量化位深的数值小于第一量化位深大于等于第二量化位深时,则将该编解码器划分为类别二;当编解码器的采样率的数值小于第二采样率,编解码器的码率的数值小于第二码率,并且编解码器的量化位深的数值小于第二量化位深时,则将该编解码器划分为类别三。其中,第一采样率大于第二采样率,第一码率大于第二码率,第一量化位深大于第二量化位深。
示例性的,当编解码器分类标准是根据采样率、码率、量化位深和声道数得到时,可以将采样率按照设备中经常使用的编解码的最低采样率和最高采样率、最低码率和最高码率、最低量化位深和最高量化位深、最常用的最低声道数(例如双声道)将采样率、码率、量化位深和声道数分为多个段。具体的,当编解码器的采样率的数值大于等于第一采样率,编解码器的码率的数值大于等于第一码率,编解码器的量化位深的数值大于等于第一量化位深,编解码器的声道数的数值大于等于第一声道数时,则将该编解码器划分为类别一;当编解码器的采样率的数值小于第一采样率大于等于第二采样率,编解码器的码率的数值小于第一码率大于等于第二码率时,编解码器的量化位深的数值小于第一量化位深大于等于第二量化位深,并且编解码器的声道数的数值大于等于第一声道数时,则将该编解码器划分为类别二;当编解码器的采样率的数值小于第二采样率,编解码器的码率的数值小于第二码率,编解码器的量化位深的数值小于第二量化位深,并且编解码器的声道数的数值大于等于第一声道数时,则将该编解码器划分为类别三。其中,第一采样率大于第二采样率,第一码率大于第二码率,第一量化位深大于第二量化位深。
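按采样率、码率、量化位深和声道数四个参数联合划分类别的规则,可以用如下示意性的Python代码表示。其中的函数名、参数组织方式与阈值数值均为本示例假设:

```python
def classify_by_params(rate, bitrate, depth, channels, thresholds):
    """按采样率、码率、量化位深、声道数的取值范围返回类别;不满足任一类别时返回 None。"""
    r1, r2 = thresholds["采样率"]    # 第一采样率、第二采样率
    b1, b2 = thresholds["码率"]      # 第一码率、第二码率
    d1, d2 = thresholds["量化位深"]  # 第一量化位深、第二量化位深
    c1 = thresholds["声道数"]        # 第一声道数
    if channels < c1:
        return None  # 三个类别均要求声道数大于等于第一声道数
    if rate >= r1 and bitrate >= b1 and depth >= d1:
        return "类别一"
    if r2 <= rate < r1 and b2 <= bitrate < b1 and d2 <= depth < d1:
        return "类别二"
    if rate < r2 and bitrate < b2 and depth < d2:
        return "类别三"
    return None

# 阈值示例:第一/第二采样率48k/24kHz,第一/第二码率600/300kbps,
# 第一/第二量化位深24/16比特,第一声道数为2
T = {"采样率": (48000, 24000), "码率": (600, 300),
     "量化位深": (24, 16), "声道数": 2}
```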
上述只是示例性的示出了部分编解码器分类标准。具体实现中,编解码器分类标准可以根据不同的需求进行设置得到,本申请在此不再一一列举。
接下来介绍,电子设备100和音频播放设备200如何根据编解码器分类标准将电子设备100中的编码器和音频播放设备200中的解码器归类到多个类别中的。
得到编解码器的分类标准之后,电子设备100将所有的编码器划分到多个类别中,以及音频播放设备200将所有的解码器划分到多个类别中。
具体的,对于电子设备100,电子设备100将获取编解码器的分类标准。其中,编解码器的分类标准可以根据编解码器的一个或多个参数的信息,将编解码器划分到多个类别中。
之后,电子设备100将获取电子设备100中所有编码器对应的一个或多个参数的数值。 当电子设备100中编码器对应的一个或多个参数的数值,在编解码器的分类标准采用的一个或多个参数的取值范围内,则电子设备100将该编码器划分到一个或多个类别下。以此类推,电子设备100按照上述方法将所有编码器划分到一个或多个类别中。需要说明的是,一个编码器可以划分为多个类别。电子设备100记录下每一个类别下对应的编码器的标识。例如,当编解码器分类标准的类别为类别一,类别一下对应的编码器的标识包括编码器一和编码器二。
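设备记录每一个类别下对应的编解码器标识的过程,可以理解为由"标识到类别"的映射反推出"类别到标识"的映射。以下为一段示意性的Python代码,函数名为本示例假设:

```python
from collections import defaultdict

def build_category_table(codec_categories):
    """codec_categories: {编解码器标识: 其所属类别的列表}。
    返回 {类别标识: [编解码器标识, ...]},即设备记录的每个类别下对应的标识。"""
    table = defaultdict(list)
    for codec_id, cats in codec_categories.items():
        for cat in cats:
            table[cat].append(codec_id)
    return dict(table)

# 例如:编码器一属于类别一,编码器二属于类别一和类别二
print(build_category_table({"编码器一": ["类别一"], "编码器二": ["类别一", "类别二"]}))
```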
音频播放设备200将所有解码器划分到多个类别,与电子设备100将所有编码器划分到多个类别的方法是一致的,本申请在此不再赘述。音频播放设备200记录下每一个类别下对应的解码器的标识。例如,当编解码器分类标准的类别为类别一,类别一下对应的解码器的标识包括解码器一和解码器二。
接下来结合具体的示例,介绍编解码器分类标准,以及电子设备100将编码器、音频播放设备200将解码器分类到多个类别下的具体实现。
在一种可选的实现方式中,编解码器分类标准可以根据采样率得到。
如表1所示,表1示例性示出了根据采样率得到的编解码器分类标准。
表1
类别一:编解码器支持的一个或多个采样率的数值大于等于第一采样率;类别二:编解码器支持的一个或多个采样率的数值小于第一采样率,大于等于第二采样率;类别三:编解码器支持的一个或多个采样率的数值小于第二采样率。
如表1所示,当编解码器支持的一个或多个采样率的数值大于等于第一采样率,则该编码器属于类别一;当编解码器支持的一个或多个采样率的数值小于第一采样率,大于等于第二采样率,则该编解码器的属于类别二;当编解码器支持的一个或多个采样率的数值小于第二采样率,则该编解码器属于类别三。其中,第二采样率小于第一采样率。
具体实现中,首先,电子设备100获取电子设备100中所有编码器支持的采样率的数值。电子设备100根据电子设备100中所有编码器支持的采样率的数值,将编码器划分到表1所示的类别中。需要说明的是,同一个编码器可以分为多个类别。
如表2所示,表2示例性示出了电子设备100和音频播放设备200根据采样率将电子设备100和音频播放设备200中的编解码器分类到多个类别。
表2
(表2为图像,示出电子设备100中各编码器标识与音频播放设备200中各解码器标识按采样率划分到类别一、类别二、类别三后的对应关系)
可以理解的是,本申请实施例中所示的编解码器的标识也可以用二进制表示,例如编码器一也可以表示为e001,编码器二也可以表示为e010,编码器三也可以表示为e011,解码器一也可以表示为d001,解码器二也可以表示为d010,解码器三也可以表示为d011。
如表2所示,示例性的,第一采样率为48kHz,第二采样率为24kHz。则可以将电子设备100中划分到类别一的编码器称为高清音质编码器,将电子设备100中划分到类别二的编码器称为标清音质编码器;将电子设备100中划分到类别三的编码器称为基础音质编码器。同时,可以将音频播放设备200中划分到类别一的解码器称为高清音质解码器,将音频播放设备200中划分到类别二的解码器称为标清音质解码器;将音频播放设备200中划分到类别三的解码器称为基础音质解码器。
示例性的,电子设备100中有三个编码器,分别为编码器一、编码器二和编码器三。其中,编码器一支持的采样率的数值为8kHz、16kHz、24kHz、32kHz、48kHz和96kHz。编码器二支持的采样率的数值为32kHz和48kHz。编码器三支持的采样率的数值为8kHz和16kHz。根据表2所示的编解码器分类标准,则编码器一属于类别一,同时编码器一也属于类别二和类别三,编码器二属于类别二,编码器三属于类别三。
同理,音频播放设备200中也有解码器一、解码器二和解码器三。音频播放设备200获取音频播放设备200中所有解码器支持的采样率的数值,并根据这些数值将解码器划分到表1所示的编解码器分类标准的类别中。其方法与电子设备100根据电子设备100中所有编码器支持的采样率的数值,将编码器划分到表1所示的编解码器分类标准的类别中的方法是一样的,本申请在此不再赘述。
在一种可选的实现方式中,编解码器分类标准可以根据采样率、量化位深、码率和声道数得到。
如表3所示,表3示例性示出了根据采样率、量化位深、码率和声道数得到的编解码器分类标准。
表3
类别一:采样率大于等于第一采样率,量化位深大于等于第一量化位深,码率大于等于第一码率,声道数大于等于第一声道数;类别二:采样率小于第一采样率且大于等于第二采样率,量化位深小于第一量化位深且大于等于第二量化位深,码率小于第一码率且大于等于第二码率,声道数大于等于第一声道数;类别三:采样率小于第二采样率,量化位深小于第二量化位深,码率小于第二码率,声道数大于等于第一声道数。
如表3所示,当编解码器支持的一个或多个采样率的数值大于等于第一采样率,编解码器支持的一个或多个量化位深的数值大于等于第一量化位深,编解码器支持的一个或多个码率的数值大于等于第一码率,编解码器支持的声道数大于等于第一声道数,则该编码器属于类别一。当编解码器支持的一个或多个采样率的数值小于第一采样率,大于等于第二采样率,编解码器支持的一个或多个量化位深的数值小于第一量化位深,大于等于第二量化位深,编解码器支持的一个或多个码率的数值小于第一码率,大于等于第二码率,编解码器支持的声道数大于等于第一声道数,则该编解码器的属于类别二。当编解码器支持的一个或多个采样率的数值小于第二采样率,编解码器支持的一个或多个量化位深的数值小于第二量化位深,编解码器支持的一个或多个码率的数值小于第二码率,编解码器支持的声道数大于等于第一声道数,则该编解码器属于类别三。
具体实现中,首先,电子设备100获取电子设备100中所有编码器支持的采样率的数值、所有编码器支持的码率的数值、所有编码器支持的量化位深的数值和所有编码器支持的声道数的数值。电子设备100将电子设备100中所有编码器划分到表3所示的类别中。需要说明的是,同一个编码器可以划分到多个类别。
如表4所示,表4示例性示出了根据采样率、量化位深、码率和声道数将电子设备100和音频播放设备200中的编解码器分类到多个类别。
表4
(表4为图像,示出根据采样率、量化位深、码率和声道数将电子设备100中的编码器与音频播放设备200中的解码器划分到各类别后的对应关系)
如表4所示,示例性的,当第一采样率为48kHz,第二采样率为24kHz;第一量化位深为24比特,第二量化位深为16比特;第一码率为600kbps,第二码率为300kbps;第一声道数为两声道。则可以将电子设备100中划分到类别一的编码器称为高清音质编码器,将电子设备100中划分到类别二的编码器称为标清音质编码器;将电子设备100中划分到类别三的编码器称为基础音质编码器。
示例性的,电子设备100中有三个编码器,分别为编码器一、编码器二和编码器三。其中,编码器一支持的采样率的数值为8kHz、16kHz、24kHz、32kHz、48kHz和96kHz,编码器一支持的量化位深的数值为16比特、24比特和32比特,编码器一支持的码率的数值为600kbps、900kbps和1200kbps,编码器一支持的声道数为单声道、双声道、2.1声道和5.1声道。编码器二支持的采样率的数值为16kHz、32kHz和48kHz,编码器二支持的量化位深的数值为8比特、16比特和24比特,编码器二支持的码率的数值为200kbps、300kbps、400kbps和600kbps,编码器二支持的声道数为单声道和双声道。编码器三支持的采样率的数值为8kHz和16kHz。编码器三支持的量化位深的数值为8比特和16比特,编码器三支持的码率的数值为200kbps、300kbps,编码器三支持的声道数为单声道、双声道、2.1声道、5.1声道和7.1声道。
根据表4所示的编解码器分类标准,则编码器一属于类别一。则编码器二属于类别二,同时,编码器二也属于类别三。编码器三属于类别三。
同理,音频播放设备200中也有解码器一、解码器二和解码器三。音频播放设备200获取音频播放设备200中所有解码器支持的采样率的数值、码率的数值、量化位深的数值和声道数的数值,并据此将解码器划分到表4所示的编解码器分类标准的类别中。需要说明的是,同一个解码器可以属于多个类别。其方法与电子设备100根据电子设备100中所有编码器支持的采样率、码率、量化位深和声道数的数值,将编码器划分到表4所示的类别中的方法是一样的,本申请在此不再赘述。
电子设备100将编码器、音频播放设备200将解码器划分为多个类别后,电子设备100将与音频播放设备200协商得到共同的类别。之后,电子设备100与音频播放设备200进行音频数据传输时,电子设备100将使用共同的类别中的一个类别下默认的编码器将音频数据进行编码并发送至音频播放设备200,音频播放设备200使用该类别中默认的解码器将已编码的音频数据进行解码,然后播放此音频数据。
如图8所示,图8示例性示出了一种电子设备100与音频播放设备200协商共同的类别的示意图。
S801、电子设备100与音频播放设备200建立通信连接。
电子设备100可以通过蓝牙、Wi-Fi直连、局域网等任一项与音频播放设备200建立通信连接。电子设备100与音频播放设备200如何建立通信连接的,将在后面详细介绍,本申请在此不再赘述。本申请实施例以电子设备100与音频播放设备200通过蓝牙技术建立通信连接为例进行说明。
S802、音频播放设备200根据编解码器分类标准将所有的解码器分类到多个类别。
音频播放设备200首先获取到编解码器分类标准。可以理解的是,编解码器类别标准是音频播放设备200中预先存在的。
之后,音频播放设备200获取音频播放设备200中所有解码器的一个或多个参数的数值。
音频播放设备200判断出解码器的一个或多个参数的数值,在编解码器分类标准采用的一个或多个参数的取值范围内,则音频播放设备200将该解码器划分到一个或多个类别下。
按照此方法,音频播放设备200将音频播放设备200中所有的解码器划分到多个类别中。需要说明的是,一个解码器可以划分到多个类别中。音频播放设备200记录下每一类别中的解码器的标识。示例性的,当编解码器分类标准的类别为类别一,类别一包括的解码器的标识包括解码器一、解码器二;当编解码器分类标准的类别为类别二,类别二包括的解码器的标识包括解码器二、解码器三;当编解码器分类标准的类别为类别三,类别三包括的解码器的标识包括解码器三;当编解码器分类标准的类别为类别四,类别四包括的解码器的标识为空。
编解码器分类标准以及音频播放设备200如何根据所有解码器的一个或多个参数的数值,将所有解码器划分到多个类别中的,在表1至表4中已经详细介绍了,本申请在此不再赘述。
S803、音频播放设备200获取到每一个类别下的解码器标识。
由S802可知,音频播放设备200将音频播放设备200中所有的解码器划分到多个类别中之后,并记录下每一类别中的解码器的标识。
S804、音频播放设备200将解码器标识的数量大于等于1的类别标识发送至电子设备100。
音频播放设备200获取到每一个类别下的解码器标识后,音频播放设备200只需将解码器标识的数量大于等于1的类别标识发送至电子设备100。
在一些实施例中,音频播放设备200也可以将解码器标识的数量大于等于1的类别标识,以及该每个类别下对应的解码器标识发送至电子设备100。
在一些实施例中,音频播放设备200也可以只将所有的解码器标识和每个解码器对应的一个或多个参数的数值发送至电子设备100。电子设备100根据编解码器分类标准将音频播放设备200中所有的解码器划分到多个类别中,即电子设备100获取到每一个类别下对应的解码器标识。具体的,对于电子设备100,电子设备100将获取编解码器分类标准。可以理解的是,编解码器分类标准是电子设备100中预先存在的。
电子设备100判断出解码器的一个或多个参数的数值,在编解码器分类标准采用的一个或多个参数的取值范围内,则电子设备100将该解码器划分到一个或多个类别下。
按照此方法,电子设备100将音频电子设备100中所有的解码器划分到多个类别中。需要说明的是,一个解码器可以划分到多个类别中。电子设备100记录下每一类别中的解码器的标识。示例性的,当编解码器分类标准的类别为类别一,类别一包括的解码器的标识包括解码器一、解码器二;当编解码器分类标准的类别为类别二,类别二包括的解码器的标识包括解码器二、解码器三;当编解码器分类标准的类别为类别三,类别三包括的解码器的标识包括解码器三;当编解码器分类标准的类别为类别四,类别四包括的解码器的标识为空。
电子设备100获取到每一个类别下的解码器标识后,电子设备100可以得到解码器标识的数量大于等于1的类别标识。
S805、电子设备100根据编解码器分类标准将所有的编码器分类到多个类别。
电子设备100首先获取到编解码器分类标准。可以理解的是,编解码器分类标准是电子设备100中预先存在的。
之后,电子设备100获取电子设备100中所有编码器的一个或多个参数的数值。
电子设备100判断出编码器的一个或多个参数的数值,在编解码器分类标准采用的一个或多个参数的取值范围内,则电子设备100将该编码器划分到一个或多个类别下。
按照此方法,电子设备100将所有编码器划分到多个类别中。需要说明的是,一个编码器可以划分到多个类别。电子设备100记录下每一个类别下的编码器的标识。示例性的,当编解码器分类标准的类别为类别一,类别一包括的编码器的标识包括编码器一、编码器二;当编解码器分类标准的类别为类别二,类别二包括的编码器的标识包括编码器二、编码器三;当编解码器分类标准的类别为类别三,类别三包括的编码器的标识包括编码器三;当编解码器分类标准的类别为类别四,类别四包括的编码器的标识包括编码器一、编码器四。
编解码器分类标准以及电子设备100如何根据所有编码器的一个或多个参数的数值,将所有编码器划分到多个类别中的,在表1至表4中已经详细介绍了,本申请在此不再赘述。
S806、电子设备100获取每一个类别下的编码器标识。
由S805可知,电子设备100将电子设备100中所有的编码器划分到多个类别中之后,记录下每一类别中的编码器的标识。可以理解的是,S805-S806可以在S802之前执行,本申请在此不做限定。
S807、电子设备100确认出编码器标识的数量大于等于1的类别与解码器标识的数量大于等于1的类别中,共有的类别。
电子设备100接收音频播放设备200发送的解码器标识的数量大于等于1的类别。示例性的,音频播放设备200发送的解码器标识的数量大于等于1的类别可以是类别一、类别二、类别三和类别四。
电子设备100获取到每一个类别下的编码器标识之后,确认出编码器标识的数量大于等于1的类别。示例性的,电子设备100确认出编码器标识的数量大于等于1的类别可以是类别一、类别二和类别四。
之后,电子设备100从编码器标识的数量大于等于1的类别与解码器标识的数量大于等于1的类别中,确认出共有类别。共有的类别,即编码器标识的数量大于等于1的类别与解码器标识的数量大于等于1的类别的交集。在共有的类别下,电子设备100可以通过该类别下的编码器和解码器与音频播放设备200进行音频数据的传输。示例性的,共有的类别可以是类别一、类别二和类别四。
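上述确认共有类别的交集运算,可以用一段示意性的Python代码表示。其中的函数名为本示例假设:

```python
def common_categories(encoder_cats, decoder_cats):
    """共有的类别为:编码器标识数量大于等于1的类别,
    与解码器标识数量大于等于1的类别的交集。"""
    return set(encoder_cats) & set(decoder_cats)

# 电子设备侧编码器类别为{一,二,四},音频播放设备侧解码器类别为{一,二,三,四}
print(common_categories({"类别一", "类别二", "类别四"},
                        {"类别一", "类别二", "类别三", "类别四"}))
```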
S808、电子设备100确定出共有的类别中默认的编码器标识。
对于任意一个共有类别,若编码器标识的数量为1或者解码器标识的数量为1,则电子设备100确认出该类别下的编码器标识为默认的一个编码器标识,或该类别下的解码器的标识为默认的一个解码器标识。
示例性的,当共有的类别为类别三,类别三包括的编码器的标识包括编码器三,当编解码器分类标准的类别为类别三,类别三包括的解码器的标识包括解码器三。因为类别三只包括一个编码器标识,因此电子设备100确认出编码器三为类别三下默认的一个编码器。因为类别三只包括一个解码器标识,因此电子设备100确认出解码器三为类别三下默认的一个解码器。
对于任意一个共有的类别,若编码器标识的数量大于1或者解码器标识的数量大于1,则电子设备100将根据预设的规则从多于1个的编码器标识中确认出一个默认的编码器标识,或者电子设备100将根据预设的规则从多于1个的解码器标识中确认出一个默认的解码器标识。
预设的规则可以是优先级规则、功率低规则、效率高规则等等。
在一种可选的实现方式中,电子设备100根据优先级规则从多于1个的编码器标识中确认出一个默认的编码器标识,或者电子设备100将根据优先级规则从多于1个的解码器标识中确认出一个默认的解码器标识。
如表5所示,表5示例性示出了编码器和解码器的优先级排名。
表5
(表5为图像,示出各编码器和各解码器的优先级排名,例如编码器一的优先级排名高于编码器二,解码器一的优先级排名高于解码器二)
可以理解的是,编码器和解码器的优先级可以是业界通用的,也可以是开发人员设定的,本申请对于编解码器的优先级顺序不做限定。
电子设备100按照表5所示的编解码器的优先级,从多于1个的编码器标识中确认出一个默认的编码器标识,或者电子设备100将根据优先级规则从多于1个的解码器标识中确认出一个默认的解码器标识。
示例性的,对于类别一,类别一包括的解码器的标识包括解码器一、解码器二,类别一包括的编码器的标识包括编码器一、编码器二。由于编码器一的优先级排名高于编码器二的优先级排名,因此电子设备100确认出编码器一为类别一下默认的一个编码器。由于解码器一的优先级排名高于解码器二的优先级排名,因此电子设备100确认出解码器一为类别一下默认的一个解码器。
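按优先级规则从某一类别的多个标识中确认出默认标识的过程,可以用一段示意性的Python代码表示。其中的优先级排名与函数名均为本示例假设:

```python
# 优先级排名,排名越靠前优先级越高(排名为本示例假设)
PRIORITY = ["编码器一", "编码器二", "编码器三"]

def default_codec(codec_ids, ranking=PRIORITY):
    """类别下只有一个标识时直接作为默认标识;
    多于一个标识时,按优先级规则选出排名最高的一个作为默认标识。"""
    ids = list(codec_ids)
    if len(ids) == 1:
        return ids[0]
    return min(ids, key=ranking.index)
```

例如,类别一包括编码器一和编码器二时,由于编码器一的优先级排名更高,默认的编码器即为编码器一。功率低规则、效率高规则同理,只需替换排名依据。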
在一种可选的实现方式中,电子设备100根据功率低规则从多于1个的编码器标识中确认出一个默认的编码器标识,或者电子设备100将根据功率低规则从多于1个的解码器标识中确认出一个默认的解码器标识。
如表6所示,表6示例性示出了编码器和解码器的功率高低排名。
表6
(表6为图像,示出各编码器和各解码器的功率高低排名,例如编码器一的功率低于编码器二,解码器一的功率低于解码器二)
可以理解的是,编码器和解码器的功率高低可以是业界通用的,也可以是开发人员设定的,本申请对于编解码器的功率高低不做限定。
电子设备100按照表6所示的编码器的功率从低到高排名,从多于1个的编码器标识中确认出一个默认的编码器标识,或者电子设备100将根据解码器的功率从低到高排名,从多于1个的解码器标识中确认出一个默认的解码器标识。
示例性的,对于类别一,类别一包括的解码器的标识包括解码器一、解码器二,类别一包括的编码器的标识包括编码器一、编码器二。由于编码器一的功率低于编码器二的功率,因此电子设备100确认出编码器一为类别一下默认的一个编码器。由于解码器一的功率低于解码器二的功率,因此电子设备100确认出解码器一为类别一下默认的一个解码器。
在一种可选的实现方式中,电子设备100根据效率高低规则从多于1个的编码器标识中确认出一个默认的编码器标识,或者电子设备100将根据效率高低规则从多于1个的解码器标识中确认出一个默认的解码器标识。
如表7所示,表7示例性示出了编码器和解码器的效率高低排名。
表7
(表7为图像,示出各编码器和各解码器的效率高低排名,例如编码器一的效率高于编码器二,解码器一的效率高于解码器二)
可以理解的是,编码器和解码器的效率高低可以是业界通用的,也可以是开发人员设定的,本申请对于编解码器的效率高低不做限定。
电子设备100按照表7所示的编码器的效率高低排名,从多于1个的编码器标识中确认出一个默认的编码器标识,或者电子设备100将根据解码器的效率高低,从多于1个的解码器标识中确认出一个默认的解码器标识。
示例性的,对于类别一,类别一包括的解码器的标识包括解码器一、解码器二,类别一包括的编码器的标识包括编码器一、编码器二。由于编码器一的效率高于编码器二的效率, 因此电子设备100确认出编码器一为类别一下默认的一个编码器。由于解码器一的效率高于解码器二的效率,因此电子设备100确认出解码器一为类别一下默认的一个解码器。
电子设备100还可以根据其他的规则从多于1个的编码器标识中确认出一个默认的编码器标识,或从多于1个的解码器标识中确认出一个默认的解码器标识,本申请在此不做限定。
在一些实施例中,电子设备100确定出共有的类别中默认的编码器标识,电子设备100还需确认出共有的类别中默认的解码器标识。
具体的,当音频播放设备200将所有的解码器标识和每个解码器对应的一个或多个参数的数值发送至电子设备100,由电子设备100根据编解码器分类标准将音频播放设备200中所有的解码器划分到多个类别中,之后,电子设备100确认出电子设备100与音频播放设备200共有的类别。或者,音频播放设备200根据编解码器分类标准将音频播放设备200中所有的解码器划分到多个类别中,并将解码器标识的数量大于等于1的类别,以及每个类别下对应的解码器标识发送至电子设备100,之后,电子设备100确认出电子设备100与音频播放设备200共有的类别。电子设备100确认出共有的类别之后,确认出每一个共有的类别下默认的一个编码器标识。电子设备100也需确认出每一个共有的类别下默认的一个解码器标识。具体的,当某个共有的类别下对应的解码器标识的数量为1,则电子设备100确认出该类别下对应的解码器标识为默认的解码器标识。当某个共有的类别下对应的解码器标识的数量大于1时,电子设备100可以采取表5-表7所示的实施例确认出该类别下默认的解码器标识,本申请在此不再赘述。
S809、电子设备100将共有的类别标识发送至音频播放设备200。
电子设备100确认出共有的类别之后,电子设备100将共有的类别标识发送至音频播放设备200。
音频播放设备200接收电子设备100发送的共有的类别标识。
在一些实施例中,音频播放设备200还需确认出每一个共有的类别中,默认的解码器标识。
对于任意一个共有类别,若解码器标识的数量为1,则音频播放设备200确认出该类别下的解码器的标识为默认的一个解码器标识。
示例性的,当编解码器分类标准的类别为类别三,类别三包括的解码器的标识包括解码器三。因为类别三只包括一个解码器标识,因此音频播放设备200确认出解码器三为类别三下默认的一个解码器。
对于任意一个共有类别,若解码器标识的数量大于1,则音频播放设备200将根据预设的规则从多于1个的解码器标识中确认出一个默认的解码器标识。
预设的规则可以是优先级规则、功率低规则、效率高规则等等。
音频播放设备200根据优先级规则或功率低规则或效率高规则从多于1个的解码器标识中确认出一个默认的解码器标识的方法,与前述的电子设备100根据优先级规则或功率低规则或效率高规则从多于1个的解码器标识中确认出一个默认的解码器标识的方法一致,本申请在此不再赘述。
S809也可以在S802之后执行,本申请在此不做限定。
在一些实施例中,当由电子设备100确认出共有的类别中,默认的解码器标识时,电子设备100将共有的类别标识发送至音频播放设备200的同时,还需将共有类别中,默认的解码器标识发送至音频播放设备200。
可选的,电子设备100也可以将共有的类别标识和该每一个类别下默认的解码器标识和默认的编码器标识发送至音频播放设备200。
电子设备100与音频播放设备200确认出共有的类别以及每个共有的类别下默认的编解码器之后,电子设备100将根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件等等从共有的类别中选择适当的一个类别,和该类别中默认的编解码器进行音频数据传输。
接下来介绍电子设备100如何根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件等等从共有类别中选择适当的一个类别的。
1、应用类型:电子设备100中,不同播放音频的应用程序的类型,对播放音频的特性有不同的要求,电子设备100将获取播放音频的应用程序对音频数据的最低采样率、最低量化位深、以及声道数的要求,再根据播放音频的应用程序对音频数据的最低采样率、最低量化位深、以及声道数的要求从共有类别中选择适当的一个类别。例如,有的应用程序对音质要求比较高,对音频数据的采样率的数值、和量化位深的数值要求比较高。例如,一般的播放音频的应用程序对音频数据的采样率的数值在32kHz,对音频数据的量化位深的数值在16比特。但是,对音质要求比较高的应用程序要求音频数据的采样率的数值最低为48kHz,对音频数据的量化位深的数值最低为24比特。电子设备100可以根据采样率的数值包括48kHz和量化位深的数值包括24比特的条件,选择该类别下默认的编解码器进行音频数据传输。
2、播放音频特性:电子设备100中,应用程序可以播放的不同的音频数据,不同的音频数据的特性可能不同。电子设备100将获取正在播放音频数据的最低采样率、最低量化位深、以及声道数的要求,再根据正在播放音频数据的最低采样率、最低量化位深、以及声道数的要求从共有的类别中选择适当的一个类别。例如,有的音频数据对音质要求比较高,音频数据的采样率和量化位深要求比较高。例如,一般音频数据的采样率的数值在32kHz,对音频数据的量化位深的数值在16比特。但是,一些预设的音质比较高的音频数据的采样率的数值最低为48kHz,对音频数据的量化位深的数值最低为24比特。电子设备100可以根据采样率的数值包括48kHz和量化位深的数值包括24比特的条件,选择该类别下默认的编解码器进行音频数据传输。
3、电子设备音频渲染能力是否开启:电子设备当前播放音频的采样率的数值为采样率一,量化位深的数值为量化位深一,码率的数值为码率一,声道数的数值为声道数一。当电子设备100音频渲染能力开启后,电子设备100的渲染单元可以将音频数据的采样率的数值由采样率一提升至采样率二,将音频数据的量化位深的数值由量化位深一提升至量化位深二,将音频数据的声道数的数值由声道数一提升至声道数二。其中,采样率二大于采样率一,量化位深二大于量化位深一,声道数二大于声道数一。之后,电子设备100根据音频数据的采样率的数值为采样率二,量化位深的数值为量化位深二,码率的数值为码率一,声道数的数值为声道数二,从共有类别中,选择采样率包括采样率二,量化位深包括量化位深二,码率包括码率一,声道数包括声道数二的一个类别下默认的编解码器进行音频数据的传输。
4、信道的网络条件:电子设备当前播放音频数据的采样率的数值为采样率一,量化位深的数值为量化位深一,声道数的数值为声道数一,码率的数值为码率一。则电子设备100根据音频数据的采样率的数值为采样率一,量化位深的数值为量化位深一,声道数的数值为声道数一,码率的数值为码率一,从共有的类别中,选择采样率包括采样率一,量化位深包括量化位深一,声道数包括声道数一,码率包括码率一的一个类别,并采用该类别下默认的编解码器进行音频数据的传输。
需要说明的是,电子设备100还可以根据其他的参数从共有的类别中选择适当的一个类别,不限于上述实施例列举的应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件等等,本申请在此不再赘述。
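上述根据当前音频特性从共有类别中选择一个类别的过程,可以用一段示意性的Python代码表示。其中各类别的参数取值范围、优先级顺序与函数名均为本示例假设:

```python
# 各类别的参数取值范围与优先级顺序均为本示例假设
CATEGORY_PRIORITY = ["类别一", "类别二"]  # 音质越好的类别优先级越高
CATEGORY_RANGES = {
    "类别一": {"rate": (48000, None), "depth": (24, None)},
    "类别二": {"rate": (24000, 48000), "depth": (16, 24)},
}

def in_range(value, bounds):
    low, high = bounds
    return value >= low and (high is None or value < high)

def pick_category(common, rate, depth):
    """从共有类别中,选出取值范围包含当前音频特性、且优先级最高的类别。"""
    for cat in CATEGORY_PRIORITY:  # 按优先级从高到低尝试
        if cat in common:
            r = CATEGORY_RANGES[cat]
            if in_range(rate, r["rate"]) and in_range(depth, r["depth"]):
                return cat
    return None
```

例如当前音频数据采样率为48kHz、量化位深为24比特时,选中类别一,进而采用类别一下默认的编解码器进行传输。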
电子设备100与音频播放设备200从共有类别中选择适当的一个类别,并采用该类别中默认的编解码器进行音频数据传输之后,由于应用类型变化、播放音频特性(采样率、量化位深、声道数)变化、电子设备音频渲染能力开启、信道的网络条件变化等因素,电子设备100将重新选择另一个类别,并将该类别的标识告知音频播放设备200。电子设备100与音频播放设备200采用另一个类别下默认的编解码器进行音频数据的传输。
1、应用类型由应用类型一变换为应用类型二:电子设备100中,不同播放音频的应用程序的类型,对播放音频的特性有不同的要求。当电子设备100根据应用类型一播放音频数据时,音频数据的采样率的数值为采样率一,量化位深的数值为量化位深一,码率的数值为码率一,声道数的数值为声道数一。当电子设备100将播放音频数据的应用程序从应用程序一切换为应用程序二之后,若应用程序二对音频数据的音质要求比较高,应用类型二播放音频数据时,音频数据的采样率的数值为采样率二,量化位深的数值为量化位深二,码率的数值为码率二,声道数的数值为声道数一,其中,采样率二大于采样率一,量化位深二大于量化位深一,码率二大于码率一。由于参数变化,因此电子设备100将重新选择编解码器分类类别。电子设备100从共有类别中,选择采样率包括采样率二,量化位深包括量化位深二,码率包括码率二,声道数包括声道数一的一个类别下默认的编解码器进行音频数据的传输。
2、音频内容由音频数据一切换为音频数据二:电子设备100中,应用程序可以播放不同的音频数据,不同的音频数据的特性可能不同。当电子设备100根据应用类型播放音频数据一时,音频数据一的采样率的数值为采样率一,量化位深的数值为量化位深一,码率的数值为码率一,声道数的数值为声道数一。当电子设备100将播放的音频内容由音频数据一切换为音频数据二之后,若音频数据二的音质比较高,电子设备100播放音频数据二时,音频数据二的采样率的数值为采样率二,量化位深的数值为量化位深二,码率的数值为码率二,声道数的数值为声道数一,其中,采样率二大于采样率一,量化位深二大于量化位深一,码率二大于码率一。由于参数变化,因此电子设备100将重新选择编解码器分类类别。电子设备100从共有类别中,选择采样率包括采样率二,量化位深包括量化位深二,码率包括码率二,声道数包括声道数一的一个类别下默认的编解码器进行音频数据的传输。
3、电子设备音频渲染能力由关闭到开启:电子设备当前播放音频的采样率的数值为采样率一,量化位深的数值为量化位深一,码率的数值为码率一,声道数的数值为声道数一。当电子设备100音频渲染能力开启后,电子设备100的渲染单元可以将音频数据的采样率的数值由采样率一提升至采样率二,电子设备100的渲染单元可以将音频数据的量化位深的数值由量化位深一提升至量化位深二,电子设备100的渲染单元可以将音频数据的声道数的数值由声道数一提升至声道数二。其中,采样率二大于采样率一,量化位深二大于量化位深一,声道数二大于声道数一。由于参数变化,因此电子设备100将重新选择编解码器分类类别。电子设备100将从共有的类别中,选择采样率包括采样率二,量化位深包括量化位深二,码率的数值包括码率一,声道数包括声道数二的一个类别下默认的编解码器进行音频数据的传输。
4、信道的网络条件变差:电子设备当前播放音频数据的采样率的数值为采样率一,量化位深的数值为量化位深一,声道数的数值为声道数一,码率的数值为码率一。无线传输信道由于干扰性强,衰减等原因,导致无线传输信道支持的码率从码率一降为码率二,其中,码率二小于码率一。由于参数变化,因此电子设备100将重新选择编解码器分类类别。电子设备100将从共有的类别中,选择采样率包括采样率一,量化位深包括量化位深一,码率包括码率二,声道数包括声道数一的一个类别下默认的编解码器进行音频数据的传输。
电子设备100重新选择另一个类别下默认的编解码器之后,电子设备100与音频播放设备200采用另一个类别下默认的编解码器进行音频数据的传输。为了减少编解码器切换时出现卡顿的情况,本申请实施例可以采用以下方法实现编解码器切换时的平滑过渡。
当电子设备100将编解码器分类标准的类别从类别一切换为类别二,类别一对应有默认的编码器一和解码器一,类别二对应有默认的编码器二和解码器二。则电子设备100将由编码器一切换为编码器二,音频播放设备200将解码器一切换为解码器二。
当编码器一与编码器二的时延相同时,则电子设备100将编码器一切换为编码器二需要在一帧音频数据帧内完成。将编码器一与编码器二过渡过程的这一帧音频数据帧称为第i帧音频数据。编码器一对第i帧音频数据进行编码,得到packet A(数据包A)。编码器二对第i帧音频数据进行编码,得到packet B(数据包B)。
电子设备100将packet A和packet B发送至音频播放设备200。音频播放设备200采用解码器一将packet A解码出来,得到音频数据pcmA;音频播放设备200也采用解码器二将packet B解码出来,得到音频数据pcmB。然后,音频播放设备200对第i帧音频数据进行平滑处理,平滑过程如公式(1)所示:
Pcm(i)=wi*pcmA(i)+(1-wi)*pcmB(i)    公式(1)
如公式(1)所示,Pcm(i)表示平滑处理之后的第i帧音频数据,wi表示平滑系数,wi可以是线性平滑或者cos平滑等等。wi的取值范围在0~1之间。平滑系数wi越小,平滑作用越强,对预测结果的调整就越小;平滑系数wi越大,平滑作用越弱,对预测结果的调整就越大。pcmA(i)表示解码器一将packet A解码出来得到的音频数据,pcmB(i)表示解码器二将packetB解码出来得到的音频数据。通过公式(1),音频播放设备200可以得到对第i帧音频数据平滑之后的音频数据帧Pcm(i)。这样,音频播放设备200将第i帧音频数据平滑之后的音频数据帧播放出来,可以使编解码器切换过程中的音频数据帧平滑过渡。
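公式(1)的加权平滑可以用一段示意性的Python代码表示。此处按一种常见的理解,将平滑系数展开为帧内逐样本变化的权重序列(线性平滑);函数名与权重的具体取法均为本示例假设:

```python
def smooth_frame(pcm_a, pcm_b, weights):
    """按公式(1)对过渡帧逐样本加权:Pcm = w*pcmA + (1-w)*pcmB。
    weights 为平滑系数序列,取值在0~1之间,可取线性或余弦等形式。"""
    return [w * a + (1 - w) * b for w, a, b in zip(weights, pcm_a, pcm_b)]

# 线性平滑:权重从1(完全取解码器一的输出pcmA)过渡到0(完全取解码器二的输出pcmB)
n = 5
linear_w = [1 - i / (n - 1) for i in range(n)]
```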
当编码器一与编码器二的时延不同时,电子设备100将编码器一切换为编码器二需要在多帧音频数据帧内完成。编码器一与编码器二切换过程的总音频数据帧数D的计算过程如公式(2)所示:
D=取整((max(编码器一的总时延,编码器二的总时延)+(帧长-编码器一的总时延%帧长)+帧长-1)/帧长)公式(2)
如公式(2)所示,max表示取最大值操作,%表示取余操作,帧长表示编码器一将一段特定时长的音频数据编码为一帧,该特定时长的一帧音频数据为帧长。
对于过渡过程中的总音频数据帧数D,分别对每一帧音频数据使用编码器一和编码器二进行编码,每一帧音频数据可以得到两个数据包,分别为packet A(数据包A)和packet B(数据包B)。电子设备100将packet A和packet B发送至音频播放设备200。音频播放设备200接收packet A和packet B,使用解码器一将packet A解码出来,得到音频数据pcmA,使用解码器二将packet B解码出来,得到音频数据pcmB。对于切换过程中的总音频数据帧数D,前D-1个音频数据帧还是采用解码器一解码出来的音频数据,对于第D个音频数据帧,音频播放设备200对第D个音频数据帧进行平滑处理,平滑过程如公式(3)所示:
Pcm(i)=wi*pcmA(i)+(1-wi)*pcmB(i)   公式(3)
如公式(3)所示,Pcm(i)表示平滑处理之后的第D个音频数据帧,wi表示平滑系数,wi可以是线性平滑或者cos平滑等等。wi的取值范围在0~1之间。平滑系数wi越小,平滑作用越强,对预测结果的调整就越小;平滑系数wi越大,平滑作用越弱,对预测结果的调整就越大。pcmA(i)表示解码器一将第D个音频数据帧解码出来得到的音频数据,pcmB(i)表示解码器二将第D个音频数据帧解码出来的音频数据。通过公式(3),音频播放设备200可以得到对第D个音频数据帧平滑之后的音频数据帧Pcm(i)。这样,音频播放设备200将前D-1个音频数据帧和第D帧音频数据平滑之后的音频数据帧播放出来,可以使编解码器切换过程中的音频数据帧平滑过渡。
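公式(2)计算过渡所需总帧数D的过程可以用一段示意性的Python代码表示。其中时延与帧长需使用相同的单位(例如均以样本数计),函数名为本示例假设:

```python
def transition_frame_count(delay_a, delay_b, frame_len):
    """按公式(2)计算编码器一切换到编码器二所需的总音频数据帧数D:
    D = 取整((max(时延A, 时延B) + (帧长 - 时延A % 帧长) + 帧长 - 1) / 帧长)。"""
    total = (max(delay_a, delay_b)
             + (frame_len - delay_a % frame_len)
             + frame_len - 1)
    return total // frame_len
```

例如帧长为10、两编码器总时延均为15时,D为2;此时前D-1帧仍播放解码器一的输出,第D帧再按公式(3)做平滑处理。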
如图9所示,图9为本申请实施例提供的一种编解码器协商与切换方法的流程图。
S901、电子设备100与音频播放设备200建立通信连接。
电子设备100可以通过蓝牙、Wi-Fi直连、NFC中一项或多项与音频播放设备200建立通信连接。本申请实施例以电子设备100与音频播放设备200通过蓝牙技术建立通信连接为例进行说明。
下面结合UI图具体介绍电子设备100与音频播放设备200如何建立通信连接的。
图9A-图9C示例性示出了电子设备100与音频播放设备200通过蓝牙建立通信连接的UI图。不限于电子设备100与音频播放设备200通过蓝牙建立通信连接,电子设备100还可以通过Wi-Fi直连、NFC中一项或多项与音频播放设备200建立通信连接。
图9A示出了电子设备100上的示例性音频播放用户界面600。如图9A所示,该音频播放界面600包括有音乐名称601、播放控件602、上一首控件603、下一首控件604、播放进度条605、下载控件606、分享控件607、更多按钮608,等等。例如,该音乐名称601可以是“Dream it possible”。该播放控件602用于触发终端100播放该音乐名称601对应的音频数据。该上一首控件603可用于触发电子设备100切换至播放列表中的上一个音频数据进行播放。该下一首控件604可用于触发电子设备100切换至播放列表中的下一个音频数据进行播放。该播放进度条605可用于指示当前音频数据的播放进度。该下载控件606可用于触发电子设备100下载并保存该音乐名称601的音频数据至本地存储介质中。该分享控件607可用 于触发电子设备100分享该音乐名称601对应音频数据的播放链接至其他应用。该更多控件608可用于触发电子设备100显示更多关于音乐播放的功能控件。
不限于音乐播放,电子设备100还可以播放视频应用播放的音频数据、游戏应用播放的音频数据、以及实时通话的音频数据等等,本申请对于电子设备100播放的音频数据的来源不做限定。
如图9B及图9C所示,当电子设备100检测到在显示屏上的向下滑动手势时,响应于该滑动手势,电子设备100在用户界面20上显示如图9C所示的窗口610。如图9C所示,窗口610中可以显示有蓝牙控件611,蓝牙控件611可接收开启/关闭电子设备100的蓝牙功能的操作(例如触摸操作、点击操作)。蓝牙控件611的表现形式可以包括图标和/或文本(例如文本“蓝牙”)。窗口610中还可以显示有其他功能例如Wi-Fi、热点、手电筒、响铃、自动旋转、即时分享、飞行模式、移动数据、位置信息、截屏、护眼模式、屏幕录制、协同投屏、NFC等开关控件。在一些实施例中,电子设备100检测到作用于蓝牙控件611的用户操作后,可以更改蓝牙控件611的显示形式,例如增加蓝牙控件611的阴影等。
不限于在如图9B所示的界面上,用户还可以在其他界面上输入向下滑动的手势,触发电子设备100显示窗口610。
不限于图9B及图9C示出的用户在窗口610中作用于蓝牙控件611的用户操作,在本申请实施例中,开启蓝牙功能的用户操作还可以实现为其他形式,本申请实施例不作限制。
例如,电子设备100还可以显示设置(settings)应用提供的设置界面,该设置界面中可包括提供给用户的用于开启/关闭电子设备100的蓝牙功能的控件,用户可通过在该控件上输入用户操作来开启电子设备100的蓝牙功能。
检测到开启蓝牙功能的用户操作,电子设备100通过蓝牙发现该电子设备100附近的其他开启蓝牙功能的电子设备。例如,电子设备100可以通过蓝牙发现并连接附近的音频播放设备200以及其他电子设备。
S902、电子设备100判断与音频播放设备200是否首次建立连接。若电子设备100与音频播放设备200首次建立连接,电子设备100执行S903;否则,电子设备100执行S907。
S903、音频播放设备200将解码器标识的数量大于等于1的类别标识发送至电子设备100。
在音频播放设备200将解码器标识的数量大于等于1的类别标识发送至电子设备100之前,音频播放设备200根据编解码器分类标准将所有的解码器划分到多个类别中。对于音频播放设备200如何根据编解码器分类标准将所有的解码器划分到多个类别中,在S802所示的实施例中已详细介绍,本申请在此不再赘述。
示例性的,音频播放设备200将第一类别的标识和第二类别的标识发送至电子设备100,电子设备100接收音频播放设备200发送的第一类别的标识和第二类别的标识;或者,音频播放设备200将第一类别的标识发送至电子设备100,电子设备100接收音频播放设备200发送的第一类别的标识。其中,第一类别中的解码器至少包括第一解码器,所述第二类别中的解码器的至少包括第二解码器。
在音频播放设备200将解码器标识的数量大于等于1的类别标识发送至电子设备100之前,电子设备100基于第一编码器的参数信息以及编解码器分类标准将第一编码器划分到第一类别中,基于第二编码器的参数信息以及编解码器分类标准将第二编码器划分到第二类别中;其中,第一编码器的参数信息和第二编码器的参数信息包括采样率、码率、量化位深、声道数和音频流格式中的一个或多个。音频播放设备200则基于第一解码器的参数信息以及编解码器分类标准将第一解码器划分到第一类别中,基于第二解码器的参数信息以及编解码器分类标准将第二解码器划分到第二类别中;其中,第一解码器的参数信息和第二解码器的参数信息包括采样率、码率、量化位深、声道数和音频流格式中的一个或多个;其中,编解码器分类标准包括编解码器类别与编解码器的参数信息的映射关系。需要说明的是,第一编码器的参数信息、第二编码器的参数信息、第一解码器的参数信息和第二解码器的参数信息均相同。
S904、电子设备100从编码器标识的数量大于等于1的类别与解码器标识的数量大于等于1的类别中,确认出共有的类别。
响应于音频播放设备200发送的解码器标识的数量大于等于1的类别标识,电子设备100接收音频播放设备200发送的解码器标识的数量大于等于1的类别标识。
示例性的,电子设备100确认出电子设备与音频播放设备的共有类别为第一类别和第二类别。
或者电子设备未收到音频播放设备发送的第二类别的标识;或电子设备划分到第二类别中的编码器的数量为0,电子设备100确认出电子设备与音频播放设备的共有类别为第一类别。
共有类别,即电子设备100可以通过该类别下的编解码器与音频播放设备200进行音频数据传输。
可以理解的是,电子设备100在建立连接之前已根据编解码器分类标准将电子设备100中的所有编码器划分到多个类别中,并确认出编码器标识的数量大于等于1的类别。
在一些实施例中,电子设备100也可以在建立连接之后根据预设的编解码器分类标准将电子设备100中的所有编码器划分到多个类别中。音频播放设备200也可以在建立连接之后根据预设的编解码器分类标准将音频播放设备200中的所有解码器划分到多个类别中,并确认出解码器标识的数量大于等于1的类别。本申请在此不做限定。
在一些实施例中,在电子设备100与音频播放设备200建立连接之后,音频播放设备200将音频播放设备200中所有的解码器标识以及每个解码器对应的一个或多个参数的数值发送至电子设备100,电子设备100根据编解码器分类标准将电子设备100中的所有编码器以及音频播放设备200中所有的解码器分别划分到多个类别中,并确认出编码器标识的数量大于等于1的类别和解码器标识的数量大于等于1的类别。本申请在此不做限定。
编解码器分类标准可以根据采样率、量化位深、码率、声道数、音频流格式等中的一个或两个及以上参数的组合得到。
示例性的,编解码器分类标准根据采样率、量化位深、码率、声道数、音频流格式得到时,可以将编解码器分类标准划分为两个类别。
具体的,当编解码器的采样率的数值大于等于第一采样率(目标采样率),编解码器的码率的数值大于等于第一码率(目标码率),编解码器的量化位深的数值大于等于第一量化位深(目标量化位深),编解码器的声道数的数值大于等于第一声道数(目标声道数),音频流格式为PCM(目标音频流格式)时,则将该编解码器划分为类别一;当编解码器的采样率的数值小于第一采样率,编解码器的码率的数值小于第一码率,编解码器的量化位深的数值小于第一量化位深,并且编解码器的声道数的数值大于等于第一声道数,音频流格式为PCM时, 则将该编解码器划分为类别二。
示例性的,第一采样率为48kHz,第一码率为600kbps,第一量化位深为24比特,第一声道数为2,音频流格式为PCM。则类别一的编解码器分类标准为:编解码器的采样率大于等于48kHz,编解码器的码率大于等于600kbps,编解码器的量化位深大于等于24比特,编解码器的声道数大于等于2声道,音频流格式为PCM。类别二的编解码器分类标准为:编解码器的采样率小于48kHz,编解码器的码率小于600kbps,编解码器的量化位深小于24比特,编解码器的声道数大于等于2声道,音频流格式为PCM。可以将划分为类别一的编解码器称为高清音质编解码器,将划分为类别二的编解码器称为标准音质编解码器。
音频播放设备200包括解码器一、解码器二和解码器三,并且解码器一属于类别一,解码器二和解码器三属于类别二。
电子设备100包括编码器一、编码器二和编码器三,并且编码器一属于类别一,编码器二、编码器三属于类别二。
则电子设备100和音频播放设备200共有的类别包括类别一和类别二。
电子设备100如何将编码器归类到对应类别的编解码器分类标准,以及音频播放设备200如何将解码器归类到对应类别的编解码器分类标准,上述图7所示的实施例已详细介绍,本申请在此不再赘述。
S905、电子设备100确认出每一个共有的类别中,默认的一个编码器标识和默认的一个解码器标识。
电子设备100确认出电子设备100与音频播放设备200共有的类别后,电子设备100和音频播放设备200可以通过共有的类别中划分的编解码器进行音频数据的传输。
若共有的类别中编码器的数量大于1和/或解码器的数量大于1,电子设备100需要确认出每一个共有的类别下,默认的一个编码器标识和默认的一个解码器标识。之后,电子设备100和音频播放设备200将采用每一个共有的类别下,默认的一个编码器和默认的一个解码器进行音频数据的传输。
对于任意一个共有类别中,若编码器标识的数量为1或者解码器标识的数量为1,则电子设备100确认出该类别下的编码器即为默认的一个编码器,或该类别下的解码器的标识为默认的一个解码器。
示例性的,当共有的类别中为类别一(第一类别)时,划分到类别一中的编解码器为高清音频编解码器。类别一中的编码器包括编码器一(第一编码器),类别一中的解码器的包括解码器一(第一解码器)。因为类别一只包括一个编码器和一个解码器,因此电子设备100确认出编码器一和解码器一为类别一下默认的一个编码器和默认的一个解码器。当电子设备100确认出采用类别一中的编解码器进行音频数据的传输时,电子设备100将采用的类别标识(类别一的标识)发送至音频播放设备200。
电子设备100采用类别一中的编码器一将音频数据编码成第一编码音频数据发送至音频播放设备200,音频播放设备200采用类别一中的解码器一将第一编码音频数据解码成第一播放音频数据,并播放第一播放音频数据。
示例性的,当共有的类别中为类别二时,划分到类别二中的编解码器为基础音频编解码器。类别二中的编码器包括编码器二(第二编码器),类别二中的解码器包括解码器二(第二解码器)。因为类别二只包括一个编码器和一个解码器,因此电子设备100确认出编码器二和 解码器二为类别二下默认的一个编码器和默认的一个解码器。当电子设备100确认出采用类别二中的编解码器进行音频数据的传输时,电子设备100将采用的类别标识(类别二的标识)发送至音频播放设备200。
电子设备100采用类别二中的编码器二将音频数据进行打包发送至音频播放设备200,音频播放设备200采用类别二中解码器二将压缩的音频数据解压出来,并播放音频数据。
对于任意一个共有类别中,若编码器标识的数量大于1或者解码器标识的数量大于1,则电子设备100需确认出该类别下的多个编码器的其中一个编码器作为默认的一个编码器,或该类别下的多个解码器的其中一个解码器作为默认的一个解码器。
示例性的,当共有的类别中为类别一时,划分到类别一中的编解码器为高清音频编解码器。类别一中的编码器的包括编码器一(第一编码器)和编码器三(第三编码器),类别一中的解码器的包括解码器一(第一解码器)和解码器三(第三解码器)。因为类别一包括多个编码器和多个解码器,因此电子设备100确认出类别一下默认的一个编码器和默认的一个解码器。电子设备100可以根据预设的规则从多个编码器和多个解码器中确认出类别一下默认的一个编码器和默认的一个解码器,预设的规则可以是优先级规则、功率低规则、效率高规则等等。具体的,电子设备100根据优先级规则、功率低规则、效率高规则从多个编码器和多个解码器中确认出类别一下默认的一个编码器(第一编码器)和默认的一个解码器(第一解码器)的方法,请参考S808所示的实施例,本申请在此不再赘述。当电子设备100确认出编码器一和解码器一为类别一下默认的一个编码器和默认的一个解码器之后,电子设备100将该类别下默认的解码器标识和/或编码器标识发送至音频播放设备200。
之后,当电子设备100确认出采用类别一中的编解码器进行音频数据的传输时,电子设备100将采用的类别标识(类别一的标识)发送至音频播放设备200。
电子设备100采用类别一中的编码器一将音频数据编码为第一编码音频数据发送至音频播放设备200,音频播放设备200采用类别一中解码器一将第一编码音频数据解码成第一播放音频数据,并播放第一播放音频数据。
示例性的,当共有的类别中为类别一和类别二时,划分到类别一中的编解码器为高清音频编解码器,划分到类别二中的编解码器为基础音频编解码器。类别一中的编码器包括编码器一,类别一中的解码器包括解码器一,类别二中的编码器包括编码器二,类别二中的解码器包括解码器二。因为类别一只包括一个编码器和一个解码器,类别二只包括一个解码器和一个编码器,因此电子设备100确认出编码器一和解码器一为类别一下默认的一个编码器和默认的一个解码器,编码器二和解码器二为类别二下默认的一个编码器和默认的一个解码器。当电子设备100可以采用类别一或类别二中的编解码器进行音频数据的传输时,电子设备100将采用的类别标识(类别一的标识或类别二的标识)发送至音频播放设备200。
或者,当共有的类别中为类别一和类别二时,划分到类别一中的编解码器为高清音频编解码器,划分到类别二中的编解码器为基础音频编解码器。类别一中的编码器包括编码器一和编码器三,类别一中的解码器包括解码器一和解码器三,类别二中的编码器包括编码器二和编码器四,类别二中的解码器包括解码器二和解码器四。因为类别一包括多个编码器和多个解码器,类别二也包括多个编码器和多个解码器,电子设备100可以根据预设的规则从多个编码器和多个解码器中确认出类别一和类别二下默认的一个编码器和默认的一个解码器。预设的规则可以是优先级规则、功率低规则、效率高规则等等。具体的,电子设备100根据优先级规则、功率低规则、效率高规则从多个编码器和多个解码器中确认出类别一和类别二下默认的一个编码器和默认的一个解码器的方法,请参考S808所示的实施例,本申请在此不再赘述。当电子设备100确认出编码器一和解码器一为类别一下默认的一个编码器和默认的一个解码器,编码器二和解码器二为类别二下默认的一个编码器和默认的一个解码器之后,电子设备100将类别一下默认的解码器标识和/或编码器标识和类别二下默认的解码器标识和/或编码器标识发送至音频播放设备200。
电子设备100可以采用类别一或类别二中默认的编解码器进行音频数据的传输。示例性的,电子设备100可以采用类别一中的编解码器进行音频数据的传输,类别一中的编解码器为高清音频编解码器。当网络条件变化或者播放的音频内容变化等等,电子设备100可以将类别一中的编解码器切换为类别二中的编解码器,并采用类别二中的编解码器进行音频数据的传输。
电子设备100确认出电子设备100与音频播放设备200共有的类别之后,在一种可选的实现方式中,电子设备100确认出每一个共有的类别中默认的编码器标识和默认的解码器标识,电子设备100确认出每一个电子设备100与音频播放设备200共有的类别中默认的编码器标识和默认的解码器标识的方法,在图7所示的实施例中已详细介绍,本申请在此不再赘述。
在另一种可选的实现方式中,电子设备100只需确认出电子设备100与音频播放设备200共有的类别中默认的编码器标识。之后,电子设备100将电子设备100与音频播放设备200共有的类别发送至音频播放设备200,音频播放设备200确认出电子设备100与音频播放设备200共有的类别中默认的解码器标识。
在一些实施例中,预设的编解码器分类标准可以是每隔固定周期(例如一个月)进行更新的。因此,当编解码器分类标准周期性更新时,电子设备100也需周期性(例如一个月)将编码器划分到多个类别下,音频播放设备200也需周期性(例如一个月)将解码器归类到多个类别下。
在一种可选的实现方式中,电子设备100与音频播放设备200可以根据收集的用户行为习惯,在合适的时机将编解码器分类到多个类别下。
例如,电子设备100与音频播放设备200可以在时间段“24:00-7:00”之间将编解码器分类到多个类别下,因为在该时间段内用户通常在家休息,此时进行编解码器分类,不会影响用户使用设备的体验。
S906、电子设备100将共有的类别标识以及每一个共有的类别中,默认的一个解码器标识发送至音频播放设备200。
电子设备100确认出电子设备100与音频播放设备200共有的类别之后,电子设备100将电子设备100与音频播放设备200共有的类别标识(第一类别的标识和第二类别的标识,或者第一类别的标识)以及每一个共有的类别中,默认的一个解码器标识发送至音频播放设备200。
音频播放设备200接收电子设备100发送的电子设备100与音频播放设备200共有的类别标识以及每一个共有的类别中,默认的一个解码器标识。之后,电子设备100与音频播放设备200将采用电子设备100与音频播放设备200共有的类别进行音频数据的传输。
在一些实施例中,若音频播放设备200按照编解码器分类标准将音频播放设备200中所有解码器划分到多个类别中,电子设备100在协商电子设备100与音频播放设备200共有的类别之前,音频播放设备200将每一个类别标志和每一个类别下对应的解码器标识发送至电子设备100。那么,在电子设备100确认出电子设备100共有的类别之后,电子设备100只需将电子设备100与音频播放设备200共有的类别发送至音频播放设备200。
在电子设备100确认出电子设备100与音频播放设备200共有的类别之后,当由音频播放设备200确认出电子设备100与音频播放设备200共有的类别中,默认的解码器标识时。电子设备100只需将电子设备100与音频播放设备200共有的类别标识发送至音频播放设备200。
在一些实施例中,电子设备100在协商共有的类别时,音频播放设备200将音频播放设备200中,所有的解码器标识以及每个解码器对应的一个或多个的参数的数值发送至电子设备100,电子设备100根据编解码器分类标准将电子设备100中的所有编码器以及音频播放设备200中所有的解码器划分到多个类别中。之后,电子设备100确认出电子设备100与音频播放设备200共有的类别。在这种情况下,电子设备100除了需要将电子设备100与音频播放设备200共有的类别发送至音频播放设备200,电子设备100还需要将电子设备100与音频播放设备200共有的类别下的解码器标识发送至音频播放设备200。
进一步的,在电子设备100确认出电子设备100与音频播放设备200共有的类别之后,电子设备100可以确认出电子设备100与音频播放设备200共有的类别中,默认的解码器标识。当由电子设备100确认出电子设备100与音频播放设备200共有的类别中,默认的解码器标识时。电子设备100将电子设备100与音频播放设备200共有的类别标识发送至音频播放设备200的同时,还需将电子设备100与音频播放设备200共有的类别中,默认的解码器标识发送至音频播放设备200。
或者,在电子设备100确认出电子设备100与音频播放设备200共有的类别之后,当由音频播放设备200确认出电子设备100与音频播放设备200共有的类别中默认的解码器标识时,电子设备100将电子设备100与音频播放设备200共有的类别发送至音频播放设备200的同时,还需将电子设备100与音频播放设备200共有的类别下的解码器标识发送至音频播放设备200。
S907、电子设备100从共有类别中,选择第一类别中默认的编解码器进行音频数据的传输。
电子设备100获取音频数据的第一参数信息,当音频数据的第一参数信息满足第一条件时,根据第一类别中的第一编码器将所述音频数据编码成第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备。
具体的,电子设备100可以根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件、音频流格式等等从电子设备100与音频播放设备200共有的类别中选择一个类别(例如第一类别)中默认的编解码器进行音频数据的传输,第一类别中默认的编码器为第一编码器,第一类别中默认的解码器为第一解码器。这部分内容在前述实施例已详细介绍,本申请在此不再赘述。
示例性的,电子设备100根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件等确定出第一参数信息为:采样率为第一采样率,量化位深为第一量化位深、码率为第一码率、声道数为第一声道数。电子设备100从共有的类别中,选择采样率包括第一采样率,量化位深包括第一量化位深,码率包括第一码率,声道数包括第一声道数、音频流格式包括PCM的类别中默认的编解码器进行音频数据的传输。
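上述"按第一参数信息从共有的类别中筛选类别"的过程可以示意如下。这是一个假设条件下的草图:各类别支持的参数范围、优先级以及第一参数信息的具体数值均为举例假设,并非本申请限定的实现。

```python
# 示意:根据音频数据的参数信息(采样率、量化位深、码率、声道数、音频流格式)
# 从共有的类别中筛选出匹配的类别;若匹配出多个类别,取优先级最高者作为第一类别
def match_category(shared_categories, params):
    matched = [cat for cat in shared_categories
               if params["sample_rate"] in cat["sample_rates"]
               and params["bit_depth"] in cat["bit_depths"]
               and params["bitrate"] in cat["bitrates"]
               and params["channels"] in cat["channel_counts"]
               and params["stream_format"] in cat["stream_formats"]]
    return max(matched, key=lambda c: c["priority"]) if matched else None

# 类别一为高清音质类别(优先级更高),类别二为标准音质类别,参数范围均为假设
shared = [
    {"name": "类别一", "priority": 2, "sample_rates": {48000, 96000},
     "bit_depths": {16, 24}, "bitrates": {256, 512},
     "channel_counts": {2, 6}, "stream_formats": {"PCM"}},
    {"name": "类别二", "priority": 1, "sample_rates": {44100, 48000},
     "bit_depths": {16}, "bitrates": {128, 256},
     "channel_counts": {2}, "stream_formats": {"PCM"}},
]
first_params = {"sample_rate": 48000, "bit_depth": 16, "bitrate": 256,
                "channels": 2, "stream_format": "PCM"}
chosen = match_category(shared, first_params)  # 两个类别均匹配,取优先级更高的类别一
```

在该假设下,第一参数信息同时落入类别一和类别二的参数范围,按后文所述的优先级规则选出类别一。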
当电子设备100根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件、音频流格式等从电子设备100与音频播放设备200共有的类别中确认出了两个及两个以上类别之后,电子设备100可以从两个及两个以上类别中选择优先级最高的一个类别作为第一类别,并采用该优先级最高的类别中默认的编解码器进行音频数据的传输。可以理解的是,编解码器分类标准中规定的采样率越高、码率越高、量化位深越高,则划分到该类别中的编解码器的音质越好。编解码器的音质越好,则该编解码器所在的类别的优先级越高。
示例性的,电子设备100与音频播放设备200共有的类别中包括类别一和类别二。若电子设备100根据应用类型、播放音频特性(采样率、量化位深、声道数)、电子设备音频渲染能力是否开启、信道的网络条件、音频流格式等筛选出的类别包括类别一和类别二,由于划分到类别一中的编解码器为高清音质编解码器,划分到类别二中的编解码器为标准音质编解码器,类别一的优先级高于类别二,因此,电子设备100将优先选择类别一中默认的编解码器进行音频数据的传输。
S908、电子设备100将第一类别标识发送至音频播放设备200。
音频播放设备200接收电子设备100发送的第一类别标识,电子设备100与音频播放设备200将采用第一类别中默认的编解码器进行音频数据传输。
S909、电子设备100获取音频数据,并采用第一编码器标识对应的编码器将音频数据进行编码,得到编码后的音频数据。
第一编码器标识为第一类别中,默认的编码器标识。
S910、电子设备100将编码后的音频数据(第一编码音频数据)发送至音频播放设备200。
具体的,电子设备100可以通过录音等方式获取当前播放的音频数据,然后将获取的音频数据压缩后,通过和音频播放设备200之间的通信连接发送给音频播放设备200。以电子设备100和音频播放设备200基于miracast共享多媒体内容为例,电子设备100采集电子设备100所播放的音频,使用高级音频编码(advanced audio coding,AAC)算法对该音频进行压缩;然后将压缩后的音频数据封装为传输流(transport stream,TS),之后对TS流按照实时传送协议(real-time transport protocol,RTP)进行编码并将编码后得到的数据通过蓝牙通道连接发送给音频播放设备200。
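其中,RTP报文的固定头格式由公开标准RFC 3550规定(12字节,包含版本、载荷类型、序列号、时间戳、SSRC等字段)。下面给出一个打包RTP固定头并附加TS载荷的示意实现,仅作为理解该封装步骤的草图:序列号、时间戳、SSRC的取值为举例假设,载荷类型33为TS流(MP2T)的常用取值。

```python
import struct

# 示意:按 RFC 3550 打包 12 字节 RTP 固定头,载荷为已封装好的 TS 数据
def build_rtp_packet(payload, seq, timestamp, ssrc, payload_type=33):
    byte0 = 2 << 6                 # V=2(RTP版本), P=0, X=0, CC=0
    byte1 = payload_type & 0x7F    # M=0, PT=33(MP2T,即 TS 流)
    header = struct.pack("!BBHII", byte0, byte1, seq, timestamp, ssrc)
    return header + payload

ts_packet = b"\x47" + b"\x00" * 187               # 一个 188 字节的 TS 包,0x47 为同步字节
pkt = build_rtp_packet(ts_packet, seq=1, timestamp=90000, ssrc=0x1234)
```

实际实现中,AAC压缩与TS封装由相应的编码/复用模块完成,此处仅示意RTP这一层的打包。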
S911、音频播放设备200接收电子设备100发送的编码后的音频数据,并采用第一解码器标识对应的解码器将编码后的音频数据解码出来,得到音频数据(第一播放音频数据)。
第一解码器标识为第一类别中,默认的解码器标识。
音频播放设备200采用第一解码器标识对应的解码器将编码后的音频数据解码出来,得到未编码的音频数据,并将音频数据播放出来。
S912、电子设备100将第一类别切换为第二类别。
当电子设备100的音频数据的采样率和/或码率和/或量化位深和/或声道数变化,电子设备100将重新选择另一个类别(即第二类别)中的编解码器进行音频数据传输时,电子设备100将该类别的标识告知音频播放设备200。电子设备100与音频播放设备200采用第二类别中的编解码器进行音频数据的传输。这部分内容在前述实施例已经详细介绍过了,本申请在此不再赘述。
当用户选择、应用类型变化、音频内容变化、电子设备音频渲染能力开启、信道的网络条件变差等原因导致电子设备播放音频数据的采样率和/或码率和/或量化位深和/或声道数变化,则电子设备100将第一类别切换为第二类别。
示例性的,当电子设备100接收用户选择高音质模式时,电子设备100将第一类别切换为第二类别,其中第二类别中的编解码器的音频质量高于第一类别中的编解码器的音频质量。第二类别中编解码器的采样率、码率、量化位深均大于第一类别中的采样率、码率、量化位深。
如图10A所示,电子设备100接收用户单击更多控件608的操作,响应于用户操作,电子设备100将显示如图10B所示的提示框900。提示框900包括高音质模式控件901、稳定传输模式控件902、开启音频渲染模式控件903。当用户想要电子设备100播放的音频数据的音质更好时,提示框900中的高音质模式控件901可以接收用户的点击操作,响应于用户的点击操作,电子设备100将第一类别切换为第二类别,其中第二类别中的编解码器的音频质量高于第一类别中的编解码器的音频质量。
示例性的,当电子设备100接收用户操作开启音频渲染能力,电子设备100将播放音频数据的采样率由第一采样率提升至第二采样率,电子设备100的渲染单元可以将音频数据的量化位深的数值由第一量化位深提升至第二量化位深,电子设备100的渲染单元可以将音频数据的声道数的数值由第一声道数提升至第二声道数。其中,第二采样率大于第一采样率,第二量化位深大于第一量化位深,第二声道数大于第一声道数。
如图10C所示,当用户想要电子设备100开启音频渲染能力时,提示框900中的开启音频渲染模式控件903可以接收用户的点击操作,响应于用户的点击操作,电子设备100将第一类别切换为第二类别,其中,第二类别中编解码器的采样率、码率、量化位深、声道数均大于第一类别中编解码器的采样率、码率、量化位深、声道数。
示例性的,当电子设备100的网络不稳定,电子设备100可以接收用户操作将当前音频数据的传输模式切换为稳定传输模式。当用户选择开启稳定传输模式时,电子设备100将第一类别切换为第二类别,其中,第二类别中编解码器的码率低于第一类别中编解码器的码率。或者,电子设备100可以自动的切换至稳定传输模式。本申请在此不做限定。
如图10D所示,当用户选择开启稳定传输模式时,提示框900中的稳定传输模式控件902可以接收用户的点击操作,响应于用户的点击操作,电子设备100将第一类别切换为第二类别,其中,第二类别中编解码器的码率低于第一类别中编解码器的码率。
电子设备100获取音频数据的第二参数信息,当音频数据的第二参数信息满足第二条件时,电子设备100将第一类别切换为第二类别,并根据第二类别中的第二编码器将所述音频数据编码成第二编码音频数据,并将所述第二编码音频数据发送至所述音频播放设备。
电子设备100将第二类别的标识发送至音频播放设备200。
当电子设备100播放的音频数据的应用类型由应用类型一变换为应用类型二,或者电子设备100播放的音频数据由音频数据一切换为音频数据二,或者电子设备音频渲染能力由关闭切换为开启,使得第二参数信息为:电子设备100播放的音频数据的采样率从第一采样率变化为第二采样率,则电子设备100从电子设备100与音频播放设备200共有的类别中,选择采样率包括第二采样率,量化位深包括第一量化位深,码率包括第一码率,声道数包括第一声道数、音频流格式包括PCM的类别中默认的编解码器进行音频数据的传输。
对于S907-S912中的第一条件和第二条件的具体解释:
第一参数信息中的参数种类、第一编码器的参数信息中的参数种类、第一解码器的参数信息中的参数种类、第二参数信息中的参数种类、第二编码器的参数信息中的参数种类、第二解码器的参数信息中的参数种类相同;第一参数信息满足第一条件,第二参数信息满足第二条件,具体包括:第一参数信息中的采样率大于等于目标采样率,第二参数信息中的采样率小于目标采样率;和/或,第一参数信息中的码率大于等于目标码率,第二参数信息中的码率小于目标码率;和/或,第一参数信息中的量化位深大于等于目标量化位深,第二参数信息中的量化位深小于目标量化位深;和/或,第一参数信息中的声道数大于等于目标声道数,第二参数信息中的声道数小于目标声道数;和/或,第一参数信息中的音频流格式为目标音频流格式,第二参数信息中的音频流格式为目标音频流格式。
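上述第一条件/第二条件的判断,可以示意为将参数信息与目标阈值逐项比较。下面的草图以数值型参数(采样率、码率、量化位深、声道数)全部按"与"组合为例,音频流格式的等值判断略去;目标阈值的具体数值为举例假设(码率按kbps理解):

```python
# 示意:各项参数均不低于目标值则视为满足第一条件,否则视为满足第二条件
TARGET = {"sample_rate": 48000, "bitrate": 256, "bit_depth": 16, "channels": 2}

def which_condition(params):
    meets_first = all(params[k] >= TARGET[k] for k in TARGET)
    return "第一条件" if meets_first else "第二条件"

hi_params = {"sample_rate": 96000, "bitrate": 512, "bit_depth": 24, "channels": 2}
lo_params = {"sample_rate": 44100, "bitrate": 128, "bit_depth": 16, "channels": 2}
```

当音频数据的参数信息从hi_params变化为lo_params时,判断结果由"第一条件"变为"第二条件",对应于从第一类别切换到第二类别。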
S913、电子设备100将第二类别的标识发送至音频播放设备200。
音频播放设备200接收电子设备100发送的第二类别标识。电子设备100与音频播放设备200将通过第二类别中默认的编解码器进行音频数据的传输。
具体的,电子设备100采集音频数据,电子设备100将采用第二编码器标识对应的编码器将音频数据进行编码,得到编码后的音频数据(第二编码音频数据),电子设备100将编码后的音频数据发送至音频播放设备200,音频播放设备200将采用第二解码器标识对应的解码器将音频数据解码出来,得到未编码的音频数据(第二播放音频数据),音频播放设备200将播放未编码的音频数据(第二播放音频数据)。
其中,第二编码器标识为第二类别中,默认的编码器标识;第二解码器标识为第二类别中,默认的解码器标识。
为了避免电子设备100与音频播放设备200在切换编解码器时出现卡顿,需对切换过程中的音频数据进行平滑过渡,以提高用户的体验。
当第一编码器与第二编码器的时延相同时,电子设备通过第一编码器将音频数据中的第一音频帧编码成第一编码音频帧,并将第一编码音频帧发送给音频播放设备;通过第二编码器将音频数据中的第一音频帧编码成第二编码音频帧,并将第二编码音频帧发送至音频播放设备,通过第二编码器将音频数据中的第二音频帧编码成第N编码音频帧,并将第N编码音频帧发送至音频播放设备;音频播放设备通过第一解码器将第一编码音频帧解码为第一解码音频帧,通过第二解码器将第二编码音频帧解码为第二解码音频帧,通过第二解码器将第N编码音频帧解码为第N播放音频帧;对第一解码音频帧和第二解码音频帧进行平滑处理,得到第一播放音频帧。音频播放设备200首先播放第一播放音频帧,之后再播放第N播放音频帧。这样,当第一编码器与第二编码器的时延相同时,第一编码器与第二编码器的切换需要在一帧内完成,该一帧即为第一音频帧,音频播放设备对第一音频帧进行平滑处理之后再播放,防止编解码器切换时出现卡顿的情况,实现平滑过渡。对第一音频帧之后相邻的音频帧,例如第二音频帧,不需要平滑处理,直接通过第二解码器解码并播放出来。
音频播放设备通过公式Pcm=wi*pcmA+(1-wi)*pcmB得到第一播放音频帧;其中,Pcm为第一播放音频帧,wi为平滑系数,wi大于0小于1,pcmA为第一解码音频帧,pcmB为第二解码音频帧。
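上述公式Pcm=wi*pcmA+(1-wi)*pcmB逐样本计算时可以示意如下。平滑系数wi的具体取法本申请未限定,下面示意性地让wi随样本位置线性递减(且始终在0和1之间),实现由第一解码音频帧到第二解码音频帧的交叉淡化:

```python
# 示意:按 Pcm = wi*pcmA + (1-wi)*pcmB 对两路解码音频帧做平滑(交叉淡化)
def smooth_frames(pcm_a, pcm_b):
    n = len(pcm_a)
    out = []
    for i in range(n):
        wi = (n - i) / (n + 1)  # 假设的系数取法:从 n/(n+1) 线性降到 1/(n+1),均在(0,1)内
        out.append(wi * pcm_a[i] + (1 - wi) * pcm_b[i])
    return out

# 以 5 个样本为例:pcm_a 为第一解码音频帧,pcm_b 为第二解码音频帧(样本值为假设)
frame = smooth_frames([1.0] * 5, [0.0] * 5)
```

在该假设下,输出样本从接近pcm_a的取值单调过渡到接近pcm_b的取值,避免切换点处的波形突变。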
当第一编码器与第二编码器的时延不同时,电子设备通过公式D=取整((max(编码器一的总时延,编码器二的总时延)+(帧长-编码器一的总时延%帧长)+帧长-1)/帧长)获取D帧音频数据帧;其中,D表示第一编码器与第二编码器切换过程中的总音频数据帧数,max表示取最大值操作,%表示取余操作,帧长表示一帧音频数据的时长;通过第一编码器将音频数据中的第一音频帧至第D音频帧进行编码,得到第三编码音频帧至第D+2编码音频帧;通过第二编码器将音频数据中的第D音频帧进行编码,得到第D+3编码音频帧,通过第二编码器将音频数据中的第D+1音频帧进行编码,得到第N编码音频帧;将第三编码音频帧至第D+2编码音频帧、第D+3编码音频帧、第N编码音频帧发送至音频播放设备;音频播放设备通过第一解码器将第三编码音频帧至第D+2编码音频帧解码为第二播放音频帧至第D+1播放音频帧,通过第二解码器将第D+3编码音频帧解码为第三解码音频帧;播放第二播放音频帧至第D播放音频帧;对第D+1播放音频帧和第三解码音频帧进行平滑处理,得到目标播放音频帧,播放目标播放音频帧;通过第二解码器将第N编码音频帧解码为第N解码音频帧,播放第N解码音频帧。这样,当第一编码器与第二编码器的时延不同时,第一编码器与第二编码器的切换需要在多帧(D帧)内完成,使得切换过程中由第一编码器编码的音频数据与由第二编码器编码的音频数据到达音频播放设备并被解码出来的时刻一致。若编码器切换需要在D帧内完成,音频播放设备直接将第一音频帧至第D-1音频帧解码并播放出来,对第D音频帧进行平滑处理之后再播放,防止编解码器切换时出现卡顿的情况,实现平滑过渡。对第D音频帧之后相邻的音频帧,例如第N音频帧,不需要平滑处理,直接通过第二解码器解码并播放出来。
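上述切换帧数D的计算可以示意如下。两个编码器的总时延与帧长的数值均为举例假设(单位为毫秒),公式中的"取整"按向下取整理解:

```python
# 示意:D = 取整((max(d1, d2) + (帧长 - d1 % 帧长) + 帧长 - 1) / 帧长)
# d1、d2 为编码器一、编码器二的总时延,frame_len 为一帧音频数据的时长(帧长)
def switch_frames(d1, d2, frame_len):
    return (max(d1, d2) + (frame_len - d1 % frame_len) + frame_len - 1) // frame_len

D = switch_frames(25, 30, 10)  # 假设编码器一总时延 25ms,编码器二总时延 30ms,帧长 10ms
```

在该假设下D=4,即第一编码器与第二编码器的切换需在4帧内完成。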
音频播放设备,通过公式Pcm=wi*pcmA+(1-wi)*pcmB得到目标播放音频帧;其中,Pcm为目标播放音频帧,wi为平滑系数,wi大于0小于1,pcmA为第D+1播放音频帧,pcmB为第三解码音频帧。
电子设备100与音频播放设备200如何在切换过程中对音频数据进行平滑过渡,在前述实施例已详细介绍,本申请在此不再赘述。
S912-S913也可以替换为:当第二参数信息满足第二条件时,通过第一类别中的第一编码器将音频数据编码成第三编码音频数据,并将第三编码音频数据发送至音频播放设备;音频播放设备还用于通过第一类别中的第一解码器将第三编码音频数据解码成第三播放音频数据。即当电子设备与音频播放设备只支持一种编解码器类别(第一类别)时,若音频数据的参数信息由第一参数信息变化为第二参数信息,且第二参数信息满足第二条件,电子设备无法切换编解码器,仍采用第一类别中默认的编解码器与音频播放设备进行音频数据的传输。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (35)

  1. 一种编解码器协商与切换系统,其特征在于,所述系统包括电子设备和音频播放设备,其中:
    所述电子设备用于:
    当音频数据的第一参数信息满足第一条件时,根据第一类别中的第一编码器将所述音频数据编码成第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;其中,所述第一类别为所述电子设备在获取所述音频数据之前,确定出所述电子设备与所述音频播放设备共有的编解码器类别;
    将所述第一类别的标识发送至所述音频播放设备;
    所述音频播放设备用于:
    接收所述电子设备发送的所述第一类别的标识;
    通过所述第一类别中的第一解码器将所述第一编码音频数据解码成第一播放音频数据;
    所述电子设备还用于:
    当音频数据的第二参数信息满足第二条件时,根据第二类别中的第二编码器将所述音频数据编码成第二编码音频数据,并将所述第二编码音频数据发送至所述音频播放设备;其中,所述第二类别为所述电子设备在获取所述音频数据之前,确定的所述电子设备与所述音频播放设备共有的编解码器类别;
    将所述第二类别的标识发送至所述音频播放设备;
    所述音频播放设备还用于:
    接收所述电子设备发送的所述第二类别的标识;
    通过所述第二类别中的第二解码器将所述第二编码音频数据解码成第二播放音频数据;
    其中,所述第一条件与所述第二条件不同,所述第一类别与所述第二类别不同。
  2. 根据权利要求1所述的系统,其特征在于,所述第一类别中的编码器至少包括所述第一编码器,所述第二类别中的编码器至少包括所述第二编码器。
  3. 根据权利要求2所述的系统,其特征在于,所述电子设备还用于:
    接收所述音频播放设备发送的所述第一类别的标识和所述第二类别的标识;其中,所述第一类别中的解码器至少包括所述第一解码器,所述第二类别中的解码器至少包括所述第二解码器。
  4. 根据权利要求3所述的系统,其特征在于,所述电子设备还用于:
    确认出所述电子设备与所述音频播放设备的共有类别为所述第一类别和所述第二类别;
    将所述第一类别的标识和所述第二类别的标识发送至所述音频播放设备;
    所述音频播放设备,还用于:
    接收所述电子设备发送的所述第一类别的标识和所述第二类别的标识。
  5. 根据权利要求4所述的系统,其特征在于,所述第一类别中的编码器只包括所述第一编码器,所述第一类别中的解码器只包括所述第一解码器;
    在所述电子设备确认出所述电子设备与所述音频播放设备的共有类别为所述第一类别和所述第二类别之后,所述电子设备还用于:
    当所述第一参数信息满足所述第一条件时,通过所述第一类别中的所述第一编码器将所述音频数据编码成所述第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;
    所述音频播放设备,还用于通过所述第一类别中的所述第一解码器将所述第一编码音频数据解码成所述第一播放音频数据。
  6. 根据权利要求5所述的系统,其特征在于,所述第一类别中的编码器还包括第三编码器,所述第一类别中的解码器还包括第三解码器;
    在所述电子设备确认出所述电子设备与所述音频播放设备的共有类别为所述第一类别和所述第二类别之后,所述电子设备还用于:
    当所述第一参数信息满足所述第一条件时,通过所述第一类别中的第一编码器将所述音频数据编码成第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;其中,所述第一编码器的功耗低于所述第三编码器,或者,所述第一编码器的优先级或功率高于所述第三编码器;
    所述音频播放设备,还用于通过所述第一类别中的所述第一解码器将所述第一编码音频数据解码成第一播放音频数据;其中,所述第一解码器的功耗低于所述第三解码器,或者,所述第一解码器的优先级或功率高于所述第三解码器。
  7. 根据权利要求1所述的系统,其特征在于,当所述电子设备与所述音频播放设备共有的编码器类别只包括所述第一类别时,所述电子设备还用于:
    当所述第二参数信息满足所述第二条件时,通过所述第一类别中的所述第一编码器将所述音频数据编码成第三编码音频数据,并将所述第三编码音频数据发送至所述音频播放设备;
    所述音频播放设备,还用于通过所述第一类别中的所述第一解码器将所述第三编码音频数据解码成第三播放音频数据。
  8. 根据权利要求7所述的系统,其特征在于,所述电子设备与所述音频播放设备共有的编码器类别只包括所述第一类别,包括:
    所述电子设备未收到所述音频播放设备发送的所述第二类别的标识或所述电子设备划分到所述第二类别中的编码器的数量为0。
  9. 根据权利要求7所述的系统,其特征在于,所述第一类别中的编码器只包括所述第一编码器,所述第一类别中的解码器只包括所述第一解码器;
    当所述第一参数信息满足所述第一条件时,所述电子设备还用于:
    根据所述第一类别中的所述第一编码器将所述音频数据编码成所述第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;
    所述音频播放设备,还用于通过所述第一类别中的所述第一解码器将所述第一编码音频数据解码成所述第一播放音频数据。
  10. 根据权利要求7所述的系统,其特征在于,所述第一类别中的编码器还包括第三编码器,所述第一类别中的解码器还包括第三解码器;
    当所述第一参数信息满足所述第一条件时,所述电子设备还用于:
    根据所述第一类别中的所述第一编码器将所述音频数据编码成所述第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;其中,所述第一编码器的功耗低于所述第三编码器,或者,所述第一编码器的优先级或功率高于所述第三编码器;
    所述音频播放设备,还用于通过所述第一类别中的所述第一解码器将所述第一编码音频数据解码成所述第一播放音频数据;其中,所述第一解码器的功耗低于所述第三解码器,或者,所述第一解码器的优先级或功率高于所述第三解码器。
  11. 根据权利要求2-10任一项所述的系统,其特征在于,所述第一类别中的编解码器为高清音质编解码器,所述第二类别中的编解码器为标准音质编解码器;或
    所述第一类别中的编解码器为标准音质编解码器,所述第二类别中的编解码器为高清音质编解码器。
  12. 根据权利要求1所述的系统,其特征在于,在所述电子设备获取音频数据之前,所述电子设备还用于:
    基于所述第一编码器的参数信息以及编解码器分类标准将所述第一编码器划到所述第一类别中,基于所述第二编码器的参数信息以及所述编解码器分类标准将所述第二编码器划分到所述第二类别中;其中,所述第一编码器的参数信息和所述第二编码器的参数信息包括采样率、码率、量化位深、声道数和音频流格式中的一个或多个;
    所述音频播放设备还用于:
    基于所述第一解码器的参数信息以及所述编解码器分类标准将所述第一解码器划到所述第一类别中,基于所述第二解码器的参数信息以及所述编解码器分类标准将所述第二解码器划分到所述第二类别中;其中,所述第一解码器的参数信息和所述第二解码器的参数信息包括采样率、码率、量化位深、声道数和音频流格式中的一个或多个;
    其中,所述编解码器分类标准包括编解码器类别与编解码器的参数信息的映射关系。
  13. 根据权利要求12所述的系统,其特征在于,所述第一类别中的编解码器的采样率大于等于目标采样率,所述第二类别中的编解码器的采样率小于目标采样率;和/或,
    所述第一类别中的编解码器的码率大于等于目标码率,所述第二类别中的编解码器的码率小于目标码率;和/或,
    所述第一类别中的编解码器的声道数大于等于目标声道数,所述第二类别中的编解码器的声道数小于目标声道数;和/或,
    所述第一类别中的编解码器的量化位深大于等于目标量化位深,所述第二类别中的编解码器的量化位深小于目标量化位深;和/或,
    所述第一类别中的编解码器的音频流格式为目标音频流格式,所述第二类别中的编解码器的音频流格式为所述目标音频流格式。
  14. 根据权利要求13所述的系统,其特征在于,所述第一参数信息中的参数种类、所述第一编码器的参数信息中的参数种类、所述第一解码器的参数信息中的参数种类、所述第二参数信息中的参数种类、所述第二编码器的参数信息中的参数种类、所述第二解码器的参数信息中的参数种类相同;
    所述第一参数信息满足所述第一条件,所述第二参数信息满足所述第二条件,具体包括:
    所述第一参数信息中的采样率大于等于所述目标采样率,所述第二参数信息中的采样率小于所述目标采样率;和/或,
    所述第一参数信息中的码率大于等于所述目标码率,所述第二参数信息中的码率小于所述目标码率;和/或,
    所述第一参数信息中的量化位深大于等于所述目标量化位深,所述第二参数信息中的量化位深小于所述目标量化位深;和/或,
    所述第一参数信息中的声道数大于等于所述目标声道数,所述第二参数信息中的声道数小于所述目标声道数;和/或,
    所述第一参数信息中的音频流格式为所述目标音频流格式,所述第二参数信息中的音频流格式为所述目标音频流格式。
  15. 根据权利要求1所述的系统,其特征在于,当所述第一编码器与所述第二编码器的时延相同时,所述电子设备还用于:
    通过所述第一编码器将所述音频数据中的第一音频帧编码成第一编码音频帧,并将所述第一编码音频帧发送给所述音频播放设备;
    通过所述第二编码器将所述音频数据中的第一音频帧编码成第二编码音频帧,并将所述第二编码音频帧发送至所述音频播放设备;
    所述音频播放设备,还用于:
    通过所述第一解码器将所述第一编码音频帧解码为第一解码音频帧,通过所述第二解码器将所述第二编码音频帧解码为第二解码音频帧;
    对所述第一解码音频帧和所述第二解码音频帧进行平滑处理,得到第一播放音频帧。
  16. 根据权利要求1所述的系统,其特征在于,当所述第一编码器与所述第二编码器的时延不同,所述电子设备还用于:
    通过公式D=取整((max(编码器一的总时延,编码器二的总时延)+(帧长-编码器一的总时延%帧长)+帧长-1)/帧长)获取D帧音频数据帧;其中,D表示所述第一编码器与所述第二编码器切换过程中的总音频数据帧数,max表示取最大值操作,%表示取余操作,帧长表示一帧音频数据的时长;
    通过所述第一编码器将所述音频数据中的第一音频帧至第D音频帧进行编码,得到第三编码音频帧至第D+2编码音频帧;
    通过所述第二编码器将所述音频数据中的所述第D音频帧进行编码,得到第D+3编码音频帧;
    将所述第三编码音频帧至所述第D+2编码音频帧、所述第D+3编码音频帧发送至所述音频播放设备;
    所述音频播放设备,还用于:
    通过所述第一解码器将所述第三编码音频帧至所述第D+2编码音频帧解码为第二播放音频帧至第D+1播放音频帧,通过第二解码器将所述D+3编码音频帧解码为第三解码音频帧;
    播放所述第二播放音频帧至第D播放音频帧;
    对所述第D+1播放音频帧和所述第三解码音频帧进行平滑处理,得到目标播放音频帧。
  17. 根据权利要求15所述的系统,其特征在于,所述音频播放设备,还用于:
    通过公式Pcm=wi*pcmA+(1-wi)*pcmB得到所述第一播放音频帧;其中,Pcm为所述第一播放音频帧,wi为平滑系数,wi大于0小于1,pcmA为所述第一解码音频帧,pcmB为所述第二解码音频帧。
  18. 根据权利要求16所述的系统,其特征在于,所述音频播放设备,还用于:
    通过公式Pcm=wi*pcmA+(1-wi)*pcmB得到所述目标播放音频帧;其中,Pcm为所述目标播放音频帧,wi为平滑系数,wi大于0小于1,pcmA为所述第D+1播放音频帧,pcmB为所述第三解码音频帧。
  19. 一种编解码器协商与切换方法,其特征在于,所述方法包括:
    当音频数据的第一参数信息满足第一条件时,所述电子设备根据第一类别中的第一编码器将所述音频数据编码成第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;其中,所述第一类别为所述电子设备在获取所述音频数据之前,确定出的所述电子设备与所述音频播放设备共有的编解码器类别;
    当音频数据的第二参数信息满足第二条件时,所述电子设备根据第二类别中的第二编码器将所述音频数据编码成第二编码音频数据,并将所述第二编码音频数据发送至所述音频播放设备;其中,所述第二类别为所述电子设备在获取所述音频数据之前,确定出的所述电子设备与所述音频播放设备共有的编解码器类别;所述第一条件与所述第二条件不同,所述第一类别与所述第二类别不同。
  20. 根据权利要求19所述的方法,其特征在于,所述第一类别中的编码器至少包括所述第一编码器,所述第二类别中的编码器至少包括所述第二编码器。
  21. 根据权利要求20所述的方法,其特征在于,所述方法还包括:
    所述电子设备接收所述音频播放设备发送的所述第一类别的标识和所述第二类别的标识;其中,所述第一类别中的解码器至少包括所述第一解码器,所述第二类别中的解码器至少包括所述第二解码器。
  22. 根据权利要求21所述的方法,其特征在于,所述方法还包括:
    所述电子设备确认出所述电子设备与所述音频播放设备的共有类别为所述第一类别和所述第二类别;
    所述电子设备将所述第一类别的标识和所述第二类别的标识发送至所述音频播放设备。
  23. 根据权利要求22所述的方法,其特征在于,所述第一类别中的编码器只包括所述第一编码器;在所述电子设备确认出所述电子设备与所述音频播放设备的共有类别为所述第一类别和所述第二类别之后,所述方法还包括:
    当所述第一参数信息满足所述第一条件时,所述电子设备通过所述第一类别中的所述第一编码器将所述音频数据编码成所述第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备。
  24. 根据权利要求23所述的方法,其特征在于,所述第一类别中的编码器还包括第三编码器;在所述电子设备确认出所述电子设备与所述音频播放设备的共有类别为所述第一类别和所述第二类别之后,所述方法还包括:
    当所述第一参数信息满足所述第一条件时,所述电子设备通过所述第一类别中的第一编码器将所述音频数据编码成第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;其中,所述第一编码器的功耗低于所述第三编码器,或者,所述第一编码器的优先级或功率高于所述第三编码器。
  25. 根据权利要求19所述的方法,其特征在于,当所述电子设备与所述音频播放设备共有的编码器类别只包括所述第一类别时,所述方法还包括:
    当所述第二参数信息满足所述第二条件时,所述电子设备通过所述第一类别中的所述第一编码器将所述音频数据编码成第三编码音频数据,并将所述第三编码音频数据发送至所述音频播放设备。
  26. 根据权利要求25所述的方法,其特征在于,所述电子设备与所述音频播放设备共有的编码器类别只包括所述第一类别,包括:所述电子设备未收到所述音频播放设备发送的所述第二类别的标识;或所述电子设备划分到所述第二类别中的编码器的数量为0。
  27. 根据权利要求25所述的方法,其特征在于,所述第一类别中的编码器只包括所述第一编码器,所述第一类别中的解码器只包括所述第一解码器;
    当所述第一参数信息满足所述第一条件时,所述电子设备根据所述第一类别中的所述第一编码器将所述音频数据编码成所述第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备。
  28. 根据权利要求25所述的方法,其特征在于,所述第一类别中的编码器还包括第三编码器,所述第一类别中的解码器还包括第三解码器;
    当所述第一参数信息满足所述第一条件时,所述电子设备根据所述第一类别中的所述第一编码器将所述音频数据编码成所述第一编码音频数据,并将所述第一编码音频数据发送至所述音频播放设备;其中,所述第一编码器的功耗低于所述第三编码器,或者,所述第一编码器的优先级或功率高于所述第三编码器。
  29. 根据权利要求20-28任一项所述的方法,其特征在于,所述第一类别中的编解码器为高清音质编解码器,所述第二类别中的编解码器为标准音质编解码器;或
    所述第一类别中的编解码器为标准音质编解码器,所述第二类别中的编解码器为高清音质编解码器。
  30. 根据权利要求19所述的方法,其特征在于,在所述电子设备获取音频数据之前,所述方法还包括:
    所述电子设备基于所述第一编码器的参数信息以及编解码器分类标准将所述第一编码器划到所述第一类别中,基于所述第二编码器的参数信息以及所述编解码器分类标准将所述第二编码器划分到所述第二类别中;其中,所述第一编码器的参数信息和所述第二编码器的参数信息包括采样率、码率、量化位深、声道数和音频流格式中的一个或多个;所述编解码器分类标准包括编解码器类别与编解码器的参数信息的映射关系。
  31. 根据权利要求30所述的方法,其特征在于,所述第一类别中的编解码器的采样率大于等于目标采样率,所述第二类别中的编解码器的采样率小于目标采样率;和/或,
    所述第一类别中的编解码器的码率大于等于目标码率,所述第二类别中的编解码器的码率小于目标码率;和/或,
    所述第一类别中的编解码器的声道数大于等于目标声道数,所述第二类别中的编解码器的声道数小于目标声道数;和/或,
    所述第一类别中的编解码器的量化位深大于等于目标量化位深,所述第二类别中的编解码器的量化位深小于目标量化位深;和/或,
    所述第一类别中的编解码器的音频流格式为目标音频流格式,所述第二类别中的编解码器的音频流格式为所述目标音频流格式。
  32. 根据权利要求31所述的方法,其特征在于,所述第一参数信息中的参数种类、所述第一编码器的参数信息中的参数种类、所述第一解码器的参数信息中的参数种类、所述第二参数信息中的参数种类、所述第二编码器的参数信息中的参数种类、所述第二解码器的参数信息中的参数种类相同;
    所述第一参数信息满足所述第一条件,所述第二参数信息满足所述第二条件,具体包括:
    所述第一参数信息中的采样率大于等于所述目标采样率,所述第二参数信息中的采样率小于所述目标采样率;和/或,
    所述第一参数信息中的码率大于等于所述目标码率,所述第二参数信息中的码率小于所述目标码率;和/或,
    所述第一参数信息中的量化位深大于等于所述目标量化位深,所述第二参数信息中的量化位深小于所述目标量化位深;和/或,
    所述第一参数信息中的声道数大于等于所述目标声道数,所述第二参数信息中的声道数小于所述目标声道数;和/或,
    所述第一参数信息中的音频流格式为所述目标音频流格式,所述第二参数信息中的音频流格式为所述目标音频流格式。
  33. 一种电子设备,其特征在于,包括一个或多个处理器、一个或多个存储器,一个或多个编码器;所述一个或多个存储器、所述一个或多个编码器与所述一个或多个处理器耦合,所述一个或多个存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,所述一个或多个处理器调用所述计算机指令以使得所述电子设备执行如权利要求19至32任一项所述的方法。
  34. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质中存储有计算机可执行指令,所述计算机可执行指令在被所述计算机调用时用于执行如权利要求19至32任一项所述的方法。
  35. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如权利要求19至32中任意一项所述的方法。
PCT/CN2022/083816 2021-04-20 2022-03-29 一种编解码器协商与切换方法 WO2022222713A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023564200A JP2024515684A (ja) 2021-04-20 2022-03-29 コーデックネゴシエーションおよび切替方法
EP22790822.5A EP4318467A1 (en) 2021-04-20 2022-03-29 Codec negotiation and switching method
US18/489,217 US20240045643A1 (en) 2021-04-20 2023-10-18 Codec negotiation and switching method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110423987.8A CN115223579A (zh) 2021-04-20 2021-04-20 一种编解码器协商与切换方法
CN202110423987.8 2021-04-20

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/489,217 Continuation US20240045643A1 (en) 2021-04-20 2023-10-18 Codec negotiation and switching method

Publications (1)

Publication Number Publication Date
WO2022222713A1 (zh) 2022-10-27

Family

ID=83604709

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/083816 WO2022222713A1 (zh) 2021-04-20 2022-03-29 一种编解码器协商与切换方法

Country Status (5)

Country Link
US (1) US20240045643A1 (zh)
EP (1) EP4318467A1 (zh)
JP (1) JP2024515684A (zh)
CN (1) CN115223579A (zh)
WO (1) WO2022222713A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116261008A (zh) * 2022-12-14 2023-06-13 海信视像科技股份有限公司 音频处理方法和音频处理装置
CN116580716B (zh) * 2023-07-12 2023-10-27 腾讯科技(深圳)有限公司 音频编码方法、装置、存储介质及计算机设备

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1300508A (zh) * 1999-04-06 2001-06-20 阿尔卡塔尔公司 在话音信道上传输数据的方法和设备
US20070255433A1 (en) * 2006-04-25 2007-11-01 Choo Eugene K Method and system for automatically selecting digital audio format based on sink device
CN103477388A (zh) * 2011-10-28 2013-12-25 松下电器产业株式会社 声音信号混合解码器、声音信号混合编码器、声音信号解码方法及声音信号编码方法
CN104509119A (zh) * 2012-04-24 2015-04-08 Vid拓展公司 用于mpeg/3gpp-dash中平滑流切换的方法和装置
CN107404339A (zh) * 2017-08-14 2017-11-28 青岛海信电器股份有限公司 一种调节蓝牙a2dp编码设置的方法和装置
WO2020239985A1 (en) * 2019-05-31 2020-12-03 Tap Sound System Method for operating a bluetooth device
WO2021018739A1 (en) * 2019-07-26 2021-02-04 Tap Sound System Method for managing a plurality of multimedia communication links in a point-to-multipoint bluetooth network


Also Published As

Publication number Publication date
CN115223579A (zh) 2022-10-21
EP4318467A1 (en) 2024-02-07
US20240045643A1 (en) 2024-02-08
JP2024515684A (ja) 2024-04-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22790822; Country of ref document: EP; Kind code of ref document: A1)
WWE Wipo information: entry into national phase (Ref document number: 2023564200; Country of ref document: JP)
WWE Wipo information: entry into national phase (Ref document number: 2022790822; Country of ref document: EP)
ENP Entry into the national phase (Ref document number: 2022790822; Country of ref document: EP; Effective date: 20231030)
NENP Non-entry into the national phase (Ref country code: DE)