WO2022222713A1 - A codec negotiation and switching method - Google Patents
A codec negotiation and switching method
- Publication number
- WO2022222713A1 (PCT/CN2022/083816)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- category
- audio
- electronic device
- encoder
- audio data
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/22—Mode decision, i.e. based on audio signal content versus external parameters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/162—Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/02—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
- G10L19/032—Quantisation or dequantisation of spectral components
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/167—Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/04—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
- G10L19/16—Vocoder architecture
- G10L19/18—Vocoders using multiple modes
- G10L19/24—Variable rate codecs, e.g. for generating different qualities using a scalable representation such as hierarchical encoding or layered encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W4/00—Services specially adapted for wireless communication networks; Facilities therefor
- H04W4/80—Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
Definitions
- the present application relates to the technical field of audio processing, and in particular, to a codec negotiation and switching method.
- the electronic device and the audio playback device negotiate the codec types supported by both parties; the electronic device then encodes the audio data with a codec supported by both parties and sends the encoded audio data to the audio playback device, which receives the encoded audio data and decodes it with the corresponding decoder for playback.
- when the network of the electronic device degrades or the user selects higher sound quality, the electronic device needs to switch the encoder, and the electronic device and the audio playback device must renegotiate the codec type. While the codec type is being renegotiated, the electronic device suspends sending audio data to the audio playback device, causing interruptions and stutters in the audio and degrading the user experience. Therefore, how to switch the codec quickly between the electronic device and the audio playback device while keeping the audio data flowing uninterrupted during the switch is an urgent problem to be solved.
- the present application provides a codec negotiation and switching method, in which an electronic device that needs to switch encoders does not renegotiate the codec type with the audio playback device; instead, it directly selects a codec from one of the previously negotiated codec categories supported by both parties for audio data transmission. This solves the problem of audio interruption and stutter when switching codecs between the electronic device and the audio playback device, and improves the user experience.
- the present application provides a codec negotiation and switching system, the system including an electronic device and an audio playback device, wherein the electronic device is configured to: when the first parameter information of the audio data satisfies the first condition, encode the audio data into first encoded audio data with the first encoder in the first category, and send the first encoded audio data to the audio playback device; wherein the first category is a codec category shared with the audio playback device that the electronic device determines before acquiring the audio data; the identifier of the first category is sent to the audio playback device;
- the audio playback device is configured to: receive the identifier of the first category sent by the electronic device, and decode the first encoded audio data into the first playback audio data;
- the electronic device is further configured to: when the second parameter information of the audio data satisfies the second condition, encode the audio data into second encoded audio data with the second encoder in the second category, and send the second encoded audio data to the audio playback device
- the electronic device and the audio playback device, before transmitting audio data, divide the codecs into a plurality of categories and determine the codec categories shared by the electronic device and the audio playback device (for example, the first category and the second category). Afterwards, the electronic device acquires the first parameter information of the audio data, and when the first parameter information satisfies the first condition, selects a codec in the first category from the shared codec categories to transmit the audio data. When the content of the audio being played, the application playing it, the user's selection, or the network conditions change, the electronic device acquires the second parameter information of the audio data.
- when the second parameter information satisfies the second condition, the electronic device does not need to negotiate the codec with the audio playback device again; it directly selects a codec in the second category from the shared codec categories to transmit the audio data. In this way, when the electronic device needs to switch the encoder, it does not re-negotiate the codec type with the audio playback device, but directly selects a codec from one of the previously negotiated categories supported by both parties for audio data transmission. This solves the problem of audio interruption and stutter when switching codecs between the electronic device and the audio playback device, and improves the user experience.
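The negotiate-once, switch-freely idea described above can be sketched as follows. The category names, codec names, and the bitrate threshold are illustrative assumptions for the sketch, not values taken from the patent.

```python
# Sketch: negotiate shared codec categories once, then switch categories
# mid-stream without any renegotiation. All names and the 560 kbps
# threshold are hypothetical.

def negotiate_categories(local, remote):
    """One-time negotiation: intersect the category sets of both devices."""
    return {cat: codecs for cat, codecs in local.items() if cat in remote}

def pick_category(shared, bitrate_kbps, threshold_kbps=560):
    """Select a category from the already-negotiated shared set.

    When the parameter information of the audio data changes, the device
    simply picks a different shared category; no renegotiation is needed.
    """
    wanted = "hd" if bitrate_kbps >= threshold_kbps else "standard"
    # Fall back to any shared category if the preferred one is unavailable.
    return wanted if wanted in shared else next(iter(shared))

phone = {"hd": ["codec_hd_a"], "standard": ["codec_std_a", "codec_std_b"]}
headset = {"hd": ["codec_hd_a"], "standard": ["codec_std_a"]}

shared = negotiate_categories(phone, headset)   # done once, before streaming
assert set(shared) == {"hd", "standard"}

# Network degrades mid-stream: switch category without renegotiating.
assert pick_category(shared, 900) == "hd"
assert pick_category(shared, 300) == "standard"
```

Because the intersection is computed before any audio is sent, a later switch is a purely local decision on the sending side, which is what keeps the audio stream uninterrupted.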
- the encoders in the first category include at least the first encoder, and the encoders in the second category include at least the second encoder.
- the electronic device is further configured to: receive the identifier of the first category and the identifier of the second category sent by the audio playback device; wherein the decoders in the first category include at least the first decoder, and the decoders in the second category include at least the second decoder.
- the audio playback device classifies the decoders into a plurality of categories according to the codec classification criteria.
- among the categories into which the decoder identifiers are divided, the categories containing one or more decoders are the first category and the second category.
- the decoders classified into the first category include at least the first decoder and may also include other decoders, such as the third decoder; the decoders classified into the second category include at least the second decoder and may also include other decoders, such as the fourth decoder.
- the electronic device is further configured to: confirm that the categories shared by the electronic device and the audio playback device are the first category and the second category, and send the identifier of the first category and the identifier of the second category to the audio playback device; the audio playback device is further configured to: receive the identifier of the first category and the identifier of the second category sent by the electronic device.
- the electronic device sends the codec categories supported by both parties to the audio playback device, so that the audio playback device knows the codec categories supported by both parties.
- the electronic device may not need to send the identifier of the first category and the identifier of the second category to the audio playback device.
- when transmitting audio data, the electronic device only needs to send the identifier of the codec category selected according to the parameter information of the audio data to the audio playback device.
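The in-band category signaling described above can be sketched as a packet that carries the category identifier alongside the payload, so the playback device can pick the matching decoder without a new negotiation. The packet layout (a one-byte category identifier followed by the payload) and the identifier values are assumptions for illustration only.

```python
# Sketch: tag each audio packet with the identifier of the codec category
# in use, so the receiver selects the right decoder on its own. The 1-byte
# header layout and the id-to-decoder mapping are hypothetical.

def pack(category_id: int, payload: bytes) -> bytes:
    """Prepend the category identifier to the encoded audio payload."""
    return bytes([category_id]) + payload

def unpack(packet: bytes):
    """Split a received packet back into (category_id, payload)."""
    return packet[0], packet[1:]

DECODERS = {1: "first decoder", 2: "second decoder"}  # hypothetical ids

cat_id, body = unpack(pack(2, b"\x10\x20"))
assert DECODERS[cat_id] == "second decoder"
assert body == b"\x10\x20"
```

With such a scheme a category switch is visible to the receiver on the very next packet, which is consistent with the document's claim that no pause in transmission is required.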
- the encoder in the first category includes only the first encoder
- the decoder in the first category includes only the first decoder
- the electronic device is further configured to: when the first parameter information satisfies the first condition, encode the audio data into the first encoded audio data using the first encoder in the first category, and send the first encoded audio data to the audio playback device;
- the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category.
- the codec categories supported by both the electronic device and the audio playback device are the first category and the second category.
- when the first category contains only one encoder and one decoder, the electronic device uses that encoder and decoder as the default encoder and decoder of the category. Afterwards, when the electronic device and the audio playback device use the codec in the first category for audio data transmission, the electronic device encodes the audio data into the first encoded audio data with the default encoder in the first category and sends the first encoded audio data to the audio playback device, and the audio playback device uses the default decoder in the first category to decode the first encoded audio data into the first playback audio data.
- the encoder in the first category further includes a third encoder
- the decoder in the first category further includes a third decoder
- the electronic device is further configured to: when the first parameter information satisfies the first condition, encode the audio data into the first encoded audio data using the first encoder in the first category, and send the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category; wherein the power consumption of the first decoder is lower than that of the third decoder, or the priority or power of the first decoder is higher than that of the third decoder.
- the codec categories supported by both the electronic device and the audio playback device are the first category and the second category.
- the electronic device determines one encoder from the multiple encoders as the default encoder according to a preset rule, and determines one decoder from the multiple decoders as the default decoder according to the preset rule.
- the preset rules can be priority rules, efficiency rules, power consumption rules, and so on.
- it should be noted that the electronic device only needs to determine one encoder from the multiple encoders as the default encoder according to the preset rule, while the audio playback device determines one decoder from the multiple decoders as the default decoder according to the preset rule. This application is not limited here.
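The preset rules above (priority, efficiency, power consumption) can be sketched as a simple selection over codec records. The codec names and the numeric values are made-up examples; the patent does not specify concrete figures.

```python
# Sketch: pick the default codec inside a category with a preset rule.
# "power" selects the lowest-power codec, "priority" the highest-priority
# one. The records below are hypothetical.

def pick_default(codecs, rule="power"):
    if rule == "power":
        return min(codecs, key=lambda c: c["power_mw"])
    if rule == "priority":
        return max(codecs, key=lambda c: c["priority"])
    raise ValueError(f"unknown rule: {rule}")

first_category = [
    {"name": "first encoder", "power_mw": 35, "priority": 2},
    {"name": "third encoder", "power_mw": 50, "priority": 1},
]

assert pick_default(first_category, "power")["name"] == "first encoder"
assert pick_default(first_category, "priority")["name"] == "first encoder"
```

Because both sides apply the same deterministic rule to the same category, the sender's default encoder and the receiver's default decoder stay matched without any extra signaling.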
- when the encoder category shared by the electronic device and the audio playback device only includes the first category, the electronic device is further configured to: when the second parameter information satisfies the second condition, encode the audio data into third encoded audio data through the first encoder in the first category, and send the third encoded audio data to the audio playback device; the audio playback device is further configured to decode the third encoded audio data into third playback audio data through the first decoder in the first category.
- the electronic device and the audio playback device only support one type of codec
- when the parameter information of the audio data changes from the first parameter information to the second parameter information and the second parameter information satisfies the second condition, the electronic device cannot switch the codec, and it still uses the default codec in the first category to transmit audio data with the audio playback device.
- the case in which the encoder category shared by the electronic device and the audio playback device only includes the first category includes: the electronic device has not received the identifier of the second category sent by the audio playback device; or the number of encoders that the electronic device classifies into the second category is zero.
- the encoder in the first category includes only the first encoder
- the decoder in the first category includes only the first decoder
- when the first parameter information satisfies the first condition, the electronic device is further configured to: encode the audio data into the first encoded audio data according to the first encoder in the first category, and send the first encoded audio data to the audio playback device; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category.
- the codec category supported by both the electronic device and the audio playback device includes only the first category.
- when the first category contains only one encoder and one decoder, the electronic device uses that encoder and decoder as the default encoder and decoder of the category. Afterwards, when the electronic device and the audio playback device use the codec in the first category for audio data transmission, the electronic device encodes the audio data into the first encoded audio data with the default encoder in the first category and sends the first encoded audio data to the audio playback device, and the audio playback device uses the default decoder in the first category to decode the first encoded audio data into the first playback audio data.
- the encoder in the first category further includes a third encoder
- the decoder in the first category further includes a third decoder
- the electronic device is further configured to: encode the audio data into the first encoded audio data with the first encoder in the first category, and send the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder;
- the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category; wherein the power consumption of the first decoder is lower than that of the third decoder, or the priority or power of the first decoder is higher than that of the third decoder.
- the codec category supported by both the electronic device and the audio playback device only includes the first category.
- the electronic device determines one encoder from the multiple encoders as the default encoder according to a preset rule, and determines one decoder from the multiple decoders as the default decoder according to the preset rule.
- the preset rules can be priority rules, efficiency rules, power consumption rules, and so on.
- it should be noted that the electronic device only needs to determine one encoder from the multiple encoders as the default encoder according to the preset rule, while the audio playback device determines one decoder from the multiple decoders as the default decoder according to the preset rule. This application is not limited here.
- the codecs in the first category are high-definition sound quality codecs and the codecs in the second category are standard sound quality codecs; or,
- the codecs in the first category are standard sound quality codecs and the codecs in the second category are high-definition sound quality codecs.
- before the electronic device acquires the audio data, the electronic device is further configured to: classify the first encoder into the first category based on the parameter information of the first encoder and the codec classification standard, and classify the second encoder into the second category based on the parameter information of the second encoder and the codec classification standard; wherein the parameter information of the first encoder and the parameter information of the second encoder include one or more of the sampling rate, code rate, quantization bit depth, number of channels, and audio stream format; the audio playback device is further configured to: classify the first decoder into the first category based on the parameter information of the first decoder and the codec classification standard, and classify the second decoder into the second category based on the parameter information of the second decoder and the codec classification standard; wherein the parameter information of the first decoder and the parameter information of the second decoder include one or more of the sampling rate, code rate, quantization bit depth, number of channels, and audio stream format; wherein the codec classification standard includes the mapping relationship between the parameter information of a codec and the codec categories.
- the sampling rate of the codecs in the first category is greater than or equal to the target sampling rate, and the sampling rate of the codecs in the second category is less than the target sampling rate; and/or, the code rate of the codecs in the first category is greater than or equal to the target code rate, and the code rate of the codecs in the second category is less than the target code rate; and/or, the number of channels of the codecs in the first category is greater than or equal to the target number of channels, and the number of channels of the codecs in the second category is less than the target number of channels; and/or, the quantization bit depth of the codecs in the first category is greater than or equal to the target quantization bit depth, and the quantization bit depth of the codecs in the second category is less than the target quantization bit depth; and/or, the audio stream format of the codecs in the first category is the target audio stream format, and the audio stream format of the codecs in the second category is not the target audio stream format.
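The classification standard above can be sketched as a threshold check over each codec's parameter information: a codec whose parameters meet the targets falls into the first category, otherwise into the second. The target values below are illustrative assumptions, not figures from the patent.

```python
# Sketch: classify a codec into the first or second category by comparing
# its parameter information against target thresholds. The target values
# (96 kHz, 24-bit, 2 channels) are hypothetical.

TARGET = {"sample_rate": 96_000, "bit_depth": 24, "channels": 2}

def classify(codec):
    """Return the category for a codec described by its parameter dict."""
    meets_targets = (
        codec["sample_rate"] >= TARGET["sample_rate"]
        and codec["bit_depth"] >= TARGET["bit_depth"]
        and codec["channels"] >= TARGET["channels"]
    )
    return "first category" if meets_targets else "second category"

hd = {"sample_rate": 96_000, "bit_depth": 24, "channels": 2}
std = {"sample_rate": 44_100, "bit_depth": 16, "channels": 2}
assert classify(hd) == "first category"
assert classify(std) == "second category"
```

Since both devices apply the same standard to their own codec lists, the resulting category identifiers mean the same thing on both sides, which is what makes the later category-only signaling sufficient.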
- the parameter types in the first parameter information, in the parameter information of the first encoder, in the parameter information of the first decoder, in the second parameter information, in the parameter information of the second encoder, and in the parameter information of the second decoder are the same; the first parameter information satisfying the first condition and the second parameter information satisfying the second condition specifically include: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or, the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or, the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or, the number of channels in the first parameter information is greater than or equal to the target number of channels, and the number of channels in the second parameter information is less than the target number of channels.
- the electronic device is further configured to: encode the first audio frame in the audio data into a first encoded audio frame with the first encoder, and send the first encoded audio frame to the audio playback device; encode the first audio frame in the audio data into a second encoded audio frame with the second encoder, and send the second encoded audio frame to the audio playback device; and encode the second audio frame in the audio data into the Nth encoded audio frame with the second encoder, and send the Nth encoded audio frame to the audio playback device; the audio playback device is further configured to: decode the first encoded audio frame into the first decoded audio frame through the first decoder, decode the second encoded audio frame into the second decoded audio frame through the second decoder, and decode the Nth encoded audio frame into the Nth playback audio frame through the second decoder; the first decoded audio frame and the second decoded audio frame are smoothed into the first playback audio frame.
- the audio playback device first plays the first playback audio frame, and then plays the Nth playback audio frame.
- the switching between the first encoder and the second encoder needs to be completed within one frame, this frame being the first audio frame, and the audio playback device plays the first audio frame after smoothing it, so as to prevent stutter when the codec is switched and to achieve a smooth transition.
- the audio frames following the first audio frame, such as the second audio frame, do not need to be smoothed; they are directly decoded by the second decoder and played.
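The smoothing of the transition frame can be sketched as a cross-fade: the same frame is decoded by both the old and the new decoder, and the two decoded versions are blended sample by sample. A linear fade is an assumption for the sketch; the patent does not fix a particular smoothing curve.

```python
# Sketch: cross-fade the two decoded versions of the switch frame, fading
# out the old decoder's output and fading in the new one's, so the codec
# switch produces no audible click. Linear weights are hypothetical.

def smooth_frame(old_decoded, new_decoded):
    """Blend two equal-length decoded frames into one playback frame."""
    n = len(old_decoded)
    out = []
    for i, (a, b) in enumerate(zip(old_decoded, new_decoded)):
        w = i / (n - 1) if n > 1 else 1.0   # 0.0 at frame start, 1.0 at end
        out.append((1.0 - w) * a + w * b)
    return out

old = [1.0, 1.0, 1.0, 1.0, 1.0]   # toy samples from the first decoder
new = [0.0, 0.0, 0.0, 0.0, 0.0]   # toy samples from the second decoder
mixed = smooth_frame(old, new)
assert mixed[0] == 1.0    # starts fully on the old decoder's output
assert mixed[-1] == 0.0   # ends fully on the new decoder's output
assert mixed[2] == 0.5    # midpoint is an even blend
```

After this one blended frame, subsequent frames come from the new decoder alone, matching the description that frames after the switch frame need no smoothing.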
- the first audio frame to the D th audio frame is encoded to obtain the third coded audio frame to the D+2 th coded audio frame;
- the D th audio frame in the audio data is encoded by the second encoder to obtain the D+3 th encoding Audio frame, the D+1 th audio frame in the audio data is encoded by the second encoder to obtain the Nth
- the switching between the first encoder and the second encoder needs to be completed over multiple frames (D frames), so that during the switch from the first encoder to the second encoder, audio data encoded by the first encoder arrives at the audio playback device and is decoded while, at the same time, audio data encoded by the second encoder arrives at the audio playback device and is decoded. If the encoder switching needs to be completed within D frames, the audio playback device directly decodes and plays the first audio frame to the D-1th audio frame, so that there is no freeze when the codec is switched and a smooth transition is achieved. The adjacent audio frames after the Dth audio frame, such as the Nth audio frame, do not need smoothing processing and are decoded by the second decoder and played directly.
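The multi-frame (D-frame) switch can be sketched as a schedule in which both encoders run in parallel during the switch window. The frame indexing below is a hypothetical simplification, not the exact numbering used in the claims.

```python
def encoders_for_frame(i, switch_start, d):
    """Which encoder(s) process frame i when the switch spans d frames."""
    if i < switch_start:
        return {"first"}                # before the switch: old encoder only
    if i < switch_start + d:
        return {"first", "second"}      # switch window: both encoders run
    return {"second"}                   # after the switch: new encoder only
```

Inside the window the playback device receives and decodes streams from both encoders, which is what allows the transition without pausing the audio.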
- the present application provides another codec negotiation and switching method.
- the method includes: when the first parameter information of the audio data satisfies the first condition, the electronic device encodes the audio data into first encoded audio data according to the first encoder in the first category, and sends the first encoded audio data to the audio playback device; wherein the first category is a codec category that the electronic device determined, before acquiring the audio data, to be shared by the electronic device and the audio playback device; when the second parameter information satisfies the second condition, the electronic device encodes the audio data into second encoded audio data according to the second encoder in the second category, and sends the second encoded audio data to the audio playback device;
- the first category is a codec category that the electronic device determined, before acquiring the audio data, to be shared by the electronic device and the audio playback device;
- the second category is a codec category that the electronic device determined, before acquiring the audio data, to be shared by the electronic device and the audio playback device;
- the first condition is different from the second condition, and the first category is different from the second category.
- before transmitting the audio data, the electronic device and the audio playback device divide the codecs into multiple categories and determine the codec categories shared by the electronic device and the audio playback device (for example, the first category and the second category). Afterwards, the electronic device acquires the first parameter information of the audio data, and when the first parameter information satisfies the first condition, selects a codec in the first category from the shared codec categories to transmit the audio data. After the content of the played audio data, the application playing the audio data, the user selection, or the network conditions change, the electronic device acquires the second parameter information of the audio data.
- when the second parameter information of the audio data satisfies the second condition, the electronic device does not need to negotiate the codec with the audio playback device again, and directly selects the codec in the second category from the common codec categories to transmit the audio data. In this way, when the electronic device needs to switch the encoder, it does not need to renegotiate the codec type with the audio playback device; it directly selects a codec in a category from the previously negotiated codec categories supported by both parties for audio data transmission. This solves the problem of audio data interruption and freezing when switching codecs between the electronic device and the audio playback device, and improves the user experience.
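Because the shared categories were fixed during the initial handshake, a later switch reduces to a local table lookup. The category and codec names below are made up for illustration.

```python
# Result of the one-time negotiation (hypothetical identifiers).
SHARED_CATEGORIES = {
    "first": ["hd_codec_a", "hd_codec_b"],   # e.g. high-definition codecs
    "second": ["std_codec_a"],               # e.g. standard-quality codecs
}

def pick_codec(first_condition_met):
    """Select the default codec of the matching category without renegotiating."""
    category = "first" if first_condition_met else "second"
    return SHARED_CATEGORIES[category][0]
```

No round trip to the playback device is needed at switch time; only the chosen category identifier is communicated.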
- the encoders in the first category include at least the first encoder, and the encoders in the second category include at least the second encoder.
- the method further includes: the electronic device receives the identifier of the first category and the identifier of the second category sent by the audio playback device; wherein the decoders in the first category include at least the first decoder.
- the audio playback device classifies the decoders into a plurality of categories according to the codec classification criteria.
- among the multiple categories, the categories in which the number of decoder identifiers is greater than or equal to 1 are the first category and the second category.
- the decoders classified into the first category include at least the first decoder and may also include other decoders, such as the third decoder; the decoders classified into the second category include at least the second decoder and may also include other decoders, such as the fourth decoder.
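On the playback-device side, reporting only the categories that actually contain decoders might look like the following sketch (the identifiers are hypothetical).

```python
def nonempty_categories(decoder_categories):
    """Return the identifiers of categories holding at least one decoder."""
    return [cat for cat, decoders in decoder_categories.items()
            if len(decoders) >= 1]
```

Categories with no decoders are simply not reported, so the electronic device never selects them.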
- the method further includes: the electronic device confirms that the categories shared by the electronic device and the audio playback device are the first category and the second category; the electronic device sends the identifier of the first category and the identifier of the second category to the audio playback device. The electronic device sends the codec categories supported by both parties to the audio playback device so that the audio playback device knows the codec categories supported by both parties.
- the encoders in the first category include only the first encoder; after the electronic device confirms that the categories shared by the electronic device and the audio playback device are the first category and the second category, the method further includes: when the first parameter information satisfies the first condition, the electronic device encodes the audio data into the first encoded audio data through the first encoder in the first category, and sends the first encoded audio data to the audio playback device.
- the codec categories supported by both the electronic device and the audio playback device are the first category and the second category.
- when a category includes only one encoder, the electronic device uses that encoder in the category as the default encoder and decoder.
- the audio data is encoded into the first encoded audio data according to the default encoder in the first category, and then the electronic device sends the first encoded audio data to the audio playback device.
- the encoders in the first category further include a third encoder; after the electronic device confirms that the categories shared by the electronic device and the audio playback device are the first category and the second category, the method further includes: when the first parameter information satisfies the first condition, the electronic device encodes the audio data into the first encoded audio data through the first encoder in the first category, and sends the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority of the first encoder is higher than that of the third encoder.
- the codec categories supported by both the electronic device and the audio playback device are the first category and the second category.
- the electronic device will determine one encoder from the multiple encoders as the default encoder according to preset rules.
- the preset rules can be priority rules, efficiency rules, power consumption rules, and so on.
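A preset rule such as a power-consumption rule could be sketched as below; the encoder table and its numbers are invented purely for illustration.

```python
# Hypothetical encoders within one category, with made-up metrics.
ENCODERS_IN_CATEGORY = [
    {"name": "first_encoder", "power_mw": 12, "priority": 2},
    {"name": "third_encoder", "power_mw": 20, "priority": 1},
]

def default_encoder(rule="power"):
    """Pick the category's default encoder according to a preset rule."""
    if rule == "power":       # lowest power consumption wins
        return min(ENCODERS_IN_CATEGORY, key=lambda e: e["power_mw"])
    if rule == "priority":    # highest priority wins
        return max(ENCODERS_IN_CATEGORY, key=lambda e: e["priority"])
    raise ValueError("unknown preset rule: " + rule)
```

Under either rule here the first encoder would be chosen, matching the scenario where its power consumption is lower and its priority higher than the third encoder's.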
- the method further includes: when the second parameter information satisfies the second condition, the electronic device encodes the audio data into third encoded audio data through the first encoder in the first category, and sends the third encoded audio data to the audio playback device.
- when the electronic device and the audio playback device support only one category of codec, and the parameter information of the audio data changes from the first parameter information to the second parameter information with the second parameter information satisfying the second condition, the electronic device cannot switch the codec and still uses the default codec in the first category to transmit audio data with the audio playback device.
- when the electronic device does not receive the identifier of the second category sent by the audio playback device, or the number of encoders classified into the second category on the electronic device is 0, the codec category common to the electronic device and the audio playback device includes only the first category.
- the encoders in the first category include only the first encoder, and the decoders in the first category include only the first decoder; when the first parameter information satisfies the first condition, the electronic device encodes the audio data into first encoded audio data according to the first encoder in the first category, and sends the first encoded audio data to the audio playback device.
- the codec category supported by both the electronic device and the audio playback device includes only the first category, and when the first category includes only one encoder, the electronic device uses that encoder as the default encoder.
- the audio data is encoded into the first encoded audio data according to the default encoder in the first category, and then the electronic device sends the first encoded audio data to the audio playback device.
- the encoder in the first category further includes a third encoder
- the decoder in the first category further includes a third decoder
- the electronic device encodes the audio data into the first encoded audio data according to the first encoder in the first category, and sends the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder; alternatively, the priority of the first encoder is higher than that of the third encoder.
- the codec category supported by both the electronic device and the audio playback device only includes the first category.
- the electronic device will determine one encoder from the multiple encoders as the default encoder of the first category according to preset rules.
- the preset rules can be priority rules, efficiency rules, power consumption rules, and so on.
- the codecs in the first category are high-definition sound quality codecs and the codecs in the second category are standard sound quality codecs;
- alternatively, the codecs in the first category are standard sound quality codecs and the codecs in the second category are high-definition sound quality codecs.
- the method further includes: the electronic device classifies the first encoder into the first category based on the parameter information of the first encoder and the codec classification standard, and classifies the second encoder into the second category based on the parameter information of the second encoder and the codec classification standard; wherein the parameter information of the first encoder and the parameter information of the second encoder include one or more of the sampling rate, code rate, quantization bit depth, number of channels, and audio stream format; the codec classification standard includes the mapping relationship between codec categories and codec parameter information. It should be noted that the parameter types in the parameter information of the first encoder and in the parameter information of the second encoder are the same.
- the sampling rate of the codecs in the first category is greater than or equal to the target sampling rate, and the sampling rate of the codecs in the second category is less than the target sampling rate; and/or, the code rate of the codecs in the first category is greater than or equal to the target code rate, and the code rate of the codecs in the second category is less than the target code rate; and/or, the number of channels of the codecs in the first category is greater than or equal to the target number of channels, and the number of channels of the codecs in the second category is less than the target number of channels; and/or, the quantization bit depth of the codecs in the first category is greater than or equal to the target quantization bit depth, and the quantization bit depth of the codecs in the second category is less than the target quantization bit depth; and/or, the audio stream format of the codecs in the first category is the target audio stream format, and the audio stream format of the codecs in the second category is not the target audio stream format.
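The classification standard can be read as a threshold table mapping codec parameter values to a category. The target values below are assumptions chosen only to make the sketch runnable; the application does not fix concrete numbers.

```python
# Hypothetical classification targets (illustrative only).
TARGET = {"sampling_rate": 96000, "code_rate": 900,
          "quantization_bit_depth": 24, "channels": 2}

def classify_codec(info):
    """First category if every listed parameter reaches its target,
    otherwise second category."""
    reaches_all = all(info.get(k, 0) >= v for k, v in TARGET.items())
    return "first" if reaches_all else "second"
```

Both sides apply the same standard, so an encoder on the electronic device and a decoder on the playback device with matching parameters land in the same category without any extra coordination.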
- the parameter types in the first parameter information, in the parameter information of the first encoder, in the parameter information of the first decoder, in the second parameter information, in the parameter information of the second encoder, and in the parameter information of the second decoder are the same; that the first parameter information satisfies the first condition and the second parameter information satisfies the second condition specifically includes: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or, the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or, the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or, the number of channels in the first parameter information is greater than or equal to the target number of channels, and the number of channels in the second parameter information is less than the target number of channels.
- the present application provides an electronic device, comprising one or more processors, one or more memories, and one or more encoders; the one or more memories and the one or more encoders are coupled to the one or more processors; the one or more memories are used to store computer program code, the computer program code includes computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform the codec negotiation and switching method in any of the possible implementations of the second aspect.
- the present application provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and the computer-executable instructions, when invoked by a computer, are used to cause the computer to execute the codec negotiation and switching method provided in any one of the possible implementations of the second aspect above.
- the present application provides a computer program product comprising instructions which, when the computer program product is run on a computer, cause the computer to execute the codec negotiation and switching method provided in any of the possible implementations of the second aspect above.
- FIG. 1 is a schematic diagram of a process of transmitting audio data between an electronic device and an audio playback device according to an embodiment of the present application
- FIG. 2 is a schematic diagram of a system provided by an embodiment of the present application.
- FIG. 3 is a schematic diagram of a system for networking transmission provided by an embodiment of the present application.
- FIG. 4 is a schematic diagram of another process of transmitting audio data between the electronic device 100 and the audio playback device 200 according to an embodiment of the present application;
- FIG. 5 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present application.
- FIG. 6 is a software structural block diagram of an electronic device 100 (eg, a mobile phone) provided by an embodiment of the present application;
- FIG. 7 is a schematic diagram of the hardware structure of an audio playback device 200 provided by an embodiment of the present application.
- FIG. 8 is a schematic diagram of a codec category that the electronic device 100 and the audio playback device 200 negotiate and share in common according to an embodiment of the present application;
- 9A-9C are UI diagrams of establishing a communication connection between a group of electronic devices 100 and an audio playback device 200 through Bluetooth provided by an embodiment of the application;
- 10A to 10D are another set of UI diagrams provided by this embodiment of the present application.
- the terms "first" and "second" are used for descriptive purposes only and should not be construed as indicating or implying relative importance or implying the number of indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
- the term "user interface (UI)" in the description, claims, and drawings of this application is a medium interface for interaction and information exchange between an application program or an operating system and a user; it realizes the conversion between the internal form of information and a form acceptable to the user.
- the user interface of an application is source code written in a specific computer language, such as Java or extensible markup language (XML).
- the interface source code is parsed and rendered on the terminal device, and finally presented as content that the user can recognize.
- controls, also known as widgets, are the basic elements of the user interface. Typical controls include toolbars, menu bars, input boxes, buttons, scroll bars, images, and text.
- the attributes and content of controls in the interface are defined by tags or nodes.
- XML specifies the controls contained in the interface through nodes such as <Textview>, <ImgView>, and <VideoView>.
- a node corresponds to a control or property in the interface, and the node is presented as user-visible content after parsing and rendering.
- applications, such as hybrid applications, often contain web pages in their interfaces.
- a web page, also known as a page, can be understood as a special control embedded in an application program interface.
- a web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), JavaScript (JS), etc.
- the source code of the web page can be loaded and displayed as user-identifiable content by a browser or a web page display component similar in function to a browser.
- the specific content contained in a web page is also defined by tags or nodes in the source code of the web page. For example, HTML defines the elements and attributes of web pages through <p>, <img>, <video>, and <canvas>.
- GUI refers to a user interface related to computer operations that is displayed graphically. It can be an interface element such as a window, a control, etc. displayed in the display screen of the electronic device.
- the electronic device sends audio data to the audio playback device, and the audio playback device plays the audio data sent by the electronic device.
- after the electronic device negotiates the codec type with the audio playback device, the electronic device selects a codec supported by both parties to encode the audio data, and sends the encoded audio data to the audio playback device.
- FIG. 1 exemplarily shows a schematic diagram of a process of transmitting audio data between an electronic device and an audio playback device.
- the electronic device may be an audio signal source end (Source, SRS), and the audio playback device may be an audio signal sink end (Sink, SNK).
- the electronic device includes an audio data acquisition unit, an audio stream decoding unit, a sound mixing rendering unit, a wireless audio coding unit, a capability negotiation unit and a wireless transmission unit.
- the audio playback device includes a wireless transmission unit, a wireless audio decoding unit, an audio power amplifier unit, an audio playback unit and a capability negotiation unit.
- the electronic device and the audio playback device negotiate a type of codec supported by both parties for data transmission. Specifically, after the electronic device establishes a communication connection with the audio playback device, the audio playback device sends all the decoder identifiers and the capabilities of all the decoders to the wireless transmission unit on the audio playback device side through its capability negotiation unit. The identifier of a decoder is the number of the decoder; the audio playback device can find the decoder corresponding to a decoder identifier and obtain the capability of that decoder. The wireless transmission unit on the audio playback device side sends all the decoder identifiers and the capabilities of all decoders to the wireless transmission unit on the electronic device side, and the wireless transmission unit on the electronic device side sends them to the capability negotiation unit on the electronic device side.
- the capability negotiation unit on the electronic device side obtains the identifiers of all encoders and the capabilities of all encoders in the electronic device. The identifier of an encoder is the serial number of the encoder; the electronic device can find the encoder corresponding to an encoder identifier and obtain the capability of that encoder.
- the capability negotiation unit on the electronic device side obtains the codecs with one or more capabilities shared by the electronic device and the audio playback device according to the capabilities of all codecs, wherein the capabilities of a codec include the sampling rate values, quantization bit depth values, bit rates, numbers of channels, etc. supported by the codec.
- the electronic device will determine a codec identifier from the codecs of one or more capabilities shared by the electronic device and the audio playback device as the default codec according to factors such as the type of audio being played. After that, the electronic device and the audio playback device will transmit the audio data according to the default codec identifier.
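The capability intersection performed during this negotiation can be sketched as a dictionary intersection keyed by codec identifier; the identifiers and capability fields below are placeholders, not the actual negotiation messages.

```python
def shared_codecs(local_encoders, remote_decoders):
    """Keep only the codecs whose identifiers appear on both sides."""
    return {cid: caps for cid, caps in local_encoders.items()
            if cid in remote_decoders}
```

From this shared set the electronic device would then pick one identifier as the default codec according to the type of audio being played.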
- the capability negotiation unit of the electronic device sends the default encoder identification to the wireless encoding unit.
- the capability negotiation unit of the electronic device sends the default decoder identifier to the wireless transmission unit on the electronic device side, and the wireless transmission unit on the electronic device side sends the default decoder identifier to the wireless transmission unit on the audio playback device side. The wireless transmission unit on the audio playback device side sends the default decoder identifier to the capability negotiation unit of the audio playback device, and the audio playback device sends the default decoder identifier to the wireless audio decoding unit through the capability negotiation unit.
- the electronic device and the audio playback device may also not include a capability negotiation unit.
- the wireless transmission unit in the electronic device and the audio playback device can implement the function of codec capability negotiation.
- the audio data acquisition unit is used to acquire an audio code stream, which can be a network audio code stream acquired by the electronic device in real time, or an audio code stream buffered in the electronic device. After the audio data acquisition unit acquires the audio code stream, it sends the audio code stream to the audio content decoding unit.
- the audio content decoding unit receives the audio code stream sent by the audio data acquisition unit, decodes the audio code stream, and obtains an uncompressed audio code stream. After that, the audio content decoding unit sends the uncompressed audio stream to the audio mixing and rendering unit.
- the audio mixing and rendering unit receives the uncompressed audio code stream sent by the audio content decoding unit and mixes and renders it; the mixed and rendered audio code stream is referred to as audio data.
- mixing is mixing the uncompressed audio stream with audio data carrying ambient color, so that the audio code stream after mixing and rendering has ambient color. It is understandable that the electronic device can provide multiple channels of audio data for mixing and rendering. For example, in the dubbing and narration of a documentary, the dubbers have recorded the audio code stream of the narration; in order to make the audio code stream match the picture of the documentary, the ambient color of the audio code stream needs to be rendered to increase the mysterious atmosphere.
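As a toy illustration of the mixing step, the narration samples can be summed with ambience samples at a reduced gain; the 0.5 gain is an arbitrary choice, not a value from this application.

```python
def mix(narration, ambience, ambience_gain=0.5):
    """Mix an ambience stream into the narration at the given gain."""
    return [n + ambience_gain * a for n, a in zip(narration, ambience)]
```

A real mixer would also handle clipping, resampling, and channel layout, which the rendering step described next adjusts.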
- rendering is the adjustment of the sampling rate, sampling bit depth, and number of channels of the audio data.
- the electronic device may also not include an audio mixing and rendering unit, that is, the electronic device does not need to perform audio mixing and rendering processing on the audio stream. This application is not limited here.
- the audio mixing and rendering unit sends the audio data to the wireless audio coding unit.
- the wireless audio encoding unit receives the audio data sent by the audio mixing and rendering unit, and encodes the audio data according to the default encoder identifier, and then the wireless audio encoding unit sends the encoded audio data to the wireless transmission unit.
- the wireless transmission unit receives the encoded audio data sent by the wireless audio encoding unit, and sends the encoded audio data to the wireless transmission unit of the audio playback device through the transmission channel between the electronic device and the audio playback device.
- the wireless transmission unit of the audio playback device receives the encoded audio data sent by the wireless transmission unit of the electronic device, and the wireless transmission unit of the audio playback device sends the encoded audio data to the wireless audio decoding unit of the audio playback device.
- the wireless audio decoding unit of the audio playback device receives the encoded audio data sent by the wireless transmission unit, and performs audio decoding on the encoded audio data according to the default decoder identifier to obtain uncompressed audio data.
- the wireless audio decoding unit of the audio playback device sends the uncompressed audio data to the audio power amplifier unit of the audio playback device.
- the audio power amplifier unit of the audio playback device receives the uncompressed audio data, performs digital-to-analog conversion, power amplification and other operations on the uncompressed audio data, and then plays the audio data through the audio playback unit.
- when the electronic device and the audio playback device initially establish a communication connection, they transmit audio data according to the default codec identifier obtained through negotiation. However, when the network becomes poor, or the electronic device uses higher-definition sound quality for transmission, the electronic device needs to switch to an encoder suitable for network transmission or an encoder with higher-definition sound quality. When the electronic device switches the codec, it needs to renegotiate the codec capability with the audio playback device; during this renegotiation, the electronic device suspends sending audio data to the audio playback device, causing audio data interruption and playback stuttering during the codec switch, which affects the user experience.
- the present application provides a codec negotiation and switching method.
- the method includes: before the electronic device establishes a communication connection with the audio playback device, the electronic device and the audio playback device divide one or more codecs into multiple categories according to parameters such as sampling rate, quantization bit depth, bit rate, and number of channels.
- after the electronic device establishes a communication connection with the audio playback device, and before the electronic device sends the audio data to the audio playback device, the audio playback device sends the identifiers of the categories in which the number of decoder identifiers is greater than or equal to 1 to the electronic device.
- the electronic device obtains the common categories from the categories in which the number of decoder identifiers is greater than or equal to 1 and the categories in which the number of encoder identifiers is greater than or equal to 1.
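Deriving the common categories is then an intersection of the two reported lists, which might be sketched as follows (category identifiers are hypothetical).

```python
def common_categories(encoder_categories, decoder_categories):
    """Categories with at least one local encoder and one remote decoder."""
    remote = set(decoder_categories)
    return [c for c in encoder_categories if c in remote]
```

The device-side ordering is preserved here so that any later default selection can follow the electronic device's own preference order.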
- the electronic device selects a default codec under one of the categories for audio data transmission according to user selection, characteristics of playing audio data, whether the audio rendering capability of the electronic device is enabled, application type and other conditions.
- when the user selection, the characteristics of the played audio data, whether the audio rendering capability of the electronic device is enabled, the application type, or other conditions change, the electronic device will reselect the default encoder under another category for encoding and transmission, and will send the identifier of that category to the audio playback device; the audio playback device uses the default decoder under that category to decode and play the audio data.
- when the electronic device needs to switch the encoder, it does not need to renegotiate the codec type with the audio playback device, which solves the problem of audio data interruption and freezing when the electronic device and the audio playback device switch the codec, and improves the user experience.
- this technical solution is applicable to the point-to-point connection of a mobile phone and a wireless headset for wireless audio playback; it is also applicable to a point-to-point connection between a wireless headset and a wearable device such as a tablet/PC/smart watch;
- the audio playback device can also be one or more of speakers, sound bars, and smart TVs.
- FIG. 2 is a schematic diagram of a system provided by an embodiment of the present application.
- the electronic device 100 establishes a communication connection with the audio playback device 200, the electronic device 100 can send audio data to the audio playback device, and the audio playback device plays the audio data.
- the electronic device 100 may be a cell phone, tablet computer, desktop computer, laptop computer, handheld computer, notebook computer, ultra-mobile personal computer (UMPC), netbook, cellular telephone, personal digital assistant (PDA), augmented reality (AR) device, virtual reality (VR) device, artificial intelligence (AI) device, wearable device, in-vehicle device, smart home device, and/or smart city device; the embodiment of the present application does not limit the specific type of the electronic device 100.
- the software system of the electronic device 100 includes, but is not limited to, Huawei's Hongmeng system (HarmonyOS), Linux, or other operating systems.
- the audio playback device 200 refers to a device with audio playback capability, and the audio playback device may be, but is not limited to, headphones, speakers, TVs, AR/VR glasses devices, or wearable devices such as tablets/PCs/smart watches.
- The following embodiments take the electronic device 100 as a mobile phone and the audio playback device 200 as a Bluetooth headset as an example.
- the electronic device 100 and the audio playback device 200 may be connected and communicated through wireless communication technology.
- the wireless communication technologies here include, but are not limited to: wireless local area network (WLAN) technology, Bluetooth, infrared, near field communication (NFC), ZigBee, wireless fidelity direct (Wi-Fi direct, also known as wireless fidelity peer-to-peer, Wi-Fi P2P), and other wireless communication technologies that appear in subsequent development.
- When the audio playback device 200 is connected to the electronic device 100 through Bluetooth technology, the electronic device 100 sends synchronization information (eg, handshake information) to the audio playback device 200 for network synchronization. After successful networking and synchronization, the audio playback device 200 plays audio under the control of the electronic device 100 . That is, the electronic device 100 sends audio data to the audio playback device 200 through the established Bluetooth channel, and the audio playback device 200 plays the audio data sent by the electronic device 100 .
- the schematic diagram of the system shown in FIG. 2 merely exemplarily shows a system.
- the electronic device 100 can also establish communication connections with multiple audio playback devices 200 at the same time.
- the following embodiments of the present application are described by using the electronic device 100 to establish a connection with an audio playback device 200 . It should be noted that this application does not limit the number of audio playback devices 200 .
- the electronic device 100 establishes communication connections with multiple audio playback devices 200 at the same time.
- the embodiments of the present application are described by taking the electronic device 100 as a mobile phone and the plurality of audio playback devices 200 as earphones and speakers as examples. That is, the mobile phone establishes a connection with the headset and the speaker at the same time, and the mobile phone can simultaneously send multimedia content (such as audio data) to the headset and the speaker.
- multimedia content such as audio data
- the connections between the electronic device 100 and the multiple audio playback devices can be regarded as multiple independent systems. That is, the electronic device 100 negotiates with the headset the codec classification categories common to both, and a default encoder identifier and a default decoder identifier under each category. The electronic device 100 likewise negotiates with the speaker the common codec classification categories, and a default encoder identifier and a default decoder identifier under each category. The electronic device 100 and the headset can independently select one of the codec classification categories shared by both parties to transmit audio data.
- the electronic device 100 and the speaker can independently select one of the codec classification categories shared by both parties to transmit audio data.
- the codec classification category selected by the electronic device 100 and the earphone may be the same as or different from that selected by the electronic device 100 and the speaker.
- switching of the codec classification category between the electronic device 100 and the earphone, and between the electronic device 100 and the speaker, do not affect each other.
- the method for selecting and switching the codec classification between the electronic device 100 and the headset or between the electronic device 100 and the speaker is the same as the method for selecting and switching the codec classification between the electronic device 100 and the audio playback device 200 described in the following embodiments. This will not be repeated here.
- the connection between the electronic device 100 and the multiple audio playback devices is regarded as a complete system.
- the headset sends all of its codec classification categories and the decoder identifiers under each category to the electronic device 100, and the speaker likewise sends all of its codec classification categories and the decoder identifiers under each category to the electronic device 100 .
- the electronic device 100 acquires all the codec classification categories in the electronic device 100 and the encoder identifiers under each category.
- the electronic device 100 confirms the common codec classification categories from the codec classification categories in the electronic device 100 and those sent by the earphone and the speaker, that is, the codec classification categories supported by the electronic device 100, the earphone, and the speaker.
- The method by which the electronic device 100 confirms the codec classification categories supported by the electronic device 100, the earphone and the speaker is the same as the method, described in the following embodiments, by which the electronic device 100 and the audio playback device 200 confirm the codec classification categories supported by both, and is not repeated here.
- the electronic device 100 determines a default encoder and a default decoder in the codec classification category supported by the electronic device 100, the earphone and the speaker.
- the electronic device 100 sends the codec classification categories supported by the electronic device 100, the earphone and the speaker, and a default encoder identifier and a default decoder identifier under each category to the earphone and the speaker. It should be noted that the selection and switching of the codec classification by the electronic device 100 is the same as the method for selecting and switching the codec classification by the electronic device 100 and the audio playback device 200 described in the following embodiments, and will not be repeated here.
- the following describes a codec negotiation and switching method provided by this embodiment by taking an example of establishing a connection between the electronic device 100 and an audio playback device 200 .
- the principle of establishing connections between the electronic device 100 and a plurality of audio playback devices 200 is the same as that of establishing a connection between the electronic device 100 and one audio playback device 200 , and details are not described herein again.
- FIG. 4 exemplarily shows a schematic diagram of another process of transmitting audio data between the electronic device 100 and the audio playback device 200 .
- the electronic device 100 includes an audio data acquisition unit, an audio stream decoding unit, a sound mixing rendering unit, a wireless audio encoding unit, a capability negotiation unit, an encoding control unit, and a wireless transmission unit.
- the audio playback device 200 includes a wireless transmission unit, a wireless audio decoding unit, an audio power amplifier unit, an audio playback unit, a capability negotiation unit and a decoding control unit.
- the functions of the audio data acquisition unit, audio stream decoding unit, sound mixing rendering unit, wireless audio encoding unit and wireless transmission unit in the electronic device 100 are the same as those of the corresponding units shown in FIG. 1 , and are not repeated in this application.
- the functions of the wireless transmission unit, wireless audio decoding unit, audio power amplifier unit, and audio playback unit in the audio playback device 200 are the same as those of the corresponding units shown in FIG. 1 , and are not repeated in this application.
- the capability negotiation unit is specifically configured to obtain the codec classification standard, the identifiers of all encoders in the electronic device 100, and the capabilities of all the encoders.
- the encoder capabilities include parameter information such as the sampling rate, quantization bit depth, bit rate and number of channels. According to the codec classification standard and the capabilities of all the encoders in the electronic device 100, all the encoders in the electronic device 100 are divided into multiple categories. An encoder can belong to one or more categories. It should be noted that the codec classification standard is preset in the electronic device 100 .
- the capability negotiation unit is specifically configured to obtain the codec classification standard, the identifiers of all the decoders in the audio playback device 200, and the capabilities of all the decoders. According to the codec classification standard and the capabilities of all the decoders in the audio playback device 200, all the decoders in the audio playback device 200 are divided into multiple categories. A decoder can belong to one or more categories. It should be noted that the codec classification standard is preset in the audio playback device 200 .
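The capability-based classification performed on either device can be sketched as follows. The category names and thresholds are illustrative assumptions (the application leaves the classification standard itself open); the sketch only shows that one codec may land in more than one category.

```python
# Illustrative sketch of dividing codecs into categories by capability.
# Category names and thresholds are assumptions, not from the specification.

def classify(codecs):
    """Map each category name to the codec identifiers whose capabilities satisfy it."""
    categories = {"high_sample_rate": [], "high_bit_depth": [], "multi_channel": []}
    for ident, caps in codecs.items():
        if caps["sampling_rate"] >= 96_000:
            categories["high_sample_rate"].append(ident)
        if caps["bit_depth"] >= 24:
            categories["high_bit_depth"].append(ident)
        if caps["channels"] > 2:
            categories["multi_channel"].append(ident)
    return categories

encoders = {
    "enc_a": {"sampling_rate": 96_000, "bit_depth": 24, "channels": 2},
    "enc_b": {"sampling_rate": 48_000, "bit_depth": 16, "channels": 6},
}
cats = classify(encoders)
```

Here `enc_a` falls into two categories at once, which is the behavior the preceding paragraphs describe.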
- the audio playback device 200 sends the category identifiers whose number of decoder identifiers in the audio playback device 200 is greater than or equal to 1 to the wireless transmission unit in the audio playback device 200 through the capability negotiation unit.
- the wireless transmission unit in the audio playback device 200 sends the identifiers of the categories whose number of decoder identifiers is greater than or equal to 1 to the wireless transmission unit in the electronic device 100 .
- a category in which the number of decoder identifiers is greater than or equal to 1 is a category that contains at least one decoder identifier.
- the wireless transmission unit in the electronic device 100 receives and sends the class identifiers whose number of decoder identifiers is greater than or equal to 1 to the capability negotiation unit in the electronic device 100 .
- the capability negotiation unit in the electronic device 100 receives the identifiers of the categories in which the number of decoder identifiers is greater than or equal to 1.
- the capability negotiation unit in the electronic device 100 also acquires the category identifiers in which the number of encoder identifiers in the electronic device 100 is greater than or equal to 1.
- the capability negotiation unit in the electronic device 100 sends the category identifiers with the number of decoder identifiers greater than or equal to 1 and the category identifiers with the number of encoder identifiers greater than or equal to 1 to the encoding control unit in the electronic device 100.
- the encoding control unit confirms the common category identifiers from the category identifiers with the number of decoder identifiers greater than or equal to 1 and the category identifiers with the number of encoder identifiers greater than or equal to 1, and determines one default encoder for each common category.
- how the encoding control unit negotiates a default encoder in each common category of the electronic device 100 and the audio playback device 200 will be described in detail in subsequent embodiments, which is not limited in this application.
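One possible form of this negotiation is sketched below. Since the application does not limit how the default is chosen, the tie-break rule here (taking the first listed identifier on each side) is purely an assumption for illustration; the structural point is that only categories with at least one encoder on the phone and at least one decoder on the playback device survive.

```python
# Sketch of confirming the common categories and picking one default codec per
# category. The first-listed tie-break is an illustrative assumption.

def negotiate(encoder_categories, decoder_categories):
    common = {}
    for category, encoder_ids in encoder_categories.items():
        decoder_ids = decoder_categories.get(category, [])
        if encoder_ids and decoder_ids:          # both sides non-empty
            common[category] = {
                "default_encoder": encoder_ids[0],
                "default_decoder": decoder_ids[0],
            }
    return common

common = negotiate(
    {"lossless": ["enc_x"], "low_latency": ["enc_y"], "spatial": []},
    {"lossless": ["dec_x"], "spatial": ["dec_z"]},
)
```

In this example only "lossless" is common: "low_latency" has no decoder on the playback side, and "spatial" has no encoder on the phone side.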
- After the encoding control unit confirms the common categories and the default encoder under each common category, the encoding control unit selects an appropriate category (eg, the first category) from the common categories according to the application type, the characteristics of the audio to be played (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on, and transmits audio data using the default encoder in that category. How the encoding control unit selects the first category from the common categories according to these conditions will be described in detail in subsequent embodiments, which is not limited in this application.
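The selection step can be sketched as follows. Because the application explicitly does not limit how the first category is chosen, the priority order and the category and condition names below are illustrative assumptions only.

```python
# Illustrative selection of the "first category" from the common categories
# based on runtime conditions; the priority order is an assumption.

def select_category(common_categories, app_type, rendering_on, link_quality):
    if app_type == "game" and "low_latency" in common_categories:
        return "low_latency"                 # latency dominates for games
    if link_quality == "poor" and "standard" in common_categories:
        return "standard"                    # robust low-bitrate fallback
    if rendering_on and "spatial" in common_categories:
        return "spatial"                     # rendered multi-channel output
    if "lossless" in common_categories:
        return "lossless"                    # best quality when nothing else applies
    return next(iter(common_categories))     # any remaining common category

choice = select_category({"lossless", "low_latency"}, "music", False, "good")
```

With a good link and a music application, the sketch falls through to the quality-first branch and picks "lossless".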
- the encoding control unit sends the identifier of the first category to the wireless audio encoding unit, and the wireless audio encoding unit encodes the audio data with the encoder corresponding to the default encoder identifier in the first category.
- the encoding control unit also sends the identifier of the first category to the wireless transmission unit in the electronic device 100,
- and the electronic device 100 sends the identifier of the first category to the wireless transmission unit of the audio playback device 200 through its wireless transmission unit.
- the wireless transmission unit of the audio playback device 200 sends the identifier of the first category to the capability negotiation unit in the audio playback device 200,
- the capability negotiation unit of the audio playback device 200 sends the identifier of the first category to the decoding control unit in the audio playback device 200,
- and the decoding control unit in the audio playback device 200 sends the identifier of the first category to the wireless audio decoding unit.
- the wireless audio decoding unit decodes the encoded audio data sent by the electronic device 100 with the decoder corresponding to the default decoder identifier in the first category.
- After the encoding control unit selects an appropriate category (such as the first category) from the common categories and audio data is transmitted using the default codec in the first category, factors such as a change in the application type, a change in the playback audio characteristics (sampling rate, quantization bit depth, number of channels), the audio rendering capability of the electronic device being turned on, or a change in the network conditions of the channel may cause the electronic device 100 to re-select another category through the encoding control unit and to inform the decoding control unit in the audio playback device 200 of the identifier of that category.
- the electronic device 100 and the audio playback device 200 use the default codec in another category to transmit audio data.
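The runtime switch in FIG. 4 can be simulated in miniature as follows. Class, unit and codec names are illustrative: the point is that only the category identifier crosses the link, and each side swaps in the default codec it pre-agreed for that category during negotiation.

```python
# Minimal simulation of a runtime category switch between the phone's encoding
# control unit and the playback device's decoding control unit. All names are
# illustrative assumptions, not from the specification.

class PlaybackDevice:
    def __init__(self, default_decoders):
        self.default_decoders = default_decoders   # agreed during negotiation
        self.active_decoder = None

    def on_category_id(self, category):            # decoding control unit
        self.active_decoder = self.default_decoders[category]

class Phone:
    def __init__(self, default_encoders, link):
        self.default_encoders = default_encoders   # agreed during negotiation
        self.link = link
        self.active_encoder = None

    def reselect(self, category):                  # encoding control unit
        self.active_encoder = self.default_encoders[category]
        self.link.on_category_id(category)         # only the identifier is sent

headset = PlaybackDevice({"standard": "dec_sbc", "lossless": "dec_l"})
phone = Phone({"standard": "enc_sbc", "lossless": "enc_l"}, headset)
phone.reselect("lossless")
```

No capability exchange happens inside `reselect`, which mirrors the claim that switching requires no codec renegotiation.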
- FIG. 5 exemplarily shows a schematic structural diagram of the electronic device 100 .
- The following takes the electronic device 100 as a mobile phone as an example. It should be understood that the electronic device 100 shown in FIG. 5 is only an example; the electronic device 100 may have more or fewer components than those shown in FIG. 5, two or more components may be combined, or different component configurations are possible.
- the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
- the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2,
- a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and so on.
- the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
- the structures illustrated in the embodiments of the present invention do not constitute a specific limitation on the electronic device 100 .
- the electronic device 100 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
- the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
- the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
- the controller may be the nerve center and command center of the electronic device 100 .
- the controller can generate an operation control signal according to the instruction operation code and timing signal, and complete the control of fetching and executing instructions.
- a memory may also be provided in the processor 110 for storing instructions and data.
- the memory in processor 110 is cache memory. This memory may hold instructions or data that have just been used or recycled by the processor 110 . If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the latency of the processor 110 is reduced, thereby increasing the efficiency of the system.
- the processor 110 may include one or more interfaces.
- the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
- the I2C interface is a bidirectional synchronous serial bus that includes a serial data line (SDA) and a serial clock line (SCL).
- the processor 110 may contain multiple sets of I2C buses.
- the processor 110 can be respectively coupled to the touch sensor 180K, the charger, the flash, the camera 193 and the like through different I2C bus interfaces.
- the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate with each other through the I2C bus interface, so as to realize the touch function of the electronic device 100 .
- the I2S interface can be used for audio communication.
- the processor 110 may contain multiple sets of I2S buses.
- the processor 110 may be coupled with the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170 .
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
- the PCM interface can also be used for audio communications, sampling, quantizing and encoding analog signals.
- the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
- the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
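As a generic illustration of the sampling, quantizing and encoding steps that PCM performs on an analog signal (not the interface's actual implementation), a 16-bit linear PCM encoder might look like:

```python
# Generic 16-bit linear PCM sketch: sample an analog signal at a fixed rate
# and quantize each sample to a signed integer. Illustrative only.
import math

def pcm_encode(signal, sample_rate, duration_s, bit_depth=16):
    """Sample signal(t) in [-1, 1] and quantize to signed integers."""
    max_level = 2 ** (bit_depth - 1) - 1         # 32767 for 16-bit
    n = int(sample_rate * duration_s)
    return [round(signal(i / sample_rate) * max_level) for i in range(n)]

# 1 kHz sine sampled at 8 kHz for one millisecond -> 8 samples
samples = pcm_encode(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001)
```

The sampling rate, quantization bit depth and channel count in this sketch are exactly the parameters the capability negotiation described earlier exchanges for each codec.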
- the UART interface is a universal serial data bus used for asynchronous communication.
- the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
- a UART interface is typically used to connect the processor 110 with the wireless communication module 160 .
- the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
- the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
- the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display screen 194 and the camera 193 .
- MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
- the processor 110 communicates with the camera 193 through a CSI interface, so as to realize the photographing function of the electronic device 100 .
- the processor 110 communicates with the display screen 194 through the DSI interface to implement the display function of the electronic device 100 .
- the GPIO interface can be configured by software.
- the GPIO interface can be configured as a control signal or as a data signal.
- the GPIO interface may be used to connect the processor 110 with the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
- the GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
- the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
- the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
- the interface can also be used to connect other electronic devices, such as AR devices.
- the interface connection relationship between the modules illustrated in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100.
- the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
- the charging management module 140 is used to receive charging input from the charger.
- the charger may be a wireless charger or a wired charger.
- the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
- the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 charges the battery 142 , it can also supply power to the electronic device through the power management module 141 .
- the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
- the power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
- the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, battery health status (leakage, impedance).
- the power management module 141 may also be provided in the processor 110 .
- the power management module 141 and the charging management module 140 may also be provided in the same device.
- the wireless communication function of the electronic device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modulation and demodulation processor, the baseband processor, and the like.
- Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
- Each antenna in electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
- the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
- the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G etc. applied on the electronic device 100 .
- the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like.
- the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modulation and demodulation processor for demodulation.
- the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor, and then turn it into an electromagnetic wave for radiation through the antenna 1 .
- at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
- at least part of the functional modules of the mobile communication module 150 may be provided in the same device as at least part of the modules of the processor 110 .
- the modem processor may include a modulator and a demodulator.
- the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
- the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
- the low frequency baseband signal is processed by the baseband processor and passed to the application processor.
- the application processor outputs sound signals through audio devices (not limited to the speaker 170A, the receiver 170B, etc.), or displays images or videos through the display screen 194 .
- the modem processor may be a stand-alone device.
- the modem processor may be independent of the processor 110, and may be provided in the same device as the mobile communication module 150 or other functional modules.
- the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
- the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
- the wireless communication module 160 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
- the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
- the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
- the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
- the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
- the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
- the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor.
- the GPU is used to perform mathematical and geometric calculations for graphics rendering.
- Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
- Display screen 194 is used to display images, videos, and the like.
- Display screen 194 includes a display panel.
- the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
- the electronic device 100 may include one or N display screens 194 , where N is a positive integer greater than one.
- the electronic device 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
- the ISP is used to process the data fed back by the camera 193 .
- When the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
- ISP can also perform algorithm optimization on image noise, brightness, and skin tone.
- ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
- the ISP may be provided in the camera 193 .
- the camera 193 is used to capture still images or video.
- An optical image of the object is generated through the lens and projected onto the photosensitive element.
- the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
- the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
- the ISP outputs the digital image signal to the DSP for processing.
- DSP converts digital image signals into standard RGB, YUV and other formats of image signals.
- the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
- a digital signal processor is used to process digital signals, in addition to processing digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy and so on.
- Video codecs are used to compress or decompress digital video.
- the electronic device 100 may support one or more video codecs.
- the electronic device 100 can play or record videos of various encoding formats, such as: Moving Picture Experts Group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
- the NPU is a neural-network (NN) computing processor.
- Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
- the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
- The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example, to save files such as music and videos in the external memory card.
- Internal memory 121 may be used to store computer executable program code, which includes instructions.
- the processor 110 executes various functional applications and data processing of the electronic device 100 by executing the instructions stored in the internal memory 121 .
- the internal memory 121 may include a storage program area and a storage data area.
- the storage program area can store an operating system, an application program required for at least one function (such as a sound playback function, an image playback function, etc.), and the like.
- the storage data area may store data (such as audio data, phone book, etc.) created during the use of the electronic device 100 and the like.
- the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (UFS), and the like.
- the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playback, recording, etc.
- the audio module 170 is used for converting digital audio information into analog audio signal output, and also for converting analog audio input into digital audio signal. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
- Speaker 170A, also referred to as a "speaker", is used to convert audio electrical signals into sound signals.
- the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
- The receiver 170B, also referred to as an "earpiece", is used to convert audio electrical signals into sound signals.
- the voice can be answered by placing the receiver 170B close to the human ear.
- The microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
- When making a sound, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
- the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In other embodiments, the electronic device 100 may further be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
- the earphone jack 170D is used to connect wired earphones.
- the earphone interface 170D may be the USB interface 130, or may be a 3.5mm open mobile terminal platform (OMTP) standard interface, a cellular telecommunications industry association of the USA (CTIA) standard interface.
- the pressure sensor 180A is used to sense pressure signals, and can convert the pressure signals into electrical signals.
- the pressure sensor 180A may be provided on the display screen 194 .
- the capacitive pressure sensor may be comprised of at least two parallel plates of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
- the electronic device 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
- the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor 180A.
- touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, the instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, the instruction to create a new short message is executed.
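The pressure-threshold dispatch described above can be sketched as a simple comparison against a configured threshold. This is an illustrative sketch only; the threshold value, intensity scale, and operation names below are assumptions, not values from the patent.

```python
# Illustrative sketch of the pressure-threshold logic described above.
# The threshold value and operation names are hypothetical.
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized touch intensity in [0, 1]

def dispatch_touch_on_message_icon(intensity: float) -> str:
    """Map a touch on the short-message icon to an instruction by intensity."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"       # light press: view the message
    return "create_new_short_message"     # firm press: create a new message

print(dispatch_touch_on_message_icon(0.2))  # prints "view_short_message"
print(dispatch_touch_on_message_icon(0.8))  # prints "create_new_short_message"
```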
- the gyro sensor 180B may be used to determine the motion attitude of the electronic device 100 .
- The angular velocity of the electronic device 100 about three axes (i.e., the x, y, and z axes) can be determined through the gyro sensor 180B.
- the gyro sensor 180B can be used for image stabilization.
- the gyroscope sensor 180B detects the shaking angle of the electronic device 100, calculates the distance to be compensated by the lens module according to the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse motion to achieve anti-shake.
- the gyro sensor 180B can also be used for navigation and somatosensory game scenarios.
- the air pressure sensor 180C is used to measure air pressure.
- the electronic device 100 calculates the altitude through the air pressure value measured by the air pressure sensor 180C to assist in positioning and navigation.
- the magnetic sensor 180D includes a Hall sensor.
- the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
- the electronic device 100 can detect the opening and closing of the flip cover according to the magnetic sensor 180D. Further, features such as automatic unlocking upon flipping open can be set according to the detected open or closed state of the holster or the flip cover.
- the acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 100 in various directions (generally three axes).
- When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. The acceleration sensor 180E can also be used to identify the posture of the electronic device, and can be applied in scenarios such as landscape/portrait switching and pedometers.
- the electronic device 100 can measure the distance through infrared or laser. In some embodiments, when shooting a scene, the electronic device 100 can use the distance sensor 180F to measure the distance to achieve fast focusing.
- Proximity light sensor 180G may include, for example, light emitting diodes (LEDs) and light detectors, such as photodiodes.
- the light emitting diodes may be infrared light emitting diodes.
- the electronic device 100 emits infrared light to the outside through the light emitting diode.
- Electronic device 100 uses photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100 . When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100 .
- the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
- The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
- the ambient light sensor 180L is used to sense ambient light brightness.
- the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived ambient light brightness.
- the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
- the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket, so as to prevent accidental touch.
- the fingerprint sensor 180H is used to collect fingerprints.
- the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, accessing application locks, taking pictures with fingerprints, answering incoming calls with fingerprints, and the like.
- the temperature sensor 180J is used to detect the temperature.
- the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold value, the electronic device 100 reduces the performance of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
- In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid an abnormal shutdown of the electronic device 100 caused by the low temperature.
- In some other embodiments, when the temperature is lower than still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
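The temperature-processing strategy above amounts to comparing the reported temperature against several thresholds and selecting the corresponding actions. A minimal sketch, assuming illustrative threshold values in degrees Celsius and hypothetical action names (none of these values come from the patent):

```python
# Hypothetical sketch of the temperature-processing strategy; the thresholds
# and action names are illustrative assumptions.
HIGH_TEMP_THRESHOLD = 45.0       # above this: throttle the nearby processor
LOW_TEMP_THRESHOLD = 0.0         # below this: heat the battery
VERY_LOW_TEMP_THRESHOLD = -10.0  # below this: also boost battery output voltage

def thermal_actions(temp_c: float) -> list:
    """Return the list of protective actions for a reported temperature."""
    actions = []
    if temp_c > HIGH_TEMP_THRESHOLD:
        actions.append("reduce_processor_performance")
    if temp_c < LOW_TEMP_THRESHOLD:
        actions.append("heat_battery")
    if temp_c < VERY_LOW_TEMP_THRESHOLD:
        actions.append("boost_battery_output_voltage")
    return actions
```

For example, a reading of -20 °C would trigger both battery heating and voltage boosting, while a normal reading triggers nothing.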
- Touch sensor 180K, also called a "touch panel".
- the touch sensor 180K may be disposed on the display screen 194 , and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
- the touch sensor 180K is used to detect a touch operation on or near it.
- the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
- Visual output related to touch operations may be provided through display screen 194 .
- the touch sensor 180K may also be disposed on the surface of the electronic device 100 , which is different from the location where the display screen 194 is located.
- the bone conduction sensor 180M can acquire vibration signals.
- the bone conduction sensor 180M can acquire the vibration signal of the bone block that vibrates when a person speaks.
- the bone conduction sensor 180M can also be in contact with the human pulse and receive the blood pressure pulse signal.
- the bone conduction sensor 180M can also be disposed in an earphone to form a bone conduction earphone.
- the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibration bone block obtained by the bone conduction sensor 180M, so as to realize the voice function.
- the application processor can analyze the heart rate information based on the blood pressure beat signal obtained by the bone conduction sensor 180M, and realize the function of heart rate detection.
- the keys 190 include a power-on key, a volume key, and the like. Keys 190 may be mechanical keys. It can also be a touch key.
- the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
- Motor 191 can generate vibrating cues.
- the motor 191 can be used for vibrating alerts for incoming calls, and can also be used for touch vibration feedback.
- touch operations acting on different applications can correspond to different vibration feedback effects.
- the motor 191 can also correspond to different vibration feedback effects for touch operations on different areas of the display screen 194 .
- Different application scenarios for example: time reminder, receiving information, alarm clock, games, etc.
- the touch vibration feedback effect can also support customization.
- the indicator 192 can be an indicator light, which can be used to indicate the charging state, the change of the power, and can also be used to indicate a message, a missed call, a notification, and the like.
- the SIM card interface 195 is used to connect a SIM card.
- the SIM card can be contacted and separated from the electronic device 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195 .
- the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
- the SIM card interface 195 can support Nano SIM card, Micro SIM card, SIM card and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the plurality of cards may be the same or different.
- the SIM card interface 195 can also be compatible with different types of SIM cards.
- the SIM card interface 195 is also compatible with external memory cards.
- the electronic device 100 interacts with the network through the SIM card to implement functions such as call and data communication.
- the electronic device 100 employs an eSIM, i.e., an embedded SIM card.
- the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
- the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
- The embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100.
- FIG. 6 is a block diagram of a software structure of an electronic device 100 (eg, a mobile phone) according to an embodiment of the present invention.
- the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate with each other through software interfaces.
- the Android system is divided into four layers, which are, from top to bottom: the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
- the application layer can include a series of application packages.
- the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message and so on.
- the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
- the application framework layer includes some predefined functions.
- the application framework layer may include window managers, content providers, view systems, telephony managers, resource managers, notification managers, and the like.
- a window manager is used to manage window programs.
- the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, take screenshots, etc.
- Content providers are used to store and retrieve data and make these data accessible to applications.
- the data may include video, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
- the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on. View systems can be used to build applications.
- a display interface can consist of one or more views.
- the display interface including the short message notification icon may include a view for displaying text and a view for displaying pictures.
- the phone manager is used to provide the communication function of the electronic device 100 .
- For example, the management of call statuses (including connected, hung up, etc.).
- the resource manager provides various resources for the application, such as localization strings, icons, pictures, layout files, video files and so on.
- the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear automatically after a brief pause without user interaction. For example, the notification manager is used to notify download completion, message reminders, etc.
- the notification manager can also display notifications in the status bar at the top of the system in the form of graphs or scroll bar text, such as notifications of applications running in the background, and notifications on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt sound is issued, the electronic device vibrates, and the indicator light flashes.
- Android Runtime includes core libraries and a virtual machine. Android runtime is responsible for scheduling and management of the Android system.
- the core library consists of two parts: one is the functions that the Java language needs to call, and the other is the core library of Android.
- the application layer and the application framework layer run in virtual machines.
- the virtual machine executes the java files of the application layer and the application framework layer as binary files.
- the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, safety and exception management, and garbage collection.
- a system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
- the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
- the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
- the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
- the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
- 2D graphics engine is a drawing engine for 2D drawing.
- the kernel layer is the layer between hardware and software.
- the kernel layer contains at least display drivers, camera drivers, audio drivers, and sensor drivers.
- When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
- the kernel layer processes touch operations into raw input events (including touch coordinates, timestamps of touch operations, etc.). Raw input events are stored at the kernel layer.
- the application framework layer obtains the original input event from the kernel layer, and identifies the control corresponding to the input event. Taking the touch operation being a click operation and the corresponding control being the camera application icon as an example: the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer.
- the camera 193 captures still images or video.
- FIG. 7 exemplarily shows a schematic diagram of the hardware structure of the audio playback device 200 .
- FIG. 7 exemplarily shows a schematic structural diagram of an audio playback device 200 (eg, a Bluetooth device) provided by an embodiment of the present application.
- The following takes the audio playback device 200 being a Bluetooth device as an example. It should be understood that the audio playback device 200 shown in FIG. 7 is only an example; the audio playback device 200 may have more or fewer components than those shown in FIG. 7, may combine two or more components, or may have a different component configuration.
- the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
- the audio playback device 200 may include: a processor 201, a memory 202, a Bluetooth communication module 203, an antenna 204, a power switch 205, a USB communication processing module 206, and an audio module 207. Wherein:
- the processor 201 may be used to read and execute computer readable instructions.
- the processor 201 may mainly include a controller, an arithmetic unit, and a register.
- the controller is mainly responsible for instruction decoding, and sends out control signals for the operations corresponding to the instructions.
- the register is mainly responsible for saving the register operands and intermediate operation results temporarily stored during instruction execution.
- the hardware architecture of the processor 201 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
- the processor 201 may be configured to parse a signal received by the Bluetooth communication module 203, such as a pairing mode modification request sent by the terminal 100, and so on.
- the processor 201 may be configured to perform corresponding processing operations according to the parsing result, such as generating a pairing mode modification response, and the like.
- Memory 202 is coupled to processor 201 for storing various software programs and/or sets of instructions.
- memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
- the memory 202 can store operating systems, such as embedded operating systems such as uCOS, VxWorks, RTLinux, and the like.
- Memory 202 may also store communication programs that may be used to communicate with terminal 100, one or more servers, or other devices.
- the Bluetooth communication module 203 may include a Classic Bluetooth (BT) module and a Bluetooth Low Energy (BLE) module.
- the Bluetooth communication module 203 can monitor signals transmitted by other devices (such as the terminal 100), such as probe requests and scan signals, and can send response signals, scan responses, etc., so that other devices (such as the terminal 100) can discover the audio playback device 200, establish a wireless communication connection with it, and communicate with it through Bluetooth.
- the Bluetooth communication module 203 can also transmit signals, such as broadcasting BLE signals, so that other devices (such as the terminal 100) can discover the audio playback device 200, establish a wireless communication connection with it, and communicate with it through Bluetooth.
- the wireless communication function of the audio playback device 200 may be implemented by an antenna 204, a Bluetooth communication module 203, a modem processor, and the like.
- Antenna 204 may be used to transmit and receive electromagnetic wave signals. Each antenna in audio playback device 200 may be used to cover a single or multiple communication frequency bands.
- the Bluetooth communication module 203 may have one or more antennas.
- the power switch 205 may be used to control the power supplied by the power source to the audio playback device 200 .
- the USB communication processing module 206 may be used to communicate with other devices through a USB interface (not shown). In some embodiments, the audio playback device 200 may also not include the USB communication processing module 206 .
- the audio module 207 can be used to output audio signals through the audio output interface, so that the audio playback device 200 can support audio playback.
- the audio module can also be used to receive audio data through the audio input interface.
- the audio playback device 200 may be a media playback device such as a Bluetooth headset.
- the audio playback device 200 may further include a display screen (not shown), wherein the display screen may be used to display images, prompt information, and the like.
- the display screen can be a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, an active-matrix organic light-emitting diode (AMOLED) display, a flexible light-emitting diode (FLED) display, a quantum dot light-emitting diode (QLED) display, and so on.
- the audio playback device 200 may also include a serial interface such as an RS-232 interface.
- the serial interface can be connected to other devices, such as audio external devices such as speakers, so that the audio playback device 200 and the audio external device can cooperate to play audio and video.
- the structure shown in FIG. 7 does not constitute a specific limitation on the audio playback device 200 .
- the audio playback device 200 may include more or less components than shown, or combine some components, or separate some components, or arrange different components.
- the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
- the electronic device 100 classifies all encoders in the electronic device 100 into multiple categories according to the codec classification standard, and the audio playback device 200 classifies all decoders in the audio playback device 200 into multiple categories according to the codec classification standard.
- the electronic device 100 and the audio playback device 200 may also classify one or more codecs of the electronic device 100 and the audio playback device 200 into multiple categories after establishing a communication connection. This application does not limit the time at which the electronic device 100 and the audio playback device 200 classify the one or more codecs.
- The following describes the codec classification standard in detail, and how the electronic device 100 and the audio playback device 200 classify the codecs in the electronic device 100 and the audio playback device 200 into multiple categories according to the codec classification standard.
- the codec classification standard can be obtained according to one or a combination of two or more parameters, such as sampling rate, quantization bit depth, code rate, and number of channels.
- the codec classification standard may be obtained according to one parameter among the sampling rate, quantization bit depth, code rate, number of channels, etc.; for example, the one parameter may be the sampling rate. The codec classification standard may also be obtained according to two parameters; for example, the two parameters may be the sampling rate and the quantization bit depth. The codec classification standard may also be obtained according to three parameters; for example, the three parameters may be the sampling rate, the quantization bit depth, and the code rate. The codec classification standard may also be obtained according to four parameters; for example, the four parameters may be the sampling rate, the quantization bit depth, the code rate, and the number of channels.
- the codec classification standard may also refer to other parameters, such as audio formats, etc., which are not limited in this application.
- the codec classification criteria are pre-existing in the electronic device 100 and the audio playback device 200 .
- the sampling rate is the number of times the sound signal is sampled in a unit time (for example, one second). The higher the sampling rate, the more realistic the restoration of the sound and the better the sound quality.
- Quantization bit depth is the quantization precision, which determines the dynamic range of digital audio. During sampling, a higher quantization bit depth can provide more possible amplitude values, resulting in a larger vibration range, a higher signal-to-noise ratio, and improved fidelity.
- The code rate refers to the bit rate, i.e., the number of bits transmitted per unit time, in bits per second or kilobits per second. The higher the bit rate, the more audio data is transmitted per second and the clearer the sound quality.
- the number of channels is the number of speakers that support different sounds.
- the number of channels includes mono, dual, 2.1, 5.1, 7.1 and so on.
- the format of audio data is generally the PCM data format.
- the PCM (pulse code modulation) data format is an uncompressed audio data stream; it is standard digital audio data converted from an analog signal through sampling, quantization, and encoding.
- the format of the audio data also includes MP3 data format, MPEG data format, MPEG-4 data format, WAVE data format, CD data format and the like.
- the codec classification standard classifies codecs according to the value ranges of one or more parameters.
- the sampling-rate range may be divided into multiple segments according to the lowest and highest sampling rates of the codecs frequently used in the device, with each segment covering a different range of sampling-rate values. Specifically, when the sampling rate of a codec is greater than or equal to the first sampling rate, the codec is classified into category one; when the sampling rate of a codec is less than the first sampling rate and greater than or equal to the second sampling rate, the codec is classified into category two; when the sampling rate of a codec is less than the second sampling rate, the codec is classified into category three. Here, the first sampling rate is greater than the second sampling rate.
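The single-parameter rule above can be sketched in a few lines of code. This is a hedged illustration only: the threshold values `FIRST_RATE` and `SECOND_RATE` are assumptions for the example, not values fixed by this application.

```python
# Illustrative thresholds (assumed): first sampling rate 48 kHz, second 24 kHz.
FIRST_RATE = 48_000   # first sampling rate, in Hz
SECOND_RATE = 24_000  # second sampling rate, in Hz


def classify_by_sampling_rate(rate_hz: int) -> int:
    """Map one sampling-rate value to a category (1, 2, or 3)."""
    if rate_hz >= FIRST_RATE:
        return 1          # >= first sampling rate: category one
    if rate_hz >= SECOND_RATE:
        return 2          # between the two thresholds: category two
    return 3              # below the second sampling rate: category three


print(classify_by_sampling_rate(96_000))  # 1
print(classify_by_sampling_rate(32_000))  # 2
print(classify_by_sampling_rate(8_000))   # 3
```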
- the sampling rate and the code rate may be divided into multiple segments according to the lowest and highest sampling rates and the lowest and highest code rates of the codecs frequently used in the device.
- when the sampling rate of a codec is greater than or equal to the first sampling rate and its code rate is greater than or equal to the first code rate, the codec is classified into category one; when the sampling rate of a codec is less than the first sampling rate and greater than or equal to the second sampling rate, and its code rate is less than the first code rate and greater than or equal to the second code rate, the codec is classified into category two; when the sampling rate of a codec is less than the second sampling rate and its code rate is less than the second code rate, the codec is classified into category three. Here, the first sampling rate is greater than the second sampling rate, and the first code rate is greater than the second code rate.
- the sampling rate, the code rate, and the quantization bit depth may be divided into multiple segments according to the lowest and highest sampling rates, the lowest and highest code rates, and the lowest and highest quantization bit depths of the codecs frequently used in the device.
- when the sampling rate of a codec is greater than or equal to the first sampling rate, its code rate is greater than or equal to the first code rate, and its quantization bit depth is greater than or equal to the first quantization bit depth, the codec is classified into category one; when the sampling rate of a codec is less than the first sampling rate and greater than or equal to the second sampling rate, its code rate is less than the first code rate and greater than or equal to the second code rate, and its quantization bit depth is less than the first quantization bit depth and greater than or equal to the second quantization bit depth, the codec is classified into category two; when the sampling rate of a codec is less than the second sampling rate, its code rate is less than the second code rate, and its quantization bit depth is less than the second quantization bit depth, the codec is classified into category three.
- the sampling rate, the code rate, the quantization bit depth, and the number of channels may be divided into multiple segments according to the lowest and highest sampling rates, the lowest and highest code rates, the lowest and highest quantization bit depths, and the most commonly used lowest number of channels (for example, two channels) of the codecs frequently used in the device.
- when the sampling rate of a codec is greater than or equal to the first sampling rate, its code rate is greater than or equal to the first code rate, its quantization bit depth is greater than or equal to the first quantization bit depth, and its number of channels is greater than or equal to the first number of channels, the codec is classified into category one; when the sampling rate of a codec is less than the first sampling rate and greater than or equal to the second sampling rate, its code rate is less than the first code rate and greater than or equal to the second code rate, its quantization bit depth is less than the first quantization bit depth and greater than or equal to the second quantization bit depth, and its number of channels is greater than or equal to the first number of channels, the codec is classified into category two; when the sampling rate of a codec is less than the second sampling rate, its code rate is less than the second code rate, its quantization bit depth is less than the second quantization bit depth, and its number of channels is greater than or equal to the first number of channels, the codec is classified into category three.
- the first sampling rate is greater than the second sampling rate, the first code rate is greater than the second code rate, and the first quantization bit depth is greater than the second quantization bit depth.
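The four-parameter rule can be sketched as one range test per category. This is a hedged illustration: all threshold values below (sampling rates, code rates, quantization depths, channel count) are assumptions chosen for the example, not values specified by this application.

```python
# Assumed first and second thresholds for each parameter (illustrative only).
FIRST = {"rate": 48_000, "bitrate": 900, "depth": 24, "channels": 2}
SECOND = {"rate": 24_000, "bitrate": 300, "depth": 16}


def classify(rate, bitrate, depth, channels):
    """Return the category (1-3), or None when no category's ranges all match."""
    if channels < FIRST["channels"]:
        return None  # every category requires at least the first channel count
    if rate >= FIRST["rate"] and bitrate >= FIRST["bitrate"] and depth >= FIRST["depth"]:
        return 1
    if (SECOND["rate"] <= rate < FIRST["rate"]
            and SECOND["bitrate"] <= bitrate < FIRST["bitrate"]
            and SECOND["depth"] <= depth < FIRST["depth"]):
        return 2
    if rate < SECOND["rate"] and bitrate < SECOND["bitrate"] and depth < SECOND["depth"]:
        return 3
    return None


print(classify(96_000, 1200, 32, 2))  # 1
print(classify(32_000, 600, 16, 2))   # 2
print(classify(8_000, 200, 8, 2))     # 3
```

Note that because every parameter must fall in the same segment, a codec whose parameters straddle two segments matches no category under this strict reading; the application leaves such details to the chosen classification standard.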
- codec classification standards can be obtained by setting according to different requirements, which will not be listed one by one in this application.
- the electronic device 100 and the audio playback device 200 classify the encoder in the electronic device 100 and the decoder in the audio playback device 200 into multiple categories according to the codec classification standard.
- the electronic device 100 classifies all the encoders into multiple classes, and the audio playback device 200 classifies all the decoders into multiple classes.
- the electronic device 100 will obtain the classification standard of the codec.
- the classification standard of the codec can be used to classify the codec into multiple categories according to the information of one or more parameters of the codec.
- the electronic device 100 will acquire the values of one or more parameters corresponding to all encoders in the electronic device 100 .
- when the values of one or more parameters of an encoder fall within the value ranges adopted by a category of the codec classification standard, the electronic device 100 classifies the encoder under that category.
- the electronic device 100 divides all encoders into one or more categories according to the above method. It should be noted that an encoder can be divided into multiple categories.
- the electronic device 100 records the identifier of the corresponding encoder under each category. For example, when the category of the codec classification standard is category 1, the identifiers of the encoders corresponding to category 1 include encoder 1 and encoder 2.
- the audio playback device 200 divides all the decoders into multiple categories, which is consistent with the method for the electronic device 100 to divide all the encoders into multiple categories, and details are not described herein again.
- the audio playback device 200 records the identifier of the corresponding decoder under each category. For example, when the category of the codec classification standard is category 1, the identifiers of decoders corresponding to category 1 include decoder 1 and decoder 2.
- the codec classification criteria can be obtained according to the sampling rate.
- Table 1 exemplarily shows the codec classification standards obtained according to the sampling rate.
- when the value of one or more sampling rates supported by a codec is greater than or equal to the first sampling rate, the codec belongs to category one; when the value of one or more sampling rates supported by a codec is less than the first sampling rate and greater than or equal to the second sampling rate, the codec belongs to category two; when the value of one or more sampling rates supported by a codec is less than the second sampling rate, the codec belongs to category three. Here, the second sampling rate is smaller than the first sampling rate.
- the electronic device 100 obtains the values of the sampling rates supported by all encoders in the electronic device 100 .
- the electronic device 100 divides the encoders into the categories shown in Table 1 according to the values of the sampling rates supported by all the encoders in the electronic device 100 . It should be noted that the same encoder can be divided into multiple categories.
- Table 2 exemplarily shows that the electronic device 100 and the audio playback device 200 classify the codecs in the electronic device 100 and the audio playback device 200 into a plurality of categories according to the sampling rate.
- the identifiers of the codecs shown in the embodiments of the present application may also be expressed in binary form; for example, encoder one may also be expressed as e001, encoder two as e010, encoder three as e011, decoder one as d001, decoder two as d010, and decoder three as d011.
- the first sampling rate is 48 kHz
- the second sampling rate is 24 kHz.
- the encoders classified into category one in the electronic device 100 can be called high-definition sound quality encoders, the encoders classified into category two can be called standard-definition sound quality encoders, and the encoders classified into category three can be called basic sound quality encoders.
- likewise, the decoders classified into category one in the audio playback device 200 can be called high-definition sound quality decoders, the decoders classified into category two can be called standard-definition sound quality decoders, and the decoders classified into category three can be called basic sound quality decoders.
- the values of the sampling rate supported by the encoder 1 are 8kHz, 16kHz, 24kHz, 32kHz, 48kHz and 96kHz.
- the sample rate values supported by encoder two are 32kHz and 48kHz.
- the sample rate values supported by encoder three are 8kHz and 16kHz.
- according to the codec classification standard shown in Table 2, encoder one belongs to category one, and also to categories two and three; encoder two belongs to category two; and encoder three belongs to category three.
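The multi-category placement described above can be sketched as follows. This is a hedged illustration: the 48 kHz / 24 kHz thresholds and the binary identifiers are assumptions taken from the surrounding examples, and with a 48 kHz first threshold an encoder supporting 48 kHz (such as encoder two here) also lands in category one, so the exact boundaries used by Table 2 may differ.

```python
# Assumed thresholds; an encoder joins every category that any of its
# supported sampling rates falls into.
FIRST_RATE, SECOND_RATE = 48_000, 24_000

encoders = {
    "e001": [8_000, 16_000, 24_000, 32_000, 48_000, 96_000],  # encoder one
    "e010": [32_000, 48_000],                                  # encoder two
    "e011": [8_000, 16_000],                                   # encoder three
}


def category_of(rate):
    return 1 if rate >= FIRST_RATE else 2 if rate >= SECOND_RATE else 3


# Record the encoder identifiers under each category.
categories = {1: [], 2: [], 3: []}
for ident, rates in encoders.items():
    for cat in sorted({category_of(r) for r in rates}):
        categories[cat].append(ident)

print(categories)  # {1: ['e001', 'e010'], 2: ['e001', 'e010'], 3: ['e001', 'e011']}
```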
- the audio playback device 200 also has a decoder 1, a decoder 2, and a decoder 3.
- the audio playback device 200 acquires the values of the sampling rates supported by all the decoders in the audio playback device 200 .
- the audio playback device 200 classifies the decoders into the categories of the codec classification standards shown in Table 2 according to the values of the sampling rates supported by all the decoders in the audio playback device 200 .
- the method by which the audio playback device 200 classifies the decoders into the categories of the codec classification standard shown in Table 1 according to the values of the sampling rates supported by all the decoders in the audio playback device 200 is the same as the method by which the electronic device 100 classifies the encoders according to the values of the sampling rates they support, and the description is not repeated here in this application.
- the codec classification standard can be obtained according to the sampling rate, quantization bit depth, code rate and number of channels.
- Table 3 exemplarily shows the codec classification standards obtained according to the sampling rate, the quantization bit depth, the code rate and the number of channels.
- when the value of one or more sampling rates supported by a codec is greater than or equal to the first sampling rate, the value of one or more quantization bit depths supported by the codec is greater than or equal to the first quantization bit depth, the value of one or more code rates supported by the codec is greater than or equal to the first code rate, and the number of channels supported by the codec is greater than or equal to the first number of channels, the codec belongs to category one.
- when the value of one or more sampling rates supported by the codec is smaller than the first sampling rate and greater than or equal to the second sampling rate, the value of one or more quantization bit depths supported by the codec is smaller than the first quantization bit depth and greater than or equal to the second quantization bit depth, the value of one or more code rates supported by the codec is less than the first code rate and greater than or equal to the second code rate, and the number of channels supported by the codec is greater than or equal to the first number of channels, the codec belongs to category two.
- when the value of one or more sampling rates supported by the codec is smaller than the second sampling rate, the value of one or more quantization bit depths supported by the codec is smaller than the second quantization bit depth, the value of one or more code rates supported by the codec is less than the second code rate, and the number of channels supported by the codec is greater than or equal to the first number of channels, the codec belongs to category three.
- the electronic device 100 obtains the values of the sampling rates supported by all the encoders in the electronic device 100, the values of the code rates supported by all the encoders, the values of the quantization bit depths supported by all the encoders, and the numbers of channels supported by all the encoders.
- the electronic device 100 divides all encoders in the electronic device 100 into the categories shown in Table 3. It should be noted that the same encoder can be divided into multiple categories.
- Table 4 exemplarily shows that the codecs in the electronic device 100 and the audio playback device 200 are classified into a plurality of categories according to the sampling rate, the quantization bit depth, the code rate, and the number of channels.
- the encoders classified into category one in the electronic device 100 can be called high-definition sound quality encoders, the encoders classified into category two can be called standard-definition sound quality encoders, and the encoders classified into category three can be called basic sound quality encoders.
- the sampling rate values supported by encoder one are 8kHz, 16kHz, 24kHz, 32kHz, 48kHz and 96kHz; the quantization bit depth values supported by encoder one are 16 bits, 24 bits and 32 bits; the code rates supported by encoder one are 600kbps, 900kbps and 1200kbps; and the numbers of channels supported by encoder one are mono, dual-channel, 2.1-channel and 5.1-channel.
- the sampling rates supported by encoder two are 16kHz, 32kHz and 48kHz; the quantization bit depths supported by encoder two are 8 bits, 16 bits and 24 bits; the code rates supported by encoder two are 200kbps, 300kbps, 400kbps and 600kbps; and the numbers of channels supported by encoder two are mono and dual-channel.
- the sample rate values supported by encoder three are 8kHz and 16kHz.
- the quantization bit depth values supported by encoder three are 8 bits and 16 bits; the code rates supported by encoder three are 200kbps and 300kbps; and the numbers of channels supported by encoder three are mono, dual-channel, 2.1-channel, 5.1-channel and 7.1-channel.
- accordingly, encoder one belongs to category one; encoder two belongs to category two and also to category three; and encoder three belongs to category three.
- the audio playback device 200 also has a decoder 1, a decoder 2, and a decoder 3.
- the audio playback device 200 obtains the values of the sampling rates supported by all the decoders in the audio playback device 200, the values of the code rates supported by all the decoders, the values of the quantization bit depths supported by all the decoders, and the numbers of channels supported by all the decoders, and classifies the decoders into the categories of the codec classification standard shown in Table 4. It should be noted that the same decoder can belong to multiple categories of the codec classification standard.
- the method by which the audio playback device 200 classifies the decoders into the categories shown in Table 4 according to the sampling rates, code rates, quantization bit depths, and numbers of channels supported by all the decoders in the audio playback device 200 is the same as the method by which the electronic device 100 classifies the encoders into the categories shown in Table 4 according to the sampling rates, code rates, quantization bit depths, and numbers of channels supported by all the encoders in the electronic device 100, and is not repeated in this application.
- after the electronic device 100 and the audio playback device 200 have each divided their encoders and decoders into multiple categories, the electronic device 100 negotiates with the audio playback device 200 to obtain one or more common categories. Thereafter, when the electronic device 100 and the audio playback device 200 transmit audio data, the electronic device 100 uses the default encoder of a common category to encode the audio data and sends it to the audio playback device 200, and the audio playback device 200 uses the default decoder of that category to decode the encoded audio data and then plays the audio.
- FIG. 8 exemplarily shows a schematic diagram of the electronic device 100 negotiating a common category with the audio playback device 200 .
- the electronic device 100 establishes a communication connection with the audio playback device 200 .
- the electronic device 100 may establish a communication connection with the audio playback device 200 through any one of Bluetooth, Wi-Fi Direct, local area network, and the like. How to establish a communication connection between the electronic device 100 and the audio playback device 200 will be described in detail later, and will not be repeated in this application. The embodiments of the present application are described by taking an example of establishing a communication connection between the electronic device 100 and the audio playback device 200 through the Bluetooth technology.
- the audio playback device 200 classifies all the codecs into multiple categories according to the codec classification standard.
- the audio playback device 200 first acquires the codec classification standard. It will be appreciated that the codec category standard is pre-existing in the audio playback device 200 .
- the audio playback device 200 acquires the values of one or more parameters of all the decoders in the audio playback device 200 .
- when the values of one or more parameters of a decoder fall within the value ranges adopted by a category of the codec classification standard, the audio playback device 200 classifies the decoder under that category.
- the audio playback device 200 divides all the decoders in the audio playback device 200 into a plurality of categories. It should be noted that a decoder can be divided into multiple categories.
- the audio playback device 200 records the identifiers of the decoders in each category. Exemplarily, when the category of the codec classification standard is category one, the identifiers of the decoders included in category one include decoder one and decoder two; when the category is category two, the identifiers of the decoders included in category two include decoder two and decoder three; when the category is category three, the identifiers of the decoders included in category three include decoder three; when the category is category four, the identifiers of the decoders included in category four are empty.
- the audio playback device 200 acquires the decoder identifier under each category.
- the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100 .
- after the audio playback device 200 obtains the decoder identifiers under each category, the audio playback device 200 only needs to send to the electronic device 100 the identifiers of the categories in which the number of decoder identifiers is greater than or equal to 1.
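Building that report is a simple filter over the recorded mapping. This is a hedged sketch only; the category-to-decoder mapping below mirrors the example given earlier in the text (where category four is empty), and the identifier strings are illustrative.

```python
# Mapping recorded by the playback device: category -> decoder identifiers.
category_to_decoders = {
    1: ["decoder one", "decoder two"],
    2: ["decoder two", "decoder three"],
    3: ["decoder three"],
    4: [],  # no decoder qualifies for category four
}

# Report only the categories that contain at least one decoder identifier.
supported_categories = [cat for cat, ids in category_to_decoders.items() if len(ids) >= 1]
print(supported_categories)  # [1, 2, 3]
```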
- the audio playback device 200 may also send the category identifiers whose number of decoder identifiers is greater than or equal to 1, and the corresponding decoder identifiers under each category to the electronic device 100 .
- the audio playback device 200 may also only send all the decoder identifiers and the values of one or more parameters corresponding to each decoder to the electronic device 100 .
- the electronic device 100 divides all the decoders in the audio playback device 200 into multiple categories according to the codec classification standard, that is, the electronic device 100 obtains the corresponding decoder identifiers under each category. Specifically, for the electronic device 100, the electronic device 100 will acquire the codec classification standard. It will be appreciated that the codec class standard is pre-existing in the electronic device 100.
- when the electronic device 100 determines that the values of one or more parameters of a decoder fall within the value ranges adopted by a category of the codec classification standard, the electronic device 100 classifies the decoder under that category.
- the electronic device 100 divides all the decoders in the audio playback device 200 into a plurality of categories. It should be noted that a decoder can be divided into multiple categories.
- the electronic device 100 records the identifiers of the decoders in each category. Exemplarily, when the category of the codec classification standard is category one, the identifiers of the decoders included in category one include decoder one and decoder two; when the category is category two, the identifiers of the decoders included in category two include decoder two and decoder three; when the category is category three, the identifiers of the decoders included in category three include decoder three; when the category is category four, the identifiers of the decoders included in category four are empty.
- in this way, the electronic device 100 can acquire the identifiers of the categories in which the number of decoder identifiers is greater than or equal to 1.
- the electronic device 100 classifies all encoders into multiple categories according to the codec classification standard.
- the electronic device 100 first obtains the codec classification standard. It will be appreciated that the codec classification standard is pre-existing in the electronic device 100.
- the electronic device 100 acquires the values of one or more parameters of all encoders in the electronic device 100 .
- when the electronic device 100 determines that the values of one or more parameters of an encoder fall within the value ranges adopted by a category of the codec classification standard, the electronic device 100 classifies the encoder under that category.
- the electronic device 100 divides all encoders into multiple categories according to the above method. It should be noted that an encoder can be divided into multiple categories.
- the electronic device 100 records the identifiers of the encoders under each category. Exemplarily, when the category of the codec classification standard is category one, the identifiers of the encoders included in category one include encoder one and encoder two; when the category is category two, the identifiers of the encoders included in category two include encoder two and encoder three; when the category is category three, the identifiers of the encoders included in category three include encoder three; when the category is category four, the identifiers of the encoders included in category four include encoder one and encoder four.
- the electronic device 100 acquires the encoder identifier under each category.
- S705-S706 may be executed before S702, which is not limited in this application.
- the electronic device 100 confirms the shared category among the categories with the number of encoder identifiers greater than or equal to 1 and the category with the number of decoder identifiers greater than or equal to 1.
- the electronic device 100 receives from the audio playback device 200 the identifiers of the categories in which the number of decoder identifiers is greater than or equal to 1.
- exemplarily, the categories in which the number of decoder identifiers sent by the audio playback device 200 is greater than or equal to 1 may be category one, category two, category three, and category four.
- after acquiring the encoder identifiers under each category, the electronic device 100 confirms the categories in which the number of encoder identifiers is greater than or equal to 1. Exemplarily, the categories for which the electronic device 100 determines that the number of encoder identifiers is greater than or equal to 1 may be category one, category two, and category four.
- the electronic device 100 confirms the shared category from the categories with the number of encoder identifiers equal to or greater than 1 and the category with the number of decoder identifiers greater than or equal to 1.
- the common categories are the intersection of the categories in which the number of encoder identifiers is greater than or equal to 1 and the categories in which the number of decoder identifiers is greater than or equal to 1.
- under a common category, the electronic device 100 can transmit audio data to the audio playback device 200 through the encoder and decoder of that category.
- exemplarily, the common categories may be category one, category two, and category four.
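The negotiation step reduces to a set intersection. This is a hedged sketch; the category numbers simply repeat the example in the text.

```python
# Categories reported by each side (from the example above).
encoder_categories = {1, 2, 4}     # sender side: categories with >= 1 encoder
decoder_categories = {1, 2, 3, 4}  # receiver side: categories with >= 1 decoder

# Common categories are the intersection of the two sets.
common = sorted(encoder_categories & decoder_categories)
print(common)  # [1, 2, 4]
```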
- the electronic device 100 determines the default encoder identifier in the shared category.
- when the number of encoder identifiers under a common category is 1, the electronic device 100 confirms that the encoder identifier under this category is the default encoder identifier; similarly, when the number of decoder identifiers under a common category is 1, the decoder identifier under this category is the default decoder identifier.
- exemplarily, when the category of the codec classification standard is category three, the identifiers of the encoders included in category three include encoder three, and the identifiers of the decoders included in category three include decoder three. Because category three includes only one encoder identifier, the electronic device 100 determines that encoder three is the default encoder under category three; because category three includes only one decoder identifier, the electronic device 100 determines that decoder three is the default decoder under category three.
- when the number of encoder identifiers under a common category is greater than 1, the electronic device 100 confirms a default encoder identifier from the more than one encoder identifiers according to a preset rule; likewise, the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to a preset rule.
- the preset rules may be priority rules, low power rules, high efficiency rules, and so on.
- the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the priority rule, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the priority rule.
- Table 5 exemplarily shows the priority ranking of encoders and decoders.
- the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the priority ranking of the codecs shown in Table 5, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to that priority ranking.
- exemplarily, the identifiers of the decoders included in category one include decoder one and decoder two, and the identifiers of the encoders included in category one include encoder one and encoder two. Since the priority ranking of encoder one is higher than that of encoder two, the electronic device 100 determines that encoder one is the default encoder of category one. Since the priority ranking of decoder one is higher than that of decoder two, the electronic device 100 determines that decoder one is the default decoder of category one.
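The priority rule amounts to a minimum over a ranking table. This is a hedged sketch: the numeric ranks below are assumed stand-ins for Table 5 (a smaller number means higher priority), and the identifier strings are illustrative.

```python
# Assumed priority ranks standing in for Table 5 (lower = higher priority).
priority = {"encoder one": 1, "encoder two": 2, "encoder three": 3}


def default_encoder(identifiers):
    """Pick the identifier with the best (lowest) priority rank."""
    return min(identifiers, key=lambda ident: priority[ident])


print(default_encoder(["encoder one", "encoder two"]))  # encoder one
```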
- the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the low-power rule, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the low-power rule.
- Table 6 exemplarily shows the power ranking of the encoder and the decoder.
- the power levels of the encoder and decoder may be commonly used in the industry, or may be set by developers, and the power level of the encoder and decoder is not limited in this application.
- the electronic device 100 ranks the encoders from low to high power as shown in Table 6 and confirms a default encoder identifier from more than one encoder identifiers, or the electronic device 100 ranks the decoders from low to high power and confirms a default decoder identifier from more than one decoder identifiers.
- exemplarily, the identifiers of the decoders included in category one include decoder one and decoder two, and the identifiers of the encoders included in category one include encoder one and encoder two. Since the power of encoder one is lower than the power of encoder two, the electronic device 100 determines that encoder one is the default encoder of category one. Since the power of decoder one is lower than the power of decoder two, the electronic device 100 determines that decoder one is the default decoder of category one.
- the electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the high-efficiency rule, or the electronic device 100 confirms a default decoder identifier from more than one decoder identifiers according to the high-efficiency rule.
- Table 7 exemplarily shows the efficiency ranking of the encoder and the decoder.
- the efficiency of the encoder and the decoder may be commonly used in the industry, or may be set by the developer, and the present application does not limit the efficiency of the codec.
- the electronic device 100 ranks the encoders by efficiency as shown in Table 7 and confirms a default encoder identifier from more than one encoder identifiers, or the electronic device 100 ranks the decoders by efficiency and confirms a default decoder identifier from more than one decoder identifiers.
- the identifiers of decoders included in category one include decoder one and decoder two, and the identifiers of encoders included in category one include encoder one and encoder two. Since the efficiency of encoder one is higher than that of encoder two, the electronic device 100 determines that encoder one is the default encoder of category one. Since the efficiency of decoder one is higher than that of decoder two, the electronic device 100 determines that decoder one is the default decoder of category one.
- the electronic device 100 can also confirm a default encoder identifier from more than one encoder identifiers according to other rules, or confirm a default decoder identifier from more than one decoder identifiers, which is not limited in this application.
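- The single-identifier case and the preset rules above (low power, high efficiency) can be sketched as follows; the identifiers and ranking tables are hypothetical examples rather than the contents of Table 6 or Table 7:

```python
# Sketch of choosing a default codec identifier under one category when it
# contains more than one identifier. The ranking tables are hypothetical
# examples; as the text notes, real rankings may be industry-standard or
# developer-defined.

POWER_RANK = {"encoder1": 1, "encoder2": 2}       # lower value = lower power
EFFICIENCY_RANK = {"encoder1": 1, "encoder2": 2}  # lower value = higher efficiency

def default_codec(codec_ids, rank):
    """Pick the default codec identifier from a category's codec list.

    Under the low power rule, rank maps each identifier to its power
    position (lowest power first); under the high efficiency rule, to its
    efficiency position (highest efficiency first). Either way the
    best-ranked (smallest) entry becomes the default.
    """
    if len(codec_ids) == 1:   # a single identifier is the default directly
        return codec_ids[0]
    return min(codec_ids, key=lambda c: rank[c])

print(default_codec(["encoder1", "encoder2"], POWER_RANK))       # encoder1
print(default_codec(["encoder1", "encoder2"], EFFICIENCY_RANK))  # encoder1
```

- The same helper serves encoders and decoders, since the rules are applied per category to whichever identifier list is being reduced to a default.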
- After the electronic device 100 determines the default encoder identifier under a shared category, the electronic device 100 also needs to determine the default decoder identifier under the shared category.
- the electronic device 100 divides all the decoders in the audio playback device 200 into a plurality of categories according to the codec classification standard, and then the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200. Or, the audio playback device 200 divides all the decoders in the audio playback device 200 into multiple categories according to the codec classification standard, and sends the category identifiers whose number of decoder identifiers is greater than or equal to 1, together with the corresponding decoder identifiers under each category, to the electronic device 100;
- the electronic device 100 then confirms the categories shared by the electronic device 100 and the audio playback device 200. After the electronic device 100 confirms the shared categories, in addition to confirming a default encoder identifier under each shared category, the electronic device 100 also needs to confirm a default decoder identifier under each shared category. Specifically, when the number of corresponding decoder identifiers under a shared category is 1, the electronic device 100 confirms that the corresponding decoder identifier under this category is the default decoder identifier. When the number of corresponding decoder identifiers under a shared category is greater than 1, the electronic device 100 can adopt the embodiments shown in Table 5 to Table 7 to confirm the default decoder identifier under this category, which will not be repeated in this application.
- the electronic device 100 sends the shared category identifier to the audio playback device 200 .
- After the electronic device 100 confirms the shared categories, the electronic device 100 sends the shared category identifiers to the audio playback device 200.
- the audio playback device 200 receives the shared category identifier sent by the electronic device 100 .
- the audio playback device 200 also needs to confirm the default decoder identifier in each common category.
- the audio playback device 200 confirms that the identifiers of the decoders in this category are the default decoder identifiers.
- the identifiers of the decoders included in category three include decoder three. Because category three includes only one decoder identifier, the audio playback device 200 confirms that decoder three is the default decoder under category three.
- the audio playback device 200 confirms a default decoder identifier from more than one decoder identifiers according to a preset rule.
- the preset rules may be priority rules, low power rules, high efficiency rules, and so on.
- the method by which the audio playback device 200 confirms a default decoder identifier from more than one decoder identifiers according to the priority rule, the low power rule, or the high efficiency rule is the same as the method by which the aforementioned electronic device 100 confirms a default encoder identifier from more than one encoder identifiers according to the priority rule, the low power rule, or the high efficiency rule, which is not repeated in this application.
- S809 may also be executed after S802, which is not limited in this application.
- when the electronic device 100 confirms the default decoder identifier under each shared category, the electronic device 100, when sending the shared category identifiers to the audio playback device 200, also needs to send the default decoder identifier under each shared category to the audio playback device 200.
- the electronic device 100 may also send the common category identifier and the default decoder identifier and default encoder identifier under each category to the audio playback device 200 .
- After the electronic device 100 and the audio playback device 200 have confirmed the shared categories and the default codec under each shared category, the electronic device 100 selects an appropriate category from the shared categories according to the application type, the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on, and performs audio data transmission with the default codec under that category.
- The following exemplarily describes how the electronic device 100 selects an appropriate category from the shared categories according to the application type, the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on.
- Application type: in the electronic device 100, different types of application programs that play audio have different requirements for the characteristics of the audio being played.
- the electronic device 100 obtains the minimum sampling rate, the minimum quantization bit depth, and the number of channels of the audio data required by the application program that plays the audio, and selects an appropriate category from the shared categories according to these requirements. For example, some applications have relatively high requirements on sound quality, and therefore relatively high requirements on the values of the sampling rate and the quantization bit depth of the audio data; a general application program that plays audio sets the sampling rate of the audio data at 32 kHz and the quantization bit depth of the audio data at 16 bits.
- Some applications with relatively high sound quality requirements require that the sampling rate of the audio data be at least 48 kHz and that the quantization bit depth of the audio data be at least 24 bits.
- the electronic device 100 may select a category whose sampling rate values include 48 kHz and whose quantization bit depth values include 24 bits, and use the default codec under this category for audio data transmission.
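- Category selection against an application's requirements can be sketched as follows; the shared-category table is a hypothetical example, and matching is done by checking that the category's value sets include the required values, as the text describes:

```python
# Sketch: pick a shared category whose supported sampling rate values and
# quantization bit depth values include the values required by the
# application playing audio. The shared-category table is hypothetical.

SHARED_CATEGORIES = {
    "category1": {"sampling_rates_khz": [16, 32, 44.1], "bit_depths": [16]},
    "category2": {"sampling_rates_khz": [48, 96],       "bit_depths": [16, 24]},
}

def select_category(required_rate_khz, required_depth):
    """Return the first shared category whose sampling rate values include
    the required rate and whose bit depth values include the required depth."""
    for name, caps in SHARED_CATEGORIES.items():
        if required_rate_khz in caps["sampling_rates_khz"] and \
           required_depth in caps["bit_depths"]:
            return name
    return None   # no shared category satisfies the requirement

# A high-quality application requiring 48 kHz / 24-bit maps to category2;
# a general application at 32 kHz / 16-bit maps to category1.
print(select_category(48, 24))   # category2
print(select_category(32, 16))   # category1
```

- The same lookup applies to the later factors (characteristics of the audio being played, rendering capability, channel conditions) by substituting the relevant parameter values.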
- Characteristics of the audio being played: in the electronic device 100, different audio data that can be played by an application program may have different characteristics.
- the electronic device 100 obtains the minimum sampling rate, the minimum quantization bit depth, and the number of channels of the audio data being played, and then selects an appropriate category from the shared categories accordingly. For example, some audio data have relatively high requirements on sound quality, and therefore relatively high requirements on the sampling rate and quantization bit depth of the audio data; for example, the sampling rate of general audio data is 32 kHz, and the quantization bit depth is 16 bits.
- the minimum value of the sampling rate of some preset audio data with relatively high sound quality is 48 kHz
- the minimum value of the quantization bit depth of the audio data is 24 bits.
- the electronic device 100 may select a category whose sampling rate values include 48 kHz and whose quantization bit depth values include 24 bits, and use the default codec under this category for audio data transmission.
- Audio rendering capability of the electronic device 100: the value of the sampling rate of the audio currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
- If the audio rendering capability of the electronic device 100 is enabled, the rendering unit of the electronic device 100 can increase the value of the sampling rate of the audio data from sampling rate one to sampling rate two, increase the value of the quantization bit depth of the audio data from quantization bit depth one to quantization bit depth two, and increase the value of the number of channels of the audio data from channel number one to channel number two.
- Sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and channel number two is greater than channel number one.
- After rendering processing, the value of the sampling rate of the audio data of the electronic device 100 is sampling rate two, the value of the quantization bit depth of the audio data is quantization bit depth two, the value of the code rate is code rate one, and the value of the number of channels of the audio data is channel number two.
- The electronic device 100 selects, from the shared categories, a category whose sampling rate includes sampling rate two, whose quantization bit depth includes quantization bit depth two, whose code rate includes code rate one, and whose number of channels includes channel number two, and uses the default codec under this category for audio data transmission.
- Network conditions of the channel: the value of the sampling rate of the audio data currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the number of channels is channel number one, and the value of the code rate is code rate one.
- Since the sampling rate of the audio data is sampling rate one, the quantization bit depth is quantization bit depth one, the number of channels is channel number one, and the code rate supported by the channel is code rate one, the electronic device 100 selects, from the shared categories, a category whose sampling rate includes sampling rate one, whose quantization bit depth includes quantization bit depth one, whose number of channels includes channel number one, and whose code rate includes code rate one, and uses the default codec under this category for audio data transmission.
- the electronic device 100 can also select an appropriate category from the shared categories according to other parameters, and is not limited to the application type, the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, and so on listed in the foregoing embodiments, which will not be repeated in this application.
- After the electronic device 100 and the audio playback device 200 select an appropriate category from the shared categories and use the default codec under that category for audio data transmission, due to factors such as a change in the application type, a change in the characteristics of the audio being played (sampling rate, quantization bit depth, number of channels), the audio rendering capability of the electronic device being turned on, or a change in the network conditions of the channel, the electronic device 100 may reselect another category and notify the audio playback device 200 of the identifier of that category.
- the electronic device 100 and the audio playback device 200 use the default codec under another category to transmit audio data.
- The application type changes from application type one to application type two: in the electronic device 100, different types of application programs that play audio have different requirements for the characteristics of the audio being played.
- When application type one plays audio data, the value of the sampling rate of the audio data is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
- If the electronic device 100 switches the application program that plays audio data from application program one to application program two, and application program two has relatively high requirements on the sound quality of the audio data, then when application type two plays audio data, the value of the sampling rate of the audio data is sampling rate two, the value of the quantization bit depth is quantization bit depth two, the value of the code rate is code rate two, and the value of the number of channels is channel number one, where sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and code rate two is greater than code rate one. Due to the parameter change, the electronic device 100 reselects a category.
- The electronic device 100 selects a category whose sampling rate includes sampling rate two, whose quantization bit depth includes quantization bit depth two, whose code rate includes code rate two, and whose number of channels includes channel number one, and uses the default codec under this category to transmit audio data.
- the audio content is switched from audio data 1 to audio data 2: in the electronic device 100, different audio data that can be played by an application program may have different characteristics.
- When the electronic device 100 plays audio data one, the value of the sampling rate of audio data one is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
- If the electronic device 100 switches the audio content being played from audio data one to audio data two, and the sound quality requirement of audio data two is relatively high, then when the electronic device 100 plays audio data two, the value of the sampling rate of audio data two is sampling rate two, the value of the quantization bit depth is quantization bit depth two, the value of the code rate is code rate two, and the value of the number of channels is channel number one, where sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and code rate two is greater than code rate one. Due to the parameter change, the electronic device 100 reselects a category.
- The electronic device 100 selects a category whose sampling rate includes sampling rate two, whose quantization bit depth includes quantization bit depth two, whose code rate includes code rate two, and whose number of channels includes channel number one, and uses the default codec under this category to transmit audio data.
- The audio rendering capability of the electronic device is turned from off to on: the value of the sampling rate of the audio currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the code rate is code rate one, and the value of the number of channels is channel number one.
- After the audio rendering capability of the electronic device 100 is turned on, the rendering unit of the electronic device 100 can increase the value of the sampling rate of the audio data from sampling rate one to sampling rate two, increase the value of the quantization bit depth of the audio data from quantization bit depth one to quantization bit depth two, and increase the value of the number of channels of the audio data from channel number one to channel number two.
- Sampling rate two is greater than sampling rate one, quantization bit depth two is greater than quantization bit depth one, and channel number two is greater than channel number one. Due to the parameter change, the electronic device 100 reselects a category.
- The electronic device 100 selects a category whose sampling rate includes sampling rate two, whose quantization bit depth includes quantization bit depth two, whose code rate includes code rate one, and whose number of channels includes channel number two, and uses the default codec under this category to transmit audio data.
- The network conditions of the channel change: the value of the sampling rate of the audio data currently played by the electronic device is sampling rate one, the value of the quantization bit depth is quantization bit depth one, the value of the number of channels is channel number one, and the value of the code rate is code rate one.
- Due to strong interference and attenuation on the wireless transmission channel, the code rate supported by the wireless transmission channel is reduced from code rate one to code rate two, where code rate two is smaller than code rate one. Due to the parameter change, the electronic device 100 reselects a category. The electronic device 100 selects a category whose sampling rate includes sampling rate one, whose quantization bit depth includes quantization bit depth one, whose code rate includes code rate two, and whose number of channels includes channel number one, and uses the default codec under this category to transmit audio data.
- After the electronic device 100 reselects the default codec under another category, the electronic device 100 and the audio playback device 200 use the default codec under the other category to transmit audio data.
- the embodiments of the present application may adopt the following methods to achieve smooth transition during codec switching.
- category 1 corresponds to default encoder 1 and decoder 1
- category 2 corresponds to default encoder 2 and decoder 2. Then, the electronic device 100 will switch from encoder one to encoder two, and the audio playback device 200 will switch from decoder one to decoder two.
- the electronic device 100 needs to complete the switch from encoder one to encoder two within one frame of audio data.
- This frame of audio data in the switching process from encoder one to encoder two is referred to as the i-th frame of audio data.
- Encoder one encodes the i-th frame of audio data to obtain packet A (data packet A).
- Encoder two encodes the i-th frame of audio data to obtain packet B (data packet B).
- the electronic device 100 transmits packet A and packet B to the audio playback device 200.
- the audio playback device 200 uses decoder one to decode packet A to obtain the audio data pcmA; the audio playback device 200 also uses decoder two to decode packet B to obtain the audio data pcmB. Then, the audio playback device 200 performs smoothing processing on the i-th frame of audio data, and the smoothing process is shown in formula (1):
- Pcm(i) represents the ith frame of audio data after smoothing
- wi represents a smoothing coefficient
- wi can adopt linear smoothing or cosine smoothing, and so on.
- the value range of wi is between 0 and 1. The smaller the smoothing coefficient wi, the stronger the smoothing effect, and the smaller the adjustment to the prediction result; the larger the smoothing coefficient wi, the weaker the smoothing effect, and the greater the adjustment to the prediction result.
- pcmA(i) represents the audio data obtained by the decoder 1 decoding packet A
- pcmB(i) represents the audio data obtained by the decoder 2 decoding packet B.
- the audio playback device 200 can obtain the audio data frame Pcm(i) after smoothing the audio data of the ith frame. In this way, the audio playback device 200 plays the audio data frame after the audio data of the ith frame is smoothed, so that the audio data frame in the codec switching process can be smoothly transitioned.
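- Formula (1) itself is not reproduced in this text; based on the surrounding description (wi in [0, 1], weighting pcmA from decoder one against pcmB from decoder two), a minimal sketch of the per-frame smoothing, assuming a linear ramp of wi from 1 down to 0 across the frame, could look like:

```python
# Sketch of the smoothing in formula (1): crossfade between the old
# decoder's output pcmA and the new decoder's output pcmB within one frame.
# The exact weight assignment is an assumption based on the surrounding
# description; w ramps from 1 (all pcmA) down to 0 (all pcmB) across the
# frame (linear smoothing; a cosine ramp could be used instead).

def smooth_frame(pcmA, pcmB):
    n = len(pcmA)
    out = []
    for i in range(n):
        w = 1.0 - i / (n - 1)               # smoothing coefficient in [0, 1]
        out.append(w * pcmA[i] + (1.0 - w) * pcmB[i])
    return out

frame = smooth_frame([1.0] * 5, [0.0] * 5)
print(frame)   # [1.0, 0.75, 0.5, 0.25, 0.0]
```

- Because the frame starts fully on decoder one's output and ends fully on decoder two's output, subsequent frames can be decoded by decoder two alone without an audible discontinuity.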
- max indicates the operation of taking the maximum value
- % indicates the operation of taking the remainder
- the frame length indicates that the encoder encodes a certain duration of audio data into one frame; this certain duration of audio data is one frame length.
- each frame of audio data yields two data packets, namely packet A and packet B.
- the electronic device 100 transmits packet A and packet B to the audio playback device 200.
- the audio playback device receives packet A and packet B, uses decoder one to decode packet A to obtain the audio data pcmA, and uses decoder two to decode packet B to obtain the audio data pcmB.
- the first D-1 audio data frames are still the audio data decoded by the decoder 1.
- The audio playback device 200 performs smoothing on the D-th audio data frame, and the smoothing process is shown in formula (3):
- Pcm(i) represents the D-th audio data frame after smoothing
- wi represents a smoothing coefficient
- wi can adopt linear smoothing or cosine smoothing, and so on.
- the value range of wi is between 0 and 1. The smaller the smoothing coefficient wi, the stronger the smoothing effect, and the smaller the adjustment to the prediction result; the larger the smoothing coefficient wi, the weaker the smoothing effect, and the greater the adjustment to the prediction result.
- pcmA(i) represents the audio data obtained by the decoder 1 decoding the D th audio data frame
- pcmB(i) represents the audio data decoded by the decoder 2 on the D th audio data frame.
- the audio playback device 200 can obtain the audio data frame Pcm(i) after smoothing the D-th audio data frame. In this way, the audio playback device 200 plays the first D-1 audio data frames and the smoothed D-th audio data frame, so that the audio data frames in the codec switching process transition smoothly.
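- A minimal sketch of this second transition method follows. The value of D would come from formula (2) (which, per the symbol descriptions, involves the frame length, the max operation, and the remainder operation); since that formula is not reproduced here, D is taken as an input, and the frame-D crossfade reuses the linear smoothing assumption from formula (1):

```python
# Sketch of the second transition method: each frame is encoded into both
# packet A (encoder one) and packet B (encoder two); the first D-1 output
# frames keep decoder one's audio, frame D is crossfaded, and later frames
# come from decoder two. D is an input here rather than computed from
# formula (2), which this text does not reproduce.

def transition(frames_A, frames_B, D):
    """frames_A / frames_B: per-frame sample lists decoded by decoder one /
    decoder two. Returns the sequence of frames actually played."""
    out = [list(f) for f in frames_A[:D - 1]]   # frames 1..D-1: decoder one
    a, b = frames_A[D - 1], frames_B[D - 1]     # frame D: crossfade a -> b
    n = len(a)
    out.append([(1 - i / (n - 1)) * a[i] + (i / (n - 1)) * b[i]
                for i in range(n)])
    out.extend(list(f) for f in frames_B[D:])   # afterwards: decoder two
    return out

played = transition([[1.0] * 3] * 4, [[0.0] * 3] * 4, D=3)
print(played[2])   # crossfaded frame D: [1.0, 0.5, 0.0]
```

- Keeping decoder one's output for the first D-1 frames absorbs the delay difference between the two decoders before the audible crossfade happens.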
- FIG. 9 is a flowchart of a codec negotiation and switching method provided by an embodiment of the present application.
- the electronic device 100 establishes a communication connection with the audio playback device 200 .
- the electronic device 100 may establish a communication connection with the audio playback device 200 through one or more of Bluetooth, Wi-Fi Direct, and NFC.
- the embodiments of the present application are described by taking an example of establishing a communication connection between the electronic device 100 and the audio playback device 200 through the Bluetooth technology.
- the following describes how to establish a communication connection between the electronic device 100 and the audio playback device 200 in detail with reference to the UI diagram.
- FIGS. 9A-9C exemplarily show UI diagrams for establishing a communication connection between the electronic device 100 and the audio playback device 200 through Bluetooth.
- the electronic device 100 may also establish a communication connection with the audio playback device 200 through one or more of Wi-Fi Direct and NFC.
- FIG. 9A shows an example audio playback user interface 600 on electronic device 100 .
- the audio playback interface 600 includes a music name 601, a playback control 602, a previous control 603, a next control 604, a playback progress bar 605, a download control 606, a sharing control 607, a more button 608, and so on.
- the music name 601 may be "Dream it possible".
- the play control 602 is used to trigger the electronic device 100 to play the audio data corresponding to the music name 601.
- the previous control 603 can be used to trigger the electronic device 100 to switch to the previous audio data in the playlist for playback.
- the next track control 604 can be used to trigger the electronic device 100 to switch to the next audio data in the playlist to play.
- the playback progress bar 605 can be used to indicate the playback progress of the current audio data.
- the download control 606 can be used to trigger the electronic device 100 to download and save the audio data of the music title 601 to a local storage medium.
- the sharing control 607 can be used to trigger the electronic device 100 to share the playback link of the audio data corresponding to the music name 601 to other applications.
- the more controls 608 may be used to trigger the electronic device 100 to display more functional controls related to music playback.
- the electronic device 100 can also play audio data played by video applications, audio data played by game applications, and audio data of real-time calls, etc.
- the source of the audio data played by the electronic device 100 is not limited in this application.
- When the electronic device 100 detects a downward swipe gesture on the display screen, in response to the swipe gesture, the electronic device 100 displays the window 610 shown in FIG. 9C on the user interface 20.
- a Bluetooth control 611 may be displayed in the window 610 , and the Bluetooth control 611 may receive an operation (eg, touch operation, click operation) for turning on/off the Bluetooth function of the electronic device 100 .
- the representation of the Bluetooth control 611 may include icons and/or text (eg, the text "Co-cast").
- the window 610 can also display switch controls for other functions such as Wi-Fi, hotspot, flashlight, ringing, auto-rotate, instant sharing, airplane mode, mobile data, location information, screen capture, eye protection mode, screen recording, collaborative screencasting, and NFC.
- When the Bluetooth function is turned on, the electronic device 100 can change the display form of the Bluetooth control 611, such as adding a shadow to the Bluetooth control 611.
- the user may also input a downward swipe gesture on other interfaces to trigger the electronic device 100 to display the window 610 .
- the user operation of enabling the Bluetooth function can also be implemented in other forms, which are not limited in the embodiments of the present application.
- the electronic device 100 may also display a setting interface provided by a settings application, and the setting interface may include a control provided to the user for turning on/off the Bluetooth function of the electronic device 100; the user can input a user operation on the control to turn on the Bluetooth function of the electronic device 100.
- After detecting the user operation to enable the Bluetooth function, the electronic device 100 discovers other electronic devices with the Bluetooth function enabled near the electronic device 100 through Bluetooth. For example, the electronic device 100 can discover and connect to the nearby audio playback device 200 and other electronic devices through Bluetooth.
- the electronic device 100 determines whether a connection is established with the audio playback device 200 for the first time. If the electronic device 100 establishes a connection with the audio playback device 200 for the first time, the electronic device 100 executes S903; otherwise, the electronic device 100 executes S907.
- the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100 .
- Before the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100, the audio playback device 200 divides all the decoders into multiple categories according to the codec classification standard. How the audio playback device 200 classifies all the decoders into multiple categories according to the codec classification standard has been described in detail in the embodiment shown in S702 and will not be repeated in this application.
- the audio playback device 200 sends the identification of the first category and the identification of the second category to the electronic device 100, and the electronic device 100 receives the identification of the first category and the identification of the second category sent by the audio playback device 200; or, The audio playback device 200 sends the identifier of the first category to the electronic device 100 , and the electronic device 100 receives the identifier of the first category sent by the audio playback device 200 .
- the decoders in the first category include at least the first decoder
- the decoders in the second category include at least the second decoder.
- Before the audio playback device 200 sends the category identifiers whose number of decoder identifiers is greater than or equal to 1 to the electronic device 100, the electronic device 100 classifies the first encoder into the first category based on the parameter information of the first encoder and the codec classification standard, and classifies the second encoder into the second category based on the parameter information of the second encoder and the codec classification standard,
- where the parameter information of the first encoder and the parameter information of the second encoder include one or more of the sampling rate, the code rate, the quantization bit depth, the number of channels, and the audio stream format. The audio playback device is also used to: classify the first decoder into the first category based on the parameter information of the first decoder and the codec classification standard, and classify the second decoder into the second category based on the parameter information of the second decoder and the codec classification standard, where the parameter information of the first decoder and the parameter information of the second decoder include one or more of the sampling rate, the code rate, the quantization bit depth, the number of channels, and the audio stream format. The codec classification standard includes the mapping relationship between codec categories and codec parameter information. It should be noted that the parameter information of the first encoder, the parameter information of the second encoder, the parameter information of the first decoder, and the parameter information of the second decoder are all the same.
- the electronic device 100 confirms the shared categories from the categories whose number of encoder identifiers is greater than or equal to 1 and the categories whose number of decoder identifiers is greater than or equal to 1.
- the electronic device 100 receives the category identifiers with the number of decoder identifiers greater than or equal to 1 sent by the audio playback device 200.
- the electronic device 100 confirms that the shared categories of the electronic device and the audio playback device are the first category and the second category.
- If the electronic device has not received the identifier of the second category sent by the audio playback device, or the number of encoders classified into the second category by the electronic device is 0, the electronic device 100 confirms that the shared category of the electronic device and the audio playback device is the first category.
- the electronic device 100 can perform audio data transmission with the audio playback device 200 through the codec under this category.
- the electronic device 100 may also classify all encoders in the electronic device 100 into multiple categories according to a preset codec classification standard after the connection is established.
- the audio playback device 200 may also classify all the decoders in the audio playback device 200 into multiple categories according to the preset codec classification standard, and confirm the categories in which the number of decoder identifiers is greater than or equal to 1. This is not limited in this application.
- the audio playback device 200 sends all the decoder identifiers in the audio playback device 200 and the values of one or more parameters corresponding to each decoder to the electronic device 100.
- the electronic device 100 divides all the encoders in the electronic device 100 and all the decoders in the audio playback device 200 into multiple categories according to the codec classification standard, and confirms the categories in which the number of encoder identifiers is greater than or equal to 1 and the categories in which the number of decoder identifiers is greater than or equal to 1. This is not limited in this application.
- the codec classification standard can be obtained according to one or a combination of two or more parameters such as sampling rate, quantization bit depth, code rate, number of channels, and audio stream format.
- when the codec classification standard is obtained according to the sampling rate, quantization bit depth, code rate, number of channels, and audio stream format, the codec classification standard can divide codecs into two categories.
- the value of the sampling rate of the codec is greater than or equal to the first sampling rate (target sampling rate)
- the value of the code rate of the codec is greater than or equal to the first code rate (target code rate)
- the value of the quantization bit depth of the codec is greater than or equal to the first quantization bit depth (target quantization bit depth)
- the value of the number of channels of the codec is greater than or equal to the first channel number (target channel number)
- the audio stream format is PCM (target audio stream format).
- the codec is divided into category one. When the value of the sampling rate of the codec is less than the first sampling rate, the value of the code rate of the codec is less than the first code rate, the value of the quantization bit depth of the codec is less than the first quantization bit depth, the value of the number of channels of the codec is greater than or equal to the first number of channels, and the audio stream format is PCM, the codec is divided into category two.
- the first sampling rate is 48 kHz
- the first code rate is 600 kbps
- the first quantization bit depth is 24 bits
- the first channel number is 2
- the audio stream format is PCM.
- the codec classification criteria of category one are: the sampling rate of the codec is greater than or equal to 48 kHz, the code rate of the codec is greater than or equal to 600 kbps, the quantization bit depth of the codec is greater than or equal to 24 bits, the number of channels of the codec is greater than or equal to 2, and the audio stream format is PCM.
- the codec classification criteria of category two are: the sampling rate of the codec is less than 48 kHz, the code rate of the codec is less than or equal to 600 kbps, the quantization bit depth of the codec is less than 24 bits, the number of channels of the codec is greater than or equal to 2, and the audio stream format is PCM.
- Codecs classified into category one may be referred to as high-definition sound quality codecs, and codecs classified into category two may be referred to as standard sound quality codecs.
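The classification standard above can be sketched as a small function. This is a minimal illustrative sketch, not an implementation from the patent: the function name, argument shapes, and the exact comparison operators for category two (which the text states slightly differently in different places) are assumptions based on the example thresholds of 48 kHz, 600 kbps, 24 bits, 2 channels, and PCM.

```python
def classify_codec(sample_rate_hz, bitrate_kbps, bit_depth, channels, stream_format):
    """Return 1 (high-definition sound quality) or 2 (standard sound quality),
    or None when the codec matches neither category's criteria."""
    # Both categories in this example require PCM and at least 2 channels.
    if stream_format != "PCM" or channels < 2:
        return None
    # Category one: every parameter meets or exceeds its target value.
    if sample_rate_hz >= 48_000 and bitrate_kbps >= 600 and bit_depth >= 24:
        return 1
    # Category two: sampling rate, code rate, and quantization depth all fall
    # below (or at) the targets.
    if sample_rate_hz < 48_000 and bitrate_kbps <= 600 and bit_depth < 24:
        return 2
    return None
```

A codec at exactly the target values (48 kHz / 600 kbps / 24-bit / stereo PCM) lands in category one under this sketch, matching the criteria of category one above.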
- the audio playback device 200 includes decoder one, decoder two, and decoder three, and decoder one belongs to category one, and decoder two and decoder three belong to category two.
- the electronic device 100 includes encoder one, encoder two, and encoder three, and encoder one belongs to category one, and encoder two and encoder three belong to category two.
- the categories shared by the electronic device 100 and the audio playback device 200 include category one and category two.
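The negotiation of shared categories reduces to keeping only those categories in which both sides have at least one codec. The following sketch assumes each side's classification result is represented as a dict from category id to codec identifiers; the dict shape and names are illustrative, not from the patent.

```python
def common_categories(encoders_by_category, decoders_by_category):
    """Both arguments map category id -> list of codec identifiers.
    A category is shared when each side has >= 1 identifier in it."""
    return sorted(
        cat for cat, encs in encoders_by_category.items()
        if encs and decoders_by_category.get(cat)
    )

# Example from this embodiment: encoder one is in category one, encoders two
# and three are in category two; decoder one is in category one, decoders two
# and three are in category two.
encoders = {1: ["encoder_1"], 2: ["encoder_2", "encoder_3"]}
decoders = {1: ["decoder_1"], 2: ["decoder_2", "decoder_3"]}
```

With these inputs the shared categories are category one and category two, as the embodiment states.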
- the electronic device 100 confirms a default encoder identifier and a default decoder identifier in each common category.
- the electronic device 100 and the audio playback device 200 can transmit audio data through the codecs classified in the shared category.
- the electronic device 100 needs to confirm a default encoder ID and a default decoder ID under each shared category. After that, the electronic device 100 and the audio playback device 200 will use a default encoder and a default decoder under each common category to transmit audio data.
- the electronic device 100 confirms that the encoder under this category is the default encoder, or that the identifier of the decoder under this category is the default decoder identifier.
- the codecs classified into category 1 are high-definition audio codecs.
- the encoders in category one include encoder one (first encoder), and the decoders in category one include decoder one (first decoder). Because category one only includes one encoder and one decoder, the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder under category one. When the electronic device 100 confirms that the codec in category one is used to transmit the audio data, the electronic device 100 sends the adopted category identification (the identification of category one) to the audio playback device 200.
- the electronic device 100 uses the encoder in category one to encode the audio data into the first encoded audio data and sends it to the audio playback device 200, and the audio playback device 200 uses the decoder in category one to decode the first encoded audio data into the first playback audio data, and plays the first playback audio data.
- the codecs classified into category 2 are basic audio codecs.
- the encoder in category two includes encoder two (second encoder), and the decoder in category two includes decoder two (second decoder). Because category two includes only one encoder and one decoder, the electronic device 100 confirms that encoder two and decoder two are the default encoder and the default decoder under category two. When the electronic device 100 confirms that the codec in the second category is used to transmit the audio data, the electronic device 100 sends the adopted category identifier (the identifier of the second category) to the audio playback device 200.
- the electronic device 100 uses the second encoder in the second category to compress the audio data and sends it to the audio playback device 200, and the audio playback device 200 uses the second decoder in the second category to decompress the compressed audio data and play the audio data.
- the electronic device 100 needs to identify one of the multiple encoders in the category as the default encoder, or one of the multiple decoders in the category as the default decoder.
- the codecs classified into category 1 are high-definition audio codecs.
- the encoder in category one includes encoder one (first encoder) and encoder three (third encoder), and the decoder in category one includes decoder one (first decoder) and decoder three (third decoder). Because category one includes multiple encoders and multiple decoders, the electronic device 100 determines one default encoder and one default decoder for category one.
- the electronic device 100 can identify a default encoder and a default decoder from the plurality of encoders and the plurality of decoders according to preset rules; the preset rules can be priority rules, low-power rules, high-efficiency rules, and so on.
- for the method in which the electronic device 100 identifies a default encoder (the first encoder) and a default decoder (the first decoder) from the plurality of encoders and the plurality of decoders according to the priority rule, the low-power rule, or the high-efficiency rule, please refer to the embodiment shown in S808, which is not repeated in this application.
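A priority rule for picking the default codec under a category can be sketched as a table lookup. This is a hedged illustration: the priority values and codec names below are hypothetical, and the other preset rules (low-power, high-efficiency) would simply substitute a different ranking table.

```python
# Hypothetical priority table: higher value means the codec is preferred.
PRIORITY = {"encoder_1": 10, "encoder_3": 5}

def default_codec(candidates, priority=PRIORITY):
    """Pick the candidate with the highest priority; codecs absent from the
    table rank last. Applied per category to its encoders or decoders."""
    return max(candidates, key=lambda c: priority.get(c, 0))
```

Under this table, choosing among encoder one and encoder three in category one selects encoder one as the default, consistent with the embodiment.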
- after the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder under the category, the electronic device 100 sends the default decoder identification and/or encoder identification under the category to the audio playback device 200.
- when the electronic device 100 confirms that the codec in category one is used to transmit the audio data, the electronic device 100 sends the adopted category identifier (the identifier of category one) to the audio playback device 200.
- the electronic device 100 uses the encoder in the first category to encode the audio data into the first encoded audio data and sends it to the audio playback device 200, and the audio playback device 200 uses the decoder in category one to decode the first encoded audio data into the first playback audio data, and plays the first playback audio data.
- the codecs classified into category one are high-definition audio codecs
- the codecs classified into category two are basic audio codecs.
- the encoder in category one includes encoder one
- the decoder in category one includes decoder one
- the encoder in category two includes encoder two
- the decoder in category two includes decoder two. Because category one includes only one encoder and one decoder, and category two includes only one encoder and one decoder, the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder for category one, and that encoder two and decoder two are the default encoder and the default decoder under category two.
- the electronic device 100 can use the codec in category one or category two to transmit audio data
- the electronic device 100 sends the adopted category identifier (the identifier of category one or the identifier of category two) to the audio playback device 200.
- the codecs classified into category 1 are high-definition audio codecs
- the codecs classified into category 2 are basic audio codecs.
- the encoder in category one includes encoder one and encoder three
- the decoder in category one includes decoder one and decoder three
- the encoder in category two includes encoder two and encoder four
- the decoder in category two includes decoder two and decoder four. Category one includes multiple encoders and multiple decoders, and category two also includes multiple encoders and multiple decoders.
- the electronic device 100 may identify a default encoder and a default decoder under category 1 and category 2 from a plurality of encoders and a plurality of decoders according to a preset rule.
- the preset rules may be priority rules, low power rules, high efficiency rules, and so on.
- for the method in which the electronic device 100 determines a default encoder and a default decoder under category one and category two from the plurality of encoders and the plurality of decoders according to the priority rule, the low-power rule, or the high-efficiency rule, please refer to the embodiment shown in S808, which will not be repeated in this application.
- after the electronic device 100 confirms that encoder one and decoder one are the default encoder and the default decoder of category one, and that encoder two and decoder two are the default encoder and the default decoder of category two, the electronic device 100 sends the default decoder identification and/or encoder identification under category one and the default decoder identification and/or encoder identification under category two to the audio playback device 200.
- the electronic device 100 may use the default codec in category 1 or category 2 to transmit audio data.
- the electronic device 100 may use a codec in category 1 to transmit audio data, and the codec in category 1 is a high-definition audio codec.
- the electronic device 100 can switch from the codec in the first category to the codec in the second category, and use the codec in category two to transmit audio data.
- after the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200, in an optional implementation manner, the electronic device 100 confirms the default encoder identification and the default decoder identification in each shared category. The method for the electronic device 100 to confirm the default encoder identification and the default decoder identification in each category shared by the electronic device 100 and the audio playback device 200 has been described in detail in the embodiment shown in FIG. and will not be repeated here.
- the electronic device 100 only needs to confirm the default encoder identifier in the category shared by the electronic device 100 and the audio playback device 200 . After that, the electronic device 100 sends the shared category of the electronic device 100 and the audio playback device 200 to the audio playback device 200, and the audio playback device 200 confirms the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200.
- the preset codec classification standard may be updated every fixed period (e.g., one month). Therefore, when the codec classification standard is periodically updated, the electronic device 100 also needs to periodically (for example, every month) divide the codecs into multiple categories, and the audio playback device 200 also needs to periodically (for example, every month) divide the decoders into multiple categories.
- the electronic device 100 and the audio playback device 200 may classify the codecs into multiple categories at an appropriate time according to the collected user behavior habits.
- the electronic device 100 and the audio playback device 200 may classify the codecs into multiple categories during the time period "24:00-7:00". Because the user usually rests at home during this time period, performing the codec classification at this time will not affect the user's experience of using the device.
- the electronic device 100 sends the shared category identifier and a default decoder identifier in each shared category to the audio playback device 200 .
- after the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200, the electronic device 100 sends the identifiers of the categories shared by the electronic device 100 and the audio playback device 200 (the identifier of the first category and the identifier of the second category, or only the identifier of the first category) and a default decoder identifier in each common category to the audio playback device 200.
- the audio playback device 200 receives the identifiers of the shared categories and a default decoder identifier in each shared category sent by the electronic device 100. After that, the electronic device 100 and the audio playback device 200 will use the categories shared by the electronic device 100 and the audio playback device 200 to transmit audio data.
- before the electronic device 100 negotiates the shared categories between the electronic device 100 and the audio playback device 200, the audio playback device 200 sends each category identifier and the corresponding decoder identifiers under each category to the electronic device 100. Then, after the electronic device 100 confirms the shared categories, the electronic device 100 only needs to send the categories shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200.
- after the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200, the audio playback device 200 confirms the default decoder identifier in the categories shared by the electronic device 100 and the audio playback device 200.
- the electronic device 100 only needs to send the category identifier shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200 .
- when the electronic device 100 negotiates the common categories, the audio playback device 200 sends all decoder identifiers and the value of one or more parameters corresponding to each decoder to the electronic device 100.
- the electronic device 100 divides all the encoders in the electronic device 100 and all the decoders in the audio playback device 200 into a plurality of categories according to the codec classification standard. After that, the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200.
- in addition to sending the categories shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200, the electronic device 100 also needs to send the decoder identifiers under the categories shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200.
- the electronic device 100 can identify the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200 .
- the electronic device 100 confirms the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200 .
- when the electronic device 100 sends the identifier of the category shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200, it also needs to send the default decoder identifier in the shared category to the audio playback device 200.
- the electronic device 100 confirms the categories shared by the electronic device 100 and the audio playback device 200.
- the audio playback device 200 confirms the default decoder identifier in the category shared by the electronic device 100 and the audio playback device 200 .
- when the electronic device 100 sends the category shared by the electronic device 100 and the audio playback device 200 to the audio playback device 200, it also needs to send the decoder identifiers under the shared category to the audio playback device 200.
- the electronic device 100 selects a default codec in the first category from the common categories to transmit audio data.
- the electronic device 100 acquires the first parameter information of the audio data, and when the first parameter information of the audio data satisfies the first condition, encodes the audio data into the first encoded audio data according to the first encoder in the first category, and sends the first encoded audio data to the audio playback device.
- the electronic device 100 can select, from the categories shared by the electronic device 100 and the audio playback device 200, the default codec in one category (e.g., the first category) according to the application type, the characteristics of the played audio (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, the audio stream format, and so on. The default encoder in the first category is the first encoder, and the default decoder in the first category is the first decoder.
- the electronic device 100 determines that the first parameter information is: the sampling rate is the first sampling rate, the quantization bit depth is the first quantization bit depth, the code rate is the first code rate, and the number of channels is the first number of channels.
- the electronic device 100 selects the default codec in the category whose sampling rate includes the first sampling rate, whose quantization bit depth includes the first quantization bit depth, whose code rate includes the first code rate, whose number of channels includes the first number of channels, and whose audio stream format includes PCM, for the transmission of audio data.
- if the electronic device 100 selects two or more categories from the categories shared by the electronic device 100 and the audio playback device 200 according to the application type, the characteristics of the played audio (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, the audio stream format, and so on, the electronic device 100 may select the category with the highest priority from the two or more categories as the first category, and use the default codec in the category with the highest priority for audio data transmission. It can be understood that the higher the sampling rate, code rate, and quantization bit depth specified in the codec classification standard, the better the sound quality of the codecs classified into the corresponding category; and the better the sound quality of a codec, the higher the priority of the category that the codec is in.
- the categories shared by the electronic device 100 and the audio playback device 200 include category one and category two. Suppose the electronic device 100 selects both category one and category two according to the application type, the characteristics of the played audio (sampling rate, quantization bit depth, number of channels), whether the audio rendering capability of the electronic device is enabled, the network conditions of the channel, the audio stream format, and so on. Since the codecs classified into category one are high-definition sound quality codecs and the codecs classified into category two are standard sound quality codecs, category one has a higher priority than category two. Therefore, the electronic device 100 will preferentially select the default codec in category one to transmit audio data.
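The priority-based selection step can be sketched as follows. This is an illustrative sketch under assumptions: categories are represented by numeric ids where a lower id means higher priority (category one outranks category two), and each category's criteria are modeled as a simple predicate; the dict/lambda shapes are not from the patent.

```python
def select_category(shared, audio, criteria):
    """shared: iterable of shared category ids.
    audio: dict describing the audio data's parameters.
    criteria: category id -> predicate over the audio dict.
    Among the categories whose criteria cover the audio, pick the one with
    the highest priority (lowest id in this sketch)."""
    matching = [c for c in shared if criteria[c](audio)]
    return min(matching) if matching else None

# Hypothetical criteria echoing the earlier example thresholds.
criteria = {
    1: lambda a: a["sample_rate_hz"] >= 48_000 and a["bit_depth"] >= 24,
    2: lambda a: True,  # standard quality accepts any stream in this sketch
}
```

For 48 kHz / 24-bit audio both categories match and category one wins on priority; for 44.1 kHz / 16-bit audio only category two matches.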
- the electronic device 100 sends the first category identifier to the audio playback device 200 .
- the audio playback device 200 receives the first category identifier sent by the electronic device 100, and the electronic device 100 and the audio playback device 200 use the default codec in the first category to transmit audio data.
- the electronic device 100 acquires audio data, and encodes the audio data by using an encoder corresponding to the first encoder identifier to obtain encoded audio data.
- the first encoder identifier is the default encoder identifier in the first category.
- the electronic device 100 sends the encoded audio data (the first encoded audio data) to the audio playback device 200 .
- the electronic device 100 may acquire the currently playing audio data by recording or other means, and then compress the acquired audio data and send it to the audio playback device 200 through a communication connection with the audio playback device 200 .
- the electronic device 100 collects the audio played by the electronic device 100 and compresses the audio by using an advanced audio coding (AAC) algorithm; the compressed audio data is then encapsulated into a transport stream (TS), the TS stream is encoded according to the real-time transport protocol (RTP), and the encoded data is sent to the audio playback device 200 through the Bluetooth channel connection.
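The last step of the pipeline above, wrapping the TS-multiplexed payload in an RTP packet, can be sketched with a minimal fixed header per RFC 3550. Payload type 33 is the standard RTP value for MPEG-2 transport streams; the SSRC and other field values here are illustrative assumptions, and real senders would also handle sequence/timestamp bookkeeping and the Bluetooth transport itself.

```python
import struct

def rtp_packet(payload, seq, timestamp, ssrc=0x1234, payload_type=33):
    """Build a minimal 12-byte RTP header (RFC 3550) followed by the payload."""
    v_p_x_cc = 2 << 6           # version 2, no padding, no extension, no CSRCs
    m_pt = payload_type & 0x7F  # marker bit clear, 7-bit payload type
    header = struct.pack("!BBHII", v_p_x_cc, m_pt, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc)
    return header + payload

# One 188-byte TS packet (sync byte 0x47) as the payload.
pkt = rtp_packet(b"\x47" + b"\x00" * 187, seq=1, timestamp=90_000)
```

The 90 kHz timestamp clock used here is the conventional clock rate for TS-over-RTP.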
- the audio playback device 200 receives the encoded audio data sent by the electronic device 100, and uses a decoder corresponding to the first decoder identifier to decode the encoded audio data to obtain audio data (first playback audio data).
- the first decoder identifier is the default decoder identifier in the first category.
- the audio playback device 200 uses a decoder corresponding to the first decoder identifier to decode the encoded audio data, obtains unencoded audio data, and plays the audio data.
- the electronic device 100 switches the first category to the second category.
- when the sampling rate and/or code rate and/or quantization bit depth and/or number of channels of the audio data of the electronic device 100 changes, the electronic device 100 will reselect a codec in another category (i.e., the second category) for transmitting the audio data, and the electronic device 100 informs the audio playback device 200 of the identification of that category. The electronic device 100 and the audio playback device 200 then use the codec in the second category to transmit audio data. This part has been described in detail in the foregoing embodiments and will not be repeated in this application.
- when the application type changes, the audio content changes, the audio rendering capability of the electronic device is turned on, the network conditions of the channel become worse, and so on, the sampling rate and/or code rate and/or quantization bit depth and/or number of channels of the audio data may change.
- the electronic device 100 switches the first category to the second category.
- when the electronic device 100 receives the user's selection of the high-quality sound mode, the electronic device 100 switches the first category to the second category, wherein the audio quality of the codec in the second category is higher than that in the first category.
- the sampling rate, code rate, and quantization bit depth of the codec in the second category are all greater than the sampling rate, code rate, and quantization bit depth of the codec in the first category.
- the electronic device 100 receives an operation of the user clicking the more control 608 , and in response to the user operation, the electronic device 100 will display a prompt box 900 as shown in FIG. 10B .
- the prompt box 900 includes a high-quality sound mode control 901 , a stable transmission mode control 902 , and an audio rendering mode enable control 903 .
- the high-quality sound mode control 901 in the prompt box 900 can receive the user's click operation, and in response to the user's click operation, the electronic device 100 switches the first category to the second category, wherein the audio quality of the codecs in the second category is higher than the audio quality of the codecs in the first category.
- when the electronic device 100 receives a user operation to enable the audio rendering capability, the rendering unit of the electronic device 100 increases the sampling rate of the played audio data from the first sampling rate to the second sampling rate, may increase the value of the quantization bit depth of the audio data from the first quantization bit depth to the second quantization bit depth, and may increase the value of the number of channels of the audio data from the first number of channels to the second number of channels.
- the second sampling rate is greater than the first sampling rate
- the second quantization bit depth is greater than the first quantization bit depth
- the second channel number is greater than the first channel number.
- the enable audio rendering mode control 903 in the prompt box 900 can receive the user's click operation, and in response to the user's click operation, the electronic device 100 switches the first category to the second category, wherein the sampling rate, code rate, quantization bit depth, and number of channels of the codec in the second category are all greater than those of the codec in the first category.
- the electronic device 100 may receive a user operation to switch the current audio data transmission mode to the stable transmission mode.
- the electronic device 100 switches the first category to the second category, wherein the code rate of the codec in the second category is lower than the code rate of the codec in the first category.
- the electronic device 100 may automatically switch to the stable transmission mode. This application is not limited here.
- the stable transmission mode control 902 in the prompt box 900 can receive the user's click operation, and in response to the user's click operation, the electronic device 100 switches the first category to the second category, where the codec in the second category has a lower code rate than the codec in the first category.
- the electronic device 100 acquires the second parameter information of the audio data, and when the second parameter information of the audio data satisfies the second condition, the electronic device 100 switches the first category to the second category, encodes the audio data into the second encoded audio data according to the second encoder in the second category, and sends the second encoded audio data to the audio playback device.
- the electronic device 100 sends the identification of the second category to the audio playback device 200 .
- the second parameter information is: the sampling rate of the audio data played by the electronic device 100 changes from the first sampling rate to the second sampling rate. The electronic device 100 then selects, from the categories shared by the electronic device 100 and the audio playback device 200, the default codec in the category whose sampling rate includes the second sampling rate, whose quantization bit depth includes the first quantization bit depth, whose code rate includes the first code rate, whose number of channels includes the first number of channels, and whose audio stream format includes PCM, for the transmission of the audio data.
- the parameter types of the second parameter information are the same as the parameter types in the parameter information of the second decoder. The first parameter information satisfies the first condition, and the second parameter information satisfies the second condition, specifically including: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or, the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or, the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or, the number of channels in the first parameter information is greater than or equal to the target number of channels, and the number of channels in the second parameter information is less than the target number of channels.
- the electronic device 100 sends the identifier of the second category to the audio playback device 200 .
- the audio playback device 200 receives the second category identifier sent by the electronic device 100 .
- the electronic device 100 and the audio playback device 200 will transmit audio data through the default codec in the second category.
- the electronic device 100 collects audio data and encodes the audio data by using the encoder corresponding to the second encoder identifier to obtain encoded audio data (the second encoded audio data). The electronic device 100 sends the encoded audio data to the audio playback device 200, and the audio playback device 200 uses the decoder corresponding to the second decoder identifier to decode the audio data to obtain unencoded audio data (the second playback audio data), and plays the unencoded audio data (the second playback audio data).
- the second encoder identifier is the default encoder identifier in the second category; the second decoder identifier is the default decoder identifier in the second category.
- the audio data exchanged while the electronic device 100 and the audio playback device 200 switch codecs is transitioned smoothly to improve the user experience.
- the electronic device encodes the first audio frame in the audio data into the first encoded audio frame through the first encoder and sends it to the audio playback device; it also encodes the same first audio frame into the second encoded audio frame through the second encoder and sends it to the audio playback device, and encodes the subsequent audio frames in the audio data through the second encoder up to the Nth encoded audio frame, which it likewise sends to the audio playback device. The audio playback device decodes the first encoded audio frame into the first decoded audio frame through the first decoder, decodes the second encoded audio frame into the second decoded audio frame through the second decoder, and smooths the two decoded frames to obtain the first playback audio frame.
- the audio playback device 200 first plays the first playback audio frame, and then plays the Nth playback audio frame.
- the switch between the first encoder and the second encoder needs to be completed within one frame, namely the first audio frame. The audio playback device smooths the first audio frame before playing it, which prevents stuttering during the codec switch and achieves a smooth transition.
- the audio frames adjacent to and after the first audio frame, such as the second audio frame, need no smoothing; they are decoded directly by the second decoder and played.
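The smoothing of the overlapping frame follows the crossfade Pcm = wi*pcmA + (1-wi)*pcmB with 0 < wi < 1 given in the claims. A minimal sketch; the per-sample ramp chosen for wi is an assumption, since the patent only constrains wi to lie strictly between 0 and 1:

```python
def smooth_frames(pcm_a, pcm_b):
    """Crossfade two decoded versions of the same audio frame, sample by
    sample: pcm_a comes from the old (first) decoder, pcm_b from the new
    (second) decoder. wi ramps down so playback fades from a into b."""
    n = len(pcm_a)
    out = []
    for i in range(n):
        wi = (n - i) / (n + 1)  # strictly inside (0, 1) for every sample
        out.append(wi * pcm_a[i] + (1 - wi) * pcm_b[i])
    return out
```

Because wi never reaches 0 or 1, both decoders contribute to every output sample, which is what masks the boundary between codecs.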
- the remaining audio frames are encoded through the second encoder to obtain the Nth encoded audio frame; the third through (D+2)th encoded audio frames, the (D+3)th encoded audio frame, and the Nth encoded audio frame are sent to the audio playback device;
- the first decoder decodes the third through (D+2)th encoded audio frames into the second through (D+1)th playback audio frames, and the second decoder decodes the (D+3)th encoded audio frame into the third decoded audio frame. The second through Dth playback audio frames are played; the (D+1)th playback audio frame and the third decoded audio frame are smoothed to obtain the target playback audio frame, which is then played; the second decoder decodes the Nth encoded audio frame into the Nth decoded audio frame, which is played.
- when the delays of the first encoder and the second encoder differ, the switch between them needs to be completed over multiple frames (D frames), so that during the switch the audio data encoded by the first encoder and the audio data encoded by the second encoder both arrive at the audio playback device and are decoded. When the switch spans D frames, the audio playback device directly decodes and plays the first through (D-1)th audio frames; smoothing at the boundary prevents a freeze when the codec is switched and achieves a smooth transition. The audio frames after the Dth audio frame, such as the Nth audio frame, need no smoothing; they are decoded by the second decoder and played directly.
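The frame count D spanned by the switch follows the formula stated in the claims. A minimal sketch; the function name is illustrative, and the delays and frame length must share one time unit (e.g. milliseconds):

```python
def switch_frame_count(delay_one, delay_two, frame_len):
    """D = floor((max(d1, d2) + (frame_len - d1 % frame_len) + frame_len - 1)
    / frame_len), where d1 and d2 are the total delays of encoder one and
    encoder two, and frame_len is the duration of one audio frame."""
    return (max(delay_one, delay_two)
            + (frame_len - delay_one % frame_len)
            + frame_len - 1) // frame_len
```

The extra (frame_len - 1) term makes the integer division round up any partial frame, so the switch window always covers both encoders' delays.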
- S912-S913 can also be replaced by: when the second parameter information satisfies the second condition, the electronic device encodes the audio data into the third encoded audio data through the first encoder in the first category and sends it to the audio playback device; the audio playback device is further configured to decode the third encoded audio data into the third playback audio data through the first decoder in the first category.
- when the electronic device and the audio playback device support only one codec category (the first category), if the parameter information of the audio data changes from the first parameter information to the second parameter information and the second parameter information satisfies the second condition, the electronic device cannot switch the codec; it still transmits audio data to the audio playback device using the default codec in the first category.
Claims (35)
- A codec negotiation and switching system, characterized in that the system comprises an electronic device and an audio playback device, wherein: the electronic device is configured to: when first parameter information of audio data satisfies a first condition, encode the audio data into first encoded audio data according to a first encoder in a first category, and send the first encoded audio data to the audio playback device; wherein the first category is a codec category that the electronic device, before acquiring the audio data, determines to be shared by the electronic device and the audio playback device; and send an identifier of the first category to the audio playback device; the audio playback device is configured to: receive the identifier of the first category sent by the electronic device; and decode the first encoded audio data into first playback audio data through a first decoder in the first category; the electronic device is further configured to: when second parameter information of the audio data satisfies a second condition, encode the audio data into second encoded audio data according to a second encoder in a second category, and send the second encoded audio data to the audio playback device; wherein the second category is a codec category that the electronic device, before acquiring the audio data, determines to be shared by the electronic device and the audio playback device; and send an identifier of the second category to the audio playback device; the audio playback device is further configured to: receive the identifier of the second category sent by the electronic device; and decode the second encoded audio data into second playback audio data through a second decoder in the second category; wherein the first condition is different from the second condition, and the first category is different from the second category.
- The system according to claim 1, wherein the encoders in the first category include at least the first encoder, and the encoders in the second category include at least the second encoder.
- The system according to claim 2, wherein the electronic device is further configured to: receive the identifier of the first category and the identifier of the second category sent by the audio playback device; wherein the decoders in the first category include at least the first decoder, and the decoders in the second category include at least the second decoder.
- The system according to claim 3, wherein the electronic device is further configured to: determine that the categories shared by the electronic device and the audio playback device are the first category and the second category; and send the identifier of the first category and the identifier of the second category to the audio playback device; the audio playback device is further configured to: receive the identifier of the first category and the identifier of the second category sent by the electronic device.
- The system according to claim 4, wherein the encoders in the first category include only the first encoder, and the decoders in the first category include only the first decoder; after the electronic device determines that the categories shared by the electronic device and the audio playback device are the first category and the second category, the electronic device is further configured to: when the first parameter information satisfies the first condition, encode the audio data into the first encoded audio data through the first encoder in the first category, and send the first encoded audio data to the audio playback device; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category.
- The system according to claim 5, wherein the encoders in the first category further include a third encoder, and the decoders in the first category further include a third decoder; after the electronic device determines that the categories shared by the electronic device and the audio playback device are the first category and the second category, the electronic device is further configured to: when the first parameter information satisfies the first condition, encode the audio data into the first encoded audio data through the first encoder in the first category, and send the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category; wherein the power consumption of the first decoder is lower than that of the second decoder, or the priority or power of the first decoder is higher than that of the second decoder.
- The system according to claim 1, wherein, when the encoder categories shared by the electronic device and the audio playback device include only the first category, the electronic device is further configured to: when the second parameter information satisfies the second condition, encode the audio data into third encoded audio data through the first encoder in the first category, and send the third encoded audio data to the audio playback device; the audio playback device is further configured to decode the third encoded audio data into third playback audio data through the first decoder in the first category.
- The system according to claim 7, wherein that the encoder categories shared by the electronic device and the audio playback device include only the first category comprises: the electronic device has not received the identifier of the second category sent by the audio playback device, or the number of encoders that the electronic device classifies into the second category is 0.
- The system according to claim 7, wherein the encoders in the first category include only the first encoder, and the decoders in the first category include only the first decoder; when the first parameter information satisfies the first condition, the electronic device is further configured to: encode the audio data into the first encoded audio data according to the first encoder in the first category, and send the first encoded audio data to the audio playback device; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category.
- The system according to claim 7, wherein the encoders in the first category further include a third encoder, and the decoders in the first category further include a third decoder; when the first parameter information satisfies the first condition, the electronic device is further configured to: encode the audio data into the first encoded audio data according to the first encoder in the first category, and send the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder; the audio playback device is further configured to decode the first encoded audio data into the first playback audio data through the first decoder in the first category; wherein the power consumption of the first decoder is lower than that of the third decoder, or the priority or power of the first decoder is higher than that of the third decoder.
- The system according to any one of claims 2 to 10, wherein the codecs in the first category are high-definition audio codecs and the codecs in the second category are standard audio codecs; or the codecs in the first category are standard audio codecs and the codecs in the second category are high-definition audio codecs.
- The system according to claim 1, wherein, before the electronic device acquires the audio data, the electronic device is further configured to: classify the first encoder into the first category based on parameter information of the first encoder and a codec classification standard, and classify the second encoder into the second category based on parameter information of the second encoder and the codec classification standard; wherein the parameter information of the first encoder and the parameter information of the second encoder include one or more of a sampling rate, a code rate, a quantization bit depth, a number of channels, and an audio stream format; the audio playback device is further configured to: classify the first decoder into the first category based on parameter information of the first decoder and the codec classification standard, and classify the second decoder into the second category based on parameter information of the second decoder and the codec classification standard; wherein the parameter information of the first decoder and the parameter information of the second decoder include one or more of a sampling rate, a code rate, a quantization bit depth, a number of channels, and an audio stream format; wherein the codec classification standard includes a mapping relationship between codec categories and codec parameter information.
- The system according to claim 12, wherein the sampling rate of the codecs in the first category is greater than or equal to a target sampling rate, and the sampling rate of the codecs in the second category is less than the target sampling rate; and/or the code rate of the codecs in the first category is greater than or equal to a target code rate, and the code rate of the codecs in the second category is less than the target code rate; and/or the number of channels of the codecs in the first category is greater than or equal to a target number of channels, and the number of channels of the codecs in the second category is less than the target number of channels; and/or the quantization bit depth of the codecs in the first category is greater than or equal to a target quantization bit depth, and the quantization bit depth of the codecs in the second category is less than the target quantization bit depth; and/or the audio stream format of the codecs in the first category is a target audio stream format, and the audio stream format of the codecs in the second category is the target audio stream format.
- The system according to claim 13, wherein the parameter types in the first parameter information, the parameter types in the parameter information of the first encoder, the parameter types in the parameter information of the first decoder, the parameter types in the second parameter information, the parameter types in the parameter information of the second encoder, and the parameter types in the parameter information of the second decoder are the same; that the first parameter information satisfies the first condition and the second parameter information satisfies the second condition specifically includes: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or the number of channels in the first parameter information is greater than or equal to the target number of channels, and the number of channels in the second parameter information is less than the target number of channels; and/or the audio stream format in the first parameter information is the target audio stream format, and the audio stream format in the second parameter information is the target audio stream format.
- The system according to claim 1, wherein, when the delays of the first encoder and the second encoder are the same, the electronic device is further configured to: encode a first audio frame in the audio data into a first encoded audio frame through the first encoder, and send the first encoded audio frame to the audio playback device; and encode the first audio frame in the audio data into a second encoded audio frame through the second encoder, and send the second encoded audio frame to the audio playback device; the audio playback device is further configured to: decode the first encoded audio frame into a first decoded audio frame through the first decoder, and decode the second encoded audio frame into a second decoded audio frame through the second decoder; and smooth the first decoded audio frame and the second decoded audio frame to obtain a first playback audio frame.
- The system according to claim 1, wherein, when the delays of the first encoder and the second encoder are different, the electronic device is further configured to: obtain D audio data frames through the formula D = floor((max(total delay of encoder one, total delay of encoder two) + (frame length - total delay of encoder one % frame length) + frame length - 1) / frame length); wherein D denotes the total number of audio data frames during the switch between the first encoder and the second encoder, max denotes the maximum-value operation, % denotes the remainder operation, and the frame length denotes the duration of one frame of audio data; encode the first through Dth audio frames in the audio data through the first encoder to obtain the third through (D+2)th encoded audio frames; encode the Dth audio frame in the audio data through the second encoder to obtain the (D+3)th encoded audio frame; and send the third through (D+2)th encoded audio frames and the (D+3)th encoded audio frame to the audio playback device; the audio playback device is further configured to: decode the third through (D+2)th encoded audio frames into the second through (D+1)th playback audio frames through the first decoder, and decode the (D+3)th encoded audio frame into a third decoded audio frame through the second decoder; play the second through Dth playback audio frames; and smooth the (D+1)th playback audio frame and the third decoded audio frame to obtain a target playback audio frame.
- The system according to claim 15, wherein the audio playback device is further configured to: obtain the first playback audio frame through the formula Pcm = wi*pcmA + (1-wi)*pcmB; wherein Pcm is the first playback audio frame, wi is a smoothing coefficient, wi is greater than 0 and less than 1, pcmA is the first decoded audio frame, and pcmB is the second decoded audio frame.
- The system according to claim 16, wherein the audio playback device is further configured to: obtain the target playback audio frame through the formula Pcm = wi*pcmA + (1-wi)*pcmB; wherein Pcm is the target playback audio frame, wi is a smoothing coefficient, wi is greater than 0 and less than 1, pcmA is the (D+1)th playback audio frame, and pcmB is the third decoded audio frame.
- A codec negotiation and switching method, characterized in that the method comprises: when first parameter information of audio data satisfies a first condition, encoding, by the electronic device, the audio data into first encoded audio data according to a first encoder in a first category, and sending the first encoded audio data to the audio playback device; wherein the first category is a codec category that the electronic device, before acquiring the audio data, determines to be shared by the electronic device and the audio playback device; and when second parameter information satisfies a second condition, encoding, by the electronic device, the audio data into second encoded audio data according to a second encoder in a second category, and sending the second encoded audio data to the audio playback device; wherein the second category is a codec category that the electronic device, before acquiring the audio data, determines to be shared by the electronic device and the audio playback device; wherein the first condition is different from the second condition, and the first category is different from the second category.
- The method according to claim 19, wherein the encoders in the first category include at least the first encoder, and the encoders in the second category include at least the second encoder.
- The method according to claim 20, further comprising: receiving, by the electronic device, the identifier of the first category and the identifier of the second category sent by the audio playback device; wherein the decoders in the first category include at least the first decoder, and the decoders in the second category include at least the second decoder.
- The method according to claim 21, further comprising: determining, by the electronic device, that the categories shared by the electronic device and the audio playback device are the first category and the second category; and sending, by the electronic device, the identifier of the first category and the identifier of the second category to the audio playback device.
- The method according to claim 22, wherein the encoders in the first category include only the first encoder; after the electronic device determines that the categories shared by the electronic device and the audio playback device are the first category and the second category, the method further comprises: when the first parameter information satisfies the first condition, encoding, by the electronic device, the audio data into the first encoded audio data through the first encoder in the first category, and sending the first encoded audio data to the audio playback device.
- The method according to claim 23, wherein the encoders in the first category further include a third encoder; after the electronic device determines that the categories shared by the electronic device and the audio playback device are the first category and the second category, the method further comprises: when the first parameter information satisfies the first condition, encoding, by the electronic device, the audio data into the first encoded audio data through the first encoder in the first category, and sending the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder.
- The method according to claim 19, wherein, when the encoder categories shared by the electronic device and the audio playback device include only the first category, the method further comprises: when the second parameter information satisfies the second condition, encoding, by the electronic device, the audio data into third encoded audio data through the first encoder in the first category, and sending the third encoded audio data to the audio playback device.
- The method according to claim 25, wherein that the encoder categories shared by the electronic device and the audio playback device include only the first category comprises: the electronic device has not received the identifier of the second category sent by the audio playback device; or the number of encoders that the electronic device classifies into the second category is 0.
- The method according to claim 25, wherein the encoders in the first category include only the first encoder, and the decoders in the first category include only the first decoder; when the first parameter information satisfies the first condition, the electronic device encodes the audio data into the first encoded audio data according to the first encoder in the first category, and sends the first encoded audio data to the audio playback device.
- The method according to claim 25, wherein the encoders in the first category further include a third encoder, and the decoders in the first category further include a third decoder; when the first parameter information satisfies the first condition, the electronic device encodes the audio data into the first encoded audio data according to the first encoder in the first category, and sends the first encoded audio data to the audio playback device; wherein the power consumption of the first encoder is lower than that of the third encoder, or the priority or power of the first encoder is higher than that of the third encoder.
- The method according to any one of claims 20 to 28, wherein the codecs in the first category are high-definition audio codecs and the codecs in the second category are standard audio codecs; or the codecs in the first category are standard audio codecs and the codecs in the second category are high-definition audio codecs.
- The method according to claim 10, wherein, before the electronic device acquires the audio data, the method further comprises: classifying, by the electronic device, the first encoder into the first category based on parameter information of the first encoder and a codec classification standard, and classifying the second encoder into the second category based on parameter information of the second encoder and the codec classification standard; wherein the parameter information of the first encoder and the parameter information of the second encoder include one or more of a sampling rate, a code rate, a quantization bit depth, a number of channels, and an audio stream format; the codec classification standard includes a mapping relationship between codec categories and codec parameter information.
- The method according to claim 30, wherein the sampling rate of the codecs in the first category is greater than or equal to a target sampling rate, and the sampling rate of the codecs in the second category is less than the target sampling rate; and/or the code rate of the codecs in the first category is greater than or equal to a target code rate, and the code rate of the codecs in the second category is less than the target code rate; and/or the number of channels of the codecs in the first category is greater than or equal to a target number of channels, and the number of channels of the codecs in the second category is less than the target number of channels; and/or the quantization bit depth of the codecs in the first category is greater than or equal to a target quantization bit depth, and the quantization bit depth of the codecs in the second category is less than the target quantization bit depth; and/or the audio stream format of the codecs in the first category is a target audio stream format, and the audio stream format of the codecs in the second category is the target audio stream format.
- The method according to claim 31, wherein the parameter types in the first parameter information, the parameter types in the parameter information of the first encoder, the parameter types in the parameter information of the first decoder, the parameter types in the second parameter information, the parameter types in the parameter information of the second encoder, and the parameter types in the parameter information of the second decoder are the same; that the first parameter information satisfies the first condition and the second parameter information satisfies the second condition specifically includes: the sampling rate in the first parameter information is greater than or equal to the target sampling rate, and the sampling rate in the second parameter information is less than the target sampling rate; and/or the code rate in the first parameter information is greater than or equal to the target code rate, and the code rate in the second parameter information is less than the target code rate; and/or the quantization bit depth in the first parameter information is greater than or equal to the target quantization bit depth, and the quantization bit depth in the second parameter information is less than the target quantization bit depth; and/or the number of channels in the first parameter information is greater than or equal to the target number of channels, and the number of channels in the second parameter information is less than the target number of channels; and/or the audio stream format in the first parameter information is the target audio stream format, and the audio stream format in the second parameter information is the target audio stream format.
- An electronic device, characterized by comprising one or more processors, one or more memories, and one or more encoders; the one or more memories and the one or more encoders are coupled to the one or more processors, the one or more memories are configured to store computer program code, the computer program code comprises computer instructions, and the one or more processors invoke the computer instructions to cause the electronic device to perform the method according to any one of claims 19 to 32.
- A computer-readable storage medium, characterized in that the computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions, when invoked by a computer, are used to perform the method according to any one of claims 19 to 32.
- A computer program product comprising instructions, characterized in that, when the computer program product runs on a computer, the computer is caused to perform the method according to any one of claims 19 to 32.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023564200A JP2024515684A (ja) | 2021-04-20 | 2022-03-29 | コーデックネゴシエーションおよび切替方法 |
EP22790822.5A EP4318467A1 (en) | 2021-04-20 | 2022-03-29 | Codec negotiation and switching method |
US18/489,217 US20240045643A1 (en) | 2021-04-20 | 2023-10-18 | Codec negotiation and switching method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110423987.8A CN115223579A (zh) | 2021-04-20 | 2021-04-20 | 一种编解码器协商与切换方法 |
CN202110423987.8 | 2021-04-20 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/489,217 Continuation US20240045643A1 (en) | 2021-04-20 | 2023-10-18 | Codec negotiation and switching method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022222713A1 true WO2022222713A1 (zh) | 2022-10-27 |
Family
ID=83604709
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/083816 WO2022222713A1 (zh) | 2021-04-20 | 2022-03-29 | 一种编解码器协商与切换方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20240045643A1 (zh) |
EP (1) | EP4318467A1 (zh) |
JP (1) | JP2024515684A (zh) |
CN (1) | CN115223579A (zh) |
WO (1) | WO2022222713A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116261008A (zh) * | 2022-12-14 | 2023-06-13 | 海信视像科技股份有限公司 | 音频处理方法和音频处理装置 |
CN116580716B (zh) * | 2023-07-12 | 2023-10-27 | 腾讯科技(深圳)有限公司 | 音频编码方法、装置、存储介质及计算机设备 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1300508A (zh) * | 1999-04-06 | 2001-06-20 | 阿尔卡塔尔公司 | 在话音信道上传输数据的方法和设备 |
US20070255433A1 (en) * | 2006-04-25 | 2007-11-01 | Choo Eugene K | Method and system for automatically selecting digital audio format based on sink device |
CN103477388A (zh) * | 2011-10-28 | 2013-12-25 | 松下电器产业株式会社 | 声音信号混合解码器、声音信号混合编码器、声音信号解码方法及声音信号编码方法 |
CN104509119A (zh) * | 2012-04-24 | 2015-04-08 | Vid拓展公司 | 用于mpeg/3gpp-dash中平滑流切换的方法和装置 |
CN107404339A (zh) * | 2017-08-14 | 2017-11-28 | 青岛海信电器股份有限公司 | 一种调节蓝牙a2dp编码设置的方法和装置 |
WO2020239985A1 (en) * | 2019-05-31 | 2020-12-03 | Tap Sound System | Method for operating a bluetooth device |
WO2021018739A1 (en) * | 2019-07-26 | 2021-02-04 | Tap Sound System | Method for managing a plurality of multimedia communication links in a point-to-multipoint bluetooth network |
- 2021-04-20: CN application CN202110423987.8A filed (published as CN115223579A, pending)
- 2022-03-29: JP application 2023564200 filed (published as JP2024515684A, pending)
- 2022-03-29: EP application 22790822.5A filed (published as EP4318467A1, pending)
- 2022-03-29: PCT application PCT/CN2022/083816 filed (published as WO2022222713A1)
- 2023-10-18: US application 18/489,217 filed (published as US20240045643A1, pending)
Also Published As
Publication number | Publication date |
---|---|
CN115223579A (zh) | 2022-10-21 |
EP4318467A1 (en) | 2024-02-07 |
US20240045643A1 (en) | 2024-02-08 |
JP2024515684A (ja) | 2024-04-10 |
Legal Events
- 121 (EP): the EPO has been informed by WIPO that EP was designated in this application (ref document 22790822, kind code A1)
- WWE: WIPO information, entry into national phase (ref document 2023564200, country JP)
- WWE: WIPO information, entry into national phase (ref document 2022790822, country EP)
- ENP: entry into the national phase (ref document 2022790822, country EP, effective date 2023-10-30)
- NENP: non-entry into the national phase (country DE)