WO2021233398A1 - Wireless audio system, wireless communication method and device - Google Patents

Wireless audio system, wireless communication method and device

Info

Publication number
WO2021233398A1
WO2021233398A1 (application PCT/CN2021/094998)
Authority
WO
WIPO (PCT)
Prior art keywords
sound effect
effect processing
audio
connection
audio data
Prior art date
Application number
PCT/CN2021/094998
Other languages
English (en)
French (fr)
Inventor
郭育锋 (Guo Yufeng)
秦伟 (Qin Wei)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP21808553.8A (published as EP4148734A4)
Priority to US17/926,799 (published as US20230209624A1)
Publication of WO2021233398A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W76/00 Connection management
    • H04W76/10 Connection setup
    • H04W76/14 Direct-mode setup
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W76/00 Connection management
    • H04W76/10 Connection setup
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1041 Mechanical or electronic switches, or control elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00 Monitoring arrangements; Testing arrangements
    • H04R29/001 Monitoring arrangements; Testing arrangements for loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07 Applications of wireless loudspeakers or wireless microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01 Aspects of volume control, not necessarily automatic, in sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/03 Synergistic effects of band splitting and sub-band processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2460/00 Details of hearing devices, i.e. of ear- or headphones covered by H04R1/10 or H04R5/033 but not provided for in any of their subgroups, or of hearing aids covered by H04R25/00 but not provided for in any of its subgroups
    • H04R2460/01 Hearing devices using active noise cancellation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/02 Circuits for transducers, loudspeakers or microphones for preventing acoustic reaction, i.e. acoustic oscillatory feedback
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W84/00 Network topologies
    • H04W84/18 Self-organising networks, e.g. ad-hoc networks or sensor networks

Definitions

  • The embodiments of this application relate to the field of wireless technologies, and in particular to wireless audio systems, wireless communication methods, and devices.
  • Bluetooth (Bluetooth) wireless technology is a short-distance communication system intended to replace cable connections between portable and/or fixed electronic devices.
  • Bluetooth headsets based on Bluetooth wireless communication technology are favored by consumers for their cable-free connection and good sound quality.
  • Users commonly connect devices over Bluetooth to play audio data between them: for example, a mobile phone with a Bluetooth headset to listen to music or watch videos, a mobile phone with a vehicle-mounted device to listen to audio, a mobile phone with a Bluetooth speaker to listen to music or watch videos, and so on.
  • the audio stream may pass through the master device (such as a mobile phone or computer) and a slave device (such as a Bluetooth headset or Bluetooth speaker).
  • the sound effect algorithm of the master device and the sound effect algorithm of the slave device are generally two independent sets of sound effect algorithms.
  • the two independent sound effect algorithms may cause sound effect conflicts.
  • for example, each of the two algorithms may reduce the gain of the audio signal at 1 kHz by 2 dB.
  • the superposition of the effects of the two sound effect algorithms may cause the sound effect of the final audio stream to be worse than the sound effect produced when the sound effect algorithm of the master device or of the slave device is used alone.
  • the technical solutions of the present application provide a wireless audio system, a wireless communication method, and a device, which can realize sound effect processing negotiation between master and slave devices, make full use of the advantages of sound effect processing on both the master device side and the slave device side, and further improve the sound effect to meet the user's pursuit of higher audio quality.
  • an embodiment of the present application provides a wireless communication method, which is applied to a wireless communication system.
  • the wireless communication system includes a first device and a second device.
  • the first device may be an electronic device such as a mobile phone or a tablet computer
  • the second device may be an audio output device such as a wireless headset or a speaker, such as a TWS Bluetooth headset.
  • the first device may be the master device in the wireless audio system
  • the second device may be the slave device in the wireless audio system.
  • the first device and the second device can establish a wireless communication connection.
  • the first device may send a first request to the second device to query the sound effect processing capability of the second device.
  • the second device may send a first response to the first device, and the first response carries the first indication information.
  • the first indication information may be used to indicate the sound effect processing capability of the second device.
  • the first device may determine whether the first device and the second device support joint sound effect processing according to the indication information fed back by the second device.
  • the first device may determine whether the first device has a first sound effect processing algorithm, where the first sound effect processing algorithm is a sound effect processing algorithm adapted to the second sound effect processing algorithm used by the second device, and the second sound effect processing algorithm is determined based on the first indication information. If the first device has the first sound effect processing algorithm, then an audio connection can be established between the first device and the second device, the first device uses the first sound effect processing algorithm to process the first audio data to obtain second audio data, and the first device transmits the second audio data to the second device through the audio connection. The second device may use the second sound effect processing algorithm to process the second audio data to obtain third audio data, and the second device plays the third audio data.
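  • As an illustration only, the negotiation flow described above might be sketched as follows; the message format and the helper names (handle_request, lookup_peer_algorithm, find_adapted_algorithm, establish_audio_connection) are hypothetical placeholders, not signaling defined by this application.

```python
# Illustrative sketch only: message and helper names are hypothetical.
def negotiate_and_play(first_device, second_device, first_audio_data):
    # First request: query the sound effect processing capability.
    first_response = second_device.handle_request({"type": "QUERY_SOUND_EFFECT_CAPABILITY"})
    indication = first_response["indication"]  # e.g. manufacturer, product model

    # Determine algorithm I (second-device side) from the indication, then look
    # for an adapted algorithm II among the first device's algorithm sets.
    algorithm_1 = first_device.lookup_peer_algorithm(indication)
    algorithm_2 = first_device.find_adapted_algorithm(algorithm_1)

    if algorithm_2 is not None:
        connection = first_device.establish_audio_connection(second_device)
        second_audio = algorithm_2.process(first_audio_data)  # first-device processing
        connection.send(second_audio)
        # On the second device: algorithm I produces the third audio data.
        third_audio = algorithm_1.process(second_audio)
        second_device.play(third_audio)
```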
  • the above-mentioned wireless communication connection can be a logical link control and adaptation protocol L2CAP connection, or a radio frequency communication RFCOMM connection.
  • the first indication information may include one or more of the following device parameters: manufacturer and product model of the second device.
  • the sound effect processing capability may include one or more of the following: noise reduction capability and echo cancellation capability
  • the second sound effect processing algorithm may include one or more of the following: noise reduction algorithm and echo cancellation algorithm.
  • implementing the method provided in the first aspect can realize sound effect processing negotiation between the master and slave devices, make full use of the advantages of sound effect processing on both sides, further enhance the sound effect, and meet the user's pursuit of higher audio quality.
  • Case 1: Joint sound effect processing is not supported between the first and second devices, and the second device does not support sound effect processing.
  • the first device can determine that the second device does not have the sound effect processing capability according to device parameters such as the manufacturer and product model fed back by the second device
  • the first device can determine that the negotiated sound effect processing situation is Case 1.
  • Whether the second device has the sound effect processing capability can be determined according to the device parameters such as the manufacturer and product model fed back by the second device.
  • the first device can query, locally or in the cloud according to device parameters such as the manufacturer and product model of the second device (such as a headset or speaker), whether the second device has sound effect processing capabilities such as noise reduction and echo cancellation, and, if the second device has such capabilities, which sound effect processing algorithm it uses.
  • Case 2: Joint sound effect processing is not supported between the first and second devices, and the second device supports sound effect processing.
  • if the first device determines, according to device parameters such as the manufacturer and product model fed back by the second device, that the second device has the sound effect processing capability, but the first device cannot obtain, from its multiple sets of sound effect processing algorithms, a sound effect processing algorithm II adapted to the sound effect processing algorithm I used on the second device side, the first device may determine that the negotiated sound effect processing situation is Case 2.
  • Case 3: The first and second devices support joint sound effect processing, and the second device supports sound effect processing.
  • if the first device determines, according to device parameters such as the manufacturer and product model fed back by the second device, that the second device supports sound effect processing, determines the sound effect processing algorithm I used by the second device, and can obtain, from its multiple sets of sound effect processing algorithms, a sound effect processing algorithm II adapted to the sound effect processing algorithm I, the first device can determine that the negotiated sound effect processing situation is Case 3.
  • the sound effect processing algorithm II can be used for the sound effect processing of the first device in the joint sound effect processing.
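  • For clarity, the three negotiated cases might be summarized by a decision helper like the following sketch; the boolean inputs are assumed to have been derived from the first indication information.

```python
# Minimal sketch of the three-way case decision described above.
def classify_negotiation(second_has_capability: bool,
                         adapted_algorithm_found: bool) -> str:
    if not second_has_capability:
        return "Case 1"  # no joint processing; second device cannot process sound effects
    if not adapted_algorithm_found:
        return "Case 2"  # second device processes alone; no adapted algorithm II found
    return "Case 3"      # joint sound effect processing with algorithms I and II
```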
  • the first device may determine, in the following ways, whether it has a sound effect processing algorithm II that is adapted to the sound effect processing algorithm I on the second device side.
  • Method 1: Select the adapted algorithm by performing sound effect processing on a test signal and comparing the processing results.
  • the first device may select a set of sound effect processing algorithms from its multiple sets of sound effect processing algorithms, and process test audio data successively with the selected set and the sound effect processing algorithm on the second device side. If one or more of the following conditions are met: the signal-to-noise ratio measured on the processed test audio data is better than both the first signal-to-noise ratio and the second signal-to-noise ratio, or the echo component measured on the processed test audio data is less than both the first echo component and the second echo component, it is determined that the first device has a sound effect processing algorithm II adapted to the sound effect processing algorithm I.
  • the first device can further select, from the sets of sound effect processing algorithms that satisfy one or more of the foregoing conditions, the set with the best sound effect processing effect (for example, the best signal-to-noise ratio and the least echo component) as the algorithm adapted to the sound effect processing algorithm I on the second device side.
  • the first signal-to-noise ratio and the first echo component may be, respectively, the signal-to-noise ratio and echo component measured after the test audio data is processed using only the sound effect processing algorithm I on the second device side; the second signal-to-noise ratio and the second echo component may be, respectively, the signal-to-noise ratio and echo component measured after the test audio data is processed using only the selected set of sound effect processing algorithms of the first device.
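  • A minimal sketch of Method 1 follows, assuming a hypothetical measure_snr helper that returns the signal-to-noise ratio of processed audio; only the signal-to-noise condition is shown, and an echo-component check would be analogous.

```python
# Sketch of Method 1 under the assumptions stated above.
def select_adapted_algorithm(candidate_sets, algorithm_1, test_audio):
    snr_1 = measure_snr(algorithm_1.process(test_audio))       # first signal-to-noise ratio
    best_set, best_snr = None, None
    for candidate in candidate_sets:
        snr_2 = measure_snr(candidate.process(test_audio))     # second signal-to-noise ratio
        joint = algorithm_1.process(candidate.process(test_audio))
        snr_joint = measure_snr(joint)
        # Adapted only if the joint result beats both single-sided baselines.
        if snr_joint > snr_1 and snr_joint > snr_2:
            if best_snr is None or snr_joint > best_snr:
                best_set, best_snr = candidate, snr_joint      # keep the best set
    return best_set  # None: no adapted algorithm II exists (Case 2)
```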
  • the multiple sets of sound effect processing algorithms of the first device may be stored locally in the first device, or may be stored in a cloud server and accessed by the first device.
  • Method 2: Select the adapted algorithm by looking it up in a mapping table. The first device may store or have access to a mapping table.
  • the mapping table can record the correspondence between the device parameters of multiple devices (such as manufacturer, product model, and other device parameters) and multiple sets of sound effect processing algorithms.
  • in the mapping table, the sound effect processing algorithm corresponding to the device parameters of a certain second device refers to the algorithm, among the multiple sets of sound effect processing algorithms of the first device, that is adapted to the sound effect processing algorithm used by that second device.
  • a set of sound effect processing algorithms may include multiple sound effect processing algorithms, such as noise reduction algorithms, echo suppression algorithms, and so on. Of course, a set of sound effect processing algorithms may also include only one sound effect processing algorithm, such as a noise reduction algorithm, which is not limited in this application.
  • the mapping table may be written into the memory of the first device before it leaves the factory, or downloaded by the first device from a server, or shared by other devices. This application does not limit the source of the mapping table.
  • the first device may determine whether the mapping table contains a sound effect processing algorithm corresponding to the device parameters of the second device; if so, it may determine that the first device has a sound effect processing algorithm II adapted to the sound effect processing algorithm I on the second device side, where the sound effect processing algorithm II is the sound effect processing algorithm corresponding to the device parameters of the second device in the mapping table.
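  • A minimal sketch of Method 2 under these assumptions follows; the table entries shown are invented placeholders, not real device models or algorithm names.

```python
# Sketch of Method 2: a mapping table from device parameters to the adapted
# algorithm set II. The entries are illustrative placeholders.
ADAPTATION_TABLE = {
    # (manufacturer, product model) -> identifier of the adapted algorithm set II
    ("VendorA", "ModelX"): "noise_reduction_set_3",
    ("VendorB", "ModelY"): "echo_cancellation_set_1",
}

def find_algorithm_2(manufacturer: str, model: str):
    # Returns None when the table has no entry, i.e. no adapted algorithm (Case 2).
    return ADAPTATION_TABLE.get((manufacturer, model))
```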
  • the indication information of the sound effect processing capability sent by the second device to the first device may further include a specific bit or a specific field, and the specific bit or specific field may be used to indicate whether the second device has the sound effect processing capability. For example, when the specific bit is 0, it indicates that the second device has no sound effect processing capability; when the specific bit is 1, it indicates that the second device has the sound effect processing capability. In this way, the first device can directly determine whether the second device has the sound effect processing capability based on the specific bit or specific field, without needing to infer it from device parameters such as the manufacturer and product model fed back by the second device, which is more efficient.
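  • For illustration, reading such a capability bit might look like the following sketch; the field layout (bit 0 of a one-byte field) is a hypothetical assumption, not a format defined by this application.

```python
# Sketch of reading a hypothetical capability bit from the first indication
# information; the field layout is illustrative only.
def has_sound_effect_capability(indication_byte: int) -> bool:
    # bit 0: 1 = the second device has sound effect processing capability
    return bool(indication_byte & 0x01)
```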
  • if it is determined that the second device does not have the sound effect processing capability, the first device can directly use one of its own sets of sound effect processing algorithms to perform sound effect processing on the audio data transmitted between the first device and the second device. If it is determined that the second device has the sound effect processing capability, the first device further needs to determine, according to device parameters such as the manufacturer and product model, whether the two devices support joint sound effect processing; for details, refer to Case 2 and Case 3 above.
  • if the first device determines that the first device does not have the first sound effect processing algorithm and the second device does not have the sound effect processing capability, that is, the foregoing Case 1, then an audio connection is established between the first device and the second device.
  • the first device uses a third sound effect processing algorithm to process the first audio data to obtain fourth audio data.
  • the first device transmits the fourth audio data to the second device through the audio connection.
  • the second device plays the fourth audio data.
  • if the first device determines that the first device does not have the first sound effect processing algorithm and the second device has the sound effect processing capability, that is, the foregoing Case 2, then an audio connection is established between the first device and the second device, the first device transmits the first audio data to the second device through the audio connection, and the second device uses the second sound effect processing algorithm to process the first audio data to obtain fifth audio data.
  • the second device plays the fifth audio data.
  • the above-mentioned wireless communication connection may be a logical link control and adaptation protocol L2CAP connection.
  • the signaling of the first request and the first response exchanged between the first device and the second device can be implemented as follows:
  • the first request may include an L2CAP echo request (ECHO request), and the first response may include an L2CAP echo response (ECHO response).
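  • To make this concrete, an L2CAP ECHO exchange could be framed as in the sketch below. The command codes 0x08/0x09 are the standard L2CAP ECHO request/response codes; the payload format (manufacturer and model joined by a separator) is a hypothetical encoding of the indication information, not one defined by this application.

```python
import struct

# Sketch of carrying the capability query in L2CAP ECHO signaling.
L2CAP_ECHO_REQUEST = 0x08
L2CAP_ECHO_RESPONSE = 0x09

def build_echo_request(identifier: int, payload: bytes = b"") -> bytes:
    # L2CAP signaling header: code (1 byte), identifier (1 byte),
    # length (2 bytes, little-endian), then the data field.
    return struct.pack("<BBH", L2CAP_ECHO_REQUEST, identifier, len(payload)) + payload

def build_echo_response(identifier: int, manufacturer: str, model: str) -> bytes:
    payload = f"{manufacturer};{model}".encode()  # hypothetical indication format
    return struct.pack("<BBH", L2CAP_ECHO_RESPONSE, identifier, len(payload)) + payload
```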
  • the audio connection used to transmit audio data between the first device and the second device may be implemented as follows: the audio connection may include a media audio connection established based on the aforementioned L2CAP connection, such as an advanced audio distribution profile (A2DP) connection.
  • the audio data transmitted by the audio connection may include media audio data.
  • the first device may establish the media audio connection with the second device when detecting a user operation of playing media audio.
  • the above-mentioned wireless communication connection may be a radio frequency communication RFCOMM connection.
  • the signaling of the first request and the first response exchanged between the first device and the second device can be implemented as follows:
  • the first request may include a first AT command
  • the first response may include a first AT response in response to the first AT command
  • the audio connection used to transmit audio data between the first device and the second device can be implemented as follows:
  • the audio connection may include a call audio connection established based on the aforementioned RFCOMM connection, such as a connection-oriented synchronous connection SCO connection.
  • the audio data transmitted by the audio connection may include call audio data.
  • the first device may establish the call audio connection with the second device when detecting a user operation to answer an incoming call or make a call.
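  • For the RFCOMM/AT variant, the first request could be a vendor-extended AT command and the first response its result line, as in the sketch below; "AT+SNDFX?" and the "+SNDFX:" response format are invented placeholders, not standard HFP commands.

```python
# Hypothetical extended AT command exchange for the capability query.
def build_first_at_command() -> bytes:
    return b"AT+SNDFX?\r"  # placeholder vendor-extended query

def parse_first_at_response(line: bytes) -> dict:
    # Hypothetical response format: +SNDFX: <manufacturer>,<model>
    text = line.decode().strip()
    if text.startswith("+SNDFX:"):
        fields = [f.strip() for f in text[len("+SNDFX:"):].split(",", 1)]
        return {"manufacturer": fields[0], "model": fields[1]}
    return {}
```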
  • this application provides a wireless communication method, which is applied to the above-mentioned first device.
  • the first device and the second device can establish a wireless communication connection.
  • the first device may send a first request to the second device to query the sound effect processing capability of the second device.
  • the first device may receive the first response sent by the second device, and the first response carries the first indication information.
  • the first indication information may be used to indicate the sound effect processing capability of the second device.
  • the first device may determine whether the first device and the second device support joint sound effect processing according to the indication information fed back by the second device.
  • the first device may determine whether the first device has a first sound effect processing algorithm, where the first sound effect processing algorithm is a sound effect processing algorithm adapted to the second sound effect processing algorithm used by the second device, and the second sound effect processing algorithm is determined based on the first indication information. If the first device has the first sound effect processing algorithm, then an audio connection can be established between the first device and the second device, the first device uses the first sound effect processing algorithm to process the first audio data to obtain second audio data, and the first device can transmit the second audio data to the second device through the audio connection.
  • the second device uses the second sound effect processing algorithm to process the second audio data to obtain the third audio data.
  • the second device plays the third audio data.
  • an embodiment of the present application provides an electronic device.
  • the electronic device may include a communication module, a memory, one or more processors, and one or more programs, where the one or more processors are used to execute the one or more computer programs stored in the memory, so that the electronic device implements any function of the first device in the first aspect or any function of the first device in the second aspect.
  • an embodiment of the present application provides an audio output device, which may include a Bluetooth module, an audio module, a memory, and a processor.
  • One or more programs are stored in the memory, and the one or more processors are used to execute the one or more programs stored in the memory, so that the audio output device can implement any function of the second device in the first aspect or any function of the second device in the second aspect.
  • embodiments of the present application provide a wireless communication system, which may include the first device and the second device described in the foregoing aspects.
  • the embodiments of the present application provide a chip system that can be applied to an electronic device.
  • the chip system includes one or more processors, and the processors are used to invoke computer instructions to cause the electronic device to perform any possible implementation of the first aspect or any possible implementation of the second aspect.
  • a computer program product containing instructions is characterized in that, when the computer program product runs on an electronic device, the electronic device is caused to perform any possible implementation of the first aspect or any possible implementation of the second aspect.
  • a computer-readable storage medium including instructions is characterized in that, when the instructions are executed on an electronic device, the electronic device is caused to perform any possible implementation of the first aspect or any possible implementation of the second aspect.
  • FIG. 1A shows a wireless audio system related to an embodiment of the present application
  • FIG. 1B shows the structure of the audio output device in the wireless audio system shown in FIG. 1A;
  • FIG. 2A shows another wireless audio system involved in an embodiment of the present application
  • FIG. 2B shows the structure of the audio output device in the wireless audio system shown in FIG. 2A;
  • FIG. 3 shows a process of establishing a media audio connection involved in an embodiment of the present application
  • FIG. 4 shows a process of establishing a call audio connection involved in an embodiment of the present application
  • FIG. 5 shows a process of establishing a call audio control connection involved in an embodiment of the present application
  • FIG. 6 shows the overall flow of the wireless communication method provided by the technical solution of the present application
  • FIG. 7A shows a user operation of turning on the Bluetooth function of the electronic device
  • FIG. 7B shows another user operation of turning on the Bluetooth function of the electronic device
  • FIG. 7C shows an indicator that the electronic device displays a Bluetooth connection
  • FIG. 8 shows the wireless communication method provided in the first embodiment
  • FIG. 9 shows the wireless communication method provided in the second embodiment
  • FIG. 10 shows the structure of an electronic device
  • FIG. 11 shows the structure of the audio output device.
  • FIG. 1A shows a wireless audio system 100 related to an embodiment of the present application.
  • the wireless audio system 100 may include an electronic device 101 and an audio output device 106.
  • the electronic device 101 may be implemented as any one of the following electronic devices: mobile phones, portable game consoles, portable media playback devices, personal computers, vehicle-mounted media playback devices, and so on.
  • the audio output device 106 may be responsible for converting audio data into sound.
  • the audio output device 106 may be an audio output device such as a headset, a neck-mounted earphone, a speaker, or the like. Taking earphones as an example, as shown in FIG. 1A, the audio output device 106 may include a left earphone 102, a right earphone 103 and a wire control 104.
  • the left earphone 102 and the wire control 104 are connected by an earphone wire, and the right earphone 103 and the wire control 104 are also connected by an earphone wire.
  • the wire control 104 may be configured with buttons such as a volume up key, a volume down key, and a playback control key, and the wire control 104 may also be configured with sound collection devices such as a receiver/microphone.
  • a wireless communication connection 105 can be established between the electronic device 101 and the audio output device 106.
  • the electronic device 101 may send audio data to the audio output device 106 through the wireless communication connection 105.
  • the role of the electronic device 101 is an audio source
  • the role of the audio output device 106 is an audio sink.
  • the audio output device 106 can convert the received audio data into sound, so that the user wearing the audio output device 106 can hear the sound.
  • the electronic device 101 and the audio output device 106 can also interact with each other based on the wireless communication connection 105 to exchange playback control messages (such as previous song, next song), call control messages (such as answer, hang up), volume control messages (such as volume up, volume down), and so on.
  • the electronic device 101 can send a playback control message and a call control message to the audio output device 106 through the wireless communication connection 105, so that playback control and call control can be performed on the electronic device 101 side.
  • the electronic device may send an audio play command to the audio output device 106 via the wireless communication connection 105, which triggers the audio output device 106 to play music.
  • the audio output device 106 may send a playback control message and a call control message to the electronic device 101 through the wireless communication connection 105, so that playback control and call control can be performed on the audio output device 106 side.
  • the audio output device 106 can send a volume up command to the electronic device 101 via the wireless communication connection 105 to trigger the electronic device 101 to increase the volume of music playback.
  • the physical form and size of the electronic device 101 and the audio output device 106 may also be different, which is not limited in this application.
  • the wireless audio system 100 shown in FIG. 1A may be a wireless audio system implemented based on the Bluetooth protocol. That is, the wireless communication connection 105 between the electronic device 101 and the audio output device 106 may adopt a Bluetooth communication connection.
  • the structure of the audio output device 106 in FIG. 1A may be as shown in FIG. 1B.
  • Both the left earphone 102 and the right earphone 103 may include: audio modules and sensors.
  • the audio module can be used to convert audio data into sound, and specifically can be an electro-acoustic transducer.
  • the sensor can be used to detect the current usage scenario of the left earphone 102 and the right earphone 103.
  • the sensor can be an optical sensor, a capacitive sensor, an infrared sensor, etc. This sensor may be referred to as the first sensor.
  • the left earphone 102 and the right earphone 103 can determine whether the earphone is in the ear by photoelectric detection.
  • the optical sensor in the earphone can use the principle of optical sensing to perceive the user's wearing state.
  • when the optical sensor detects that the light signal is blocked, it feeds back to the processor that the headset is in the wearing state, and the system then automatically enters the playback mode; on the contrary, when the optical sensor detects the light signal, it feeds back to the processor that the headset is not in the wearing state, and the system then automatically pauses audio playback.
  • the earphone can also perform in-ear detection based on the capacitance change fed back by the capacitive sensor, or can perform in-ear detection based on the occlusion condition fed back by the infrared sensor.
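  • The wear-detection behavior described above can be summarized by the following minimal sketch; the player object and its methods are illustrative placeholders.

```python
# Minimal sketch of optical in-ear detection: a blocked light signal is
# treated as "worn" and playback resumes; otherwise playback pauses.
def on_optical_sensor_update(light_blocked: bool, player) -> None:
    if light_blocked:
        player.resume()  # earphone is in the ear: enter playback mode
    else:
        player.pause()   # earphone removed: pause audio playback
```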
  • the wire control 104 may include: a Bluetooth module, a processor, and a battery.
  • the Bluetooth module can be used to receive or transmit Bluetooth signals.
  • the audio output device 106 may establish a Bluetooth communication connection with the electronic device 101 through a Bluetooth module, and transmit a Bluetooth signal to the electronic device 101 or receive a Bluetooth signal transmitted by the electronic device 101 through the Bluetooth communication connection.
  • the processor can be coupled to the Bluetooth module and the audio modules and sensors in the left earphone 102 and the right earphone 103.
  • the processor may be responsible for reading the instructions in the memory, decoding and executing the instructions, so as to implement the wireless communication method provided by the technical solution of the present application.
  • the battery can be used to power various components in the audio output device 106 (such as a processor, an audio module, a sensor, a Bluetooth module, etc.).
  • the audio output device 106 may also include other components, for example, a memory, a receiver, an indicator light, etc. may be configured in the wire control 104.
  • the left headset 102 and the right headset 103 in the wireless audio system 100 may also be other types of audio output devices.
  • the left earphone 102 and the right earphone 103 in the wireless audio system 100 may also be two speakers in a home theater scenario: a left-channel speaker and a right-channel speaker.
  • a control device similar to the wire control 104 can also be connected between the two speakers.
  • FIG. 2A shows another wireless audio system 200 involved in an embodiment of the present application.
  • the wireless audio system 200 may include an electronic device 201, an audio output device 202, and an audio output device 203.
  • the audio output device 202 and the audio output device 203 may be a left earphone and a right earphone of a pair of Bluetooth earphones, respectively, for converting audio data into sound.
  • the electronic device 201 can be implemented as any of the following electronic devices: mobile phones, portable game consoles, portable media playback devices, personal computers, vehicle-mounted media playback devices, and so on.
  • the audio output device 202 and the audio output device 203 may be split true wireless stereo (TWS) earphones, which may be a left earphone and a right earphone in a pair of TWS earphones, respectively.
  • the audio output device 202 and the audio output device 203 can respectively establish a wireless communication connection with the electronic device 201.
  • the audio output device 202 can establish a wireless communication connection 204 with the electronic device 201, and can exchange audio data, play control messages, call control messages, etc. through the wireless communication connection 204.
  • a wireless communication connection 205 can be established between the electronic device 201 and the audio output device 203, and audio data, playback control messages, call control messages, etc. can be exchanged through the wireless communication connection 205.
  • the physical form and size of the electronic device 201, the audio output device 202, and the audio output device 203 may also be different, which is not limited in the embodiment of the present application.
  • the wireless audio system 200 shown in FIG. 2A may be a wireless audio system implemented based on the Bluetooth protocol. That is, the wireless communication connection (such as the wireless communication connection 204, the wireless communication connection 205, and the wireless communication connection 206) between the electronic device 201, the audio output device 202 and the audio output device 203 may adopt a Bluetooth communication connection.
  • the structure of the audio output device 202 and the audio output device 203 in FIG. 2A may be as shown in FIG. 2B.
  • Both the audio output device 202 and the audio output device 203 may include: a Bluetooth module, an audio module, a sensor, a processor, and a battery. It can be seen that, unlike the audio output device 106 in the wireless audio system 100 shown in FIG. 1A, the audio output device 202 and the audio output device 203 respectively integrate the functions of the wire control 104 of the audio output device 106.
  • the Bluetooth module can be used to receive or transmit Bluetooth signals.
  • the audio output device 202 and the audio output device 203 can respectively establish a Bluetooth communication connection with the electronic device 201 through a Bluetooth module, and transmit a Bluetooth signal to the electronic device 201 or receive a Bluetooth signal transmitted by the electronic device 201 through the Bluetooth communication connection.
  • the audio output device 202 and the audio output device 203 can also communicate through a Bluetooth module.
  • the processor can be coupled to the Bluetooth module and audio modules and sensors in the audio output device 202 and the audio output device 203.
  • the processor may be responsible for reading the instructions in the memory, decoding and executing the instructions, so as to implement the wireless communication method provided by the technical solution of the present application.
  • the battery can be used to power the audio output device 202 and various components in the audio output device 203 (such as a processor, an audio module, a sensor, a Bluetooth module, etc.).
  • the audio module can be used to convert audio data into sound, and specifically can be an electro-acoustic transducer.
  • the sensor can be used to detect the current usage scenario of the audio output device 202 and the audio output device 203.
  • the sensor can be an optical sensor, a capacitive sensor, an infrared sensor, an acceleration sensor, a pressure sensor, a six-axis sensor, a bone voiceprint sensor, and so on. For example, taking earphones as an example, the left earphone 202 and the right earphone 203 can determine whether the earphone is in the ear by photoelectric detection.
  • the optical sensor in the earphone can use the principle of optical sensing to perceive the user's wearing state.
  • when the optical sensor detects that the light signal is blocked, it feeds back to the processor that the headset is in the wearing state, and the system then automatically enters the playback mode; on the contrary, when the optical sensor detects the light signal, it feeds back to the processor that the headset is not in the wearing state, and the system then automatically pauses audio playback.
  • the earphone can also perform in-ear detection based on the capacitance change fed back by the capacitive sensor, or can perform in-ear detection based on the occlusion condition fed back by the infrared sensor.
  • the headset can also detect user touch and sliding operations through capacitive sensors, and then complete operations such as music control and volume control.
  • the left earphone 202 and the right earphone 203 can sense the user's actions and times of tapping the earphone through the acceleration sensor, so as to realize operations such as waking up the voice assistant, switching music to the previous/next song, answering/hanging up the call, and so on.
  • the left earphone 202 and the right earphone 203 can also sense, through the pressure sensor, the user's actions and the number of times the earphone stem is pressed, and then perform operations such as switching the earphone sound effect mode (for example, turning the noise reduction mode or ambient sound mode on/off), waking up the voice assistant, switching music to the previous/next song, answering/hanging up a call, and so on.
  • the headset can also use its internal six-axis sensor (3D accelerometer and 3D gyroscope) to recognize the user's head movement through a head gesture recognition algorithm; for example, when a new call comes in, the user can nod or shake the head to answer or hang up the call.
  • the bone voiceprint sensor in the headset can confirm the characteristics of the owner based on the bone voiceprint information, ensuring that the headset only listens to the owner's commands.
  • the audio output device 202 and the audio output device 203 may also include other components, for example, a memory, a receiver, an indicator light, etc. may be configured.
  • the audio output device 202 and the audio output device 203 in the wireless audio system 200 may also be other types of audio output devices.
  • the audio output device 202 and the audio output device 203 in the wireless audio system 200 may also be two audio devices in a home theater scenario: a left-channel audio device and a right-channel audio device.
  • an embodiment of the present application provides a wireless communication method.
  • the master and the slave device can negotiate the sound effect processing.
  • the master device may send a query request to the slave device to obtain the sound effect processing capability of the slave device.
  • the slave device can send the indication information of the sound effect processing capability of the slave device to the master device.
  • the indication information may include multiple fields, and the multiple fields may include fields for indicating equipment parameters such as manufacturer and product model.
  • the master device determines, according to the indication information fed back by the slave device, whether joint sound effect processing can be performed between the master and slave devices; if it is confirmed that joint sound effect processing can be performed, the master and slave devices cooperatively perform mutually adapted sound effect processing on both sides.
  • joint sound effect processing means that the master device can determine the sound effect processing algorithm I used by the slave device according to device parameters such as the manufacturer and product model fed back by the slave device, and determine the sound effect processing algorithm II on the master device side that is adapted to the sound effect processing algorithm I. The master device then uses the sound effect processing algorithm II to perform sound effect processing on the audio data transmitted between the master and slave devices, after which the slave device continues to perform sound effect processing on the audio data using the sound effect processing algorithm I on the slave device side.
  • the sound effect processing algorithm II on the master device side is compatible with the sound effect processing algorithm I on the slave device side.
  • the specific adaptation can be expressed as follows: a certain piece of audio data (such as certain test audio data) is processed successively using the sound effect processing algorithm I and the sound effect processing algorithm II, and the signal-to-noise ratio measured on the processed audio data is better than both the first signal-to-noise ratio and the second signal-to-noise ratio; or the echo component measured on the processed audio data is less than both the first echo component and the second echo component; or both: the signal-to-noise ratio is better than the first and second signal-to-noise ratios, and the echo component is less than the first and second echo components.
  • the first signal-to-noise ratio and the first echo component are respectively the signal-to-noise ratio and echo component measured after processing the audio data using the sound effect processing algorithm I.
  • the second signal-to-noise ratio and the second echo component are, respectively, the signal-to-noise ratio and echo component measured after the audio data is processed using the sound effect processing algorithm II.
  • adaptation means that the sound effect processing algorithm I on the slave device side and the sound effect processing algorithm II on the master device side can cooperate and complement each other, so that their respective sound effect processing strengths are fully utilized. In this way, the processing effects of the two algorithms will not cancel each other out on the whole but will reinforce each other, thereby presenting a better sound effect.
  • the Bluetooth connection refers to the physical Bluetooth connection, such as an asynchronous connectionless link (ACL).
  • the Bluetooth connection can be established by the master device (the device that initiates the connection request) and the slave device (the device that receives the connection request) through inquiry and paging.
  • the master device is an electronic device (such as a mobile phone)
  • the slave device is a Bluetooth headset.
  • the master device is a Bluetooth headset
  • the slave device is an electronic device (such as a mobile phone).
  • Audio connections may include a call audio connection and a media audio connection. The call audio connection can be used to transmit call voice data, and the media audio connection can be used to transmit media audio data.
  • the call audio connection may be a connection-oriented synchronous connection (synchronous connection oriented link, SCO) used in a call scenario.
  • the media audio connection may be an advanced audio distribution profile (A2DP) connection.
  • the call audio connection can be configured with a call audio control connection, which can be used to transmit call control (such as answering, hanging up, etc.) signaling.
  • the call audio control connection may be a hands-free profile (HFP) connection.
  • the media audio connection can be configured with a media audio control connection, which can be used to transmit media audio playback control (such as previous song, next song, pause, playback, volume control, etc.) signaling.
  • the media audio control connection may be an audio/video remote control profile (AVRCP) connection.
  • the application profile connection is established based on the logical link control and adaptation protocol (logical link control and adaptation protocol, L2CAP) channel.
  • L2CAP sits above the baseband and is responsible for converting baseband data into a packet format convenient for applications to decode, and it provides functions such as protocol multiplexing and quality of service negotiation.
  • the establishment of the Bluetooth physical connection is described in (1) above, but if an upper-layer application (such as A2DP) needs to communicate between the master and slave devices, corresponding L2CAP channels also need to be established at the L2CAP layer; these channels are used by the different application profiles to communicate.
  • an L2CAP channel corresponding to an application specification can be represented by a channel identity (CID). When an application specification is no longer used, the L2CAP channel corresponding to the application specification must be cleared.
  • L2CAP only supports ACL data transmission, not SCO data transmission.
  • the establishment of a media audio connection may include the following steps:
  • Step 1 The master and slave devices establish an L2CAP channel based on the ACL connection.
  • the L2CAP channel used for A2DP may include: L2CAP channel used to transmit signaling, and L2CAP channel used to transmit stream.
  • the L2CAP channel used for signaling can be referred to as the AVDTP signaling L2CAP channel
  • the L2CAP channel used for streaming can be referred to as the AVDTP stream L2CAP channel.
  • Step 2 The master and slave devices establish an A2DP connection based on the L2CAP channel used for A2DP, which may specifically include steps 2-1 to 2-5.
  • Step 2-1 Stream endpoints discovery process.
  • the master device can send an AVDTP_DISCOVER command to the slave device to discover the stream endpoints (SEP) in the device.
  • the slave device will return a response, which can carry the SEP identification (SEID) that the slave device can provide.
  • Step 2-2 Capabilities query process.
  • the master device can send the AVDTP_GET_CAPABILITIES command to the slave device to obtain the services that the SEP of the slave device can provide.
  • the slave device will feedback the services that its SEP can provide to the master device.
  • Step 2-3 Stream configuration process.
  • the master device can send the AVDTP_SET_CONFIGURATION command to the slave device to configure the services provided by the SEP of the slave device, such as configuring audio channels, sampling rate, and so on.
  • Step 2-4 Stream establishment process.
  • the master device can send the AVDTP_OPEN command to the slave device to create a stream.
  • the created stream can also be called a streaming connection, that is, an A2DP connection.
  • the A2DP connection is established.
  • Step 2-5 Stream start process.
  • the master device can send an AVDTP_START command to the slave device to start the stream.
  • after the stream is started, the master and slave devices can use the stream to transmit media audio data.
  • media audio refers to mono and stereo audio, which is different from the voice audio transmitted on the SCO connection.
  • the master and slave devices can close the stream through the AVDTP_CLOSE command, that is, disconnect the A2DP connection.
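  • The signaling sequence of steps 2-1 to 2-5 can be summarized as in the sketch below; the transport object and its send_command helper are illustrative placeholders, while the signal names follow the AVDTP specification.

```python
# Sketch of the AVDTP signaling sequence for establishing an A2DP stream.
AVDTP_SEQUENCE = [
    "AVDTP_DISCOVER",           # step 2-1: discover stream endpoints (SEPs)
    "AVDTP_GET_CAPABILITIES",   # step 2-2: query the services each SEP provides
    "AVDTP_SET_CONFIGURATION",  # step 2-3: configure channels, sampling rate, ...
    "AVDTP_OPEN",               # step 2-4: create the stream (the A2DP connection)
    "AVDTP_START",              # step 2-5: start the stream; media audio can flow
]

def establish_a2dp(transport):
    for command in AVDTP_SEQUENCE:
        response = transport.send_command(command)  # hypothetical helper
        if not response.ok:
            raise RuntimeError(f"{command} rejected")
    # ... later, AVDTP_CLOSE tears the stream down again.
```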
  • AVRCP connection is also established based on the Bluetooth connection (ACL) and the L2CAP channel, which will not be repeated here.
  • the process of establishing a call audio connection may include the following steps:
  • Step 1 The master device sends an SCO connection establishment request to the slave device, that is, the master device initiates the establishment of the SCO connection.
  • Step 2 The slave device returns an acceptance response of the SCO connection establishment request to the master device.
  • the SCO connection establishment request may be LMP_SCO_link_req signaling
  • the acceptance response of the SCO connection establishment request may be LMP_accepted signaling. If the slave device cannot establish the SCO connection, it can return LMP_not_accepted (a rejection response to the SCO connection establishment request), indicating that the SCO connection cannot be established.
  • the establishment of the SCO connection can also be initiated by the slave device. Subsequently, the master and slave devices can disconnect (or remove) the SCO connection through an SCO connection removal request (such as LMP_remove_SCO_link_req signaling).
  • the SCO connection can be established by the master and slave devices in response to internal events or user operations (making or receiving calls).
  • when the call ends, the SCO connection will be disconnected.
  • the SCO connection can be used as an auxiliary connection to the HFP connection and can be used to transmit call voice data. How to establish an HFP connection is described later and is not detailed here. The SCO connection can only be established after the ACL connection has been established, because the establishment of the HFP connection needs to be based on the ACL connection.
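  • The SCO setup exchange of steps 1 and 2 can be sketched as follows, using the LMP signaling names from the text; the link objects and their methods are illustrative placeholders.

```python
# Sketch of the SCO connection establishment exchange described above.
def establish_sco(master_link, slave_link) -> bool:
    master_link.send("LMP_SCO_link_req")    # step 1: request SCO establishment
    if slave_link.can_establish_sco():
        slave_link.send("LMP_accepted")     # step 2: acceptance response
        return True
    slave_link.send("LMP_not_accepted")     # rejection: SCO cannot be established
    return False
```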
  • the establishment process of the call audio control connection may include the following steps:
  • Step 1 The master and slave devices establish an L2CAP channel for radio frequency communication (RFCOMM) based on the ACL connection.
  • Step 2 The master and slave devices establish an RFCOMM connection based on the L2CAP channel (see the Generic Access Profile (GAP) and the Serial Port Profile (SPP) for the procedures involved).
  • Step 3 In response to user operations (making or receiving calls) or internal events, based on the existing RFCOMM connection, a service level connection (establish service level connection) can be established between the master and slave devices. At this point, the HFP connection is established.
  • the master and slave devices can release the service level connection through the service level connection removal procedure, that is, disconnect the HFP connection.
  • FIG. 6 shows the overall flow of the wireless communication method provided by the embodiment of the present application.
  • in the following description, the master device is the aforementioned electronic device, and the slave device is the aforementioned audio output device.
  • S101 The master device is successfully paired with the slave device and a Bluetooth connection is established.
  • pairing can create a shared key between the master device and the slave device: Link Key.
  • the link key (Link Key) is used by the master device and the slave device to authenticate each other and to encrypt the data they exchange.
  • the Bluetooth connection may refer to a Bluetooth physical connection, which is used for audio transmission, such as an ACL connection.
  • the master device and the slave device can establish an ACL connection through an inquiry and paging process.
  • the master device and the slave device start to perform S101.
  • the user can click the Bluetooth option 702 in the pull-down status bar 701 displayed by the electronic device to turn on the Bluetooth of the electronic device.
  • the user can also turn on the Bluetooth of the electronic device through the Bluetooth switch 704 in the "settings" interface 703 of the electronic device.
  • "Settings" is an application or service on an electronic device that is responsible for configuring various functions of the electronic device, such as flight mode, wireless fidelity (Wi-Fi), Bluetooth, mobile network, and so on. Beyond the two methods shown in FIG. 7A and FIG. 7B, the Bluetooth of the electronic device can also be activated by an internal event.
  • the internal event may be, for example, the opening event of a sharing service such as "Huawei Sharing".
  • the activation of the sharing service of "Huawei Sharing” will trigger the activation of Bluetooth and Wi-Fi.
  • Slave devices, such as Bluetooth headsets or Bluetooth speakers, can generally turn on Bluetooth automatically after powering on, without the user having to turn it on manually.
  • the electronic device can display an indicator 705 in the status bar, and the indicator 705 can be used to indicate that the current master and slave devices are in a Bluetooth connection state.
  • the indicator 705 may be referred to as a first indicator.
  • the electronic device may also display the indicator 706.
  • the indicator 706 can be used to indicate the remaining power of the slave device.
  • the indicator 706 may be referred to as a second indicator.
  • the audio connection established between the master and the slave device can be a call audio connection (such as an SCO connection) or a media audio connection (such as an A2DP connection). Subsequent embodiments will explain in detail how the master and slave devices negotiate the sound effect processing in the scenarios of media audio connection (such as A2DP connection) and call audio connection (such as SCO connection).
  • the master device may send a query instruction to the slave device to query the sound effect processing capability of the slave device.
• The query command can be an echo request in the L2CAP protocol, an extended command of another L2CAP type, an extended AT command in the HFP protocol, or another custom extended command; this application does not limit this. Subsequent embodiments will describe the signaling implementation of the request in detail, which will not be expanded here.
  • the slave device feeds back the indication information (such as manufacturer, product model, etc.) of the sound effect processing capability of the slave device to the master device.
• The indication information can be carried in messages such as an echo response (ECHO response) or an extended AT response; for details, please refer to the subsequent embodiments.
  • the master device may determine whether the master device and the slave device support joint sound effect processing according to the indication information fed back from the slave device.
• The indication information can include device parameters such as the manufacturer and product model of the slave device. These parameters can indicate the sound effect processing algorithm used by the slave device and reflect its sound effect processing capability, so they can be called the indication information of the slave device's sound effect processing capability or of the sound effect processing algorithm it uses.
• Whether the master and slave devices support joint sound effect processing can include the following three cases:
  • Case 1 The master and slave devices do not support joint sound effect processing, and the slave device does not support sound effect processing. Please refer to step S105.
• Whether the slave device has sound effect processing capability can be determined based on the manufacturer, product model, and other device parameters fed back by the slave device; if it does not, the master device can determine that the negotiated sound effect processing situation is Case 1.
• For example, based on the manufacturer and product model of a given headset, speaker, or other slave device, the master device can query, locally or in the cloud, whether the slave device has sound effect processing capabilities such as noise reduction and echo cancellation, and, if it does, which sound effect processing algorithm the slave device uses.
  • Case 2 The master and slave devices do not support joint sound effect processing, and the slave device supports sound effect processing. Please refer to step S110.
• If the master device determines, based on device parameters such as the manufacturer and product model fed back by the slave device, that the slave device has sound effect processing capability, but cannot obtain from the multiple sets of sound effect processing algorithms on the master device side a sound effect processing algorithm II adapted to the sound effect processing algorithm I on the slave device side, the master device can determine that the negotiated sound effect processing situation is Case 2.
  • Case 3 The master and slave devices support joint sound effect processing, and the slave device supports sound effect processing. Please refer to step S115.
• If the master device determines, according to device parameters such as the manufacturer and product model fed back by the slave device, that the slave device supports sound effect processing, and can obtain from the multiple sets of sound effect processing algorithms on the master device side a sound effect processing algorithm II adapted to the sound effect processing algorithm I used by the slave device, the master device can determine that the negotiated sound effect processing situation is Case 3.
  • the sound effect processing algorithm II can be used for the sound effect processing on the main device side in the joint sound effect processing.
• The master device can determine whether the master device side has a sound effect processing algorithm II adapted to the sound effect processing algorithm I on the slave device side in the following ways.
• Method 1 Select the adapted algorithm by performing sound effect processing on a test signal and comparing the processing results.
• The master device can select a set of sound effect processing algorithms from the multiple sets of sound effect processing algorithms on the master device side, and use the selected set together with the sound effect processing algorithm I on the slave device side to successively process the test audio data. If one or more of the following conditions are met, it is determined that the master device side has a sound effect processing algorithm II adapted to the sound effect processing algorithm I: the signal-to-noise ratio measured on the processed test audio data is better than both the first signal-to-noise ratio and the second signal-to-noise ratio; the echo component measured on the processed test audio data is less than both the first echo component and the second echo component.
• The master device can further select, from these multiple sets of sound effect processing algorithms, the one that meets one or more of the aforementioned conditions and has the best sound effect processing effect (for example, the best signal-to-noise ratio and the least echo component) as the algorithm adapted to the sound effect processing algorithm I on the slave device side.
• The first signal-to-noise ratio and the first echo component may respectively be the signal-to-noise ratio and echo component measured after processing the test audio data using only the selected sound effect processing algorithm on the master device side; the second signal-to-noise ratio and the second echo component may respectively be the signal-to-noise ratio and echo component measured after processing the test audio data using only the sound effect processing algorithm I on the slave device side, as illustrated in the sketch below.
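• As an illustration only, the following Python sketch shows how Method 1 could select an adapted algorithm II; the algorithm objects with a process() method and the measure_snr()/measure_echo() functions are hypothetical stand-ins, not an API defined by this application.

    def select_algorithm_ii(master_algorithms, algorithm_i, test_audio,
                            measure_snr, measure_echo):
        # Baselines: each side's algorithm applied to the test audio on its own.
        best_algo, best_snr = None, None
        for algo in master_algorithms:
            snr_1 = measure_snr(algo.process(test_audio))           # first SNR
            echo_1 = measure_echo(algo.process(test_audio))         # first echo component
            snr_2 = measure_snr(algorithm_i.process(test_audio))    # second SNR
            echo_2 = measure_echo(algorithm_i.process(test_audio))  # second echo component
            # Successive (joint) processing: master-side algorithm, then algorithm I.
            joint = algorithm_i.process(algo.process(test_audio))
            snr, echo = measure_snr(joint), measure_echo(joint)
            if snr > max(snr_1, snr_2) and echo < min(echo_1, echo_2):
                if best_snr is None or snr > best_snr:
                    best_algo, best_snr = algo, snr   # keep the best-performing candidate
        return best_algo  # None means no adapted algorithm II exists (Case 2)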
  • Multiple sets of sound effect processing algorithms on the main device side can be stored locally in the main device, or stored on a cloud server, and accessed by the main device.
• Method 2 Query a mapping table. The master device side may store or have access to a mapping table.
• The mapping table can record the correspondence between the device parameters of multiple devices (such as manufacturer and product model) and multiple sets of sound effect processing algorithms. The sound effect processing algorithm corresponding to the device parameters of a certain slave device refers to the algorithm, among the multiple sets of sound effect processing algorithms on the master device side, that is adapted to the sound effect processing algorithm used by that slave device.
  • a set of sound effect processing algorithms may include multiple sound effect processing algorithms, such as noise reduction algorithms, echo suppression algorithms, and so on. Of course, a set of sound effect processing algorithms may also include only one sound effect processing algorithm, such as a noise reduction algorithm, which is not limited in this application.
  • the mapping table can be written into the memory by the master device before leaving the factory, or downloaded from the server by the master device, or shared by other devices. This application does not limit the source of the mapping table.
• The master device side can determine whether the first mapping table contains a sound effect processing algorithm corresponding to the device parameters of the slave device; if so, it can determine that the master device has a sound effect processing algorithm II adapted to the sound effect processing algorithm I on the slave device side, where the sound effect processing algorithm II is the algorithm corresponding to the device parameters of the slave device in the mapping table.
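• A minimal sketch of Method 2, assuming a simple in-memory mapping table; the table entries and names below are invented placeholders rather than real device parameters.

    # (manufacturer, product model) -> adapted master-side algorithm (algorithm II)
    ADAPTATION_TABLE = {
        ("VendorA", "BT-Headset-1"): "master_profile_a",
        ("VendorB", "Speaker-X"): "master_profile_b",
    }

    def find_algorithm_ii(manufacturer, model):
        # Returns None when no adapted algorithm exists (negotiation falls to Case 2).
        return ADAPTATION_TABLE.get((manufacturer, model))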
• The device parameters are not limited to the manufacturer and product model.
• The indication information of the sound effect processing capability sent by the slave device to the master device may further include a specific bit or a specific field used to indicate whether the slave device has sound effect processing capability. For example, when the specific bit is 0, the slave device has no sound effect processing capability; when the specific bit is 1, the slave device has sound effect processing capability. In this way, the master device can directly determine whether the slave device has the capability based on the specific bit or field, without inferring it from device parameters such as the manufacturer and product model, which is more efficient.
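• For illustration, assuming the capability flag sits in bit 0 of the first byte of the indication information (the exact layout is not specified here), the check could look like:

    def slave_supports_effects(indication: bytes) -> bool:
        # Bit 0 of the first byte: 1 -> the slave has sound effect processing
        # capability, 0 -> it does not (layout assumed for illustration).
        return bool(indication[0] & 0x01)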
• If the specific bit or field indicates that the slave device has no sound effect processing capability, the master device can directly use a certain set of sound effect processing algorithms on the master device side to perform sound effect processing on the audio data between the master and slave devices.
• If the specific bit or field indicates that the slave device has sound effect processing capability, the master device needs to further determine whether the master and slave devices support joint sound effect processing according to the manufacturer, product model, and other device parameters. For details, refer to Case 2 and Case 3 above.
  • the master device may determine to use the sound effect processing algorithm III on the master device side to perform sound effect processing on the audio data A between the master and slave devices.
  • the sound effect processing algorithm III belongs to the aforementioned multiple sound effect processing algorithms on the main device side.
• The sound effect processing algorithm III can be the sound effect processing algorithm used by the master device side by default, or a sound effect processing algorithm specially selected by the master device for Case 1.
  • the main device may use the sound effect processing algorithm III on the main device side to perform sound effect processing on audio data A to obtain audio data B.
  • An audio connection can be established between the master and slave devices, and the audio connection can be used to transmit audio data A between the master and slave devices.
  • the audio connection can be a call audio connection (such as a SCO connection) or a media audio connection (such as an A2DP connection). Subsequent embodiments will explain in detail how the master and slave devices negotiate the sound effect processing in the scenario of media audio connection (such as A2DP connection) and call audio connection (such as SCO connection), which will not be expanded here.
  • the master device may send audio data B to the slave device through the audio connection between the master and the slave device.
• The slave device does not support sound effect processing. After receiving the audio data B from the master device, the slave device plays it without performing sound effect processing on it.
  • the master device may determine to use the sound effect processing algorithm IV on the slave device side to perform sound effect processing on the audio data A between the master and slave devices.
  • the sound effect processing algorithm IV belongs to the aforementioned multiple sets of sound effect processing algorithms on the slave device side.
• The sound effect processing algorithm IV can be a sound effect processing algorithm used by default on the slave device side, or another type of sound effect processing algorithm.
  • the master device can directly send audio data A to the slave device through the audio connection between the master and the slave device.
  • An audio connection can be established between the master and slave devices, and the audio connection can be used to transmit audio data A between the master and slave devices.
  • the audio connection can be a call audio connection (such as a SCO connection) or a media audio connection (such as an A2DP connection), which will be described in detail in subsequent embodiments.
  • the master device determines not to use the sound effect processing function on the master device side, so the master device sends audio data A to the slave device without performing sound effect processing on the audio data A.
  • the slave device may use the sound effect processing algorithm IV on the slave device side to perform sound effect processing on the audio data A to obtain audio data C.
• The master device selects a sound effect processing algorithm II adapted to the sound effect processing algorithm I according to the sound effect processing algorithm I used by the slave device, and determines to use the sound effect processing algorithm II on the master device side to perform sound effect processing on the audio data A between the master and slave devices.
  • the master device can obtain the sound effect processing algorithm II on the master device side that is adapted to the sound effect processing algorithm I on the slave device side locally or in the cloud.
  • the sound effect processing algorithm II belongs to multiple sets of sound effect processing algorithms on the main device side.
• The sound effect processing algorithm II is generally a sound effect processing algorithm specially selected by the master device for Case 3 and adapted to the sound effect processing algorithm I on the slave device side.
• For how the master device side obtains the sound effect processing algorithm adapted to the sound effect processing algorithm I on the slave device side, please refer to the foregoing content, which will not be repeated here.
  • the main device uses the sound effect processing algorithm II on the main device side to perform sound effect processing on the audio data A to obtain audio data D.
  • An audio connection can be established between the master and slave devices, and the audio connection can be used to transmit audio data A between the master and slave devices.
  • the audio connection can be a call audio connection (such as a SCO connection) or a media audio connection (such as an A2DP connection), which will be described in detail in subsequent embodiments.
  • S118 The master device sends audio data D to the slave device through the audio connection between the master device and the slave device.
  • the slave device uses the sound effect processing algorithm I on the slave device side to perform sound effect processing on the audio data D to obtain audio data E.
  • the audio data A is processed by the joint sound effect processing of the sound effect processing algorithm II on the master device side and the sound effect processing algorithm I on the slave device side, and finally audio data E is obtained.
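• The three negotiated cases can be summarized, purely as an illustrative sketch with placeholder algorithm objects (not an API from this application), as follows:

    def master_side_output(case, audio_a, algo_ii=None, algo_iii=None):
        if case == 1:                        # slave cannot process: master uses III
            return algo_iii.process(audio_a)   # audio data B, played as-is by the slave
        if case == 2:                        # slave processes alone with algorithm IV
            return audio_a                     # sent unprocessed; slave produces audio C
        if case == 3:                        # joint processing: master applies II,
            return algo_ii.process(audio_a)    # slave then applies I to obtain audio E
        raise ValueError("unknown negotiation case")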
• The wireless communication method provided by this application can more flexibly select the sound effect processing method between the master and slave devices according to the sound effect processing capability of the slave device, and supports joint sound effect processing between the master and slave devices, allowing each device to give full play to the advantages of its own sound effect processing. Compared with single-sided sound effect processing, the effects of joint sound effect processing do not cancel each other out but reinforce each other, which can further enhance the audio sound effect and satisfy the user's pursuit of higher audio quality.
• The above processes can also be based on other types of wireless communication connections between the master and slave devices, such as a Wi-Fi connection or a near field communication (NFC) connection; this application does not limit this.
  • the media audio connection may be, for example, an A2DP connection.
  • Fig. 8 shows the wireless communication method provided in the first embodiment.
  • a Bluetooth connection (ACL connection) is established and maintained between the master device and the slave device.
  • the method may include:
• An L2CAP channel, such as an ECHO L2CAP channel, is established between the master device and the slave device based on the ACL connection.
  • the negotiation of sound effect processing between the master and slave devices can be implemented based on Bluetooth L2CAP signaling.
• The ECHO L2CAP channel can be established, and the sound effect processing negotiation between the master and slave devices can be realized through the echo request (ECHO request)/echo response (ECHO response) instructions in L2CAP.
• L2CAP belongs to the bottom layer of the Bluetooth connection, so the negotiation can be completed before the Bluetooth profile connection is established. Therefore, the master and slave devices can complete the negotiation before the media audio data arrives.
  • S202 The master device sends an echo request to the slave device based on the ECHO L2CAP channel.
  • the echo request sent by the master device is used to request to query the sound effect processing capability of the slave device.
  • the slave device sends an echo response (including indication information of the sound effect processing capability of the slave device) to the master device based on the ECHO L2CAP channel.
• In response to the echo request sent by the master device, the slave device sends an echo response to the master device.
• The echo response can carry indication information of the sound effect processing capability of the slave device, such as device parameters like the manufacturer and product model, for the subsequent sound effect processing negotiation between the master and slave devices.
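• A sketch of the echo request/response payloads is shown below; the signaling codes 0x08 and 0x09 come from the Bluetooth Core Specification, while the payload contents are invented for illustration.

    import struct

    L2CAP_ECHO_REQ = 0x08   # ECHO request signaling code
    L2CAP_ECHO_RSP = 0x09   # ECHO response signaling code

    def build_echo(code, identifier, payload):
        # Signaling command header: code (1 B) | identifier (1 B) | length (2 B, LE)
        return struct.pack("<BBH", code, identifier, len(payload)) + payload

    # Master asks for the slave's sound effect capability (payload is illustrative):
    request = build_echo(L2CAP_ECHO_REQ, 0x01, b"SFX-CAP?")
    # Slave answers with manufacturer / product model indication information:
    response = build_echo(L2CAP_ECHO_RSP, 0x01, b"VendorA;BT-Headset-1")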
  • the master device can determine whether the master device and the slave device support joint sound effect processing according to the indication information of the sound effect processing capability of the slave device.
  • the master device may determine, according to the instruction information of the slave device, that joint sound effect processing cannot be performed between the master device and the slave device.
  • the slave device does not support sound effect processing, that is, it is determined that the case of negotiating sound effect processing is case 1.
• The master device may determine that the negotiated sound effect processing situation is Case 1.
  • the master device may determine to use the sound effect processing algorithm III on the master device side to perform sound effect processing on the media audio data A between the master and slave devices.
  • the sound effect processing algorithm III belongs to the aforementioned multiple sound effect processing algorithms on the main device side.
• The sound effect processing algorithm III can be the sound effect processing algorithm used by the master device side by default, or a sound effect processing algorithm specially selected by the master device for Case 1.
  • S206 Establish an A2DP connection between the master device and the slave device based on the L2CAP channel used for A2DP.
  • the A2DP connection may be a streaming connection established based on the Bluetooth connection and the L2CAP channel used for A2DP. Regarding the establishment of the A2DP connection between the master and the slave device, reference may be made to the establishment process of the media audio connection shown in FIG. 3, which will not be repeated here.
  • S207 The main device detects that the user plays media audio A.
  • the user can operate and play media audio A on the main device.
  • the main device detects the user's operation of playing media audio A.
  • the slave device can also detect the user's operation of playing media audio A and feed it back to the master device.
  • the main device uses the sound effect processing algorithm III to perform sound effect processing on the media audio data A to obtain the media audio data B.
  • S209 The master device sends the media audio data B to the slave device through the A2DP connection.
  • the master device sends the media audio data B after the sound effect processing to the slave device through the A2DP connection.
• The slave device does not support sound effect processing. After receiving the media audio data B from the master device, the slave device plays it without performing sound effect processing on it.
  • the master device may determine, according to the instruction information of the slave device, that joint sound effect processing cannot be performed between the master device and the slave device.
  • the slave device supports sound effect processing, that is, it is determined that the case of negotiating sound effect processing is case 2.
• If the master device determines, according to the indication information of the slave device's sound effect processing capability, such as the manufacturer, product model, and other device parameters, that the slave device supports sound effect processing, but cannot obtain a sound effect processing algorithm on the master device side adapted to the sound effect processing algorithm used by the slave device, the master device can determine that the negotiated sound effect processing situation is Case 2.
  • the master device may determine to use the sound effect processing algorithm IV on the side of the slave device to perform sound effect processing on the media audio data A between the master and the slave device.
  • the sound effect processing algorithm IV belongs to the aforementioned multiple sets of sound effect processing algorithms on the slave device side.
• The sound effect processing algorithm IV can be a sound effect processing algorithm used by default on the slave device side, or another type of sound effect processing algorithm.
  • S213 Establish an A2DP connection between the master device and the slave device based on the L2CAP channel used for A2DP.
  • the A2DP connection may be a streaming connection established based on the Bluetooth connection and the L2CAP channel used for A2DP. Regarding the establishment of the A2DP connection between the master and the slave device, reference may be made to the establishment process of the media audio connection shown in FIG. 3, which will not be repeated here.
  • S214 The main device detects that the user plays the media audio A.
  • the user can operate and play media audio A on the main device.
  • the main device detects the user's operation of playing media audio A.
  • the slave device can also detect the user's operation of playing media audio A and feed it back to the master device.
  • S215 The master device sends the media audio data A to the slave device through the A2DP connection.
  • the master device determines not to use the sound effect processing function on the master device side, so the master device sends the media audio data A to the slave device without performing sound effect processing on the media audio data A.
  • the slave device uses the sound effect processing algorithm IV to perform sound effect processing on the media audio data A to obtain the media audio data C.
  • the master device determines, according to the instruction information of the slave device, that joint sound effect processing can be performed between the master device and the slave device.
• If the master device determines, according to device parameters such as the manufacturer and product model fed back by the slave device, that the slave device supports sound effect processing, and can obtain from the multiple sets of sound effect processing algorithms on the master device side a sound effect processing algorithm II adapted to the sound effect processing algorithm I used by the slave device, the master device can determine that the negotiated sound effect processing situation is Case 3.
  • the sound effect processing algorithm II can be used for the sound effect processing on the main device side in the joint sound effect processing.
• The master device selects a sound effect processing algorithm II adapted to the sound effect processing algorithm I according to the sound effect processing algorithm I used by the slave device, and determines to use the sound effect processing algorithm II on the master device side to perform sound effect processing on the media audio data A between the master and slave devices.
  • the master device can obtain the sound effect processing algorithm II on the master device side that is compatible with the sound effect processing algorithm I on the slave device side locally or in the cloud.
  • the sound effect processing algorithm II belongs to multiple sets of sound effect processing algorithms on the main device side.
• The sound effect processing algorithm II is generally a sound effect processing algorithm specially selected by the master device for Case 3 and adapted to the sound effect processing algorithm I on the slave device side.
• The sound effect processing algorithm II on the master device side and the sound effect processing algorithm I on the slave device side cooperate to process the media audio, complementing each other to present better sound effects.
  • S220 Establish an A2DP connection between the master device and the slave device based on the L2CAP channel used for A2DP.
  • the A2DP connection may be a streaming connection established based on the Bluetooth connection and the L2CAP channel used for A2DP. Regarding the establishment of the A2DP connection between the master and the slave device, reference may be made to the establishment process of the media audio connection shown in FIG. 3, which will not be repeated here.
  • S221 The main device detects that the user plays media audio A.
  • the user can operate and play media audio A on the main device.
  • the main device detects the user's operation of playing media audio A.
  • the slave device can also detect the user's operation of playing media audio A and feed it back to the master device.
  • S222 The main device uses the sound effect processing algorithm II to perform sound effect processing on the media audio data A to obtain the media audio data D.
  • S223 The master device sends the media audio data D to the slave device through the A2DP connection.
  • the master device sends the media audio data D after the sound effect processing to the slave device through the A2DP connection.
  • S224 The slave device uses the sound effect processing algorithm I to perform sound effect processing on the media audio data D to obtain the media audio data E.
  • L2CAP can interact with the Bluetooth upper-layer audio profile, so that the upper-layer audio profile can obtain the negotiation result.
• L2CAP belongs to the bottom layer of the Bluetooth connection, so the negotiation can be completed before the Bluetooth profile connection is established, ensuring that the negotiation finishes before the media audio data arrives and that sound effect processing can be performed on the media audio data in time.
  • the master and slave devices can also establish an L2CAP channel based on other types of L2CAP to conduct the aforementioned audio processing negotiation.
• The AT command is part of the HFP protocol and is used to transmit control signaling between the AG (Audio Gateway) and the HF (Hands-Free unit) through the RFCOMM (serial port emulation protocol) channel.
  • the encoding format of AT commands is ASCII code.
• The call audio connection may be, for example, an SCO connection. Different from the media audio connection, the establishment of the SCO connection requires a user operation (answering a call) or triggering by an internal event; that is, the call audio connection is established only when there is a call audio service.
  • Fig. 9 shows the wireless communication method provided in the second embodiment.
  • a Bluetooth connection (ACL connection) is established and maintained between the master device and the slave device.
  • the method may include:
  • an L2CAP channel for RFCOMM is established between the master device and the slave device based on the ACL connection.
  • step 1 in the establishment process of the call audio control connection shown in FIG. 5, which will not be repeated here.
  • an RFCOMM connection is established between the master device and the slave device based on the RFCOMM L2CAP channel.
  • step 2 in the establishment process of the call audio control connection shown in FIG. 5, which will not be repeated here.
  • the master device sends a specific extended AT command to the slave device based on the RFCOMM connection to query the device parameters such as the manufacturer and product model of the slave device.
  • the extended AT command is implemented based on the ASCII code command line, and its format is as follows:
• <> indicates a part that must be included, and [] indicates an optional part.
• In a command: AT+ is the command information prefix; CMD is the command string; [para-n] is the parameter setting input, which is not needed for a query; <CR> is the end character (carriage return), with ASCII code 0x0d.
• In a response: + is the response message prefix; RSP is the response string, where "ok" means success and "ERR" means failure; [para-n] is the returned parameter for a query, or the error code on failure; <CR> is the end character (carriage return), with ASCII code 0x0d; <LF> is the terminator (line feed), with ASCII code 0x0a.
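• Based on the format above, a hypothetical extended AT exchange could be built and parsed as follows; the command name "SNDCAP" and the parameter values are invented for illustration and are not defined by this application.

    CR, LF = "\r", "\n"

    def build_query(cmd):
        # Query form of an extended command, e.g. "AT+SNDCAP?\r"
        return "AT+" + cmd + "?" + CR

    def parse_response(line):
        # Expected shape per the format above: "+ok:VendorA,BT-Headset-1\r\n"
        body = line.strip()[1:]                   # drop the '+' prefix
        rsp, _, params = body.partition(":")
        return rsp, params.split(",") if params else []

    print(build_query("SNDCAP"))                              # -> AT+SNDCAP?
    print(parse_response("+ok:VendorA,BT-Headset-1" + CR + LF))
    # -> ('ok', ['VendorA', 'BT-Headset-1'])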
• The slave device may send a response message to the specific extended AT command (carrying indication information of the sound effect processing capability of the slave device) to the master device based on the RFCOMM connection.
• The master device can determine, according to device parameters such as the manufacturer and product model fed back by the slave device, whether the slave device has sound effect processing capability and whether the master and slave devices support joint sound effect processing, that is, whether the master device side has a sound effect processing algorithm adapted to the sound effect processing algorithm on the slave device side.
  • the master device may determine, according to the instruction information of the slave device, that joint sound effect processing cannot be performed between the master device and the slave device.
  • the slave device does not support sound effect processing, that is, it is determined that the case of negotiating sound effect processing is case 1.
• If the master device determines, according to the indication information of the slave device's sound effect processing capability, such as the manufacturer, product model, and other device parameters, that the slave device does not support sound effect processing, or that the slave device's sound effect processing effect on the audio data is poor, the master device can determine that the negotiated sound effect processing situation is Case 1.
  • the master device may determine to use the sound effect processing algorithm III on the master device side to perform sound effect processing on the call audio data A between the master and slave devices.
  • the sound effect processing algorithm III belongs to the aforementioned multiple sound effect processing algorithms on the main device side.
• The sound effect processing algorithm III can be the sound effect processing algorithm used by the master device side by default, or a sound effect processing algorithm specially selected by the master device for Case 1.
  • S307 A service-level connection is established between the master device and the slave device.
  • step 3 in the establishment process of the call audio control connection shown in FIG. 5, which will not be repeated here.
  • S308 The main device detects that the user answers the call or makes a call.
  • the user can operate on the main device to answer an incoming call or make a call.
  • the main device detects the user's operation of answering an incoming call or making a call.
  • the slave device can also detect the user's operation of answering an incoming call or dialing a phone call, and feedback to the master device.
  • S310 The main device detects the call audio data A.
• S311 The master device uses the sound effect processing algorithm III to perform sound effect processing on the call audio data A to obtain call audio data B.
  • S312 The master device sends the call audio data B to the slave device through the SCO connection.
  • the master device sends the call audio data B after the sound effect processing to the slave device through the SCO connection.
• The slave device does not support sound effect processing. After receiving the call audio data B from the master device, the slave device plays it without performing sound effect processing on it.
  • the master device may determine, according to the instruction information of the slave device, that joint sound effect processing cannot be performed between the master device and the slave device.
  • the slave device supports sound effect processing, that is, it is determined that the case of negotiating sound effect processing is case 2.
• If the master device determines, according to the indication information of the slave device's sound effect processing capability, such as the manufacturer, product model, and other device parameters, that the slave device supports sound effect processing, but cannot obtain a sound effect processing algorithm on the master device side adapted to the sound effect processing algorithm used by the slave device, the master device can determine that the negotiated sound effect processing situation is Case 2.
  • the master device may determine to use the sound effect processing algorithm IV on the slave device side to perform sound effect processing on the call audio data A between the master and slave devices.
  • the sound effect processing algorithm IV belongs to the aforementioned multiple sets of sound effect processing algorithms on the slave device side.
• The sound effect processing algorithm IV can be a sound effect processing algorithm used by default on the slave device side, or another type of sound effect processing algorithm.
  • S316 A service-level connection is established between the master device and the slave device.
  • step 3 in the establishment process of the call audio control connection shown in FIG. 5, which will not be repeated here.
  • S317 The main device detects that the user answers the call or makes a call.
  • the user can operate on the main device to answer an incoming call or make a call.
  • the main device detects the user's operation of answering an incoming call or making a call.
  • the slave device can also detect the user's operation of answering an incoming call or dialing a call, and feed it back to the master device.
  • S319 The main device detects the call audio data A.
  • S320 The master device sends the call audio data A to the slave device through the SCO connection.
  • the master device determines not to use the sound effect processing function on the master device side, so the master device does not perform sound effect processing on the call audio data A before sending the call audio data A to the slave device.
  • the slave device uses the sound effect processing algorithm IV to perform sound effect processing on the call audio data A to obtain the call audio data C.
  • the master device determines, according to the instruction information of the slave device, that joint sound effect processing can be performed between the master device and the slave device.
• If the master device determines, according to device parameters such as the manufacturer and product model fed back by the slave device, that the slave device supports sound effect processing, and can obtain from the multiple sets of sound effect processing algorithms on the master device side a sound effect processing algorithm II adapted to the sound effect processing algorithm I used by the slave device, the master device can determine that the negotiated sound effect processing situation is Case 3.
  • the sound effect processing algorithm II can be used for the sound effect processing on the main device side in the joint sound effect processing. Refer to step S115.
  • the master device selects a sound effect processing algorithm II that is compatible with the sound effect processing algorithm I according to the sound effect processing algorithm I used by the slave device, and determines to use the sound effect processing algorithm II on the master device side for the call audio data between the master and slave devices A for sound processing.
  • the master device can obtain the sound effect processing algorithm II on the master device side that is compatible with the sound effect processing algorithm I on the slave device side locally or in the cloud.
  • the sound effect processing algorithm II belongs to multiple sets of sound effect processing algorithms on the main device side.
• The sound effect processing algorithm II is generally a sound effect processing algorithm specially selected by the master device for Case 3 and adapted to the sound effect processing algorithm I on the slave device side.
• The sound effect processing algorithm II on the master device side and the sound effect processing algorithm I on the slave device side cooperate to process the call audio, complementing each other to present better sound effects.
  • S325 A service-level connection is established between the master device and the slave device.
  • step 3 in the establishment process of the call audio control connection shown in FIG. 5, which will not be repeated here.
  • S326 The main device detects that the user answers the call or makes a call.
  • the user can operate on the main device to answer an incoming call or make a call.
  • the main device detects the user's operation of answering an incoming call or dialing a call.
  • the slave device can also detect the user's operation of answering an incoming call or dialing a phone call, and feedback to the master device.
  • S328 The main device detects the call audio data A.
  • S329 The main device uses the sound effect processing algorithm II to perform sound effect processing on the call audio data A to obtain the call audio data D.
  • S330 The master device sends the call audio data D to the slave device through the SCO connection.
  • the master device sends the call audio data D after the sound effect processing to the slave device through the SCO connection.
  • S331 The slave device uses the sound effect processing algorithm I to perform sound effect processing on the call audio data D to obtain the call audio data E.
  • the call audio data A is processed by the combined sound effect processing of the sound effect processing algorithm II on the master device side and the sound effect processing algorithm I on the slave device side, and finally the call audio data E is obtained.
• When the call ends, the SCO connection will also be disconnected.
• The master and slave devices use AT commands to negotiate the sound effect processing, which is highly efficient and enables the master and slave devices to obtain the negotiation result before the service-level connection is established, ensuring that the negotiation is completed before the user answers or makes a call, so that the call audio data can be processed in time to improve call audio quality.
  • the master device may further refine the sound effect processing process on both sides of the master device and the slave device according to the remaining power of the master device and the slave device.
• When the master device alone performs sound effect processing on the audio data, if the power of the master device is low, the master device can turn off the sound effect processing algorithm on the master device side to save power.
• When the slave device alone performs sound effect processing on the audio data, if the power of the slave device is low, the master device can send a notification to the slave device to notify it to no longer perform sound effect processing, or the slave device can actively turn off its own sound effect processing function to save the slave device's power.
• When the master and slave devices support joint sound effect processing and cooperate to perform sound effect processing on the audio data, if the power of the master device or the slave device is found to be lower than a certain value, such as 10%, the joint sound effect processing can be abandoned. After abandoning joint sound effect processing, power consumption is considered comprehensively: if the power of the slave device is low, the one-sided sound effect processing algorithm of the master device, whose power is relatively sufficient, performs sound effect processing on the audio data; if the power of the master device is low, the one-sided sound effect processing algorithm of the slave device, whose power is relatively sufficient, performs sound effect processing on the audio data; if the power of both the master and slave devices is low, the sound effect processing function is turned off on both sides to reduce power consumption and save power. A sketch of this policy follows below.
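• The battery-aware fallback described above can be sketched as follows; only the 10% threshold comes from the text, and the strategy names are illustrative.

    LOW_BATTERY = 0.10   # the "less than 10%" example value from the text

    def effects_strategy(master_level, slave_level):
        master_low = master_level < LOW_BATTERY
        slave_low = slave_level < LOW_BATTERY
        if master_low and slave_low:
            return "off_both_sides"   # turn sound effects off on both sides
        if slave_low:
            return "master_only"      # master's power is relatively sufficient
        if master_low:
            return "slave_only"       # slave's power is relatively sufficient
        return "joint"                # keep joint sound effect processing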
• The embodiment of the present application also provides a wireless communication method, which is used in the case of a Bluetooth low energy (BLE) connection.
• The slave device can send a BLE broadcast, which can contain related parameters of the slave device, such as the name of the slave device, the manufacturer of the slave device, the service UUID of the slave device, etc.
• After the master device searches for the broadcast, it initiates a connection request. After the connection is established, the master device can obtain the parameters of the slave device. Alternatively, when establishing the BLE connection, the master and slave devices can negotiate the sound effect processing through extended commands.
• Bluetooth Low Energy supports negotiating the sound effect processing between the master and slave devices through broadcasting, and can achieve high-efficiency interaction with lower power consumption.
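• As an illustration, the following sketch parses the advertised fields mentioned above; the AD structure layout and type codes follow the BLE specification, while the capability bit inside the manufacturer-specific data is an assumption for this example.

    AD_COMPLETE_NAME = 0x09   # AD types from the Bluetooth assigned numbers
    AD_MANUFACTURER = 0xFF

    def parse_advertisement(data):
        # Each AD structure: length (1 B) | type (1 B) | payload (length - 1 B).
        fields, i = {}, 0
        while i < len(data) and data[i] != 0:
            length = data[i]
            fields[data[i + 1]] = data[i + 2 : i + 1 + length]
            i += 1 + length
        return fields

    adv = bytes([8, AD_COMPLETE_NAME]) + b"Earbuds" \
        + bytes([4, AD_MANUFACTURER, 0x5D, 0x00, 0x01])
    fields = parse_advertisement(adv)
    print(fields[AD_COMPLETE_NAME].decode())   # -> Earbuds
    print(fields[AD_MANUFACTURER][2] & 0x01)   # assumed capability bit -> 1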
  • the electronic device 10 may be implemented as the main device mentioned in the above embodiment, and may be the electronic device 101 in the wireless audio system 100 shown in FIG. 1A or the electronic device 201 in the wireless audio system 200 shown in FIG. 2A.
• The electronic device 10 can generally be used as an audio source, such as a mobile phone or a tablet computer, and can transmit audio data to other audio sink devices, such as earphones and speakers, so that the audio sinks can convert the received audio data into sound.
• The electronic device 10 can also be used as an audio sink to receive audio data transmitted by an audio source of another device (such as a headset with a microphone), for example, audio data converted from the user's speech collected by the headset.
  • FIG. 10 shows a schematic diagram of the structure of the electronic device 10.
• The electronic device 10 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
• The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 10.
  • the electronic device 10 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
• The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the electronic device 10 may also include one or more processors 110.
  • the controller may be the nerve center and command center of the electronic device 10.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
• The memory can store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the electronic device 10.
  • the processor 110 may include one or more interfaces.
• The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the wireless communication function of the electronic device 10 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
• The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 10, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 may also receive a signal to be sent from the processor 110, perform frequency modulation, amplify, and convert it into electromagnetic waves to radiate through the antenna 2.
  • the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
  • the antenna 1 of the electronic device 10 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 10 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), broadband Code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC , FM, and/or IR technology, etc.
  • the GNSS may include global positioning system (GPS), global navigation satellite system (GLONASS), Beidou navigation satellite system (BDS), quasi-zenith satellite system (quasi -zenith satellite system, QZSS) and/or satellite-based augmentation systems (SBAS).
  • the electronic device 10 can implement a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
• The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 10 may include one or N display screens 194, and N is a positive integer greater than one.
• The NPU is a neural-network (NN) computing processor. With the NPU, applications such as intelligent cognition of the electronic device 10 can be realized, for example, image recognition, face recognition, speech recognition, and text understanding.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 10.
• The external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function, for example, saving music, photos, videos, and other data in the external memory card.
  • the internal memory 121 may be used to store one or more computer programs, and the one or more computer programs include instructions.
  • the processor 110 can run the above-mentioned instructions stored in the internal memory 121 to enable the electronic device 10 to execute the data sharing methods provided in some embodiments of the present application, as well as various functional applications and data processing.
  • the internal memory 121 may include a storage program area and a storage data area. Among them, the storage program area can store the operating system; the storage program area can also store one or more application programs (such as a gallery, contacts, etc.) and so on.
  • the data storage area can store data (such as photos, contacts, etc.) created during the use of the electronic device 10.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the electronic device 10 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
• The speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 10 can listen to music through the speaker 170A, or listen to a hands-free call.
• The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
• When the electronic device 10 answers a call or plays a voice message, the receiver 170B can be brought close to the human ear to receive the voice.
• The microphone 170C, also called a "mike" or "mic", is used to convert sound signals into electrical signals. The user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 10 may be provided with at least one microphone 170C. In other embodiments, the electronic device 10 may be provided with two microphones 170C, which can implement noise reduction functions in addition to collecting sound signals. In other embodiments, the electronic device 10 may also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and realize directional recording functions.
  • the earphone interface 170D is used to connect wired earphones.
• The earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the button 190 includes a power-on button, a volume button, and so on.
• The buttons 190 may be mechanical buttons or touch buttons.
  • the electronic device 10 can receive key input, and generate key signal input related to user settings and function control of the electronic device 10.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status and battery level changes, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out of it, to bring the card into contact with or separate it from the electronic device 10.
  • the electronic device 10 may support 1 or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the electronic device 10 adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 10 and cannot be separated from the electronic device 10.
  • the electronic device 10 exemplarily shown in FIG. 10 can display, through the display screen 194, the user interfaces described in the following embodiments.
  • the electronic device 10 can detect touch operations in each user interface through the touch sensor 180K, such as tap operations (for example, a touch operation or a double-tap operation on an icon), upward or downward swipe operations, or circle-drawing gestures in each user interface.
  • the electronic device 10 can detect a motion gesture performed by the user holding the electronic device 10, such as shaking the electronic device, through the gyroscope sensor 180B, the acceleration sensor 180E, and the like.
  • the electronic device 10 can detect non-touch gesture operations through the camera 193 (eg, a 3D camera, a depth camera).
  • in some implementations, the terminal application processor (AP) included in the electronic device 10 can implement the Host in the Bluetooth protocol framework, and the Bluetooth (BT) module included in the electronic device 10 can implement the Controller in the Bluetooth protocol framework; the two communicate with each other through the HCI. That is, the functions of the Bluetooth protocol framework are distributed across two chips.
  • in other embodiments, the terminal application processor (AP) of the electronic device 10 may implement both the Host and the Controller in the Bluetooth protocol framework. That is, all functions of the Bluetooth protocol framework are placed on one chip; because the Host and the Controller are on the same chip, there is no need for a physical HCI, and the Host and the Controller interact directly through an application programming interface (API).
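  • as a rough illustration of the two-chip arrangement, the Host hands the Controller serialized HCI command packets over a physical transport such as UART. The sketch below follows the HCI command format of the Bluetooth Core Specification (the indicator byte 0x01 belongs to the common UART transport); the helper itself is illustrative and is not code from this application.

    def hci_command(ogf: int, ocf: int, params: bytes = b"") -> bytes:
        # 16-bit opcode: upper 6 bits are the opcode group field (OGF),
        # lower 10 bits are the opcode command field (OCF).
        opcode = (ogf << 10) | ocf
        return bytes([
            0x01,            # UART packet indicator: HCI command
            opcode & 0xFF,   # opcode, little-endian
            opcode >> 8,
            len(params),     # parameter total length
        ]) + params

    # Example: HCI_Reset (OGF 0x03, OCF 0x0003) serializes to 01 03 0c 00.
    assert hci_command(0x03, 0x0003) == b"\x01\x03\x0c\x00"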
  • the audio receiver device 300 may be implemented as the slave device mentioned in the foregoing embodiments, such as a Bluetooth headset, and may be the audio output device 106 in the wireless audio system 100 shown in FIG. 1A, or the audio output device 202 or the audio output device 203 in the wireless audio system 200 shown in FIG. 2A.
  • the audio receiver device 300 (audio sink), such as an earphone or a speaker, can receive audio data transmitted by an audio source, such as a mobile phone or a tablet computer, and can convert the received audio data into sound.
  • if provided with a sound collection device such as a microphone, the audio receiver device 300 can also be used as an audio source to transmit audio data (such as audio data converted from the voice of the user's speech collected by the headset) to the audio sink of another device (such as a mobile phone).
  • the audio receiver device 300 may be a pair of Bluetooth earphones, including a left earphone and a right earphone.
  • the Bluetooth headset can be a neck-worn Bluetooth headset or a TWS Bluetooth headset.
  • FIG. 11 exemplarily shows a schematic structural diagram of an audio receiver device 300 provided by the technical solution of the present application.
  • the audio receiver device 300 may include a processor 302, a memory 303, a Bluetooth communication processing module 304, a power supply 305, a sensor 306, a microphone 307, and an electric/acoustic converter 308. These components can be connected via a bus.
  • if the audio receiver device 300 is a neck-worn Bluetooth headset, the audio receiver device 300 may also include a wire control.
  • the processor 302, the memory 303, the Bluetooth communication processing module 304, and the power supply 305 can be integrated into the wire control.
  • if the audio receiver device 300 is a TWS Bluetooth headset, each of the two earphones may be integrated with a processor 302, a memory 303, a Bluetooth communication processing module 304, and a power supply 305.
  • the processor 302 can be used to read and execute computer readable instructions.
  • the processor 302 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding, and sends out control signals for the operation corresponding to the instruction.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations and logical operations, etc., and can also perform address operations and conversions.
  • the register is mainly responsible for temporarily storing register operands and intermediate operation results during instruction execution.
  • the hardware architecture of the processor 302 may be an application specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, or an NP architecture, and so on.
  • the processor 302 may be used to parse the signals received by the Bluetooth communication processing module 304, such as signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the processor 302 may be used to perform corresponding processing operations according to the analysis result, such as driving the electric/acoustic converter 308 to start or pause or stop converting audio data into sound, and so on.
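  • a minimal sketch of such parse-and-drive handling might look as follows; the message codes and the converter interface are hypothetical names introduced only for illustration.

    START, PAUSE, STOP = 0x01, 0x02, 0x03  # assumed content control codes

    def handle_content_control(msg_code: int, converter) -> None:
        # Drive the electric/acoustic converter 308 according to the parsed message.
        if msg_code == START:
            converter.start()   # begin converting audio data into sound
        elif msg_code == PAUSE:
            converter.pause()
        elif msg_code == STOP:
            converter.stop()
        # unknown codes are simply ignored in this sketch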
  • the processor 302 may also be used to generate signals sent by the Bluetooth communication processing module 304, such as Bluetooth broadcast signals, beacon signals, and audio data converted into collected sounds.
  • the memory 303 is coupled with the processor 302, and is used to store various software programs and/or multiple sets of instructions.
  • the memory 303 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 303 can store an operating system, such as embedded operating systems such as uCOS, VxWorks, and RTLinux.
  • the memory 303 may also store a communication program, which may be used to communicate with the electronic device 10, one or more servers, or additional devices.
  • the Bluetooth (BT) communication processing module 304 can receive signals transmitted by other devices (such as the electronic device 10), such as scanning signals, broadcast signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the Bluetooth (BT) communication processing module 304 can also transmit signals, such as broadcast signals, scan signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the power supply 305 can be used to supply power to other internal components such as the processor 302, the memory 303, the Bluetooth communication processing module 304, the sensor 306, and the electric/acoustic converter 308.
  • the sensor 306 may include, for example, an infrared sensor, a pressure sensor, a Hall sensor, a proximity light sensor, and so on. Among them, infrared sensors and pressure sensors can be used to detect the wearing state of the headset. Hall sensors and proximity light sensors can be used to detect whether the left and right earphones are pulled together.
  • the microphone 307 can be used to collect sound, such as the voice of a user speaking, and can output the collected sound to the electric/acoustic converter 308, so that the electric/acoustic converter 308 can convert the sound collected by the microphone 307 into audio data.
  • the electric/acoustic converter 308 can be used to convert sound into electric signals (audio data), for example, to convert the sound collected by the microphone 307 into audio data, and can transmit the audio data to the processor 302. In this way, the processor 302 can trigger the Bluetooth (BT) communication processing module 304 to transmit the audio data.
  • the electrical/acoustic converter 308 may also be used to convert electrical signals (audio data) into sound, for example, to convert audio data output by the processor 302 into sound.
  • the audio data output by the processor 302 may be received by the Bluetooth (BT) communication processing module 304.
  • the processor 302 may implement the Host in the Bluetooth protocol framework, and the Bluetooth (BT) communication processing module 304 may implement the controller in the Bluetooth protocol framework, and the two communicate through HCI. That is, the functions of the Bluetooth protocol framework are distributed on two chips.
  • in other embodiments, the processor 302 may implement both the Host and the Controller in the Bluetooth protocol framework. That is, all functions of the Bluetooth protocol framework are placed on one chip; because the Host and the Controller are on the same chip, there is no need for a physical HCI, and the Host and the Controller interact directly through an application programming interface (API).
  • the structure illustrated in FIG. 11 does not constitute a specific limitation on the audio receiver device 300.
  • the audio receiver device 300 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • a person of ordinary skill in the art can understand that all or part of the processes of the foregoing method embodiments can be completed by a computer program instructing relevant hardware.
  • the program can be stored in a computer-readable storage medium; when the program is executed, the processes of the foregoing method embodiments may be included.
  • the aforementioned storage media include media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.
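  • for reference, the capability query negotiated between the master device and the slave device can ride on standard L2CAP signaling; claims 13 and 14 below recite an L2CAP echo request/response for this purpose. The following minimal sketch frames such a probe using the standard L2CAP signaling command format; carrying indication information in the echo payload is an assumption of this illustration, as L2CAP leaves the echo data opaque.

    import struct

    L2CAP_ECHO_REQ = 0x08  # standard L2CAP signaling code: echo request
    L2CAP_ECHO_RSP = 0x09  # standard L2CAP signaling code: echo response

    def l2cap_signaling(code: int, identifier: int, payload: bytes = b"") -> bytes:
        # Standard signaling header: code, identifier, length (little-endian).
        return struct.pack("<BBH", code, identifier, len(payload)) + payload

    # Master probes the slave's sound effect processing capability:
    request = l2cap_signaling(L2CAP_ECHO_REQ, 0x01)
    # Slave answers with indication information such as manufacturer and model
    # (the payload layout shown here is hypothetical):
    response = l2cap_signaling(L2CAP_ECHO_RSP, 0x01, b"vendor=...;model=...;fx=1")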

Abstract

Embodiments of this application provide a wireless communication method in which, after a master device and a slave device establish a wireless connection, sound effect processing is negotiated. Based on the indication information fed back by the slave device, the master device determines whether joint sound effect processing can be performed between the master and slave devices; if joint sound effect processing can be performed, mutually adapted sound effect processing is performed on both the master-device side and the slave-device side. In this way, the sound effect processing of the master device and that of the slave device do not, on the whole, cancel each other out, but reinforce each other, thereby presenting a better sound effect.
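As a compact illustration of the master-side decision summarized above (and recited in claims 1, 5, and 6 below), the following sketch enumerates the three possible negotiation outcomes; the function and attribute names are hypothetical and introduced only for illustration.

    def negotiate(slave_info, find_adapted_algorithm, default_algorithm):
        adapted = find_adapted_algorithm(slave_info)
        if adapted is not None:
            # Joint sound effect processing: the master runs the adapted
            # algorithm, and the slave then runs its own algorithm.
            return "joint", adapted
        if not slave_info.supports_effects:
            # The slave has no sound effect capability: master processes alone.
            return "master_only", default_algorithm
        # No adapted master-side algorithm, but the slave can process: slave alone.
        return "slave_only", None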


Claims (27)

  1. A wireless communication method, comprising:
    establishing a wireless communication connection between a first device and a second device;
    sending, by the first device, a first request to the second device through the wireless communication connection;
    sending, by the second device, a first response to the first device, wherein the first response carries first indication information, and the first indication information is used to indicate a sound effect processing capability of the second device;
    determining, by the first device, whether the first device has a first sound effect processing algorithm, wherein the first sound effect processing algorithm is a sound effect processing algorithm adapted to a second sound effect processing algorithm used by the second device, and the second sound effect processing algorithm is determined according to the first indication information; and if the first device has the first sound effect processing algorithm:
    establishing an audio connection between the first device and the second device;
    processing, by the first device, first audio data by using the first sound effect processing algorithm to obtain second audio data;
    transmitting, by the first device, the second audio data to the second device through the audio connection;
    processing, by the second device, the second audio data by using the second sound effect processing algorithm to obtain third audio data; and
    playing, by the second device, the third audio data.
  2. The method according to claim 1, wherein the determining, by the first device, whether the first device has a first sound effect processing algorithm specifically comprises:
    selecting, by the first device, one set of sound effect processing algorithms from a plurality of sets of sound effect processing algorithms; and
    processing, by the first device, test audio data successively by using the selected set of sound effect processing algorithms and the second sound effect processing algorithm, and if one or more of the following conditions are met: a signal-to-noise ratio measured on the processed test audio data is better than a first signal-to-noise ratio and a second signal-to-noise ratio; an echo component measured on the processed test audio data is less than a first echo component and a second echo component, determining that the first device has the first sound effect processing algorithm, wherein the first sound effect processing algorithm is the selected set of sound effect processing algorithms;
    wherein the first signal-to-noise ratio and the first echo component are respectively a signal-to-noise ratio and an echo component measured after the test audio data is processed by using the first sound effect processing algorithm, and the second signal-to-noise ratio and the second echo component are respectively a signal-to-noise ratio and an echo component measured after the test audio data is processed by using the second sound effect processing algorithm.
  3. The method according to claim 1, wherein the first indication information comprises a device parameter of the second device, and the determining, by the first device, whether the first device has a first sound effect processing algorithm specifically comprises:
    determining, by the first device, whether a first mapping table contains a sound effect processing algorithm corresponding to the device parameter of the second device, and if so, determining that the first device has the first sound effect processing algorithm, wherein the first sound effect processing algorithm is the sound effect processing algorithm corresponding to the device parameter of the second device;
    wherein the first mapping table records correspondences between device parameters of a plurality of device models and a plurality of sets of sound effect processing algorithms, and a set of sound effect processing algorithms corresponding to the device parameter of a given device model is a sound effect processing algorithm of the first device that is adapted to the sound effect processing algorithm used by that device model; and the device parameter comprises one or more of the following: a manufacturer and a product model of the second device.
  4. The method according to claim 2 or 3, wherein the plurality of sets of sound effect processing algorithms are obtained by the first device from a cloud server.
  5. The method according to any one of claims 1 to 4, further comprising:
    if the first device determines that the first device does not have the first sound effect processing algorithm and the second device does not have a sound effect processing capability:
    establishing the audio connection between the first device and the second device;
    processing, by the first device, the first audio data by using a third sound effect processing algorithm to obtain fourth audio data;
    transmitting, by the first device, the fourth audio data to the second device through the audio connection; and
    playing, by the second device, the fourth audio data.
  6. The method according to any one of claims 1 to 5, further comprising:
    if the first device determines that the first device does not have the first sound effect processing algorithm and the second device has a sound effect processing capability:
    establishing the audio connection between the first device and the second device;
    transmitting, by the first device, the first audio data to the second device through the audio connection;
    processing, by the second device, the first audio data by using the second sound effect processing algorithm to obtain fifth audio data; and
    playing, by the second device, the fifth audio data.
  7. A wireless communication method, comprising:
    establishing a wireless communication connection between a first device and a second device;
    sending, by the first device, a first request to the second device through the wireless communication connection;
    sending, by the second device, a first response to the first device, wherein the first response carries first indication information, and the first indication information is used to indicate a sound effect processing capability of the second device;
    determining, by the first device, whether a plurality of sets of sound effect processing algorithms of the first device include a first sound effect processing algorithm, wherein the first sound effect processing algorithm is a sound effect processing algorithm adapted to a second sound effect processing algorithm used on the second device side, and the second sound effect processing algorithm is determined according to the first indication information; and if the first sound effect processing algorithm exists:
    establishing an audio connection between the first device and the second device, wherein the audio connection is used to transmit first audio data;
    processing, by the first device, the first audio data by using the first sound effect processing algorithm to obtain second audio data; and
    transmitting, by the first device, the second audio data to the second device through the audio connection, wherein the second device is configured to process the second audio data by using the second sound effect processing algorithm to obtain third audio data and to play the third audio data.
  8. The method according to claim 7, wherein the determining, by the first device, whether the first device has a first sound effect processing algorithm specifically comprises:
    selecting, by the first device, one set of sound effect processing algorithms from a plurality of sets of sound effect processing algorithms; and
    processing, by the first device, test audio data successively by using the selected set of sound effect processing algorithms and the second sound effect processing algorithm, and if one or more of the following conditions are met: a signal-to-noise ratio measured on the processed test audio data is better than a first signal-to-noise ratio and a second signal-to-noise ratio; an echo component measured on the processed test audio data is less than a first echo component and a second echo component, determining that the first device has the first sound effect processing algorithm, wherein the first sound effect processing algorithm is the selected set of sound effect processing algorithms;
    wherein the first signal-to-noise ratio and the first echo component are respectively a signal-to-noise ratio and an echo component measured after the test audio data is processed by using the first sound effect processing algorithm, and the second signal-to-noise ratio and the second echo component are respectively a signal-to-noise ratio and an echo component measured after the test audio data is processed by using the second sound effect processing algorithm.
  9. The method according to claim 7, wherein the first indication information comprises a device parameter of the second device, and the determining, by the first device, whether the first device has a first sound effect processing algorithm specifically comprises:
    determining, by the first device, whether a first mapping table contains a sound effect processing algorithm corresponding to the device parameter of the second device, and if so, determining that the first device has the first sound effect processing algorithm, wherein the first sound effect processing algorithm is the sound effect processing algorithm corresponding to the device parameter of the second device;
    wherein the first mapping table records correspondences between device parameters of a plurality of device models and a plurality of sets of sound effect processing algorithms, and a set of sound effect processing algorithms corresponding to the device parameter of a given device model is a sound effect processing algorithm of the first device that is adapted to the sound effect processing algorithm used by that device model; and the device parameter comprises one or more of the following: a manufacturer and a product model of the second device.
  10. The method according to claim 8 or 9, wherein the plurality of sets of sound effect processing algorithms are obtained by the first device from a cloud server.
  11. The method according to any one of claims 7 to 10, further comprising:
    if the first device determines that the first device does not have the first sound effect processing algorithm and the second device does not have a sound effect processing capability:
    determining, by the first device, to use a third sound effect processing algorithm on the first device to perform sound effect processing on audio data between the first device and the second device;
    establishing the audio connection between the first device and the second device, wherein the audio connection is used to transmit the first audio data;
    processing, by the first device, the first audio data by using the third sound effect processing algorithm to obtain fourth audio data; and
    transmitting, by the first device, the fourth audio data to the second device through the audio connection, wherein the second device is configured to play the fourth audio data.
  12. The method according to any one of claims 7 to 11, further comprising:
    if the first device determines that the first device does not have the first sound effect processing algorithm and the second device has a sound effect processing capability:
    establishing the audio connection between the first device and the second device, wherein the audio connection is used to transmit the first audio data; and
    transmitting, by the first device, the first audio data to the second device through the audio connection, wherein the second device is configured to process the first audio data by using the second sound effect processing algorithm to obtain fifth audio data and to play the fifth audio data.
  13. The method according to any one of claims 1 to 12, wherein the wireless communication connection comprises a logical link control and adaptation protocol (L2CAP) connection.
  14. The method according to claim 13, wherein the first request comprises an L2CAP echo request (ECHO request), and the first response comprises an L2CAP echo response (ECHO response).
  15. The method according to claim 13 or 14, wherein the audio connection comprises a media audio connection established based on the L2CAP connection, and the audio data transmitted over the audio connection comprises media audio data.
  16. The method according to claim 15, wherein the media audio connection comprises an advanced audio distribution profile (A2DP) connection.
  17. The method according to any one of claims 13 to 16, wherein the establishing an audio connection between the first device and the second device specifically comprises: when the first device detects a user operation of playing media audio, establishing the audio connection between the first device and the second device.
  18. The method according to any one of claims 1 to 12, wherein the wireless communication connection comprises a radio frequency communication (RFCOMM) connection.
  19. The method according to claim 18, wherein the first request comprises a first AT command, and the first response comprises a first AT response responsive to the first AT command.
  20. The method according to claim 18 or 19, wherein the audio connection comprises a call audio connection, and the data transmitted over the audio connection comprises call audio data.
  21. The method according to claim 20, wherein the call audio connection comprises a synchronous connection oriented (SCO) connection.
  22. The method according to claim 19 or 20, wherein the establishing an audio connection between the first device and the second device specifically comprises: when the first device detects a user operation of answering an incoming call or dialing a call, establishing the audio connection between the first device and the second device.
  23. The method according to any one of claims 1 to 22, wherein the first indication information comprises one or more of the following: a manufacturer and a product model of the second device.
  24. The method according to any one of claims 1 to 23, wherein the sound effect processing capability comprises one or more of the following: a noise reduction capability and an echo cancellation capability, and the second sound effect processing algorithm comprises one or more of the following: a noise reduction algorithm and an echo cancellation algorithm.
  25. An electronic device, comprising a wireless communication module, a memory, a processor coupled to the memory, a plurality of application programs, and one or more programs, wherein when the processor executes the one or more programs, the electronic device is enabled to implement the method according to any one of claims 7 to 24.
  26. A chip system, applied to an electronic device, wherein the chip comprises one or more processors, and the processor is configured to invoke computer instructions to enable the electronic device to perform the method according to any one of claims 7 to 24.
  27. A computer-readable storage medium, comprising instructions, wherein when the instructions are run on an electronic device, the electronic device is enabled to perform the method according to any one of claims 7 to 24.
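For illustration, the adaptation test recited in claims 2 and 8 can be sketched as follows. This is a non-normative Python sketch in which the algorithm interface (process) and the measurement helpers (measure_snr, measure_echo) are assumed names rather than elements of the claims.

    def find_adapted_algorithm(master_algos, slave_algo, test_audio,
                               measure_snr, measure_echo):
        # Baseline: the slave-side (second) algorithm applied alone.
        slave_out = slave_algo.process(test_audio)
        snr_slave, echo_slave = measure_snr(slave_out), measure_echo(slave_out)
        best, best_snr = None, float("-inf")
        for algo in master_algos:              # candidate master-side sets
            solo = algo.process(test_audio)    # candidate applied alone
            snr_solo, echo_solo = measure_snr(solo), measure_echo(solo)
            # Successive processing: master-side algorithm, then slave-side.
            joint = slave_algo.process(algo.process(test_audio))
            snr_joint, echo_joint = measure_snr(joint), measure_echo(joint)
            # Claimed conditions: the jointly processed test audio has a better
            # SNR than both single-sided results and/or a smaller echo component.
            ok = ((snr_joint > snr_solo and snr_joint > snr_slave) or
                  (echo_joint < echo_solo and echo_joint < echo_slave))
            if ok and snr_joint > best_snr:    # keep the best-performing match
                best, best_snr = algo, snr_joint
        return best  # None: no adapted algorithm exists (negotiation cases 1 or 2)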
PCT/CN2021/094998 2020-05-22 2021-05-20 Wireless audio system, wireless communication method and device WO2021233398A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21808553.8A EP4148734A4 (en) 2020-05-22 2021-05-20 WIRELESS AUDIO SYSTEM, WIRELESS COMMUNICATION METHOD AND APPARATUS
US17/926,799 US20230209624A1 (en) 2020-05-22 2021-05-20 Wireless Audio System, Wireless Communication Method, and Device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010446694.7A 2020-05-22 2020-05-22 Wireless audio system, wireless communication method and device
CN202010446694.7 2020-05-22

Publications (1)

Publication Number Publication Date
WO2021233398A1 true WO2021233398A1 (zh) 2021-11-25

Family

ID=78646556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/094998 2020-05-22 2021-05-20 Wireless audio system, wireless communication method and device

Country Status (4)

Country Link
US (1) US20230209624A1 (zh)
EP (1) EP4148734A4 (zh)
CN (1) CN113709906B (zh)
WO (1) WO2021233398A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115334405A (zh) Sound effect configuration method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120186418A1 (en) * 2011-01-26 2012-07-26 Inventec Appliances (Shanghai) Co., Ltd. System for Automatically Adjusting Sound Effects and Method Thereof
CN104240738A (zh) 2014-08-28 2014-12-24 AutoChips (Hefei) Co., Ltd. Sound effect setting method and electronic apparatus
CN107170472A (zh) 2016-03-08 2017-09-15 Alibaba Group Holding Ltd. In-vehicle audio data playing method and device
CN109754825A (zh) 2018-12-26 2019-05-14 Guangzhou Huaduo Network Technology Co., Ltd. Audio processing method, apparatus and device
US20190220240A1 (en) * 2016-06-16 2019-07-18 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Sound effect configuration method and system and related device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7856240B2 (en) * 2004-06-07 2010-12-21 Clarity Technologies, Inc. Distributed sound enhancement
US8233648B2 (en) * 2008-08-06 2012-07-31 Samsung Electronics Co., Ltd. Ad-hoc adaptive wireless mobile sound system
KR102351368B1 (ko) 2015-08-12 2022-01-14 Samsung Electronics Co., Ltd. Method and apparatus for outputting audio in an electronic device
CN106095558B (zh) 2016-06-16 2019-05-10 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Sound effect processing method and terminal
US11036463B2 (en) * 2017-03-13 2021-06-15 Sony Corporation Terminal device, control method, and audio data reproduction system
KR102388803B1 (ko) 2017-11-02 2022-04-20 Samsung Electronics Co., Ltd. Semiconductor memory device, memory system including the same, and method of operating the semiconductor memory device
CN109218528B (zh) 2018-09-04 2021-03-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Sound effect processing method and apparatus, and electronic device
CN109994119B (zh) 2019-04-04 2021-07-06 Zhuhai Jieli Technology Co., Ltd. Wireless voice adaptation apparatus and system, and audio playback control method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4148734A4

Also Published As

Publication number Publication date
CN113709906B (zh) 2023-12-08
EP4148734A1 (en) 2023-03-15
CN113709906A (zh) 2021-11-26
US20230209624A1 (en) 2023-06-29
EP4148734A4 (en) 2023-10-25

Similar Documents

Publication Publication Date Title
US11653398B2 (en) Bluetooth connection method and device
CN112868244B (zh) Point-to-multipoint data transmission method and device
US11778363B2 (en) Audio data transmission method applied to switching between single-earbud mode and double-earbud mode of TWS headset and device
WO2020124581A1 (zh) Audio data transmission method and electronic device
CN112913321B (zh) Method, device and system for making a call by using a Bluetooth headset
EP4061027B1 (en) Bluetooth communication method and apparatus
CN113039822B (zh) Data channel establishment method and device
CN113169915B (zh) Wireless audio system, audio communication method and device
US20240040481A1 (en) Tws headset connection method and device
WO2021013196A1 (zh) Method and device for simultaneous response
WO2022213689A1 (zh) Method and device for voice intercommunication between audio devices
CN112771828B (zh) Audio data communication method and electronic device
WO2021233398A1 (zh) Wireless audio system, wireless communication method and device
CN113132959B (zh) Wireless audio system, wireless communication method and device
WO2021043277A1 (zh) Method for configuring Bluetooth connection parameters and electronic device
EP4246982A1 (en) Sound effect adjustment method and electronic device
WO2023236670A1 (zh) Data transmission management method, electronic device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21808553

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021808553

Country of ref document: EP

Effective date: 20221205

NENP Non-entry into the national phase

Ref country code: DE