WO2020107491A1 - Wireless audio system and audio communication method and device


Info

Publication number
WO2020107491A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
service
module
isochronous
content control
Application number
PCT/CN2018/118791
Other languages
English (en)
Chinese (zh)
Inventor
朱宇洪
王良
郑勇
张景云
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to PCT/CN2018/118791
Priority to CN201880099860.1A
Priority to CN202211136691.9A
Publication of WO2020107491A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00: Data switching networks
    • H04L12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L12/40: Bus networks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W56/00: Synchronisation arrangements
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • This application relates to the field of wireless technology, in particular to a wireless audio system, audio communication method and equipment.
  • Bluetooth (Bluetooth) wireless technology is a short-range communication system intended to replace cable connections between portable and/or stationary electronic devices.
  • the key features of Bluetooth wireless communication technology are stability, low power consumption and low cost. Many features of its core specifications are optional and support product differentiation.
  • Bluetooth wireless technology has two forms of system: basic rate (BR) and low energy (LE). Both systems include device discovery, connection establishment, and connection mechanisms.
  • The basic rate BR system may include an optional enhanced data rate (EDR) and alternate media access control layer and physical layer extensions (AMP).
  • Devices that implement the two systems BR and LE can communicate with other devices that also implement the two systems. Some profiles and use cases are only supported by one of the systems. Therefore, devices that implement these two systems have the ability to support more use cases.
  • Bluetooth Profile is a unique concept of Bluetooth protocol.
  • The Bluetooth protocol not only specifies the core specification (called the Bluetooth core), but also defines various application layer specifications for various application scenarios. These application layer specifications are called Bluetooth profiles.
  • The Bluetooth protocol has developed application layer specifications (profiles) for various possible and common application scenarios, such as the advanced audio distribution profile (A2DP), audio/video remote control profile (AVRCP), basic imaging profile (BIP), hands-free profile (HFP), human interface device profile (HID), headset profile (HSP), serial port profile (SPP), file transfer profile (FTP), personal area network profile (PAN), and many more.
  • This application provides a wireless audio system, an audio communication method, and a device, which can solve the problem of poor compatibility in the existing Bluetooth protocol.
  • the present application provides an audio communication method, which is applied to an audio source side.
  • The method may include: establishing an ACL link between an audio source (such as a mobile phone or a media player) and an audio receiver (such as a headset).
  • the audio source can negotiate parameters with the audio receiver through the ACL link.
  • an isochronous data transmission channel can be established between the audio source and the audio receiver.
  • the isochronous data transmission channel can be used to transmit streaming data (i.e. audio data) of the first audio service.
  • a Bluetooth low energy connection is established between the audio source and the audio receiver.
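For illustration, the following Python sketch models the source-side flow described above: establish the LE ACL link, negotiate per-service parameters, create the isochronous data transmission channel, and then send audio data over it. All class, method, and parameter names are hypothetical and are not taken from the patent or from any real Bluetooth stack API.

```python
from dataclasses import dataclass

@dataclass
class ServiceParams:
    """Hypothetical per-service parameter set (QoS, codec, ISO)."""
    qos_latency_ms: int
    codec: str
    cis_count: int

class AudioSink:
    """Illustrative audio-receiver role."""
    def negotiate(self, service, params):
        # The sink may accept the proposal as-is or adjust it before confirming.
        return params

    def receive_audio(self, service, pcm_frame):
        print(f"playing {len(pcm_frame)} bytes for service '{service}'")

class AudioSource:
    """Illustrative audio-source role; names are assumptions, not a real API."""
    def __init__(self, sink):
        self.sink = sink
        self.acl_up = False
        self.iso_channels = {}

    def connect(self):
        # Step 1: establish the LE ACL link used for flow/content control messages.
        self.acl_up = True

    def start_service(self, service, params):
        # Step 2: negotiate parameters for this audio service over the LE ACL link.
        agreed = self.sink.negotiate(service, params)
        # Step 3: create an LE isochronous data transmission channel from the agreed parameters.
        self.iso_channels[service] = agreed
        return agreed

    def send_audio(self, service, pcm_frame):
        # Step 4: streaming data travels over the per-service isochronous channel.
        assert service in self.iso_channels, "isochronous channel not created yet"
        self.sink.receive_audio(service, pcm_frame)

sink = AudioSink()
source = AudioSource(sink)
source.connect()
source.start_service("music", ServiceParams(qos_latency_ms=20, codec="example-codec", cis_count=1))
source.send_audio("music", b"\x00" * 240)
```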
  • the present application provides an audio communication method, which is applied to the audio receiver side.
  • The method may include: the audio receiver and the audio source establish a Bluetooth low energy asynchronous connectionless (LE ACL) link.
  • The audio receiver performs parameter negotiation for the first audio service with the audio source over the LE ACL link, and the first parameter determined in the negotiation corresponds to the first audio service.
  • The audio receiver may create, together with the audio source, an LE isochronous data transmission channel corresponding to the first audio service based on the first parameter.
  • the LE isochronous data transmission channel corresponding to the first audio service is used for the audio receiver to receive the audio data of the first audio service sent by the audio source.
  • a Bluetooth low energy connection is established between the audio source and the audio receiver.
  • the audio service may refer to a service or application capable of providing audio functions (such as audio playback, audio recording, etc.).
  • Audio services may involve audio-related data transmission services, such as the transmission of audio data itself, content control messages used to control the playback of audio data, and flow control messages used to create isochronous data transmission channels.
  • parameter negotiation and the establishment of isochronous data transmission channels can be performed at the granularity of audio services.
  • The flow control messages and content control messages of each audio service are transmitted through the LE ACL link, and the streaming data is transmitted through the LE ISO link, which unifies the transmission framework of each service.
  • the audio communication method provided in this application can be applied to more audio services and has better compatibility.
  • the ACL link may be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels.
  • The ACL link can also be used to carry content control messages, such as call control (e.g. answer, hang up) messages, playback control (e.g. previous, next) messages, and volume control (e.g. increase volume, decrease volume) messages, etc.
  • The audio source may generate the content control message of the first audio service, and may send the content control message of the first audio service to the audio receiver through the LE ACL link.
  • The audio receiver can receive the content control message of the first audio service sent by the audio source through the LE ACL link, and can perform content control on the first audio service according to the content control message.
  • The content control includes one or more of the following: volume control, playback control, and call control.
  • the content control message is used by the audio receiver to perform content control on the first audio service.
  • the content control includes one or more of the following: volume control, playback control, and call control.
  • the audio source may receive user input (for example, the user presses the phone hang-up button on the audio source), and then generates a content control message for the first audio service according to the user input.
  • The audio receiver may generate the content control message of the first audio service, and may send the content control message of the first audio service to the audio source through the LE ACL link.
  • The audio source can receive the content control message of the first audio service sent by the audio receiver through the LE ACL link, and can perform content control on the first audio service according to the content control message.
  • The content control includes one or more of the following: volume control, playback control, and call control.
  • The content control message is used by the audio source to perform content control on the first audio service.
  • the content control includes one or more of the following: volume control, playback control, and call control.
  • the audio receiver can receive user input (for example, the user presses the phone hangup button on the audio receiver), and then generates a content control message for the first audio service according to the user input.
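As a rough illustration of how such content control messages might be carried as small payloads on the LE ACL link, the sketch below encodes and decodes a toy message; the opcodes and frame layout are assumptions made for this example, not values defined by the patent or the Bluetooth specification.

```python
from enum import Enum
import struct

class ContentControl(Enum):
    # Illustrative opcodes only; not defined by the patent or the Bluetooth spec.
    VOLUME_UP = 0x01
    VOLUME_DOWN = 0x02
    PLAY_NEXT = 0x10
    PLAY_PREVIOUS = 0x11
    CALL_ANSWER = 0x20
    CALL_HANG_UP = 0x21

def encode_content_control(service_id: int, op: ContentControl) -> bytes:
    # One-byte service identifier followed by a one-byte opcode,
    # carried as an ordinary payload on the LE ACL link.
    return struct.pack("BB", service_id, op.value)

def decode_content_control(frame: bytes):
    service_id, opcode = struct.unpack("BB", frame)
    return service_id, ContentControl(opcode)

frame = encode_content_control(service_id=1, op=ContentControl.CALL_HANG_UP)
print(decode_content_control(frame))  # -> (1, <ContentControl.CALL_HANG_UP: 33>)
```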
  • The audio source may generate the audio data of the first audio service, and may send the audio data of the first audio service to the audio receiver through the LE isochronous data transmission channel corresponding to the first audio service.
  • the audio receiver can receive the audio data sent by the audio source through the LE isochronous data transmission channel corresponding to the first audio service.
  • the audio receiver may convert the audio data of the first audio service into sound.
  • the audio receiver may store audio data of the first audio service.
  • the first parameter may include one or more of the following: QoS parameters, codec parameters, ISO parameters, and so on.
  • the QoS parameters may include parameters such as delay, packet loss rate, throughput, etc. that represent transmission quality.
  • Codec parameters may include parameters that affect audio quality, such as encoding method and compression ratio.
  • ISO parameters can include the CIS ID, the number of CISes, the maximum data size transmitted from master to slave, the maximum data size transmitted from slave to master, the maximum time interval for data packet transmission from master to slave at the link layer, and the maximum time interval for data packet transmission from slave to master at the link layer, etc.
  • the first parameter may be obtained by querying a database according to the first audio service, and the database may store parameters corresponding to various audio services.
  • the parameters corresponding to the audio service may be designed by comprehensively considering various audio switching situations or mixing situations involved in the audio service.
  • This parameter can be applied to the situations involved in the service. For example, in the game service, the game background sound and the microphone speech may be switched or superimposed (the microphone is turned on during the game). The codec parameters and QoS parameters of the game background sound and the microphone speech may be different. For the game service, parameters suitable for this situation can be designed, so that when the user turns on the microphone to speak during the game, the listening experience is not affected.
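The following sketch illustrates, under assumed field names and placeholder values, what a per-service "first parameter" record (QoS, codec, and ISO parameters) and the service-keyed database lookup described above might look like; none of the numbers are taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class QosParams:
    max_latency_ms: int
    max_packet_loss_pct: float
    throughput_kbps: int

@dataclass
class IsoParams:
    cis_id: int
    cis_count: int
    max_sdu_m_to_s: int          # maximum data size, master to slave
    max_sdu_s_to_m: int          # maximum data size, slave to master
    max_interval_m_to_s_us: int  # maximum link-layer transmission interval, master to slave
    max_interval_s_to_m_us: int  # maximum link-layer transmission interval, slave to master

@dataclass
class FirstParameter:
    qos: QosParams
    codec: str                   # stands in for encoding method / compression ratio choices
    iso: IsoParams

# Hypothetical database keyed by audio service; the values are placeholders.
SERVICE_PARAMS = {
    "game": FirstParameter(QosParams(20, 1.0, 345), "low-latency-codec",
                           IsoParams(1, 2, 120, 60, 10000, 10000)),
    "phone": FirstParameter(QosParams(30, 0.5, 64), "speech-codec",
                            IsoParams(2, 1, 60, 60, 20000, 20000)),
}

def lookup_first_parameter(service: str) -> FirstParameter:
    # Query the database according to the first audio service.
    return SERVICE_PARAMS[service]

print(lookup_first_parameter("game").iso.cis_count)  # -> 2
```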
  • The content control message may include one or more of the following: a volume control (such as increase volume, decrease volume) message, a playback control (such as previous song, next song) message, and a call control (answer, hang up) message.
  • The audio source and the audio receiver can re-negotiate the parameters, determine the new parameters corresponding to the new audio service (such as the telephone service), and then create a new isochronous data transmission channel based on the new parameters.
  • the new isochronous data transmission channel can be used to transmit streaming data of the new audio service (such as telephone service).
  • The isochronous data transmission channels of the various services are all based on LE. In this way, switching between service scenarios does not involve switching the transmission framework, which is more efficient and avoids a noticeable pause.
  • Alternatively, the isochronous data transmission channel corresponding to the old audio service can be reconfigured using the new parameters corresponding to the new audio service (such as the telephone service), without re-creating a new isochronous data transmission channel based on the new parameters. In this way, efficiency can be further improved.
  • the creation time of the isochronous data transmission channel may include the following options:
  • An isochronous data transmission channel can be created when the audio service arrives. For example, when the user opens a game application (the game background sound starts playing at the same time), the application layer of the mobile phone sends a game background sound service creation notification to the Host; according to the notification, the mobile phone initiates the creation of an isochronous data transmission channel with the Bluetooth headset.
  • a default isochronous data transmission channel may be established first, and the default isochronous data transmission channel may be created based on default CIG parameters. In this way, when the audio service arrives, the default isochronous data transmission channel can be used directly to carry streaming data, and the response speed is faster.
  • multiple virtual isochronous data transmission channels may be established first.
  • the multiple virtual isochronous data transmission channels may correspond to multiple sets of different CIG parameters, and may be applicable to multiple audio services.
  • a virtual isochronous data transmission channel refers to an isochronous data transmission channel where no data interaction occurs on the air interface. In this way, when an audio service arrives, a virtual isochronous data transmission channel corresponding to the audio service may be selected, and a handshake is triggered between the first audio device and the second audio device and communication is started.
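The three creation-time options above can be summarized with the illustrative selection logic below; the policy names and return values are hypothetical and only meant to contrast the three strategies.

```python
from enum import Enum, auto

class CreationPolicy(Enum):
    ON_SERVICE_ARRIVAL = auto()   # create the channel only when the audio service arrives
    DEFAULT_CHANNEL = auto()      # pre-create one channel from default CIG parameters
    VIRTUAL_CHANNELS = auto()     # pre-create several virtual channels, no air-interface traffic yet

def channel_for_service(policy, service, virtual_pool=None, default_channel=None):
    """Illustrative selection logic for the three options described above."""
    if policy is CreationPolicy.ON_SERVICE_ARRIVAL:
        return f"create new isochronous channel for {service}"
    if policy is CreationPolicy.DEFAULT_CHANNEL:
        return default_channel  # reuse immediately; faster response
    # VIRTUAL_CHANNELS: pick the pre-negotiated channel whose CIG parameters fit the
    # service, then trigger the handshake so air-interface communication starts.
    return virtual_pool[service]

print(channel_for_service(CreationPolicy.VIRTUAL_CHANNELS, "game",
                          virtual_pool={"game": "virtual CIS #1", "phone": "virtual CIS #2"}))
```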
  • an audio device which includes multiple functional units for correspondingly performing the method provided in any one of the possible implementation manners of the first aspect.
  • an audio device including a plurality of functional units for correspondingly performing the method provided in any one of the possible implementation manners of the second aspect.
  • an audio device for performing the audio communication method described in the first aspect.
  • The network device may include: a memory, and a processor, a transmitter, and a receiver coupled to the memory, where the transmitter is used to send signals to another wireless communication device, the receiver is used to receive signals sent by another wireless communication device, and the memory is used to store the implementation code of the audio communication method described in the first aspect; the processor is used to execute the program code stored in the memory, that is, to execute the audio communication method described in any one of the possible implementation manners of the first aspect.
  • an audio device for performing the audio communication method described in the second aspect.
  • The terminal may include: a memory, and a processor, a transmitter, and a receiver coupled to the memory, where the transmitter is used to send signals to another wireless communication device, the receiver is used to receive signals sent by another wireless communication device, and the memory is used to store the implementation code of the audio communication method described in the second aspect; the processor is used to execute the program code stored in the memory, that is, to execute the audio communication method described in any one of the possible implementation manners of the second aspect.
  • a chip set may include: a first chip and a second chip.
  • The first chip and the second chip communicate through an HCI interface.
  • the first chip may include the following modules: multimedia audio module, voice module, background sound module, content control module, stream control module, stream data module, and L2CAP module.
  • the second chip may include: an LE physical layer module and an LE link layer module.
  • The LE physical layer module can be used to provide a physical channel (commonly referred to as a channel) for data transmission.
  • The LE link layer module can be used to provide a physically independent logical transmission channel (also called a logical link) between two or more devices on the basis of the physical layer.
  • The LE link layer module can also be used to control the radio frequency state of the device. The device will be in one of five states: standby, advertising, scanning, initiating, and connected.
  • A broadcasting device can send data without establishing a connection, and a scanning device receives the data sent by the broadcasting device; a device that initiates a connection responds to the broadcasting device by sending a connection request. If the broadcasting device accepts the connection request, the broadcasting device and the device that initiated the connection will enter the connection state.
  • The device that initiates the connection is called the master device, and the device that accepts the connection request is called the slave device.
  • the LE link layer module may include the LE ACL module and the LE isochronous (ISO) module.
  • The LE ACL module can be used to transmit control messages between devices through the LE ACL link, such as flow control messages, content control messages, and volume control messages.
  • The LE ISO module can be used to transmit isochronous data between devices (such as the streaming data itself) through isochronous data transmission channels.
  • the L2CAP module can be used to manage the logical links provided by the logical layer. Based on L2CAP, different upper-layer applications can share the same logical link. Similar to the concept of port in TCP/IP.
  • The multimedia audio module, voice module, and background sound module may be modules set according to service scenarios, and may be used to divide the audio applications of the application layer into several types of audio services such as multimedia audio, voice, and background sound. The division is not limited to multimedia audio, voice, background sound, etc.; audio services can also be divided into voice, music, games, video, voice assistant, e-mail reminder, alarm, reminder sound, navigation sound, and so on.
  • The content control module can be responsible for encapsulating the content control (such as previous, next, etc.) messages of the various audio services, and for outputting the content control messages of the audio services to the LE ACL module 411, so that the encapsulated content control messages are transmitted through the LE ACL module 411.
  • The flow control module can be used to negotiate parameters for a specific audio service, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation, and to create an isochronous data transmission channel for the specific service based on the negotiated parameters. The isochronous data transmission channel created for the specific service can be used to transmit the audio data of the specific audio service.
  • the specific audio service may be referred to as a first audio service
  • the negotiated parameter may be referred to as a first parameter.
  • the streaming data module can be used to output audio data of the audio service to the LE isochronous (ISO) module to transmit audio data through the isochronous data transmission channel.
  • The isochronous data transmission channel may be a CIS. A CIS can be used to transfer isochronous data between connected devices. The isochronous data transmission channel is finally carried on LE ISO.
  • a chip is provided.
  • the chip may include the module in the first chip and the module in the second chip described in the seventh aspect.
  • each module please refer to the seventh aspect, which will not be repeated here.
  • In a ninth aspect, a communication system is provided, which includes: a first audio device and a second audio device, where the first audio device may be the audio device described in the third aspect or the fifth aspect.
  • the second audio device may be the audio device described in the fourth aspect or the sixth aspect.
  • a communication system includes: a first audio device, a second audio device, and a third audio device, where the first audio device may be the audio device described in the third aspect or the fifth aspect. Both the second audio device and the third audio device may be the audio devices described in the fourth aspect or the sixth aspect.
  • A computer-readable storage medium having instructions stored on it, which, when run on a computer, cause the computer to execute the audio communication method described in the first aspect above.
  • The readable storage medium stores instructions, which, when executed on a computer, cause the computer to perform the audio communication method described in the second aspect.
  • A computer program product containing instructions, which, when executed on a computer, cause the computer to execute the audio communication method described in the first aspect above.
  • FIG. 1 is a schematic structural diagram of a wireless audio system provided by this application.
  • FIG. 2A is a schematic diagram of the existing BR/EDR Bluetooth protocol framework;
  • FIG. 2B-2D are schematic diagrams of the existing protocol stacks of several audio profiles;
  • FIG. 3 is a schematic diagram of a BLE-based audio protocol framework provided by this application.
  • Figure 5 is a schematic diagram of the extended BLE transmission framework
  • FIG. 6 is a schematic diagram of the overall flow of the audio communication method provided by this application.
  • FIG. 8 is a schematic flowchart of an audio communication method in a scenario where left and right headphones are used together provided by this application;
  • FIG. 10A is a schematic diagram of a hardware architecture of an electronic device provided by an embodiment of the present application;
  • FIG. 10B is a schematic diagram of a software architecture implemented on the electronic device shown in FIG. 10A;
  • FIG. 11 is a schematic diagram of a hardware architecture of an audio output device provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an architecture of a chipset provided by this application;
  • FIG. 13 is a schematic diagram of a chip architecture of the present application.
  • FIG. 1 shows a wireless audio system 100 provided by this application.
  • the wireless audio system 100 may include a first audio device 101, a second audio device 102 and a third audio device 103.
  • the first audio device 101 may be implemented as any of the following electronic devices: mobile phones, portable game consoles, portable media playback devices, personal computers, vehicle-mounted media playback devices, and so on.
  • The second audio device 102 and the third audio device 103 may be configured with any type of electro-acoustic transducer for converting audio data into sound, such as a speaker, in-ear headphones, headphones, and so on.
  • the physical form and size of the first audio device 101, the second audio device 102, and the third audio device 103 may also be different, which is not limited in this application.
  • the first audio device 101, the second audio device 102, and the third audio device 103 may all be configured with a wireless transceiver, and the wireless transceiver may be used to transmit and receive wireless signals.
  • The second audio device 102 and the third audio device 103 can communicate through a wireless communication connection 106 instead of a wired communication connection.
  • the first audio device 101 and the second audio device 102 can establish a wireless communication connection 104.
  • the first audio device 101 may send audio data to the second audio device 102 through the wireless communication connection 104.
  • The role of the first audio device 101 is the audio source (audio source), and the role of the second audio device 102 is the audio receiver (audio sink).
  • the second audio device 102 can convert the received audio data into sound, so that the user wearing the second audio device 102 can hear the sound.
  • In the transmission direction from the second audio device 102 to the first audio device 101, when the second audio device 102 is configured with a sound collection device such as a receiver/microphone, the second audio device 102 can convert the collected sound into audio data and send the audio data to the first audio device 101 through the wireless communication connection 104.
  • The role of the second audio device 102 is the audio source (audio source), and the role of the first audio device 101 is the audio receiver (audio sink).
  • the first audio device 101 can process the received audio data, such as sending the audio data to other electronic devices (in a voice call scenario) and storing the audio data (in a recording scenario).
  • The first audio device 101 and the second audio device 102 can also exchange playback control (such as previous, next, etc.) messages, call control (such as answer, hang up) messages, volume control (e.g. volume up, volume down) messages, and so on based on the wireless communication connection 104.
  • the first audio device 101 may send a playback control message and a call control message to the second audio device 102 through the wireless communication connection 104, which may implement playback control and call control on the first audio device 101 side.
  • the second audio device 102 may send a playback control message and a call control message to the first audio device 101 through the wireless communication connection 104, which may implement playback control and call control on the second audio device 102 side.
  • a wireless communication connection 105 can be established between the first audio device 101 and the third audio device 103, and audio data, playback control messages, and call control messages can be exchanged through the wireless communication connection 105.
  • the first audio device 101 can simultaneously transmit audio data to the second audio device 102 and the third audio device 103.
  • The audio data transmitted from the first audio device 101 to the second audio device 102, the audio data transmitted to the third audio device 103, and the control messages all need to achieve point-to-multipoint synchronous transmission.
  • the synchronization of the second audio device 102 and the third audio device 103 has a crucial influence on the integrity of the user's hearing experience.
  • When the second audio device 102 and the third audio device 103 are implemented as a left earphone and a right earphone, respectively, if the signals of the left and right ears are out of synchronization by more than about 30 microseconds, the user will perceive a disturbing, chaotic sound.
  • the wireless audio system 100 shown in FIG. 1 may be a wireless audio system implemented based on the Bluetooth protocol. That is, the wireless communication connection (wireless communication connection 104, wireless communication connection 105, wireless communication connection 106) between the devices may use a Bluetooth communication connection. In order to support audio applications, the existing BR Bluetooth protocol provides some profiles, such as A2DP, AVRCP, HFP.
  • the existing Bluetooth protocol defines different protocol frameworks for different profiles, which are independent of each other and cannot be compatible.
  • FIG. 2A exemplarily shows the existing BR/EDR Bluetooth protocol framework.
  • the existing BR/EDR Bluetooth protocol framework may include multiple profiles. To simplify the illustration, only some audio application profiles are shown in FIG. 2A: A2DP, AVRCP, and HFP. Not limited to this, the existing BR/EDR Bluetooth protocol framework may also include other profiles, such as SPP, FTP, etc.
  • A2DP stipulates the protocol stack and method of using Bluetooth asynchronous transmission channel to transmit high-quality audio. For example, you can use a stereo Bluetooth headset to listen to music from a music player.
  • AVRCP refers to the remote control function, and generally supports remote control operations such as pause, stop, replay, and volume control. For example, you can use a Bluetooth headset to perform pauses, switch to the next song, etc. to control the music player to play music.
  • HFP is a voice application that provides hands-free calling.
  • FIG. 2B-2D show the protocol stacks of A2DP, AVRCP, and HFP, respectively. Among them:
  • The protocols and entities included in the A2DP protocol stack are as follows.
  • the audio source is a source of a digital audio stream, which is transmitted to an audio sink in the piconet.
  • An audio receiver is a receiver that receives a digital audio stream from an audio source (audio source) in the same piconet.
  • A typical device used as an audio source may be a media player device, such as an MP3 player, and a typical device used as an audio receiver may be a headset.
  • a typical device used as an audio source may be a sound collection device, such as a microphone, and a typical device used as an audio receiver may be a portable recorder.
  • LMP is the link management protocol.
  • L2CAP is the logical link control and adaptation protocol.
  • SDP is the service discovery protocol.
  • the audio and video data transmission protocol includes a signaling entity for negotiating streaming parameters and a transmission entity for controlling the stream itself.
  • the application (Application) layer is an entity in which application service and transmission service parameters are defined. This entity is also used to adapt the audio stream data to a defined packet format or adapt the defined packet format to audio stream data.
  • The protocols and entities included in the AVRCP protocol stack are as follows.
  • a controller is a device that initiates a transaction by sending a command frame to a target device.
  • Typical controllers can be personal computers, mobile phones, remote controllers, etc.
  • a target is a device that receives command frames and generates response frames accordingly.
  • Typical target parties may be audio playback/recording devices, video playback/recording devices, televisions, etc.
  • Baseband is the baseband layer.
  • LMP is the link management protocol.
  • L2CAP is the logical link control and adaptation protocol.
  • SDP is a Bluetooth service discovery protocol (service discovery protocol).
  • OBEX is the object exchange protocol.
  • AV/C refers to audio/video control.
  • The Application layer is an AVRCP entity used to exchange control and browsing commands defined in the protocol.
  • An audio gateway is a device used as a gateway for inputting audio and outputting audio.
  • a typical device used as an audio gateway may be a cellular phone.
  • Hands-free unit (Hands-Free unit) is a device used as a remote audio input and output mechanism of the audio gateway. The hands-free unit can provide some remote control methods.
  • a typical device used as a hands-free unit may be an on-board hands-free unit.
  • Baseband is the baseband layer.
  • LMP is the link management protocol.
  • L2CAP is the logical link control and adaptation protocol.
  • RFCOMM is a Bluetooth serial port emulation entity.
  • SDP is a Bluetooth service discovery protocol.
  • Hands-free control (Hands-Free control) is the entity responsible for the specific control signals of the hands-free unit. The control signals are based on AT commands.
  • the audio port simulation (audio port emulation) layer is the entity that simulates the audio port on the audio gateway (audio gateway), and the audio driver (audio driver) is the driver software in the hands-free unit.
  • A2DP, AVRCP, and HFP correspond to different protocol stacks, and different profiles use different transmission links, which are not compatible with each other. In other words, each profile is in effect a separate protocol stack of the Bluetooth protocol corresponding to a different application scenario.
  • When the Bluetooth protocol needs to support a new application scenario, a new profile and a new protocol stack must be added under the existing Bluetooth protocol framework.
  • For example, a user wearing a Bluetooth headset turns on the microphone to speak with teammates during a game (the game produces game background sound, such as sounds triggered by game skills).
  • audio transmission will need to switch from A2DP to HFP.
  • the background sound transmission during the game can be implemented based on the A2DP protocol stack, and the voice transmission to the teammates can be implemented based on the HFP protocol stack.
  • the game background sound requires higher sound quality than the voice, that is, the coding parameters (such as compression rate) used by the two are different, and the game background sound uses a higher compression rate than the voice.
  • Because A2DP and HFP are independent of each other, switching from A2DP to HFP requires stopping the configuration related to game background sound transmission under A2DP, re-negotiating the audio data transmission parameters under HFP, initializing the configuration, and so on. This switching process takes a long time, resulting in a pause that the user can clearly perceive.
  • the existing BR/EDR Bluetooth protocol does not implement point-to-multipoint synchronous transmission.
  • The existing BR/EDR Bluetooth protocol defines two types of Bluetooth physical links: the asynchronous connectionless (ACL) link, and the synchronous connection-oriented (SCO) or extended SCO (eSCO) link.
  • the ACL link supports both symmetric connection (point-to-point) and asymmetric connection (point-to-multipoint).
  • the transmission efficiency of the ACL link is high, but the delay is uncontrollable, and the number of retransmissions is not limited. It can be mainly used to transmit data that is not sensitive to delay, such as control signaling and packet data.
  • SCO/eSCO links support symmetrical connections (point-to-point).
  • The transmission efficiency of the SCO/eSCO link is low, but the delay is controllable and the number of retransmissions is limited. It mainly transmits delay-sensitive services, such as voice.
  • the existing links of ACL and SCO/eSCO in the BR/EDR Bluetooth protocol do not support isochronous data. That is to say, in the point-to-multipoint piconet, the data sent by the master device to multiple slave devices is not synchronized, and the signals of multiple slave devices will be out of sync.
  • this application provides an audio protocol framework based on Bluetooth low energy BLE.
  • the existing BLE protocol supports a point-to-multipoint network topology.
  • The Bluetooth Special Interest Group (SIG) has proposed to add isochronous data support to BLE to allow BLE devices to transmit isochronous data.
  • isochronous data is time-bounded.
  • isochronous data refers to the information in the stream. Each information entity in the stream is limited by the time relationship between it and the previous entity and the subsequent entity.
  • the existing BLE protocol does not define audio transmission, and the BLE profile does not include audio profiles (such as A2DP, HFP). That is to say, the audio transmission (voice-over-ble) based on Bluetooth low energy is not standardized.
  • the BLE-based audio protocol framework provided by this application will support audio transmission.
  • FIG. 3 shows a BLE-based audio protocol framework provided by this application.
  • The protocol framework may include: an LE physical layer 313, an LE link layer 310, an L2CAP layer 308, and an application layer.
  • the LE physical layer 313 and the LE link layer 310 may be implemented in a controller, and the L2CAP layer 308 may be implemented in a host.
  • the protocol framework may further include some functional entities implemented in the Host: multimedia audio functional entity 302, voice functional entity 303, background sound functional entity 304, content control functional entity 305, flow control functional entity 306, and streaming data functional entity 307.
  • the LE physical layer 313 may be responsible for providing a physical channel (commonly referred to as a channel) for data transmission.
  • There are several different types of channels in a communication system, such as control channels, data channels, voice channels, and so on.
  • Bluetooth uses the 2.4 GHz industrial, scientific and medical (ISM) frequency band.
  • The LE link layer 310 provides, on the basis of the physical layer, a physically independent logical transmission channel (also called a logical link) between two or more devices.
  • the LE link layer 310 can be used to control the radio frequency state of the device.
  • The device will be in one of five states: standby, advertising, scanning, initiating, and connected.
  • A broadcasting device can send data without establishing a connection, and a scanning device receives the data sent by the broadcasting device; a device that initiates a connection responds to the broadcasting device by sending a connection request. If the broadcasting device accepts the connection request, the broadcasting device and the device that initiated the connection will enter the connection state.
  • the device that initiates the connection is called the master device, and the device that accepts the connection request is called the slave device.
  • the LE link layer 310 may include a LE ACL link 311 and a LE isochronous (ISO) link 312.
  • the LE ACL link 311 can be used to transmit control messages between devices, such as flow control messages, content control messages, and volume control messages.
  • the LE ISO link 312 can be used to transmit isochronous data between devices (such as streaming data itself).
  • the L2CAP layer 308 can be responsible for managing the logical links provided by the logical layer. Based on L2CAP, different upper-layer applications can share the same logical link. Similar to the concept of port in TCP/IP.
  • the multimedia audio function entity 302, the voice function entity 303, and the background sound function entity 304 may be function entities set according to business scenarios, and may be used to divide the audio application of the application layer into multimedia audio, voice, background sound, etc. business. It is not limited to multimedia audio, voice, background sound, etc. Audio services can also be divided into: voice, music, games, video, voice assistant, e-mail reminder, alarm, reminder sound, navigation sound, etc.
  • The content control function entity 305 may be responsible for encapsulating the content control (e.g., previous, next, etc.) messages of various audio services, and transmitting the encapsulated content control messages through the LE ACL link 311.
  • The stream control function entity 306 may be responsible for parameter negotiation, such as quality of service (QoS) parameter negotiation, codec parameter negotiation, and isochronous data transmission channel parameter (hereinafter referred to as ISO parameter) negotiation, and may be responsible for the establishment of isochronous data transmission channels.
  • the streaming data function entity 307 may be responsible for transmitting audio data through the isochronous data transmission channel.
  • the isochronous data transmission channel (isochronous data path) may be a connected isochronous audio stream (connected isochronous stream, CIS).
  • CIS can be used to transfer isochronous data between connected devices.
  • The isochronous data transmission channel is finally carried on LE ISO 312.
  • the flow control function entity 306 may also be used to negotiate parameters before creating an isochronous data transmission channel, and then create an isochronous data transmission channel based on the negotiated parameters.
  • the audio protocol framework shown in FIG. 3 may also include a host controller interface (Host Controller Interface, HCI).
  • Host and Controller communicate through HCI, and the communication medium is HCI commands.
  • the Host can be implemented in the application processor (AP) of the device, and the Controller can be implemented in the Bluetooth chip of the device.
  • Host and Controller can be implemented in the same processor or controller, in which case HCI is optional.
  • the BLE-based audio protocol framework provided by this application can divide the data of various audio applications (such as A2DP, HFP, etc.) into three types:
  • Content control: content control messages of the audio services, such as call control (such as answering, hanging up, etc.) messages, playback control (such as previous song, next song, etc.) messages, and volume control (such as increasing the volume, decreasing the volume) messages.
  • Flow control: signaling used for stream management, such as create stream (create stream) and terminate stream (terminate stream). Streams can be used to carry audio data.
  • Streaming data: the audio data itself.
  • The content control and flow control data are transmitted through the LE ACL link 311; the streaming data is transmitted through the LE ISO link 312.
  • The BLE-based audio protocol framework provided by this application provides a unified audio transmission framework: no matter which audio profile, the data can be divided into three types: content control, flow control, and streaming data. Based on the BLE framework, the content control and flow control data are transmitted on the LE ACL link, and the streaming data is transmitted on the LE ISO link.
  • the BLE-based audio protocol framework supports audio transmission and can unify service level connections, and divide all upper-layer audio profiles into multimedia audio, voice, background sound and other audio services according to business scenarios.
  • The flow control of each audio service (including negotiation of QoS parameters, negotiation of codec parameters, negotiation of ISO parameters, and establishment of isochronous data transmission channels) is handled uniformly by the flow control function entity in the protocol stack.
  • The content control of each audio service (such as call control like answering and hanging up, playback control like previous song and next song, volume control, etc.) is handled uniformly by the content control function entity in the protocol stack.
  • Both the flow control message and the content control message are transmitted through the LE ACL link, and the streaming data is transmitted through the LE ISO link. In this way, different audio profiles can be implemented based on the same transmission framework, with better compatibility.
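A minimal sketch of the routing rule this unified framework implies, assuming a simple three-way classification of outgoing data; the function and type names are invented for illustration and are not part of the patent or the Bluetooth specification.

```python
from enum import Enum, auto

class DataType(Enum):
    CONTENT_CONTROL = auto()   # answer/hang up, previous/next, volume up/down
    FLOW_CONTROL = auto()      # create stream, terminate stream, parameter negotiation
    STREAMING_DATA = auto()    # the audio data itself

def link_for(data_type: DataType) -> str:
    """Route each of the three data types onto the link the framework prescribes."""
    if data_type is DataType.STREAMING_DATA:
        return "LE ISO link"   # isochronous data transmission channel
    return "LE ACL link"       # both control types share the ACL link

for t in DataType:
    print(t.name, "->", link_for(t))
```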
  • The audio protocol framework provided in this application is based on BLE, where BLE refers to an extended BLE transmission framework (transport architecture).
  • The extension of the BLE transmission framework mainly consists of adding the isochronous channel characteristic.
  • Figure 5 shows the entities of the extended BLE transport architecture. Among them, the shaded entities are newly added logical sublayers, and these newly added logical sublayers jointly provide the isochronous channel characteristic. As shown in Figure 5:
  • LE physical transport (LE physical transport) layer: air interface data transmission, identified by the data packet structure, coding, modulation scheme, and other marks.
  • The LE physical transport carries all the information from the upper layers.
  • LE physical channel (LE physical channel) layer: the air interface physical channel between Bluetooth devices, a physical layer bearer channel identified in the time domain, frequency domain, and spatial domain, involving concepts such as frequency hopping, time slots, events, and access codes.
  • An LE physical channel can carry different LE logical transports (LE logical transport); toward the lower layer, an LE physical channel always maps to its unique corresponding LE physical transport.
  • The LE physical channel layer can include four physical channel entities: the LE piconet physical channel, the LE advertising physical channel, the LE periodic physical channel, and the LE isochronous physical channel. That is, the LE isochronous physical channel is added to the existing LE physical channels.
  • LE piconet physical channel can be used for communication between connected devices.
  • the communication uses frequency hopping technology.
  • LE advertising can be used for connectionless broadcast communications between devices. These broadcast communications can be used for device discovery, connection operations, and connectionless data transmission.
  • LE periodic physical channel can be used for periodic broadcast communication between devices.
  • LE isochronous physical channel can be used to transmit isochronous data, and there is a one-to-one mapping relationship with the upper LE isochronous physical link.
  • LE physical link (LE physical link) layer: the baseband connection between Bluetooth devices. It is a virtual concept, and there is no corresponding field for it in the air interface data packet.
  • an LE logical transport will only be mapped to an LE physical link.
  • An LE physical link can be carried over different LE physical channels, but a transmission is always mapped to a single LE physical channel.
  • LE physical link is a further encapsulation of the LE physical channel.
  • The LE physical link layer can include four physical link entities: the LE active physical link, the LE advertising physical link, the LE periodic physical link, and the LE isochronous physical link. That is, the LE isochronous physical link is added on the basis of the existing LE physical links.
  • The LE isochronous physical link can be used to transmit isochronous data; it carries the upper-layer LE-BIS and LE-CIS, and has a one-to-one mapping relationship with the LE isochronous physical channel.
  • LE logical transport (LE logical transport) layer: responsible for flow control, the ACK/NACK acknowledgement mechanism, the retransmission mechanism, and the scheduling mechanism. This information is generally carried in the data packet header.
  • One LE logical transport can correspond to multiple LE logical links.
  • An LE logical transport maps to only one corresponding LE physical link.
  • the LE logical transport layer may include the following logical transport entities: LE-ACL, ADVB, PADVB, LE-BIS, LE-CIS. That is, LE-BIS and LE-CIS are added to the existing LE logical transport.
  • LE-CIS is a point-to-point logical transport between the Master and a designated slave, and each CIS supports a LE-S logical link.
  • CIS can be a symmetric rate or an asymmetric rate.
  • LE-CIS is built on LE-ACL.
  • LE-BIS is a point-to-multipoint Logical transport, and each BIS supports one LE-S logical link.
  • LE-BIS is built on PADVB.
  • BIS refers to broadcast isochronous stream, and CIS refers to connected isochronous stream.
  • the LE ISO link 312 in FIG. 3 may be LE CIS, and the LE ACL link 311 in FIG. 3 may be LE ACL.
  • LE logical link (LE logical link) layer: can be used to support the data transmission of different applications.
  • Each LE logical link may be mapped to multiple LE logical transports, but only one LE logical transport is selected for mapping at a time.
  • The LE logical link layer may include the following logical link entities: LE-C, LE-U, ADVB-C, ADVB-U, low energy broadcast control (LEB-C), and low energy stream (LE-S).
  • Here, -C means control and -U means user. That is, LEB-C and LE-S are added on the basis of the existing LE logical links. Among them, LEB-C is used to carry BIS control information, and LE-S is used to carry isochronous data streams.
  • the present application provides an audio communication method.
  • The main inventive idea may include: with the audio service as the granularity, determining the parameters of the isochronous data transmission channel for each audio service, and establishing the isochronous data transmission channel between the first audio device (such as a mobile phone or a media player) and the second audio device (such as a headset). The isochronous data transmission channel can be used to transmit streaming data.
  • the audio service may refer to a service or application capable of providing audio functions (such as audio playback, audio recording, etc.).
  • Audio services may involve audio-related data transmission services, such as the transmission of audio data itself, content control messages used to control the playback of audio data, and flow control messages used to create isochronous data transmission channels.
  • the audio communication method provided in this application no longer uses the profile as a granularity for parameter negotiation, but uses the audio service as a granularity for parameter negotiation.
  • the isochronous data transmission channel can be configured based on the renegotiated parameters, without the need to switch between different profile protocol stacks, which is more efficient and avoids obvious pauses.
  • For example, in the existing Bluetooth protocol, the music service (A2DP) and the telephone service (HFP) correspond to different transmission frameworks.
  • A2DP streaming data (such as stereo music data) is ultimately transmitted through the ACL link,
  • while HFP streaming data (such as voice data) is ultimately transmitted through the SCO/eSCO link. Therefore, in the existing Bluetooth protocol, this switch causes a switch of the underlying transmission framework, which is time-consuming.
  • The BLE-based audio protocol framework provided by this application provides a unified audio transmission framework: no matter which audio service scenario, streaming data is transmitted through the LE ISO link. The switching of service scenarios does not involve switching of the transmission framework, so the efficiency is higher.
  • the QoS parameters may include parameters representing transmission quality such as delay, packet loss rate, and throughput.
  • Codec parameters may include parameters that affect audio quality, such as encoding method and compression ratio.
  • ISO parameters can include the CIS ID, the number of CISes, the maximum data size transmitted from master to slave, the maximum data size transmitted from slave to master, the maximum time interval for data packet transmission from master to slave at the link layer, and the maximum time interval for data packet transmission from slave to master at the link layer, etc.
  • FIG. 6 shows the overall flow of the audio communication method provided by the present application.
  • A BLE connection is established between the first audio device (such as a mobile phone or a media player) and the second audio device (such as a headset). This is expanded below:
  • An ACL link is established between the first audio device (such as a mobile phone or a media player) and the second audio device (such as a headset).
  • the ACL link can be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels in the flow control process (S602-S604).
  • The ACL link can also be used to carry content control messages during the content control process (S605-S607), such as call control (such as answering, hanging up, etc.) messages, playback control (such as previous, next, etc.) messages, and volume control (such as increasing the volume, decreasing the volume) messages.
  • the first audio device and the second audio device may perform parameter negotiation through the ACL link.
  • the parameter negotiation can be conducted with the audio service as the granularity.
  • Different audio services require parameter negotiation, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation.
  • An audio service can correspond to a set of parameters, and a set of parameters can include one or more of the following: QoS parameters, codec parameters, and ISO parameters.
  • the specific process of the parameter negotiation may include:
  • Step a: The first audio device may send a parameter negotiation message to the second audio device through the ACL link, and the message may carry a set of parameters corresponding to the specific audio service.
  • This set of parameters may be obtained by querying from a database according to the specific audio service, and the database may store parameters corresponding to various audio services.
  • Step b: The second audio device receives the parameter negotiation message sent by the first audio device through the ACL link. If the second audio device agrees with the parameters carried in the message, it returns a confirmation message to the first audio device; if the second audio device does not agree, or only partially agrees, with the parameters carried in the parameter negotiation message, it returns a continue-negotiation message to the first audio device so as to continue the parameter negotiation.
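A toy model of steps a and b is sketched below, assuming parameters are exchanged as simple key-value sets; the message types "confirm" and "continue_negotiation" are illustrative names, not messages defined by the patent.

```python
def negotiate(first_device_proposal, second_device_preferences):
    """
    Toy model of steps a and b: the first device proposes a per-service parameter
    set over the ACL link; the second device either confirms it or answers with a
    continue-negotiation message carrying the values it wants changed.
    """
    rejected = {k: v for k, v in second_device_preferences.items()
                if first_device_proposal.get(k) != v}
    if not rejected:
        return {"type": "confirm", "params": first_device_proposal}
    return {"type": "continue_negotiation", "counter_params": rejected}

proposal = {"max_latency_ms": 20, "codec": "codec-A", "cis_count": 2}
prefs = {"max_latency_ms": 20, "codec": "codec-B"}
print(negotiate(proposal, prefs))
# {'type': 'continue_negotiation', 'counter_params': {'codec': 'codec-B'}}
```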
  • the parameters corresponding to the audio service may be designed by comprehensively considering various audio switching situations or mixing situations involved in the audio service.
  • This parameter can be applied to the situations involved in the service. For example, in the game service, the game background sound and the microphone speech may be switched or superimposed (the microphone is turned on during the game). The codec parameters and QoS parameters of the game background sound and the microphone speech may be different. For the game service, parameters suitable for this situation can be designed, so that when the user turns on the microphone to speak during the game, the listening experience is not affected.
  • the specific audio service may be phone, game, voice assistant, music and so on.
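  • The following is a minimal, illustrative sketch (in Python) of the per-service parameter lookup and negotiation exchange described above. The structure names (AudioParams, PARAM_DB, negotiate) and all parameter values are assumptions made for illustration only; they are not defined by this application or by the Bluetooth specification.

```python
# Hypothetical sketch: one parameter set per audio service, proposed to the peer.
from dataclasses import dataclass

@dataclass
class AudioParams:
    qos: dict      # e.g. latency / retransmission targets
    codec: dict    # e.g. codec type, sampling rate
    iso: dict      # e.g. ISO interval

# Assumed database mapping each audio service to one set of parameters.
PARAM_DB = {
    "phone":           AudioParams({"latency_ms": 20},  {"codec": "LC3", "rate_khz": 16}, {"iso_interval_ms": 10}),
    "game":            AudioParams({"latency_ms": 10},  {"codec": "LC3", "rate_khz": 32}, {"iso_interval_ms": 7.5}),
    "voice_assistant": AudioParams({"latency_ms": 30},  {"codec": "LC3", "rate_khz": 16}, {"iso_interval_ms": 10}),
    "music":           AudioParams({"latency_ms": 100}, {"codec": "LC3", "rate_khz": 48}, {"iso_interval_ms": 10}),
}

def negotiate(service: str, peer_accepts) -> AudioParams:
    """The first device proposes the parameter set for `service`; the peer
    either confirms it or negotiation continues (simplified here)."""
    proposal = PARAM_DB[service]
    if peer_accepts(proposal):
        return proposal                      # peer returned a confirmation message
    raise RuntimeError("peer requested further negotiation")

# Example: negotiate parameters for the game service with a peer that accepts.
params = negotiate("game", peer_accepts=lambda p: True)
print(params.codec, params.iso)
```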
  • the first audio device may perform parameter configuration to the second audio device through the ACL link.
  • the parameter configuration refers to configuring the second audio device with the parameters determined through negotiation.
  • the first audio device may send a parameter configuration message to the second audio device through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both the first audio device and the second audio device.
  • the second audio device can perform the reception or transmission of the streaming data according to the parameters that have been negotiated and determined by both parties.
  • the isochronous data transmission channel can be used to transmit streaming data (ie, audio data).
  • the specific process of establishing an isochronous data transmission channel between the first audio device and the second audio device is expanded in subsequent content and is not repeated here.
  • the first audio device may be an audio source (audio source), and the second audio device may be an audio receiver (audio sink). That is, the audio source initiates parameter negotiation and isochronous data channel creation.
  • the first audio device may also be an audio receiver (audio sink) and the second audio device may be an audio source (audio source). That is, the audio receiver initiates parameter negotiation and isochronous data channel creation.
  • the content control message can be exchanged between the first audio device and the second audio device based on the ACL link.
  • the first audio device and the second audio device may exchange call control messages based on the ACL link, such as answering and hanging up control messages.
  • the first audio device can send a call control (such as answering, hanging up, etc.) message to the second audio device (such as a headset) through the ACL link, so that call control can be performed on the first audio device (such as a mobile phone) side.
  • a typical application scenario corresponding to this method may be: when using a Bluetooth headset to make a call, the user clicks the hang up button on the mobile phone to hang up the phone.
  • the second audio device (such as a headset) can send a call control (such as answering, hanging up, etc.) message to the first audio device (such as a mobile phone) through the ACL link, so that call control can be performed on the second audio device (such as a headset) side.
  • a typical application scenario corresponding to this method may be: when using a Bluetooth headset to make a call, the user presses the hangup button on the Bluetooth headset to hang up the phone. Not limited to pressing the hang up button, the user can also hang up the phone on the Bluetooth headset through other operations, such as tapping the headset.
  • the first audio device and the second audio device may exchange playback control messages based on the ACL link, such as previous song and next song messages.
  • the first audio device (such as a mobile phone) can send a playback control (such as previous song, next song, etc.) message to the second audio device (such as a headset) through the ACL link, so that playback control can be performed on the first audio device (such as a mobile phone) side.
  • a typical application scenario corresponding to this method may be: when listening to music using a Bluetooth headset, the user clicks the previous/next button on the mobile phone to switch songs.
  • the second audio device (such as a headset) can send a playback control (such as previous song, next song, etc.) message to the first audio device (such as a mobile phone) through the ACL link, so that playback control can be performed on the second audio device (such as a headset) side.
  • a typical application scenario corresponding to this method may be: when listening to music using a Bluetooth headset, the user presses the previous/next button on the Bluetooth headset to switch songs.
  • the first audio device and the second audio device may exchange volume control messages based on the ACL link, such as volume increase and volume decrease messages.
  • the first audio device (such as a mobile phone) can send a volume control (such as volume increase, volume decrease, etc.) message to the second audio device (such as a headset) through the ACL link, so that volume control can be performed on the first audio device (such as a mobile phone) side.
  • the typical application scenario corresponding to this method may be: when using a Bluetooth headset to listen to music, the user clicks the volume adjustment button on the mobile phone to adjust the volume.
  • the second audio device (such as a headset) can send a volume control (such as volume increase, volume decrease, etc.) message to the first audio device (such as a mobile phone) through the ACL link, so that volume control can be performed on the second audio device (such as a headset) side.
  • a typical application scenario corresponding to this method may be: when using a Bluetooth headset to listen to music, the user presses the volume adjustment button on the Bluetooth headset to adjust the volume.
  • the first audio device may be an audio source (audio source), and the second audio device may be an audio receiver (audio sink). That is, content control can be performed on the audio source side.
  • the first audio device may also be an audio receiver (audio sink) and the second audio device may be an audio source (audio source). That is, content control can be performed on the audio receiver side.
  • the first audio device and the second audio device may exchange streaming data based on the created isochronous data transmission channel.
  • the stream data is the stream data of the aforementioned specific audio service.
  • the created isochronous data transmission channel corresponds to the aforementioned specific audio service.
  • the first audio device (such as a mobile phone) can send streaming data to the second audio device (such as a headset) through an isochronous data transmission channel.
  • At this time, the role of the first audio device (such as a mobile phone) is an audio source (audio source), and the role of the second audio device (such as a headset) is an audio receiver (audio sink).
  • the second audio device (such as a headset) can convert the received audio data into sound.
  • a typical application scenario corresponding to this method may be: the user wears a Bluetooth headset to listen to music played on the mobile phone.
  • the second audio device (such as a headset) can send streaming data to the first audio device (such as a mobile phone) through an isochronous data transmission channel.
  • At this time, the role of the second audio device (such as a headset) is an audio source (audio source), and the role of the first audio device (such as a mobile phone) is an audio receiver (audio sink).
  • the first audio device (such as a mobile phone) can process the received audio data, such as converting the audio data into sound, sending the audio data to other electronic devices (in a voice call scenario), and storing the audio data (in a recording scenario).
  • a typical application scenario corresponding to this method may be: a user wears a Bluetooth headset (equipped with a sound collection device such as a receiver/microphone) to make a call, and at this time, the Bluetooth headset collects the voice of the user's speech and converts it into audio data for transmission to the mobile phone.
  • This application does not limit the execution order of the content control process and the streaming data transmission process.
  • the streaming data transmission process may be executed before the content control process, and the two processes may also be executed at the same time.
  • the first audio device and the second audio device in the method shown in FIG. 6 may implement the BLE-based audio protocol framework shown in FIG. 3.
  • the flow control process (S602-S604) in FIG. 6 may be performed by the flow control function entity 306 in FIG. 3; the content control process (S605-S607) in FIG. 6 may be performed by the content control function entity 305 in FIG. 3.
  • the ACL link mentioned in the method in FIG. 6 may be LE ACL 311 in FIG. 3, and the isochronous data transmission channel mentioned in the method in FIG. 6 may be LE ISO 312 in FIG. 3.
  • for a new audio service (such as a telephone service), the first audio device and the second audio device may renegotiate parameters to determine the new parameters corresponding to the new audio service, and then create a new isochronous data transmission channel based on the new parameters.
  • the new isochronous data transmission channel can be used to transmit streaming data of the new audio service (such as telephone service).
  • the isochronous data transmission channels of the various services are all based on LE. In this way, switching of the service scenario does not involve switching of the transmission framework, so the efficiency is higher and there is no obvious pause.
  • alternatively, the isochronous data transmission channel corresponding to the old audio service can be reconfigured using the new parameters corresponding to the new audio service (such as the telephone service), without the need to recreate a new isochronous data transmission channel based on the new parameters. In this way, efficiency can be further improved.
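  • As a rough sketch of the two options above, the following hypothetical Python fragment contrasts reconfiguring the existing isochronous channel with creating a new one when the audio service changes; the IsoChannel class and its methods are invented for illustration only.

```python
# Minimal sketch (assumed names): reuse or recreate the isochronous channel.
class IsoChannel:
    def __init__(self, params):
        self.params = params
    def reconfigure(self, new_params):
        # The established channel is kept; only its parameters change.
        self.params = new_params

def switch_service(channel, new_params, reuse_existing=True):
    if reuse_existing and channel is not None:
        channel.reconfigure(new_params)      # no new channel creation needed
        return channel
    return IsoChannel(new_params)            # renegotiate and create a new channel

music_channel = IsoChannel({"service": "music"})
phone_channel = switch_service(music_channel, {"service": "phone"})
assert phone_channel is music_channel        # the same channel was reused
```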
  • the audio communication method provided in this application uses the audio service as the granularity for parameter negotiation and the establishment of an isochronous data transmission channel.
  • the flow control messages and content control messages of each audio service are transmitted through the LE ACL link, and the stream data is transmitted through the LE ISO link, unifying the transmission framework of each service.
  • the audio communication method provided in this application can be applied to more audio services and has better compatibility.
  • when the audio service is switched, the isochronous data transmission channel can be configured based on the renegotiated parameters, without the need to switch between different profile protocol stacks and without switching the transmission framework, which is more efficient and avoids obvious pauses.
  • the creation process of the isochronous data transmission channel mentioned in the method flow shown in FIG. 6 is described below.
  • FIG. 7 shows the creation process of the isochronous data transmission channel.
  • the isochronous data transmission channel is based on a connected isochronous data channel, that is, the first audio device and the second audio device are already in a connection (Connection) state.
  • Both the first audio device and the second audio device have a host Host and a link layer LL (in a controller), and the Host and LL communicate through HCI.
  • the process may include:
  • Host A (Host of the first audio device) sets related parameters of the connected isochronous group (CIG) based on the HCI instruction.
  • the CIG-related parameters may include previously determined parameters (QoS parameters, codec parameters, ISO parameters), which are used to create isochronous data transmission channels.
  • Host A can send the HCI command "LE Set CIG Parameters" to LL A (the LL of the first audio device) through HCI.
  • LL A can return the response message "Command Complete”.
  • Host A initiates the creation of CIS through the HCI instruction.
  • Host A can send the HCI command "LE Create CIS" to LL A (the LL of the first audio device) through HCI.
  • LL A can return the response message "HCI Command Status".
  • LL A may request LL B (the LL of the second audio device) to create a CIS stream through the air interface request message LL_CIS_REQ.
  • LL B notifies Host B (the Host of the second audio device) through an HCI instruction, and Host B agrees to the CIS establishment procedure initiated by the first audio device.
  • LL B responds to LL A through the air interface response message LL_CIS_RSP, agreeing to the CIS establishment procedure.
  • LL A notifies LL B through the air interface notification message LL_CIS_IND that the link establishment is complete.
  • LL B notifies Host B that the CIS establishment is complete.
  • LL A notifies Host A through an HCI instruction that the CIS establishment is completed.
  • the CIS establishment between the first audio device and the second audio device is completed. Based on the established CIS, the first audio device and the second audio device can create an isochronous data transmission channel.
  • CIS is a connection-based flow that can be used to carry isochronous data.
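  • The following illustrative Python snippet simply replays the message ordering of the CIS establishment described above (FIG. 7): HCI commands between Host and LL, and LL_CIS_REQ/LL_CIS_RSP/LL_CIS_IND over the air interface. The helper function and its trace format are assumptions for illustration, not a real HCI API.

```python
# Hypothetical trace of the CIS establishment sequence described above.
def create_cis():
    trace = [
        ("Host A -> LL A", "HCI LE Set CIG Parameters"),
        ("LL A -> Host A", "HCI Command Complete"),
        ("Host A -> LL A", "HCI LE Create CIS"),
        ("LL A -> Host A", "HCI Command Status"),
        ("LL A -> LL B",   "LL_CIS_REQ (air interface)"),
        ("LL B -> Host B", "CIS request notification"),
        ("Host B -> LL B", "accept CIS establishment"),
        ("LL B -> LL A",   "LL_CIS_RSP (air interface)"),
        ("LL A -> LL B",   "LL_CIS_IND (air interface)"),
        ("LL B -> Host B", "CIS established"),
        ("LL A -> Host A", "CIS established"),
    ]
    return trace

for src_dst, message in create_cis():
    print(f"{src_dst:16s} {message}")
```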
  • the creation time of the isochronous data transmission channel may include multiple options.
  • an isochronous data transmission channel can be created when the audio service arrives. For example, when the user opens the game application (the game background sound starts to play simultaneously), the application layer of the mobile phone will send a game background sound service creation notification to the Host, and according to the notification, the mobile phone will initiate the process shown in FIG. 7.
  • a default isochronous data transmission channel may be established first, and the default isochronous data transmission channel may be created based on default CIG parameters. In this way, when the audio service arrives, the default isochronous data transmission channel can be used directly to carry streaming data, and the response speed is faster.
  • multiple virtual isochronous data transmission channels may be established first.
  • the multiple virtual isochronous data transmission channels may correspond to multiple sets of different CIG parameters, and may be applicable to multiple audio services.
  • a virtual isochronous data transmission channel refers to an isochronous data transmission channel where no data interaction occurs on the air interface. In this way, when an audio service arrives, a virtual isochronous data transmission channel corresponding to the audio service may be selected, and a handshake is triggered between the first audio device and the second audio device and communication is started.
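  • A minimal sketch of this third option (pre-created virtual isochronous channels) might look as follows; the channel table, the parameter values, and the selection rule are all assumed for illustration.

```python
# Hypothetical pre-configured "virtual" channels: no air-interface traffic until activated.
virtual_channels = {
    "low_latency":  {"iso_interval_ms": 7.5, "active": False},  # e.g. games
    "high_quality": {"iso_interval_ms": 10,  "active": False},  # e.g. music
}

def activate_for_service(service: str) -> dict:
    """Pick the virtual channel matching the arriving service and activate it."""
    key = "low_latency" if service == "game" else "high_quality"
    channel = virtual_channels[key]
    channel["active"] = True          # the handshake with the peer would be triggered here
    return channel

print(activate_for_service("game"))
```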
  • the method flow shown in FIG. 6 describes a connection-based point-to-point audio communication method formed by the first audio device and the second audio device.
  • the first audio device may be the first audio device 101 in the wireless audio system 100 shown in FIG. 1
  • the second audio device may be the second audio device 102 in the wireless audio system 100 shown in FIG. 1.
  • the first audio device 101 and the third audio device 103 in the wireless audio system 100 may also use the audio communication method shown in FIG. 6 for communication.
  • the first audio device 101 can communicate with both the second audio device 102 and the third audio device 103.
  • the first audio device 101 may be implemented as a mobile phone, and the second audio device 102 and the third audio device 103 may be implemented as left and right headphones, respectively.
  • This situation corresponds to a typical application scenario: the left earphone and the right earphone are used together.
  • This typical application scenario can be referred to as a “binaural use together” scenario.
  • FIG. 8 shows the audio communication method in the scenario of "using both ears together". This is expanded below:
  • a BLE connection is established between the left earphone and the mobile phone.
  • the BLE connection is established between the right earphone and the mobile phone.
  • This application does not limit the execution order of the above S801-S803, and the order between them may be changed.
  • the BLE connection establishment process will be described in the following content, and will not be described here.
  • the mobile phone establishes ACL links with the left earphone and the right earphone respectively.
  • the establishment of the ACL link is the responsibility of the link layer LL.
  • An ACL link can be established between the LL of the mobile phone and the LL of the left earphone, and an ACL link can be established between the LL of the mobile phone and the LL of the right earphone.
  • the ACL link can be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels in the flow control process (S805-S813).
  • the ACL link can also be used to carry content control messages, such as call control (such as answering, hanging up, etc.) messages during the content control process (S814-S819), playback control (such as previous, next, etc.) messages, and volume control (such as increasing the volume, decreasing the volume) messages, etc.
  • the mobile phone can determine the parameters corresponding to the audio service (QoS parameters, codec parameters, ISO parameters, etc.).
  • the host of the mobile phone first receives an audio service establishment notification from the application layer, and then determines the parameters corresponding to the audio service.
  • the audio service establishment notification may be generated when the mobile phone detects that the user opens an audio-related application (such as a game).
  • the parameter corresponding to the audio service may be obtained by the mobile phone querying from a database according to the service type of the audio service, and the database may store parameters corresponding to various audio services.
  • the host of the mobile phone sends the parameter corresponding to the audio service to the LL of the mobile phone through HCI.
  • the LL of the mobile phone and the LL of the left earphone can perform parameter negotiation through the established ACL link.
  • For the specific process of parameter negotiation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here.
  • the mobile phone can perform parameter configuration to the left earphone through the established ACL link.
  • the parameter configuration refers to configuring the left earphone with the negotiated parameters.
  • the LL of the mobile phone may send a parameter configuration message to the LL of the left earphone through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both parties.
  • the left earphone can perform the reception or transmission of the streaming data according to the parameters that have been negotiated by both parties.
  • an isochronous data transmission channel can be established between the mobile phone and the left earphone.
  • For the creation process of the isochronous data transmission channel, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the LL of the mobile phone and the LL of the right earphone can perform parameter negotiation through the established ACL link.
  • For the specific process of parameter negotiation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here.
  • the mobile phone can perform parameter configuration to the right earphone through the established ACL link.
  • the parameter configuration refers to configuring the right earphone with the negotiated parameters.
  • the LL of the mobile phone may send a parameter configuration message to the LL of the right earphone through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both parties.
  • the right earphone can perform the reception or transmission of the streaming data according to the parameters that have been negotiated by both parties.
  • an isochronous data transmission channel can be established between the mobile phone and the right headset.
  • For the creation process of the isochronous data transmission channel, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the parameters may be determined with both ears as a unit, and then negotiated and configured for each earphone one by one.
  • S808-S810 describe the process of parameter negotiation, parameter configuration, and creation of the isochronous data transmission channel between the mobile phone and the left earphone, and S811-S813 describe the same process for the right earphone. This application does not limit the execution order of the two processes, and the two processes can be performed simultaneously.
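  • As a sketch of performing the two per-earphone processes simultaneously, the following hypothetical Python fragment runs a simplified negotiate/configure/create sequence for the left and right earphones in parallel; the setup_earbud helper and its steps are illustrative assumptions, not part of this application.

```python
# Hypothetical parallel setup of the left and right earphones.
import concurrent.futures

def setup_earbud(name: str, params: dict) -> str:
    # negotiate -> configure -> create isochronous channel, per earphone (simplified)
    negotiated = dict(params)                # negotiation, here accepted as-is
    configured = negotiated                  # configuration message carries the final params
    return f"{name}: ISO channel created with {configured}"

params = {"codec": "LC3", "iso_interval_ms": 10}
with concurrent.futures.ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda n: setup_earbud(n, params), ["left", "right"]))
print(results)
```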
  • the content control message can be exchanged between the mobile phone and the left earphone based on the ACL link.
  • For specific implementation, reference may be made to related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the content control message can be exchanged between the mobile phone and the right earphone based on the ACL link.
  • For specific implementation, reference may be made to related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the transmission of the content control messages from the mobile phone to the left earphone and the right earphone needs to be synchronized, so as to realize synchronous control of the left earphone and the right earphone and avoid giving the user an inconsistent experience.
  • the left and right earphones may let a content control message take effect only after it has been received by both sides, so that the control takes effect synchronously.
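  • One possible way to realize such synchronization is sketched below: the mobile phone could stamp each content control message with a common effect time so that both earphones apply it at the same instant. The effect_time field and the helper functions are assumptions made for illustration; they are not message fields defined by this application.

```python
# Hypothetical sketch: apply a content control message at a shared effect time.
import time

def send_volume_up(effect_delay_s: float = 0.1) -> dict:
    effect_time = time.monotonic() + effect_delay_s
    return {"cmd": "volume_up", "effect_time": effect_time}

def apply_on_earphone(msg: dict) -> str:
    delay = msg["effect_time"] - time.monotonic()
    if delay > 0:
        time.sleep(delay)                    # wait until the common effect time
    return f"applied {msg['cmd']}"

msg = send_volume_up()
print(apply_on_earphone(dict(msg)), apply_on_earphone(dict(msg)))
```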
  • the mobile phone and the left earphone can exchange streaming data based on the created isochronous data transmission channel.
  • the stream data is the stream data of the aforementioned audio service.
  • the mobile phone and the right earphone can exchange streaming data based on the created isochronous data transmission channel.
  • the stream data is the stream data of the aforementioned audio service.
  • the audio communication method shown in FIG. 8 can be applied to the "using both ears together" scenario, and the audio communication method between the mobile phone and a single ear (the left earphone or the right earphone) can refer to the method shown in FIG. 6. The method is applicable to more audio services and has better compatibility.
  • when the service scenario is switched, it is sufficient to configure the isochronous data transmission channel between the mobile phone and the headset based on the renegotiated parameters. There is no need to switch between different profile protocol stacks, and there is no need to switch the transmission framework, which is more efficient and avoids obvious pauses.
  • the BLE connection establishment process may include:
  • the Host of the left earphone initiates the establishment of the BLE connection through the HCI instruction. Specifically, the host of the left earphone can send the HCI command "LE create connection" to the LL of the left earphone through HCI. Correspondingly, the LL of the left earphone can return the response message "HCI Command Status".
  • the right earphone sends a broadcast.
  • the left earphone initiates the connection to the right earphone. Specifically, a connection request is sent from the LL of the left earphone to the LL of the right earphone.
  • After receiving the connection request, the LL of the right earphone notifies the Host of the right earphone through an HCI instruction, and the establishment of the BLE connection is completed.
  • the process of establishing connection in BLE described in S902-S907 is: the right earphone sends a broadcast, and the left earphone initiates a connection to the right earphone.
  • the left earphone can also send a broadcast, and the right earphone can initiate a connection to the left earphone.
  • the host of the mobile phone initiates the establishment of the BLE connection through the HCI instruction. Specifically, the host of the mobile phone can send the HCI command "LE create connection" to the LL of the mobile phone through the HCI. Correspondingly, the LL of the mobile phone can return the response message "HCI Command Status".
  • the left earphone sends a broadcast.
  • the mobile phone initiates a connection to the left earphone. Specifically, a connection request is sent from the LL of the mobile phone to the LL of the left earphone.
  • After receiving the connection request, the LL of the left earphone notifies the Host of the left earphone through an HCI instruction that the establishment of the BLE connection is completed.
  • After sending the connection request, the LL of the mobile phone notifies the Host of the mobile phone through an HCI instruction, and the establishment of the BLE connection is completed.
  • the process of BLE connection establishment described in S909-S914 is: the left earphone sends a broadcast, and the mobile phone initiates the connection to the left earphone.
  • the mobile phone can also send a broadcast, and the left earphone initiates a connection to the mobile phone.
  • the host of the mobile phone initiates the establishment of the BLE connection through the HCI instruction. Specifically, the host of the mobile phone can send the HCI command "LE create connection" to the LL of the mobile phone through the HCI. Correspondingly, the LL of the mobile phone can return the response message "HCI Command Status".
  • the right earphone sends a broadcast.
  • the mobile phone initiates a connection to the right earphone. Specifically, a connection request is sent from the LL of the mobile phone to the LL of the right earphone.
  • After receiving the connection request, the LL of the right earphone notifies the Host of the right earphone through an HCI instruction, and the establishment of the BLE connection is completed.
  • After sending the connection request, the LL of the mobile phone notifies the Host of the mobile phone through an HCI instruction, and the establishment of the BLE connection is completed.
  • the process of BLE connection establishment described in S916-S921 is: the right earphone sends a broadcast, and the mobile phone initiates the connection to the right earphone.
  • the mobile phone can also send a broadcast, and the right headset can initiate a connection to the mobile phone.
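  • The following sketch summarizes the advertiser/initiator roles used in the three BLE connection establishment procedures above (S902-S921); the helper function is a hypothetical illustration, and, as noted, the roles may also be reversed.

```python
# Hypothetical sketch of who advertises and who initiates each BLE connection.
def establish_ble_connection(advertiser: str, initiator: str) -> str:
    steps = [
        f"{initiator} Host -> {initiator} LL: HCI LE Create Connection",
        f"{advertiser}: sends advertising packets",
        f"{initiator} LL -> {advertiser} LL: connection request",
        "both Hosts notified: BLE connection established",
    ]
    return "\n".join(steps)

# Roles as described above; either side may instead advertise or initiate.
print(establish_ble_connection(advertiser="right earphone", initiator="left earphone"))
print(establish_ble_connection(advertiser="left earphone",  initiator="mobile phone"))
print(establish_ble_connection(advertiser="right earphone", initiator="mobile phone"))
```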
  • the electronic device 200 may be implemented as the first audio device mentioned in the above embodiment, and may be the first audio device 101 in the wireless audio system 100 shown in FIG. 1.
  • the electronic device 200 can generally be used as an audio source (audio source), such as a mobile phone or a tablet, and can transmit audio data to other audio receiving devices (such as headphones, speakers, etc.), so that the other audio receiving devices can convert the audio data into sound.
  • the electronic device 200 can also be used as an audio receiver (audio sink), receiving audio data transmitted by audio sources of other devices (such as a headset with a microphone), for example audio data converted from the user's voice collected by the headset.
  • FIG. 10A shows a schematic structural diagram of the electronic device 200.
  • the electronic device 200 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , Mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, key 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light Sensor 180L, bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 200.
  • the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the electronic device 200 may also include one or more processors 110.
  • the controller may be the nerve center and command center of the electronic device 200.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of fetching instructions and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves the efficiency of the electronic device 200.
  • the processor 110 may include one or more interfaces.
  • The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may respectively couple the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 200.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, to realize the function of answering the phone call through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to realize the function of answering the phone call through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193.
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI) and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 200.
  • the processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the electronic device 200.
  • the GPIO interface can be configured via software.
  • the GPIO interface can be configured as a control signal or a data signal.
  • the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 200, and can also be used to transfer data between the electronic device 200 and peripheral devices. It can also be used to connect headphones and play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 200.
  • the electronic device 200 may also use an interface connection method different from those in the foregoing embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 200. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 200 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 200.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1 and filter, amplify, etc. the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor and convert it to electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 200, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters electromagnetic wave signals, and transmits the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic wave radiation through the antenna 2.
  • the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
  • the antenna 1 of the electronic device 200 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 200 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long-term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the electronic device 200 can realize a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 200 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 200 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, and the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be set in the camera 193.
  • the camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 200 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals. In addition to digital image signals, it can also process other digital signals. For example, when the electronic device 200 is selected at a frequency point, the digital signal processor is used to perform Fourier transform on the energy at the frequency point.
  • Video codec is used to compress or decompress digital video.
  • the electronic device 200 may support one or more video codecs. In this way, the electronic device 200 can play or record videos in various encoding formats, for example: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent recognition of the electronic device 200, such as image recognition, face recognition, voice recognition, and text understanding.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, photos, videos and other data in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs including instructions.
  • the processor 110 may execute the above-mentioned instructions stored in the internal memory 121, so that the electronic device 200 executes the data sharing method, various functional applications, and data processing provided in some embodiments of the present application.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store the operating system; the storage program area can also store one or more application programs (such as gallery, contacts, etc.) and so on.
  • the storage data area may store data (such as photos, contacts, etc.) created during use of the electronic device 200.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and so on.
  • the electronic device 200 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A also called “speaker” is used to convert audio electrical signals into sound signals.
  • the electronic device 200 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B also known as "handset" is used to convert audio electrical signals into sound signals.
  • when the electronic device 200 answers a phone call or a voice message, the voice can be received by bringing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mic", is used to convert sound signals into electrical signals.
  • the user can make a sound with the mouth close to the microphone 170C, so as to input a sound signal to the microphone 170C.
  • the electronic device 200 may be provided with at least one microphone 170C. In other embodiments, the electronic device 200 may be provided with two microphones 170C. In addition to collecting sound signals, it may also implement a noise reduction function. In other embodiments, the electronic device 200 may also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headset interface 170D is used to connect wired headsets.
  • the earphone interface 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may be at least two parallel plates with conductive materials. When force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 200 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 200 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 200 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
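  • A minimal sketch of this pressure-threshold behaviour is shown below; the threshold value and the function names are arbitrary assumptions made for illustration only.

```python
# Hypothetical sketch: the same touch position triggers different instructions
# depending on the touch operation intensity reported by the pressure sensor.
FIRST_PRESSURE_THRESHOLD = 0.5   # normalised pressure, assumed value

def handle_touch_on_sms_icon(pressure: float) -> str:
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view short message"
    return "create new short message"

print(handle_touch_on_sms_icon(0.2), handle_touch_on_sms_icon(0.8))
```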
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 200. In some embodiments, the angular velocity of the electronic device 200 around three axes (ie, x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the jitter angle of the electronic device 200, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to counteract the jitter of the electronic device 200 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 200 calculates the altitude using the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 200 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • the electronic device 200 may detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • Further, according to the detected opening and closing state of the leather holster or the flip cover, features such as automatic unlocking upon flipping open can be set.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 200 in various directions (generally three axes). When the electronic device 200 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of electronic devices, and be used in applications such as horizontal and vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 200 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the electronic device 200 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 200 emits infrared light outward through the light emitting diode.
  • the electronic device 200 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 200. When insufficient reflected light is detected, the electronic device 200 may determine that there is no object near the electronic device 200.
  • the electronic device 200 can use the proximity light sensor 180G to detect that the user holds the electronic device 200 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in the leather case mode and the pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the electronic device 200 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 200 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 200 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and so on.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 200 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 200 reduces the performance of a processor located near the temperature sensor 180J, so as to reduce power consumption and implement thermal protection.
  • when the temperature is lower than another threshold, the electronic device 200 heats the battery 142 to avoid abnormal shutdown of the electronic device 200 due to low temperature.
  • when the temperature is lower than still another threshold, the electronic device 200 boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
  • the touch sensor 180K can also be called a touch panel or a touch-sensitive surface.
  • the touch sensor 180K may be provided on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen”.
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 200, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire a vibration signal of a vibrating bone of the human vocal part.
  • the bone conduction sensor 180M can also be in contact with the human pulse and receive a blood pressure pulsation signal.
  • the bone conduction sensor 180M may also be provided in the earphone and combined into a bone conduction earphone.
  • the audio module 170 may parse out a voice signal from the vibration signal of the vocal-part bone acquired by the bone conduction sensor 180M, to implement a voice function.
  • the application processor may parse heart rate information from the blood pressure pulsation signal acquired by the bone conduction sensor 180M, to implement a heart rate detection function.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 200 may receive key input and generate key signal input related to user settings and function control of the electronic device 200.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191.
  • different application scenarios (for example, time reminder, receiving information, alarm clock, and games) may also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate a charging state, a power change, and may also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 200.
  • the electronic device 200 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 200 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 200 uses eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 200 and cannot be separated from the electronic device 200.
  • the electronic device 200 exemplarily shown in FIG. 10A may display various user interfaces described in the following embodiments through the display screen 194.
  • the electronic device 200 can detect a touch operation in each user interface through the touch sensor 180K, for example, a tap operation in each user interface (such as a touch operation on an icon or a double-tap operation), an upward or downward swipe, a circle-drawing gesture, and so on.
  • the electronic device 200 may detect a motion gesture performed by the user holding the electronic device 200, such as shaking the electronic device, through the gyro sensor 180B, the acceleration sensor 180E, or the like.
  • the electronic device 200 can detect non-touch gesture operations through the camera 193 (eg, 3D camera, depth camera).
  • the terminal application processor (AP) included in the electronic device 200 can implement the Host in the audio protocol framework shown in FIG. 3, the Bluetooth (BT) module included in the electronic device 200 can implement the Controller in the audio protocol framework shown in FIG. 3, and the two communicate through the HCI. That is, the functions of the audio protocol framework shown in FIG. 3 are distributed on two chips.
  • alternatively, the terminal application processor (AP) of the electronic device 200 may implement both the Host and the Controller in the audio protocol framework shown in FIG. 3. That is, all the functions of the audio protocol framework shown in FIG. 3 are placed on one chip. Because the Host and the Controller are on the same chip, a physical HCI does not need to exist, and the Host and the Controller interact directly through an application programming interface (API).
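The two deployment options above differ only in how the Host reaches the Controller: over a physical HCI transport when they sit on two chips, or through a direct API call when they share one chip. The C sketch below illustrates that distinction schematically; the command structure and opcode value are invented for illustration and do not reflect the HCI packet format of the Bluetooth specification.

```c
#include <stdio.h>

/* A host-to-controller request; the opcode value is illustrative only. */
typedef struct {
    unsigned short opcode;
    unsigned char  payload[32];
    unsigned char  len;
} host_cmd_t;

/* Two-chip variant: the command is serialised and sent to the Bluetooth
 * chip over a physical HCI transport (for example a UART). */
static void send_over_hci(const host_cmd_t *cmd) {
    printf("HCI transport: opcode 0x%04x, %u payload bytes\n",
           (unsigned)cmd->opcode, (unsigned)cmd->len);
}

/* Single-chip variant: host and controller share an address space, so the
 * host simply calls into the controller through an API. */
static void controller_api_call(const host_cmd_t *cmd) {
    printf("direct API call: opcode 0x%04x handled in-process\n",
           (unsigned)cmd->opcode);
}

int main(void) {
    host_cmd_t cmd = { 0x200D, { 0 }, 0 };
    send_over_hci(&cmd);       /* functions of FIG. 3 split across two chips */
    controller_api_call(&cmd); /* functions of FIG. 3 on a single chip       */
    return 0;
}
```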
  • the software system of the electronic device 200 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • the embodiment of the present invention takes the Android system with a layered architecture as an example to exemplarily explain the software structure of the electronic device 200.
  • FIG. 10B is a block diagram of the software structure of the electronic device 200 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
  • the application layer may include a series of application packages.
  • the application package may include applications such as games, voice assistants, music players, video players, mailboxes, calls, navigation, and file browsers.
  • the application framework layer provides an application programming interface (API) and a programming framework for applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, intercept the screen, etc.
  • Content providers are used to store and retrieve data, and make these data accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes an SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 200. For example, the management of the call state (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear after a short stay without user interaction.
  • the notification manager is used to notify the completion of downloading, message reminders, etc.
  • the notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window.
  • for example, the notification manager may display a text message prompt in the status bar, emit a prompt sound, make the electronic device vibrate, or make the indicator light flash.
  • Android Runtime includes core library and virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in the virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
  • the following describes the workflow of the software and hardware of the electronic device 200 by using a photographing scenario as an example.
  • when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps and other information of touch operations).
  • the original input event is stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. For example, the touch operation is a tap operation, and the control corresponding to the tap operation is the icon of the camera application.
  • the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or a video.
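The photographing workflow above (hardware interrupt, raw input event, control lookup, application start, camera driver) can be pictured as a short pipeline. The following C sketch is only a schematic of that flow; the function names and the hit-test result are hypothetical.

```c
#include <stdio.h>
#include <string.h>

/* Raw input event produced by the kernel layer from the hardware interrupt. */
typedef struct {
    int  x, y;
    long timestamp_ms;
} raw_input_event_t;

/* Hypothetical control lookup: maps touch coordinates to a control name. */
static const char *find_control(const raw_input_event_t *ev) {
    (void)ev;
    return "camera_app_icon";   /* assumed hit-test result for this example */
}

static void start_camera_driver(void) { printf("kernel layer: camera driver started\n"); }

static void start_application(const char *control) {
    if (strcmp(control, "camera_app_icon") == 0) {
        printf("framework layer: launching camera application\n");
        start_camera_driver();
        printf("capturing still image or video\n");
    }
}

int main(void) {
    raw_input_event_t ev = { 40, 60, 12345 };   /* stored by the kernel layer       */
    start_application(find_control(&ev));       /* framework identifies the control */
    return 0;
}
```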
  • the audio output device 300 may be implemented as the second audio device or the third audio device mentioned in the above embodiment, and may be the second audio device 102 or the third audio device 103 in the wireless audio system 100 shown in FIG. 1.
  • the audio output device 300 can generally be used as an audio receiving device (audio sink), such as a headset or a speaker, can receive audio data transmitted by other audio sources (such as mobile phones and tablets), and can convert the received audio data into sound.
  • the audio output device 300 can also be used as an audio source to transmit audio data to other devices acting as audio sinks (such as a mobile phone), for example, audio data converted from the user's voice collected by the headset.
  • FIG. 11 exemplarily shows a schematic structural diagram of an audio output device 300 provided by the present application.
  • the audio output device 300 may include a processor 302, a memory 303, a Bluetooth communication processing module 304, a power supply 305, a wear detector 306, a microphone 307, and an electric/acoustic converter 308. These components can be connected via a bus. Among them:
  • the processor 302 may be used to read and execute computer-readable instructions.
  • the processor 302 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding and issues control signals for the operations corresponding to the instructions.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logical operations, and can also perform address operations and conversions.
  • the register is mainly responsible for temporarily storing register operands and intermediate operation results generated during instruction execution.
  • the hardware architecture of the processor 302 may be an application specific integrated circuit (Application Specific Integrated Circuits, ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
  • the processor 302 may be used to parse signals received by the Bluetooth communication processing module 304, such as signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the processor 302 may be used to perform corresponding processing operations according to the analysis result, such as driving the electrical/acoustic converter 308 to start or pause or stop converting audio data into sound, and so on.
  • the processor 302 may also be used to generate signals sent out by the Bluetooth communication processing module 304, such as Bluetooth broadcast signals, beacon signals, and audio data converted from the collected sound.
  • the memory 303 is coupled to the processor 302 and is used to store various software programs and/or multiple sets of instructions.
  • the memory 303 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 303 can store an operating system, such as embedded operating systems such as uCOS, VxWorks, RTLinux, and so on.
  • the memory 303 may also store a communication program that can be used to communicate with the electronic device 200, one or more servers, or additional devices.
  • the Bluetooth (BT) communication processing module 304 may receive signals transmitted by other devices (such as the electronic device 200), such as scan signals, broadcast signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the Bluetooth (BT) communication processing module 304 may also transmit signals, such as broadcast signals, scan signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the power supply 305 may be used to supply power to the processor 302, the memory 303, the Bluetooth communication processing module 304, the wear detector 306, the electrical/acoustic converter 308, and other internal components.
  • the wear detector 306 may be used to detect the wearing state of the audio output device 300, such as an unworn state or a worn state, and may even distinguish a tightly worn state.
  • the wear detector 306 may be implemented by one or more of a distance sensor, a pressure sensor, and the like.
  • the wear detector 306 can transmit the detected wearing state to the processor 302, so that the processor 302 can be powered on when the audio output device 300 is worn by the user and powered off when the audio output device 300 is not worn, to save power.
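A minimal sketch of the wear-based power policy described above is shown below in C; the power hooks are placeholders, and a real device would instead gate power rails or clocks.

```c
#include <stdio.h>

typedef enum { WEAR_NOT_WORN, WEAR_WORN, WEAR_WORN_TIGHT } wear_state_t;

/* Hypothetical power hooks standing in for real power management. */
static void power_on(void)  { printf("audio path powered on\n");  }
static void power_off(void) { printf("audio path powered off\n"); }

/* Applies the policy described above: power on when the device is worn,
 * power off otherwise, to save power. */
static void apply_wear_policy(wear_state_t state) {
    if (state == WEAR_NOT_WORN) power_off();
    else                        power_on();
}

int main(void) {
    apply_wear_policy(WEAR_WORN);
    apply_wear_policy(WEAR_NOT_WORN);
    return 0;
}
```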
  • the microphone 307 can be used to collect sounds, such as the voice of the user speaking, and can output the collected sounds to the electric/acoustic converter 308, so that the electric/acoustic converter 308 can convert the sound collected by the microphone 307 into audio data.
  • the electric/acoustic converter 308 can be used to convert sound into electrical signals (audio data), for example, convert the sound collected by the microphone 307 into audio data, and can transmit audio data to the processor 302. In this way, the processor 302 can trigger the Bluetooth (BT) communication processing module 304 to transmit the audio data.
  • the electrical/acoustic converter 308 may also be used to convert electrical signals (audio data) into sound, for example, to convert audio data output by the processor 302 into sound.
  • the audio data output by the processor 302 may be received by the Bluetooth (BT) communication processing module 304.
  • the processor 302 can implement the Host in the audio protocol framework shown in FIG. 3, the Bluetooth (BT) communication processing module 304 can implement the Controller in the audio protocol framework shown in FIG. 3, and the two communicate through the HCI. That is, the functions of the audio protocol framework shown in FIG. 3 are distributed on two chips.
  • alternatively, the processor 302 may implement both the Host and the Controller in the audio protocol framework shown in FIG. 3. That is, all the functions of the audio protocol framework shown in FIG. 3 are placed on one chip. Because the Host and the Controller are on the same chip, a physical HCI does not need to exist, and the Host and the Controller interact directly through an application programming interface (API).
  • the structure illustrated in FIG. 11 does not constitute a specific limitation on the audio output device 300.
  • the audio output device 300 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • FIG. 12 shows a schematic structural diagram of a chip set provided by the present application.
  • the chipset 400 may include chip 1 and chip 2.
  • Chip 1 and chip 2 communicate through the HCI 409 interface.
  • the chip 1 may include the following modules: a multimedia audio module 402, a voice module 403, a background sound module 404, a content control module 405, a stream control module 406, a stream data module 407, and an L2CAP module 408.
  • the chip 2 may include: an LE physical layer module 413 and an LE link layer module 410.
  • the LE physical layer module 413 can be used to provide a physical channel (commonly referred to as a channel) for data transmission.
  • generally, there are several different types of channels in a communication system, such as control channels, data channels, and voice channels.
  • the LE link layer module 410 can be used to provide, on the basis of the physical layer, a physically independent logical transmission channel (also referred to as a logical link) between two or more devices.
  • the LE link layer module 410 can be used to control the radio frequency state of the device.
  • the device will be in one of five states: waiting (standby), advertising, scanning, initiating, and connection.
  • the broadcast device can send data without establishing a connection, and the scanning device receives the data sent by the broadcast device. The device that initiates the connection responds to the broadcast device by sending a connection request; if the broadcast device accepts the connection request, both the broadcast device and the device that initiates the connection enter the connection state.
  • the device that initiates the connection is called the master device, and the device that accepts the connection request is called the slave device.
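The state behaviour described above can be viewed as a small state machine. The C sketch below is a much-reduced illustration of those transitions, not the link layer state machine defined by the Bluetooth specification; the event names are invented for the example.

```c
#include <stdio.h>

/* Link layer radio states mentioned above. */
typedef enum { ST_STANDBY, ST_ADVERTISING, ST_SCANNING, ST_INITIATING, ST_CONNECTION } ll_state_t;

typedef enum { EV_START_ADV, EV_START_SCAN, EV_SEND_CONN_REQ, EV_CONN_REQ_ACCEPTED } ll_event_t;

/* A reduced transition function: an advertiser that accepts a connection
 * request, or an initiator whose request is accepted, enters the connection
 * state; master/slave roles follow the text above. */
static ll_state_t ll_step(ll_state_t s, ll_event_t e) {
    switch (e) {
    case EV_START_ADV:         return (s == ST_STANDBY)  ? ST_ADVERTISING : s;
    case EV_START_SCAN:        return (s == ST_STANDBY)  ? ST_SCANNING    : s;
    case EV_SEND_CONN_REQ:     return (s == ST_SCANNING) ? ST_INITIATING  : s;
    case EV_CONN_REQ_ACCEPTED: return (s == ST_ADVERTISING || s == ST_INITIATING)
                                          ? ST_CONNECTION : s;
    }
    return s;
}

int main(void) {
    ll_state_t s = ST_STANDBY;
    s = ll_step(s, EV_START_ADV);
    s = ll_step(s, EV_CONN_REQ_ACCEPTED);
    printf("final state: %d (4 = connection)\n", s);
    return 0;
}
```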
  • the LE link layer module 410 may include a LE ACL module 411 and a LE isochronous (ISO) module 412.
  • the LE ACL module 411 can be used to transmit control messages between devices through the LE ACL link, such as flow control messages, content control messages, and volume control messages.
  • the LE ISO module 412 can be used to transmit isochronous data (such as streaming data itself) between devices through an isochronous data transmission channel.
  • the L2CAP module 408 can be used to manage the logical links provided by the logical layer. Based on L2CAP, different upper-layer applications can share the same logical link, which is similar to the concept of a port in TCP/IP.
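To illustrate the port-like sharing just described, the C sketch below multiplexes two upper-layer users onto one logical link via channel identifiers; the CID values and user names are illustrative assumptions only.

```c
#include <stdio.h>

/* Several upper-layer channels share one logical link, identified here by a
 * channel id (CID), loosely analogous to TCP/IP ports. Values are illustrative. */
typedef struct {
    unsigned short cid;
    const char    *upper_layer_user;
} l2cap_channel_t;

static void send_on_link(const l2cap_channel_t *ch, const char *data) {
    printf("logical link -> CID 0x%04x (%s): %s\n",
           (unsigned)ch->cid, ch->upper_layer_user, data);
}

int main(void) {
    l2cap_channel_t content_control = { 0x0040, "content control module" };
    l2cap_channel_t stream_control  = { 0x0041, "stream control module"  };

    /* Both users are multiplexed over the same underlying logical link. */
    send_on_link(&content_control, "play/pause message");
    send_on_link(&stream_control,  "codec parameter negotiation");
    return 0;
}
```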
  • the multimedia audio module 402, the voice module 403, and the background sound module 404 may be modules set according to service scenarios, and may be used to divide the audio applications of the application layer into audio services such as multimedia audio, voice, and background sound. The division is not limited to multimedia audio, voice, and background sound; audio services can also be divided into voice, music, games, video, voice assistant, e-mail reminder, alarm, reminder sound, navigation sound, and so on.
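As one possible illustration of dividing application-layer audio into audio services, the following C sketch maps a few application names to service categories; the mapping itself is hypothetical and only shows the idea of classifying at the granularity of an audio service.

```c
#include <stdio.h>
#include <string.h>

typedef enum { SVC_MULTIMEDIA_AUDIO, SVC_VOICE, SVC_BACKGROUND_SOUND } audio_service_t;

/* A hypothetical classification table; the application only requires that
 * audio applications be divided into audio services at some granularity. */
static audio_service_t classify_app(const char *app) {
    if (strcmp(app, "call") == 0 || strcmp(app, "voice assistant") == 0)
        return SVC_VOICE;
    if (strcmp(app, "email reminder") == 0 || strcmp(app, "alarm") == 0)
        return SVC_BACKGROUND_SOUND;
    return SVC_MULTIMEDIA_AUDIO;    /* music player, video player, games, ... */
}

int main(void) {
    const char *apps[] = { "music player", "call", "alarm" };
    for (unsigned i = 0; i < 3; i++)
        printf("%s -> service %d\n", apps[i], classify_app(apps[i]));
    return 0;
}
```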
  • the content control module 405 can be responsible for encapsulating the content control messages of various audio services.
  • the stream control module 406 can be used to negotiate parameters for a specific audio service, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation, and to create an isochronous data transmission channel for the specific service based on the negotiated parameters. The isochronous data transmission channel created for the specific service can be used to transmit audio data of that specific audio service.
  • the specific audio service may be referred to as a first audio service
  • the negotiated parameter may be referred to as a first parameter.
  • the streaming data module 407 may be used to output the audio data of the audio service to the LE isochronous (ISO) module 412 to transmit the audio data through the isochronous data transmission channel.
  • the isochronous data transmission channel may be CIS.
  • CIS can be used to transfer isochronous data between connected devices.
  • the isochronous data transmission channel is finally carried in the LE ISO module 412.
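Putting the pieces above together, the C sketch below shows, in schematic form, per-service parameter negotiation followed by creation of an isochronous data transmission channel (such as a CIS); the parameter fields and values are placeholders, not parameters mandated by this application.

```c
#include <stdio.h>

/* Per-service parameters negotiated between the two devices; the fields and
 * values below are placeholders chosen only for illustration. */
typedef struct {
    int qos_latency_ms;
    int codec_bitrate_kbps;
    int iso_interval_us;
} service_params_t;

static service_params_t negotiate(const char *service) {
    /* A voice service might ask for lower latency than multimedia audio. */
    if (service[0] == 'v')  /* "voice" */
        return (service_params_t){ 20, 64, 7500 };
    return (service_params_t){ 100, 256, 10000 };
}

/* Creates an isochronous data transmission channel (e.g. a CIS) for the
 * service, carried by the LE ISO module, using the negotiated parameters. */
static void create_iso_channel(const char *service, service_params_t p) {
    printf("%s: CIS with latency %d ms, %d kbps, ISO interval %d us\n",
           service, p.qos_latency_ms, p.codec_bitrate_kbps, p.iso_interval_us);
}

int main(void) {
    create_iso_channel("voice",            negotiate("voice"));
    create_iso_channel("multimedia audio", negotiate("multimedia audio"));
    return 0;
}
```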
  • chip 1 may be implemented as an application processor (AP), and chip 2 may be implemented as a Bluetooth processor (or referred to as a Bluetooth module, Bluetooth chip, etc.).
  • chip 1 may be referred to as a first chip, and chip 2 may be referred to as a second chip.
  • the chipset 400 may be included in the first audio device in the foregoing method embodiment, or may be included in the first audio device and the second audio device in the foregoing method embodiment.
  • the structure illustrated in FIG. 12 does not constitute a specific limitation on the chipset 400.
  • the chipset 400 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • FIG. 13 shows a schematic structural diagram of a chip provided by the present application.
  • the chip 500 may include: a multimedia audio module 502, a voice module 503, a background sound module 504, a content control module 505, a stream control module 506, a stream data module 507, an L2CAP module 508, an LE physical layer module 513, and an LE link layer module 510.
  • in the chip architecture shown in FIG. 13, the Host and the Controller in the audio protocol framework shown in FIG. 3 are implemented on one chip at the same time. Because the Host and the Controller are implemented on the same chip, the HCI may not be needed inside the chip.
  • in contrast, in the chip architecture shown in FIG. 12, the Host and the Controller in the audio protocol framework shown in FIG. 3 are implemented on two chips, respectively.
  • the chip 500 may be included in the first audio device in the foregoing method embodiment, or may be included in the first audio device and the second audio device in the foregoing method embodiment.
  • the structure shown in FIG. 13 does not constitute a specific limitation on the chip 500.
  • the chip 500 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • a person of ordinary skill in the art can understand that all or part of the process in the method of the above embodiments can be implemented by a computer program instructing related hardware.
  • the program can be stored in a computer-readable storage medium, and when the program is executed, the processes of the foregoing method embodiments may be performed.
  • the foregoing storage media include various media that can store program code, such as a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to a wireless audio system and an audio communication method and device, in which a parameter of an isochronous data transmission channel is determined for each audio service, with the audio service as the granularity. Parameter negotiations, such as QoS parameter negotiation, codec parameter negotiation and ISO parameter negotiation, can be carried out between a first audio device (such as a mobile phone or a media player) and a second audio device (such as an earphone) with the audio service as the granularity, and an isochronous data transmission channel is then created on the basis of the negotiated parameters. Regardless of the type of audio service scenario, streaming data is transmitted over an LE ISO link, and the switching of service scenarios does not involve switching of transmission frames; therefore, efficiency is higher.
PCT/CN2018/118791 2018-11-30 2018-11-30 Système audio sans fil et procédé et dispositif de communication audio WO2020107491A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2018/118791 WO2020107491A1 (fr) 2018-11-30 2018-11-30 Système audio sans fil et procédé et dispositif de communication audio
CN201880099860.1A CN113169915B (zh) 2018-11-30 2018-11-30 无线音频系统、音频通讯方法及设备
CN202211136691.9A CN115665670A (zh) 2018-11-30 2018-11-30 无线音频系统、音频通讯方法及设备

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/118791 WO2020107491A1 (fr) 2018-11-30 2018-11-30 Système audio sans fil et procédé et dispositif de communication audio

Publications (1)

Publication Number Publication Date
WO2020107491A1 true WO2020107491A1 (fr) 2020-06-04

Family

ID=70852513

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/118791 WO2020107491A1 (fr) 2018-11-30 2018-11-30 Système audio sans fil et procédé et dispositif de communication audio

Country Status (2)

Country Link
CN (2) CN115665670A (fr)
WO (1) WO2020107491A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615647A (zh) * 2022-03-11 2022-06-10 北京小米移动软件有限公司 通话控制方法、装置及存储介质
TWI792701B (zh) * 2020-12-18 2023-02-11 瑞昱半導體股份有限公司 支援低功耗藍牙音訊廣播運作並可同步調整音量大小的藍牙音訊廣播系統及相關的多成員藍牙裝置
US11709650B2 (en) 2020-12-18 2023-07-25 Realtek Semiconductor Corp. Bluetooth audio broadcasting system and related multi-member Bluetooth device supporting Bluetooth low energy audio broadcasting operations and capable of synchronously adjusting audio volume
CN116490923A (zh) * 2020-12-07 2023-07-25 Oppo广东移动通信有限公司 参数设置方法、装置、设备及存储介质
US11709651B2 (en) 2020-12-18 2023-07-25 Realtek Semiconductor Corp. Bluetooth audio broadcasting system and related multi-member Bluetooth device supporting Bluetooth low energy audio broadcasting operations and capable of synchronously adjusting audio volume
US11818555B2 (en) 2020-12-18 2023-11-14 Realtek Semiconductor Corp. Bluetooth audio broadcasting system and related multi-member Bluetooth device supporting Bluetooth low energy audio broadcasting operations and capable of synchronously adjusting audio volume
WO2024085664A1 (fr) * 2022-10-18 2024-04-25 삼성전자 주식회사 Dispositif électronique et procédé de transmission et/ou de réception de données sur la base d'un changement de configuration dans un dispositif électronique

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116939555A (zh) * 2022-03-29 2023-10-24 Oppo广东移动通信有限公司 服务查询的处理方法、装置、设备、存储介质及程序产品
CN115278332A (zh) * 2022-06-30 2022-11-01 海信视像科技股份有限公司 一种显示设备、播放设备和数据传输方法
CN117707467B (zh) * 2024-02-04 2024-05-03 湖北芯擎科技有限公司 音频通路多主机控制方法、系统、装置及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101067897A (zh) * 2007-05-30 2007-11-07 上海晖悦数字视频科技有限公司 一种基于蓝牙的数字电视机遥控器及其控制方法
CN105792050A (zh) * 2016-04-20 2016-07-20 青岛歌尔声学科技有限公司 一种蓝牙耳机及基于该蓝牙耳机的通信方法
CN108702720A (zh) * 2016-02-24 2018-10-23 高通股份有限公司 源设备广播与蓝牙等时信道相关联的同步信息
US10136429B2 (en) * 2014-07-03 2018-11-20 Lg Electronics Inc. Method for transmitting and receiving audio data in wireless communication system supporting bluetooth communication and device therefor

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101068196B (zh) * 2006-05-01 2010-05-12 中兴通讯股份有限公司 一种蓝牙手机接入蓝牙网关的业务接入控制方法
US8792945B2 (en) * 2006-10-31 2014-07-29 Motorola Mobility Llc Methods and devices for dual mode bidirectional audio communication
CN103733661A (zh) * 2011-07-25 2014-04-16 摩托罗拉移动有限责任公司 蓝牙通信系统中用于提供简档信息的方法和装置
US20160359925A1 (en) * 2015-06-08 2016-12-08 Lg Electronics Inc. Method and apparatus for transmitting and receiving data in wireless communication system
US20170208639A1 (en) * 2016-01-15 2017-07-20 Lg Electronics Inc. Method and apparatus for controlling a device using bluetooth technology

Also Published As

Publication number Publication date
CN115665670A (zh) 2023-01-31
CN113169915B (zh) 2022-10-04
CN113169915A (zh) 2021-07-23

Similar Documents

Publication Publication Date Title
CN113169915B (zh) 无线音频系统、音频通讯方法及设备
JP7293398B2 (ja) ブルートゥース接続方法、デバイス、およびシステム
EP3893475B1 (fr) Procédé de commutation automatique d'un procédé de codage audio bluetooth et appareil électronique
WO2020133183A1 (fr) Dispositif et procédé de synchronisation de données audio
WO2020014880A1 (fr) Procédé et dispositif d'interaction multi-écran
WO2020077512A1 (fr) Procédé de communication vocale, dispositif électronique et système
CN113438354B (zh) 数据传输方法、装置、电子设备和存储介质
WO2021043219A1 (fr) Procédé de reconnexion bluetooth et appareil associé
US20230189366A1 (en) Bluetooth Communication Method, Terminal Device, and Computer-Readable Storage Medium
CN112119641B (zh) 通过转发模式连接的多tws耳机实现自动翻译的方法及装置
WO2020124371A1 (fr) Dispositif et procédé d'établissement de canaux de données
CN114679710A (zh) 一种tws耳机连接方法及设备
WO2022222691A1 (fr) Procédé de traitement d'appel et dispositif associé
WO2022257563A1 (fr) Procédé de réglage de volume, et dispositif électronique et système
WO2020134868A1 (fr) Procédé d'établissement de connexion, et appareil terminal
WO2021043250A1 (fr) Procédé de communication bluetooth, et dispositif associé
CN113132959B (zh) 无线音频系统、无线通讯方法及设备
WO2022199491A1 (fr) Procédé et système de mise en réseau stéréo, et appareil associé
WO2022161006A1 (fr) Procédé et appareil de synthèse de photographie, et dispositif électronique et support de stockage lisible
WO2021218544A1 (fr) Système de fourniture de connexion sans fil, procédé et appareil électronique
CN113678481B (zh) 无线音频系统、音频通讯方法及设备
WO2022267917A1 (fr) Procédé et système de communication bluetooth
WO2024093614A1 (fr) Procédé et système d'entrée de dispositif, dispositif électronique et support de stockage
WO2023138533A1 (fr) Procédé de collaboration de service, dispositif électronique, support de stockage lisible et système de puce
CN114153531A (zh) 管理物联网设备的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18941117

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18941117

Country of ref document: EP

Kind code of ref document: A1