WO2020107491A1 - Wireless audio system, and audio communication method and device - Google Patents


Info

Publication number
WO2020107491A1
Authority
WO
WIPO (PCT)
Prior art keywords: audio, service, module, isochronous, content control
Application number
PCT/CN2018/118791
Other languages: French (fr), Chinese (zh)
Inventors: 朱宇洪, 王良, 郑勇, 张景云
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to PCT/CN2018/118791 (WO2020107491A1)
Priority to CN201880099860.1A (CN113169915B)
Priority to CN202211136691.9A (CN115665670A)
Publication of WO2020107491A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/40: Bus networks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 56/00: Synchronisation arrangements
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Definitions

  • This application relates to the field of wireless technologies, and in particular, to a wireless audio system and an audio communication method and device.
  • Bluetooth (Bluetooth) wireless technology is a short-range communication system intended to replace cable connections between portable and/or stationary electronic devices.
  • The key features of Bluetooth wireless communication technology are stability, low power consumption, and low cost. Many features of its core specification are optional and support product differentiation.
  • Bluetooth wireless technology has two forms of system: basic rate (BR) and low energy (LE). Both systems include device discovery, connection establishment, and connection mechanisms.
  • the basic rate (BR) system may include an optional enhanced data rate (EDR) and an alternate media access control and physical layer extension (AMP).
  • Devices that implement both the BR and LE systems can communicate with other devices that also implement both. Some profiles and use cases are supported by only one of the systems; therefore, devices that implement both systems can support more use cases.
  • The Bluetooth profile is a concept unique to the Bluetooth protocol.
  • the Bluetooth protocol not only specifies the core specification (called the Bluetooth core), but also defines application layer specifications for various application scenarios. These application layer specifications are called Bluetooth profiles.
  • the Bluetooth protocol has developed profiles for various common application scenarios, such as the advanced audio distribution profile (A2DP), audio/video remote control profile (AVRCP), basic imaging profile (BIP), hands-free profile (HFP), human interface device profile (HID), headset profile (HSP), serial port profile (SPP), file transfer profile (FTP), personal area network profile (PAN), and many more.
  • This application provides a wireless audio system and an audio communication method and device, which can solve the problem of poor compatibility in the existing Bluetooth protocol.
  • the present application provides an audio communication method, which is applied to an audio source side.
  • the method may include: establishing an ACL link between an audio source (such as a mobile phone or a media player) and an audio receiver (such as a headset).
  • the audio source can negotiate parameters with the audio receiver through the ACL link.
  • an isochronous data transmission channel can be established between the audio source and the audio receiver.
  • the isochronous data transmission channel can be used to transmit streaming data (i.e. audio data) of the first audio service.
  • a Bluetooth low energy connection is established between the audio source and the audio receiver.
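The source-side sequence above (establish the LE ACL link, negotiate parameters over it, then create the isochronous channel) can be sketched as follows. This is a minimal illustration only; the class, the method names, and the parameter values are assumptions, not APIs from the specification.

```python
class AudioSource:
    """Sketch of the source-side setup flow described above."""

    def __init__(self):
        self.acl_connected = False
        self.iso_channels = {}

    def connect_acl(self):
        # establish the LE ACL link with the audio receiver
        self.acl_connected = True

    def negotiate(self, service):
        # parameter negotiation is carried over the LE ACL link,
        # so the link must exist first
        if not self.acl_connected:
            raise RuntimeError("negotiation requires an established LE ACL link")
        return {"service": service, "qos": "default", "codec": "default"}

    def create_iso_channel(self, params):
        # the channel created from the negotiated parameter carries the
        # streaming data (audio data) of that audio service
        self.iso_channels[params["service"]] = params
        return params["service"]
```

A caller would run `connect_acl()`, then `negotiate(...)`, then `create_iso_channel(...)`, mirroring the order of the steps in the text.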
  • the present application provides an audio communication method, which is applied to the audio receiver side.
  • the method may include: the audio receiver and the audio source establish a Bluetooth low energy asynchronous connection (LE ACL) link.
  • the audio receiver performs parameter negotiation for the first audio service with the audio source through the LE ACL link; the first parameter determined in the negotiation corresponds to the first audio service.
  • the audio receiver may create, together with the audio source, an LE isochronous data transmission channel corresponding to the first audio service based on the first parameter.
  • the LE isochronous data transmission channel corresponding to the first audio service is used for the audio receiver to receive the audio data of the first audio service sent by the audio source.
  • a Bluetooth low energy connection is established between the audio source and the audio receiver.
  • the audio service may refer to a service or application capable of providing audio functions (such as audio playback, audio recording, etc.).
  • Audio services may involve audio-related data transmission services, such as the transmission of audio data itself, content control messages used to control the playback of audio data, and flow control messages used to create isochronous data transmission channels.
  • parameter negotiation and the establishment of isochronous data transmission channels can be performed at the granularity of audio services.
  • the flow control messages and content control messages of each audio service are transmitted through the LE ACL link, and the audio data is transmitted through the LE ISO link, which unifies the transmission framework of each service.
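The unified framework above can be illustrated with a small routing function; the message type names used here are assumptions for illustration only.

```python
def route(message_type):
    """Route a message to its link per the unified transmission framework:
    flow control and content control messages travel over the LE ACL link,
    while audio (stream) data travels over the LE ISO link."""
    if message_type in ("flow_control", "content_control"):
        return "LE ACL"
    if message_type == "audio_data":
        return "LE ISO"
    raise ValueError(f"unknown message type: {message_type}")
```

Because every audio service uses this same split, switching services does not require switching transmission frameworks, which is the compatibility point the text makes.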
  • the audio communication method provided in this application can be applied to more audio services and has better compatibility.
  • the ACL link may be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels.
  • the ACL link can also be used to carry content control messages, such as call control messages (e.g. answer, hang up), playback control messages (e.g. previous, next), and volume control messages (e.g. increase volume, decrease volume).
  • the audio source may generate the content control message of the first audio service, and may send the content control message of the first audio service to the audio receiver through the LE ACL link.
  • the audio receiver can receive the content control message of the first audio service sent by the audio source through the LE ACL link, and can perform content control on the first audio service according to the content control message.
  • the content control includes one or more of the following: volume control, playback control, and call control.
  • the content control message is used by the audio receiver to perform content control on the first audio service.
  • the content control includes one or more of the following: volume control, playback control, and call control.
  • the audio source may receive user input (for example, the user presses the phone hang-up button on the audio source), and then generates a content control message for the first audio service according to the user input.
  • the audio receiver may generate the content control message of the first audio service, and may send the content control message of the first audio service to the audio source through the LE ACL link.
  • the audio source can receive the content control message of the first audio service sent by the audio receiver through the LE ACL link, and can perform content control on the first audio service according to the content control message.
  • the content control includes one or more of the following: volume control, playback control, and call control.
  • the content control message is used by the audio source to perform content control on the first audio service.
  • the content control includes one or more of the following: volume control, playback control, and call control.
  • the audio receiver can receive user input (for example, the user presses the phone hangup button on the audio receiver), and then generates a content control message for the first audio service according to the user input.
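The content control flow just described (a user input on either device is turned into a message and sent over the LE ACL link) might be sketched as below. The message names and the input-to-message mapping are illustrative assumptions.

```python
from enum import Enum

class ContentControl(Enum):
    # messages covering the three content control categories named above
    VOLUME_UP = "volume up"
    VOLUME_DOWN = "volume down"
    PREVIOUS = "previous"
    NEXT = "next"
    ANSWER = "answer"
    HANG_UP = "hang up"

# hypothetical mapping from a user input to the generated message
INPUT_TO_MESSAGE = {
    "hangup_button": ContentControl.HANG_UP,
    "volume_up_key": ContentControl.VOLUME_UP,
}

def on_user_input(button):
    """Generate the content control message for the first audio service;
    the caller would then send it over the LE ACL link."""
    return INPUT_TO_MESSAGE[button]
```

The same mapping works in both directions: either the audio source or the audio receiver can generate the message, and the peer performs the corresponding control.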
  • the audio source may generate audio data of the first audio service, and may send the audio data of the first audio service to the audio receiver through the LE isochronous data transmission channel corresponding to the first audio service.
  • the audio receiver can receive the audio data sent by the audio source through the LE isochronous data transmission channel corresponding to the first audio service.
  • the audio receiver may convert the audio data of the first audio service into sound.
  • the audio receiver may store audio data of the first audio service.
  • the first parameter may include one or more of the following: QoS parameters, codec parameters, ISO parameters, and so on.
  • the QoS parameters may include parameters such as delay, packet loss rate, throughput, etc. that represent transmission quality.
  • Codec parameters may include parameters that affect audio quality, such as encoding method and compression ratio.
  • ISO parameters can include the CIS ID, the number of CISs, the maximum data size transmitted from master to slave, the maximum data size transmitted from slave to master, the maximum time interval for data packet transmission from master to slave at the link layer, the maximum time interval for data packet transmission from slave to master at the link layer, and so on.
  • the first parameter may be obtained by querying a database according to the first audio service, and the database may store parameters corresponding to various audio services.
  • the parameters corresponding to the audio service may be designed by comprehensively considering various audio switching situations or mixing situations involved in the audio service.
  • This parameter can be applied to the situations involved in the service. For example, in a game service, the game background sound and the microphone voice may be switched or superimposed (when the microphone is turned on during the game). The codec parameters and QoS parameters of the game background sound and the microphone voice may be different. For the game service, parameters suitable for this situation can be designed, so that when the user turns on the microphone to speak during a game, the listening experience is not affected.
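The per-service database lookup described above might look like the following sketch; every service name and parameter value here is invented for illustration and is not from the application.

```python
# hypothetical per-service parameter database; all values are assumptions
SERVICE_PARAMS = {
    "music": {"qos": {"latency_ms": 100}, "codec": {"bitrate_kbps": 256}},
    "call":  {"qos": {"latency_ms": 20},  "codec": {"bitrate_kbps": 64}},
    # a single parameter set sized for both game background sound and
    # microphone voice, so enabling the microphone mid-game needs no change
    "game":  {"qos": {"latency_ms": 30},  "codec": {"bitrate_kbps": 128}},
}

def first_parameter(service):
    """Obtain the first parameter by querying the database by audio service."""
    return SERVICE_PARAMS[service]
```

Designing the "game" entry to cover both background sound and microphone voice reflects the point in the text: one parameter set per service, chosen with that service's switching and mixing cases in mind.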
  • the content control message may include one or more of the following: a volume control message (such as increase volume, decrease volume), a playback control message (such as previous song, next song), and a call control message (such as answer, hang up).
  • the audio source and the audio receiver can re-negotiate the parameters, determine the new parameters corresponding to the new audio service (such as a telephone service), and then create a new isochronous data transmission channel based on the new parameters.
  • the new isochronous data transmission channel can be used to transmit streaming data of the new audio service (such as telephone service).
  • the isochronous data transmission channels of the various services are all based on LE. In this way, switching between service scenarios does not involve switching the transmission framework; efficiency is higher, and there is no obvious pause.
  • the isochronous data transmission channel corresponding to the old audio service can also be reconfigured using the new parameters corresponding to the new audio service (such as the telephone service), without creating a new isochronous data transmission channel based on the new parameters. In this way, efficiency can be further improved.
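The two switching strategies above (create a new channel versus reconfigure the old one) can be contrasted in a short sketch; the dictionary representation of a channel is an assumption made for illustration.

```python
def switch_service(channel, new_service, params_db, reconfigure=True):
    """Switch an isochronous data transmission channel to a new audio service.

    reconfigure=True  -> reuse the existing channel with the new service's
                         parameters (the faster option described above)
    reconfigure=False -> create a new channel from the new parameters
    """
    new_params = params_db[new_service]
    if reconfigure:
        channel["service"] = new_service
        channel["params"] = new_params
        return channel            # same channel object, reconfigured
    return {"service": new_service, "params": new_params}  # fresh channel
```

Either way the channel stays LE-based, so only the parameters change, not the transmission framework.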
  • the creation time of the isochronous data transmission channel may include the following options:
  • an isochronous data transmission channel can be created when the audio service arrives. For example, when the user opens a game application (and the game background sound starts playing), the application layer of the mobile phone sends a game background sound service creation notification to the Host; according to the notification, the mobile phone initiates the creation of an isochronous data transmission channel to the Bluetooth headset.
  • a default isochronous data transmission channel may be established first, and the default isochronous data transmission channel may be created based on default CIG parameters. In this way, when the audio service arrives, the default isochronous data transmission channel can be used directly to carry streaming data, and the response speed is faster.
  • multiple virtual isochronous data transmission channels may be established first.
  • the multiple virtual isochronous data transmission channels may correspond to multiple sets of different CIG parameters, and may be applicable to multiple audio services.
  • a virtual isochronous data transmission channel refers to an isochronous data transmission channel on which no data interaction occurs on the air interface. In this way, when an audio service arrives, a virtual isochronous data transmission channel corresponding to the audio service may be selected, a handshake is triggered between the first audio device and the second audio device, and communication starts.
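The three creation-time options above can be summarized as a selection function; the channel dictionaries and the preference order shown here are illustrative placeholders, not mandated by the application.

```python
def get_channel(service, default_channel=None, virtual_channels=None):
    """Pick an isochronous data transmission channel for an arriving service:
    a pre-built virtual channel matching the service, else a default channel
    built from default CIG parameters, else one created on demand."""
    if virtual_channels and service in virtual_channels:
        # virtual channel: pre-negotiated but idle on the air interface;
        # selecting it triggers the handshake and starts communication
        return ("virtual", virtual_channels[service])
    if default_channel is not None:
        # default channel: already created, so response is fastest
        return ("default", default_channel)
    # on demand: initiate the creation procedure when the service arrives
    return ("on_demand", {"service": service})
```

The trade-off mirrors the text: on-demand creation is simplest, a default channel responds faster, and per-service virtual channels combine fast response with service-appropriate parameters.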
  • an audio device which includes multiple functional units for correspondingly performing the method provided in any one of the possible implementation manners of the first aspect.
  • an audio device including a plurality of functional units for correspondingly performing the method provided in any one of the possible implementation manners of the second aspect.
  • an audio device for performing the audio communication method described in the first aspect.
  • the network device may include: a memory, and a processor, a transmitter, and a receiver coupled to the memory, where the transmitter is used to send signals to another wireless communication device, the receiver is used to receive signals sent by another wireless communication device, and the memory is used to store the implementation code of the audio communication method described in the first aspect.
  • the processor is used to execute the program code stored in the memory, that is, to execute the audio communication method described in any one of the possible implementations of the first aspect.
  • an audio device for performing the audio communication method described in the second aspect.
  • the terminal may include: a memory, and a processor, a transmitter, and a receiver coupled to the memory, where the transmitter is used to send signals to another wireless communication device, the receiver is used to receive signals sent by another wireless communication device, and the memory is used to store the implementation code of the audio communication method described in the second aspect.
  • the processor is used to execute the program code stored in the memory, that is, to execute the audio communication method described in any one of the possible implementations of the second aspect.
  • a chip set may include: a first chip and a second chip.
  • the first chip and the second chip communicate through an HCI interface.
  • the first chip may include the following modules: multimedia audio module, voice module, background sound module, content control module, stream control module, stream data module, and L2CAP module.
  • the second chip may include: an LE physical layer module and an LE link layer module.
  • LE physical layer module which can be used to provide a physical channel for data transmission (commonly referred to as a channel).
  • the LE link layer module can be used to provide, on the basis of the physical layer, a logical transmission channel (also called a logical link) between two or more devices that is independent of the physical channel.
  • the LE link layer module can be used to control the radio frequency state of the device. The device will be in one of five states: standby, advertising, scanning, initiating, and connection.
  • an advertising device can send data without establishing a connection, and a scanning device receives the data sent by the advertising device; a device that initiates a connection responds to the advertising device by sending a connection request. If the advertising device accepts the connection request, the advertising device and the initiating device both enter the connection state.
  • the device that initiates the connection is called the master device, and the device that accepts the connection request is called the slave device.
  • the LE link layer module may include the LE ACL module and the LE isochronous (ISO) module.
  • the LE ACL module can be used to transmit control messages between devices through the LE ACL link, such as flow control messages, content control messages, and volume control messages.
  • the LE ISO module can be used to transmit isochronous data (such as the streaming data itself) between devices through isochronous data transmission channels.
  • the L2CAP module can be used to manage the logical links provided by the link layer. Based on L2CAP, different upper-layer applications can share the same logical link, similar to the concept of a port in TCP/IP.
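The port analogy can be sketched as follows. This is not the real L2CAP API, only an illustration of several upper-layer applications sharing one logical link through channel identifiers (CIDs).

```python
class L2CAPMultiplexer:
    """Toy multiplexer: many apps, one logical link, one CID per app."""

    def __init__(self):
        self._next_cid = 0x0040   # dynamically allocated CIDs start at 0x0040
        self._channels = {}

    def open_channel(self, app_name):
        """Give an upper-layer application its own channel (like a TCP port)
        on the shared logical link, identified by a CID."""
        cid = self._next_cid
        self._next_cid += 1
        self._channels[cid] = app_name
        return cid

    def deliver(self, cid, payload):
        """Demultiplex an incoming payload to the application owning the CID."""
        return (self._channels[cid], payload)
```

Just as two TCP sockets share one IP interface, two profiles here share one logical link while keeping their traffic separated by CID.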
  • the multimedia audio module, voice module, and background sound module may be modules set according to service scenarios, and may be used to divide the audio applications of the application layer into several types of audio services, such as multimedia audio, voice, and background sound. The division is not limited to multimedia audio, voice, and background sound; audio services can also be divided into voice, music, games, video, voice assistant, e-mail reminders, alarms, reminder sounds, navigation sounds, and so on.
  • the content control module can be responsible for encapsulating the content control messages (such as previous, next, etc.) of various audio services, and outputting the content control message of the audio service to the LE ACL module 411, so that the encapsulated content control message is transmitted through the LE ACL module 411.
  • the flow control module can be used to negotiate parameters for a specific audio service, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation, and to create an isochronous data transmission channel for the specific service based on the negotiated parameters. The isochronous data transmission channel created for the specific service can be used to transmit audio data of that audio service.
  • the specific audio service may be referred to as a first audio service
  • the negotiated parameter may be referred to as a first parameter.
  • the streaming data module can be used to output audio data of the audio service to the LE isochronous (ISO) module to transmit audio data through the isochronous data transmission channel.
  • the isochronous data transmission channel may be a CIS. A CIS can be used to transfer isochronous data between connected devices. The isochronous data transmission channel is ultimately carried on LE ISO.
  • a chip is provided.
  • the chip may include the module in the first chip and the module in the second chip described in the seventh aspect.
  • for details of each module, please refer to the seventh aspect; they are not repeated here.
  • in a ninth aspect, a communication system includes: a first audio device and a second audio device, where the first audio device may be the audio device described in the third aspect or the fifth aspect.
  • the second audio device may be the audio device described in the fourth aspect or the sixth aspect.
  • a communication system includes: a first audio device, a second audio device, and a third audio device, where the first audio device may be the audio device described in the third aspect or the fifth aspect. Both the second audio device and the third audio device may be the audio devices described in the fourth aspect or the sixth aspect.
  • a computer-readable storage medium having instructions stored on it, which when run on a computer, causes the computer to execute the audio communication method described in the first aspect above.
  • the readable storage medium stores instructions that, when executed on a computer, cause the computer to perform the audio communication method described in the second aspect.
  • a computer program product containing instructions, which when executed on a computer, causes the computer to execute the audio communication method described in the first aspect above.
  • FIG. 1 is a schematic structural diagram of a wireless audio system provided by this application.
  • FIG. 2A is a schematic diagram of the existing BR/EDR Bluetooth protocol framework;
  • FIG. 2B to FIG. 2D are schematic diagrams of the existing protocol stacks of several audio profiles;
  • FIG. 3 is a schematic diagram of a BLE-based audio protocol framework provided by this application.
  • FIG. 5 is a schematic diagram of the extended BLE transmission framework;
  • FIG. 6 is a schematic diagram of the overall flow of the audio communication method provided by this application.
  • FIG. 8 is a schematic flowchart of an audio communication method in a scenario where left and right headphones are used together provided by this application;
  • FIG. 10A is a schematic diagram of a hardware architecture of an electronic device provided by an embodiment of the present application;
  • FIG. 10B is a schematic diagram of a software architecture implemented on the electronic device shown in FIG. 10A;
  • FIG. 11 is a schematic diagram of a hardware architecture of an audio output device provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an architecture of a chipset provided by this application.
  • FIG. 13 is a schematic diagram of a chip architecture of the present application.
  • FIG. 1 shows a wireless audio system 100 provided by this application.
  • the wireless audio system 100 may include a first audio device 101, a second audio device 102 and a third audio device 103.
  • the first audio device 101 may be implemented as any of the following electronic devices: mobile phones, portable game consoles, portable media playback devices, personal computers, vehicle-mounted media playback devices, and so on.
  • the second audio device 102 and the third audio device 103 may be configured as any type of electro-acoustic transducer for converting audio data into sound, such as a speaker, in-ear headphones, headphones, and so on.
  • the physical form and size of the first audio device 101, the second audio device 102, and the third audio device 103 may also be different, which is not limited in this application.
  • the first audio device 101, the second audio device 102, and the third audio device 103 may all be configured with a wireless transceiver, and the wireless transceiver may be used to transmit and receive wireless signals.
  • the second audio device 102 and the third audio device 103 can communicate through a wireless communication connection 106 instead of a wired communication connection.
  • the first audio device 101 and the second audio device 102 can establish a wireless communication connection 104.
  • the first audio device 101 may send audio data to the second audio device 102 through the wireless communication connection 104.
  • the role of the first audio device 101 is an audio source (audio source)
  • the role of the second audio device 102 is an audio receiver (audio sink).
  • the second audio device 102 can convert the received audio data into sound, so that the user wearing the second audio device 102 can hear the sound.
  • in the transmission direction from the second audio device 102 to the first audio device 101, when the second audio device 102 is configured with a sound collection device such as a receiver/microphone, the second audio device 102 can convert the collected sound into audio data and send the audio data to the first audio device 101 through the wireless communication connection 104.
  • the role of the second audio device 102 is an audio source (audio source)
  • the role of the first audio device 101 is an audio receiver (audio sink).
  • the first audio device 101 can process the received audio data, such as sending the audio data to other electronic devices (in a voice call scenario) and storing the audio data (in a recording scenario).
  • the first audio device 101 and the second audio device 102 can also exchange playback control messages (such as previous, next), call control messages (such as answer, hang up), volume control messages (e.g. volume up, volume down), and so on, based on the wireless communication connection 104.
  • the first audio device 101 may send a playback control message and a call control message to the second audio device 102 through the wireless communication connection 104, which may implement playback control and call control on the first audio device 101 side.
  • the second audio device 102 may send a playback control message and a call control message to the first audio device 101 through the wireless communication connection 104, which may implement playback control and call control on the second audio device 102 side.
  • a wireless communication connection 105 can be established between the first audio device 101 and the third audio device 103, and audio data, playback control messages, and call control messages can be exchanged through the wireless communication connection 105.
  • the first audio device 101 can simultaneously transmit audio data to the second audio device 102 and the third audio device 103.
  • the audio data and control messages transmitted from the first audio device 101 to the second audio device 102 and to the third audio device 103 all need to achieve point-to-multipoint synchronous transmission.
  • the synchronization of the second audio device 102 and the third audio device 103 has a crucial influence on the integrity of the user's hearing experience.
  • when the second audio device 102 and the third audio device 103 are implemented as a left earphone and a right earphone, respectively, if the signals of the left and right ears are out of synchronization by about 30 microseconds, the user will perceive disturbing sound confusion.
  • the wireless audio system 100 shown in FIG. 1 may be a wireless audio system implemented based on the Bluetooth protocol. That is, the wireless communication connection (wireless communication connection 104, wireless communication connection 105, wireless communication connection 106) between the devices may use a Bluetooth communication connection. In order to support audio applications, the existing BR Bluetooth protocol provides some profiles, such as A2DP, AVRCP, HFP.
  • the existing Bluetooth protocol defines different protocol frameworks for different profiles, which are independent of each other and mutually incompatible.
  • FIG. 2A exemplarily shows the existing BR/EDR Bluetooth protocol framework.
  • the existing BR/EDR Bluetooth protocol framework may include multiple profiles. To simplify the illustration, only some audio application profiles are shown in FIG. 2A: A2DP, AVRCP, and HFP. Not limited to this, the existing BR/EDR Bluetooth protocol framework may also include other profiles, such as SPP, FTP, etc.
  • A2DP specifies the protocol stack and method for using a Bluetooth asynchronous transmission channel to transmit high-quality audio. For example, a user can use a stereo Bluetooth headset to listen to music from a music player.
  • AVRCP provides the remote control function, and generally supports remote control operations such as pause, stop, replay, and volume control. For example, a user can use a Bluetooth headset to pause playback, switch to the next song, and so on, to control the music player.
  • HFP is a voice application that provides hands-free calling.
  • FIG. 2B to FIG. 2D show the protocol stacks of A2DP, AVRCP, and HFP, respectively. Among them:
  • The A2DP protocol stack includes the following protocols and entities:
  • the audio source is a source of a digital audio stream, which is transmitted to an audio sink in the piconet.
  • An audio receiver is a receiver that receives a digital audio stream from an audio source (audio source) in the same piconet.
  • a typical device used as an audio source may be a media player device, such as an MP3 player, and a typical device used as an audio receiver may be a headset.
  • a typical device used as an audio source may be a sound collection device, such as a microphone, and a typical device used as an audio receiver may be a portable recorder.
  • LMP: link manager protocol
  • L2CAP: logical link control and adaptation protocol
  • SDP: service discovery protocol
  • the audio/video distribution transport protocol (AVDTP) includes a signaling entity for negotiating streaming parameters and a transport entity for handling the stream itself.
  • the application (Application) layer is the entity in which application service and transport service parameters are defined. This entity also adapts the audio stream data to a defined packet format, or adapts the defined packet format back to audio stream data.
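The adaptation performed by the application layer (stream data to a defined packet format and back) can be illustrated with a trivial framing sketch; the packet format here is a hypothetical placeholder, not the format defined by A2DP.

```python
def packetize(audio_bytes, max_payload):
    """Adapt audio stream data to a (hypothetical) packet format by
    splitting it into payloads no larger than max_payload bytes."""
    return [audio_bytes[i:i + max_payload]
            for i in range(0, len(audio_bytes), max_payload)]

def depacketize(packets):
    """Reverse adaptation: reassemble the audio stream from the packets."""
    return b"".join(packets)
```

A round trip through both functions returns the original stream, which is the defining property of the adaptation described above.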
  • The AVRCP protocol stack includes the following protocols and entities:
  • a controller is a device that initiates a transaction by sending a command frame to a target device.
  • Typical controllers can be personal computers, mobile phones, remote controllers, etc.
  • a target is a device that receives command frames and generates response frames accordingly.
  • Typical targets may be audio playback/recording devices, video playback/recording devices, televisions, etc.
  • Baseband
  • LMP: link management protocol
  • L2CAP: logical link control and adaptation protocol
  • SDP is a Bluetooth service discovery protocol (service discovery protocol).
  • OBEX: object exchange protocol
  • AV/C: audio/video control
  • the application (Application) layer is the AVRCP entity used to exchange the control and browsing commands defined in the protocol.
  • An audio gateway is a device used as a gateway for inputting audio and outputting audio.
  • a typical device used as an audio gateway may be a cellular phone.
  • Hands-free unit (Hands-Free unit) is a device used as a remote audio input and output mechanism of the audio gateway. The hands-free unit can provide some remote control methods.
  • a typical device used as a hands-free unit may be an on-board hands-free unit.
  • Baseband
  • LMP: link management protocol
  • L2CAP: logical link control and adaptation protocol
  • RFCOMM is a Bluetooth serial port emulation entity.
  • SDP is a Bluetooth service discovery protocol.
  • Hands-free control (Hands-Free control) is the entity responsible for the specific control signaling of the hands-free unit. The control signaling is based on AT commands.
  • the audio port emulation (audio port emulation) layer is the entity that emulates the audio port on the audio gateway (audio gateway), and the audio driver (audio driver) is the driver software in the hands-free unit.
  • A2DP, AVRCP, and HFP correspond to different protocol stacks, and different profiles use different transmission links that are not compatible with each other. That is to say, a profile is in effect a separate protocol stack of the Bluetooth protocol for a particular application scenario.
  • When the Bluetooth protocol needs to support a new application scenario, a new profile, and hence a new protocol stack, must be added under the existing Bluetooth protocol framework.
  • For example, a user wearing a Bluetooth headset turns on the microphone to speak with teammates during a game (the game produces background sound, such as sounds triggered by game skills).
  • audio transmission will need to switch from A2DP to HFP.
  • the background sound transmission during the game can be implemented based on the A2DP protocol stack, and the voice transmission to the teammates can be implemented based on the HFP protocol stack.
  • the game background sound requires higher sound quality than the voice; that is, the coding parameters (such as the compression rate) used by the two differ, and the game background sound uses a lower compression rate than the voice.
  • Because A2DP and HFP are independent of each other, switching from A2DP to HFP requires stopping the configuration related to the transmission of game background sound under A2DP, then renegotiating the audio data transmission parameters and initializing the configuration under HFP. This switching process takes a long time, resulting in a pause that the user can clearly perceive.
  • the existing BR/EDR Bluetooth protocol does not implement point-to-multipoint synchronous transmission.
  • the existing BR/EDR Bluetooth protocol defines two types of Bluetooth physical links: the asynchronous connectionless (asynchronous connectionless, ACL) link, and the synchronous connection-oriented (synchronous connection oriented, SCO) or extended SCO (extended SCO, eSCO) link.
  • the ACL link supports both symmetric connection (point-to-point) and asymmetric connection (point-to-multipoint).
  • the transmission efficiency of the ACL link is high, but the delay is uncontrollable, and the number of retransmissions is not limited. It can be mainly used to transmit data that is not sensitive to delay, such as control signaling and packet data.
  • SCO/eSCO links support symmetrical connections (point-to-point).
  • the transmission efficiency of the SCO/eSCO link is low, but the delay is controllable and the number of retransmissions is limited. It mainly transmits delay-sensitive services (such as voice).
  • the existing ACL and SCO/eSCO links in the BR/EDR Bluetooth protocol do not support isochronous data. That is to say, in a point-to-multipoint piconet, the data sent by the master device to multiple slave devices is not synchronized, and the signals of the multiple slave devices will be out of sync.
  • this application provides an audio protocol framework based on Bluetooth low energy BLE.
  • the existing BLE protocol supports a point-to-multipoint network topology.
  • the Bluetooth Special Interest Group (SIG) has proposed adding isochronous data support to BLE to allow BLE devices to transmit isochronous data.
  • isochronous data is time-bounded.
  • isochronous data refers to information in a stream, where each information entity in the stream is bounded by its time relationship to the previous and the subsequent entity.
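To make the time-bounded property concrete, here is a minimal Python sketch (purely illustrative; the latency value is invented and not taken from any Bluetooth specification) of a receiver that drops isochronous SDUs whose time bound has expired:

```python
# Illustrative sketch: isochronous data is time-bounded, so a data unit
# that misses its latency budget is dropped rather than delivered late.
# MAX_TRANSPORT_LATENCY_US is an invented value.

MAX_TRANSPORT_LATENCY_US = 20_000  # hypothetical 20 ms latency budget

def is_deliverable(generated_at_us: int, now_us: int) -> bool:
    """An isochronous data unit is only useful inside its latency budget."""
    return now_us - generated_at_us <= MAX_TRANSPORT_LATENCY_US

def receive(stream, now_us):
    """Keep only the data units that are still inside their time bound."""
    return [sdu for sdu in stream if is_deliverable(sdu["t"], now_us)]

frames = [{"t": 0}, {"t": 10_000}, {"t": 20_000}]
# At t = 35 ms, the frames generated at 0 ms and 10 ms exceed the 20 ms
# budget and are dropped; only the 20 ms frame survives.
print(receive(frames, now_us=35_000))
```

The point of the sketch is only that stale audio data loses its value, which is why isochronous transport limits retransmissions rather than retrying indefinitely.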
  • the existing BLE protocol does not define audio transmission, and the BLE profile does not include audio profiles (such as A2DP, HFP). That is to say, the audio transmission (voice-over-ble) based on Bluetooth low energy is not standardized.
  • the BLE-based audio protocol framework provided by this application will support audio transmission.
  • FIG. 3 shows a BLE-based audio protocol framework provided by this application.
  • the protocol framework may include: an LE physical layer (LE physical layer) 313, an LE link layer (LE link layer) 310, an L2CAP layer 308, and an application (application) layer.
  • the LE physical layer 313 and the LE link layer 310 may be implemented in a controller, and the L2CAP layer 308 may be implemented in a host.
  • the protocol framework may further include some functional entities implemented in the Host: multimedia audio functional entity 302, voice functional entity 303, background sound functional entity 304, content control functional entity 305, flow control functional entity 306, and streaming data functional entity 307.
  • the LE physical layer 313 may be responsible for providing a physical channel (commonly referred to as a channel) for data transmission.
  • There are several different types of channels in a communication system, such as control channels, data channels, and voice channels.
  • Bluetooth uses the 2.4 GHz industrial, scientific, and medical (ISM) frequency band.
  • the LE link layer 310 provides, on the basis of the physical layer, a logical transmission channel (also called a logical link) between two or more devices that is independent of the physical channel.
  • the LE link layer 310 can be used to control the radio frequency state of the device.
  • the device will be in one of five states: standby (waiting), advertising, scanning, initiating, and connection.
  • a broadcast device can send data without establishing a connection, and a scanning device receives the data sent by the broadcast device. The device that initiates a connection responds to the broadcast device by sending a connection request; if the broadcast device accepts the connection request, the broadcast device and the device that initiated the connection both enter the connection state.
  • the device that initiates the connection is called the master device, and the device that accepts the connection request is called the slave device.
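The state behavior described above can be sketched as a small state machine. This is an illustrative model only; the event names and the transition table are assumptions for the sketch, not part of the BLE specification:

```python
from enum import Enum, auto

class LinkLayerState(Enum):
    STANDBY = auto()      # "waiting" in the text above
    ADVERTISING = auto()
    SCANNING = auto()
    INITIATING = auto()
    CONNECTION = auto()

# Hypothetical transition table mirroring the text: an initiator sends a
# connection request to an advertising device; if the request is accepted,
# both sides enter the connection state.
TRANSITIONS = {
    (LinkLayerState.STANDBY, "start_advertising"): LinkLayerState.ADVERTISING,
    (LinkLayerState.STANDBY, "start_scanning"): LinkLayerState.SCANNING,
    (LinkLayerState.SCANNING, "initiate"): LinkLayerState.INITIATING,
    (LinkLayerState.ADVERTISING, "accept_connect_req"): LinkLayerState.CONNECTION,
    (LinkLayerState.INITIATING, "connect_req_accepted"): LinkLayerState.CONNECTION,
}

def step(state: LinkLayerState, event: str) -> LinkLayerState:
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = step(LinkLayerState.STANDBY, "start_advertising")
s = step(s, "accept_connect_req")
print(s)  # LinkLayerState.CONNECTION
```

After the final transition, the device that initiated the connection acts as the master and the device that accepted the request acts as the slave, as the text notes.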
  • the LE link layer 310 may include a LE ACL link 311 and a LE isochronous (ISO) link 312.
  • the LE ACL link 311 can be used to transmit control messages between devices, such as flow control messages, content control messages, and volume control messages.
  • the LE ISO link 312 can be used to transmit isochronous data between devices (such as streaming data itself).
  • the L2CAP layer 308 can be responsible for managing the logical links provided by the link layer. Based on L2CAP, different upper-layer applications can share the same logical link, similar to the concept of a port in TCP/IP.
  • the multimedia audio function entity 302, the voice function entity 303, and the background sound function entity 304 may be function entities defined according to service scenarios, and may be used to divide the audio applications of the application layer into services such as multimedia audio, voice, and background sound. The division is not limited to multimedia audio, voice, and background sound; audio services may also be divided into voice, music, games, video, voice assistant, e-mail alerts, alarms, reminder tones, navigation tones, and so on.
  • the content control function entity 305 may be responsible for encapsulating the content control (e.g., previous song, next song) messages of various audio services, and transmitting the encapsulated content control messages through the LE ACL link 311.
  • the stream control function entity 306 may be responsible for parameter negotiation, such as quality of service (QoS) parameter negotiation, codec parameter negotiation, and isochronous data transmission channel parameter (hereinafter, ISO parameter) negotiation, and for the establishment of isochronous data transmission channels.
  • the streaming data function entity 307 may be responsible for transmitting audio data through the isochronous data transmission channel.
  • the isochronous data transmission channel (isochronous data path) may be a connected isochronous audio stream (connected isochronous stream, CIS).
  • CIS can be used to transfer isochronous data between connected devices.
  • the isochronous data transmission channel is ultimately carried on the LE ISO link 312.
  • the flow control function entity 306 may also be used to negotiate parameters before creating an isochronous data transmission channel, and then create an isochronous data transmission channel based on the negotiated parameters.
  • the audio protocol framework shown in FIG. 3 may also include a host controller interface (Host Controller Interface, HCI).
  • Host and Controller communicate through HCI, and the communication medium is HCI commands.
  • the Host can be implemented in the application processor (AP) of the device, and the Controller can be implemented in the Bluetooth chip of the device.
  • Host and Controller can be implemented in the same processor or controller, in which case HCI is optional.
  • the BLE-based audio protocol framework provided by this application can divide the data of various audio applications (such as A2DP, HFP, etc.) into three types:
  • Content control: call control (such as answering and hanging up), playback control (such as previous song and next song), and volume control (such as increasing and decreasing the volume).
  • Flow control: signaling for stream management, such as create stream (create stream) and terminate stream (terminate stream). Streams can be used to carry audio data.
  • Streaming data: the audio data itself.
  • the content control and flow control data are transmitted through the LE ACL link 311; the streaming data is transmitted through the LE ISO link 312.
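The routing rule above can be summarized in a short sketch. The dictionary below is only an illustrative restatement of the mapping in the text; the string names are invented:

```python
# Sketch of the routing rule: content control and flow control messages
# ride the LE ACL link, streaming data rides the LE ISO link.

LE_ACL = "LE ACL link"
LE_ISO = "LE ISO link"

ROUTE = {
    "content_control": LE_ACL,   # answer/hang up, prev/next, volume up/down
    "flow_control": LE_ACL,      # create stream, terminate stream
    "streaming_data": LE_ISO,    # the audio data itself
}

def link_for(message_type: str) -> str:
    """Return which link carries a given data type."""
    return ROUTE[message_type]

print(link_for("streaming_data"))  # LE ISO link
```

Because every audio profile decomposes into these three data types, one transmission framework serves them all, which is the compatibility point the text makes next.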
  • the BLE-based audio protocol framework provided by this application is a unified audio transmission framework: no matter which audio profile is used, its data can be divided into three types: content control, flow control, and streaming data. Within this BLE framework, the content control and flow control data are transmitted on the LE ACL link, and the streaming data is transmitted on the LE ISO link.
  • the BLE-based audio protocol framework supports audio transmission and can unify service-level connections, dividing all upper-layer audio profiles into audio services such as multimedia audio, voice, and background sound according to service scenarios.
  • the flow control of each audio service (including QoS parameter negotiation, codec parameter negotiation, ISO parameter negotiation, and the establishment of isochronous data transmission channels) is handled uniformly by the flow control function entity in the protocol stack.
  • the content control of each audio service (such as call control like answering and hanging up, playback control like previous song and next song, and volume control) is handled uniformly by the content control function entity in the protocol stack.
  • Both the flow control message and the content control message are transmitted through the LE ACL link, and the streaming data is transmitted through the LE ISO link. In this way, different audio profiles can be implemented based on the same transmission framework, with better compatibility.
  • the audio protocol framework provided in this application is based on BLE, where BLE refers to an extended BLE transport architecture (transport architecture). The extension mainly adds the isochronous channel (isochronous channel) characteristics.
  • Figure 5 shows the entities of the extended BLE transport architecture. Among them, the shaded entities are the newly added logical sublayers, which jointly provide the isochronous channel characteristics. As shown in Figure 5:
  • The LE physical transport (LE physical transport) layer represents air-interface data transmission, characterized by the data packet structure, coding, modulation scheme, and the like. The LE physical transport carries all information from the upper layers.
  • The LE physical channel (LE physical channel) layer is the air-interface physical channel transmitted between Bluetooth devices. It is the physical-layer bearer channel identified in the time domain, frequency domain, and space domain, and includes the concepts of frequency hopping, time slots, events, and access codes.
  • a LE physical channel can carry different LE logical transports (LE logical transport); toward the lower layer, a LE physical channel always maps to its unique corresponding LE physical transport.
  • the LE physical channel layer can include four physical channel entities: the LE piconet physical channel (LE piconet physical channel), the LE advertising physical channel (LE advertising physical channel), the LE periodic physical channel (LE periodic physical channel), and the LE isochronous physical channel (LE isochronous physical channel). That is, the LE isochronous physical channel is added to the existing LE physical channels.
  • LE piconet physical channel can be used for communication between connected devices.
  • the communication uses frequency hopping technology.
  • the LE advertising physical channel can be used for connectionless broadcast communication between devices. Such broadcast communication can be used for device discovery, connection operations, and connectionless data transmission.
  • LE periodic physical channel can be used for periodic broadcast communication between devices.
  • the LE isochronous physical channel can be used to transmit isochronous data, and it has a one-to-one mapping relationship with the upper-layer LE isochronous physical link.
  • The LE physical link (LE physical link) layer represents the baseband connection between Bluetooth devices. It is a virtual concept; there is no corresponding field in the air-interface data packet.
  • an LE logical transport will only be mapped to an LE physical link.
  • a LE physical link can be carried through different LE physical channels, but a transmission is always mapped to an LE physical channel.
  • LE physical link is a further encapsulation of the LE physical channel.
  • the LE physical link layer can include four physical link entities: the LE active physical link (LE active physical link), the LE advertising physical link (LE advertising physical link), the LE periodic physical link (LE periodic physical link), and the LE isochronous physical link (LE isochronous physical link). The LE isochronous physical link is added on the basis of the existing LE physical links.
  • the LE isochronous physical link can be used to transmit isochronous data. It carries the upper-layer LE-BIS and LE-CIS, and has a one-to-one mapping relationship with the LE isochronous physical channel.
  • The LE logical transport (LE logical transport) layer can be responsible for flow control, the ACK/NACK acknowledgment mechanism, the retransmission mechanism, and the scheduling mechanism. This information is generally carried in the data packet header.
  • one LE logical transport can correspond to multiple LE logical links.
  • a LE logical transport maps to only one corresponding LE physical link.
  • the LE logical transport layer may include the following logical transport entities: LE-ACL, ADVB, PADVB, LE-BIS, LE-CIS. That is, LE-BIS and LE-CIS are added to the existing LE logical transport.
  • LE-CIS is a point-to-point logical transport between the master and a designated slave, and each CIS supports one LE-S logical link.
  • A CIS can use a symmetric or an asymmetric rate.
  • LE-CIS is built on LE-ACL.
  • LE-BIS is a point-to-multipoint logical transport, and each BIS supports one LE-S logical link.
  • LE-BIS is built on PADVB.
  • BIS refers to broadcast isochronous stream (broadcast isochronous stream), and CIS refers to connected isochronous stream (connected isochronous stream).
  • the LE ISO link 312 in FIG. 3 may be LE CIS, and the LE ACL link 311 in FIG. 3 may be LE ACL.
  • LE logical link (LE logical link) layer can be used to support different application data transmission.
  • each LE logical link may be mapped to multiple LE logical transports, but only one LE logical transport is selected for mapping at a time.
  • the LE logical link layer may include the following logical link entities: LE-C, LE-U, ADVB-C, ADVB-U, low energy broadcast control (low energy broadcast control, LEB-C), and LE stream (low energy stream, LE-S).
  • In these entity names, "-C" denotes control and "-U" denotes user. That is, LEB-C and LE-S are added on the basis of the existing LE logical links. Among them, LEB-C is used to carry BIS control information, and LE-S is used to carry isochronous data streams.
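The relationships among the newly added entities can be tabulated as a short sketch. This only restates the mappings described above (LE-CIS built on LE-ACL, LE-BIS built on PADVB, each carrying one LE-S logical link); the dictionary layout itself is illustrative:

```python
# Illustrative table of the extended transport architecture of Figure 5,
# marking the newly added logical transports and logical links.

LOGICAL_TRANSPORTS = {
    "LE-ACL": {"new": False, "built_on": None},
    "ADVB":   {"new": False, "built_on": None},
    "PADVB":  {"new": False, "built_on": None},
    "LE-CIS": {"new": True,  "built_on": "LE-ACL"},  # point-to-point isochronous
    "LE-BIS": {"new": True,  "built_on": "PADVB"},   # point-to-multipoint isochronous
}

LOGICAL_LINKS = {
    "LE-S":  {"new": True, "carried_by": ["LE-CIS", "LE-BIS"]},  # isochronous stream
    "LEB-C": {"new": True, "carried_by": ["LE-BIS"]},            # BIS control info
}

new_entities = sorted(k for k, v in LOGICAL_TRANSPORTS.items() if v["new"])
print(new_entities)  # ['LE-BIS', 'LE-CIS']
```

Such a table makes the key point of the extension easy to see: only the isochronous entities are new; the rest of the BLE transport architecture is reused unchanged.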
  • the present application provides an audio communication method.
  • the main inventive idea may include: determining the parameters of the isochronous data transmission channel for each audio service with the audio service as the granularity.
  • the isochronous data transmission channel can be used to transmit streaming data.
  • the audio service may refer to a service or application capable of providing audio functions (such as audio playback, audio recording, etc.).
  • Audio services may involve audio-related data transmission services, such as the transmission of audio data itself, content control messages used to control the playback of audio data, and flow control messages used to create isochronous data transmission channels.
  • the audio communication method provided in this application no longer uses the profile as a granularity for parameter negotiation, but uses the audio service as a granularity for parameter negotiation.
  • the isochronous data transmission channel can then be reconfigured based on the renegotiated parameters, without switching between different profile protocol stacks, which is more efficient and avoids obvious pauses.
  • Consider, for example, switching from the A2DP music service to the HFP telephone service. A2DP and HFP correspond to different transmission frameworks.
  • A2DP streaming data (such as stereo music data) is ultimately transmitted through the ACL link, while HFP streaming data (such as voice data) is ultimately transmitted through the SCO/eSCO link. Therefore, in the existing Bluetooth protocol, this switch causes a switch of the underlying transmission framework, which is time-consuming.
  • the BLE-based audio protocol framework provided by this application is a unified audio transmission framework: in every audio service scenario, streaming data is transmitted through the LE ISO link. Switching service scenarios does not involve switching the transmission framework, which is more efficient.
  • the QoS parameters may include parameters representing transmission quality such as delay, packet loss rate, and throughput.
  • Codec parameters may include parameters that affect audio quality, such as encoding method and compression ratio.
  • ISO parameters can include the CIS ID, the number of CISes, the maximum data size transmitted from master to slave, the maximum data size transmitted from slave to master, the maximum time interval for master-to-slave data packet transmission at the link layer, the maximum time interval for slave-to-master data packet transmission at the link layer, and so on.
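The three parameter families could be grouped into one per-service parameter set, as the text suggests. The sketch below uses invented field names and values (including the codec name), purely for illustration; it is not a definitive parameter layout:

```python
from dataclasses import dataclass

# Hedged sketch: one parameter set per audio service, grouping the QoS,
# codec, and ISO parameter families named in the text. All field names
# and example values are illustrative assumptions.

@dataclass
class QosParams:
    max_latency_ms: int
    packet_loss_rate: float
    throughput_kbps: int

@dataclass
class CodecParams:
    codec: str               # codec name is illustrative only
    compression_ratio: float

@dataclass
class IsoParams:
    cis_id: int
    cis_count: int
    max_sdu_m_to_s: int      # max data size, master -> slave
    max_sdu_s_to_m: int      # max data size, slave -> master
    sdu_interval_m_to_s_us: int  # max link-layer interval, master -> slave
    sdu_interval_s_to_m_us: int  # max link-layer interval, slave -> master

@dataclass
class AudioServiceParams:
    service: str             # e.g. "music", "telephone", "game"
    qos: QosParams
    codec: CodecParams
    iso: IsoParams

music = AudioServiceParams(
    "music",
    QosParams(max_latency_ms=100, packet_loss_rate=0.01, throughput_kbps=328),
    CodecParams(codec="example-codec", compression_ratio=0.1),
    IsoParams(cis_id=1, cis_count=1, max_sdu_m_to_s=155, max_sdu_s_to_m=0,
              sdu_interval_m_to_s_us=10_000, sdu_interval_s_to_m_us=10_000),
)
print(music.service)  # music
```

Treating the whole set as one unit matches the text's point that an audio service corresponds to a set of parameters negotiated together.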
  • FIG. 6 shows the overall flow of the audio communication method provided by the present application.
  • a BLE connection is established between the first audio device (such as a mobile phone and a media player) and the second audio device (such as a headset). Expand below:
  • an ACL link is established between the first audio device (such as a mobile phone and a media player) and the second audio device (such as a headset).
  • the ACL link can be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels in the flow control process (S602-S604).
  • the ACL link can also be used to carry content control messages, such as call control (such as answering and hanging up) messages, playback control (such as previous song and next song) messages, and volume control (such as increasing and decreasing the volume) messages during the content control process (S605-S607).
  • the first audio device and the second audio device may perform parameter negotiation through the ACL link.
  • the parameter negotiation can be conducted with the audio service as the granularity.
  • Different audio services require parameter negotiation, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation.
  • An audio service can correspond to a set of parameters, and a set of parameters can include one or more of the following: QoS parameters, codec parameters, and ISO parameters.
  • the specific process of the parameter negotiation may include:
  • Step a: The first audio device may send a parameter negotiation message to the second audio device through the ACL link, and the message may carry a set of parameters corresponding to the specific audio service.
  • This set of parameters may be obtained by querying from a database according to the specific audio service, and the database may store parameters corresponding to various audio services.
  • Step b: The second audio device receives the parameter negotiation message sent by the first audio device through the ACL link. If the second audio device agrees with the parameters carried in the message, it returns a confirmation message to the first audio device; if the second audio device disagrees or only partially agrees with the parameters carried in the parameter negotiation message, it returns a continue-negotiation message to the first audio device so that parameter negotiation continues.
  • the parameters corresponding to the audio service may be designed by comprehensively considering various audio switching situations or mixing situations involved in the audio service.
  • These parameters can be applied to all the situations involved in the service. For example, in the game service, the game background sound and the microphone voice may be switched or superimposed (when the microphone is turned on during the game). The codec parameters and QoS parameters of the game background sound and the microphone voice may differ, so parameters suitable for both situations can be designed for the game service. In this way, when the user turns on the microphone to speak during the game, the listening experience is not affected.
  • the specific audio service may be phone, game, voice assistant, music and so on.
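Steps a and b above, together with the per-service parameter database mentioned in the text, can be sketched as follows. The parameter values, service names, and function names are invented for illustration:

```python
# Minimal sketch of the negotiation steps: the first device looks up the
# parameter set for the specific audio service in a database and proposes
# it; the second device either confirms or asks to continue negotiating.

SERVICE_PARAMS = {  # hypothetical database keyed by audio service
    "music":     {"qos": "high_throughput", "codec": "stereo", "iso": {"cis_id": 1}},
    "telephone": {"qos": "low_latency",     "codec": "mono",   "iso": {"cis_id": 2}},
}

def negotiate(service, peer_accepts):
    """Run one round of parameter negotiation for a given audio service.

    peer_accepts models the second device's decision in step b: it receives
    the proposed parameter set and returns True (confirm) or False
    (continue negotiation).
    """
    proposal = SERVICE_PARAMS[service]   # step a: query database, propose
    if peer_accepts(proposal):           # step b: confirmation message
        return ("confirmed", proposal)
    return ("continue_negotiation", None)

status, params = negotiate("music", peer_accepts=lambda p: True)
print(status)  # confirmed
```

Note that the granularity is the audio service, not the profile: the lookup key is "music" or "telephone", which is the central idea of the method.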
  • the first audio device may perform parameter configuration to the second audio device through the ACL link.
  • Here, parameter configuration means configuring the second audio device with the parameters determined through negotiation.
  • the first audio device may send a parameter configuration message to the second audio device through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both the first audio device and the second audio device.
  • the second audio device can perform the reception or transmission of the streaming data according to the parameters that have been negotiated and determined by both parties.
  • the isochronous data transmission channel can be used to transmit streaming data (ie, audio data).
  • the subsequent content will expand and explain the specific process of establishing an isochronous data transmission channel between the first audio device and the second audio device, which will not be repeated here.
  • the first audio device may be an audio source (audio source), and the second audio device may be an audio receiver (audio sink). That is, the audio source initiates parameter negotiation and isochronous data channel creation.
  • alternatively, the first audio device may be an audio receiver (audio sink), and the second audio device may be an audio source (audio source). That is, the audio receiver initiates parameter negotiation and isochronous data channel creation.
  • the content control message can be exchanged between the first audio device and the second audio device based on the ACL link.
  • the first audio device and the second audio device may exchange call control messages based on the ACL link, such as answering and hanging up control messages.
  • the first audio device (such as a mobile phone) can send a call control (such as answering or hanging up) message to the second audio device (such as a headset) through the ACL link, which implements call control on the first audio device (such as a mobile phone) side.
  • a typical application scenario corresponding to this method may be: when using a Bluetooth headset to make a call, the user clicks the hang up button on the mobile phone to hang up the phone.
  • the second audio device (such as a headset) can send a call control (such as answering or hanging up) message to the first audio device (such as a mobile phone) through the ACL link, which implements call control on the second audio device (such as a headset) side.
  • a typical application scenario corresponding to this method may be: when using a Bluetooth headset to make a call, the user presses the hangup button on the Bluetooth headset to hang up the phone. Not limited to pressing the hang up button, the user can also hang up the phone on the Bluetooth headset through other operations, such as tapping the headset.
  • the first audio device and the second audio device may exchange playback control messages based on the ACL link, such as previous song and next song messages.
  • the first audio device (such as a mobile phone) can send a playback control (such as previous song or next song) message to the second audio device (such as a headset) through the ACL link, which implements playback control on the first audio device (such as a mobile phone) side.
  • a typical application scenario corresponding to this method may be: when listening to music using a Bluetooth headset, the user clicks the previous/next button on the mobile phone to switch songs.
  • the second audio device (such as a headset) can send a playback control (such as previous song or next song) message to the first audio device (such as a mobile phone) through the ACL link, which implements playback control on the second audio device (such as a headset) side.
  • a typical application scenario corresponding to this method may be: when listening to music using a Bluetooth headset, the user presses the previous/next button on the Bluetooth headset to switch songs.
  • the first audio device and the second audio device may exchange volume control messages based on the ACL link, such as volume increase and volume decrease messages.
  • the first audio device (such as a mobile phone) can send a volume control (such as volume increase or volume decrease) message to the second audio device (such as a headset) through the ACL link, which implements volume control on the first audio device (such as a mobile phone) side.
  • the typical application scenario corresponding to this method may be: when using a Bluetooth headset to listen to music, the user clicks the volume adjustment button on the mobile phone to adjust the volume.
  • the second audio device (such as a headset) can send a volume control (such as volume increase or volume decrease) message to the first audio device (such as a mobile phone) through the ACL link, which implements volume control on the second audio device (such as a headset) side.
  • a typical application scenario corresponding to this method may be: when using a Bluetooth headset to listen to music, the user presses the volume adjustment button on the Bluetooth headset to adjust the volume.
  • the first audio device may be an audio source (audio source), and the second audio device may be an audio receiver (audio sink). That is, content control can be performed on the audio source side.
  • alternatively, the first audio device may be an audio receiver (audio sink), and the second audio device may be an audio source (audio source). That is, content control can be performed on the audio receiver side.
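Because the content control function entity unifies call, playback, and volume control, a single message format can serve all three, whichever side issues the command. The sketch below is a hypothetical encoding; the opcode values are invented and not from any specification:

```python
# Illustrative sketch of a unified content control message carried on the
# ACL link. One encoder covers call control, playback control, and volume
# control; the opcode byte values are assumptions for the sketch.

OPCODES = {
    "answer": 0x01, "hang_up": 0x02,         # call control
    "previous": 0x10, "next": 0x11,          # playback control
    "volume_up": 0x20, "volume_down": 0x21,  # volume control
}

def encode_content_control(command: str) -> bytes:
    """Encode a content control command into a one-byte payload."""
    return bytes([OPCODES[command]])

def decode_content_control(payload: bytes) -> str:
    """Decode a one-byte payload back into the command name."""
    reverse = {v: k for k, v in OPCODES.items()}
    return reverse[payload[0]]

msg = encode_content_control("hang_up")
print(decode_content_control(msg))  # hang_up
```

Since both source and sink use the same encoder, the same message works whether the user taps the phone or the headset, matching the bidirectional scenarios above.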
  • the first audio device and the second audio device may exchange streaming data based on the created isochronous data transmission channel.
  • the stream data is the stream data of the aforementioned specific audio service.
  • the created isochronous data transmission channel corresponds to the aforementioned specific audio service.
  • the first audio device (such as a mobile phone) can send streaming data to the second audio device (such as a headset) through an isochronous data transmission channel.
  • the role of the first audio device (such as a mobile phone) is an audio source (audio source)
  • the role of the second audio device (such as a headset) is an audio receiver (audio sink).
  • the second audio device (such as a headset) can convert the received audio data into sound.
  • a typical application scenario corresponding to this method may be: the user wears a Bluetooth headset to listen to music played on the mobile phone.
  • the second audio device (such as a headset) can send streaming data to the first audio device (such as a mobile phone) through an isochronous data transmission channel.
  • the role of the second audio device (such as a headset) is an audio source (audio source)
  • the role of the first audio device (such as a mobile phone) is an audio receiver (audio sink).
  • the first audio device (such as a mobile phone) can process the received audio data, such as converting the audio data into sound, sending the audio data to other electronic devices (in a voice call scenario), or storing the audio data (in a recording scenario).
  • a typical application scenario corresponding to this method may be: a user wears a Bluetooth headset (equipped with a sound collection device such as a receiver/microphone) to make a call, and at this time, the Bluetooth headset collects the voice of the user's speech and converts it into audio data for transmission to the mobile phone.
  • This application does not limit the execution order of the content control process and the streaming data transmission process.
  • the streaming data transmission process may be executed before the content control process, and the two processes may also be executed at the same time.
  • the first audio device and the second audio device in the method shown in FIG. 6 may implement the BLE-based audio protocol framework shown in FIG. 3.
  • the flow control process (S602-S604) in FIG. 6 may be performed by the flow control function entity 306 in FIG. 3; the content control process (S605-S607) in FIG. 6 may be performed by the content control function entity 305 in FIG. 3.
  • the ACL link mentioned in the method in FIG. 6 may be LE ACL 311 in FIG. 3, and the isochronous data transmission channel mentioned in the method in FIG. 6 may be LE ISO 312 in FIG. 3.
  • the first audio device and the second audio device may renegotiate parameters to negotiate and determine the new audio service (such as Telephone service) corresponding to the new parameters, and then create a new isochronous data transmission channel based on the new parameters.
  • the new isochronous data transmission channel can be used to transmit streaming data of the new audio service (such as telephone service).
  • the isochronous data transmission channels of the various services are all based on LE. In this way, switching between service scenarios does not involve switching the transmission framework, which is more efficient and avoids an obvious pause.
  • the isochronous data transmission channel corresponding to the old audio service can also be reconfigured using the new parameters corresponding to the new audio service (such as the telephone service), without needing to recreate an isochronous data transmission channel based on the new parameters. In this way, efficiency can be further improved.
  • the audio communication method provided in this application uses the audio service as the granularity for parameter negotiation and the establishment of an isochronous data transmission channel.
  • the flow control messages and content control messages of each audio service are transmitted through the LE ACL link, and the stream data is transmitted through the LE ISO link, unifying the transmission framework of each service.
  • the audio communication method provided in this application can be applied to more audio services and has better compatibility.
  • the isochronous data transmission channel can be configured based on the renegotiated parameters, without the need to switch between different profile protocol stacks and without switching the transmission framework, which is more efficient and avoids obvious pauses.
  • the creation process of the isochronous data transmission channel mentioned in the method flow shown in FIG. 6 is described below.
  • FIG. 7 shows the creation process of the isochronous data transmission channel.
  • the isochronous data transmission channel is based on a connected isochronous data channel, that is, the first audio device and the second audio device are already in a connection (Connection) state.
  • Both the first audio device and the second audio device have a host Host and a link layer LL (in a controller), and the Host and LL communicate through HCI.
  • the process may include:
  • Host A (Host of the first audio device) sets related parameters of the connected isochronous group (CIG) based on the HCI instruction.
  • the CIG-related parameters may include previously determined parameters (QoS parameters, codec parameters, ISO parameters), which are used to create isochronous data transmission channels.
  • Host A can send the HCI command "LE Set CIG parameters" to LL (the first audio device's LL) through HCI.
  • LL A can return the response message "Command Complete".
  • Host A initiates the creation of CIS through the HCI instruction.
  • Host A can send the HCI command "LE Create CIS" to LL (the LL of the first audio device) through HCI.
  • LL A can return the response message "HCI Command Status".
  • LL A may request LL B (the LL of the second audio device) to create a CIS stream through the air interface request message LL_CIS_REQ.
  • LL B notifies Host B (the Host of the second audio device) through an HCI instruction, and Host B agrees to the CIS establishment initiated by the first audio device.
  • LL B responds to LL A through the air interface response message LL_CIS_RSP, agreeing to the CIS establishment.
  • LL A notifies LL B through the air interface notification message LL_CIS_IND that the establishment is complete.
  • LL B notifies Host B that the CIS establishment is complete.
  • LL A notifies Host A through an HCI instruction that the CIS establishment is complete.
  • the CIS establishment between the first audio device and the second audio device is completed. Based on the established CIS, the first audio device and the second audio device can create an isochronous data transmission channel.
  • CIS is a connection-based flow that can be used to carry isochronous data.
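  • The CIS creation handshake described above can be sketched as a toy message trace. This Python sketch only records the order of HCI and air-interface messages; the message names follow the text ("LE Set CIG Parameters", "LE Create CIS", LL_CIS_REQ/RSP/IND), while the function and tuple layout are illustrative assumptions.

```python
# Illustrative trace of the CIS establishment flow between device A
# (Host A / LL A) and device B (Host B / LL B). Not a real HCI driver.

def create_cis():
    trace = []
    # Host A -> LL A over HCI: set CIG parameters (QoS/codec/ISO parameters)
    trace.append(("HCI", "HostA->LLA", "LE Set CIG Parameters"))
    trace.append(("HCI", "LLA->HostA", "Command Complete"))
    # Host A -> LL A over HCI: initiate CIS creation
    trace.append(("HCI", "HostA->LLA", "LE Create CIS"))
    trace.append(("HCI", "LLA->HostA", "HCI Command Status"))
    # Air interface handshake between the two link layers
    trace.append(("AIR", "LLA->LLB", "LL_CIS_REQ"))
    trace.append(("HCI", "LLB->HostB", "CIS request"))  # Host B agrees
    trace.append(("AIR", "LLB->LLA", "LL_CIS_RSP"))
    trace.append(("AIR", "LLA->LLB", "LL_CIS_IND"))
    # Both sides report completion to their hosts
    trace.append(("HCI", "LLB->HostB", "CIS established"))
    trace.append(("HCI", "LLA->HostA", "CIS established"))
    return trace
```

Note the symmetry: HCI messages stay local to each device, while only the three LL_CIS_* messages cross the air interface.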
  • the creation time of the isochronous data transmission channel may include multiple options.
  • an isochronous data transmission channel can be created when the audio service arrives. For example, when the user opens the game application (the game background sound starts to play simultaneously), the application layer of the mobile phone will send a game background sound service creation notification to the Host, and according to the notification, the mobile phone will initiate the process shown in FIG. 7.
  • a default isochronous data transmission channel may be established first, and the default isochronous data transmission channel may be created based on default CIG parameters. In this way, when the audio service arrives, the default isochronous data transmission channel can be used directly to carry streaming data, and the response speed is faster.
  • multiple virtual isochronous data transmission channels may be established first.
  • the multiple virtual isochronous data transmission channels may correspond to multiple sets of different CIG parameters, and may be applicable to multiple audio services.
  • a virtual isochronous data transmission channel refers to an isochronous data transmission channel where no data interaction occurs on the air interface. In this way, when an audio service arrives, a virtual isochronous data transmission channel corresponding to the audio service may be selected, and a handshake is triggered between the first audio device and the second audio device and communication is started.
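  • The three channel-creation timing options above (on demand, pre-created default, pre-negotiated virtual pool) can be sketched as a selection function. This Python sketch is illustrative only; the strategy names and dictionary shapes are assumptions, not part of the patent.

```python
# Illustrative sketch of the three creation-timing strategies for the
# isochronous data transmission channel.

def get_channel(strategy, service, default_channel=None, virtual_pool=None):
    if strategy == "on_demand":
        # Option 1: create the channel only when the audio service arrives.
        return {"service": service, "state": "created_now"}
    if strategy == "default":
        # Option 2: reuse the pre-created default channel (default CIG
        # parameters) for a faster response when the service arrives.
        return default_channel
    if strategy == "virtual":
        # Option 3: pick the pre-negotiated virtual channel matching this
        # service; the air-interface handshake is only triggered now.
        return virtual_pool[service]
    raise ValueError(f"unknown strategy: {strategy}")
```

The trade-off the text implies: on-demand creation costs latency at service start, while the default and virtual options shift that cost to setup time.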
  • the method flow shown in FIG. 6 describes a connection-based point-to-point audio communication method formed by the first audio device and the second audio device.
  • the first audio device may be the first audio device 101 in the wireless audio system 100 shown in FIG. 1
  • the second audio device may be the second audio device 102 in the wireless audio system 100 shown in FIG. 1.
  • the first audio device 101 and the third audio device 103 in the wireless audio system 100 may also use the audio communication method shown in FIG. 6 for communication.
  • the first audio device 101 can communicate with both the second audio device 102 and the third audio device 103.
  • the first audio device 101 may be implemented as a mobile phone, and the second audio device 102 and the third audio device 103 may be implemented as left and right headphones, respectively.
  • This situation corresponds to a typical application scenario: the left earphone and the right earphone are used together.
  • This typical application scenario can be referred to as a “binaural use together” scenario.
  • FIG. 8 shows the audio communication method in the “binaural use together” scenario, which is expanded below:
  • a BLE connection is established between the left earphone and the mobile phone.
  • the BLE connection is established between the right earphone and the mobile phone.
  • This application does not limit the execution order of the above S801-S803, and the order between them may be changed.
  • the BLE connection establishment process will be described in the following content, and will not be described here.
  • the mobile phone establishes ACL links with the left earphone and the right earphone respectively.
  • the establishment of the ACL link is the responsibility of the link layer LL.
  • An ACL link can be established between the LL of the mobile phone and the LL of the left earphone, and an ACL link can be established between the LL of the mobile phone and the LL of the right earphone.
  • the ACL link can be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels in the flow control process (S805-S813).
  • the ACL link can also be used to carry content control messages, such as call control (such as answering, hanging up, etc.) messages during the content control process (S814-S819), playback control (such as previous, next, etc.) messages, and volume control (such as volume increase, volume decrease, etc.) messages.
  • the mobile phone can determine the parameters corresponding to the audio service (QoS parameters, codec parameters, ISO parameters, etc.).
  • the host of the mobile phone first receives an audio service establishment notification from the application layer, and then determines the parameters corresponding to the audio service.
  • the audio service establishment notification may be generated when the mobile phone detects that the user opens an audio-related application (such as a game).
  • the parameter corresponding to the audio service may be obtained by the mobile phone querying from a database according to the service type of the audio service, and the database may store parameters corresponding to various audio services.
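  • The database lookup described above (service type → parameter set) can be sketched as a simple mapping. This Python sketch is illustrative; all parameter names and values below are made-up placeholders, not values from the patent or any Bluetooth specification.

```python
# Illustrative sketch: the phone queries a local database mapping each
# audio service type to its QoS, codec, and ISO parameters.

SERVICE_PARAMS = {
    "music":     {"qos": {"latency_ms": 100, "retransmissions": 4},
                  "codec": "codec_a", "iso": {"interval_ms": 10}},
    "telephone": {"qos": {"latency_ms": 20, "retransmissions": 2},
                  "codec": "codec_b", "iso": {"interval_ms": 7.5}},
    "game":      {"qos": {"latency_ms": 30, "retransmissions": 1},
                  "codec": "codec_a", "iso": {"interval_ms": 10}},
}

def params_for(service_type: str) -> dict:
    # The looked-up parameters are then sent to the LL over HCI and
    # negotiated with the peer device over the ACL link.
    return SERVICE_PARAMS[service_type]
```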
  • the host of the mobile phone sends the parameter corresponding to the audio service to the LL of the mobile phone through HCI.
  • the LL of the mobile phone and the LL of the left earphone can perform parameter negotiation through the established ACL link.
  • For the specific process of parameter negotiation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here.
  • the mobile phone can perform parameter configuration to the left earphone through the established ACL link.
  • the parameter configuration refers to configuring the left earphone with the negotiated parameters.
  • the LL of the mobile phone may send a parameter configuration message to the LL of the left earphone through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both parties.
  • the left earphone can perform the reception or transmission of the streaming data according to the parameters that have been negotiated by both parties.
  • an isochronous data transmission channel can be established between the mobile phone and the left earphone.
  • For the creation process of the isochronous data transmission channel, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the LL of the mobile phone and the LL of the right earphone can perform parameter negotiation through the established ACL link.
  • For the specific process of parameter negotiation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here.
  • the mobile phone can perform parameter configuration to the right earphone through the established ACL link.
  • the parameter configuration refers to configuring the right earphone with the negotiated parameters.
  • the LL of the mobile phone may send a parameter configuration message to the LL of the right earphone through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both parties.
  • the right earphone can perform the reception or transmission of the streaming data according to the parameters that have been negotiated by both parties.
  • an isochronous data transmission channel can be established between the mobile phone and the right headset.
  • For the creation process of the isochronous data transmission channel, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the parameters may be determined with both ears as a unit, and then negotiated and configured for each earphone one by one.
  • S808-S810 describe the process of parameter negotiation, configuration and creation of isochronous data transmission channel for the left earphone by the mobile phone
  • S811-S813 describe the process of parameter negotiation, configuration and creation of isochronous data transmission channel for the right earphone by the mobile phone. This application does not limit the execution order of the two processes, and the two processes can be performed simultaneously.
  • the content control message can be exchanged between the mobile phone and the left earphone based on the ACL link.
  • For the specific implementation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the content control message can be exchanged between the mobile phone and the right earphone based on the ACL link.
  • For the specific implementation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described herein again.
  • the transmission of the content control messages from the mobile phone to the left earphone and the right earphone needs to be synchronized to realize synchronous control of the left earphone and the right earphone, so as to avoid confusing the user.
  • the left and right headsets can take effect after the content control message is received synchronously.
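  • One way the synchronization requirement above could be realized is sketched below: the phone stamps each content control message with a common activation point so both earphones take effect together. The text only requires synchronous effect; the timestamp mechanism, function name, and lead time here are illustrative assumptions.

```python
# Illustrative sketch: stamp a content control message with a shared
# "apply at" time so the left and right earphones act in lockstep.

def broadcast_control(message: dict, now_us: int, lead_us: int = 5000):
    # Common activation time, far enough ahead that both ACL deliveries
    # complete before it (lead_us is an assumed margin, not from the text).
    apply_at = now_us + lead_us
    stamped = dict(message, apply_at_us=apply_at)
    # The same stamped message is sent over each earphone's ACL link.
    return {"left": stamped, "right": stamped}
```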
  • the mobile phone and the left earphone can exchange streaming data based on the created isochronous data transmission channel.
  • the stream data is the stream data of the aforementioned audio service.
  • the mobile phone and the right earphone can exchange streaming data based on the created isochronous data transmission channel.
  • the stream data is the stream data of the aforementioned audio service.
  • the audio communication method shown in FIG. 8 can be applied to the “binaural use together” scenario, and the audio communication method between the mobile phone and a single ear (the left earphone or the right earphone) can refer to the method shown in FIG. 6; it is applicable to more audio services and has better compatibility.
  • when the service scenario is switched, it is sufficient to configure the isochronous data transmission channel between the mobile phone and the headset based on the renegotiated parameters. There is no need to switch between different profile protocol stacks, and there is no need to switch the transmission framework, which is more efficient and avoids an obvious pause.
  • the BLE connection establishment process may include:
  • the Host of the left earphone initiates the establishment of the BLE connection through the HCI instruction. Specifically, the host of the left earphone can send the HCI command "LE create connection" to the LL of the left earphone through HCI. Correspondingly, the LL of the left earphone can return the response message "HCI Command Status".
  • the right earphone sends a broadcast.
  • the left earphone initiates the connection to the right earphone. Specifically, the LL of the left earphone sends a connection request to the LL of the right earphone.
  • after receiving the connection request, the LL of the right earphone notifies the Host of the right earphone through an HCI instruction that the establishment of the BLE connection is complete.
  • the process of establishing connection in BLE described in S902-S907 is: the right earphone sends a broadcast, and the left earphone initiates a connection to the right earphone.
  • the left earphone can also send a broadcast, and the right earphone can initiate a connection to the left earphone.
  • the host of the mobile phone initiates the establishment of the BLE connection through the HCI instruction. Specifically, the host of the mobile phone can send the HCI command "LE create connection" to the LL of the mobile phone through the HCI. Correspondingly, the LL of the mobile phone can return the response message "HCI Command Status".
  • the left earphone sends a broadcast.
  • the mobile phone initiates a connection to the left earphone. Specifically, the LL of the mobile phone sends a connection request to the LL of the left earphone.
  • after receiving the connection request, the LL of the left earphone notifies the Host of the left earphone through an HCI instruction that the establishment of the BLE connection is complete.
  • after sending the connection request, the LL of the mobile phone notifies the Host of the mobile phone through an HCI instruction that the establishment of the BLE connection is complete.
  • the process of BLE connection establishment described in S909-S914 is: the left earphone sends a broadcast, and the mobile phone initiates the connection to the left earphone.
  • the mobile phone can also send a broadcast, and the left earphone initiates a connection to the mobile phone.
  • the host of the mobile phone initiates the establishment of the BLE connection through the HCI instruction. Specifically, the host of the mobile phone can send the HCI command "LE create connection" to the LL of the mobile phone through the HCI. Correspondingly, the LL of the mobile phone can return the response message "HCI Command Status".
  • the right earphone sends a broadcast.
  • the mobile phone initiates a connection to the right earphone. Specifically, the LL of the mobile phone sends a connection request to the LL of the right earphone.
  • after receiving the connection request, the LL of the right earphone notifies the Host of the right earphone through an HCI instruction that the establishment of the BLE connection is complete.
  • after sending the connection request, the LL of the mobile phone notifies the Host of the mobile phone through an HCI instruction that the establishment of the BLE connection is complete.
  • the process of BLE connection establishment described in S916-S921 is: the right earphone sends a broadcast, and the mobile phone initiates the connection to the right earphone.
  • the mobile phone can also send a broadcast, and the right headset can initiate a connection to the mobile phone.
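  • The three connection flows above (left earphone to right earphone, phone to left earphone, phone to right earphone) share one pattern: one side advertises (sends a broadcast) and the other initiates. This can be sketched as a single trace generator. The HCI message names ("LE Create Connection", "HCI Command Status") follow the text; the function, the "CONNECT_REQ" label, and the tuple layout are illustrative assumptions.

```python
# Illustrative trace of the shared BLE connection-establishment pattern:
# the advertiser broadcasts, the initiator connects.

def ble_connect(initiator: str, advertiser: str):
    return [
        ("HCI", f"{initiator}.Host->{initiator}.LL", "LE Create Connection"),
        ("HCI", f"{initiator}.LL->{initiator}.Host", "HCI Command Status"),
        ("AIR", f"{advertiser}.LL", "advertising"),
        ("AIR", f"{initiator}.LL->{advertiser}.LL", "CONNECT_REQ"),
        # Each LL then reports completion to its own Host over HCI.
        ("HCI", f"{advertiser}.LL->{advertiser}.Host", "connection complete"),
        ("HCI", f"{initiator}.LL->{initiator}.Host", "connection complete"),
    ]

# The roles may also be reversed, as the text notes (e.g. the phone
# advertises and the earphone initiates).
left_right = ble_connect("left_earphone", "right_earphone")
phone_left = ble_connect("phone", "left_earphone")
```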
  • the electronic device 200 may be implemented as the first audio device mentioned in the above embodiment, and may be the first audio device 101 in the wireless audio system 100 shown in FIG. 1.
  • the electronic device 200 can generally be used as an audio source, such as a mobile phone or a tablet, and can transmit audio data to other audio receiving devices (such as headphones or speakers), so that the other audio receiving devices can convert the audio data into sound.
  • the electronic device 200 can also be used as an audio receiver, receiving audio data transmitted by other audio source devices (such as a headset with a microphone), for example audio data converted from the user's voice collected by the headset.
  • FIG. 10A shows a schematic structural diagram of the electronic device 200.
  • the electronic device 200 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 200.
  • the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the electronic device 200 may also include one or more processors 110.
  • the controller may be the nerve center and command center of the electronic device 200.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of fetching instructions and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory may store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. This avoids repeated access and reduces the waiting time of the processor 110, thereby improving the efficiency of the electronic device 200.
  • the processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may respectively couple the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 200.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, to realize the function of answering the phone call through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to realize the function of answering the phone call through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193.
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI) and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 200.
  • the processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the electronic device 200.
  • the GPIO interface can be configured via software.
  • the GPIO interface can be configured as a control signal or a data signal.
  • the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and so on.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 200, and can also be used to transfer data between the electronic device 200 and peripheral devices. It can also be used to connect headphones and play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 200.
  • the electronic device 200 may also use different interface connection methods in the foregoing embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 200. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, internal memory 121, external memory, display screen 194, camera 193, wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 200 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 200.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1 and filter, amplify, etc. the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into an electromagnetic wave for radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 200, including wireless local area networks (wireless local area networks, WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (bluetooth, BT), a global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), and infrared (infrared, IR) technology.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters electromagnetic wave signals, and transmits the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic wave radiation through the antenna 2.
  • the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
  • the antenna 1 of the electronic device 200 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 200 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include the global system for mobile communications (global system for mobile communications, GSM), general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time-division code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite-based augmentation system (satellite based augmentation systems, SBAS).
  • the electronic device 200 can realize a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light emitting diode (quantum dot light emitting diodes, QLED), etc.
  • the electronic device 200 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 200 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP is used to process the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • the ISP can also optimize the noise, brightness, and skin color of the image, and can also optimize the exposure, color temperature, and other parameters of the shooting scene.
  • the ISP may be set in the camera 193.
  • the camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
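The YUV-to-RGB step above is a fixed linear transform; a minimal sketch in Python, assuming full-range BT.601 coefficients (the text does not specify which conversion matrix the device actually uses):

```python
def yuv_to_rgb(y: int, u: int, v: int) -> tuple:
    """Convert one full-range BT.601 YUV sample to an (R, G, B) triple.

    This is one common instance of the DSP's YUV-to-RGB conversion,
    not necessarily the exact matrix implemented on the device.
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel into the valid 8-bit range.
    clamp = lambda x: max(0, min(255, round(x)))
    return clamp(r), clamp(g), clamp(b)
```

For a neutral gray sample (U = V = 128) the chroma terms vanish and the output equals the luma value on all three channels.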
  • the electronic device 200 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 200 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the energy of the frequency point.
  • Video codec is used to compress or decompress digital video.
  • the electronic device 200 may support one or more video codecs. In this way, the electronic device 200 can play or record videos in various encoding formats, for example: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent recognition of the electronic device 200, such as image recognition, face recognition, voice recognition, and text understanding.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, photos, videos and other data in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs including instructions.
  • the processor 110 may execute the above-mentioned instructions stored in the internal memory 121, so that the electronic device 200 executes the data sharing method, various functional applications, and data processing provided in some embodiments of the present application.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store the operating system; the storage program area can also store one or more application programs (such as gallery, contacts, etc.) and so on.
  • the storage data area may store data (such as photos, contacts, etc.) created during use of the electronic device 200.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and so on.
  • the electronic device 200 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 200 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 200 answers a phone call or a voice message, the voice can be heard by bringing the receiver 170B close to the ear.
  • the microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
  • the user can input a sound signal into the microphone 170C by making a sound with the mouth close to the microphone 170C.
  • the electronic device 200 may be provided with at least one microphone 170C. In other embodiments, the electronic device 200 may be provided with two microphones 170C. In addition to collecting sound signals, it may also implement a noise reduction function. In other embodiments, the electronic device 200 may also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headset interface 170D is used to connect wired headsets.
  • the earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates having a conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 200 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 200 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 200 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
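The short-message example above is a simple threshold dispatch: the same icon position maps to different instructions depending on whether the touch intensity reaches the first pressure threshold. A hedged sketch (the threshold value, intensity scale, and function name are illustrative assumptions, not values from the text):

```python
# Normalized touch intensity in [0, 1]; the actual threshold is
# device-specific and not given in the description.
FIRST_PRESSURE_THRESHOLD = 0.5

def dispatch_message_icon_touch(intensity: float) -> str:
    """Return the instruction executed for a touch on the SMS app icon.

    Below the first pressure threshold: view the short message.
    At or above the threshold: create a new short message.
    """
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"
    return "create_new_short_message"
```

A light tap (`intensity=0.2`) thus opens the message, while a firm press (`intensity=0.7`) starts composing a new one.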
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 200. In some embodiments, the angular velocity of the electronic device 200 around three axes (ie, x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor 180B detects the angle at which the electronic device 200 shakes, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to counteract the shake of the electronic device 200 through reverse movement, thereby achieving image stabilization.
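The angle-to-compensation step described above can be sketched geometrically: for a detected jitter angle, the lens displacement is roughly the focal length times the tangent of that angle. This is a textbook approximation under a thin-lens assumption, not the patent's actual stabilization algorithm:

```python
import math

def lens_compensation_mm(focal_length_mm: float, jitter_angle_deg: float) -> float:
    """Approximate lens-shift distance needed to counteract a jitter angle.

    d = f * tan(theta): a simple geometric model; real OIS controllers
    use calibrated, filtered gyro data rather than this closed form.
    """
    return focal_length_mm * math.tan(math.radians(jitter_angle_deg))
```

With no jitter the compensation is zero, and small angles yield proportionally small lens shifts.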
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 200 calculates the altitude using the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
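The pressure-to-altitude calculation can be illustrated with the standard international barometric formula; the device's actual conversion is not specified in the text, so treat this as a reference approximation:

```python
def altitude_from_pressure(pressure_pa: float,
                           sea_level_pa: float = 101325.0) -> float:
    """Estimate altitude in meters from measured air pressure.

    Uses the international barometric formula
    h = 44330 * (1 - (p / p0)^(1/5.255)),
    assuming standard sea-level pressure unless calibrated otherwise.
    """
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))
```

At exactly sea-level pressure the estimated altitude is zero; 90 kPa corresponds to roughly 1 km of elevation.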
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 200 can detect the opening and closing of a flip holster using the magnetic sensor 180D.
  • when the electronic device 200 is a clamshell device, the electronic device 200 can detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • based on the detected opening and closing state of the holster or clamshell, features such as automatic unlocking upon flipping open can be set.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 200 in various directions (generally along three axes), and can detect the magnitude and direction of gravity when the electronic device 200 is stationary. It can also be used to recognize the posture of the electronic device, and is used in applications such as landscape/portrait screen switching and pedometers.
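The landscape/portrait switching mentioned above reduces to comparing gravity components along the device axes. A minimal sketch, assuming the device's y-axis runs along the long edge and x along the short edge (axis conventions are an assumption for illustration):

```python
def screen_orientation(ax: float, ay: float) -> str:
    """Infer coarse screen orientation from the gravity vector.

    ax, ay: accelerometer readings (m/s^2) along the device's short
    and long edges while the device is roughly stationary. If gravity
    dominates along the long edge, the device is held upright.
    """
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```

Holding the device upright (gravity mostly on y) yields "portrait"; tilting it sideways flips the comparison to "landscape". Real implementations add hysteresis and tilt dead zones to avoid flickering near 45 degrees.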
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 200 can measure distance by infrared or laser. In some embodiments, in a shooting scene, the electronic device 200 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 200 emits infrared light outward through the light emitting diode.
  • the electronic device 200 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 200. When insufficient reflected light is detected, the electronic device 200 may determine that there is no object near the electronic device 200.
  • the electronic device 200 can use the proximity light sensor 180G to detect that the user holds the electronic device 200 close to the ear to talk, so as to automatically turn off the screen to save power.
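The ear-detection behavior above amounts to a threshold test on reflected infrared light, gated by whether a call is active. A hedged sketch (the function name, normalized light scale, and threshold value are illustrative assumptions):

```python
def should_turn_off_screen(in_call: bool, reflected_ir: float,
                           threshold: float = 0.6) -> bool:
    """Decide whether to switch the screen off to save power.

    in_call:      True while the user is on a voice call.
    reflected_ir: normalized reflected-IR reading from the photodiode.
    Sufficient reflected light while in a call implies the device is
    held against the ear, so the screen can be turned off.
    """
    return in_call and reflected_ir >= threshold
```

Outside a call, or with little reflected light (nothing near the sensor), the screen stays on.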
  • the proximity light sensor 180G can also be used in leather case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the electronic device 200 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 200 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 200 can use the collected fingerprint characteristics to implement fingerprint unlocking, application lock access, fingerprint photographing, fingerprint call answering, and so on.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 200 executes a temperature processing strategy using the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 200 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection.
  • in some other embodiments, when the temperature is lower than another threshold, the electronic device 200 heats the battery 142 to avoid an abnormal shutdown of the electronic device 200 caused by low temperature.
  • in some other embodiments, when the temperature is lower than still another threshold, the electronic device 200 boosts the output voltage of the battery 142 to avoid an abnormal shutdown caused by low temperature.
  • the touch sensor 180K can also be called a touch panel or a touch-sensitive surface.
  • the touch sensor 180K may be provided on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen".
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation may be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 200, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of a bone mass vibrating with the human voice.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive a blood pressure beating signal.
  • the bone conduction sensor 180M may also be provided in the earphone and combined into a bone conduction earphone.
  • the audio module 170 may parse out the voice signal based on the vibration signal of the vibrating bone block of the voice part acquired by the bone conduction sensor 180M to realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement the heart rate detection function.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key, or may be a touch key.
  • the electronic device 200 may receive key input and generate key signal input related to user settings and function control of the electronic device 200.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 can also correspond to different vibration feedback effects of the motor 191.
  • different application scenarios (for example: time reminder, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate a charging state, a power change, and may also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 200.
  • the electronic device 200 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 200 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 200 uses eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 200 and cannot be separated from the electronic device 200.
  • the electronic device 200 exemplarily shown in FIG. 10A may display various user interfaces described in the following embodiments through the display screen 194.
  • the electronic device 200 can detect a touch operation in each user interface through the touch sensor 180K, for example, a click operation in each user interface (such as a touch operation or a double-click operation on an icon), and for example, a swipe up or down in each user interface, or a gesture of drawing a circle, and so on.
  • the electronic device 200 may detect a motion gesture performed by the user holding the electronic device 200, such as shaking the electronic device, through the gyro sensor 180B, the acceleration sensor 180E, or the like.
  • the electronic device 200 can detect non-touch gesture operations through the camera 193 (eg, 3D camera, depth camera).
  • in some embodiments, the terminal application processor (AP) included in the electronic device 200 can implement the Host in the audio protocol framework shown in FIG. 3, and the Bluetooth (BT) module included in the electronic device 200 can implement the Controller in the audio protocol framework shown in FIG. 3; the two communicate through the HCI. That is, the functions of the audio protocol framework shown in FIG. 3 are distributed across two chips.
  • in some other embodiments, the terminal application processor (AP) of the electronic device 200 may implement both the Host and the Controller in the audio protocol framework shown in FIG. 3. That is, all the functions of the audio protocol framework shown in FIG. 3 are placed on one chip; in other words, the Host and the Controller are placed on the same chip. Because the Host and the Controller are on the same chip, a physical HCI does not need to exist, and the Host and the Controller interact directly through an application programming interface (API).
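The two deployment options above (Host and Controller on separate chips linked by a physical HCI, versus both on one chip with direct API calls) can be sketched as two front-ends to the same Controller. Class and method names here are illustrative assumptions, not the Bluetooth specification's actual HCI packet format:

```python
class Controller:
    """Stand-in for the link-layer Controller; echoes an event per command."""
    def handle_command(self, opcode: str) -> str:
        return f"event_for_{opcode}"

class HciHost:
    """Two-chip case: each command crosses a serialized HCI transport."""
    def __init__(self, controller: Controller):
        self.controller = controller
    def send(self, opcode: str) -> str:
        packet = opcode.encode()  # serialize for UART/USB/SDIO transport
        return self.controller.handle_command(packet.decode())

class OnChipHost:
    """Single-chip case: no physical HCI; a direct API call suffices."""
    def __init__(self, controller: Controller):
        self.controller = controller
    def send(self, opcode: str) -> str:
        return self.controller.handle_command(opcode)
```

Both hosts produce identical results for the same command; only the transport between Host and Controller differs, which is exactly the distinction the two embodiments draw.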
  • the software system of the electronic device 200 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • the embodiment of the present invention takes the Android system with a layered architecture as an example to exemplarily explain the software structure of the electronic device 200.
  • 10B is a block diagram of the software structure of the electronic device 200 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
  • the application layer may include a series of application packages.
  • the application package may include applications such as games, voice assistants, music players, video players, mailboxes, calls, navigation, and file browsers.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for the applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data, and make these data accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • for example, a display interface including an SMS notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide the communication function of the electronic device 200. For example, the management of the call state (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear after a short stay without user interaction.
  • the notification manager is used to notify the completion of downloading, message reminders, etc.
  • the notification manager can also present a notification in the status bar at the top of the system in the form of a chart or scroll-bar text, for example, a notification of an application running in the background, or present a notification on the screen in the form of a dialog window.
  • for example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core library and virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in the virtual machine.
  • the virtual machine executes the Java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports a variety of commonly used audio, video format playback and recording, and still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
  • the following describes the workflow of the software and hardware of the electronic device 200 in combination with capturing a photographing scene.
  • when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps and other information of touch operations).
  • the original input event is stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Take the case where the touch operation is a click operation and the control corresponding to the click operation is a camera application icon as an example.
  • the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
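The touch-to-camera-launch workflow above can be walked through schematically: the kernel packages the interrupt into a raw input event, the framework resolves which control the coordinates hit, and the matched application is started. All function names and the bounds data structure below are illustrative assumptions:

```python
def kernel_make_event(x: int, y: int, timestamp: float) -> dict:
    """Kernel layer: package a hardware interrupt as a raw input event
    (touch coordinates, timestamp, and other information)."""
    return {"x": x, "y": y, "timestamp": timestamp}

def framework_resolve_control(event: dict, icon_bounds: dict) -> str:
    """Framework layer: identify the control under the touch coordinates.

    icon_bounds maps a control name to its (x0, y0, x1, y1) rectangle.
    """
    for name, (x0, y0, x1, y1) in icon_bounds.items():
        if x0 <= event["x"] <= x1 and y0 <= event["y"] <= y1:
            return name
    return "none"

def launch(control: str) -> str:
    """Application layer: a hit on the camera icon starts the camera
    application, which would then open the camera driver via the kernel."""
    return "camera_started" if control == "camera_icon" else "ignored"
```

Feeding a touch inside the camera icon's rectangle through all three layers ends in the camera being started; any other touch is ignored by this sketch.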
  • the audio output device 300 may be implemented as the second audio device or the third audio device mentioned in the above embodiment, and may be the second audio device 102 or the third audio device 103 in the wireless audio system 100 shown in FIG. 1.
  • the audio output device 300 can generally be used as an audio receiving device (audio sink), such as a headset or a speaker: it can receive audio data transmitted by other audio source devices (audio source, such as mobile phones and tablets) and convert the received audio data into sound.
  • in some scenarios, the audio output device 300 can also be used as an audio source (audio source) to transmit audio data to other devices (audio sinks, such as a mobile phone), for example, audio data converted from the user's voice collected by a headset.
  • FIG. 11 exemplarily shows a schematic structural diagram of an audio output device 300 provided by the present application.
  • the audio output device 300 may include a processor 302, a memory 303, a Bluetooth communication processing module 304, a power supply 305, a wear detector 306, a microphone 307, and an electric/acoustic converter 308. These components can be connected via a bus. among them:
  • the processor 302 may be used to read and execute computer-readable instructions.
  • the processor 302 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding and issues control signals for the operations corresponding to the instructions.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logical operations, and can also perform address operations and conversions.
  • the register is mainly responsible for temporarily storing the register operands and intermediate operation results during instruction execution.
  • the hardware architecture of the processor 302 may be an application specific integrated circuit (Application Specific Integrated Circuits, ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
  • the processor 302 may be used to parse signals received by the Bluetooth communication processing module 304, such as signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the processor 302 may be used to perform corresponding processing operations according to the analysis result, such as driving the electrical/acoustic converter 308 to start or pause or stop converting audio data into sound, and so on.
  • the processor 302 may also be used to generate signals sent out by the Bluetooth communication processing module 304, such as Bluetooth broadcast signals, beacon signals, and audio data converted from the collected sound.
  • the memory 303 is coupled to the processor 302 and is used to store various software programs and/or multiple sets of instructions.
  • the memory 303 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 303 can store an operating system, such as embedded operating systems such as uCOS, VxWorks, RTLinux, and so on.
  • the memory 303 may also store a communication program that can be used to communicate with the electronic device 200, one or more servers, or additional devices.
  • the Bluetooth (BT) communication processing module 304 may receive signals transmitted by other devices (such as the electronic device 200), such as scan signals, broadcast signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the Bluetooth (BT) communication processing module 304 may also transmit signals, such as broadcast signals, scan signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the power supply 305 may be used to supply power to the processor 302, the memory 303, the Bluetooth communication processing module 304, the wear detector 306, the electrical/acoustic converter 308, and other internal components.
  • the wear detector 306 may be used to detect the wearing state of the audio output device 300, such as an unworn state or a worn state, and may even distinguish a tightly worn state.
  • the wear detector 306 may be implemented by one or more of a distance sensor, a pressure sensor, and the like.
  • the wear detector 306 can report the detected wearing state to the processor 302, so that the processor 302 can power the device on when the audio output device 300 is worn by the user and power it off when it is not worn, to save power.
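The wear-state-driven power control described above can be sketched as follows. This is an illustrative sketch only; the state names and the `PowerController` class are assumptions, not part of the application's actual implementation.

```python
# Hypothetical wear states reported by the wear detector 306.
UNWORN, WORN, WORN_TIGHT = "unworn", "worn", "worn_tight"

class PowerController:
    """Powers the device on when worn and off when unworn, to save power."""

    def __init__(self):
        self.powered_on = False

    def on_wear_state(self, state):
        # The wear detector reports its state to the processor, which
        # powers the device on or off accordingly.
        if state in (WORN, WORN_TIGHT):
            self.powered_on = True
        elif state == UNWORN:
            self.powered_on = False
        return self.powered_on
```

For example, a worn report powers the device on, and a subsequent unworn report powers it off again.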
  • the microphone 307 can be used to collect sounds, such as the voice of the user speaking, and can output the collected sounds to the electric/acoustic converter 308, so that the electric/acoustic converter 308 can convert the sound collected by the microphone 307 into audio data.
  • the electric/acoustic converter 308 can be used to convert sound into electrical signals (audio data), for example, convert the sound collected by the microphone 307 into audio data, and can transmit audio data to the processor 302. In this way, the processor 302 can trigger the Bluetooth (BT) communication processing module 304 to transmit the audio data.
  • the electrical/acoustic converter 308 may also be used to convert electrical signals (audio data) into sound, for example, audio data output by the processor 302 into sound.
  • the audio data output by the processor 302 may be received by the Bluetooth (BT) communication processing module 304.
  • the processor 302 can implement the host in the audio protocol framework shown in FIG. 3, and the Bluetooth (BT) communication processing module 304 can implement the controller in that framework, with the two communicating through the HCI. That is, the functions of the audio protocol framework shown in FIG. 3 are distributed across two chips.
  • the processor 302 may alternatively implement both the host and the controller in the audio protocol framework shown in FIG. 3. That is, all the functions of the audio protocol framework shown in FIG. 3 are placed on one chip. Because the host and controller are on the same chip, the physical HCI does not need to exist, and the host and controller interact directly through an application programming interface (API).
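The two arrangements above can be contrasted with a minimal sketch: on a single chip the host invokes the controller directly through an API, while across two chips the same command must be serialized over a physical HCI transport. The class names and the command shape are hypothetical; only the `LE_Create_CIS` opcode name echoes a real HCI command.

```python
class Controller:
    """Stand-in for the controller side; the create_cis result is illustrative."""
    def create_cis(self, cig_id, cis_id):
        return {"cig": cig_id, "cis": cis_id, "status": "ok"}

class SameChipHost:
    """Host and controller on one chip: no physical HCI, direct API calls."""
    def __init__(self, controller):
        self.controller = controller

    def create_cis(self, cig_id, cis_id):
        return self.controller.create_cis(cig_id, cis_id)

class TwoChipHost:
    """Host and controller on separate chips: commands cross a physical HCI."""
    def __init__(self, hci_send):
        self.hci_send = hci_send  # serializes the command over the HCI transport

    def create_cis(self, cig_id, cis_id):
        return self.hci_send({"opcode": "LE_Create_CIS",
                              "cig": cig_id, "cis": cis_id})
```

The host-side call site is identical in both cases; only the transport underneath differs, which is why the physical HCI can simply be dropped when both halves share a chip.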
  • the structure illustrated in FIG. 11 does not constitute a specific limitation on the audio output device 300.
  • the audio output device 300 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • FIG. 12 shows a schematic structural diagram of a chip set provided by the present application.
  • the chipset 400 may include chip 1 and chip 2.
  • chip 1 and chip 2 communicate through the interface HCI 409.
  • the chip 1 may include the following modules: a multimedia audio module 402, a voice module 403, a background sound module 404, a content control module 405, a stream control module 406, a stream data module 407, and an L2CAP module 408.
  • the chip 2 may include: an LE physical layer module 413 and an LE link layer module 410.
  • the LE physical layer module 413 can be used to provide a physical channel (commonly referred to as a channel) for data transmission.
  • generally, there are several different types of channels in a communication system, such as control channels, data channels, voice channels, and so on.
  • the LE link layer module 410 can be used to provide, on top of the physical layer, a logical transmission channel (also referred to as a logical link) between two or more devices that is independent of the physical channel.
  • the LE link layer module 410 can be used to control the radio frequency state of the device.
  • the device will be in one of five states: standby, advertising, scanning, initiating, and connected.
  • an advertising device can send data without establishing a connection, and a scanning device receives the data sent by the advertising device; a device that initiates a connection responds to the advertising device by sending a connection request, and if the advertising device accepts the connection request, the advertising device and the initiating device enter the connected state.
  • the device that initiates the connection is called the master device, and the device that accepts the connection request is called the slave device.
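The five link-layer states and the advertise/initiate handshake above can be sketched as a toy state machine. The transition rules are heavily simplified and the class is illustrative, not a specification of the actual link layer.

```python
# The five link-layer states named above.
STATES = {"standby", "advertising", "scanning", "initiating", "connected"}

class LinkLayer:
    def __init__(self):
        self.state = "standby"
        self.role = None  # becomes "master" or "slave" after connecting

    def advertise(self):
        self.state = "advertising"

    def initiate(self):
        self.state = "initiating"

def connect(advertiser, initiator, accept=True):
    """The initiator sends a connection request; if the advertiser accepts,
    both devices enter the connected state (initiator = master,
    advertiser = slave)."""
    if (advertiser.state == "advertising"
            and initiator.state == "initiating" and accept):
        advertiser.state = initiator.state = "connected"
        initiator.role, advertiser.role = "master", "slave"
        return True
    return False
```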
  • the LE link layer module 410 may include an LE ACL module 411 and an LE isochronous (ISO) module 412.
  • the LE ACL module 411 can be used to transmit control messages between devices through the LE ACL link, such as flow control messages, content control messages, and volume control messages.
  • the LE ISO module 412 can be used to transmit isochronous data (such as streaming data itself) between devices through an isochronous data transmission channel.
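The division of labor between the two modules can be summarized in a small routing sketch: control messages travel over the LE ACL link, while stream data travels over the isochronous data transmission channel. The message-kind names are assumptions for illustration.

```python
# Hypothetical message kinds carried over the LE ACL link (module 411).
CONTROL_KINDS = {"stream_control", "content_control", "volume_control"}

def route(message_kind):
    """Return the link a message kind travels on, per the module split above."""
    if message_kind in CONTROL_KINDS:
        return "LE ACL"   # handled by the LE ACL module 411
    if message_kind == "stream_data":
        return "LE ISO"   # handled by the LE ISO module 412
    raise ValueError(f"unknown message kind: {message_kind}")
```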
  • the L2CAP module 408 can be used to manage the logical links provided by the link layer. Based on L2CAP, different upper-layer applications can share the same logical link, similar to the concept of a port in TCP/IP.
  • the multimedia audio module 402, the voice module 403, and the background sound module 404 may be modules set according to service scenarios, and may be used to divide the audio applications of the application layer into multimedia audio, voice, background sound, and other audio services. The division is not limited to multimedia audio, voice, and background sound; audio services can also be divided into voice, music, games, video, voice assistant, e-mail alerts, alarms, reminder tones, navigation tones, and so on.
  • the content control module 405 can be responsible for encapsulating the content control of the various audio services.
  • the stream control module 406 can be used to negotiate parameters for a specific audio service, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation, and to create an isochronous data transmission channel for the specific service based on the negotiated parameters. The isochronous data transmission channel created for the specific service can be used to transmit the audio data of that audio service.
  • the specific audio service may be referred to as a first audio service, and the negotiated parameter may be referred to as a first parameter.
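The stream control module's flow can be sketched as: negotiate parameters for the service, then create the isochronous channel from the negotiated (first) parameter. The capability format and the negotiation policy below (first common codec, tighter latency bound) are illustrative assumptions, not the application's actual algorithm.

```python
def negotiate(source_caps, sink_caps):
    """Intersect the two sides' capabilities: pick the first codec both
    support and the stricter of the two latency bounds (illustrative policy)."""
    common = [c for c in source_caps["codecs"] if c in sink_caps["codecs"]]
    if not common:
        raise ValueError("no common codec")
    return {
        "codec": common[0],
        "max_latency_ms": min(source_caps["max_latency_ms"],
                              sink_caps["max_latency_ms"]),
    }

def create_iso_channel(service, first_parameter):
    """Create an isochronous data transmission channel for the service
    based on the negotiated first parameter."""
    return {"service": service, "params": first_parameter, "state": "created"}
```

For example, negotiating between a source offering LC3 and SBC and a sink offering only SBC yields SBC with the smaller of the two latency bounds, after which the channel for that service is created from the result.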
  • the stream data module 407 may be used to output the audio data of an audio service to the LE isochronous (ISO) module 412, so that the audio data is transmitted through the isochronous data transmission channel.
  • the isochronous data transmission channel may be CIS.
  • CIS can be used to transfer isochronous data between connected devices.
  • the isochronous data transmission channel is ultimately carried by the LE ISO module 412.
  • chip 1 may be implemented as an application processor (AP), and chip 2 may be implemented as a Bluetooth processor (or referred to as a Bluetooth module, Bluetooth chip, etc.).
  • chip 1 may be referred to as a first chip, and chip 2 may be referred to as a second chip.
  • the chipset 400 may be included in the first audio device in the foregoing method embodiment, or may be included in the first audio device and the second audio device in the foregoing method embodiment.
  • the structure illustrated in FIG. 12 does not constitute a specific limitation on the chipset 400.
  • the chipset 400 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • FIG. 13 shows a schematic structural diagram of a chip provided by the present application.
  • the chip 500 may include: a multimedia audio module 502, a voice module 503, a background sound module 504, a content control module 505, a stream control module 506, a stream data module 507, an L2CAP module 508, an LE physical layer module 513, and an LE link layer module 510.
  • in the chip architecture shown in FIG. 13, the host and controller in the audio protocol framework shown in FIG. 3 are implemented on a single chip. Since the host and controller are implemented on the same chip, no HCI is needed inside the chip.
  • in contrast, the chip architecture shown in FIG. 12 implements the host and the controller of the audio protocol framework shown in FIG. 3 on two separate chips.
  • the chip 500 may be included in the first audio device in the foregoing method embodiment, or may be included in the first audio device and the second audio device in the foregoing method embodiment.
  • the structure shown in FIG. 13 does not constitute a specific limitation on the chip 500.
  • the chip 500 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • a person of ordinary skill in the art can understand that all or part of the process in the method of the above embodiments can be implemented by a computer program instructing related hardware.
  • the program can be stored in a computer-readable storage medium, and when the program is executed, it may include the processes of the foregoing method embodiments.
  • the foregoing storage media include various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Telephone Function (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present application relates to a wireless audio system and an audio communication method and device, wherein the parameters of an isochronous data transmission channel are determined per audio service, with the audio service as the granularity. Parameter negotiations, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation, can be carried out between a first audio device (such as a mobile phone or media player) and a second audio device (such as an earphone) with the audio service as the granularity, and an isochronous data transmission channel is then created based on the negotiated parameters. In every audio service scenario, streaming data is transmitted through an LE ISO link, so switching between service scenarios does not involve switching the transmission framework, and efficiency is therefore higher.

Description

Wireless audio system, audio communication method and device

Technical Field
This application relates to the field of wireless technology, and in particular to a wireless audio system, an audio communication method, and devices.
Background
Bluetooth wireless technology is a short-range communication system intended to replace the cable connections between portable and/or fixed electronic devices. The key features of Bluetooth wireless technology are robustness, low power consumption, and low cost. Many features of its core specification are optional, allowing product differentiation.
Bluetooth wireless technology has two forms of system: basic rate (BR) and low energy (LE). Both forms include device discovery, connection establishment, and connection mechanisms. Basic rate BR may include an optional enhanced data rate (EDR), as well as alternate media access control and physical layer extensions (AMP). The low energy LE system includes features designed for products requiring lower power consumption, lower complexity, and lower cost than BR/EDR.
A device that implements both the BR and LE systems can communicate with other devices that also implement both systems. Some profiles and use cases are supported by only one of the systems; therefore, a device that implements both systems can support more use cases.
A profile is a concept specific to the Bluetooth protocol. To achieve interoperability between different devices on different platforms, the Bluetooth protocol not only specifies a core specification (called the Bluetooth core) but also defines application layer specifications, called Bluetooth profiles, for all kinds of common application scenarios, such as the advanced audio distribution profile (A2DP), the audio/video remote control profile (AVRCP), the basic imaging profile (BIP), the hands-free profile (HFP), the human interface device profile (HID profile), the headset profile (HSP), the serial port profile (SPP), the file transfer profile (FTP), the personal area networking profile (PAN profile), and so on.
However, the existing Bluetooth protocol defines different protocol frameworks for different profiles; these frameworks are independent of one another and are not mutually compatible.
Summary
This application provides a wireless audio system, an audio communication method, and devices, which can solve the problem of poor compatibility in the existing Bluetooth protocol.
In a first aspect, this application provides an audio communication method applied on the audio source side. The method may include: establishing an ACL link between an audio source (such as a mobile phone or media player) and an audio receiver (such as an earphone). For a specific first audio service, the audio source can negotiate parameters with the audio receiver over the ACL link. Based on the first parameter determined through negotiation, an isochronous data transmission channel can be established between the audio source and the audio receiver. The isochronous data transmission channel can be used to transmit the streaming data (that is, the audio data) of the first audio service. A Bluetooth low energy connection is established between the audio source and the audio receiver.
In a second aspect, this application provides an audio communication method applied on the audio receiver side. The method may include: the audio receiver and the audio source establish a Bluetooth low energy asynchronous connectionless (LE ACL) link. The audio receiver performs parameter negotiation for a first audio service with the audio source over the LE ACL link, and the first parameter determined through this negotiation corresponds to the first audio service. Based on the first parameter, the audio receiver can create, with the audio source, an LE isochronous data transmission channel corresponding to the first audio service. This channel is used by the audio receiver to receive the audio data of the first audio service sent by the audio source. A Bluetooth low energy connection is established between the audio source and the audio receiver.
In this application, an audio service may refer to a service or application that provides audio functions (such as audio playback or audio recording). An audio service involves audio-related data transmission, for example the transmission of the audio data itself, of content control messages used to control audio playback, and of stream control messages used to create the isochronous data transmission channel.
By implementing the methods provided in the first and second aspects, parameter negotiation and the establishment of isochronous data transmission channels are performed at the granularity of an audio service: the stream control messages and content control messages of every audio service are transmitted over the LE ACL link, and the streaming data is transmitted over the LE ISO link, which unifies the transmission framework of all services, instead of adapting different transmission frameworks for different profiles at profile granularity. It can be seen that the audio communication method provided in this application can be applied to more audio services and has better compatibility.
With reference to the first aspect or the second aspect, in some embodiments, the ACL link may be used to carry stream control messages, such as those involved in parameter negotiation, parameter configuration, and the establishment of the isochronous transmission channel. The ACL link may also be used to carry content control messages, such as call control messages (answer, hang up, etc.), playback control messages (previous track, next track, etc.), and volume control messages (volume up, volume down, etc.).
With reference to the first aspect or the second aspect, in some embodiments, the audio source may generate a content control message of the first audio service and send it to the audio receiver over the LE ACL link.
Correspondingly, the audio receiver can receive, over the LE ACL link, the content control message of the first audio service sent by the audio source, and can perform content control on the first audio service according to the message. Content control includes one or more of the following: volume control, playback control, and call control.
The content control message is used by the audio receiver to perform content control on the first audio service; content control includes one or more of the following: volume control, playback control, and call control.
Optionally, the audio source may receive a user input (for example, the user presses a hang-up button on the audio source) and then generate the content control message of the first audio service according to that user input.
With reference to the first aspect or the second aspect, in some embodiments, the audio receiver may generate a content control message of the first audio service and send it to the audio source over the LE ACL link.
Correspondingly, the audio source can receive, over the LE ACL link, the content control message of the first audio service sent by the audio receiver, and can perform content control on the first audio service according to the message. Content control includes one or more of the following: volume control, playback control, and call control.
The content control message is used to perform content control on the first audio service; content control includes one or more of the following: volume control, playback control, and call control.
Optionally, the audio receiver may receive a user input (for example, the user presses a hang-up button on the audio receiver) and then generate the content control message of the first audio service according to that user input.
With reference to the first aspect or the second aspect, in some embodiments, the audio source may generate the audio data of the first audio service and send it to the audio receiver through the LE isochronous data transmission channel corresponding to the first audio service.
Correspondingly, the audio receiver can receive, through the LE isochronous data transmission channel corresponding to the first audio service, the audio data of the first audio service sent by the audio source. Optionally, the audio receiver may convert the audio data of the first audio service into sound. Optionally, the audio receiver may store the audio data of the first audio service.
With reference to the first aspect or the second aspect, in some embodiments, the first parameter may include one or more of the following: a QoS parameter, a codec parameter, an ISO parameter, and so on.
The QoS parameters may include parameters describing transmission quality, such as latency, packet loss rate, and throughput. The codec parameters may include parameters affecting audio quality, such as the coding scheme and the compression ratio. The ISO parameters may include the CIS ID, the number of CISes, the maximum data size transmitted from master to slave, the maximum data size transmitted from slave to master, the maximum time interval for transmitting a master-to-slave data packet at the link layer, the maximum time interval for transmitting a slave-to-master data packet at the link layer, and so on.
With reference to the first aspect or the second aspect, in some embodiments, the first parameter may be obtained by querying a database according to the first audio service; the database may store the parameters corresponding to each of multiple audio services.
Optionally, in the database, the parameters corresponding to an audio service may be designed by comprehensively considering the various audio switching or mixing situations involved in that service, so that the parameters suit all of those situations. For example, in a game service, the game background sound and the microphone voice may be switched or superimposed (the user turns on the microphone to talk while gaming). The codec parameters and QoS parameters of the game background sound and of the microphone voice may differ. For the game service, parameters suitable for this situation can be designed, so that when the user turns on the microphone to talk during the game, the listening experience is not affected.
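The per-service parameter database described above can be sketched as a simple lookup table. The service names and every numeric value below are illustrative assumptions; the application does not specify concrete figures.

```python
# Hypothetical per-service parameter database. Values are illustrative only.
SERVICE_PARAMS = {
    "music": {"qos": {"latency_ms": 100}, "codec": {"name": "LC3", "bitrate_kbps": 160}},
    "call":  {"qos": {"latency_ms": 20},  "codec": {"name": "LC3", "bitrate_kbps": 32}},
    # Game parameters are chosen to also cover the mixed background-sound +
    # microphone case described above, so opening the mic mid-game needs no change.
    "game":  {"qos": {"latency_ms": 30},  "codec": {"name": "LC3", "bitrate_kbps": 96}},
}

def first_parameter(service):
    """Query the database for the first parameter of the given audio service."""
    return SERVICE_PARAMS[service]
```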
With reference to the first aspect or the second aspect, in some embodiments, the content control messages may include one or more of the following: volume control messages (volume up, volume down, etc.), playback control messages (previous track, next track, etc.), and call control messages (answer, hang up, etc.).
With reference to the first aspect or the second aspect, in some embodiments, when the audio service scenario is switched, taking switching from a music service to a telephone service as an example (answering a call while listening to music), the audio source and the audio receiver can renegotiate parameters to determine the new parameters corresponding to the new audio service (such as the telephone service) and then create a new isochronous data transmission channel based on the new parameters. The new isochronous data transmission channel can be used to transmit the streaming data of the new audio service (such as the telephone service). The isochronous data transmission channels of all services are based on LE. In this way, switching the service scenario does not involve switching the transmission framework, is more efficient, and causes no noticeable pause.
Optionally, when the audio service scenario is switched, the isochronous data transmission channel corresponding to the old audio service (such as the music service) can instead be reconfigured with the new parameters corresponding to the new audio service (such as the telephone service), without re-creating a new isochronous data transmission channel based on the new parameters. This can further improve efficiency.
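The two switching strategies above (re-create the channel versus reconfigure it in place) can be sketched as follows. The channel representation is hypothetical.

```python
def switch_service(channel, new_service, new_params, reconfigure=True):
    """Handle an audio-scene switch. With reconfigure=True, the existing
    channel is re-parameterized in place; otherwise it is replaced by a
    newly created channel built from the new parameters."""
    if reconfigure:
        channel["service"] = new_service
        channel["params"] = new_params
        return channel  # same channel object, new parameters
    return {"service": new_service, "params": new_params, "state": "created"}
```

Reconfiguring avoids the teardown/re-creation round trips, which is the efficiency gain the paragraph above refers to.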
With reference to the first aspect or the second aspect, in some embodiments, the time at which the isochronous data transmission channel is created may include the following options:
In one option, the isochronous data transmission channel can be created when the audio service arrives. For example, when the user opens a game application (and the game background sound starts playing at the same time), the application layer of the mobile phone sends a game-background-sound service creation notification to the host, and according to this notification the mobile phone initiates the creation procedure of the isochronous data transmission channel with the Bluetooth headset.
In another option, a default isochronous data transmission channel can be established first, created based on default CIG parameters. In this way, when an audio service arrives, the default isochronous data transmission channel can be used directly to carry its streaming data, and the response is faster.
In yet another option, multiple virtual isochronous data transmission channels can be established first; these may correspond to multiple different sets of CIG parameters and thus suit multiple audio services. A virtual isochronous data transmission channel is an isochronous data transmission channel on which no data exchange occurs over the air interface. When an audio service arrives, the virtual isochronous data transmission channel corresponding to that service can be selected, and a handshake is triggered between the first audio device and the second audio device before communication begins.
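The three channel-creation-timing options above can be sketched as alternative strategies behind one selection function. The strategy names and channel representation are hypothetical.

```python
def channel_for_service(service, strategy, default_channel=None, virtual_pool=None):
    """Obtain a channel for a service per the three creation-timing options."""
    if strategy == "on_demand":
        # Option 1: create the channel only when the audio service arrives.
        return {"service": service, "state": "created"}
    if strategy == "default":
        # Option 2: reuse a channel pre-created from default CIG parameters.
        return default_channel
    if strategy == "virtual_pool":
        # Option 3: pick the pre-built virtual channel matching this service;
        # the over-the-air handshake happens only when it is first used.
        return virtual_pool[service]
    raise ValueError(f"unknown strategy: {strategy}")
```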
In a third aspect, an audio device is provided, including multiple functional units for correspondingly performing the method provided in any one of the possible implementations of the first aspect.
In a fourth aspect, an audio device is provided, including multiple functional units for correspondingly performing the method provided in any one of the possible implementations of the second aspect.
In a fifth aspect, an audio device is provided for performing the audio communication method described in the first aspect. The device may include a memory, and a processor, a transmitter, and a receiver coupled to the memory, where the transmitter is used to send signals to another wireless communication device, the receiver is used to receive signals sent by another wireless communication device, the memory is used to store implementation code of the audio communication method described in the first aspect, and the processor is used to execute the program code stored in the memory, that is, to perform the audio communication method described in any one of the possible implementations of the first aspect.
第六方面，提供了一种音频设备，用于执行第二方面描述的音频通讯方法。终端可包括：存储器以及与存储器耦合的处理器、发射器和接收器，其中：发射器用于向另一无线通信设备发送信号，接收器用于接收另一无线通信设备发送的信号，存储器用于存储第二方面描述的音频通讯方法的实现代码，处理器用于执行存储器中存储的程序代码，即执行第二方面可能的实施方式中的任意一种所描述的音频通讯方法。In a sixth aspect, an audio device is provided for performing the audio communication method described in the second aspect. The terminal may include: a memory, and a processor, a transmitter, and a receiver coupled to the memory, where: the transmitter is used to send signals to another wireless communication device, the receiver is used to receive signals sent by another wireless communication device, the memory is used to store the implementation code of the audio communication method described in the second aspect, and the processor is used to execute the program code stored in the memory, that is, to perform the audio communication method described in any one of the possible implementations of the second aspect.
第七方面，提供了一种芯片组，芯片组可包括：第一芯片和第二芯片。第一芯片和第二芯片之间通过接口HCI通信。其中，第一芯片可包括以下模块：多媒体音频模块、话音模块、背景声模块、内容控制模块、流控制模块、流数据模块以及L2CAP模块。第二芯片可包括：LE物理层模块、LE链路层模块。According to a seventh aspect, a chipset is provided. The chipset may include a first chip and a second chip. The first chip and the second chip communicate with each other through the HCI interface. The first chip may include the following modules: a multimedia audio module, a voice module, a background sound module, a content control module, a stream control module, a stream data module, and an L2CAP module. The second chip may include an LE physical layer module and an LE link layer module.
在第二芯片中：LE物理层模块，可用于提供数据传输的物理通道(通常称为信道)。通常情况下，一个通信系统中存在几种不同类型的信道，如控制信道、数据信道、语音信道等等。LE链路层模块，可用于在物理层的基础上提供两个或多个设备之间、和物理无关的逻辑传输通道(也称作逻辑链路)。LE链路层模块可用于控制设备的射频状态，设备将处于五种状态之一：等待、广告、扫描、初始化、连接。广播设备不需要建立连接就可以发送数据，而扫描设备接收广播设备发送的数据；发起连接的设备通过发送连接请求来回应广播设备，如果广播设备接受连接请求，那么广播设备与发起连接的设备将会进入连接状态。发起连接的设备称为主设备(master)，接受连接请求的设备称为从设备(slave)。LE链路层模块可包括LE ACL模块和LE等时(ISO)模块。LE ACL模块可用于通过LE ACL链路传输设备间的控制消息，如流控制消息、内容控制消息、音量控制消息。In the second chip: the LE physical layer module can be used to provide physical channels for data transmission (commonly called channels). Typically, several different types of channels exist in a communication system, such as control channels, data channels, voice channels, and so on. The LE link layer module can be used to provide, on top of the physical layer, logical transmission channels (also called logical links) between two or more devices that are independent of the physical medium. The LE link layer module can be used to control the radio-frequency state of the device; the device will be in one of five states: waiting, advertising, scanning, initiating, or connected. A broadcasting device can send data without establishing a connection, while a scanning device receives the data it sends; a device initiating a connection responds to the broadcasting device by sending a connection request, and if the broadcasting device accepts the request, the two devices enter the connected state. The device that initiates the connection is called the master, and the device that accepts the connection request is called the slave. The LE link layer module may include an LE ACL module and an LE isochronous (ISO) module. The LE ACL module can be used to transmit control messages between devices over the LE ACL link, such as flow control messages, content control messages, and volume control messages.
The LE ISO module can be used to transmit isochronous data between devices (such as the stream data itself) through isochronous data transmission channels.
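For illustration only, the control-plane/data-plane split of the LE link layer module described above can be sketched as a routing rule: control messages go over the LE ACL link, while isochronous stream data goes over an LE ISO channel. The message-kind names are assumptions for the sketch:

```python
# Kinds of control messages carried over the LE ACL link, per the text.
CONTROL_KINDS = {"flow_control", "content_control", "volume_control"}

def route(kind: str) -> str:
    """Return the link-layer module that would carry a message of this kind."""
    if kind in CONTROL_KINDS:
        return "LE ACL"
    if kind == "isochronous_data":
        return "LE ISO"
    raise ValueError(f"unknown message kind: {kind}")
```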
在第一芯片中：L2CAP模块，可用于管理逻辑层提供的逻辑链路。基于L2CAP，不同的上层应用可共享同一个逻辑链路，类似TCP/IP中端口(port)的概念。In the first chip: the L2CAP module can be used to manage the logical links provided by the logical layer. Based on L2CAP, different upper-layer applications can share the same logical link, similar to the concept of a port in TCP/IP.
多媒体音频模块、话音模块、背景声模块可以是依据业务场景设置的模块，可用于将应用层的音频应用划分为多媒体音频、话音、背景声等几种音频业务。不限于多媒体音频、话音、背景声等，音频业务也可以分为：话音，音乐，游戏，视频，语音助手，邮件提示音，告警，提示音，导航音等。内容控制模块可负责封装各种音频业务的内容控制(如上一首、下一首等)消息，并向LE ACL模块411输出音频业务的内容控制消息，以通过LE ACL模块411传输封装后的内容控制消息。流控制模块可用于为特定音频业务进行参数协商，如QoS参数的协商，编码(Codec)参数的协商，ISO参数的协商，以及基于协商好的参数为该特定业务创建等时数据传输通道。为该特定业务创建的等时数据传输通道可用于传输该特定音频业务的音频数据。本申请中，该特定音频业务可以称为第一音频业务，该协商好的参数可以称为第一参数。流数据模块可用于向LE等时(ISO)模块输出音频业务的音频数据，以通过等时数据传输通道传输音频数据。等时数据传输通道可以是CIS。CIS可用于在连接状态的设备间传输等时数据。等时数据传输通道最终承载于LE ISO。The multimedia audio module, the voice module, and the background sound module may be modules set according to service scenarios, and may be used to divide the audio applications of the application layer into several types of audio services, such as multimedia audio, voice, and background sound. Audio services are not limited to multimedia audio, voice, and background sound; they may also be divided into: voice, music, games, video, voice assistant, e-mail alert tones, alarms, prompt tones, navigation tones, and so on. The content control module may be responsible for encapsulating the content control messages (such as previous track, next track, etc.) of various audio services, and for outputting the content control messages of audio services to the LE ACL module 411, so that the encapsulated content control messages are transmitted through the LE ACL module 411. The stream control module can be used to negotiate parameters for a specific audio service, such as QoS parameters, codec parameters, and ISO parameters, and to create an isochronous data transmission channel for that service based on the negotiated parameters. The isochronous data transmission channel created for the specific service can be used to transmit the audio data of that audio service. In this application, the specific audio service may be referred to as the first audio service, and the negotiated parameters may be referred to as the first parameters.
The stream data module can be used to output the audio data of an audio service to the LE isochronous (ISO) module, so that the audio data is transmitted through the isochronous data transmission channel. The isochronous data transmission channel may be a CIS. A CIS can be used to transfer isochronous data between connected devices. The isochronous data transmission channel is ultimately carried on LE ISO.
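For illustration only, the stream control flow described above (negotiate a parameter set for a specific audio service, then create the isochronous channel from it) can be sketched as follows; all field names, codec placeholders, and values are assumptions, not negotiated values from any specification:

```python
from dataclasses import dataclass

@dataclass
class NegotiatedParams:
    qos_latency_ms: int   # negotiated QoS (assumed field)
    codec: str            # negotiated codec; name is a placeholder
    iso_interval_us: int  # negotiated ISO interval (assumed field)

def negotiate(service: str) -> NegotiatedParams:
    """Stand-in for the QoS / codec / ISO parameter negotiation between the
    two audio devices ("first parameters" in the text)."""
    table = {
        "voice":      NegotiatedParams(qos_latency_ms=20,  codec="codec_a", iso_interval_us=7500),
        "multimedia": NegotiatedParams(qos_latency_ms=100, codec="codec_b", iso_interval_us=10000),
    }
    return table[service]

def create_cis(params: NegotiatedParams) -> dict:
    """Create a (simulated) CIS carrying the stream data; per the text, the
    channel is ultimately carried on LE ISO."""
    return {"transport": "LE ISO",
            "interval_us": params.iso_interval_us,
            "codec": params.codec}
```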
第八方面,提供了一种芯片,芯片可包括第七方面描述的第一芯片中的模块以及第二芯片中的模块。关于各个模块的说明,可参考第七方面,这里不再赘述。According to an eighth aspect, a chip is provided. The chip may include the module in the first chip and the module in the second chip described in the seventh aspect. For the description of each module, please refer to the seventh aspect, which will not be repeated here.
第九方面,提供了一种通信系统,通信系统包括:第一音频设备和第二音频设备,其中:第一音频设备可以是第三方面或第五方面描述的音频设备。第二音频设备可以是第四方面或第六方面描述的音频设备。In a ninth aspect, a communication system is provided. The communication system includes: a first audio device and a second audio device, where the first audio device may be the audio device described in the third aspect or the fifth aspect. The second audio device may be the audio device described in the fourth aspect or the sixth aspect.
第十方面,提供了一种通信系统,通信系统包括:第一音频设备、第二音频设备和第三音频设备,其中:第一音频设备可以是第三方面或第五方面描述的音频设备。第二音频设备、第三音频设备均可以是第四方面或第六方面描述的音频设备。According to a tenth aspect, a communication system is provided. The communication system includes: a first audio device, a second audio device, and a third audio device, where the first audio device may be the audio device described in the third aspect or the fifth aspect. Both the second audio device and the third audio device may be the audio devices described in the fourth aspect or the sixth aspect.
第十一方面,提供了一种计算机可读存储介质,可读存储介质上存储有指令,当其在计算机上运行时,使得计算机执行上述第一方面描述的音频通讯方法。According to an eleventh aspect, there is provided a computer-readable storage medium having instructions stored on it, which when run on a computer, causes the computer to execute the audio communication method described in the first aspect above.
第十二方面,提供了另一种计算机可读存储介质,可读存储介质上存储有指令,当其在计算机上运行时,使得计算机执行上述第二方面描述的音频通讯方法。According to a twelfth aspect, another computer-readable storage medium is provided. The readable storage medium stores instructions, which when executed on a computer, causes the computer to perform the audio communication method described in the second aspect.
第十三方面,提供了一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第一方面描述的音频通讯方法。According to a thirteenth aspect, there is provided a computer program product containing instructions, which when executed on a computer, causes the computer to execute the audio communication method described in the first aspect above.
第十四方面,提供了另一种包含指令的计算机程序产品,当其在计算机上运行时,使得计算机执行上述第二方面描述的音频通讯方法。According to a fourteenth aspect, there is provided another computer program product containing instructions that, when run on a computer, cause the computer to perform the audio communication method described in the second aspect above.
附图说明BRIEF DESCRIPTION OF THE DRAWINGS
为了更清楚地说明本申请实施例或背景技术中的技术方案,下面将对本申请实施例或背景技术中所需要使用的附图进行说明。In order to more clearly explain the technical solutions in the embodiments or the background technology of the present application, the drawings required in the embodiments or the background technology of the present application will be described below.
图1是本申请提供的一种无线音频系统的架构示意图;FIG. 1 is a schematic structural diagram of a wireless audio system provided by this application;
图2A是现有的释BR/EDR蓝牙的协议框架示意图;2A is a schematic diagram of the existing protocol framework for releasing BR/EDR Bluetooth;
图2B-图2D是现有的几种音频profile的协议栈示意图;2B-2D are schematic diagrams of existing protocol stacks of several audio profiles;
图3是本申请提供的基于BLE的音频协议框架示意图;FIG. 3 is a schematic diagram of a BLE-based audio protocol framework provided by this application;
图4是本申请提供的音频业务的几种数据类型的示意图;4 is a schematic diagram of several data types of audio services provided by this application;
图5是扩展后的BLE传输框架示意图;Figure 5 is a schematic diagram of the extended BLE transmission framework;
图6是本申请提供的音频通讯方法的总体流程示意图;6 is a schematic diagram of the overall flow of the audio communication method provided by this application;
图7是本申请的提供的创建等时数据传输通道的流程示意图;7 is a schematic flowchart of creating an isochronous data transmission channel provided by this application;
图8是本申请提供的左右耳机一起使用的场景下的音频通讯方法的流程示意图;8 is a schematic flowchart of an audio communication method in a scenario where left and right headphones are used together provided by this application;
图9是本申请提供的BLE连接创建过程的流程示意图;9 is a schematic flowchart of a BLE connection creation process provided by this application;
图10A是本申请的一个实施例提供的电子设备的硬件架构示意图;10A is a schematic diagram of a hardware architecture of an electronic device provided by an embodiment of the present application;
图10B是图10A所示的电子设备上实现的软体架构示意图;10B is a schematic diagram of a software architecture implemented on the electronic device shown in FIG. 10A;
图11是本申请的一个实施例提供的音频输出设备的硬件架构示意图;11 is a schematic diagram of a hardware architecture of an audio output device provided by an embodiment of the present application;
图12是本申请的提供一种芯片组的架构示意图;12 is a schematic diagram of an architecture for providing a chipset according to this application;
图13是本申请的一种芯片的架构示意图。13 is a schematic diagram of a chip architecture of the present application.
具体实施方式DETAILED DESCRIPTION
本申请的实施方式部分使用的术语仅用于对本申请的具体实施例进行解释,而非旨在限定本申请。The terms used in the implementation part of the present application are only used to explain specific examples of the present application, and are not intended to limit the present application.
图1示出了本申请提供的无线音频系统100。如图1所示，无线音频系统100可包括第一音频设备101、第二音频设备102和第三音频设备103。其中，第一音频设备101可以实现为以下任意一种电子设备：手机、便携式游戏机、便携式媒体播放设备、个人电脑、车载媒体播放设备等等。第二音频设备102和第三音频设备103可以被配置为任意类型的用于将音频数据转换成声音的电声转换器(electro-acoustic transducer)，例如扬声器、入耳式耳机、头戴式耳机等等。不限于图1所示，第一音频设备101、第二音频设备102和第三音频设备103的物理形态、尺寸还可以不同，本申请对此不做限定。FIG. 1 shows a wireless audio system 100 provided by this application. As shown in FIG. 1, the wireless audio system 100 may include a first audio device 101, a second audio device 102, and a third audio device 103. The first audio device 101 may be implemented as any of the following electronic devices: a mobile phone, a portable game console, a portable media playback device, a personal computer, a vehicle-mounted media playback device, and so on. The second audio device 102 and the third audio device 103 may be configured as any type of electro-acoustic transducer for converting audio data into sound, such as a loudspeaker, an in-ear earphone, or a headset. Not limited to what is shown in FIG. 1, the physical forms and sizes of the first audio device 101, the second audio device 102, and the third audio device 103 may also differ, which is not limited in this application.
第一音频设备101、第二音频设备102和第三音频设备103均可以配置有无线收发器,无线收发器可用于发射和接收无线信号。The first audio device 101, the second audio device 102, and the third audio device 103 may all be configured with a wireless transceiver, and the wireless transceiver may be used to transmit and receive wireless signals.
第二音频设备102和第三音频设备103之间没有线缆连接。二者可以通过无线通信连接106而不是有线通信连接,进行通信。There is no cable connection between the second audio device 102 and the third audio device 103. The two can communicate through a wireless communication connection 106 instead of a wired communication connection.
第一音频设备101可以和第二音频设备102之间建立无线通信连接104。The first audio device 101 and the second audio device 102 can establish a wireless communication connection 104.
在第一音频设备101至第二音频设备102的传输方向上，第一音频设备101可以通过无线通信连接104向第二音频设备102发送音频数据。此时，第一音频设备101的角色是音频源(audio source)，第二音频设备102的角色是音频接收方(audio sink)。这样第二音频设备102可以将接收到的音频数据转换成声音，使得佩戴第二音频设备102的用户可以听到该声音。In the transmission direction from the first audio device 101 to the second audio device 102, the first audio device 101 may send audio data to the second audio device 102 through the wireless communication connection 104. At this time, the role of the first audio device 101 is an audio source, and the role of the second audio device 102 is an audio sink. In this way, the second audio device 102 can convert the received audio data into sound, so that the user wearing the second audio device 102 can hear the sound.
在第二音频设备102至第一音频设备101的传输方向上，在第二音频设备102配置有受话器/麦克风等声音采集器件的情况下，第二音频设备102可以将采集的声音转换成音频数据，并通过无线通信连接104向第一音频设备101发送音频数据。此时，第二音频设备102的角色是音频源(audio source)，第一音频设备101的角色是音频接收方(audio sink)。这样第一音频设备101可以对接收到的音频数据进行处理，如向其他电子设备发送该音频数据(语音通话场景下)、存储该音频数据(录音场景下)。In the transmission direction from the second audio device 102 to the first audio device 101, when the second audio device 102 is configured with a sound collection device such as a receiver/microphone, the second audio device 102 can convert the collected sound into audio data and send the audio data to the first audio device 101 through the wireless communication connection 104. At this time, the role of the second audio device 102 is an audio source, and the role of the first audio device 101 is an audio sink. In this way, the first audio device 101 can process the received audio data, for example, sending it to another electronic device (in a voice call scenario) or storing it (in a recording scenario).
除了音频数据，第一音频设备101和第二音频设备102之间还可以基于无线通信连接104交互播放控制(如上一首、下一首等)消息、通话控制(如接听、挂断)消息、音量控制消息(如音量增大、音量减小)等。具体的，第一音频设备101可以通过无线通信连接104向第二音频设备102发送播放控制消息、通话控制消息，可实现在第一音频设备101侧进行播放控制、通话控制。具体的，第二音频设备102可以通过无线通信连接104向第一音频设备101发送播放控制消息、通话控制消息，可实现在第二音频设备102侧进行播放控制、通话控制。In addition to audio data, the first audio device 101 and the second audio device 102 may also exchange playback control messages (such as previous track, next track), call control messages (such as answer, hang up), and volume control messages (such as volume up, volume down) over the wireless communication connection 104. Specifically, the first audio device 101 may send playback control messages and call control messages to the second audio device 102 through the wireless communication connection 104, enabling playback control and call control on the first audio device 101 side. Likewise, the second audio device 102 may send playback control messages and call control messages to the first audio device 101 through the wireless communication connection 104, enabling playback control and call control on the second audio device 102 side.
同样的,第一音频设备101和第三音频设备103之间可以建立无线通信连接105,并可以通过无线通信连接105交互音频数据、播放控制消息、通话控制消息等。Similarly, a wireless communication connection 105 can be established between the first audio device 101 and the third audio device 103, and audio data, playback control messages, and call control messages can be exchanged through the wireless communication connection 105.
第一音频设备101可以同时传输音频数据到第二音频设备102、第三音频设备103。为了确保听觉体验的整体性，第一音频设备101传输到第二音频设备102、第三音频设备103的音频数据、控制消息都需要实现点到多点的同步传输。第二音频设备102和第三音频设备103的同步与否对用户听觉体验的整体性有至关重要的影响。当第二音频设备102和第三音频设备103分别实现为左耳机、右耳机时，如果左右耳的信号失去大约30微秒的同步，就会令人不安，用户就会感觉到声音混乱。The first audio device 101 can simultaneously transmit audio data to the second audio device 102 and the third audio device 103. To ensure a coherent hearing experience, the audio data and control messages transmitted from the first audio device 101 to the second audio device 102 and the third audio device 103 must be delivered with point-to-multipoint synchronization. Whether the second audio device 102 and the third audio device 103 are synchronized has a crucial influence on the coherence of the user's hearing experience. When the second audio device 102 and the third audio device 103 are implemented as a left earphone and a right earphone, respectively, a loss of synchronization of about 30 microseconds between the left and right ears is disturbing, and the user perceives the sound as chaotic.
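For illustration only, the approximately 30 microsecond bound mentioned above suggests a simple check on the render timestamps of the same audio frame on the left and right earphones; the function and variable names are assumptions for the sketch:

```python
MAX_SKEW_US = 30  # approximate audible skew bound cited in the text

def in_sync(left_render_us: int, right_render_us: int) -> bool:
    """True if left/right playback of the same audio frame stays within the bound."""
    return abs(left_render_us - right_render_us) <= MAX_SKEW_US
```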
图1所示的无线音频系统100可以是基于蓝牙协议实现的无线音频系统。即设备与设备之间的无线通信连接(无线通信连接104、无线通信连接105、无线通信连接106)可以采用蓝牙通信连接。为了支持音频应用,现有的BR蓝牙协议提供了一些profile,如A2DP、AVRCP、HFP。The wireless audio system 100 shown in FIG. 1 may be a wireless audio system implemented based on the Bluetooth protocol. That is, the wireless communication connection (wireless communication connection 104, wireless communication connection 105, wireless communication connection 106) between the devices may use a Bluetooth communication connection. In order to support audio applications, the existing BR Bluetooth protocol provides some profiles, such as A2DP, AVRCP, HFP.
但是,现有的蓝牙协议存在一些问题。下面进行说明。However, the existing Bluetooth protocol has some problems. This is explained below.
现有的蓝牙协议为不同的profile定义了不同的协议框架,彼此之间相互独立,无法兼容。The existing Bluetooth protocol defines different protocol frameworks for different profiles, which are independent of each other and cannot be compatible.
图2A示例性示出了现有的BR/EDR蓝牙协议框架。如图2A所示,现有的BR/EDR蓝牙协议框架可包括多个profile。为了简化示意,图2A中仅示出了一些音频应用的profile:A2DP、AVRCP、HFP。不限于此,现有的BR/EDR蓝牙协议框架还可包括其他profile,如SPP、FTP等。FIG. 2A exemplarily shows the existing BR/EDR Bluetooth protocol framework. As shown in FIG. 2A, the existing BR/EDR Bluetooth protocol framework may include multiple profiles. To simplify the illustration, only some audio application profiles are shown in FIG. 2A: A2DP, AVRCP, and HFP. Not limited to this, the existing BR/EDR Bluetooth protocol framework may also include other profiles, such as SPP, FTP, etc.
其中，A2DP规定了使用蓝牙非同步传输信道方式，传输高质量音频的协议栈及使用方法。例如可以使用立体声蓝牙耳机来收听来自音乐播放器的音乐。AVRCP是指遥控功能，一般可支持暂停(pause)，停止(stop)，重播(replay)，音量控制等远程控制操作。例如，可以使用蓝牙耳机执行暂停，切换下一曲等操作来控制音乐播放器播放音乐。HFP为话音应用，提供免提通话功能。Among them, A2DP specifies the protocol stack and usage for transmitting high-quality audio over the Bluetooth asynchronous transmission channel. For example, a stereo Bluetooth headset can be used to listen to music from a music player. AVRCP refers to the remote control function, and generally supports remote control operations such as pause, stop, replay, and volume control. For example, a Bluetooth headset can be used to pause or skip to the next track to control music playback on a music player. HFP is a voice profile that provides hands-free calling.
图2B-图2C分别示出了A2DP、HFP的协议栈。其中：FIG. 2B and FIG. 2C show the protocol stacks of A2DP and HFP, respectively. Among them:
A2DP协议栈包括的协议和实体Protocols and entities included in the A2DP protocol stack
音频源(audio source)是数字音频流的源，该数字音频流被传输至匹克网(piconet)中的音频接收方(audio sink)。音频接收方(audio sink)是接收来自同一个piconet中的音频源(audio source)的数字音频流的接收方。在音乐播放场景下，典型的用作音频源的设备可以是媒体播放设备，如MP3，典型的用作音频接收方的设备可以是耳机。在录音场景下，典型的用作音频源的设备可以是声音采集设备，如麦克风，典型的用作音频接收方的设备可以是便携式录音机。An audio source is the source of a digital audio stream that is transmitted to an audio sink in the piconet. An audio sink is the receiver of a digital audio stream coming from an audio source in the same piconet. In a music playback scenario, a typical audio source device may be a media playback device such as an MP3 player, and a typical audio sink device may be a headset. In a recording scenario, a typical audio source device may be a sound collection device such as a microphone, and a typical audio sink device may be a portable recorder.
基带(Baseband)、链路管理协议(link management protocol,LMP)、逻辑链路控制和适配协议(logical link control and adaptation protocol,L2CAP)和服务发现协议(service discovery protocol,SDP)是在蓝牙核心规范中定义的蓝牙协议。音视频数据传输协议(audio video data transport protocol,AVDTP)包括用于协商流参数(streaming parameter)的信令实体和用于控制流本身的传输实体。应用(Application)层是其中定义有应用服务和传输服务参数的实体，该实体也用于将音频流数据适配成已定义的包格式或将已定义的包格式适配成音频流数据。Baseband, the link management protocol (LMP), the logical link control and adaptation protocol (L2CAP), and the service discovery protocol (SDP) are Bluetooth protocols defined in the Bluetooth Core Specification. The audio/video data transport protocol (AVDTP) includes a signaling entity for negotiating streaming parameters and a transport entity for handling the stream itself. The application layer is the entity in which application service and transport service parameters are defined; this entity also adapts audio stream data into the defined packet format, or adapts the defined packet format into audio stream data.
AVRCP协议栈包括的协议和实体Protocols and entities included in the AVRCP protocol stack
控制方(controller)是通过发送命令帧到目标设备而发起交易的设备。典型的控制方可以是个人电脑、手机、远程遥控器等。目标方(target)是接收命令帧并相应的生成响应帧的设备。典型的目标方可以是音频播放/录制设备、视频播放/录制设备、电视机等。A controller is a device that initiates a transaction by sending a command frame to a target device. Typical controllers may be personal computers, mobile phones, remote controls, etc. A target is a device that receives command frames and generates the corresponding response frames. Typical targets may be audio playback/recording devices, video playback/recording devices, televisions, etc.
基带(Baseband)、链路管理协议(LMP)和逻辑链路控制和适配协议(L2CAP)为OSI模型的层1和层2蓝牙协议。音视频控制传输协议(audio video control transport protocol,AVCTP)和基础图像规范(basic imaging profile,BIP)定义用于A/V设备控制的过程和消息。SDP是蓝牙服务发现协议(service discovery protocol)。对象交换(object exchange,OBEX)协议用于在蓝牙设备间传输数据对象，来源于红外定义的协议，后被蓝牙采用。音视频/控制(AV/C)是负责基于AV/C命令的设备控制信令的实体。应用(Application)层是AVRCP实体，用于交换协议中定义的控制和浏览命令。Baseband, the link management protocol (LMP), and the logical link control and adaptation protocol (L2CAP) are the layer 1 and layer 2 Bluetooth protocols of the OSI model. The audio/video control transport protocol (AVCTP) and the basic imaging profile (BIP) define the procedures and messages used for the control of A/V devices. SDP is the Bluetooth service discovery protocol. The object exchange (OBEX) protocol is used to transfer data objects between Bluetooth devices; it originated in protocols defined for infrared and was later adopted by Bluetooth. Audio/video control (AV/C) is the entity responsible for device control signaling based on AV/C commands. The application layer is the AVRCP entity used to exchange the control and browsing commands defined in the protocol.
HFP协议栈包括的协议和实体Protocols and entities included in the HFP protocol stack
音频网关(audio gateway)是用作输入音频、输出音频的网关的设备。典型的用作音频网关的设备可以是蜂窝电话。免提单元(Hands-Free unit)是用作音频网关的远程音频输入、输出机制的设备。免提单元可以提供一些远程控制方法。典型的用作免提单元的设备可以是车载免提单元。An audio gateway (audio gateway) is a device used as a gateway for inputting audio and outputting audio. A typical device used as an audio gateway may be a cellular phone. Hands-free unit (Hands-Free unit) is a device used as a remote audio input and output mechanism of the audio gateway. The hands-free unit can provide some remote control methods. A typical device used as a hands-free unit may be an on-board hands-free unit.
基带(Baseband)、链路管理协议(LMP)和逻辑链路控制和适配协议(L2CAP)为OSI模型的层1和层2蓝牙协议。RFCOMM为蓝牙串口模拟(emulation)实体。SDP是蓝牙服务发现协议。免提控制(Hands-Free control)是负责免提单元的特定控制信号的实体。该控制信号是基于AT命令的。音频端口模拟(audio port emulation)层是音频网关(audio gateway)上模拟音频端口的实体,音频驱动(audio driver)是免提单元中的驱动软件。Baseband (Baseband), Link Management Protocol (LMP) and Logical Link Control and Adaptation Protocol (L2CAP) are layer 1 and layer 2 Bluetooth protocols of the OSI model. RFCOMM is a Bluetooth serial port emulation entity. SDP is a Bluetooth service discovery protocol. Hands-free control (Hands-Free control) is the entity responsible for the specific control signal of the hands-free unit. The control signal is based on AT commands. The audio port simulation (audio port emulation) layer is the entity that simulates the audio port on the audio gateway (audio gateway), and the audio driver (audio driver) is the driver software in the hands-free unit.
综合上述A-C项可以看出,A2DP、AVRCP、HFP分别对应不同的协议栈,不同的profile采用了不同的传输链路,相互之间无法兼容。也即是说,profile其实是蓝牙协议对应于不同应用场景的不同协议栈。当蓝牙协议需要支持新的应用场景时,需要遵循现有的蓝牙协议框架添加profile,添加协议栈。Based on the above items A-C, it can be seen that A2DP, AVRCP, and HFP respectively correspond to different protocol stacks, and different profiles use different transmission links, which are not compatible with each other. That is to say, the profile is actually a different protocol stack of the Bluetooth protocol corresponding to different application scenarios. When the Bluetooth protocol needs to support new application scenarios, it is necessary to follow the existing Bluetooth protocol framework to add a profile and add a protocol stack.
而且,由于不同profile采用不同的协议栈,且各个协议栈之间相互独立,因此不同profile的应用之间的切换耗时严重,会出现明显的停顿。Moreover, because different profiles use different protocol stacks, and each protocol stack is independent of each other, the switching between applications of different profiles is time-consuming and will cause significant pauses.
例如，戴着蓝牙耳麦的用户在游戏时(游戏会产生游戏背景声，如游戏技能触发的声音)打开麦克和队友喊话。在此场景下，音频传输会需要从A2DP切换到HFP。其中，游戏时的背景声传输可以是基于A2DP的协议栈实现的，和队友喊话的话音传输可以是基于HFP的协议栈实现的。游戏背景声比话音要求更高的音质，即二者采用的编码参数(如压缩率)是不一样的，游戏背景声比话音采用更高的压缩率。由于A2DP和HFP是相互独立的，因此从A2DP切换到HFP需要停止A2DP下和游戏背景声传输相关的配置，并重新在HFP下进行音频数据传输的参数协商、配置初始化等等工作，这一切换过程需要耗费较长时间，从而导致出现用户能够明显感知到的停顿。For example, while gaming (games produce background sound, such as sounds triggered by game skills), a user wearing a Bluetooth headset opens the microphone to talk to teammates. In this scenario, the audio transmission needs to switch from A2DP to HFP. The background sound during the game may be transmitted based on the A2DP protocol stack, while the voice to teammates may be transmitted based on the HFP protocol stack. The game background sound requires higher sound quality than voice; that is, the two use different coding parameters (such as compression rate), with the game background sound using a higher compression rate than voice. Because A2DP and HFP are independent of each other, switching from A2DP to HFP requires stopping the A2DP configuration related to game background sound transmission, and then performing parameter negotiation, configuration initialization, and so on for audio data transmission under HFP. This switching process takes a long time, resulting in a pause that the user can clearly perceive.
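For illustration only, the switching cost described above can be contrasted as two toy step lists: under independent profiles the old stack is torn down and a new one negotiated, whereas under a single shared framework only the stream parameters change. The step names are illustrative assumptions:

```python
def switch_profiles_legacy() -> list:
    """A2DP → HFP under independent profiles: tear down, then rebuild."""
    return ["stop_a2dp_stream", "release_a2dp_config",
            "negotiate_hfp_params", "init_hfp_config", "start_hfp_stream"]

def switch_service_unified() -> list:
    """Service switch under one shared framework: only stream parameters change."""
    return ["update_stream_params", "start_stream"]
```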
另外,现有的BR/EDR蓝牙协议没有实现点对多点的同步传输。In addition, the existing BR/EDR Bluetooth protocol does not implement point-to-multipoint synchronous transmission.
现有的BR/EDR蓝牙协议定义了两种蓝牙物理链路：无连接的异步(asynchronous connectionless,ACL)链路、同步面向连接(synchronous connection oriented,SCO)或扩展的SCO(extended SCO,eSCO)链路。其中，ACL链路既支持对称连接(点对点)，也支持非对称连接(点对多点)。ACL链路的传输效率高，但时延不可控，重传次数没有限定，可主要用于传输对时延不敏感的数据，如控制信令、分组数据等。SCO/eSCO链路支持对称连接(点对点)。SCO/eSCO链路的传输效率低，但时延可控，重传次数有限定，可主要传输对时延敏感的业务(如话音)。The existing BR/EDR Bluetooth protocol defines two types of Bluetooth physical links: the asynchronous connectionless (ACL) link, and the synchronous connection-oriented (SCO) or extended SCO (eSCO) link. The ACL link supports both symmetric (point-to-point) and asymmetric (point-to-multipoint) connections. The ACL link has high transmission efficiency, but its delay is uncontrollable and the number of retransmissions is unbounded; it is mainly used to transmit delay-insensitive data, such as control signaling and packet data. The SCO/eSCO link supports symmetric (point-to-point) connections. The SCO/eSCO link has lower transmission efficiency, but its delay is controllable and the number of retransmissions is bounded; it mainly carries delay-sensitive services (such as voice).
现有的BR/EDR蓝牙协议中的ACL、SCO/eSCO这两种链路没有实现对等时数据(isochronous data)的支持。也即是说,在点对多点的piconet中,主设备master发往多个从设备slave的数据没有实现同步传输,多个从设备slave的信号会出现不同步。The existing links of ACL and SCO/eSCO in the BR/EDR Bluetooth protocol do not support isochronous data. That is to say, in the point-to-multipoint piconet, the data sent by the master device to multiple slave devices is not synchronized, and the signals of multiple slave devices will be out of sync.
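For illustration only, the qualitative contrast above can be tabulated: keeping multiple slaves in sync would require a link that is both point-to-multipoint and isochronous, and the text notes that neither legacy link has both properties. The property names are assumptions for the sketch:

```python
# Qualitative properties of the two legacy BR/EDR link types, per the text.
LINKS = {
    "ACL":      {"topology": "point-to-multipoint", "delay_controlled": False, "isochronous": False},
    "SCO/eSCO": {"topology": "point-to-point",      "delay_controlled": True,  "isochronous": False},
}

def supports_synced_multipoint(link: str) -> bool:
    """A link would need both point-to-multipoint topology and isochronous
    support to keep multiple slaves synchronized."""
    props = LINKS[link]
    return props["topology"] == "point-to-multipoint" and props["isochronous"]
```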
鉴于现有的蓝牙协议存在的问题,本申请提供了一种基于低功耗蓝牙BLE的音频协议框架。In view of the problems of the existing Bluetooth protocol, this application provides an audio protocol framework based on Bluetooth low energy BLE.
现有的BLE协议支持点对多点的网络拓扑结构。而且，蓝牙利益工作组(special interest group,SIG)已经提议将等时数据(isochronous data)的支持增加到BLE中以允许BLE设备传输isochronous data。isochronous data是有时间受限(time-bounded)的。isochronous data是指流中的信息，该流中每个信息实体(information entity)都受限于它和之前的实体、之后的实体之间的时间关系。The existing BLE protocol supports a point-to-multipoint network topology. Moreover, the Bluetooth Special Interest Group (SIG) has proposed adding support for isochronous data to BLE, to allow BLE devices to transmit isochronous data. Isochronous data is time-bounded: it refers to information in a stream in which each information entity is bounded by its time relationship to the preceding and following entities.
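For illustration only, the "time-bounded" property described above can be sketched as a check over a stream: each information entity must keep a fixed time relationship to its predecessor. The interval and tolerance values are illustrative assumptions:

```python
def is_isochronous(timestamps_us, interval_us, tolerance_us):
    """True if every entity keeps the fixed time relationship to its
    predecessor, i.e. consecutive gaps stay within the tolerance."""
    return all(abs((b - a) - interval_us) <= tolerance_us
               for a, b in zip(timestamps_us, timestamps_us[1:]))
```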
但是,现有的BLE协议没有定义音频传输,BLE profile不包括音频profile(如A2DP、HFP)。也即是说,基于低功耗蓝牙的音频传输(voice-over-ble)没有标准化。本申请提供的基于BLE的音频协议框架将支持音频传输。However, the existing BLE protocol does not define audio transmission, and the BLE profile does not include audio profiles (such as A2DP, HFP). That is to say, the audio transmission (voice-over-ble) based on Bluetooth low energy is not standardized. The BLE-based audio protocol framework provided by this application will support audio transmission.
图3示出了本申请提供的基于BLE的音频协议框架。如图3所示,该协议框架可包括:LE物理层(LE physical layer)313、LE链路层(LE link layer)310、L2CAP层和应用(application)层308。LE物理层313和LE链路层310可以实现在控制器(controller)中,L2CAP层308可以实现在主机(Host)中。该协议框架还可包括实现于Host中的一些功能实体:多媒体音频功能实体302、话音功能实体303、背景声功能实体304、内容控制功能实体305、流控制功能实体306、流数据功能实体307。FIG. 3 shows a BLE-based audio protocol framework provided by this application. As shown in FIG. 3, the protocol framework may include: LE physical layer (LE physical layer) 313, LE link layer (LE link layer) 310, L2CAP layer and application (application) layer 308. The LE physical layer 313 and the LE link layer 310 may be implemented in a controller, and the L2CAP layer 308 may be implemented in a host. The protocol framework may further include some functional entities implemented in the Host: multimedia audio functional entity 302, voice functional entity 303, background sound functional entity 304, content control functional entity 305, flow control functional entity 306, and streaming data functional entity 307.
在Controller中:In the Controller:
(1)LE物理层313，可负责提供数据传输的物理通道(通常称为信道)。通常情况下，一个通信系统中存在几种不同类型的信道，如控制信道、数据信道、语音信道等等。蓝牙使用2.4GHz工业科学医疗(industrial scientific medical,ISM)频段。(1) The LE physical layer 313 may be responsible for providing physical channels for data transmission (commonly called channels). Typically, several different types of channels exist in a communication system, such as control channels, data channels, voice channels, and so on. Bluetooth uses the 2.4 GHz industrial, scientific, and medical (ISM) frequency band.
(2)LE链路层310,在物理层的基础上提供两个或多个设备之间、和物理无关的逻辑传输通道(也称作逻辑链路)。LE链路层310可用于控制设备的射频状态,设备将处于五种状态之一:等待、广告、扫描、初始化、连接。广播设备不需要建立连接就可以发送数据,而扫描设备接收广播设备发送的数据;发起连接的设备通过发送连接请求来回应广播设备,如果广播设备接受连接请求,那么广播设备与发起连接的设备将会进入连接状态。发起连接的设备称为主设备(master),接受连接请求的设备称为从设备(slave)。(2) The LE link layer 310 provides, on the basis of the physical layer, a logical transmission channel (also called a logical link) between two or more devices that is independent of the physical layer. The LE link layer 310 can be used to control the radio frequency state of the device; the device will be in one of five states: standby, advertising, scanning, initiating, or connection. An advertising device can send data without establishing a connection, and a scanning device receives the data sent by the advertising device. A device that initiates a connection responds to the advertising device by sending a connection request; if the advertising device accepts the connection request, the advertising device and the initiating device will enter the connection state. The device that initiates the connection is called the master, and the device that accepts the connection request is called the slave.
LE链路层310可包括LE ACL链路311和LE等时(ISO)链路312。LE ACL链路311可用于传输设备间的控制消息,如流控制消息、内容控制消息、音量控制消息。LE ISO链路312可用于传输设备间的等时数据(如流数据本身)。The LE link layer 310 may include a LE ACL link 311 and a LE isochronous (ISO) link 312. The LE ACL link 311 can be used to transmit control messages between devices, such as flow control messages, content control messages, and volume control messages. The LE ISO link 312 can be used to transmit isochronous data between devices (such as streaming data itself).
在Host中:In Host:
(1)L2CAP层308,可负责管理逻辑层提供的逻辑链路。基于L2CAP,不同的上层应用可共享同一个逻辑链路。类似TCP/IP中端口(port)的概念。(1) The L2CAP layer 308 can be responsible for managing the logical links provided by the logical layer. Based on L2CAP, different upper-layer applications can share the same logical link. Similar to the concept of port in TCP/IP.
(2)多媒体音频功能实体302、话音功能实体303、背景声功能实体304可以是依据业务场景设置的功能实体,可用于将应用层的音频应用划分为多媒体音频、话音、背景声等几种音频业务。不限于多媒体音频、话音、背景声等,音频业务也可以分为:话音,音乐,游戏,视频,语音助手,邮件提示音,告警,提示音,导航音等。(2) The multimedia audio functional entity 302, the voice functional entity 303, and the background sound functional entity 304 may be functional entities set according to business scenarios, and may be used to divide the audio applications of the application layer into several audio services such as multimedia audio, voice, and background sound. Not limited to multimedia audio, voice, and background sound, audio services can also be divided into: voice, music, games, video, voice assistant, e-mail alert tone, alarm, prompt tone, navigation sound, etc.
(3)内容控制(content control)功能实体305可负责封装各种音频业务的内容控制(如上一首、下一首等)消息,并通过LE ACL链路311传输封装后的内容控制消息。(3) The content control functional entity 305 may be responsible for encapsulating content control messages (e.g., previous song, next song, etc.) of various audio services, and transmitting the encapsulated content control messages through the LE ACL link 311.
(4)流控制(stream control)功能实体306可负责参数协商,如服务质量(quality of service,QoS)参数的协商,编码(Codec)参数的协商,等时数据传输通道参数(下面简称ISO参数)的协商,以及负责等时数据传输通道的建立。(4) The stream control functional entity 306 may be responsible for parameter negotiation, such as negotiation of quality of service (QoS) parameters, codec parameters, and isochronous data transmission channel parameters (hereinafter referred to as ISO parameters), as well as for the establishment of isochronous data transmission channels.
(5)流数据功能实体307可负责通过等时数据传输通道传输音频数据。等时数据传输通道(isochronous data path)可以是基于连接的等时音频流(connected isochronous stream,CIS)。CIS可用于在连接状态的设备间传输等时数据。等时数据传输通道最终承载于LE ISO 312。流控制功能实体306还可用于在创建等时数据传输通道之前进行参数协商,然后基于协商好的参数创建等时数据传输通道。(5) The streaming data functional entity 307 may be responsible for transmitting audio data through the isochronous data transmission channel. The isochronous data path may be a connected isochronous stream (CIS). A CIS can be used to transfer isochronous data between connected devices. The isochronous data transmission channel is ultimately carried on LE ISO 312. The stream control functional entity 306 may also be used to negotiate parameters before creating an isochronous data transmission channel, and then create the channel based on the negotiated parameters.
如图3所示,在本申请提供的基于BLE的音频协议框架中,来自应用层的音频数据最后通过LE ISO链路312传输。As shown in FIG. 3, in the BLE-based audio protocol framework provided by the present application, audio data from the application layer is finally transmitted through the LE ISO link 312.
另外,图3所示的音频协议框架还可以包括主机控制器接口(Host Controller Interface,HCI)。Host和Controller就是通过HCI来进行通讯的,通信的介质就是HCI命令。Host可以实现于设备的应用处理器中(application processor,AP),Controller可以实现于该设备的蓝牙芯片中。可选的,在小型设备中,Host和Controller可以实现于同一个处理器或控制器中,此时HCI是可选的。In addition, the audio protocol framework shown in FIG. 3 may also include a host controller interface (Host Controller Interface, HCI). Host and Controller communicate through HCI, and the communication medium is HCI commands. The Host can be implemented in the application processor (AP) of the device, and the Controller can be implemented in the Bluetooth chip of the device. Optionally, in small devices, Host and Controller can be implemented in the same processor or controller, in which case HCI is optional.
如图4所示,本申请提供的基于BLE的音频协议框架可以将各种音频应用(如A2DP、HFP等)的数据都分为三种类型:As shown in Figure 4, the BLE-based audio protocol framework provided by this application can divide the data of various audio applications (such as A2DP, HFP, etc.) into three types:
内容控制:通话控制(如接听、挂断等)、播放控制(如上一首、下一首等)、音量控制(如增大音量、减小音量)等信令。Content control: call control (such as answering, hanging up, etc.), playback control (such as previous song, next song, etc.), volume control (such as increasing the volume, decreasing the volume) and other signaling.
流控制:创建流(create stream)、终止流(terminate stream)等用于流管理的信令。流可用于承载音频数据。Flow control: Create stream (create stream), terminate stream (terminate stream) and other signaling for stream management. Streams can be used to carry audio data.
流数据:音频数据本身。Streaming data: audio data itself.
其中,内容控制、流控制的数据通过LE ACL 311链路传输;流数据通过LE ISO 312链路传输。Among them, content control and stream control data are transmitted through the LE ACL link 311, and streaming data is transmitted through the LE ISO link 312.
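上述三种数据类型到链路的映射可以用如下示意代码草描(仅为示意性草图,其中的类名、函数名为本文假设,并非协议定义):The mapping from the three data types above to the links can be sketched as follows (an illustrative sketch only; the class and function names here are assumptions of this description, not defined by any specification):

```python
from enum import Enum

class AudioDataType(Enum):
    CONTENT_CONTROL = "content_control"  # 内容控制:通话/播放/音量控制信令
    STREAM_CONTROL = "stream_control"    # 流控制:create stream / terminate stream 等
    STREAM_DATA = "stream_data"          # 流数据:音频数据本身

# 统一传输框架:内容控制、流控制走 LE ACL 链路,流数据走 LE ISO 链路
# Unified transport framework: control data on LE ACL, stream data on LE ISO
LINK_MAP = {
    AudioDataType.CONTENT_CONTROL: "LE ACL",
    AudioDataType.STREAM_CONTROL: "LE ACL",
    AudioDataType.STREAM_DATA: "LE ISO",
}

def select_link(data_type: AudioDataType) -> str:
    """返回承载该类型数据的链路 (return the link that carries this data type)."""
    return LINK_MAP[data_type]
```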
现有的蓝牙协议中,不同profile对应不同的协议栈,对应不同的传输框架。例如A2DP、HFP各自对应不同传输框架,A2DP的流数据(如立体声音乐数据)最后通过ACL链路传输,因为ACL链路的传输效率高,HFP的流数据(如话音数据)最后通过SCO/eSCO链路传输,因为SCO/eSCO链路的传输时延可控。与现有的蓝牙协议不同的是,本申请提供的基于BLE的音频协议框架提供统一的音频传输框架,无论哪种音频profile的数据都可以分为内容控制、流控制、流数据三种类型,并基于BLE框架在LE ACL链路上传输内容控制、流控制这两种类型的数据,在LE ISO链路上传输流数据。In the existing Bluetooth protocol, different profiles correspond to different protocol stacks and different transmission frameworks. For example, A2DP and HFP each correspond to a different transmission framework: A2DP stream data (such as stereo music data) is ultimately transmitted over the ACL link because the ACL link offers high transmission efficiency, while HFP stream data (such as voice data) is ultimately transmitted over the SCO/eSCO link because the transmission delay of the SCO/eSCO link is controllable. Unlike the existing Bluetooth protocol, the BLE-based audio protocol framework provided by this application offers a unified audio transmission framework: the data of any audio profile can be divided into three types, namely content control, stream control, and stream data, and, based on the BLE framework, the content control and stream control data are transmitted over the LE ACL link while the stream data is transmitted over the LE ISO link.
可以看出,本申请提供的基于BLE的音频协议框架支持音频传输,可统一服务级连接,将所有上层音频profile以业务场景划分为多媒体音频、话音、背景声等音频业务。各个音频业务的流控制(包括QoS参数的协商、codec参数的协商、ISO参数的协商以及等时数据传输通道的建立)统一由协议栈中的流控制(stream control)功能实体负责。各个音频业务的内容控制(如接听、挂断等通话控制、如上一首、下一首等播放控制、如音量控制等)统一由协议栈中的内容控制(content control)功能实体负责。流控制消息和内容控制消息都通过LE ACL链路传输,流数据通过LE ISO链路传输。这样能够实现不同的音频profile都可以基于同一传输框架,兼容性更好。It can be seen that the BLE-based audio protocol framework provided by this application supports audio transmission and can unify service-level connections, dividing all upper-layer audio profiles by business scenario into audio services such as multimedia audio, voice, and background sound. The stream control of each audio service (including negotiation of QoS parameters, negotiation of codec parameters, negotiation of ISO parameters, and establishment of isochronous data transmission channels) is uniformly handled by the stream control functional entity in the protocol stack. The content control of each audio service (such as call control like answering and hanging up, playback control like previous song and next song, and volume control) is uniformly handled by the content control functional entity in the protocol stack. Both stream control messages and content control messages are transmitted through the LE ACL link, and stream data is transmitted through the LE ISO link. In this way, different audio profiles can be based on the same transmission framework, giving better compatibility.
本申请提供的音频协议框架基于BLE是指基于扩展后的BLE传输框架(transport architecture)。扩展后的BLE传输框架与现有的BLE传输框架相比,主要在于增加了:等时信道(isochronous channel)特性。That the audio protocol framework provided in this application is based on BLE means that it is based on an extended BLE transport architecture. Compared with the existing BLE transport architecture, the extended one mainly adds the isochronous channel feature.
图5示出了扩展的BLE传输框架实体(transport architecture entities)。其中,阴影标记的实体为新增的逻辑子层,这些新增的逻辑子层共同提供等时信道特性。如图5所示:Figure 5 shows the extended BLE transport architecture entities. Among them, the shaded entities are newly added logical sublayers, and these newly added logical sublayers jointly provide the isochronous channel feature. As shown in Figure 5:
(1)LE物理传输(LE physical transport)层:空口数据传输,通过数据包结构,通过编码、调制方案等标记。LE physical transport中承载了所有来自上层的信息。(1) LE physical transport layer: air interface data transmission, characterized by packet structure, coding, modulation scheme, and the like. The LE physical transport carries all information from the upper layers.
(2)LE物理信道(LE physical channel)层:蓝牙设备之间传输的空口物理通道,通过时域、频域,空域标记的物理层承载通道,包括跳频、时隙、事件、接入码的概念。对于上层,一个LE physical channel可以承载不同的LE逻辑传输(LE logical transport);对于下层,一个LE physical channel总是映射其唯一对应的LE physical transport。(2) LE physical channel layer: the air-interface physical channel for transmission between Bluetooth devices; a physical-layer bearer channel characterized in the time, frequency, and space domains, covering the concepts of frequency hopping, time slots, events, and access codes. For the upper layer, one LE physical channel can carry different LE logical transports; for the lower layer, one LE physical channel always maps to its unique corresponding LE physical transport.
LE physical channel层可包括四种物理信道实体:LE匹克网物理信道(LE piconet physical channel)、LE广播物理信道(LE advertising physical channel)、LE周期物理信道(LE periodic physical channel)、LE等时物理信道(LE isochronous physical channel)。即在现有的LE physical channel的基础上增加了LE isochronous physical channel。The LE physical channel layer may include four physical channel entities: the LE piconet physical channel, the LE advertising physical channel, the LE periodic physical channel, and the LE isochronous physical channel. That is, the LE isochronous physical channel is added on the basis of the existing LE physical channels.
其中,LE piconet physical channel可用于在处于连接状态的设备之间的通信,该通信采用跳频技术。LE advertising physical channel可用于在设备间进行无连接的广播通信,这些广播通信可用于设备的发现、连接操作,也可用于无连接的数据传输。LE periodic physical channel可用于设备间的周期性的广播通信。LE isochronous physical channel可用于传输isochronous数据,与上层的LE isochronous physical link存在一一映射关系。Among them, the LE piconet physical channel can be used for communication between connected devices, and this communication uses frequency hopping. The LE advertising physical channel can be used for connectionless broadcast communication between devices; such broadcast communication can be used for device discovery and connection operations, and also for connectionless data transmission. The LE periodic physical channel can be used for periodic broadcast communication between devices. The LE isochronous physical channel can be used to transmit isochronous data and has a one-to-one mapping relationship with the upper-layer LE isochronous physical link.
(3)LE物理链路(LE physical link)层:蓝牙设备之间的基带连接,它是一个虚拟的概念,在空口数据包中没有相应的字段表达。对于上层LE logical transport,一个LE logical transport只会映射到一个LE physical link。对于下层,一个LE physical link可以通过不同的LE physical channel承载,但一次传输总是映射到一个LE physical channel上。(3) LE physical link layer: the baseband connection between Bluetooth devices. It is a virtual concept with no corresponding field in the air-interface data packet. For the upper layer, one LE logical transport maps to only one LE physical link. For the lower layer, one LE physical link can be carried over different LE physical channels, but one transmission always maps to one LE physical channel.
LE physical link是对LE physical channel的进一步封装。LE physical link层可包括四种物理链路实体:LE激活物理链路(LE active physical link)、LE广播物理链路(LE advertising physical link)、LE周期物理链路(LE periodic physical link)、LE等时物理链路(LE isochronous physical link)。即在现有的LE physical link的基础上增加了LE isochronous physical link。The LE physical link is a further encapsulation of the LE physical channel. The LE physical link layer may include four physical link entities: the LE active physical link, the LE advertising physical link, the LE periodic physical link, and the LE isochronous physical link. That is, the LE isochronous physical link is added on the basis of the existing LE physical links.
其中,LE isochronous physical link可用于传输isochronous数据,承载上层的LE-BIS,LE-CIS,与LE physical channel存在一一映射关系。Among them, the LE isochronous physical link can be used to transmit isochronous data, carries the upper-layer LE-BIS and LE-CIS, and has a one-to-one mapping relationship with the LE physical channel.
(4)LE逻辑传输(LE logical transport)层:可负责流量控制,ACK/NACK确认机制,重传机制,调度机制。这些信息一般承载在数据包头中。对于上层,一个LE logical transport可以对应多个LE logical link。对于下层,一个LE logical transport只映射到一个对应的LE physical link。(4) LE logical transport layer: it can be responsible for flow control, the ACK/NACK acknowledgment mechanism, the retransmission mechanism, and the scheduling mechanism. This information is generally carried in the data packet header. For the upper layer, one LE logical transport can correspond to multiple LE logical links. For the lower layer, one LE logical transport maps to only one corresponding LE physical link.
LE logical transport层可包括以下逻辑传输实体:LE-ACL,ADVB,PADVB、LE-BIS,LE-CIS。即在现有的LE logical transport的基础上增加了LE-BIS,LE-CIS。The LE logical transport layer may include the following logical transport entities: LE-ACL, ADVB, PADVB, LE-BIS, LE-CIS. That is, LE-BIS and LE-CIS are added to the existing LE logical transport.
其中,LE-CIS为Master和一个指定的Slave之间的点对点logical transport,每个CIS支持一个LE-S的logical link。CIS可以是对称速率,也可以是非对称速率。LE-CIS建立于LE-ACL之上。LE-BIS为点对多点Logical transport,每个BIS支持一个LE-S的logical link。LE-BIS建立于PADVB之上。这里,BIS是指广播等时流(broadcast isochronous stream),CIS是指基于连接的等时流(connected isochronous stream)。Among them, LE-CIS is a point-to-point logical transport between the Master and a designated slave, and each CIS supports a LE-S logical link. CIS can be a symmetric rate or an asymmetric rate. LE-CIS is built on LE-ACL. LE-BIS is a point-to-multipoint Logical transport, and each BIS supports one LE-S logical link. LE-BIS is built on PADVB. Here, BIS refers to broadcast isochronous stream, and CIS refers to connected isochronous stream.
图3中的LE ISO链路312可以是LE CIS,图3中的LE ACL链路311可以是LE ACL。The LE ISO link 312 in FIG. 3 may be LE CIS, and the LE ACL link 311 in FIG. 3 may be LE ACL.
(5)LE逻辑链路(LE logical link)层:可用于支持不同的应用数据传输。对于下层,每个LE logical link可能映射到多个LE logical transport上,但一次传输只选择映射到一个LE logical transport上。(5) LE logical link layer: it can be used to support different application data transmissions. For the lower layer, each LE logical link may be mapped onto multiple LE logical transports, but one transmission is mapped onto only one selected LE logical transport.
LE logical link层可包括以下逻辑链路实体:LE-C,LE-U,ADVB-C,ADVB-U、低功耗广播控制(low energy broadcast control,LEB-C),低功耗流(low energy stream,LE-S)。这里,“-C”表示控制(control),“-U”表示用户(user)。即在现有的LE logical link的基础上增加了LEB-C,LE-S。其中LEB-C用于承载BIS的控制信息,LE-S用于承载等时数据流。The LE logical link layer may include the following logical link entities: LE-C, LE-U, ADVB-C, ADVB-U, low energy broadcast control (LEB-C), and low energy stream (LE-S). Here, "-C" means control and "-U" means user. That is, LEB-C and LE-S are added on the basis of the existing LE logical links. LEB-C is used to carry BIS control information, and LE-S is used to carry isochronous data streams.
基于图3所示的音频协议框架,本申请提供了一种音频通讯方法。Based on the audio protocol framework shown in FIG. 3, the present application provides an audio communication method.
其主要发明思想可包括:以音频业务为粒度为各个音频业务确定等时数据传输通道的参数。第一音频设备(如手机、媒体播放器)和第二音频设备(如耳机)之间可以以音频业务为粒度进行参数协商,如QoS参数的协商、codec参数的协商、ISO参数的协商,然后基于协商好的参数创建等时数据传输通道(isochronous data path)。等时数据传输通道可用于传输流数据。The main inventive idea may include: determining the parameters of the isochronous data transmission channel for each audio service at the granularity of the audio service. The first audio device (such as a mobile phone or media player) and the second audio device (such as a headset) can perform parameter negotiation at the granularity of the audio service, such as negotiation of QoS parameters, codec parameters, and ISO parameters, and then create an isochronous data path based on the negotiated parameters. The isochronous data transmission channel can be used to transmit stream data.
本申请中,音频业务可以是指能够提供音频功能(如音频播放、音频录制等)的服务(service)或应用(application)。音频业务会涉及音频相关的数据传输业务,例如音频数据本身、用于控制音频数据播放的内容控制消息、用于创建等时数据传输通道的流控制消息等的传输。In this application, the audio service may refer to a service or application capable of providing audio functions (such as audio playback, audio recording, etc.). Audio services may involve audio-related data transmission services, such as the transmission of audio data itself, content control messages used to control the playback of audio data, and flow control messages used to create isochronous data transmission channels.
与现有技术不同的是,本申请提供的音频通讯方法不再以profile为粒度进行参数协商,而是以音频业务为粒度进行参数协商。当业务场景发生切换时,基于重新协商的参数配置等时数据传输通道即可,无需涉及不同profile协议栈之间的切换,更加高效,避免出现明显的停顿。Different from the prior art, the audio communication method provided in this application no longer uses the profile as a granularity for parameter negotiation, but uses the audio service as a granularity for parameter negotiation. When the business scenario is switched, the isochronous data transmission channel can be configured based on the renegotiated parameters, without the need to switch between different profile protocol stacks, which is more efficient and avoids obvious pauses.
例如,从音乐业务场景切换到电话业务场景(听音乐时接听电话)。现有的蓝牙协议中,该切换涉及到从A2DP(音乐业务)切换到HFP(电话业务)。A2DP、HFP各自对应不同传输框架,A2DP的流数据(如立体声音乐数据)最后通过ACL链路传输,HFP的流数据(如话音数据)最后通过SCO/eSCO链路传输。因此,现有的蓝牙协议中,该切换会导致底层传输框架的切换,耗时严重。然而,本申请提供的基于BLE的音频协议框架提供了统一的音频传输框架,无论是哪种音频业务场景,流数据都会通过LE ISO链路传输,业务场景的切换不涉及传输框架的切换,效率更高。For example, consider switching from a music business scenario to a phone call business scenario (answering a call while listening to music). In the existing Bluetooth protocol, this switch involves switching from A2DP (music service) to HFP (telephone service). A2DP and HFP each correspond to a different transmission framework: A2DP stream data (such as stereo music data) is ultimately transmitted over the ACL link, while HFP stream data (such as voice data) is ultimately transmitted over the SCO/eSCO link. Therefore, in the existing Bluetooth protocol, this switch causes a switch of the underlying transmission framework, which is time-consuming. In contrast, the BLE-based audio protocol framework provided by this application offers a unified audio transmission framework: in any audio business scenario, stream data is transmitted over the LE ISO link, and switching business scenarios does not involve switching the transmission framework, which is more efficient.
本申请中,QoS参数可包括时延、丢包率、吞吐量等表示传输质量的参数。Codec参数可包括编码方式、压缩率等影响音频质量的参数。ISO参数可包括CIS的ID、CIS数量、master到slave传输的最大数据大小、slave到master传输的最大数据大小、master到slave的数据包在链路层传输最长时间间隔、slave到master的数据包在链路层传输最长时间间隔等等。In this application, QoS parameters may include parameters representing transmission quality such as delay, packet loss rate, and throughput. Codec parameters may include parameters that affect audio quality, such as encoding scheme and compression ratio. ISO parameters may include the CIS ID, the number of CISes, the maximum data size transmitted from master to slave, the maximum data size transmitted from slave to master, the maximum time interval for link-layer transmission of a data packet from master to slave, the maximum time interval for link-layer transmission of a data packet from slave to master, and so on.
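一个音频业务对应的一套参数可以用如下数据结构草描(假设性示意,字段名为本文举例,并非协议定义的参数名):A set of parameters corresponding to one audio service can be sketched with the following data structures (a hypothetical illustration; the field names are examples made up here, not parameter names defined by any specification):

```python
from dataclasses import dataclass

@dataclass
class QosParams:                 # QoS 参数:表示传输质量 (transmission quality)
    latency_ms: int              # 时延
    packet_loss_rate: float      # 丢包率
    throughput_kbps: int         # 吞吐量

@dataclass
class CodecParams:               # Codec 参数:影响音频质量 (audio quality)
    codec: str                   # 编码方式
    compression_ratio: float     # 压缩率

@dataclass
class IsoParams:                 # ISO 参数:等时数据传输通道参数
    cis_id: int                  # CIS 的 ID
    cis_count: int               # CIS 数量
    max_sdu_m_to_s: int          # master 到 slave 传输的最大数据大小
    max_sdu_s_to_m: int          # slave 到 master 传输的最大数据大小
    max_latency_m_to_s_ms: int   # master 到 slave 的数据包在链路层传输最长时间间隔
    max_latency_s_to_m_ms: int   # slave 到 master 的数据包在链路层传输最长时间间隔

@dataclass
class ServiceParamSet:           # 一个音频业务对应的一套参数
    service: str                 # 音频业务,如 "music"、"call"、"game"
    qos: QosParams
    codec: CodecParams
    iso: IsoParams
```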
图6示出了本申请提供的音频通讯方法的总体流程。图6中,第一音频设备(如手机、媒体播放器)和第二音频设备(如耳机)之间建立了BLE连接。下面展开:FIG. 6 shows the overall flow of the audio communication method provided by this application. In FIG. 6, a BLE connection is established between the first audio device (such as a mobile phone or media player) and the second audio device (such as a headset). The details are described below:
1.建立ACL链路(S601)1. Establish ACL link (S601)
S601,第一音频设备(如手机、媒体播放器)和第二音频设备(如耳机)之间建立ACL链路。S601, an ACL link is established between the first audio device (such as a mobile phone and a media player) and the second audio device (such as a headset).
具体的,ACL链路可用于承载流控制消息,如流控制过程(S602-S604)中的参数协商、参数配置、等时传输通道建立所涉及的流控制消息。Specifically, the ACL link can be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels in the flow control process (S602-S604).
具体的,ACL链路还可用于承载内容控制消息,如内容控制过程(S605-S607)中的通话控制(如接听、挂断等)消息、播放控制(如上一首、下一首等)消息、音量控制(如增大音量、减小音量)消息等。Specifically, the ACL link can also be used to carry content control messages, such as call control (such as answering, hanging up, etc.) messages during the content control process (S605-S607), and playback control (such as previous, next, etc.) messages. , Volume control (such as increasing the volume, decreasing the volume) message, etc.
2.流控制过程(S602-S604)2. Flow control process (S602-S604)
S602,针对特定音频业务,第一音频设备和第二音频设备可以通过ACL链路进行参数协商。S602. For a specific audio service, the first audio device and the second audio device may perform parameter negotiation through the ACL link.
具体的,该参数协商可以以音频业务为粒度进行。不同的音频业务都需要进行参数协商,如QoS参数的协商、codec参数的协商、ISO参数的协商。一个音频业务可以对应一套参数,一套参数可包括以下一项或多项:QoS参数、codec参数、ISO参数。Specifically, the parameter negotiation can be conducted with the audio service as the granularity. Different audio services require parameter negotiation, such as QoS parameter negotiation, codec parameter negotiation, and ISO parameter negotiation. An audio service can correspond to a set of parameters, and a set of parameters can include one or more of the following: QoS parameters, codec parameters, and ISO parameters.
具体的,该参数协商的具体流程可包括:Specifically, the specific process of the parameter negotiation may include:
步骤a.第一音频设备可以通过ACL链路向第二音频设备发送参数协商消息,该消息可携带特定音频业务对应的一套参数。这一套参数可以是根据该特定音频业务从数据库中查询得到的,该数据库中可存储有多种音频业务各自对应的参数。Step a. The first audio device may send a parameter negotiation message to the second audio device through the ACL link, and the message may carry a set of parameters corresponding to the specific audio service. This set of parameters may be obtained by querying a database according to the specific audio service, and the database may store the parameters corresponding to each of a variety of audio services.
步骤b.第二音频设备通过ACL链路接收到第一音频设备发送的参数协商消息。如果第二音频设备同意该消息中携带的参数,则向第一音频设备返回确认消息;如果第二音频设备不同意或部分同意参数协商消息中携带的参数,则向第一音频设备返回继续协商消息,以和第一音频设备继续进行参数协商。Step b. The second audio device receives, through the ACL link, the parameter negotiation message sent by the first audio device. If the second audio device agrees with the parameters carried in the message, it returns a confirmation message to the first audio device; if the second audio device disagrees or only partially agrees with the parameters carried in the parameter negotiation message, it returns a continue-negotiation message to the first audio device, so as to continue parameter negotiation with the first audio device.
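上述步骤a、步骤b的参数协商交互可以草描如下(假设性示意,数据库内容、消息字段与函数名均为本文举例,并非协议定义):The parameter negotiation exchange of steps a and b above can be sketched as follows (a hypothetical illustration; the database contents, message fields, and function names are examples made up here, not defined by any specification):

```python
# 数据库:存储多种音频业务各自对应的参数
# Database storing the parameter set for each audio service
PARAM_DB = {
    "music": {"latency_ms": 100, "codec": "LC3"},
    "call":  {"latency_ms": 20,  "codec": "LC3"},
}

def propose_params(service: str) -> dict:
    """步骤a:第一音频设备根据特定音频业务从数据库查询参数,构造参数协商消息.
    Step a: the first device looks up the service's parameters and proposes them."""
    return {"type": "param_negotiation", "params": PARAM_DB[service]}

def handle_proposal(msg: dict, acceptable: dict) -> dict:
    """步骤b:第二音频设备同意则返回确认消息,否则返回继续协商消息(携带己方参数).
    Step b: the second device confirms, or counter-proposes its own parameters."""
    if msg["params"] == acceptable:
        return {"type": "confirm"}
    return {"type": "continue_negotiation", "params": acceptable}
```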
可选的,在数据库中,音频业务对应的参数可以是综合考虑该音频业务所涉及的各种音频切换情况或混音情况而设计的。该参数可适用该业务所涉及的这些情况。例如,游戏业务下可能出现游戏背景声音和麦克说话声音发生切换或叠加的情况(游戏时打开麦克说话)。游戏背景声音和麦克说话声音的codec参数、QoS参数可能不同。针对游戏业务,可设计适用这种情况的参数,这样当用户在游戏时打开麦克说话,也不会影响听觉体验。Optionally, in the database, the parameters corresponding to an audio service may be designed by comprehensively considering the various audio switching or mixing situations involved in that audio service, so that the parameters are applicable to those situations. For example, in the game business, the game background sound and the microphone voice may be switched or superimposed (the microphone is turned on while gaming). The codec parameters and QoS parameters of the game background sound and the microphone voice may differ. For the game business, parameters suitable for this situation can be designed, so that when the user turns on the microphone to speak while gaming, the listening experience is not affected.
这里,该特定音频业务可以是电话、游戏、语音助手、音乐等等。Here, the specific audio service may be phone, game, voice assistant, music and so on.
S603,第一音频设备可以通过ACL链路向第二音频设备进行参数配置。该参数配置是指向第二音频设备配置协商所确定的参数。S603. The first audio device may perform parameter configuration for the second audio device through the ACL link. The parameter configuration refers to configuring, for the second audio device, the parameters determined through negotiation.
具体实现中,第一音频设备可以通过ACL链路向第二音频设备发送参数配置消息,该参数配置消息中可携带第一音频设备和第二音频设备双方已协商确定的参数。相应的,在通过ACL链路接收到该参数配置消息后,第二音频设备便可以依据双方已协商确定的参数执行流数据的接收或发送。In a specific implementation, the first audio device may send a parameter configuration message to the second audio device through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both the first audio device and the second audio device. Correspondingly, after receiving the parameter configuration message through the ACL link, the second audio device can perform the reception or transmission of the streaming data according to the parameters that have been negotiated and determined by both parties.
S604,基于协商所确定的参数,第一音频设备和第二音频设备之间建立等时数据传输通道。S604. Based on the parameters determined through negotiation, establish an isochronous data transmission channel between the first audio device and the second audio device.
具体的,等时数据传输通道可用于传输流数据(即音频数据)。后续内容中会展开说明第一音频设备和第二音频设备之间建立等时数据传输通道的具体流程,这里先不赘述。Specifically, the isochronous data transmission channel can be used to transmit streaming data (ie, audio data). The subsequent content will expand and explain the specific process of establishing an isochronous data transmission channel between the first audio device and the second audio device, which will not be repeated here.
在上述流控制过程中,第一音频设备可以是音频源(audio source),第二音频设备可以是音频接收方(audio sink)。即可由音频源发起参数协商、等时数据通道创建。在上述流控制过程中,第一音频设备也可以是音频接收方(audio sink),第二音频设备可以是音频源(audio source)。即可由音频接收方发起参数协商、等时数据通道创建。In the above flow control process, the first audio device may be an audio source, and the second audio device may be an audio sink; that is, the audio source initiates parameter negotiation and isochronous data channel creation. Alternatively, in the above flow control process, the first audio device may be an audio sink and the second audio device may be an audio source; that is, the audio sink initiates parameter negotiation and isochronous data channel creation.
3.内容控制过程(S605-S607)3. Content control process (S605-S607)
S605-S607,第一音频设备和第二音频设备之间可以基于ACL链路交互内容控制消息。S605-S607, the content control message can be exchanged between the first audio device and the second audio device based on the ACL link.
S605,第一音频设备和第二音频设备之间可以基于ACL链路交互通话控制消息,如接听、挂断等控制消息。S605: The first audio device and the second audio device may exchange call control messages based on the ACL link, such as answering and hanging up control messages.
在一种方式中,第一音频设备可以通过ACL链路向第二音频设备(如耳机)发送通话控制(如接听、挂断等)消息,可实现在第一音频设备(如手机)侧进行通话控制。这种方式对应的典型应用场景可以是:在使用蓝牙耳机打电话时,用户点击手机上的挂断按钮来挂断电话。在另一种方式中,第二音频设备(如耳机)可以通过ACL链路向第一音频设备(如手机)发送通话控制(如接听、挂断等)消息,可实现在第二音频设备(如耳机)侧进行通话控制。这种方式对应的典型应用场景可以是:在使用蓝牙耳机打电话时,用户按下蓝牙耳机上的挂断按钮来挂断电话。不限于按下挂断按钮,用户还可以通过其他操作在蓝牙耳机上挂断电话,如敲击耳机。In one manner, the first audio device may send a call control message (such as answer or hang up) to the second audio device (such as a headset) through the ACL link, so that call control can be performed on the first audio device (such as a mobile phone) side. A typical application scenario corresponding to this manner may be: when making a call using a Bluetooth headset, the user taps the hang-up button on the mobile phone to hang up the call. In another manner, the second audio device (such as a headset) may send a call control message (such as answer or hang up) to the first audio device (such as a mobile phone) through the ACL link, so that call control can be performed on the second audio device (such as a headset) side. A typical application scenario corresponding to this manner may be: when making a call using a Bluetooth headset, the user presses the hang-up button on the Bluetooth headset to hang up the call. Not limited to pressing the hang-up button, the user can also hang up the call through other operations on the Bluetooth headset, such as tapping the headset.
S606,第一音频设备和第二音频设备之间可以基于ACL链路交互播放控制消息,如上一首、下一首等控制消息。S606. The first audio device and the second audio device may exchange playback control messages, such as previous song and next song control messages, based on the ACL link.
在一种方式中,第一音频设备(如手机)可以通过ACL链路向第二音频设备(如耳机)发送播放控制(如上一首、下一首等)消息,可实现在第一音频设备(如手机)侧进行播放控制。这种方式对应的典型应用场景可以是:在使用蓝牙耳机听音乐时,用户点击手机上的上一首/下一首按钮来切换歌曲。在另一种方式中,第二音频设备(如耳机)可以通过ACL链路向第一音频设备(如手机)发送播放控制(如上一首、下一首等)消息,可实现在第二音频设备(如耳机)侧进行播放控制。这种方式对应的典型应用场景可以是:在使用蓝牙耳机听音乐时,用户按下蓝牙耳机上的上一首/下一首按钮来切换歌曲。In one manner, the first audio device (such as a mobile phone) may send a playback control message (such as previous song or next song) to the second audio device (such as a headset) through the ACL link, so that playback control can be performed on the first audio device (such as a mobile phone) side. A typical application scenario corresponding to this manner may be: when listening to music using a Bluetooth headset, the user taps the previous/next button on the mobile phone to switch songs. In another manner, the second audio device (such as a headset) may send a playback control message (such as previous song or next song) to the first audio device (such as a mobile phone) through the ACL link, so that playback control can be performed on the second audio device (such as a headset) side. A typical application scenario corresponding to this manner may be: when listening to music using a Bluetooth headset, the user presses the previous/next button on the Bluetooth headset to switch songs.
S607,第一音频设备和第二音频设备之间可以基于ACL链路交互音量控制消息,如增大音量、减小音量等控制消息。S607. The first audio device and the second audio device may exchange volume control messages, such as volume-up and volume-down control messages, based on the ACL link.
在一种方式中,第一音频设备(如手机)可以通过ACL链路向第二音频设备(如耳机)发送音量控制(如增大音量、减小音量等)消息,可实现在第一音频设备(如手机)侧进行音量控制。这种方式对应的典型应用场景可以是:在使用蓝牙耳机听音乐时,用户点击手机上的音量调节按钮来调节音量。在另一种方式中,第二音频设备(如耳机)可以通过ACL链路向第一音频设备(如手机)发送音量控制(如增大音量、减小音量等)消息,可实现在第二音频设备(如耳机)侧进行音量控制。这种方式对应的典型应用场景可以是:在使用蓝牙耳机听音乐时,用户按下蓝牙耳机上的音量调节按钮来调节音量。In one manner, the first audio device (such as a mobile phone) may send a volume control message (such as volume up or volume down) to the second audio device (such as a headset) through the ACL link, so that volume control can be performed on the first audio device (such as a mobile phone) side. A typical application scenario corresponding to this manner may be: when listening to music using a Bluetooth headset, the user taps the volume adjustment button on the mobile phone to adjust the volume. In another manner, the second audio device (such as a headset) may send a volume control message (such as volume up or volume down) to the first audio device (such as a mobile phone) through the ACL link, so that volume control can be performed on the second audio device (such as a headset) side. A typical application scenario corresponding to this manner may be: when listening to music using a Bluetooth headset, the user presses the volume adjustment button on the Bluetooth headset to adjust the volume.
在上述内容控制过程中，第一音频设备可以是音频源（audio source），第二音频设备可以是音频接收方（audio sink），即可以在音频源侧进行内容控制。在上述内容控制过程中，第一音频设备也可以是音频接收方（audio sink），第二音频设备可以是音频源（audio source），即可以在音频接收方侧进行内容控制。In the above content control process, the first audio device may be the audio source and the second audio device may be the audio sink; that is, content control can be performed on the audio source side. Alternatively, the first audio device may be the audio sink and the second audio device may be the audio source; that is, content control can be performed on the audio sink side.
4.流数据传输过程(S608)4. Streaming data transmission process (S608)
S608,第一音频设备和第二音频设备之间可以基于已创建的等时数据传输通道交互流数据。该流数据是前述特定音频业务的流数据。已创建的等时数据传输通道对应前述特定音频业务。S608: The first audio device and the second audio device may exchange streaming data based on the created isochronous data transmission channel. The stream data is the stream data of the aforementioned specific audio service. The created isochronous data transmission channel corresponds to the aforementioned specific audio service.
在一种方式中,第一音频设备(如手机)可以通过等时数据传输通道向第二音频设备(如耳机)发送流数据。此时第一音频设备(如手机)的角色是音频源(audio source),第二音频设备(如耳机)的角色是音频接收方(audio sink)。这样第二音频设备(如耳机)可以将接收到的音频数据转换成声音。这种方式对应的典型应用场景可以是:用户佩戴蓝牙耳机收听手机上播放的音乐。In one way, the first audio device (such as a mobile phone) can send streaming data to the second audio device (such as a headset) through an isochronous data transmission channel. At this time, the role of the first audio device (such as a mobile phone) is an audio source (audio source), and the role of the second audio device (such as a headset) is an audio receiver (audio sink). In this way, the second audio device (such as a headset) can convert the received audio data into sound. A typical application scenario corresponding to this method may be: the user wears a Bluetooth headset to listen to music played on the mobile phone.
在另一种方式中，第二音频设备（如耳机）可以通过等时数据传输通道向第一音频设备（如手机）发送流数据。此时第二音频设备（如耳机）的角色是音频源（audio source），第一音频设备（如手机）的角色是音频接收方（audio sink）。这样第一音频设备（如手机）可以对接收到的音频数据进行处理，如将该音频数据转换成声音、向其他电子设备发送该音频数据（语音通话场景下）、存储该音频数据（录音场景下）。这种方式对应的典型应用场景可以是：用户佩戴蓝牙耳机（配置有受话器/麦克风等声音采集器件）打电话，此时蓝牙耳机采集用户说话的声音，并将其转换成音频数据传输给手机。In another approach, the second audio device (such as a headset) can send stream data to the first audio device (such as a mobile phone) through the isochronous data transmission channel. In this case, the role of the second audio device (such as a headset) is the audio source, and the role of the first audio device (such as a mobile phone) is the audio sink. In this way, the first audio device (such as a mobile phone) can process the received audio data, for example, converting the audio data into sound, sending the audio data to another electronic device (in a voice call scenario), or storing the audio data (in a recording scenario). A typical application scenario for this approach is: the user wears a Bluetooth headset (equipped with a sound collection device such as a receiver/microphone) to make a call; the Bluetooth headset collects the user's voice and converts it into audio data that is transmitted to the mobile phone.
本申请对上述内容控制过程和上述流数据传输过程的执行顺序不做限制,上述流数据传输过程可以在上述内容控制过程之前被执行,这两个过程也可以同时被执行。This application does not limit the execution order of the content control process and the streaming data transmission process. The streaming data transmission process may be executed before the content control process, and the two processes may also be executed at the same time.
图6所示方法中的第一音频设备、第二音频设备可以实现图3所示的基于BLE的音频协议框架。此时，图6中的流控制过程（S602-S604）可以由图3中的流控制功能实体306来执行；图6中的内容控制过程（S605-S607）可以由图3中的内容控制功能实体305来执行。图6方法中提及的ACL链路可以是图3中的LE ACL 311，图6方法中提及的等时数据传输通道可以是图3中的LE ISO 312。The first audio device and the second audio device in the method shown in FIG. 6 may implement the BLE-based audio protocol framework shown in FIG. 3. In this case, the flow control process (S602-S604) in FIG. 6 may be performed by the flow control function entity 306 in FIG. 3, and the content control process (S605-S607) in FIG. 6 may be performed by the content control function entity 305 in FIG. 3. The ACL link mentioned in the method of FIG. 6 may be the LE ACL 311 in FIG. 3, and the isochronous data transmission channel mentioned in the method of FIG. 6 may be the LE ISO 312 in FIG. 3.
当音频业务场景发生切换时，以从音乐业务切换到电话业务（听音乐时接听电话）为例，第一音频设备和第二音频设备之间可以重新进行参数协商，协商确定新音频业务（如电话业务）对应的新参数，然后基于新参数创建新的等时数据传输通道。该新的等时数据传输通道可用于传输该新音频业务（如电话业务）的流数据。各种业务的等时数据传输通道都是基于LE。这样，业务场景的切换不涉及传输框架的切换，效率更高，不会出现明显的停顿。When the audio service scenario switches, taking switching from the music service to the telephone service (answering a call while listening to music) as an example, the first audio device and the second audio device may renegotiate parameters to determine new parameters corresponding to the new audio service (such as the telephone service), and then create a new isochronous data transmission channel based on the new parameters. The new isochronous data transmission channel can be used to transmit the stream data of the new audio service (such as the telephone service). The isochronous data transmission channels of all services are based on LE. In this way, switching the service scenario does not involve switching the transmission framework, which is more efficient and avoids a noticeable pause.
可选的，当音频业务场景发生切换时，也可以利用新音频业务（如电话业务）对应的新参数重新配置旧音频业务（如音乐业务）对应的等时数据传输通道，而不需要基于新参数重新创建新的等时数据传输通道。这样，可以进一步提高效率。Optionally, when the audio service scenario switches, the isochronous data transmission channel corresponding to the old audio service (such as the music service) may also be reconfigured with the new parameters corresponding to the new audio service (such as the telephone service), without creating a new isochronous data transmission channel based on the new parameters. In this way, efficiency can be further improved.
本申请提供的音频通讯方法以音频业务为粒度进行参数协商及等时数据传输通道的建立，各个音频业务的流控制消息和内容控制消息都通过LE ACL链路传输，流数据通过LE ISO链路传输，统一了各个业务的传输框架，而不是以profile为粒度来为不同profile应用适配不同的传输框架。可以看出，本申请提供的音频通讯方法可以适用更多音频业务，兼容性更好。而且，当业务场景发生切换时，基于重新协商的参数配置等时数据传输通道即可，无需涉及不同profile协议栈之间的切换，无需切换传输框架，更加高效，可避免出现明显的停顿。The audio communication method provided in this application performs parameter negotiation and isochronous data transmission channel establishment at the granularity of audio services. The flow control messages and content control messages of every audio service are transmitted over the LE ACL link, and the stream data is transmitted over the LE ISO link, which unifies the transmission framework of all services, instead of adapting a different transmission framework for each profile application at the granularity of profiles. It can be seen that the audio communication method provided in this application can serve more audio services and has better compatibility. Moreover, when the service scenario switches, it suffices to configure the isochronous data transmission channel based on the renegotiated parameters; no switching between different profile protocol stacks and no switching of the transmission framework is needed, which is more efficient and avoids a noticeable pause.
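The unified transport framework described above can be summarized with the following Python sketch. It is purely illustrative: the class, method, and field names (AudioTransport, send_control, sdu_interval_us, and so on) are hypothetical and do not correspond to any Bluetooth specification API. The point it illustrates is that all control messages of every service share one ACL path, while a service switch only reconfigures the ISO channel parameters.

```python
# Illustrative sketch: one LE ACL link carries all control messages,
# while stream data uses an LE ISO channel configured per audio service.
class AudioTransport:
    def __init__(self):
        self.acl_log = []          # messages sent over the LE ACL link
        self.iso_params = None     # current ISO channel parameters

    def send_control(self, kind, msg):
        # Flow-control and content-control messages both use the ACL link.
        assert kind in ("flow", "content")
        self.acl_log.append((kind, msg))

    def configure_iso(self, params):
        # Switching audio services only reconfigures the ISO channel
        # parameters; the transport framework itself is unchanged.
        self.iso_params = params

    def send_stream(self, frame):
        if self.iso_params is None:
            raise RuntimeError("ISO channel not configured")
        return ("iso", self.iso_params["service"], frame)


t = AudioTransport()
t.send_control("flow", "parameter negotiation")
t.configure_iso({"service": "music", "sdu_interval_us": 10000})
pkt_music = t.send_stream(b"frame0")
# Service switch (music -> call): renegotiate, reconfigure, keep the framework.
t.configure_iso({"service": "call", "sdu_interval_us": 7500})
pkt_call = t.send_stream(b"frame1")
```

Note how the switch from the music service to the call service changes only the ISO parameters; the control path and the transport framework remain unchanged, which is the basis of the efficiency claim above.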
下面描述图6所示的方法流程中提及的等时数据传输通道的创建过程。The creation process of the isochronous data transmission channel mentioned in the method flow shown in FIG. 6 is described below.
图7示出了等时数据传输通道的创建过程。该等时数据传输通道是基于连接的等时数据通道，即第一音频设备和第二音频设备已经处于连接（Connection）状态。第一音频设备、第二音频设备都具有主机Host和链路层LL（controller中），Host和LL之间通过HCI通信。如图7所示，该过程可包括：FIG. 7 shows the creation process of the isochronous data transmission channel. This isochronous data transmission channel is a connection-based isochronous data channel, that is, the first audio device and the second audio device are already in the connected (Connection) state. Both the first audio device and the second audio device have a host (Host) and a link layer (LL, in the controller), and the Host and the LL communicate through the HCI. As shown in FIG. 7, the process may include:
S701-S702,Host A(第一音频设备的Host)通过HCI指令设置基于连接的等时组(connected isochronous group,CIG)的相关参数。S701-S702, Host A (Host of the first audio device) sets related parameters of the connected isochronous group (CIG) based on the HCI instruction.
其中，CIG相关参数可包括之前已协商确定的参数（QoS参数、codec参数、ISO参数），用于创建等时数据传输通道。The CIG-related parameters may include the previously negotiated parameters (QoS parameters, codec parameters, and ISO parameters), which are used to create the isochronous data transmission channel.
具体的，Host A可以通过HCI向LL A（第一音频设备的LL）发送HCI指令“LE Set CIG parameters”。相应的，LL A可以返回响应消息“Command Complete”。Specifically, Host A can send the HCI command "LE Set CIG parameters" to LL A (the LL of the first audio device) through the HCI. Correspondingly, LL A can return the response message "Command Complete".
S703-S704,Host A通过HCI指令发起创建CIS。S703-S704, Host A initiates the creation of CIS through the HCI instruction.
具体的，Host A可以通过HCI向LL A（第一音频设备的LL）发送HCI指令“LE Create CIS”。相应的，LL A可以返回响应消息“HCI Command Status”。Specifically, Host A can send the HCI command "LE Create CIS" to LL A (the LL of the first audio device) through the HCI. Correspondingly, LL A can return the response message "HCI Command Status".
S705，LL A可以通过空口请求消息LL_CIS_REQ向LL B（第二音频设备的LL）请求创建CIS流。S705: LL A can request LL B (the LL of the second audio device) to create a CIS stream through the air-interface request message LL_CIS_REQ.
S706-S708，LL B通过HCI指令通知到Host B（第二音频设备的Host），Host B同意第一音频设备的CIS建链流程。S706-S708: LL B notifies Host B (the Host of the second audio device) through the HCI, and Host B agrees to the CIS link establishment procedure initiated by the first audio device.
S709，LL B通过空口响应消息LL_CIS_RSP回复LL A，同意CIS建链流程。S709: LL B replies to LL A with the air-interface response message LL_CIS_RSP, agreeing to the CIS link establishment procedure.
S710，LL A通过空口通知消息LL_CIS_IND通知LL B完成建链。S710: LL A notifies LL B through the air-interface indication message LL_CIS_IND to complete the link establishment.
S711，LL B通知到Host B，CIS建链完成。S711: LL B notifies Host B that the CIS link establishment is complete.
S712，LL A通过HCI指令通知Host A，CIS建链完成。S712: LL A notifies Host A through the HCI that the CIS link establishment is complete.
至此,第一音频设备和第二音频设备之间的CIS建立完成。基于已建立的CIS,第一音频设备、第二音频设备可以创建等时数据传输通道。At this point, the CIS establishment between the first audio device and the second audio device is completed. Based on the established CIS, the first audio device and the second audio device can create an isochronous data transmission channel.
可以看出,等时数据传输通道承载于CIS。CIS是基于连接的流,可用于承载等时数据。It can be seen that the isochronous data transmission channel is carried on the CIS. CIS is a connection-based flow that can be used to carry isochronous data.
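The CIS establishment flow of S701-S712 can be replayed as a simple ordered trace. The following Python sketch only records the message sequence shown in FIG. 7; the HCI command and LL PDU names follow the figure, and this is a simulation of the ordering, not a real HCI or link-layer implementation:

```python
# Illustrative simulation of the CIS establishment flow (S701-S712).
def establish_cis():
    trace = []
    # Host A configures the CIG parameters over HCI (S701-S702).
    trace += ["HCI: LE Set CIG Parameters", "HCI: Command Complete"]
    # Host A asks LL A to create the CIS (S703-S704).
    trace += ["HCI: LE Create CIS", "HCI: Command Status"]
    # Air-interface handshake between LL A and LL B (S705, S709, S710).
    trace += ["LL: LL_CIS_REQ", "LL: LL_CIS_RSP", "LL: LL_CIS_IND"]
    # Both controllers report completion to their hosts (S711-S712).
    trace += ["HCI: CIS Established (Host B)", "HCI: CIS Established (Host A)"]
    return trace
```

The invariant worth noting is the ordering: the request, response, and indication PDUs always occur in that order on the air interface, after the CIG parameters have been set.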
本申请中，等时数据传输通道的创建时间（即何时执行图7所示流程）可以包括多种选择。在一种选择中，可以在音频业务到来时创建等时数据传输通道。例如，当用户打开游戏应用程序时（游戏背景声同时开始播放），手机的应用层会向Host发送游戏背景声业务创建通知，根据该通知手机会向蓝牙耳机发起图7所示流程。在另一种选择中，可以先建立一个默认的等时数据传输通道，该默认的等时数据传输通道可以基于默认CIG参数创建。这样当音频业务到来时可以直接使用该默认的等时数据传输通道承载流数据，响应速度更快。在再一种选择中，可以先建立多个虚拟等时数据传输通道，这多个虚拟等时数据传输通道可以对应多套不同的CIG参数，可适用多种音频业务。虚拟等时数据传输通道是指空口不发生数据交互的等时数据传输通道。这样，当音频业务到来时，可以选择该音频业务对应的虚拟等时数据传输通道，第一音频设备和第二音频设备之间触发握手并开始通信。In this application, the creation time of the isochronous data transmission channel (that is, when the process shown in FIG. 7 is executed) may include multiple options. In one option, the isochronous data transmission channel can be created when an audio service arrives. For example, when the user opens a game application (and the game background sound starts to play at the same time), the application layer of the mobile phone sends a game-background-sound service creation notification to the Host, and according to the notification the mobile phone initiates the process shown in FIG. 7 toward the Bluetooth headset. In another option, a default isochronous data transmission channel may be established first; this default channel may be created based on default CIG parameters. In this way, when an audio service arrives, the default isochronous data transmission channel can directly carry the stream data, and the response is faster. In yet another option, multiple virtual isochronous data transmission channels may be established first; these virtual channels may correspond to multiple different sets of CIG parameters and may serve multiple audio services. A virtual isochronous data transmission channel is an isochronous data transmission channel on which no data interaction occurs over the air interface. In this way, when an audio service arrives, the virtual isochronous data transmission channel corresponding to that audio service can be selected, and a handshake is triggered between the first audio device and the second audio device to start communication.
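The three channel-creation timing options described above can be contrasted with a small illustrative Python function. The strategy names and returned fields are hypothetical labels for the three options, not terms from any specification:

```python
# Illustrative sketch of the three channel-creation strategies described
# above: on-demand creation, one pre-built default channel, and a pool of
# pre-negotiated virtual channels.
def get_channel(strategy, service, pool=None):
    if strategy == "on_demand":
        # Create a channel for this service only when the service arrives.
        return {"service": service, "created": "now"}
    if strategy == "default":
        # Reuse one default channel (built with default CIG parameters)
        # for whatever service arrives; fastest response.
        return {"service": service, "created": "at_setup", "params": "default"}
    if strategy == "virtual_pool":
        # Pick the pre-negotiated virtual channel matching the service;
        # only the air-interface handshake remains at this point.
        return pool[service]
    raise ValueError(strategy)
```

The virtual-pool option trades setup-time negotiation for the fastest per-service start: each service's parameters are agreed in advance, and only the handshake is triggered when the service arrives.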
图6所示的方法流程描述了第一音频设备和第二音频设备所形成的基于连接的点对点的音频通讯方法。第一音频设备可以是图1所示的无线音频系统100中的第一音频设备101,第二音频设备可以是图1所示的无线音频系统100中的第二音频设备102。无线音频系统100中的第一音频设备101与第三音频设备103之间也可以采用图6所示的音频通讯方法进行通讯。The method flow shown in FIG. 6 describes a connection-based point-to-point audio communication method formed by the first audio device and the second audio device. The first audio device may be the first audio device 101 in the wireless audio system 100 shown in FIG. 1, and the second audio device may be the second audio device 102 in the wireless audio system 100 shown in FIG. 1. The first audio device 101 and the third audio device 103 in the wireless audio system 100 may also use the audio communication method shown in FIG. 6 for communication.
在一种情况下,第一音频设备101可以和第二音频设备102、第三音频设备103这二者都进行通讯。第一音频设备101可以实现为手机,第二音频设备102和第三音频设备103可以分别实现为左耳机、右耳机。这种情况对应一种典型的应用场景:左耳机、右耳机一起使用。这种典型的应用场景可以称为“双耳一起使用”的场景。In one case, the first audio device 101 can communicate with both the second audio device 102 and the third audio device 103. The first audio device 101 may be implemented as a mobile phone, and the second audio device 102 and the third audio device 103 may be implemented as left and right headphones, respectively. This situation corresponds to a typical application scenario: the left earphone and the right earphone are used together. This typical application scenario can be referred to as a “binaural use together” scenario.
图8示出了在“双耳一起使用”场景下的音频通讯方法。下面展开:FIG. 8 shows the audio communication method in the scenario of “using both ears together”. Expand below:
1.建立BLE连接(S801-S803)1. Establish a BLE connection (S801-S803)
S801,左耳机和右耳机建立BLE连接。S801, the left earphone and the right earphone establish a BLE connection.
S802,左耳机和手机建立BLE连接。S802, a BLE connection is established between the left earphone and the mobile phone.
S803,右耳机和手机建立BLE连接。S803, the BLE connection is established between the right earphone and the mobile phone.
本申请对上述S801-S803的执行顺序不做限制，它们之间的先后顺序可以改变。后续内容中展开说明BLE连接建立过程，这里先不赘述。This application does not limit the execution order of the above S801-S803, and the order among them may be changed. The BLE connection establishment process is described in detail later, and is not repeated here.
2.建立ACL链路(S804)2. Establish ACL link (S804)
S804,手机和左耳机、右耳机分别建立ACL链路。S804, the mobile phone establishes ACL links with the left earphone and the right earphone respectively.
具体的,ACL链路的建立由链路层LL负责。手机的LL与左耳机的LL之间可以建立ACL链路,手机的LL与右耳机的LL之间可以建立ACL链路。Specifically, the establishment of the ACL link is the responsibility of the link layer LL. An ACL link can be established between the LL of the mobile phone and the LL of the left earphone, and an ACL link can be established between the LL of the mobile phone and the LL of the right earphone.
具体的,ACL链路可用于承载流控制消息,如流控制过程(S805-S813)中的参数协商、参数配置、等时传输通道建立所涉及的流控制消息。ACL链路还可用于承载内容控制消息,如内容控制过程(S814-S819)中的通话控制(如接听、挂断等)消息、播放控制(如上一首、下一首等)消息、音量控制(如增大音量、减小音量)消息等。Specifically, the ACL link can be used to carry flow control messages, such as flow control messages involved in parameter negotiation, parameter configuration, and establishment of isochronous transmission channels in the flow control process (S805-S813). The ACL link can also be used to carry content control messages, such as call control (such as answering, hanging up, etc.) messages during the content control process (S814-S819), playback control (such as previous, next, etc.) messages, and volume control (Such as increasing the volume and decreasing the volume) messages, etc.
3.流控制过程(S805-S813)3. Flow control process (S805-S813)
S805-S806,在音频业务到来时,手机可以确定该音频业务对应的参数(QoS参数、codec参数、ISO参数等)。S805-S806, when the audio service arrives, the mobile phone can determine the parameters corresponding to the audio service (QoS parameters, codec parameters, ISO parameters, etc.).
具体的,手机的Host先接收到来自应用层的音频业务建立通知,然后确定该音频业务对应的参数。音频业务建立通知可以是手机在检测到用户打开音频相关的应用程序(例如游戏)时产生的。Specifically, the host of the mobile phone first receives an audio service establishment notification from the application layer, and then determines the parameters corresponding to the audio service. The audio service establishment notification may be generated when the mobile phone detects that the user opens an audio-related application (such as a game).
具体的,该音频业务对应的参数可以是手机根据该音频业务的业务类型从数据库中查询得到的,该数据库中可存储有多种音频业务各自对应的参数。Specifically, the parameter corresponding to the audio service may be obtained by the mobile phone querying from a database according to the service type of the audio service, and the database may store parameters corresponding to various audio services.
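The database lookup described in S805-S806 can be sketched as follows. The parameter values and the codec name are hypothetical placeholders; the text only states that the database stores a parameter set (QoS, codec, ISO) for each audio service type:

```python
# Hypothetical parameter database keyed by audio service type.
# All concrete values here are illustrative, not from the text.
SERVICE_PARAMS = {
    "music": {"qos": {"latency_ms": 100}, "codec": "codecA",
              "iso": {"sdu_interval_us": 10000}},
    "call":  {"qos": {"latency_ms": 20},  "codec": "codecB",
              "iso": {"sdu_interval_us": 7500}},
    "game":  {"qos": {"latency_ms": 50},  "codec": "codecA",
              "iso": {"sdu_interval_us": 10000}},
}

def params_for(service_type):
    # The Host looks up the parameter set for the service announced by the
    # application layer, then negotiates it with the peer device.
    try:
        return SERVICE_PARAMS[service_type]
    except KeyError:
        raise ValueError(f"no parameter set for service {service_type!r}")
```

On a service switch (for example, music to call), the same lookup yields the new parameter set that is then renegotiated and used to configure the isochronous channel.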
S807,手机的Host通过HCI将该音频业务对应的参数发送给手机的LL。S807, the host of the mobile phone sends the parameter corresponding to the audio service to the LL of the mobile phone through HCI.
S808,针对该音频业务,手机的LL和左耳机的LL可以通过已建立的ACL链路进行参数协商。参数协商的具体流程可参考图6方法实施例中的相关内容,这里不再赘述。S808, for the audio service, the LL of the mobile phone and the LL of the left earphone can perform parameter negotiation through the established ACL link. For the specific process of parameter negotiation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here.
S809，在参数协商完成后，手机可以通过已建立的ACL链路向左耳机进行参数配置。该参数配置是指向左耳机配置已协商好的参数。S809: After the parameter negotiation is completed, the mobile phone can configure parameters on the left earphone over the established ACL link. This parameter configuration means delivering the negotiated parameters to the left earphone.
具体实现中,手机的LL可以通过ACL链路向左耳机的LL发送参数配置消息,该参数配置消息中可携带双方已协商确定的参数。相应的,在通过ACL链路接收到该参数配置消息后,左耳机便可以依据双方已协商确定的参数执行流数据的接收或发送。In a specific implementation, the LL of the mobile phone may send a parameter configuration message to the LL of the left earphone through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both parties. Correspondingly, after receiving the parameter configuration message through the ACL link, the left earphone can perform the reception or transmission of the streaming data according to the parameters that have been negotiated by both parties.
S810，基于协商所确定的参数，手机和左耳机之间可以建立等时数据传输通道。等时数据传输通道的具体创建流程可参考图6方法实施例中的相关内容，这里不再赘述。S810: Based on the parameters determined through negotiation, an isochronous data transmission channel can be established between the mobile phone and the left earphone. For the specific creation process of the isochronous data transmission channel, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here again.
S811,针对该音频业务,手机的LL和右耳机的LL可以通过已建立的ACL链路进行参数协商。参数协商的具体流程可参考图6方法实施例中的相关内容,这里不再赘述。S811. For the audio service, the LL of the mobile phone and the LL of the right earphone can perform parameter negotiation through the established ACL link. For the specific process of parameter negotiation, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here.
S812，在参数协商完成后，手机可以通过已建立的ACL链路向右耳机进行参数配置。该参数配置是指向右耳机配置已协商好的参数。S812: After the parameter negotiation is completed, the mobile phone can configure parameters on the right earphone over the established ACL link. This parameter configuration means delivering the negotiated parameters to the right earphone.
具体实现中,手机的LL可以通过ACL链路向右耳机的LL发送参数配置消息,该参数配置消息中可携带双方已协商确定的参数。相应的,在通过ACL链路接收到该参数配置消息后,右耳机便可以依据双方已协商确定的参数执行流数据的接收或发送。In a specific implementation, the LL of the mobile phone may send a parameter configuration message to the LL of the right earphone through the ACL link, and the parameter configuration message may carry parameters that have been negotiated and determined by both parties. Correspondingly, after receiving the parameter configuration message through the ACL link, the right earphone can perform the reception or transmission of the streaming data according to the parameters that have been negotiated by both parties.
S813，基于协商所确定的参数，手机和右耳机之间可以建立等时数据传输通道。等时数据传输通道的具体创建流程可参考图6方法实施例中的相关内容，这里不再赘述。S813: Based on the parameters determined through negotiation, an isochronous data transmission channel can be established between the mobile phone and the right earphone. For the specific creation process of the isochronous data transmission channel, reference may be made to the related content in the method embodiment of FIG. 6, and details are not described here again.
可以看出，为了保持双耳参数（QoS参数、codec参数、ISO参数等）的整体性，参数可以以双耳为单位确定，再逐一协商和配置。It can be seen that, in order to maintain the integrity of the binaural parameters (QoS parameters, codec parameters, ISO parameters, etc.), the parameters can be determined in units of the earphone pair and then negotiated and configured one earphone at a time.
S808-S810描述了手机对左耳机进行参数协商、配置以及创建等时数据传输通道的过程，S811-S813描述了手机对右耳机进行参数协商、配置以及创建等时数据传输通道的过程。本申请对这两个过程的执行顺序不做限定，这两个过程可以同时进行。S808-S810 describe the process in which the mobile phone performs parameter negotiation and configuration with the left earphone and creates an isochronous data transmission channel, and S811-S813 describe the same process for the right earphone. This application does not limit the execution order of the two processes, and the two processes can be performed simultaneously.
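To keep the binaural parameter set consistent, the parameters can be determined once for the earphone pair and then delivered to each earphone, as in this illustrative sketch (function and field names are hypothetical):

```python
# Sketch: determine one parameter set for the earbud pair, then configure
# each earbud with that identical set, preserving binaural integrity.
def configure_pair(params, earbuds):
    configured = {}
    for bud in earbuds:          # e.g. ["left", "right"]; order may vary
        # Each earbud receives a copy of the same pair-level parameter set.
        configured[bud] = dict(params)
    return configured
```

Because both earbuds are configured from one pair-level decision, a later renegotiation (for a service switch) also happens once per pair rather than independently per ear.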
4.内容控制过程(S814-S819)4. Content control process (S814-S819)
S814-S816,手机和左耳机之间可以基于ACL链路交互内容控制消息。具体实现可参考图6方法实施例中的相关内容,这里不再赘述。S814-S816, the content control message can be exchanged between the mobile phone and the left earphone based on the ACL link. For specific implementation, reference may be made to related content in the method embodiment of FIG. 6, and details are not described herein again.
S817-S819,手机和右耳机之间可以基于ACL链路交互内容控制消息。具体实现可参考图6方法实施例中的相关内容,这里不再赘述。S817-S819, the content control message can be exchanged between the mobile phone and the right earphone based on the ACL link. For specific implementation, reference may be made to related content in the method embodiment of FIG. 6, and details are not described herein again.
当手机向左耳机、右耳机传输内容控制消息时，手机到左耳机、右耳机的内容控制消息传输需要达到同步，以实现左耳机、右耳机同步控制，避免用户感觉到听觉混乱。为了达到这一目的，左耳机、右耳机可以在同步收到内容控制消息后再生效内容控制。When the mobile phone transmits a content control message to the left earphone and the right earphone, the transmission of the content control message from the mobile phone to the two earphones needs to be synchronized, so that the left earphone and the right earphone are controlled in sync and the user does not perceive an auditory mismatch. To achieve this, the left earphone and the right earphone may apply the content control only after both have received the content control message.
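The synchronized-effect rule described above (apply a content control message only after both earphones have received it) can be sketched as follows. For illustration, the "received" state of both earphones is kept in one object; in a real system each earphone would learn of its peer's reception over the earphone-to-earphone link, a detail the text does not specify:

```python
# Sketch: a content control message takes effect only once both earbuds
# have received the same message, so both ears change state in sync.
class SyncController:
    def __init__(self, buds=("left", "right")):
        self.received = {b: None for b in buds}   # last message per earbud
        self.applied = []                         # messages actually applied

    def deliver(self, bud, msg):
        self.received[bud] = msg
        # Apply only when every earbud holds the same pending message.
        if all(m == msg for m in self.received.values()):
            self.applied.append(msg)
            self.received = {b: None for b in self.received}
```

Delivering "volume up" to only one earphone changes nothing audible; the control takes effect on both ears once the second earphone has it.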
5.流数据传输过程(S820-S821)5. Streaming data transmission process (S820-S821)
S820,手机和左耳机之间可以基于已创建的等时数据传输通道交互流数据。该流数据是前述音频业务的流数据。S820, the mobile phone and the left earphone can exchange streaming data based on the created isochronous data transmission channel. The stream data is the stream data of the aforementioned audio service.
S821,手机和右耳机之间可以基于已创建的等时数据传输通道交互流数据。该流数据是前述音频业务的流数据。S821, the mobile phone and the right earphone can exchange streaming data based on the created isochronous data transmission channel. The stream data is the stream data of the aforementioned audio service.
可以看出，图8所示的音频通讯方法可以适用“双耳一起使用”场景，而且手机与单耳（左耳机或右耳机）之间的音频通讯方法可参考图6所示方法，可适用更多音频业务，兼容性更好。当业务场景发生切换时，基于重新协商的参数配置手机与耳机之间的等时数据传输通道即可，无需涉及不同profile协议栈之间的切换，无需切换传输框架，更加高效，可避免出现明显的停顿。It can be seen that the audio communication method shown in FIG. 8 is applicable to the "both ears used together" scenario, and the audio communication between the mobile phone and a single earphone (the left or the right earphone) can follow the method shown in FIG. 6, so the method can serve more audio services with better compatibility. When the service scenario switches, it suffices to configure the isochronous data transmission channel between the mobile phone and the earphone based on the renegotiated parameters; no switching between different profile protocol stacks and no switching of the transmission framework is needed, which is more efficient and avoids a noticeable pause.
下面结合图9说明BLE连接建立过程。如图9所示,BLE连接建立过程可包括:The following describes the BLE connection establishment process with reference to FIG. 9. As shown in Figure 9, the BLE connection establishment process may include:
1.左耳机和右耳机建立BLE连接(S902-S907)1. Establish a BLE connection between the left and right headphones (S902-S907)
S902-S903,左耳机的Host通过HCI指令发起BLE连接建立。具体的,左耳机的Host可以通过HCI向左耳机的LL发送HCI指令“LE create connection”。相应的,左耳机的LL可以返回响应消息“HCI Command Status”。S902-S903, the Host of the left earphone initiates the establishment of the BLE connection through the HCI instruction. Specifically, the host of the left earphone can send the HCI command "LE create connection" to the LL of the left earphone through HCI. Correspondingly, the LL of the left earphone can return the response message "HCI Command Status".
S904,右耳机发送广播。S904, the right earphone sends a broadcast.
S905，左耳机向右耳机发起连接。具体的，左耳机的LL向右耳机的LL发送连接请求。S905: The left earphone initiates a connection to the right earphone. Specifically, the LL of the left earphone sends a connection request to the LL of the right earphone.
S906,在接收到连接请求后,右耳机的LL通过HCI指令通知右耳机的Host,BLE连接建立完成。S906, after receiving the connection request, the LL of the right earphone notifies the Host of the right earphone through the HCI instruction, and the establishment of the BLE connection is completed.
S907,在发送连接请求后,左耳机的LL通过HCI指令通知左耳机的Host,BLE连接建立完成。S907: After sending the connection request, the LL of the left earphone notifies the Host of the left earphone through the HCI instruction that the establishment of the BLE connection is completed.
概括地说,S902-S907描述的BLE建立连接的过程是:右耳机发送广播,左耳机向右耳机发起连接。可选的,也可以由左耳机发送广播,右耳机向左耳机发起连接。In a nutshell, the process of establishing connection in BLE described in S902-S907 is: the right earphone sends a broadcast, and the left earphone initiates a connection to the right earphone. Alternatively, the left earphone can also send a broadcast, and the right earphone can initiate a connection to the left earphone.
2.左耳机和手机建立BLE连接(S909-S914)2. Establish a BLE connection between the left headset and the phone (S909-S914)
S909-S910，手机的Host通过HCI指令发起BLE连接建立。具体的，手机的Host可以通过HCI向手机的LL发送HCI指令“LE create connection”。相应的，手机的LL可以返回响应消息“HCI Command Status”。S909-S910: The Host of the mobile phone initiates BLE connection establishment through an HCI command. Specifically, the Host of the mobile phone can send the HCI command "LE create connection" to the LL of the mobile phone through the HCI. Correspondingly, the LL of the mobile phone can return the response message "HCI Command Status".
S911,左耳机发送广播。S911, the left earphone sends a broadcast.
S912，手机向左耳机发起连接。具体的，手机的LL向左耳机的LL发送连接请求。S912: The mobile phone initiates a connection to the left earphone. Specifically, the LL of the mobile phone sends a connection request to the LL of the left earphone.
S913,在接收到连接请求后,左耳机的LL通过HCI指令通知左耳机的Host,BLE连接建立完成。S913. After receiving the connection request, the LL of the left earphone notifies the Host of the left earphone through the HCI instruction that the establishment of the BLE connection is completed.
S914,在发送连接请求后,手机的LL通过HCI指令通知手机的Host,BLE连接建立完成。S914, after sending the connection request, the LL of the mobile phone notifies the Host of the mobile phone through the HCI instruction, and the establishment of the BLE connection is completed.
概括地说,S909-S914描述的BLE建立连接的过程是:左耳机发送广播,手机向左耳机发起连接。可选的,也可以由手机发送广播,左耳机向手机发起连接。In summary, the process of BLE connection establishment described in S909-S914 is: the left earphone sends a broadcast, and the mobile phone initiates the connection to the left earphone. Alternatively, the mobile phone can also send a broadcast, and the left earphone initiates a connection to the mobile phone.
3.右耳机和手机建立BLE连接(S916-S921)3. Establish a BLE connection between the right headset and the mobile phone (S916-S921)
S916-S917，手机的Host通过HCI指令发起BLE连接建立。具体的，手机的Host可以通过HCI向手机的LL发送HCI指令“LE create connection”。相应的，手机的LL可以返回响应消息“HCI Command Status”。S916-S917: The Host of the mobile phone initiates BLE connection establishment through an HCI command. Specifically, the Host of the mobile phone can send the HCI command "LE create connection" to the LL of the mobile phone through the HCI. Correspondingly, the LL of the mobile phone can return the response message "HCI Command Status".
S918，右耳机发送广播。S918: The right earphone sends a broadcast.
S919，手机向右耳机发起连接。具体的，手机的LL向右耳机的LL发送连接请求。S919: The mobile phone initiates a connection to the right earphone. Specifically, the LL of the mobile phone sends a connection request to the LL of the right earphone.
S920，在接收到连接请求后，右耳机的LL通过HCI指令通知右耳机的Host，BLE连接建立完成。S920: After receiving the connection request, the LL of the right earphone notifies the Host of the right earphone through the HCI that the BLE connection establishment is complete.
S921，在发送连接请求后，手机的LL通过HCI指令通知手机的Host，BLE连接建立完成。S921: After sending the connection request, the LL of the mobile phone notifies the Host of the mobile phone through the HCI that the BLE connection establishment is complete.
概括地说,S916-S921描述的BLE建立连接的过程是:右耳机发送广播,手机向右耳机发起连接。可选的,也可以由手机发送广播,右耳机向手机发起连接。In summary, the process of BLE connection establishment described in S916-S921 is: the right earphone sends a broadcast, and the mobile phone initiates the connection to the right earphone. Alternatively, the mobile phone can also send a broadcast, and the right headset can initiate a connection to the mobile phone.
下面介绍本申请实施例中提供的示例性电子设备200。电子设备200可以实现为上述实施例中提及的第一音频设备，可以是图1所示的无线音频系统100中的第一音频设备101。电子设备200通常可以用作音频源（audio source），如手机、平板电脑等，可以向其他音频接收设备（audio sink），如耳机、音箱等，传输音频数据，这样其他音频接收设备便可以将音频数据转换成声音。在一些场景下，电子设备200也可以用作音频接收方（audio sink），接收其他设备音频源（如具有麦克风的耳机）传输的音频数据（如耳机采集的用户说话的声音所转换成的音频数据）。The following describes an exemplary electronic device 200 provided in the embodiments of this application. The electronic device 200 may be implemented as the first audio device mentioned in the above embodiments, and may be the first audio device 101 in the wireless audio system 100 shown in FIG. 1. The electronic device 200 can generally serve as an audio source, such as a mobile phone or a tablet computer, and can transmit audio data to audio sink devices such as headsets and speakers, so that those audio sink devices can convert the audio data into sound. In some scenarios, the electronic device 200 can also serve as an audio sink, receiving audio data transmitted by the audio source of another device (such as a headset with a microphone), for example audio data converted from the user's speech collected by the headset.
图10A示出了电子设备200的结构示意图。FIG. 10A shows a schematic structural diagram of the electronic device 200.
电子设备200可以包括处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器 180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。The electronic device 200 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2 , Mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, headphone jack 170D, sensor module 180, key 190, motor 191, indicator 192, camera 193, display screen 194, and Subscriber identification module (SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, and ambient light Sensor 180L, bone conduction sensor 180M, etc.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 200. In other embodiments of this application, the electronic device 200 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or may be integrated into one or more processors. In some embodiments, the electronic device 200 may also include one or more processors 110.
The controller may be the nerve center and command center of the electronic device 200. The controller can generate operation control signals based on instruction opcodes and timing signals, to control instruction fetching and execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be fetched directly from this memory, which avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the electronic device 200.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
The I2C interface is a bidirectional synchronous serial bus comprising a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses. The processor 110 may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to implement the touch function of the electronic device 200.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface, to implement the function of answering calls through a Bluetooth headset.
The PCM interface may also be used for audio communication: sampling, quantizing, and encoding an analog signal. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
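The three PCM steps named above (sampling, quantization, encoding) can be sketched as follows. This is a minimal illustration of the general PCM principle, not the actual codec of the audio module 170; the sample rate, bit depth, and signed-integer code format are assumptions for the example.

```python
import math

def pcm_encode(signal, sample_rate_hz, duration_s, bits=8):
    """Sample a continuous-time signal, uniformly quantize each sample,
    and encode it as signed-integer PCM codes (illustrative only)."""
    levels = 2 ** bits
    step = 2.0 / levels  # full-scale range assumed to be [-1.0, +1.0)
    codes = []
    n_samples = int(sample_rate_hz * duration_s)
    for n in range(n_samples):
        x = signal(n / sample_rate_hz)           # 1. sampling
        x = max(-1.0, min(1.0 - step, x))        # clip to full scale
        codes.append(int(math.floor(x / step)))  # 2. quantize, 3. encode
    return codes

# Encode 1 ms of a 1 kHz sine sampled at 8 kHz with 8-bit PCM.
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
pcm = pcm_encode(tone, 8000, 0.001, bits=8)
```

With 8-bit quantization each sample becomes one of 256 levels; a real telephony PCM path would typically use companding (A-law/mu-law) rather than the uniform quantizer shown here.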
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial and parallel forms. In some embodiments, the UART interface is typically used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface, to implement the function of playing music through a Bluetooth headset.
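The serial/parallel conversion mentioned above can be illustrated with the common 8N1 asynchronous framing (one start bit, eight data bits LSB-first, one stop bit). This is a generic UART sketch under that assumed frame format, not a description of the specific UART between the processor 110 and the wireless communication module 160.

```python
def uart_frame(byte):
    """Serialize one parallel byte into a 10-bit asynchronous 8N1 frame:
    start bit (0), 8 data bits LSB-first, stop bit (1)."""
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

def uart_deframe(bits):
    """Recover the parallel byte from a received 10-bit 8N1 frame."""
    assert len(bits) == 10 and bits[0] == 0 and bits[9] == 1
    return sum(bit << i for i, bit in enumerate(bits[1:9]))

frame = uart_frame(0x5A)
```

Because the bus is asynchronous, the start bit lets the receiver resynchronize on every frame instead of relying on a shared clock line, which is the key difference from the synchronous I2C and I2S buses above.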
The MIPI interface may be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 200, and the processor 110 and the display screen 194 communicate through a DSI interface to implement the display function of the electronic device 200.
The GPIO interface may be configured through software. The GPIO interface may be configured as a control signal or as a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, or the like.
The USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the electronic device 200, or to transfer data between the electronic device 200 and peripheral devices. It may also be used to connect headphones and play audio through them. The interface may further be used to connect other electronic devices, such as AR devices.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment of the present invention are merely examples and do not constitute a structural limitation on the electronic device 200. In other embodiments, the electronic device 200 may use interface connection methods different from those in the foregoing embodiments, or a combination of multiple interface connection methods.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive the charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through a wireless charging coil of the electronic device 200. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142, the charging management module 140, and the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may be disposed in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The wireless communication function of the electronic device 200 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 200 may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 can provide wireless communication solutions applied to the electronic device 200, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify a signal modulated by the modem processor and convert it into electromagnetic waves radiated through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used to modulate a low-frequency baseband signal to be transmitted into a medium- or high-frequency signal. The demodulator is used to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B), or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 200, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves radiated through the antenna 2. For example, the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
In some embodiments, the antenna 1 of the electronic device 200 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 200 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 200 can implement a display function through the GPU, the display screen 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute instructions to generate or change display information.
The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 200 may include 1 or N display screens 194, where N is a positive integer greater than 1.
The electronic device 200 can implement a shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and the like.
The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens and light is transmitted through the lens to the camera's photosensitive element; the optical signal is converted into an electrical signal, which the photosensitive element passes to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization of image noise, brightness, and skin tone, and can optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is used to capture still images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 200 may include 1 or N cameras 193, where N is a positive integer greater than 1.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the electronic device 200 selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency-point energy.
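The frequency-point energy computation mentioned above boils down to evaluating the energy of one discrete Fourier transform (DFT) bin. The following is a minimal textbook sketch of that operation, not the DSP firmware of the device; the window length and the test tone are made up for illustration.

```python
import math

def dft_bin_energy(samples, k):
    """Energy |X[k]|^2 of DFT bin k of a real sample sequence
    (naive O(N) evaluation of a single bin)."""
    n = len(samples)
    re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(samples))
    im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(samples))
    return re * re + im * im

# A pure tone occupying exactly bin 2 of a 16-point window:
N = 16
tone = [math.cos(2 * math.pi * 2 * i / N) for i in range(N)]
peak = max(range(N // 2), key=lambda k: dft_bin_energy(tone, k))
```

Scanning the bins this way identifies the dominant frequency point; a production DSP would use an FFT (or the Goertzel algorithm for a single bin) instead of the naive sum shown here.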
The video codec is used to compress or decompress digital video. The electronic device 200 may support one or more video codecs. In this way, the electronic device 200 can play or record videos in multiple encoding formats, for example, moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons in the human brain, it processes input information quickly and can also continuously self-learn. Applications such as intelligent cognition of the electronic device 200, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example saving music, photos, and videos in the external memory card.
The internal memory 121 may be used to store one or more computer programs, which include instructions. By running the above instructions stored in the internal memory 121, the processor 110 enables the electronic device 200 to perform the data sharing methods provided in some embodiments of this application, as well as various functional applications and data processing. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, and may also store one or more applications (such as Gallery or Contacts). The data storage area may store data created during use of the electronic device 200 (such as photos and contacts). In addition, the internal memory 121 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS).
The electronic device 200 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
The speaker 170A, also called a "loudspeaker", is used to convert an audio electrical signal into a sound signal. The electronic device 200 can play music or take a hands-free call through the speaker 170A.
The receiver 170B, also called an "earpiece", is used to convert an audio electrical signal into a sound signal. When the electronic device 200 answers a call or a voice message, the voice can be heard by holding the receiver 170B close to the ear.
The microphone 170C, also called a "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 170C to input a sound signal into it. The electronic device 200 may be provided with at least one microphone 170C. In other embodiments, the electronic device 200 may be provided with two microphones 170C, which can implement a noise reduction function in addition to collecting sound signals. In still other embodiments, the electronic device 200 may be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, implement a directional recording function, and the like.
The headset jack 170D is used to connect a wired headset. The headset jack 170D may be the USB interface 130, or may be a 3.5 mm open mobile terminal platform (OMTP) standard interface or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors 180A, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates of conductive material. When a force acts on the pressure sensor 180A, the capacitance between the electrodes changes, and the electronic device 200 determines the intensity of the pressure from the change in capacitance. When a touch operation acts on the display screen 194, the electronic device 200 detects the intensity of the touch operation through the pressure sensor 180A, and may also calculate the touch position from the detection signal of the pressure sensor 180A. In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than a first pressure threshold acts on the Messages application icon, an instruction to view the message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the Messages application icon, an instruction to create a new message is executed.
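The threshold-based dispatch described in the example above can be sketched as follows. The function name, icon identifiers, instruction strings, and the numeric value of the first pressure threshold are all hypothetical, chosen only to illustrate the "same position, different intensity, different instruction" rule.

```python
def dispatch_touch(icon, intensity, first_pressure_threshold=0.5):
    """Map a touch on an application icon to an operation instruction
    based on touch intensity (names and threshold are illustrative)."""
    if icon != "messages":
        return "open_" + icon
    if intensity < first_pressure_threshold:
        return "view_message"        # light press: view the message
    return "create_new_message"      # firm press: compose a new message
```

A light tap (intensity below the threshold) on the Messages icon yields the view instruction, while a firm press on the same position yields the compose instruction.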
The gyroscope sensor 180B may be used to determine the motion posture of the electronic device 200. In some embodiments, the angular velocities of the electronic device 200 around three axes (i.e., the x, y, and z axes) may be determined through the gyroscope sensor 180B. The gyroscope sensor 180B may be used for image stabilization during shooting. For example, when the shutter is pressed, the gyroscope sensor 180B detects the angle at which the electronic device 200 shakes, calculates from that angle the distance the lens module needs to compensate, and lets the lens counteract the shake of the electronic device 200 through reverse motion, achieving stabilization. The gyroscope sensor 180B may also be used in navigation and motion-sensing game scenarios.
The barometric pressure sensor 180C is used to measure air pressure. In some embodiments, the electronic device 200 calculates altitude from the air pressure value measured by the barometric pressure sensor 180C, to assist positioning and navigation.
The magnetic sensor 180D includes a Hall effect sensor. The electronic device 200 can use the magnetic sensor 180D to detect the opening and closing of a flip leather case. In some embodiments, when the electronic device 200 is a flip phone, the electronic device 200 can detect the opening and closing of the flip cover through the magnetic sensor 180D, and then set features such as automatic unlocking on flip open according to the detected open/closed state of the case or flip cover.
The acceleration sensor 180E can detect the magnitude of the acceleration of the electronic device 200 in all directions (generally along three axes). When the electronic device 200 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of the electronic device, and is applied in portrait/landscape switching, pedometers, and similar applications.
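The portrait/landscape switching mentioned above reduces to comparing the gravity component measured on each axis when the device is roughly stationary. The following is a simplified sketch of that idea; the axis convention (y along the long edge, z out of the screen) and the tie-breaking rules are assumptions, not the device's actual algorithm.

```python
def screen_orientation(ax, ay, az):
    """Infer a coarse orientation from a 3-axis accelerometer reading
    (m/s^2) while the device is still: gravity dominates one axis."""
    if abs(az) >= max(abs(ax), abs(ay)):
        return "flat"       # gravity mostly on z: lying on a table
    return "portrait" if abs(ay) > abs(ax) else "landscape"
```

When the device is held upright, gravity (about 9.8 m/s^2) appears mainly on the y axis and the function reports portrait; rotating the device 90 degrees moves it to the x axis.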
距离传感器180F,用于测量距离。电子设备200可以通过红外或激光测量距离。在一些实施例中,拍摄场景,电子设备200可以利用距离传感器180F测距以实现快速对焦。The distance sensor 180F is used to measure the distance. The electronic device 200 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the electronic device 200 may use the distance sensor 180F to measure distance to achieve fast focusing.
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。电子设备200通过发光二极管向外发射红外光。电子设备200使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定电子设备200附近有物体。当检测到不充分的反射光时,电子设备200可以确定电子设备200附近没有物体。电子设备200可以利用接近光传感器180G检测用户手持电子设备200贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器180G也可用于皮套模式,口袋模式自动解锁与锁屏。The proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode. The light emitting diode may be an infrared light emitting diode. The electronic device 200 emits infrared light outward through the light emitting diode, and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 200; when insufficient reflected light is detected, the electronic device 200 may determine that there is no object nearby. The electronic device 200 can use the proximity light sensor 180G to detect that the user is holding the electronic device 200 close to the ear during a call, so as to automatically turn off the screen and save power. The proximity light sensor 180G can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
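The reflected-light decision described above reduces to a threshold test. The sketch below is not from the patent; the threshold value and function names are assumptions for illustration.

```python
# Illustrative sketch, not from the patent: proximity detection from
# reflected infrared light, and the in-call screen-off decision.

def object_nearby(reflected_light, threshold=50):
    """Sufficient reflected IR light means an object is near the device."""
    return reflected_light >= threshold

def screen_off_during_call(in_call, reflected_light, threshold=50):
    # Turn the screen off only while a call is active and the ear is close.
    return in_call and object_nearby(reflected_light, threshold)
```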
环境光传感器180L用于感知环境光亮度。电子设备200可以根据感知的环境光亮度自适应调节显示屏194亮度。环境光传感器180L也可用于拍照时自动调节白平衡。环境光传感器180L还可以与接近光传感器180G配合,检测电子设备200是否在口袋里,以防误触。The ambient light sensor 180L is used to sense the brightness of ambient light. The electronic device 200 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light. The ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures. The ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 200 is in a pocket to prevent accidental touch.
指纹传感器180H用于采集指纹。电子设备200可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。The fingerprint sensor 180H is used to collect fingerprints. The electronic device 200 can use the collected fingerprint characteristics to unlock the fingerprint, access the application lock, take a picture of the fingerprint, and answer the call with the fingerprint.
温度传感器180J用于检测温度。在一些实施例中,电子设备200利用温度传感器180J检测的温度,执行温度处理策略。例如,当温度传感器180J上报的温度超过阈值,电子设备200执行降低位于温度传感器180J附近的处理器的性能,以便降低功耗实施热保护。在另一些实施例中,当温度低于另一阈值时,电子设备200对电池142加热,以避免低温导致电子设备200异常关机。在其他一些实施例中,当温度低于又一阈值时,电子设备200对电池142的输出电压执行升压,以避免低温导致的异常关机。The temperature sensor 180J is used to detect temperature. In some embodiments, the electronic device 200 executes a temperature handling policy based on the temperature detected by the temperature sensor 180J. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 200 reduces the performance of a processor located near the temperature sensor 180J, so as to lower power consumption and implement thermal protection. In other embodiments, when the temperature is below another threshold, the electronic device 200 heats the battery 142 to prevent an abnormal shutdown of the electronic device 200 caused by low temperature. In still other embodiments, when the temperature is below yet another threshold, the electronic device 200 boosts the output voltage of the battery 142 to prevent an abnormal shutdown caused by low temperature.
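The three-threshold policy just described can be sketched as a single mapping from a reported temperature to an action. This is not from the patent; the threshold values and action names are assumptions for illustration.

```python
# Illustrative sketch, not from the patent: mapping a reported temperature
# to one of the responses described above. Thresholds are assumed values.

def thermal_action(temp_c, high=45.0, low=0.0, very_low=-10.0):
    """Return the policy response for a temperature reading (Celsius)."""
    if temp_c > high:
        return "throttle_processor"      # reduce nearby processor performance
    if temp_c < very_low:
        return "boost_battery_voltage"   # avoid low-temperature shutdown
    if temp_c < low:
        return "heat_battery"
    return "normal"
```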
触摸传感器180K,也可称触控面板或触敏表面。触摸传感器180K可以设置于显示屏194,由触摸传感器180K与显示屏194组成触摸屏,也称“触控屏”。触摸传感器180K用于检测作用于其上或附近的触摸操作。触摸传感器可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏194提供与触摸操作相关的视觉输出。在另一些实施例中,触摸传感器180K也可以设置于电子设备200的表面,与显示屏194所处的位置不同。The touch sensor 180K can also be called a touch panel or a touch-sensitive surface. The touch sensor 180K may be provided on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touch screen, also called a "touch screen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor can pass the detected touch operation to the application processor to determine the type of touch event. The visual output related to the touch operation may be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 200, which is different from the location where the display screen 194 is located.
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器180M可以获取人体声部振动骨块的振动信号。骨传导传感器180M也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器180M也可以设置于耳机中,结合成骨传导耳机。音频模块170可以基于所述骨传导传感器180M获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器180M获取的血压跳动信号解析心率信息,实现心率检测功能。The bone conduction sensor 180M can acquire vibration signals. In some embodiments, the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human voice. The bone conduction sensor 180M can also contact the pulse of the human body and receive a blood pressure beating signal. In some embodiments, the bone conduction sensor 180M may also be provided in the earphone and combined into a bone conduction earphone. The audio module 170 may parse out the voice signal based on the vibration signal of the vibrating bone block of the voice part acquired by the bone conduction sensor 180M to realize the voice function. The application processor may analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement the heart rate detection function.
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。 电子设备200可以接收按键输入,产生与电子设备200的用户设置以及功能控制有关的键信号输入。The key 190 includes a power-on key, a volume key, and the like. The key 190 may be a mechanical key. It can also be a touch button. The electronic device 200 may receive key input and generate key signal input related to user settings and function control of the electronic device 200.
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏194不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。The motor 191 may generate a vibration prompt. The motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback. For example, touch operations applied to different applications (such as taking pictures, playing audio, etc.) may correspond to different vibration feedback effects. For the touch operation in different areas of the display screen 194, the motor 191 can also correspond to different vibration feedback effects. Different application scenarios (for example: time reminder, receiving information, alarm clock, game, etc.) can also correspond to different vibration feedback effects. Touch vibration feedback effect can also support customization.
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。The indicator 192 may be an indicator light, which may be used to indicate a charging state, a power change, and may also be used to indicate a message, a missed call, a notification, and the like.
SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备200的接触和分离。电子设备200可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口195可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口195也可以兼容不同类型的SIM卡。SIM卡接口195也可以兼容外部存储卡。电子设备200通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备200采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备200中,不能和电子设备200分离。The SIM card interface 195 is used to connect a SIM card. A SIM card can be inserted into or removed from the SIM card interface 195 to make contact with or be separated from the electronic device 200. The electronic device 200 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 195 can support a Nano SIM card, a Micro SIM card, a SIM card, and so on. Multiple cards can be inserted into the same SIM card interface 195 at the same time. The types of the multiple cards may be the same or different. The SIM card interface 195 can also be compatible with different types of SIM cards, and with external memory cards. The electronic device 200 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the electronic device 200 uses an eSIM, that is, an embedded SIM card. The eSIM card can be embedded in the electronic device 200 and cannot be separated from it.
图10A示例性所示的电子设备200可以通过显示屏194显示以下各个实施例中所描述的各个用户界面。电子设备200可以通过触摸传感器180K在各个用户界面中检测触控操作,例如在各个用户界面中的点击操作(如在图标上的触摸操作、双击操作),又例如在各个用户界面中的向上或向下的滑动操作,或执行画圆圈手势的操作,等等。在一些实施例中,电子设备200可以通过陀螺仪传感器180B、加速度传感器180E等检测用户手持电子设备200执行的运动手势,例如晃动电子设备。在一些实施例中,电子设备200可以通过摄像头193(如3D摄像头、深度摄像头)检测非触控的手势操作。The electronic device 200 exemplarily shown in FIG. 10A may display, through the display screen 194, each of the user interfaces described in the following embodiments. The electronic device 200 can detect touch operations in each user interface through the touch sensor 180K, for example a tap operation in a user interface (such as a touch on an icon or a double-tap), an upward or downward swipe in a user interface, a circle-drawing gesture, and so on. In some embodiments, the electronic device 200 may use the gyro sensor 180B, the acceleration sensor 180E, and the like to detect a motion gesture performed by the user while holding the electronic device 200, such as shaking the device. In some embodiments, the electronic device 200 can detect non-touch gesture operations through the camera 193 (e.g., a 3D camera or depth camera).
在一些实施中,电子设备200中包括的终端应用处理器(AP)可以实现图3所示的音频协议框架中的Host,电子设备200中包括的蓝牙(BT)模块可以实现图3所示的音频协议框架中的controller,二者之间通过HCI进行通信。即把图3所示的音频协议框架的功能分布在两颗芯片上。In some implementations, the terminal application processor (AP) included in the electronic device 200 can implement the Host in the audio protocol framework shown in FIG. 3, and the Bluetooth (BT) module included in the electronic device 200 can implement the Controller in that framework; the two communicate through the HCI. That is, the functions of the audio protocol framework shown in FIG. 3 are distributed across two chips.
在另一些实施例中,电子设备200终端应用处理器(AP)可以实现图3所示音频协议框架中的Host和controller。即图3所示的音频协议框架的所有功能都放在一颗芯片上,也就是说,host和controller都放在同一颗芯片上,由于host和controller都在同一颗芯片上,因此物理HCI就没有存在的必要性,host和controller之间直接通过应用编程接口API来交互。In other embodiments, the terminal application processor (AP) of the electronic device 200 may implement both the Host and the Controller in the audio protocol framework shown in FIG. 3. That is, all functions of the audio protocol framework shown in FIG. 3 are placed on a single chip. Because the Host and the Controller reside on the same chip, a physical HCI is no longer necessary, and the Host and Controller interact directly through an application programming interface (API).
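The single-chip arrangement can be pictured as follows. This is a hypothetical sketch, not the patent's implementation; the class and method names are assumptions. The point is that the Host holds a direct reference to the Controller and calls it as an API instead of serializing HCI packets between chips.

```python
class Controller:
    """Link-layer side; records what the host hands it."""
    def __init__(self):
        self.sent = []

    def send_acl(self, payload):      # control messages travel over LE ACL
        self.sent.append(("ACL", payload))

    def send_iso(self, payload):      # audio frames travel over LE ISO
        self.sent.append(("ISO", payload))


class Host:
    """Same-chip case: a direct reference replaces the physical HCI."""
    def __init__(self, controller):
        self.controller = controller

    def send_content_control(self, msg):
        self.controller.send_acl(msg)

    def send_audio_frame(self, frame):
        self.controller.send_iso(frame)
```

In the two-chip case, the same two calls would instead be encoded as HCI commands and carried over a physical transport between the AP and the Bluetooth chip.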
电子设备200的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明电子设备200的软件结构。The software system of the electronic device 200 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture. The embodiment of the present invention takes the Android system with a layered architecture as an example to exemplarily explain the software structure of the electronic device 200.
图10B是本发明实施例的电子设备200的软件结构框图。10B is a block diagram of the software structure of the electronic device 200 according to an embodiment of the present invention.
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件 接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。The layered architecture divides the software into several layers, and each layer has a clear role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
应用程序层可以包括一系列应用程序包。The application layer may include a series of application packages.
如图10B所示,应用程序包可以包括游戏,语音助手,音乐播放器,视频播放器,邮箱,通话,导航,文件浏览器等应用程序。As shown in FIG. 10B, the application package may include applications such as games, voice assistants, music players, video players, mailboxes, calls, navigation, and file browsers.
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。The application framework layer provides an application programming interface (API) and a programming framework for applications at the application layer. The application framework layer includes some predefined functions.
如图10B所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。As shown in FIG. 10B, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。The window manager is used to manage window programs. The window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, intercept the screen, etc.
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。Content providers are used to store and retrieve data, and make these data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. The display interface can be composed of one or more views. For example, a display interface that includes an SMS notification icon may include a view that displays text and a view that displays pictures.
电话管理器用于提供电子设备200的通信功能。例如通话状态的管理(包括接通,挂断等)。The phone manager is used to provide the communication function of the electronic device 200. For example, the management of the call state (including connection, hang up, etc.).
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。The resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。The notification manager enables applications to display notification information in the status bar, which can be used to convey notification-type messages, and can disappear after a short stay without user interaction. For example, the notification manager is used to notify the completion of downloading, message reminders, etc. The notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window. For example, the text message is displayed in the status bar, a prompt sound is emitted, the electronic device vibrates, and the indicator light flashes.
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。Android Runtime includes core library and virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。The core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。The application layer and the application framework layer run in the virtual machine. The virtual machine executes the java files of the application layer and the application framework layer into binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。The system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。The surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。The media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files. The media library can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。The 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
2D图形引擎是2D绘图的绘图引擎。The 2D graphics engine is a drawing engine for 2D drawing.
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。The kernel layer is the layer between hardware and software. The kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
下面结合捕获拍照场景,示例性说明电子设备200软件以及硬件的工作流程。The following describes the workflow of the software and hardware of the electronic device 200 in combination with capturing a photographing scene.
当触摸传感器180K接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头193捕获静态图像或视频。When the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer. The kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). Raw input events are stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking as an example a touch operation whose corresponding control is the camera application icon: the camera application calls the interface of the application framework layer to start the camera application, then starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
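The kernel-to-framework flow above can be sketched as a tiny dispatcher. This is not Android's actual input pipeline; the event shape and region names are illustrative assumptions.

```python
# Illustrative sketch, not from the patent: a raw input event produced at
# the kernel layer, and framework-side hit testing that maps its
# coordinates to the control that was touched.

def make_raw_event(x, y, timestamp):
    """What the kernel layer might produce from the hardware interrupt."""
    return {"x": x, "y": y, "t": timestamp}

def dispatch(raw_event, hit_regions):
    """Map coordinates to a control; hit_regions is a list of
    (name, x0, y0, x1, y1) rectangles."""
    for name, x0, y0, x1, y1 in hit_regions:
        if x0 <= raw_event["x"] <= x1 and y0 <= raw_event["y"] <= y1:
            return name          # e.g. launch the matching application
    return None
```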
下面介绍本申请实施例中提供的示例性音频输出设备300。音频输出设备300可以实现为上述实施例中提及的第二音频设备或第三音频设备,可以是图1所示的无线音频系统100中的第二音频设备102或第三音频设备103。音频输出设备300通常可以用作音频接收设备(audio sink),如耳机、音箱,可以接收其他音频源(audio source),如手机、平板电脑等,传输的音频数据,并可以将接收到的音频数据转换成声音。在一些场景下,如果配置有麦克风/受话器等声音采集器件,音频输出设备300也可以用作音频源(audio source),向其他设备音频接收方(audio sink)(如手机)传输音频数据(如耳机采集的用户说话的声音所转换成的音频数据)。The following describes an exemplary audio output device 300 provided in an embodiment of the present application. The audio output device 300 may be implemented as the second audio device or the third audio device mentioned in the above embodiments, and may be the second audio device 102 or the third audio device 103 in the wireless audio system 100 shown in FIG. 1. The audio output device 300 can generally serve as an audio sink, such as an earphone or a speaker: it receives audio data transmitted by an audio source, such as a mobile phone or tablet, and converts the received audio data into sound. In some scenarios, if a sound collection device such as a microphone/receiver is configured, the audio output device 300 can also serve as an audio source, transmitting audio data (e.g., audio data converted from the user's voice collected by the earphone) to an audio sink on another device (such as a mobile phone).
图11示例性示出了本申请提供的音频输出设备300的结构示意图。FIG. 11 exemplarily shows a schematic structural diagram of an audio output device 300 provided by the present application.
如图11所示,音频输出设备300可包括处理器302、存储器303、蓝牙通信处理模块304、电源305、佩戴检测器306、麦克风307和电/声转换器308。这些部件可以通过总线连接。其中:As shown in FIG. 11, the audio output device 300 may include a processor 302, a memory 303, a Bluetooth communication processing module 304, a power supply 305, a wear detector 306, a microphone 307, and an electric/acoustic converter 308. These components may be connected by a bus. Among them:
处理器302可用于读取和执行计算机可读指令。具体实现中,处理器302可主要包括控制器、运算器和寄存器。其中,控制器主要负责指令译码,并为指令对应的操作发出控制信号。运算器主要负责执行定点或浮点算数运算操作、移位操作以及逻辑操作等,也可以执行地址运算和转换。寄存器主要负责保存指令执行过程中临时存放的寄存器操作数和中间操作结果等。具体实现中,处理器302的硬件架构可以是专用集成电路(Application Specific Integrated Circuits,ASIC)架构、MIPS架构、ARM架构或者NP架构等等。The processor 302 may be used to read and execute computer-readable instructions. In a specific implementation, the processor 302 may mainly include a controller, an arithmetic unit, and a register. Among them, the controller is mainly responsible for instruction decoding and issues control signals for the operations corresponding to the instructions. The arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logical operations, and can also perform address operations and conversions. The register is mainly responsible for saving the register operand and intermediate operation result temporarily stored during the execution of the instruction. In a specific implementation, the hardware architecture of the processor 302 may be an application specific integrated circuit (Application Specific Integrated Circuits, ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
在一些实施例中,处理器302可以用于解析蓝牙通信处理模块304接收到的信号,如封装有音频数据的信号、内容控制消息,流控制消息等等。处理器302可以用于根据解析结果进行相应的处理操作,如驱动电/声转换器308开始或暂停或停止将音频数据转换成声音等等。In some embodiments, the processor 302 may be used to parse signals received by the Bluetooth communication processing module 304, such as signals encapsulated with audio data, content control messages, flow control messages, and so on. The processor 302 may be used to perform corresponding processing operations according to the analysis result, such as driving the electrical/acoustic converter 308 to start or pause or stop converting audio data into sound, and so on.
在一些实施例中,处理器302还可以用于生成蓝牙通信处理模块304向外发送的信号,如蓝牙广播信号、信标信号,又如采集到的声音所转换成的音频数据。In some embodiments, the processor 302 may also be used to generate signals sent out by the Bluetooth communication processing module 304, such as Bluetooth broadcast signals, beacon signals, and audio data converted from the collected sound.
存储器303与处理器302耦合,用于存储各种软件程序和/或多组指令。具体实现中,存储器303可包括高速随机存取的存储器,并且也可包括非易失性存储器,例如一个或多个磁盘存储设备、闪存设备或其他非易失性固态存储设备。存储器303可以存储操作系统, 例如uCOS、VxWorks、RTLinux等嵌入式操作系统。存储器303还可以存储通信程序,该通信程序可用于与电子设备200,一个或多个服务器,或附加设备进行通信。The memory 303 is coupled to the processor 302 and is used to store various software programs and/or multiple sets of instructions. In a specific implementation, the memory 303 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 303 can store an operating system, for example an embedded operating system such as uCOS, VxWorks, or RTLinux. The memory 303 may also store a communication program that can be used to communicate with the electronic device 200, one or more servers, or additional devices.
蓝牙(BT)通信处理模块304可以接收其他设备(如电子设备200)发射的信号,如扫描信号、广播信号、封装有音频数据的信号、内容控制消息、流控制消息等等。蓝牙(BT)通信处理模块304也可以发射信号,如广播信号、扫描信号、封装有音频数据的信号、内容控制消息、流控制消息等等。The Bluetooth (BT) communication processing module 304 may receive signals transmitted by other devices (such as the electronic device 200), such as scan signals, broadcast signals, signals encapsulated with audio data, content control messages, flow control messages, and so on. The Bluetooth (BT) communication processing module 304 may also transmit signals, such as broadcast signals, scan signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
电源305可用于向处理器302、存储器303、蓝牙通信处理模块304、佩戴检测器306、电/声转换器308等其他内部部件供电。The power supply 305 may be used to supply power to the processor 302, the memory 303, the Bluetooth communication processing module 304, the wear detector 306, the electrical/acoustic converter 308, and other internal components.
佩戴检测器306可用于检测音频输出设备300被用户佩戴的状态,如未被佩戴状态、被佩戴状态,甚至可以包括佩戴松紧状态。在一些实施例中,佩戴检测器306可以由距离传感器、压力传感器等传感器中的一项或多项实现。佩戴检测器306可将检测到的佩戴状态传输至处理器302,这样处理器302便可以在音频输出设备300被用户佩戴时上电,在音频输出设备300未被用户佩戴时断电,以节省功耗。The wear detector 306 may be used to detect the state in which the audio output device 300 is worn by the user, such as an unworn state, a worn state, and even a wearing-tightness state. In some embodiments, the wear detector 306 may be implemented by one or more of sensors such as a distance sensor and a pressure sensor. The wear detector 306 can transmit the detected wearing state to the processor 302, so that the processor 302 can power the device on when the audio output device 300 is worn by the user and power it off when it is not worn, to save power.
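The power-saving behavior just described amounts to a small state machine driven by the detector's readings. The sketch below is not from the patent; names are assumptions for illustration.

```python
# Illustrative sketch, not from the patent: power the device only while the
# wear detector reports it is worn, issuing one transition per state change.

class WearPowerPolicy:
    def __init__(self):
        self.powered = False
        self.transitions = []

    def report(self, worn):
        """Called with each wearing-state reading from the detector."""
        if worn != self.powered:
            self.powered = worn
            self.transitions.append("power_on" if worn else "power_off")
        return self.powered
```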
麦克风307可用于采集声音,如用户说话的声音,并可以将采集到声音输出给电/声转换器308,这样电/声转换器308便可以将麦克风307采集到的声音转换成音频数据。The microphone 307 can be used to collect sounds, such as the voice of the user speaking, and can output the collected sounds to the electric/acoustic converter 308, so that the electric/acoustic converter 308 can convert the sound collected by the microphone 307 into audio data.
电/声转换器308可用于将声音转换成电信号(音频数据),例如将麦克风307采集到的声音转换成音频数据,并可以传输音频数据至处理器302。这样,处理器302便可以触发蓝牙(BT)通信处理模块304发射该音频数据。电/声转换器308还可用于将电信号(音频数据)转换成声音,例如将处理器302输出的音频数据转换成声音。处理器302输出的音频数据可以是蓝牙(BT)通信处理模块304接收到的。The electric/acoustic converter 308 can be used to convert sound into an electrical signal (audio data), for example converting the sound collected by the microphone 307 into audio data, and can transmit the audio data to the processor 302. In this way, the processor 302 can trigger the Bluetooth (BT) communication processing module 304 to transmit the audio data. The electric/acoustic converter 308 can also be used to convert an electrical signal (audio data) into sound, for example converting audio data output by the processor 302 into sound. The audio data output by the processor 302 may be audio data received by the Bluetooth (BT) communication processing module 304.
在一些实施中,处理器302可以实现图3所示的音频协议框架中的Host,蓝牙(BT)通信处理模块304可以实现图3所示的音频协议框架中的controller,二者之间通过HCI进行通信。即把图3所示的音频协议框架的功能分布在两颗芯片上。In some implementations, the processor 302 can implement the Host in the audio protocol framework shown in FIG. 3, and the Bluetooth (BT) communication processing module 304 can implement the Controller in that framework; the two communicate through the HCI. That is, the functions of the audio protocol framework shown in FIG. 3 are distributed across two chips.
在另一些实施例中,处理器302可以实现图3所示音频协议框架中的Host和controller。即图3所示的音频协议框架的所有功能都放在一颗芯片上,也就是说,host和controller都放在同一颗芯片上,由于host和controller都在同一颗芯片上,因此物理HCI就没有存在的必要性,host和controller之间直接通过应用编程接口API来交互。In other embodiments, the processor 302 may implement both the Host and the Controller in the audio protocol framework shown in FIG. 3. That is, all functions of the audio protocol framework shown in FIG. 3 are placed on a single chip. Because the Host and the Controller reside on the same chip, a physical HCI is no longer necessary, and the Host and Controller interact directly through an application programming interface (API).
可以理解的是,图11示意的结构并不构成对音频输出设备300的具体限定。在本申请另一些实施例中,音频输出设备300可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。It can be understood that the structure illustrated in FIG. 11 does not constitute a specific limitation on the audio output device 300. In other embodiments of the present application, the audio output device 300 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components. The illustrated components can be implemented in hardware, software, or a combination of software and hardware.
参见图12,图12示出了本申请提供的一种芯片组的结构示意图。如图12所示,芯片组400可包括芯片1和芯片2。芯片1和芯片2之间通过接口HCI 409通信。其中,芯片1可包括以下模块:多媒体音频模块402、话音模块403、背景声模块404、内容控制模块405、流控制模块406、流数据模块407以及L2CAP模块408。芯片2可包括:LE物理层模块413、LE链路层模块410。Referring to FIG. 12, FIG. 12 shows a schematic structural diagram of a chipset provided by the present application. As shown in FIG. 12, the chipset 400 may include a chip 1 and a chip 2. Chip 1 and chip 2 communicate through the interface HCI 409. The chip 1 may include the following modules: a multimedia audio module 402, a voice module 403, a background sound module 404, a content control module 405, a stream control module 406, a stream data module 407, and an L2CAP module 408. The chip 2 may include an LE physical layer module 413 and an LE link layer module 410.
在芯片2中:In chip 2:
(1)LE物理层模块413,可用于提供数据传输的物理通道(通常称为信道)。通常情况下,一个通信系统中存在几种不同类型的信道,如控制信道、数据信道、语音信道等等。(1) The LE physical layer module 413 can be used to provide a physical channel (commonly referred to as a channel) for data transmission. Generally, there are several different types of channels in a communication system, such as control channels, data channels, voice channels, and so on.
(2)LE链路层模块410,可用于在物理层的基础上提供两个或多个设备之间、和物理无关的逻辑传输通道(也称作逻辑链路)。LE链路层模块410可用于控制设备的射频状态,设备将处于五种状态之一:等待、广告、扫描、初始化、连接。广播设备不需要建立连接就可以发送数据,而扫描设备接收广播设备发送的数据;发起连接的设备通过发送连接请求来回应广播设备,如果广播设备接受连接请求,那么广播设备与发起连接的设备将会进入连接状态。发起连接的设备称为主设备(master),接受连接请求的设备称为从设备(slave)。(2) The LE link layer module 410 can be used to provide, on top of the physical layer, a logical transmission channel (also called a logical link) between two or more devices that is independent of the physical channel. The LE link layer module 410 can be used to control the radio frequency state of the device; the device will be in one of five states: waiting, advertising, scanning, initializing, or connection. A broadcasting device can send data without establishing a connection, while a scanning device receives the data it sends; a device initiating a connection responds to the broadcasting device by sending a connection request, and if the broadcasting device accepts the request, the broadcasting device and the initiating device enter the connection state. The device that initiates the connection is called the master, and the device that accepts the connection request is called the slave.
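The five states and the advertiser/initiator handshake above can be sketched as a small state machine. This is a simplification for illustration; the transition table below is an assumption, not the full set of transitions a link layer permits.

```python
# Illustrative sketch, not from the patent: the five radio-frequency states
# named above with a simplified set of legal transitions between them.

TRANSITIONS = {
    "waiting":      {"advertising", "scanning", "initializing"},
    "advertising":  {"waiting", "connection"},   # its connect request accepted
    "scanning":     {"waiting", "initializing"},
    "initializing": {"waiting", "connection"},   # connect request accepted
    "connection":   {"waiting"},
}

class LinkLayerState:
    def __init__(self):
        self.state = "waiting"

    def go(self, new_state):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        return self.state
```

An advertiser reaches the connection state via `waiting -> advertising -> connection`, while an initiator reaches it via `waiting -> initializing -> connection`; jumping straight from `waiting` to `connection` is rejected.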
LE链路层模块410可包括LE ACL模块411和LE等时(ISO)模块412。LE ACL模块411可用于通过LE ACL链路传输设备间的控制消息,如流控制消息、内容控制消息、音量控制消息。LE ISO模块412可用于通过等时数据传输通道传输设备间的等时数据(如流数据本身)。The LE link layer module 410 may include a LE ACL module 411 and a LE isochronous (ISO) module 412. The LE ACL module 411 can be used to transmit control messages between devices through the LE ACL link, such as flow control messages, content control messages, and volume control messages. The LE ISO module 412 can be used to transmit isochronous data (such as streaming data itself) between devices through an isochronous data transmission channel.
在芯片1中:In chip 1:
(1)L2CAP模块408,可用于管理逻辑层提供的逻辑链路。基于L2CAP,不同的上层应用可共享同一个逻辑链路。类似TCP/IP中端口(port)的概念。(1) The L2CAP module 408 can be used to manage the logical links provided by the link layer. Based on L2CAP, different upper-layer applications can share the same logical link, similar to the concept of a port in TCP/IP.
(2)多媒体音频模块402、话音模块403、背景声模块404可以是依据业务场景设置的模块,可用于将应用层的音频应用划分为多媒体音频、话音、背景声等几种音频业务。不限于多媒体音频、话音、背景声等,音频业务也可以分为:话音,音乐,游戏,视频,语音助手,邮件提示音,告警,提示音,导航音等。(2) The multimedia audio module 402, the voice module 403, and the background sound module 404 may be modules set according to business scenarios, and may be used to divide the audio applications of the application layer into multimedia audio, voice, background sound, and other audio services. It is not limited to multimedia audio, voice, background sound, etc. Audio services can also be divided into: voice, music, games, video, voice assistant, e-mail reminder, alarm, reminder sound, navigation sound, etc.
(3) The content control module 405 can be responsible for encapsulating the content control messages of various audio services (such as "previous track" and "next track"), and for outputting the content control messages of the audio service to the LE ACL module 411, so that the encapsulated content control messages are transmitted through the LE ACL module 411.
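Encapsulating a content control command such as "previous track" or "next track" before handing it to the LE ACL module might look like fixed opcode packing. The opcodes and layout below are invented for illustration and are not from any Bluetooth specification:

```python
import struct

# Invented opcodes for illustration only.
OPCODES = {"play": 0x01, "pause": 0x02, "previous": 0x03, "next": 0x04}

def encapsulate(service_id: int, command: str) -> bytes:
    """Pack a content control message: 1-byte service ID, 1-byte opcode."""
    return struct.pack("BB", service_id, OPCODES[command])

def parse(msg: bytes):
    """Unpack a content control message back into (service_id, command)."""
    service_id, opcode = struct.unpack("BB", msg)
    name = {v: k for k, v in OPCODES.items()}[opcode]
    return service_id, name

msg = encapsulate(1, "next")
print(parse(msg))  # (1, 'next')
```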
(4) The stream control module 406 can be used to negotiate parameters for a specific audio service, such as negotiation of QoS parameters, codec parameters, and ISO parameters, and to create an isochronous data transmission channel for that specific service based on the negotiated parameters. The isochronous data transmission channel created for the specific service can be used to transmit the audio data of that specific audio service. In this application, the specific audio service may be referred to as the first audio service, and the negotiated parameter may be referred to as the first parameter.
(5) The stream data module 407 can be used to output the audio data of an audio service to the LE isochronous (ISO) module 412, so that the audio data is transmitted through the isochronous data transmission channel. The isochronous data transmission channel may be a CIS. A CIS can be used to transmit isochronous data between devices in the connection state. The isochronous data transmission channel is ultimately carried by the LE ISO module 412.
In a specific implementation, chip 1 may be implemented as an application processor (AP), and chip 2 may be implemented as a Bluetooth processor (also called a Bluetooth module, Bluetooth chip, etc.). In this application, chip 1 may be referred to as the first chip, and chip 2 may be referred to as the second chip. The chipset 400 may be included in the first audio device of the foregoing method embodiments, or in both the first audio device and the second audio device of the foregoing method embodiments.
It can be understood that the structure illustrated in FIG. 12 does not constitute a specific limitation on the chipset 400. In other embodiments of the present application, the chipset 400 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Referring to FIG. 13, FIG. 13 shows a schematic structural diagram of a chip provided by the present application. As shown in FIG. 13, the chip 500 may include: a multimedia audio module 502, a voice module 503, a background sound module 504, a content control module 505, a stream control module 506, a stream data module 507, an L2CAP module 508, an LE physical layer module 513, and an LE link layer module 510. For a description of each module, refer to the corresponding module in FIG. 12; details are not repeated here.
Unlike the chip architecture shown in FIG. 12, the chip architecture shown in FIG. 13 implements both the Host and the Controller of the audio protocol framework shown in FIG. 3 on a single chip. Since the Host and the Controller are implemented on the same chip, no HCI is needed inside the chip. By contrast, the chip architecture shown in FIG. 12 implements the Host and the Controller of the audio protocol framework shown in FIG. 3 on two separate chips.
The chip 500 may be included in the first audio device of the foregoing method embodiments, or in both the first audio device and the second audio device of the foregoing method embodiments.
It can be understood that the structure illustrated in FIG. 13 does not constitute a specific limitation on the chip 500. In other embodiments of the present application, the chip 500 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
A person of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium, and, when executed, may include the processes of the foregoing method embodiments. The foregoing storage medium includes various media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (18)

  1. An audio communication method, characterized by comprising:
    an audio source and an audio receiver establish a Bluetooth Low Energy connectionless asynchronous LE ACL link, wherein a Bluetooth Low Energy connection is established between the audio source and the audio receiver;
    the audio source performs parameter negotiation with the audio receiver for a first audio service through the LE ACL link, wherein a first parameter negotiated in the parameter negotiation corresponds to the first audio service;
    the audio source creates, with the audio receiver and based on the first parameter, an LE isochronous data transmission channel corresponding to the first audio service; the LE isochronous data transmission channel corresponding to the first audio service is used by the audio source to send audio data of the first audio service to the audio receiver.
  2. The method of claim 1, comprising:
    the audio source generates a content control message of the first audio service;
    the audio source sends the content control message of the first audio service to the audio receiver through the LE ACL link; the content control message is used by the audio receiver to perform content control on the first audio service, the content control comprising one or more of the following: volume control, playback control, and call control.
  3. The method according to any one of claims 1-2, comprising:
    the audio source receives, through the LE ACL link, a content control message of the first audio service sent by the audio receiver;
    the audio source performs content control on the first audio service according to the content control message, the content control comprising one or more of the following: volume control, playback control, and call control.
  4. The method according to any one of claims 1-3, comprising:
    the audio source generates audio data of the first audio service;
    the audio source sends the audio data of the first audio service to the audio receiver through the LE isochronous data transmission channel corresponding to the first audio service.
  5. The method according to any one of claims 1-4, wherein the content control message comprises one or more of the following: a volume control message, a playback control message, and a call control message.
  6. The method according to any one of claims 1-5, wherein the first parameter comprises one or more of the following: a quality of service (QoS) parameter, a codec parameter, and an isochronous data transmission channel parameter.
  7. An audio communication method, characterized by comprising:
    an audio receiver and an audio source establish a Bluetooth Low Energy connectionless asynchronous LE ACL link, wherein a Bluetooth Low Energy connection is established between the audio source and the audio receiver;
    the audio receiver performs parameter negotiation with the audio source for a first audio service through the LE ACL link, wherein a first parameter negotiated in the parameter negotiation corresponds to the first audio service;
    the audio receiver creates, with the audio source and based on the first parameter, an LE isochronous data transmission channel corresponding to the first audio service; the LE isochronous data transmission channel corresponding to the first audio service is used by the audio receiver to receive audio data of the first audio service sent by the audio source.
  8. The method of claim 7, comprising:
    the audio receiver receives, through the LE ACL link, a content control message of the first audio service sent by the audio source;
    the audio receiver performs content control on the first audio service according to the content control message, the content control comprising one or more of the following: volume control, playback control, and call control.
  9. The method according to claim 7 or 8, comprising:
    the audio receiver generates a content control message of the first audio service;
    the audio receiver sends the content control message of the first audio service to the audio source through the LE ACL link; the content control message is used by the audio source to perform content control on the first audio service, the content control comprising one or more of the following: volume control, playback control, and call control.
  10. The method according to any one of claims 7-9, comprising:
    the audio receiver receives, through the LE isochronous data transmission channel corresponding to the first audio service, the audio data of the first audio service sent by the audio source.
  11. The method according to any one of claims 7-10, wherein the content control message comprises one or more of the following: a volume control message, a playback control message, and a call control message.
  12. The method according to any one of claims 7-11, wherein the first parameter comprises one or more of the following: a quality of service (QoS) parameter, a codec parameter, and an isochronous data transmission channel parameter.
  13. An audio device, characterized by comprising: a transmitter, a receiver, a memory, and a processor coupled to the memory, wherein the memory is configured to store instructions executable by the processor, and the processor is configured to invoke the instructions in the memory to perform the method of any one of claims 1-6.
  14. An audio device, characterized by comprising: a transmitter, a receiver, a memory, and a processor coupled to the memory, wherein the memory is configured to store instructions executable by the processor, and the processor is configured to invoke the instructions in the memory to perform the method of any one of claims 7-12.
  15. A chipset, characterized by comprising: a first chip and a second chip; the first chip comprises a stream control module, a content control module, and a stream data module; the second chip comprises an LE ACL module and an LE isochronous module; wherein:
    the stream control module is configured to perform parameter negotiation for a first audio service, and to create, based on a first parameter negotiated in the parameter negotiation, an LE isochronous data transmission channel corresponding to the first audio service;
    the content control module is configured to output a content control message of the first audio service to the LE ACL module;
    the stream data module is configured to output audio data of the first audio service to the LE isochronous module;
    the LE ACL module is configured to transmit the content control message of the first audio service through an LE ACL link;
    the LE isochronous module is configured to transmit the audio data of the first audio service through the LE isochronous data transmission channel corresponding to the first audio service.
  16. A chip, characterized by comprising: a stream control module, a content control module, a stream data module, an LE ACL module, and an LE isochronous module; wherein:
    the stream control module is configured to perform parameter negotiation for a first audio service, and to create, based on a first parameter negotiated in the parameter negotiation, an LE isochronous data transmission channel corresponding to the first audio service;
    the content control module is configured to output a content control message of the first audio service to the LE ACL module;
    the stream data module is configured to output audio data of the first audio service to the LE isochronous module;
    the LE ACL module is configured to transmit the content control message of the first audio service through an LE ACL link;
    the LE isochronous module is configured to transmit the audio data of the first audio service through the LE isochronous data transmission channel corresponding to the first audio service.
  17. A communication system, characterized by comprising: a first audio device and a second audio device, wherein:
    the first audio device is the audio device of claim 13, and the second audio device is the audio device of claim 14.
  18. A communication system, characterized by comprising: a first audio device, a second audio device, and a third audio device, wherein:
    the first audio device is the audio device of claim 13, and the second audio device and the third audio device are each the audio device of claim 14.
PCT/CN2018/118791 2018-11-30 2018-11-30 Wireless audio system, and audio communication method and device WO2020107491A1 (en)






