WO2020249098A1 - Bluetooth communication method, TWS Bluetooth headset and terminal - Google Patents

Bluetooth communication method, TWS Bluetooth headset and terminal

Info

Publication number
WO2020249098A1
WO2020249098A1 (PCT/CN2020/095872)
Authority
WO
WIPO (PCT)
Prior art keywords
headset
terminal
audio data
language
earphone
Prior art date
Application number
PCT/CN2020/095872
Other languages
English (en)
French (fr)
Inventor
高天星
郝一休
唐能福
宋业全
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2020249098A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/58Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • the embodiments of the present application relate to the field of communication technologies, and in particular, to a Bluetooth communication method, a TWS Bluetooth headset and a terminal.
  • a Bluetooth headset applies Bluetooth technology to a hands-free headset, freeing users from tangled headset cables and allowing the headset to be used freely in a variety of ways.
  • the TWS headset includes a main headset and a secondary headset.
  • the primary headset establishes a Bluetooth connection with the terminal, and the primary headset establishes a Bluetooth connection with the secondary headset.
  • Data transmission is carried out between the main earphone and the auxiliary earphone through data forwarding.
  • the terminal sends the audio data to the main earphone, and the main earphone forwards the audio data to the auxiliary earphone, so that the main earphone and the auxiliary earphone play audio synchronously. Since there is no physical cable between the main headset and the secondary headset, the wearing experience of the TWS Bluetooth headset is improved.
  • the current TWS Bluetooth headset provides only the basic functions of listening to music and making calls, so its functionality is relatively limited and cannot meet the user's multi-functional needs for the TWS Bluetooth headset.
  • the embodiments of the present application provide a Bluetooth communication method, a TWS Bluetooth headset, and a terminal to enrich the functions of the TWS Bluetooth headset and meet the user's multi-functional requirements for the TWS Bluetooth headset.
  • the embodiments of the present application provide a Bluetooth communication method, which is applied to a Bluetooth communication system.
  • the Bluetooth communication system includes a true wireless stereo (TWS) Bluetooth headset and a terminal.
  • the TWS Bluetooth headset includes a first headset and a second headset,
  • the method includes:
  • the terminal sends first audio data to the first headset through the first ACL link, and the first headset forwards the first audio data to the second headset through the second ACL link;
  • the terminal receives a first scene change instruction
  • a first SCO link is established between the terminal and the first headset, and a second SCO link is established between the terminal and the second headset;
  • the second audio data is transmitted between the terminal and the first headset through the first SCO link, and the second audio data is transmitted between the terminal and the second headset through the second SCO link.
  • the terminal can transmit voice data to and from the main headset through the first SCO link, and to and from the secondary headset through the second SCO link, so that the main headset and the secondary headset can each collect the voice data of a different user and transmit it to the terminal through separate SCO links.
  • the terminal can also send different voice data to the main earphone and the auxiliary earphone through the two SCO links, which enriches the application scenarios of Bluetooth earphones and meets the user's multi-functional requirements for Bluetooth earphones.
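The link-switching flow described above (forwarded ACL audio in the music scene, then one SCO link per earbud after the scene change) can be sketched in Python. The class and method names below are illustrative only; they do not come from the patent text or the Bluetooth specification.

```python
from dataclasses import dataclass, field

@dataclass
class Link:
    kind: str  # "ACL" (asynchronous data) or "SCO" (synchronous voice)
    a: str     # one endpoint of the link
    b: str     # the other endpoint

@dataclass
class BluetoothSystem:
    links: list = field(default_factory=list)

    def music_mode(self):
        # Terminal -> first headset over ACL; the first headset forwards
        # the same audio to the second headset over a second ACL link.
        self.links = [Link("ACL", "terminal", "headset1"),
                      Link("ACL", "headset1", "headset2")]

    def on_scene_change(self):
        # After the first scene change instruction, the terminal holds one
        # SCO link per earbud, so each earbud has its own bidirectional
        # voice channel and no forwarding between earbuds is needed.
        self.links = [Link("SCO", "terminal", "headset1"),
                      Link("SCO", "terminal", "headset2")]

sys = BluetoothSystem()
sys.music_mode()
assert [l.kind for l in sys.links] == ["ACL", "ACL"]
sys.on_scene_change()
assert all(l.a == "terminal" for l in sys.links)  # both SCO links end at the terminal
```

The key structural change is that in the second state the secondary earbud talks to the terminal directly rather than through the primary earbud.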
  • the first scene change instruction is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene;
  • transmitting the second audio data between the terminal and the first headset through the first SCO link, and transmitting the second audio data between the terminal and the second headset through the second SCO link, includes:
  • the first earphone collects second audio data in a first language and sends it to the terminal through the first SCO link; the terminal translates the second audio data from the first language into a second language, and sends the translated second audio data in the second language to the second earphone through the second SCO link; the second earphone plays the second audio data in the second language;
  • the second earphone collects second audio data in the second language and sends it to the terminal through the second SCO link; the terminal translates the second audio data from the second language into the first language, and sends the translated second audio data in the first language to the first headset through the first SCO link; the first headset plays the second audio data in the first language.
  • SCO links are established between the terminal and the main earphone and the auxiliary earphone.
  • the two SCO links and the two translation paths are independent and do not interfere with each other, and each SCO link supports bidirectional voice data transmission, so that two users who speak different languages can communicate with each other in real time without barriers, achieving the effect of simultaneous translation.
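The two independent translation paths can be illustrated with a minimal routing sketch. The `translate` function here is a placeholder for the terminal's real translation engine, and all names (headset identifiers, language tags) are assumptions for illustration.

```python
def translate(text, src, dst):
    # Placeholder: a real terminal would invoke a machine-translation service.
    return f"[{src}->{dst}] {text}"

def route(terminal_langs, source_headset, utterance):
    # Each SCO link carries one user's speech; the terminal translates it
    # and forwards the result to the *other* headset, so the two paths
    # never interfere with each other.
    src = terminal_langs[source_headset]
    dest_headset = "headset2" if source_headset == "headset1" else "headset1"
    dst = terminal_langs[dest_headset]
    return dest_headset, translate(utterance, src, dst)

# Example configuration: first earbud collects Chinese, second collects English.
langs = {"headset1": "zh", "headset2": "en"}
dest, audio = route(langs, "headset1", "你好")
assert dest == "headset2" and audio == "[zh->en] 你好"
dest, audio = route(langs, "headset2", "hello")
assert dest == "headset1" and audio == "[en->zh] hello"
```

Because each direction has its own SCO link and its own translation call, both users can speak at the same time without one path blocking the other.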
  • before the terminal receives the first scene change instruction, the method further includes:
  • the first earphone receives a first operation input by a user
  • in response to the first operation, the first headset sends a physical address request message to the second headset;
  • the first headset sends the first scene change instruction and the physical address of the second headset to the terminal.
  • before the terminal receives the first scene change instruction, the method further includes:
  • the second earphone receives the first operation input by the user
  • in response to the first operation, the second headset sends a first scene change instruction and the physical address of the second headset to the first headset;
  • the first headset forwards the first scene change instruction and the physical address of the second headset to the terminal.
  • the user can trigger the scene change by operating either the first earphone or the second earphone, so the method can be applied in various scenarios and application flexibility is improved.
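The two trigger paths above can be sketched as a message sequence. The message names are illustrative, not taken from the patent; the point is that both paths end with the terminal holding the scene change instruction and the second headset's physical address.

```python
def scene_change_from(operated, events=None):
    """Return the message sequence when the given earbud is operated."""
    events = [] if events is None else events
    if operated == "headset1":
        # The primary earbud asks the secondary earbud for its physical
        # address, then reports the instruction and address to the terminal.
        events += ["headset1->headset2: addr_request",
                   "headset2->headset1: addr_reply",
                   "headset1->terminal: scene_change + headset2_addr"]
    else:
        # The secondary earbud has no direct link to the terminal yet, so
        # its instruction and address travel via the primary earbud.
        events += ["headset2->headset1: scene_change + headset2_addr",
                   "headset1->terminal: scene_change + headset2_addr"]
    return events

# Either path ends with the terminal knowing the secondary's address,
# which it needs in order to establish the second SCO link.
assert scene_change_from("headset1")[-1].startswith("headset1->terminal")
assert scene_change_from("headset2")[-1].startswith("headset1->terminal")
```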
  • the method further includes:
  • the terminal receives translation configuration information input by a user, where the translation configuration information is used to indicate that the collection language of the first headset is the first language, and the collection language of the second headset is the second language.
  • the terminal of the present application can be flexibly applied to various application scenarios.
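As a hypothetical illustration of the translation configuration information the user enters on the terminal, the terminal might hold a mapping like the following. The field names and language tags are assumptions, not part of the patent text.

```python
# Hypothetical per-earbud language configuration held by the terminal:
# "collect" is the language the earbud's microphone records,
# "play" is the language the earbud's speaker outputs.
translation_config = {
    "headset1": {"collect": "zh-CN", "play": "en-US"},
    "headset2": {"collect": "en-US", "play": "zh-CN"},
}

# The collection language of one earbud is the playback language of the
# other, which is what makes the two translation paths symmetric.
assert translation_config["headset1"]["collect"] == translation_config["headset2"]["play"]
assert translation_config["headset2"]["collect"] == translation_config["headset1"]["play"]
```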
  • after a first SCO link is established between the terminal and the first headset and a second SCO link is established between the terminal and the second headset, the method further includes:
  • the terminal receives a second scene change instruction
  • before the terminal receives the second scene change instruction, the method further includes:
  • the first earphone receives a second operation input by the user
  • in response to the second operation, the first headset sends a second scene change instruction to the terminal.
  • before the terminal receives the second scene change instruction, the method further includes:
  • the second earphone receives a second operation input by the user
  • in response to the second operation, the second headset sends a second scene change instruction to the first headset;
  • the first headset forwards the received second scene change instruction to the terminal.
  • the first audio data is media audio data
  • the second audio data is voice call data
  • an embodiment of the present application provides a TWS Bluetooth headset.
  • the TWS Bluetooth headset includes a first headset and a second headset. Both the first headset and the second headset include a processor, a memory, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the following steps are performed:
  • the first headset receives the first audio data sent by the terminal through the first ACL link, and the first headset forwards the first audio data to the second headset through the second ACL link;
  • a first SCO link is established between the terminal and the first headset, and a second SCO link is established between the terminal and the second headset;
  • the second audio data is transmitted between the terminal and the first headset through the first SCO link, and the second audio data is transmitted between the terminal and the second headset through the second SCO link.
  • the first scene change instruction is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene;
  • transmitting the second audio data between the terminal and the first headset through the first SCO link, and transmitting the second audio data between the terminal and the second headset through the second SCO link, includes:
  • the first earphone collects the second audio data in the first language and sends it to the terminal through the first SCO link, so that the terminal translates the second audio data from the first language into the second language and sends the translated second audio data in the second language to the second earphone through the second SCO link; the second earphone plays the second audio data in the second language;
  • the second earphone collects the second audio data in the second language and sends it to the terminal through the second SCO link, so that the terminal translates the second audio data from the second language into the first language and sends the translated second audio data in the first language to the first earphone through the first SCO link; the first earphone plays the second audio data in the first language.
  • before the terminal receives the first scene change instruction, the method further includes:
  • the first earphone receives a first operation input by a user
  • in response to the first operation, the first headset sends a physical address request message to the second headset;
  • the first headset sends the first scene change instruction and the physical address of the second headset to the terminal.
  • before the terminal receives the first scene change instruction, the method further includes:
  • the second earphone receives the first operation input by the user
  • in response to the first operation, the second headset sends a first scene change instruction and the physical address of the second headset to the first headset;
  • the first headset forwards the first scene change instruction and the physical address of the second headset to the terminal.
  • the method further includes:
  • before the terminal receives the second scene change instruction, the method further includes:
  • the first earphone receives a second operation input by the user
  • in response to the second operation, the first headset sends a second scene change instruction to the terminal.
  • before the terminal receives the second scene change instruction, the method further includes:
  • the second earphone receives a second operation input by the user
  • in response to the second operation, the second headset sends a second scene change instruction to the first headset;
  • the first headset forwards the received second scene change instruction to the terminal.
  • the first audio data is media audio data
  • the second audio data is voice call data
  • an embodiment of the present application provides a terminal.
  • the terminal includes a processor, a memory, and a computer program stored on the memory and executable on the processor; when the processor executes the computer program, the following steps are performed:
  • a first ACL link is established between the terminal and the first headset, and a second ACL link is established between the first headset and the second headset; the first headset and the second headset are individual headsets of a true wireless stereo (TWS) Bluetooth headset;
  • the terminal sends first audio data to the first headset through the first ACL link, and the first headset forwards the first audio data to the second headset through the second ACL link;
  • the terminal receives a first scene change instruction
  • a first SCO link is established between the terminal and the first headset, and a second SCO link is established between the terminal and the second headset;
  • the second audio data is transmitted between the terminal and the first headset through the first SCO link, and the second audio data is transmitted between the terminal and the second headset through the second SCO link.
  • the first scene change instruction is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene;
  • transmitting the second audio data between the terminal and the first headset through the first SCO link, and transmitting the second audio data between the terminal and the second headset through the second SCO link, includes:
  • the terminal receives second audio data in a first language from the first headset through the first SCO link, where the second audio data in the first language is collected by the first headset; the terminal translates the second audio data from the first language into a second language, and sends the translated second audio data in the second language to the second headset through the second SCO link, so that the second earphone plays the second audio data in the second language;
  • the terminal receives second audio data in a second language from the second headset through the second SCO link, where the second audio data in the second language is collected by the second headset; the terminal translates the second audio data from the second language into the first language, and sends the translated second audio data in the first language to the first headset through the first SCO link, so that the first earphone plays the second audio data in the first language.
  • the method further includes: the terminal receives translation configuration information input by a user, where the translation configuration information is used to indicate that the collection language of the first headset is the first language and the collection language of the second headset is the second language.
  • the method further includes:
  • the terminal receives a second scene change instruction
  • the first audio data is media audio data
  • the second audio data is voice call data
  • an embodiment of the present application provides a Bluetooth communication method, which is applied to a TWS Bluetooth headset, the TWS Bluetooth headset includes a first headset and a second headset, and the method includes:
  • the first headset receives the first audio data sent by the terminal through the first ACL link, and the first headset forwards the first audio data to the second headset through the second ACL link;
  • a first SCO link is established between the terminal and the first headset, and a second SCO link is established between the terminal and the second headset;
  • the second audio data is transmitted between the terminal and the first headset through the first SCO link, and the second audio data is transmitted between the terminal and the second headset through the second SCO link.
  • the first scene change instruction is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene;
  • transmitting the second audio data between the terminal and the first headset through the first SCO link, and transmitting the second audio data between the terminal and the second headset through the second SCO link, includes:
  • the first earphone collects the second audio data in the first language and sends it to the terminal through the first SCO link, so that the terminal translates the second audio data from the first language into the second language and sends the translated second audio data in the second language to the second earphone through the second SCO link; the second earphone plays the second audio data in the second language;
  • the second earphone collects the second audio data in the second language and sends it to the terminal through the second SCO link, so that the terminal translates the second audio data from the second language into the first language and sends the translated second audio data in the first language to the first earphone through the first SCO link; the first earphone plays the second audio data in the first language.
  • before the terminal receives the first scene change instruction, the method further includes:
  • the first earphone receives a first operation input by a user
  • in response to the first operation, the first headset sends a physical address request message to the second headset;
  • the first headset sends the first scene change instruction and the physical address of the second headset to the terminal.
  • before the terminal receives the first scene change instruction, the method further includes:
  • the second earphone receives the first operation input by the user
  • in response to the first operation, the second headset sends a first scene change instruction and the physical address of the second headset to the first headset;
  • the first headset forwards the first scene change instruction and the physical address of the second headset to the terminal.
  • after a first SCO link is established between the terminal and the first headset and a second SCO link is established between the terminal and the second headset, the method further includes:
  • before the terminal receives the second scene change instruction, the method further includes:
  • the first earphone receives a second operation input by the user
  • in response to the second operation, the first headset sends a second scene change instruction to the terminal.
  • before the terminal receives the second scene change instruction, the method further includes:
  • the second earphone receives a second operation input by the user
  • in response to the second operation, the second headset sends a second scene change instruction to the first headset;
  • the first headset forwards the received second scene change instruction to the terminal.
  • the first audio data is media audio data
  • the second audio data is voice call data
  • an embodiment of the present application provides a Bluetooth communication method applied to a terminal, and the method includes:
  • a first ACL link is established between the terminal and the first headset, and a second ACL link is established between the first headset and the second headset; the first headset and the second headset are individual headsets of a true wireless stereo (TWS) Bluetooth headset;
  • the terminal sends first audio data to the first headset through the first ACL link, and the first headset forwards the first audio data to the second headset through the second ACL link;
  • the terminal receives a first scene change instruction
  • a first SCO link is established between the terminal and the first headset, and a second SCO link is established between the terminal and the second headset;
  • the second audio data is transmitted between the terminal and the first headset through the first SCO link, and the second audio data is transmitted between the terminal and the second headset through the second SCO link.
  • the first scene change instruction is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene;
  • transmitting the second audio data between the terminal and the first headset through the first SCO link, and transmitting the second audio data between the terminal and the second headset through the second SCO link, includes:
  • the terminal receives second audio data in a first language from the first headset through the first SCO link, where the second audio data in the first language is collected by the first headset; the terminal translates the second audio data from the first language into a second language, and sends the translated second audio data in the second language to the second headset through the second SCO link, so that the second earphone plays the second audio data in the second language;
  • the terminal receives second audio data in a second language from the second headset through the second SCO link, where the second audio data in the second language is collected by the second headset; the terminal translates the second audio data from the second language into the first language, and sends the translated second audio data in the first language to the first headset through the first SCO link, so that the first earphone plays the second audio data in the first language.
  • the method further includes: the terminal receives translation configuration information input by a user, where the translation configuration information is used to indicate that the collection language of the first headset is the first language and the collection language of the second headset is the second language.
  • after a first SCO link is established between the terminal and the first headset and a second SCO link is established between the terminal and the second headset, the method further includes:
  • the terminal receives a second scene change instruction
  • the first audio data is media audio data
  • the second audio data is voice call data
  • an embodiment of the present application provides a chip that includes at least one communication interface, at least one processor, and at least one memory.
  • the communication interface, the memory, and the processor are interconnected by a bus; the processor executes the instructions stored in the memory to perform the Bluetooth communication method according to any one of the fourth aspect, or the Bluetooth communication method according to any one of the fifth aspect.
  • an embodiment of the present application provides a storage medium, where the storage medium is used to store a computer program; when the computer program is executed by a computer or a processor, it is used to implement the Bluetooth communication method according to any one of the fourth aspect.
  • embodiments of the present application provide a computer program product, where the computer program product includes instructions; when the instructions are executed by a computer or a processor, the Bluetooth communication method according to any one of the fourth aspect, or the Bluetooth communication method according to any one of the fifth aspect, is implemented.
  • in the embodiments of the present application, a first ACL link is established between the terminal and the main headset, and a second ACL link is established between the main headset and the secondary headset; in response to receiving the first scene change instruction, the terminal establishes a first SCO link with the main headset and a second SCO link with the secondary headset.
  • FIG. 1 is a system architecture diagram provided by an embodiment of the application;
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • FIG. 3A to FIG. 3E are schematic diagrams of terminal interfaces in an application scenario provided by embodiments of this application;
  • FIG. 4A is a schematic diagram of a link between a terminal and a Bluetooth headset in a music listening scenario provided by an embodiment of the application;
  • FIG. 4B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a music listening scenario provided by an embodiment of the application;
  • FIG. 5A is a schematic diagram of a link between a terminal and a Bluetooth headset in a call scenario provided by an embodiment of the application;
  • FIG. 5B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a call scenario provided by an embodiment of the application;
  • FIG. 6 is a schematic flowchart of a Bluetooth communication method provided by an embodiment of the application.
  • FIG. 7A is a schematic diagram of the Bluetooth headset provided by an embodiment of the application switching from a music listening scene to a simultaneous translation scene;
  • FIG. 7B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a simultaneous translation scenario provided by an embodiment of the application;
  • FIG. 8A is a schematic diagram of the Bluetooth headset provided by an embodiment of the application being switched from a single-person call scenario to a two-person call scenario;
  • FIG. 8B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a two-person call scenario provided by an embodiment of the application;
  • FIG. 9 is a schematic diagram of a process of entering a simultaneous translation mode according to an embodiment of the application.
  • FIG. 10 is a schematic diagram of another process for entering the simultaneous translation mode according to an embodiment of the application.
  • FIG. 11 is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a simultaneous translation scenario provided by an embodiment of the application;
  • FIG. 12 is a schematic diagram of a translation configuration interface of a terminal provided by an embodiment of the application.
  • FIG. 13 is a schematic diagram of another translation configuration interface of a terminal provided by an embodiment of the application.
  • FIG. 14 is a schematic flowchart of a Bluetooth communication method provided by an embodiment of the application.
  • FIG. 15 is a schematic diagram of the link between the terminal and the Bluetooth headset after exiting the simultaneous translation mode according to an embodiment of the application;
  • FIG. 16 is a schematic structural diagram of a TWS Bluetooth headset provided by an embodiment of the application.
  • FIG. 17 is a schematic structural diagram of a terminal provided by an embodiment of the application.
  • FIG. 1 is a system architecture diagram provided by an embodiment of the application. As shown in FIG. 1, the system includes a terminal 10 and a Bluetooth headset 20, and the terminal 10 and the Bluetooth headset 20 are connected through Bluetooth.
  • the Bluetooth headset is a headset that supports the Bluetooth communication protocol.
  • the Bluetooth communication protocol may be the classic Bluetooth protocol (for example, BR or EDR), or the BLE low energy Bluetooth protocol. Of course, it can also be another new Bluetooth protocol type introduced in the future.
  • the version of the Bluetooth communication protocol can be any of the following: 1.0 series version, 2.0 series version, 3.0 series version, 4.0 series version, and other series versions based on future releases.
  • the Bluetooth headset in this embodiment is a TWS Bluetooth headset.
  • the TWS Bluetooth headset includes a first headset and a second headset.
  • the first earphone is called the main earphone
  • the second earphone is called the auxiliary earphone.
  • the main earphone and the auxiliary earphone are each equipped with a Bluetooth module, and can transmit data to each other through the Bluetooth protocol. In appearance, the TWS headset has no connection line between the main headset and the secondary headset, which makes it convenient to carry and easy to use.
  • both the main earphone and the auxiliary earphone include a microphone; that is, in addition to audio playback, the main earphone and the auxiliary earphone also support audio collection.
  • the Bluetooth headset in this application can support one or more of the following profiles: the HSP (Headset Profile), the HFP (Hands-free Profile), the A2DP (Advanced Audio Distribution Profile), and the AVRCP (Audio/Video Remote Control Profile).
  • HSP Headset Profile
  • HFP Hands-free Profile
  • A2DP Advanced Audio Distribution Profile
  • AVRCP Audio/Video Remote Control Profile
  • the HSP application represents the headset application, which provides the basic functions required for communication between the terminal and the headset.
  • the Bluetooth headset can be used as the audio input and output interface of the terminal.
  • the HFP application stands for hands-free application.
  • the HFP application adds some extended functions to the HSP application.
  • the Bluetooth headset can control the call process of the terminal, such as answering, hanging up, rejecting, and voice dialing.
  • the A2DP application is an advanced audio transmission application.
  • A2DP can use the chip in the headset to process audio data to achieve high-definition sound.
  • the AVRCP application is an audio and video remote control application.
  • the AVRCP application defines the characteristics of how to control streaming media, including: pause, stop, start playback, volume control and other types of remote control operations.
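  • The division of functions among the four profiles described above can be summarized in a small lookup table. The following Python sketch is illustrative only: the profile names and capabilities come from the description above, but the data structure and helper function are hypothetical, not part of the Bluetooth specification.

```python
# Illustrative summary of the four Bluetooth profiles described above.
PROFILES = {
    "HSP": {"full_name": "Headset Profile",
            "capabilities": {"basic audio I/O"}},
    "HFP": {"full_name": "Hands-free Profile",
            "capabilities": {"basic audio I/O", "answer", "hang up",
                             "reject", "voice dialing"}},
    "A2DP": {"full_name": "Advanced Audio Distribution Profile",
             "capabilities": {"high-definition audio streaming"}},
    "AVRCP": {"full_name": "Audio/Video Remote Control Profile",
              "capabilities": {"pause", "stop", "start playback",
                               "volume control"}},
}

def supports(profile: str, capability: str) -> bool:
    """Return True if the named profile provides the capability."""
    return capability in PROFILES.get(profile, {}).get("capabilities", set())

# HFP extends HSP with call control, so voice dialing is HFP-only:
assert supports("HFP", "voice dialing")
assert not supports("HSP", "voice dialing")
```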
  • the terminal 100 may be any device with computing and processing capabilities.
  • the terminal can also have audio and video playback and interface display functions.
  • the terminal can be a mobile phone, a computer, a smart TV, a vehicle-mounted device, a wearable device, an industrial device, and so on.
  • the terminal 100 supports the Bluetooth communication protocol.
  • the terminal is an electronic device.
  • the structure of the electronic device will be described below with reference to FIG. 2.
  • FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and so on.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through an I2C interface, so that the processor 110 and the touch sensor 180K communicate through an I2C bus interface to implement the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled with the audio module 170 through an I2S bus to realize communication between the processor 110 and the audio module 170.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through an I2S interface, so as to realize the function of answering calls through a Bluetooth headset.
  • the PCM interface can also be used for audio communication to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to realize the function of answering calls through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
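  • The sampling, quantization, and encoding that the PCM interface performs on an analog signal can be sketched in a few lines of Python. The sample rate, bit depth, and little-endian packing below are common choices used for illustration, not values mandated by the PCM interface itself.

```python
import math
import struct

def pcm_encode(samples):
    """Quantize float samples in [-1.0, 1.0] to signed 16-bit integers
    and pack them little-endian -- the quantize/encode steps of PCM."""
    ints = [max(-32768, min(32767, round(s * 32767))) for s in samples]
    return struct.pack("<%dh" % len(ints), *ints)

# A 1 kHz tone sampled at 8 kHz (a typical voice-call rate), 8 samples:
rate, freq = 8000, 1000
analog = [math.sin(2 * math.pi * freq * n / rate) for n in range(8)]
pcm = pcm_encode(analog)
assert len(pcm) == 16  # 8 samples x 2 bytes each
```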
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a two-way communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 with the display 194, the camera 193 and other peripheral devices.
  • the MIPI interface includes a camera serial interface (camera serial interface, CSI), a display serial interface (display serial interface, DSI), and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display 194 communicate through a DSI interface to realize the display function of the electronic device 100.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is merely a schematic description, and does not constitute a structural limitation of the electronic device 100.
  • the electronic device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves from the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves for radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal. Then the demodulator transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, perform frequency modulation, amplify it, and convert it into electromagnetic wave radiation via the antenna 2.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the electronic device 100 implements a display function through a GPU, a display 194, and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs, which execute program instructions to generate or change display information.
  • the display 194 is used to display images, videos, etc.
  • the display 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and so on.
  • the electronic device 100 may include one or N displays 194, and N is a positive integer greater than one.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the photosensitive element of the camera through the lens, the light signal is converted into an electrical signal, and the photosensitive element of the camera transfers the electrical signal to the ISP for processing and is converted into an image visible to the naked eye.
  • ISP can also optimize the image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be provided in the camera 193.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects the frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in a variety of encoding formats, such as: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, and so on.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent cognition of the electronic device 100, such as image recognition, face recognition, voice recognition, text understanding, and so on.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store an operating system, at least one application program (such as a sound playback function, an image playback function, etc.) required by at least one function.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be provided in the processor 110, or part of the functional modules of the audio module 170 may be provided in the processor 110.
  • the process of establishing a service connection between the terminal and the Bluetooth headset includes three phases, which are the scanning phase, the pairing phase, and the service connection establishment phase. The detailed description will be given below with reference to FIGS. 3A to 3E.
  • FIGS. 3A to 3E are schematic diagrams of terminal interfaces in an application scenario provided by embodiments of this application.
  • select "Bluetooth" in the terminal setting interface to enter the Bluetooth setting interface, which is shown in FIG. 3B.
  • when the terminal receives an instruction corresponding to the user operating the Bluetooth enable option, the terminal turns on the Bluetooth function.
  • the terminal can find nearby Bluetooth devices that can be paired, and display the scanned Bluetooth devices in the "Available Devices" list.
  • FIG. 3B illustrates a situation where the current terminal device HUAWEIP30 scans the Bluetooth device HUAWEIMate20 and the Bluetooth headset 1. This stage is called the scan stage.
  • when the terminal detects that the user taps a Bluetooth device in the "Available devices" list, the terminal pairs with that Bluetooth device.
  • when the terminal detects that the user taps "Bluetooth headset 1" in the "Available devices" list, the terminal is paired with Bluetooth headset 1. If the pairing succeeds, "Bluetooth headset 1" is displayed in the "Paired devices" list, as shown in FIG. 3D. This stage is called the pairing stage.
  • a Bluetooth device that has established a service connection is displayed in the "Paired devices" list.
  • Bluetooth devices that have established a service connection, such as HUAWEI FreeBuds, are also displayed in the "Paired devices" list.
  • when the terminal detects that the user taps a certain Bluetooth device in the "Paired devices" list, the terminal establishes a service connection with that Bluetooth device. As shown in FIG. 3E, when the terminal detects that the user taps "Bluetooth headset 1" in the "Paired devices" list, the terminal establishes a service connection with Bluetooth headset 1. If the service connection is established successfully, audio data can be transmitted between the terminal and Bluetooth headset 1. This stage is called the service connection establishment stage. The interaction processes between the terminal and the Bluetooth headset described in the subsequent embodiments are all performed after the terminal and the Bluetooth headset have established a service connection.
  • FIGS. 3A to 3E are only an example.
  • the setting interface and operation mode corresponding to different terminal devices may be different.
  • audio data can be transmitted between the terminal and the Bluetooth headset.
  • for TWS Bluetooth headsets, when the terminal establishes a connection with the Bluetooth headset, the terminal only establishes a Bluetooth connection with the main headset; there is no need to establish a Bluetooth connection between the terminal and the secondary headset.
  • the main earphone and the auxiliary earphone communicate with each other by data forwarding.
  • the terminal sends audio data to the main earphone, and after the main earphone receives the audio data, it forwards the audio data to the auxiliary earphone, thereby realizing synchronous sound production between the main earphone and the auxiliary earphone.
  • the Bluetooth physical link between the terminal and the TWS Bluetooth headset is divided into two types, one is an asynchronous connectionless (ACL) link, and the other is a synchronous connection oriented (SCO) link.
  • the ACL link is the basic Bluetooth connection. It is generally used to transmit connection negotiation signaling and to maintain the Bluetooth connection.
  • the ACL link also supports one-way transmission of audio data. Exemplarily, when the terminal sends audio data to the main earphone through the ACL link, the main earphone cannot send audio data to the terminal at the same time.
  • the SCO link is a connection technology supported by the Bluetooth baseband, which uses reserved time slots to transmit data.
  • the SCO link supports two-way transmission of audio data. Exemplarily, when the terminal sends audio data to the main earphone through the SCO link, the main earphone may also send audio data to the terminal through the SCO link.
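  • The directional difference between the two link types can be captured in a toy model: an ACL link carries audio in only one direction at a time, while the reserved time slots of an SCO link allow both directions at once. The class below is a hypothetical sketch for illustration, not the Bluetooth baseband API.

```python
class Link:
    """Toy model of the two Bluetooth physical link types described
    above; an illustration only, not the Bluetooth baseband API."""

    def __init__(self, kind: str):
        assert kind in ("ACL", "SCO")
        self.kind = kind
        self.sending = False  # True while an ACL transfer is in progress

    def send_audio(self) -> bool:
        """Try to start an audio stream; return False if refused."""
        if self.kind == "SCO":
            return True   # reserved slots allow both directions at once
        if self.sending:
            return False  # ACL carries audio one way at a time
        self.sending = True
        return True

acl, sco = Link("ACL"), Link("SCO")
assert acl.send_audio() is True    # terminal -> main earphone
assert acl.send_audio() is False   # reverse stream refused while busy
assert sco.send_audio() is True    # terminal -> main earphone
assert sco.send_audio() is True    # main earphone -> terminal, at the same time
```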
  • the current TWS Bluetooth headsets usually have the basic functions of listening to music and making calls.
  • the following describes the communication process between the terminal and the Bluetooth headset in combination with two application scenarios of listening to music and making a call.
  • FIG. 4A is a schematic diagram of a link between a terminal and a Bluetooth headset in a music listening scenario provided by an embodiment of the application.
  • in the music listening scene, the terminal establishes an ACL link with the main earphone of the TWS Bluetooth headset, and an ACL link is also established between the main earphone and the auxiliary earphone.
  • FIG. 4B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a music listening scenario provided by an embodiment of the application.
  • the terminal receives the music to be played selected by the user.
  • the terminal sends the currently played audio data to the main earphone through the ACL link, and the main earphone forwards the received audio data to the secondary earphone through the ACL link.
  • the Bluetooth headset controls the primary headset and the secondary headset to play the audio data synchronously, so that the user can listen to music through the worn primary headset and/or secondary headset.
  • the wearing state of the main earphone and the auxiliary earphone can also be detected, and audio data can be transmitted according to the wearing state of the main earphone and the auxiliary earphone.
  • when it is detected that the user is wearing both the main earphone and the auxiliary earphone, the terminal sends the currently played audio data to the main earphone through the ACL link, and the main earphone forwards the received audio data to the auxiliary earphone through the ACL link, so that the main earphone and the auxiliary earphone sound synchronously.
  • when it is detected that the user wears only the main earphone, the terminal sends the currently played audio data to the main earphone through the ACL link, and the main earphone does not need to forward the audio data to the auxiliary earphone.
  • when it is detected that the user wears only the secondary headset, the roles of the primary headset and secondary headset can be switched first, that is, the primary headset is switched to the secondary headset, and the secondary headset is switched to the primary headset. Then the terminal sends the currently played audio data to the new main earphone through the ACL link, and the main earphone does not need to forward the audio data to the secondary earphone.
  • a sensor-based wearing detection technology may be used: sensors are provided in the main earphone and the auxiliary earphone and are used to collect wearing state signals. According to the wearing state signal collected by the sensor, it can be determined whether the main earphone or the auxiliary earphone is being worn.
  • the aforementioned sensor may be one or more of an optical proximity sensor, a pressure sensor, a thermal sensor, and a moisture sensor.
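  • A minimal sketch of such sensor-based wearing detection, assuming an optical proximity reading and a pressure reading are fused with simple thresholds; the sensor scales and threshold values are illustrative assumptions, not values from this application:

```python
def is_worn(proximity: float, pressure: float,
            prox_threshold: float = 0.8, press_threshold: float = 0.5) -> bool:
    """Hypothetical fusion rule: the earphone is considered worn when
    both the optical proximity reading and the pressure reading exceed
    their thresholds (all values here are illustrative)."""
    return proximity > prox_threshold and pressure > press_threshold

assert is_worn(proximity=0.95, pressure=0.7)      # in the ear
assert not is_worn(proximity=0.10, pressure=0.0)  # in the charging case
```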
  • FIG. 5A is a schematic diagram of a link between a terminal and a Bluetooth headset in a call scenario provided by an embodiment of the application.
  • in the call scenario, the terminal establishes an SCO link with the main headset of the TWS Bluetooth headset, and an SCO link is also established between the main headset and the secondary headset.
  • FIG. 5B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a call scenario provided by an embodiment of the application.
  • the user wears the main headset and the secondary headset at the same time.
  • the microphone of the main headset performs voice collection
  • the microphone of the secondary headset does not perform voice collection.
  • the microphone of the main headset collects the first voice data spoken by the user, and sends the collected first voice data to the terminal through the SCO link, and the terminal performs voice processing and transmits it to the calling party through wireless transmission technology.
  • after receiving the second voice data of the call partner, the terminal sends the second voice data to the main headset through the SCO link, and the main headset forwards the received second voice data to the secondary headset through the SCO link.
  • the Bluetooth headset controls the primary headset and the secondary headset to synchronously play the second voice data, so that the user can listen to the second voice data of the calling party through the worn primary headset and the secondary headset.
  • the main headset refers to the headset that establishes a connection link with the terminal, and does not limit the left headset or the right headset.
  • the main earphone is the left earphone and the secondary earphone is the right earphone.
  • the main earphone is the right earphone and the secondary earphone is the left earphone.
  • the roles of the main headset and the secondary headset can also be switched. In one possible way, the roles are switched according to the power states of the main earphone and the auxiliary earphone; for example, when the battery level of the main earphone is lower than that of the auxiliary earphone, the main earphone is switched to the auxiliary earphone and the auxiliary earphone is switched to the main earphone. In another possible way, the roles are switched according to the wearing states of the main earphone and the auxiliary earphone.
  • when the Bluetooth headset detects that the main headset is out of the ear or has fallen, the main headset is switched to the secondary headset, and the secondary headset is switched to the primary headset.
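  • The two switching criteria described above (power state and wearing state) can be combined into one hypothetical selection rule. The function name and the priority given to the wearing state are illustrative assumptions, not part of this application:

```python
def choose_main(battery_main: int, battery_aux: int,
                main_worn: bool, aux_worn: bool) -> str:
    """Hypothetical role-selection rule: wearing state takes priority
    (an out-of-ear or fallen earphone never stays main); otherwise the
    earphone with the higher battery level becomes the main earphone."""
    if main_worn and not aux_worn:
        return "main stays main"
    if aux_worn and not main_worn:
        return "roles switch"
    return "main stays main" if battery_main >= battery_aux else "roles switch"

assert choose_main(80, 20, True, True) == "main stays main"
assert choose_main(80, 20, False, True) == "roles switch"  # main fell out
assert choose_main(10, 90, True, True) == "roles switch"   # main battery low
```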
  • the current TWS Bluetooth headset usually only has the basic functions of listening to music and making calls, making the function of the Bluetooth headset relatively simple.
  • an embodiment of the present application provides a Bluetooth communication method.
  • the microphones of the main headset and the secondary headset of the TWS Bluetooth headset collect voice data of different users, and the TWS Bluetooth headset and the terminal are used to realize the function of simultaneous translation.
  • user A who speaks a first language wears a main headset
  • user B who speaks a second language wears a secondary headset.
  • the microphone of the main headset collects the voice data of the first language spoken by user A and sends it to the terminal, which translates it into the second language and transmits it to the secondary headset.
  • the microphone of the secondary earphone collects the voice data of the second language spoken by the user B and sends it to the terminal, and the terminal translates it into the first language and transmits it to the main earphone.
  • a barrier-free conversation between user A and user B is realized.
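  • The terminal's routing rule in this simultaneous translation scene can be sketched as follows; the `translate` callable stands in for whatever translation engine the terminal uses and is purely illustrative:

```python
def route(source: str, text: str, translate) -> tuple:
    """Sketch of the terminal's routing in the simultaneous translation
    scene: speech collected by one earphone is translated and delivered
    to the other earphone."""
    if source == "main":        # user A speaking the first language
        return ("auxiliary", translate(text, to="second language"))
    else:                       # user B speaking the second language
        return ("main", translate(text, to="first language"))

# toy stand-in for the terminal's translation engine
fake_translate = lambda text, to: f"[{to}] {text}"
assert route("main", "hello", fake_translate) == \
    ("auxiliary", "[second language] hello")
assert route("auxiliary", "bonjour", fake_translate) == \
    ("main", "[first language] bonjour")
```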
  • FIG. 6 is a schematic flowchart of a Bluetooth communication method provided by an embodiment of the application. As shown in Figure 6, the method includes:
  • S601 A first ACL link is established between the terminal and the main earphone, and a second ACL link is established between the main earphone and the secondary earphone.
  • S602 The terminal sends first audio data to the main earphone through the first ACL link, and the main earphone forwards the first audio data to the secondary earphone through the second ACL link.
  • the TWS Bluetooth headset is in a music listening scene.
  • a schematic diagram of the link between the terminal, the main earphone, and the auxiliary earphone is shown in FIG. 4A, that is, an ACL link is established between the terminal and the main earphone, and between the main earphone and the auxiliary earphone.
  • the ACL link between the terminal and the main earphone is called the first ACL link
  • the ACL link between the main earphone and the secondary earphone is called the second ACL link.
  • the audio transmission process between the terminal, the main earphone, and the auxiliary earphone is shown in Figure 4B. That is, the terminal sends the first audio data to the main earphone through the first ACL link, and the main earphone forwards the received first audio data to the secondary earphone through the ACL link.
  • the first audio data is media audio data, including but not limited to: music, audio data in film and television programs, and recorded audio data.
  • S603 The terminal receives a first scene change instruction.
  • the first scene change instruction is used to indicate that the application scene of the TWS Bluetooth headset has changed.
  • the application scenarios of the Bluetooth headset include, but are not limited to: the music listening scene, the call scene, the simultaneous translation scene, and the two-person conversation scene.
  • the link between the terminal and the Bluetooth headset in the music listening scene is shown in FIG. 4A
  • the audio data transmission process is shown in FIG. 4B.
  • the link between the terminal and the Bluetooth headset in the call scenario is shown in Figure 5A
  • the audio data transmission process is shown in Figure 5B.
  • in different application scenarios, the Bluetooth headset is worn and works in different ways. The following describes the wearing and working modes in several application scenarios.
  • in the music listening scene, the wearers corresponding to the main earphone and the auxiliary earphone are the same.
  • the main earphone and the auxiliary earphone only perform audio playback, and the audio data played by the main earphone and the auxiliary earphone are the same.
  • the microphones of the main earphone and the auxiliary earphone do not collect audio.
  • in the call scene, the main earphone can perform both audio playback and audio collection, while the auxiliary earphone can only perform audio playback and cannot perform audio collection. Moreover, the audio data played by the main earphone and the auxiliary earphone are the same.
  • in the simultaneous translation scene, the main earphone and the auxiliary earphone correspond to different wearers. Both the main earphone and the auxiliary earphone can perform audio playback and audio collection at the same time; the audio data they play are different, and the audio data they collect are also different.
  • in the two-person conversation scene, the main headset and the secondary headset correspond to different wearers. Both the main headset and the secondary headset can perform audio playback and audio collection at the same time; the audio data they collect are different, while the audio data they play are the same.
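  • The wearing and working modes of the scenes described above can be summarized in one table-like structure. The scene names and keys below paraphrase the description and are not names from this application or any API:

```python
# Summary of the wearing/working modes described above (keys are
# descriptive paraphrases for illustration only).
SCENES = {
    "music listening": {"same_wearer": True,  "main_collects": False,
                        "aux_collects": False, "same_playback": True},
    "call":            {"same_wearer": True,  "main_collects": True,
                        "aux_collects": False, "same_playback": True},
    "translation":     {"same_wearer": False, "main_collects": True,
                        "aux_collects": True,  "same_playback": False},
    "conversation":    {"same_wearer": False, "main_collects": True,
                        "aux_collects": True,  "same_playback": True},
}

# Only the two-wearer scenes have both microphones collecting audio:
both_collect = {name for name, mode in SCENES.items()
                if mode["main_collects"] and mode["aux_collects"]}
assert both_collect == {"translation", "conversation"}
```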
  • the terminal receives the first scene change instruction, which may be directly input by the user to the terminal, or input by the user to the TWS Bluetooth headset and forwarded to the terminal by the TWS Bluetooth headset.
  • the TWS Bluetooth headset receives a first operation input by the user to the TWS Bluetooth headset, and in response to the first operation, the TWS Bluetooth headset sends a first scene change instruction to the terminal.
  • the first operation can be input to the Bluetooth headset.
  • the user can input the first operation to the Bluetooth headset in many ways.
  • the body of the Bluetooth headset is provided with a mode switch button, and the user can operate the mode switch button.
  • the user can click the mode switch button, double-click the mode switch button, or touch the mode switch button to input the first operation to the Bluetooth headset.
  • the Bluetooth headset can recognize a preset voice command, and the user can input the preset voice command to the Bluetooth headset to change the scene. For example, when the user needs to use the Bluetooth headset in a music listening scene, the user can input the voice "enter music listening mode" into the Bluetooth headset. When the user needs to use the Bluetooth headset in a simultaneous translation scene, the user can input the voice "enter simultaneous translation mode" into the Bluetooth headset.
  • the terminal receives the first scene change instruction input by the user.
  • the user may input the first scene change instruction to the terminal in various ways.
  • the first scene change instruction is input to the terminal by tapping the mode switching control, double-clicking the mode switching control, touching the mode switching control, sliding the mode switching control, etc.
  • it is also possible to input the first scene change instruction to the terminal through voice interaction, for example, by inputting the voice "enter simultaneous translation mode" to the terminal.
  • sensors are provided in the main earphone and the auxiliary earphone to collect the wearing state signals of the main earphone and the auxiliary earphone.
  • the sensor may be one or more of an optical proximity sensor, a pressure sensor, a thermal sensor, and a moisture sensor. Whether the current application scenario has switched can be intelligently determined according to the wearing state signal collected by the sensor of the main earphone and the wearing state signal collected by the sensor of the secondary earphone. Exemplarily, taking a thermal sensor as an example, the wearing state signal collected by the thermal sensor indicates the body temperature of the wearer.
  • the wearing state signals corresponding to the main earphone and the auxiliary earphone may be the same or similar, or may differ greatly.
  • when the difference between the body temperature indicated by the wearing state signal of the main earphone and the body temperature indicated by the wearing state signal of the auxiliary earphone is less than a first threshold, the user is wearing the main earphone and the auxiliary earphone at the same time; when the difference is greater than a second threshold, the user is wearing only the main earphone and not the auxiliary earphone.
  • when it is detected that the difference between the body temperature of the wearer indicated by the wearing state signal of the main earphone and that indicated by the wearing state signal of the auxiliary earphone is between the first threshold and the second threshold, it can be determined that the wearers corresponding to the main earphone and the auxiliary earphone are different, that is, the application scenario of the TWS Bluetooth headset has changed.
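The threshold logic above can be sketched as follows. The threshold values, the temperature units, and the helper name `detect_scene_change` are illustrative assumptions, not values given in this application.

```python
# Sketch of the wearing-state decision: compare the body temperatures reported
# by the two earphones' thermal sensors. Thresholds are illustrative only.

FIRST_THRESHOLD = 0.5    # °C: below this, temperatures match -> same wearer
SECOND_THRESHOLD = 10.0  # °C: above this, the secondary earphone is not worn

def detect_scene_change(main_temp, secondary_temp):
    """Return True when the two earphones appear to be on different wearers."""
    diff = abs(main_temp - secondary_temp)
    if diff < FIRST_THRESHOLD:
        return False          # same wearer: one user wears both earphones
    if diff > SECOND_THRESHOLD:
        return False          # secondary earphone is not being worn at all
    # Between the two thresholds: likely two different wearers, so the
    # application scenario of the TWS headset has changed.
    return True
```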
  • S604 In response to the first scene change instruction, establish a first SCO link between the terminal and the main headset, and establish a second SCO link between the terminal and the secondary headset.
  • the second audio data is transmitted between the terminal and the main headset through the first SCO link, and the second audio data is transmitted between the terminal and the secondary headset through the second SCO link.
  • the first scene change instruction may be an AT (attention) instruction.
  • AT command refers to the control command used in the Bluetooth communication protocol.
  • the existing AT command can be reused, and a certain information field in the existing AT command can be set as a value for indicating the changed application scenario.
  • An AT command can also be added to indicate changes in application scenarios.
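As a rough illustration of carrying the changed application scenario in an AT command information field, the sketch below encodes it in a hypothetical vendor command. The command name `+SCENE` and the scene values are assumptions, not part of the Bluetooth specification or of this application.

```python
# Illustrative only: the application does not define a concrete AT command,
# so "+SCENE" and the scene numbering below are hypothetical.

SCENES = {
    0: "listen_music",
    1: "call",
    2: "simultaneous_translation",
    3: "two_person_call",
}

def build_scene_change_cmd(scene_id):
    """Build a vendor-style AT command whose information field is the scene."""
    return f"AT+SCENE={scene_id}\r"

def parse_scene_change_cmd(raw):
    """Parse the hypothetical command back into a scene name."""
    body = raw.strip()
    if not body.startswith("AT+SCENE="):
        raise ValueError("not a scene-change command")
    return SCENES[int(body[len("AT+SCENE="):])]
```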
  • the first scene change instruction is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene.
  • the link established between the terminal and the Bluetooth headset is different from the link established in the call scenario shown in FIG. 5A.
  • the terminal not only establishes an SCO link with the main headset, but also establishes an SCO link between the terminal and the secondary headset.
  • the SCO link between the terminal and the main headset is called the first SCO link
  • the SCO link between the terminal and the secondary headset is called the second SCO link.
  • the second audio data is transmitted between the terminal and the main headset through the first SCO link, and the second audio data is transmitted between the terminal and the secondary headset through the second SCO link.
  • the second audio data is voice call data.
  • it can be the voice data during the user's voice call, or the voice data during the user's video call.
  • the first SCO link and the second SCO link are both bidirectional links, and both can transmit voice data in both directions.
  • the main earphone may also send the voice data to the terminal through the first SCO link at the same time.
  • the secondary headset can also simultaneously send voice data to the terminal through the second SCO link.
  • the first SCO link and the second SCO link are independent of each other and do not interfere with each other.
  • the Bluetooth communication method of the embodiment of the present application can realize the switching of the application scene of the TWS Bluetooth headset.
  • the following describes the changes in the link and the changes in data transmission after switching from the music listening scene to the simultaneous translation scene with reference to FIGS. 7A and 7B.
  • FIG. 7A is a schematic diagram of the Bluetooth headset provided by an embodiment of the application switching from a music listening scene to a simultaneous translation scene.
  • the first ACL link is between the terminal and the main earphone
  • the second ACL link is between the main earphone and the secondary earphone.
  • a first SCO link is established between the terminal and the main headset
  • a second SCO link is established between the terminal and the secondary headset. Therefore, in the simultaneous translation scenario, there are two links between the terminal and the main headset, namely the first ACL link and the first SCO link.
  • the terminal has a translation function.
  • after the terminal establishes the SCO links with the main headset and the secondary headset, the user can use the Bluetooth headset and the terminal to perform simultaneous translation.
  • the main earphone collects second audio data in a first language and sends the second audio data in the first language to the terminal through the first SCO link; the terminal translates the second audio data in the first language into second audio data in a second language and sends the translated second audio data in the second language to the auxiliary earphone through the second SCO link, and the auxiliary earphone plays the second audio data in the second language.
  • the auxiliary earphone collects second audio data in the second language and sends the second audio data in the second language to the terminal through the second SCO link; the terminal translates the second audio data in the second language into second audio data in the first language and sends the translated second audio data in the first language to the main earphone through the first SCO link, and the main earphone plays the second audio data in the first language.
  • FIG. 7B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a simultaneous translation scenario provided by an embodiment of the application.
  • user A wears the main headset
  • user B wears the secondary headset.
  • the voice data in the first language spoken by user A is collected by the main headset and sent to the terminal through the first SCO link.
  • the terminal translates the voice data into the second language, and the translated voice data in the second language is sent to the secondary headset through the second SCO link, so that user B can hear the voice data in the second language.
  • the voice data of the second language spoken by user B is collected by the secondary headset and sent to the terminal through the second SCO link.
  • the terminal translates the voice data into the first language, and the translated voice data in the first language is sent to the main headset through the first SCO link, so that user A can hear the voice data in the first language.
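The bidirectional flow of FIG. 7B can be sketched as a single routing step on the terminal. The frame layout, the link names `sco1`/`sco2`, and the toy phrase-table translator are illustrative assumptions.

```python
# Sketch of the terminal's per-frame routing in the simultaneous translation
# scenario: a frame arriving on one SCO link is translated and addressed to
# the other SCO link. The phrase table stands in for a real translator.

PHRASES = {
    ("zh", "en"): {"你好": "hello"},
    ("en", "zh"): {"hello": "你好"},
}

def translate(text, src, dst):
    """Toy translation: look the phrase up, pass unknown text through."""
    return PHRASES[(src, dst)].get(text, text)

def route(frame):
    """Route one voice frame between the two SCO links."""
    if frame["link"] == "sco1":                 # from the main earphone (zh)
        return {"link": "sco2", "text": translate(frame["text"], "zh", "en")}
    else:                                       # from the secondary earphone (en)
        return {"link": "sco1", "text": translate(frame["text"], "en", "zh")}
```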
  • the first ACL link is established between the terminal and the main earphone
  • the second ACL link is established between the main earphone and the secondary earphone; in response to the first scene change instruction received by the terminal,
  • a first SCO link is established between the terminal and the main headset
  • a second SCO link is established between the terminal and the secondary headset.
  • Figure 7A and Figure 7B illustrate schematic diagrams of switching from listening to music scenes to simultaneous translation scenes.
  • other scene switching situations are also possible, for example, switching from a single-person call scene to a two-person call scene, which is described below in conjunction with Figure 8A and Figure 8B.
  • FIG. 8A is a schematic diagram of the Bluetooth headset provided by an embodiment of the application being switched from a single-person call scenario to a two-person call scenario.
  • before switching, the Bluetooth headset is in single-person call mode.
  • User A wears the main earphone and the auxiliary earphone at the same time to make a voice call with user C.
  • when user A determines that it is necessary to switch to a two-person call scenario, for example, user A wants user B next to him to join the call.
  • user A hands the secondary headset to user B to wear, and user A inputs the first scene change instruction to the main headset or the terminal. That is to say, after switching to the two-person call scenario, user A wears the main headset, user B wears the secondary headset, and user A and user B use the same terminal to make a voice call with user C.
  • the first SCO link is between the terminal and the main headset
  • the third SCO link is between the main headset and the secondary headset.
  • the microphone of the main headset collects audio data.
  • the audio data transmission process is similar to that of FIG. 5B, that is, the microphone of the main headset collects the voice data of user A and sends the voice data to the terminal through the first SCO link.
  • the terminal sends the voice data to user C.
  • the terminal receives the voice data of user C and sends the received voice data to the main headset through the first SCO link.
  • the main headset sends the received voice data to the secondary headset through the third SCO link, so that the voice played by the main earphone and the auxiliary earphone is synchronized, and user A listens to user C's speech content through the main earphone and the auxiliary earphone.
  • after receiving the first scene change instruction input by the user, the terminal establishes a second SCO link with the secondary headset.
  • the first SCO link between the terminal and the main headset and the third SCO link between the main headset and the secondary headset remain, and a second SCO link is added between the terminal and the secondary headset.
  • FIG. 8B is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a two-person call scenario provided by an embodiment of the application.
  • the microphones of the main headset and the secondary headset both collect audio data.
  • the main earphone collects the audio data of the user A, and sends the audio data of the user A to the terminal through the first SCO link, so that the terminal sends the audio data of the user A to the user C.
  • the secondary earphone collects the audio data of the user B, and sends the audio data of the user B to the terminal through the second SCO link, so that the terminal sends the audio data of the user B to the user C.
  • the terminal mixes the audio data of user A received through the first SCO link with the audio data of user B received through the second SCO link, and then sends the mixed audio data to user C.
  • after the terminal receives the voice data of user C, it sends the voice data to the main headset through the first SCO link and to the secondary headset through the second SCO link, so that both user A and user B can listen to the content of user C's speech.
  • alternatively, the terminal can send the voice data to the main headset only through the first SCO link, and the main headset forwards the audio data to the secondary headset through the third SCO link.
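The uplink mixing step described above can be sketched as follows, assuming 16-bit PCM samples. Simple summation with clipping is one common mixing strategy; the application does not specify which one the terminal uses.

```python
# Sketch of the mixing step: the terminal sums the PCM samples received on
# the two SCO links and clamps each sum to the 16-bit range before sending
# the mixed stream to user C.

def mix(pcm_a, pcm_b):
    """Mix two equal-length lists of 16-bit PCM samples with clipping."""
    mixed = []
    for a, b in zip(pcm_a, pcm_b):
        s = a + b
        mixed.append(max(-32768, min(32767, s)))  # clamp to int16 range
    return mixed
```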
  • the TWS Bluetooth headset sends the first scene change instruction to the terminal in response to the first operation input by the user.
  • the first operation may be input to the main earphone by the user wearing the main earphone, or input to the auxiliary earphone by the user wearing the auxiliary earphone.
  • FIG. 9 is a schematic diagram of a process of entering a simultaneous translation mode according to an embodiment of the application. As shown in Figure 9, the method includes:
  • S901 The main headset receives the first operation input by the user.
  • the first operation is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene.
  • the user wearing the main earphone can operate the mode switch button on the main earphone, or input a preset voice command into the main earphone, for example, the user inputs the voice "enter simultaneous translation mode" into the microphone of the main earphone.
  • S902 In response to the first operation, the main headset sends a physical address request message to the secondary headset.
  • S903 The secondary headset sends the physical address of the secondary headset to the main headset.
  • the primary headset may send a physical address request message to the secondary headset through the second ACL link.
  • the secondary headset can send the physical address of the secondary headset to the primary headset through the second ACL link.
  • the above physical address request message and the physical address of the secondary earphone can be transmitted through AT commands.
  • S904 The main headset sends the first scene change instruction and the physical address of the secondary headset to the terminal.
  • the primary headset sends the first scene change instruction and the physical address of the secondary headset to the terminal through the first ACL link.
  • the terminal receives the first scene change instruction and the physical address of the secondary headset through the first ACL link.
  • the first scene change instruction and the physical address of the secondary earphone can be sent at the same time, or sent sequentially.
  • the first scene change command and the physical address of the secondary earphone are transmitted through one AT command, or the first scene change command is sent through the first AT command, and then the physical address of the secondary earphone is sent through the second AT command.
  • S905 A first SCO link is established between the terminal and the main earphone, and a second SCO link is established between the terminal and the auxiliary earphone according to the physical address of the auxiliary earphone.
  • SCO links are established between the terminal and the main headset, and between the terminal and the secondary headset.
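The exchange of FIG. 9 can be sketched as plain message passing: the main earphone requests the secondary earphone's physical address over the second ACL link, then forwards it to the terminal with the scene-change instruction, and the terminal uses that address to set up the second SCO link. The dictionary fields and the function name are illustrative, not actual Bluetooth PDUs.

```python
# Sketch of the Figure-9 handshake. "bd_addr" stands for the earphone's
# Bluetooth device (physical) address; the message layout is hypothetical.

def enter_translation_mode(main, secondary, terminal):
    """Drive the flow: fetch the secondary's address, forward it with the
    scene-change instruction, then record both SCO link endpoints."""
    addr = secondary["bd_addr"]                    # address request/reply over the 2nd ACL link
    instruction = {                                # scene change sent over the 1st ACL link
        "cmd": "scene_change",
        "scene": "simultaneous_translation",
        "secondary_addr": addr,
    }
    terminal["links"] = {                          # terminal establishes both SCO links
        "sco1": main["bd_addr"],                   # first SCO link, to the main earphone
        "sco2": instruction["secondary_addr"],     # second SCO link, via the forwarded address
    }
    return terminal["links"]
```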
  • FIG. 10 is a schematic diagram of another process for entering the simultaneous translation mode according to an embodiment of the application. As shown in Figure 10, the method includes:
  • S1001 The secondary headset receives the first operation input by the user.
  • the first operation is used to instruct the application scene of the TWS Bluetooth headset to be changed to a simultaneous translation scene.
  • the user wearing the secondary headset can operate the mode switch button on the secondary headset, or input a preset voice command into the secondary headset, for example, the user inputs the voice "enter simultaneous translation mode" into the microphone of the secondary headset.
  • S1002 In response to the first operation, the secondary headset sends a first scene change instruction and the physical address of the secondary headset to the primary headset.
  • the secondary headset sends the first scene change instruction and the physical address of the secondary headset to the primary headset through the second ACL link.
  • the primary headset receives the first scene change instruction and the physical address of the secondary headset through the second ACL link.
  • Both the first scene change command sent by the secondary earphone to the main earphone and the physical address of the secondary earphone can be transmitted through AT commands.
  • the first scene change instruction and the physical address of the secondary earphone can be sent at the same time, or sent sequentially.
  • the first scene change command and the physical address of the secondary earphone are transmitted through one AT command, or the first scene change command is sent through the first AT command, and then the physical address of the secondary earphone is sent through the second AT command.
  • S1003 The main headset forwards the first scene change instruction and the physical address of the secondary headset to the terminal.
  • the main headset sends the first scene change instruction and the physical address of the secondary headset to the terminal through the first ACL link.
  • the terminal receives the first scene change instruction and the physical address of the secondary headset through the first ACL link.
  • the first scene change instruction and the physical address of the secondary earphone can be sent at the same time, or sent sequentially.
  • the first scene change command and the physical address of the secondary earphone are transmitted through one AT command, or the first scene change command is sent through the first AT command, and then the physical address of the secondary earphone is sent through the second AT command.
  • S1004 Establish a first SCO link between the terminal and the main headset, and establish a second SCO link between the terminal and the secondary headset according to the physical address of the secondary headset.
  • SCO links are established between the terminal and the main headset, and between the terminal and the secondary headset.
  • the terminal may use multiple transmission channels to perform data transmission with the main earphone and the auxiliary earphone.
  • the terminal may also receive the audio data sent by the secondary earphone through the second SCO link.
  • the terminal performs translation processing on the audio data received through the first SCO link, it can also perform translation processing on the audio data received through the second SCO link.
  • the terminal may also send the translated audio data to the secondary earphone through the second SCO link.
  • a Bluetooth chip is provided in the terminal, and the Bluetooth chip is a chip that supports two SCO links at the same time.
  • the Bluetooth chip establishes a connection with the Bluetooth software protocol stack layer through the Bluetooth driver layer.
  • the Bluetooth software protocol stack supports the management of two SCO links.
  • the Bluetooth software protocol stack supports state maintenance of two SCO links.
  • the Bluetooth software protocol stack also supports protocol processing of the audio data obtained from each SCO link, and transmits the processed audio data to the audio processing device.
  • the Bluetooth software protocol stack also supports receiving audio data from the audio processing device and transmitting the received audio data to the corresponding SCO link.
  • the audio processing device supports translation processing of audio data.
  • Each audio processing device corresponds to a SCO link.
  • the audio processing device 1 is configured to perform translation processing on audio data transmitted by the first SCO link.
  • the audio processing device 2 is used for translating the audio data transmitted by the second SCO link.
  • each audio processing device includes: a receiver, a translator, and a transmitter.
  • the receiver is used to receive audio data in the first language from the first SCO link
  • the translator is used to translate audio data in the first language to obtain audio data in the second language
  • the transmitter is used to transmit the translated audio data in the second language to the second SCO link.
  • the receiver is used to receive audio data in the second language from the second SCO link
  • the translator is used to translate the audio data in the second language to obtain audio data in the first language
  • the transmitter is used to transmit the translated audio data in the first language to the first SCO link.
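The receiver–translator–transmitter structure of each audio processing device might be sketched as follows. The class shape and the toy phrase-table translator are assumptions for illustration; a real device would process audio streams, not text.

```python
# Sketch of one audio processing device per SCO link: it receives audio from
# its input link, translates it, and emits it on its output link.

class AudioProcessor:
    def __init__(self, src_lang, dst_lang, out_link, translator):
        self.src_lang, self.dst_lang = src_lang, dst_lang
        self.out_link = out_link          # link the transmitter writes to
        self.translator = translator

    def process(self, audio_text):
        """Receiver -> translator -> transmitter, for one unit of audio."""
        translated = self.translator(audio_text, self.src_lang, self.dst_lang)
        return (self.out_link, translated)

# Toy translator; real translation is done locally or by a cloud server.
TOY = {("zh", "en"): {"早上好": "good morning"},
       ("en", "zh"): {"good morning": "早上好"}}
translate = lambda t, s, d: TOY[(s, d)].get(t, t)

# Device 1 handles first-language audio from the first SCO link; device 2
# handles second-language audio from the second SCO link.
device1 = AudioProcessor("zh", "en", "sco2", translate)
device2 = AudioProcessor("en", "zh", "sco1", translate)
```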
  • both SCO links can transmit data in both directions at the same time.
  • the two SCO links are independent of each other and do not affect each other, so that two users who speak different languages can communicate in real time without any obstacle.
  • the following is a detailed description in conjunction with a specific usage scenario.
  • FIG. 11 is a schematic diagram of data transmission between a terminal and a Bluetooth headset in a simultaneous translation scenario provided by an embodiment of the application. Assume that user A uses Chinese and user B uses English. User A wears the main headset, and user B wears the secondary headset.
  • the Chinese voice spoken by user A is collected by the microphone of the main headset, and transmitted from the main headset to the receiver 1 of the terminal through the first SCO link.
  • after receiving the Chinese voice, the terminal translates the Chinese voice into English voice through the translator 1, and then the transmitter 1 sends the translated English voice to the secondary headset through the second SCO link. Therefore, user B wearing the auxiliary earphone hears the content of user A's speech.
  • the English voice spoken by user B is collected by the microphone of the secondary headset, and transmitted by the secondary headset to the receiver 2 of the terminal through the second SCO link. After receiving the English voice, the terminal translates the English voice into Chinese voice through the translator 2, and then the transmitter 2 sends the translated Chinese voice to the main headset through the first SCO link. Thus, the user A wearing the main earphone hears the content of the user B's speech.
  • the translator may also be set in a cloud server. After receiving the voice data to be translated, the terminal sends the voice data to be translated to the cloud server. The cloud server returns the translated voice data to the terminal.
  • SCO links are established between the terminal and the main earphone and the auxiliary earphone.
  • the two SCO links and the two translation paths are independent of each other and do not interfere with each other, and both SCO links support two-way voice data transmission, enabling two users who speak different languages to communicate in real time without obstacles and achieving the effect of simultaneous translation.
  • the terminal may also receive the translation configuration information input by the user.
  • the translation configuration information indicates the acquisition language and/or playback language of the main earphone and the auxiliary earphone.
  • the user can reasonably set the translation configuration information according to the conversation scenario and the language used by the conversation personnel, so that the terminal can perform translation processing according to the translation configuration information. The following describes two possible implementation manners.
  • FIG. 12 is a schematic diagram of a translation configuration interface of a terminal provided by an embodiment of the application.
  • the user sets the collection language of the main headset to Chinese, and the collection language of the secondary headset to English.
  • the terminal automatically uses Chinese as the playback language of the main headset and English as the playback language of the secondary headset. That is, the terminal translates the Chinese picked up by the main earphone into English for playback by the secondary earphone, and translates the English picked up by the secondary earphone into Chinese for playback by the main earphone.
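The configuration rule of FIG. 12 can be sketched as a small helper that derives each earphone's playback language and both translation directions from the two collection languages. The dictionary layout is an illustrative assumption.

```python
# Sketch of deriving the full translation plan from the collection languages:
# each earphone plays back in its own collection language, and audio collected
# on one side is translated into the other side's language.

def build_translation_plan(main_collect, secondary_collect):
    return {
        "main":      {"collect": main_collect,      "play": main_collect},
        "secondary": {"collect": secondary_collect, "play": secondary_collect},
        "routes": [
            # (source language, target language) for each direction
            {"from": "main", "to": "secondary",
             "translate": (main_collect, secondary_collect)},
            {"from": "secondary", "to": "main",
             "translate": (secondary_collect, main_collect)},
        ],
    }
```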
  • FIG. 13 is a schematic diagram of a translation configuration interface of a terminal provided by an embodiment of the application.
  • the user sets the playback language of the main headset to Chinese, and the playback language of the secondary headset to English.
  • the terminal automatically uses Chinese as the collection language of the main headset and English as the collection language of the secondary headset. That is, the terminal translates the Chinese picked up by the main earphone into English for playback by the auxiliary earphone, and translates the English picked up by the auxiliary earphone into Chinese for playback by the main earphone.
  • the terminal of the present application can be flexibly applied to various application scenarios.
  • the terminal can also automatically recognize the picked-up voice of the main earphone and the auxiliary earphone, so as to intelligently perform translation processing, simplify user operations, and improve user experience.
  • user A wears the main headset and user B wears the secondary headset.
  • user A and user B can first input a test voice into the terminal.
  • the terminal determines the collection language of the main earphone according to the test voice input by user A.
  • the terminal determines the collection language of the secondary earphone according to the test voice input by user B. For example, the test voice input by user A is a sentence in Chinese, and the test voice input by user B is the English sentence "I am ok".
  • the terminal can determine that the collection language of the main headset is Chinese and the collection language of the secondary headset is English. Furthermore, the terminal automatically translates the Chinese picked up by the main earphone into English for playback by the secondary earphone, and translates the English picked up by the secondary earphone into Chinese for playback by the main earphone.
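The automatic configuration step can be sketched with a toy language detector standing in for real speech language identification, which this application leaves unspecified; the character-range heuristic below is an assumption for illustration only.

```python
# Sketch of automatic configuration from test voices: a toy detector treats
# any CJK character as Chinese and everything else as English.

def detect_language(test_utterance):
    """Toy language identification over the transcribed test voice."""
    if any("\u4e00" <= ch <= "\u9fff" for ch in test_utterance):
        return "zh"
    return "en"

def auto_configure(main_test, secondary_test):
    """Set each earphone's collection language from its wearer's test voice."""
    return {
        "main_collect": detect_language(main_test),
        "secondary_collect": detect_language(secondary_test),
    }
```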
  • the user can exit the simultaneous translation mode by operating the terminal or the TWS Bluetooth headset. This will be described with reference to FIG. 14 below.
  • FIG. 14 is a schematic flowchart of a Bluetooth communication method provided by an embodiment of the application.
  • the terminal establishes an SCO link with the main headset and the secondary headset. That is, the first SCO link is established between the terminal and the main headset, and the second SCO link is established between the terminal and the secondary headset.
  • the method includes:
  • S1401 The terminal receives a second scene change instruction, where the second scene change instruction is used to instruct the application scene of the TWS Bluetooth headset to be changed to a non-simultaneous translation scene.
  • the TWS Bluetooth headset receives a second operation input by the user to the TWS Bluetooth headset, and in response to the second operation, the TWS Bluetooth headset sends a second scene change instruction to the terminal.
  • the second operation can be input to the Bluetooth headset in various ways.
  • the user can operate the mode switch button.
  • the user may click the mode switching button, or double-click the mode switching button, or touch the mode switching button.
  • the user may input a preset voice command to the Bluetooth headset, for example, the user inputs the voice "exit simultaneous translation mode" to the Bluetooth headset.
  • the second operation received by the TWS Bluetooth headset may be input to the main headset by the user wearing the main headset, or input to the secondary headset by the user wearing the secondary headset.
  • the main earphone receives a second operation input by the user, and in response to the second operation, the main earphone sends a second scene change instruction to the terminal through the first ACL link.
  • the terminal receives the second scene change instruction through the first ACL link.
  • the secondary headset receives a second operation input by the user, and in response to the second operation, the secondary headset sends a second scene change instruction to the primary headset through the second ACL link.
  • the main headset forwards the received second scene change instruction to the terminal through the first ACL link.
  • the second scene change command may be an AT command.
  • an existing AT command can be reused, and a certain information field in the existing AT command can be set as a value for instructing to exit the simultaneous translation mode. You can also add an AT command to instruct to exit the simultaneous translation mode.
  • the terminal receives the second scene change instruction input by the user. Similar to the way the user inputs the first scene change instruction, the user can input the second scene change instruction to the terminal in a variety of ways. For example, in the terminal interface, the second scene change instruction is input to the terminal by clicking the mode switching control, double-clicking the mode switching control, touching the mode switching control, sliding the mode switching control, etc. It is also possible to input the second scene change instruction to the terminal through voice interaction, for example, by inputting the voice "exit simultaneous translation mode" to the terminal.
  • S1402 In response to the second scene change instruction, disconnect the first SCO link between the terminal and the main headset, and disconnect the second SCO link between the terminal and the secondary headset.
  • FIG. 15 is a schematic diagram of the link between the terminal and the Bluetooth headset after exiting the simultaneous translation mode according to an embodiment of the application.
  • in the simultaneous translation mode, there are two links between the terminal and the main headset, namely the first ACL link and the first SCO link; after exiting the simultaneous translation mode, the SCO links are disconnected and only the ACL links remain.
  • FIG. 16 is a schematic structural diagram of a TWS Bluetooth headset provided by an embodiment of the application.
  • the TWS Bluetooth headset 20 of this embodiment includes a first headset 21 and a second headset 22. The first headset 21 includes a processor 211, a memory 212, and a computer program that is stored in the memory 212 and can run on the processor 211.
  • the second headset 22 includes a processor 221, a memory 222, and a computer program that is stored in the memory 222 and can run on the processor 221.
  • the memory 212 and the processor 211 may communicate over a communication bus.
  • when the processor 211 executes the computer program, the technical solution of the first headset 21 in the foregoing embodiments is performed.
  • the memory 222 and the processor 221 may communicate over a communication bus.
  • when the processor 221 executes the computer program, the technical solution of the second headset 22 in the foregoing embodiments is performed. The implementation principles and technical effects are similar and are not repeated here.
  • FIG. 17 is a schematic structural diagram of a terminal provided by an embodiment of the application.
  • the terminal 10 of this embodiment includes a processor 11, a memory 12, and a computer program stored on the memory 12 and running on the processor 11.
  • the memory 12 and the processor 11 may communicate through the communication bus 13.
  • when the processor 11 executes the computer program, the terminal-side technical solution in any of the foregoing method embodiments is performed. The implementation principles and technical effects are similar and are not repeated here.
  • an embodiment of the present application provides a storage medium for storing a computer program. When the computer program is executed by a computer or a processor, it implements the Bluetooth communication method on the TWS Bluetooth headset side or the Bluetooth communication method on the terminal side.
  • an embodiment of the present application provides a computer program product including instructions that, when executed, cause the computer to perform the foregoing Bluetooth communication method on the TWS Bluetooth headset side or the Bluetooth communication method on the terminal side.
  • An embodiment of the present application provides a chip that can be applied to a terminal or a TWS Bluetooth headset.
  • the chip includes at least one communication interface, at least one processor, and at least one memory. The communication interface, the memory, and the processor are interconnected through a bus. By executing the instructions stored in the memory, the processor causes the terminal to perform the foregoing Bluetooth communication method, or causes the TWS Bluetooth headset to perform the foregoing Bluetooth communication method.
  • the processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor. The steps of the methods disclosed in the embodiments of the present application may be performed directly by a hardware processor, or by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or a volatile memory, such as random-access memory (RAM).
  • the memory may also be any other medium that can carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory in the embodiments of the present application may also be a circuit or any other device capable of realizing a storage function, for storing program instructions and/or data.
  • the methods provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented by software, the methods may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, network equipment, user equipment, or another programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another. For example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, integrating one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, an SSD).


Abstract

According to the Bluetooth communication method, TWS Bluetooth headset, and terminal provided in this embodiment, with a first ACL link established between the terminal and the primary headset and a second ACL link established between the primary headset and the secondary headset, in response to a first scene change instruction received by the terminal, a first SCO link is established between the terminal and the primary headset and a second SCO link is established between the terminal and the secondary headset. The terminal can then exchange voice data with the primary headset over the first SCO link and with the secondary headset over the second SCO link, so that the primary and secondary headsets each capture the voice data of a different user and transmit it to the terminal over separate SCO links for translation. This realizes simultaneous translation, enriches the application scenarios of Bluetooth headsets, and meets users' demand for multi-functional Bluetooth headsets.

Description

Bluetooth communication method, TWS Bluetooth headset, and terminal
This application claims priority to Chinese Patent Application No. 201910513900.9, filed with the China National Intellectual Property Administration on June 14, 2019 and entitled "Bluetooth communication method, TWS Bluetooth headset, and terminal", which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of this application relate to the field of communication technologies, and in particular to a Bluetooth communication method, a TWS Bluetooth headset, and a terminal.
Background
With social progress and the development of communication technologies, earphones have become an indispensable part of daily life. A Bluetooth headset applies Bluetooth technology to a hands-free earphone, freeing the user from earphone cables and allowing the earphone to be used easily in various ways.
At present, True Wireless Stereo (TWS) Bluetooth headsets have completely shed the constraint of connecting wires. A TWS headset includes a primary headset and a secondary headset. The primary headset establishes a Bluetooth connection with the terminal, and a Bluetooth connection is established between the primary and secondary headsets, which transmit data to each other by forwarding. For example, the terminal sends audio data to the primary headset, which forwards it to the secondary headset, so that the two headsets produce sound synchronously. Since there is no physical cable between the primary and secondary headsets, the wearing experience of a TWS Bluetooth headset is improved.
However, current TWS Bluetooth headsets only provide the basic functions of listening to music and making or answering calls, so their functionality is rather limited and cannot meet users' demand for multi-functional TWS Bluetooth headsets.
Summary
The embodiments of this application provide a Bluetooth communication method, a TWS Bluetooth headset, and a terminal, to enrich the functions of TWS Bluetooth headsets and meet users' demand for multi-functional TWS Bluetooth headsets.
In a first aspect, an embodiment of this application provides a Bluetooth communication method applied to a Bluetooth communication system, where the Bluetooth communication system includes a true wireless stereo (TWS) Bluetooth headset and a terminal, the TWS Bluetooth headset includes a first headset and a second headset, and the method includes:
establishing a first ACL link between the terminal and the first headset, and establishing a second ACL link between the first headset and the second headset;
sending, by the terminal, first audio data to the first headset over the first ACL link, and forwarding, by the first headset, the first audio data to the second headset over the second ACL link;
receiving, by the terminal, a first scene change instruction;
in response to the first scene change instruction, establishing a first SCO link between the terminal and the first headset, and establishing a second SCO link between the terminal and the second headset;
transmitting second audio data between the terminal and the first headset over the first SCO link, and transmitting second audio data between the terminal and the second headset over the second SCO link.
With this solution, the terminal can exchange voice data with the primary headset over the first SCO link and with the secondary headset over the second SCO link, so that the primary and secondary headsets each capture the voice data of a different user and transmit it to the terminal over separate SCO links. Likewise, the terminal can send different voice data to the primary and secondary headsets over the two SCO links, which enriches the application scenarios of Bluetooth headsets and meets users' demand for multi-functional Bluetooth headsets.
Optionally, the first scene change instruction indicates that the application scenario of the TWS Bluetooth headset changes to a simultaneous translation scenario; and transmitting second audio data between the terminal and the first headset over the first SCO link and between the terminal and the second headset over the second SCO link includes:
the first headset captures second audio data in a first language and sends it to the terminal over the first SCO link; the terminal translates the second audio data in the first language into second audio data in a second language and sends the translated second audio data in the second language to the second headset over the second SCO link; and the second headset plays the second audio data in the second language;
the second headset captures second audio data in the second language and sends it to the terminal over the second SCO link; the terminal translates it into second audio data in the first language and sends the translated second audio data in the first language to the first headset over the first SCO link; and the first headset plays the second audio data in the first language.
With this solution, SCO links are established between the terminal and both the primary and secondary headsets. The two SCO links and the two translation paths are independent of each other and do not interfere, and both SCO links support bidirectional voice transmission, so that two users speaking different languages can communicate in real time without barriers, achieving the effect of simultaneous translation.
Optionally, before the terminal receives the first scene change instruction, the method further includes:
receiving, by the first headset, a first operation input by the user;
in response to the first operation, sending, by the first headset, a physical address request message to the second headset;
receiving, by the first headset, the physical address of the second headset sent by the second headset;
sending, by the first headset, the first scene change instruction and the physical address of the second headset to the terminal.
Optionally, before the terminal receives the first scene change instruction, the method further includes:
receiving, by the second headset, a first operation input by the user;
in response to the first operation, sending, by the second headset, the first scene change instruction and the physical address of the second headset to the first headset;
forwarding, by the first headset, the first scene change instruction and the physical address of the second headset to the terminal.
With this solution, the user can trigger a scene change either by operating the first headset or by operating the second headset, which applies to various scenarios and improves flexibility.
Optionally, the method further includes:
receiving, by the terminal, translation configuration information input by the user, where the translation configuration information indicates that the capture language of the first headset is the first language and the capture language of the second headset is the second language.
With this solution, translation is performed according to the translation configuration information set by the user, so that the terminal of this application can be flexibly applied to various application scenarios.
Optionally, after the first SCO link is established between the terminal and the first headset and the second SCO link is established between the terminal and the second headset, the method further includes:
receiving, by the terminal, a second scene change instruction;
in response to the second scene change instruction, disconnecting the first SCO link between the terminal and the first headset, and disconnecting the second SCO link between the terminal and the second headset.
Optionally, before the terminal receives the second scene change instruction, the method further includes:
receiving, by the first headset, a second operation input by the user;
in response to the second operation, sending, by the first headset, the second scene change instruction to the terminal.
Optionally, before the terminal receives the second scene change instruction, the method further includes:
receiving, by the second headset, a second operation input by the user;
in response to the second operation, sending, by the second headset, the second scene change instruction to the first headset;
forwarding, by the first headset, the received second scene change instruction to the terminal.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a second aspect, an embodiment of this application provides a TWS Bluetooth headset, including a first headset and a second headset, each of which includes a processor, a memory, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the following steps are performed:
establishing a first ACL link between a terminal and the first headset, and establishing a second ACL link between the first headset and the second headset;
receiving, by the first headset, first audio data sent by the terminal over the first ACL link, and forwarding, by the first headset, the first audio data to the second headset over the second ACL link;
in response to a first scene change instruction received by the terminal, establishing a first SCO link between the terminal and the first headset, and establishing a second SCO link between the terminal and the second headset;
transmitting second audio data between the terminal and the first headset over the first SCO link, and transmitting second audio data between the terminal and the second headset over the second SCO link.
Optionally, the first scene change instruction indicates that the application scenario of the TWS Bluetooth headset changes to a simultaneous translation scenario; and transmitting second audio data between the terminal and the first headset over the first SCO link and between the terminal and the second headset over the second SCO link includes:
the first headset captures second audio data in a first language and sends it to the terminal over the first SCO link, so that the terminal translates it into second audio data in a second language and sends the translated second audio data in the second language to the second headset over the second SCO link; and the second headset plays the second audio data in the second language;
the second headset captures second audio data in the second language and sends it to the terminal over the second SCO link, so that the terminal translates it into second audio data in the first language and sends the translated second audio data in the first language to the first headset over the first SCO link; and the first headset plays the second audio data in the first language.
Optionally, before the first scene change instruction received by the terminal is responded to, the steps further include:
receiving, by the first headset, a first operation input by the user;
in response to the first operation, sending, by the first headset, a physical address request message to the second headset;
receiving, by the first headset, the physical address of the second headset sent by the second headset;
sending, by the first headset, the first scene change instruction and the physical address of the second headset to the terminal.
Optionally, before the first scene change instruction received by the terminal is responded to, the steps further include:
receiving, by the second headset, a first operation input by the user;
in response to the first operation, sending, by the second headset, the first scene change instruction and the physical address of the second headset to the first headset;
forwarding, by the first headset, the first scene change instruction and the physical address of the second headset to the terminal.
Optionally, after the first SCO link is established between the terminal and the first headset and the second SCO link is established between the terminal and the second headset, the steps further include:
in response to a second scene change instruction received by the terminal, disconnecting the first SCO link between the terminal and the first headset, and disconnecting the second SCO link between the terminal and the second headset.
Optionally, before the second scene change instruction received by the terminal is responded to, the steps further include:
receiving, by the first headset, a second operation input by the user;
in response to the second operation, sending, by the first headset, the second scene change instruction to the terminal.
Optionally, before the second scene change instruction received by the terminal is responded to, the steps further include:
receiving, by the second headset, a second operation input by the user;
in response to the second operation, sending, by the second headset, the second scene change instruction to the first headset;
forwarding, by the first headset, the received second scene change instruction to the terminal.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a third aspect, an embodiment of this application provides a terminal, including a processor, a memory, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the following steps are performed:
establishing a first ACL link between the terminal and a first headset, and establishing a second ACL link between the first headset and a second headset, where the first headset and the second headset are individual earbuds of a true wireless stereo (TWS) Bluetooth headset;
sending, by the terminal, first audio data to the first headset over the first ACL link, and forwarding, by the first headset, the first audio data to the second headset over the second ACL link;
receiving, by the terminal, a first scene change instruction;
in response to the first scene change instruction, establishing a first SCO link between the terminal and the first headset, and establishing a second SCO link between the terminal and the second headset;
transmitting second audio data between the terminal and the first headset over the first SCO link, and transmitting second audio data between the terminal and the second headset over the second SCO link.
Optionally, the first scene change instruction indicates that the application scenario of the TWS Bluetooth headset changes to a simultaneous translation scenario; and transmitting second audio data between the terminal and the first headset over the first SCO link and between the terminal and the second headset over the second SCO link includes:
receiving, by the terminal over the first SCO link, second audio data in a first language captured by the first headset; translating, by the terminal, the second audio data in the first language into second audio data in a second language; and sending the translated second audio data in the second language to the second headset over the second SCO link, so that the second headset plays it;
receiving, by the terminal over the second SCO link, second audio data in the second language captured by the second headset; translating, by the terminal, it into second audio data in the first language; and sending the translated second audio data in the first language to the first headset over the first SCO link, so that the first headset plays it.
Optionally, the steps further include: receiving, by the terminal, translation configuration information input by the user, where the translation configuration information indicates that the capture language of the first headset is the first language and the capture language of the second headset is the second language.
Optionally, after the first SCO link is established between the terminal and the first headset and the second SCO link is established between the terminal and the second headset, the steps further include:
receiving, by the terminal, a second scene change instruction;
in response to the second scene change instruction, disconnecting the first SCO link between the terminal and the first headset, and disconnecting the second SCO link between the terminal and the second headset.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a fourth aspect, an embodiment of this application provides a Bluetooth communication method applied to a TWS Bluetooth headset, where the TWS Bluetooth headset includes a first headset and a second headset, and the method includes:
establishing a first ACL link between a terminal and the first headset, and establishing a second ACL link between the first headset and the second headset;
receiving, by the first headset, first audio data sent by the terminal over the first ACL link, and forwarding, by the first headset, the first audio data to the second headset over the second ACL link;
in response to a first scene change instruction received by the terminal, establishing a first SCO link between the terminal and the first headset, and establishing a second SCO link between the terminal and the second headset;
transmitting second audio data between the terminal and the first headset over the first SCO link, and transmitting second audio data between the terminal and the second headset over the second SCO link.
Optionally, the first scene change instruction indicates that the application scenario of the TWS Bluetooth headset changes to a simultaneous translation scenario; and transmitting second audio data between the terminal and the first headset over the first SCO link and between the terminal and the second headset over the second SCO link includes:
the first headset captures second audio data in a first language and sends it to the terminal over the first SCO link, so that the terminal translates it into second audio data in a second language and sends the translated second audio data in the second language to the second headset over the second SCO link; and the second headset plays the second audio data in the second language;
the second headset captures second audio data in the second language and sends it to the terminal over the second SCO link, so that the terminal translates it into second audio data in the first language and sends the translated second audio data in the first language to the first headset over the first SCO link; and the first headset plays the second audio data in the first language.
Optionally, before the first scene change instruction received by the terminal is responded to, the method further includes:
receiving, by the first headset, a first operation input by the user;
in response to the first operation, sending, by the first headset, a physical address request message to the second headset;
receiving, by the first headset, the physical address of the second headset sent by the second headset;
sending, by the first headset, the first scene change instruction and the physical address of the second headset to the terminal.
Optionally, before the first scene change instruction received by the terminal is responded to, the method further includes:
receiving, by the second headset, a first operation input by the user;
in response to the first operation, sending, by the second headset, the first scene change instruction and the physical address of the second headset to the first headset;
forwarding, by the first headset, the first scene change instruction and the physical address of the second headset to the terminal.
Optionally, after the first SCO link is established between the terminal and the first headset and the second SCO link is established between the terminal and the second headset, the method further includes:
in response to a second scene change instruction received by the terminal, disconnecting the first SCO link between the terminal and the first headset, and disconnecting the second SCO link between the terminal and the second headset.
Optionally, before the second scene change instruction received by the terminal is responded to, the method further includes:
receiving, by the first headset, a second operation input by the user;
in response to the second operation, sending, by the first headset, the second scene change instruction to the terminal.
Optionally, before the second scene change instruction received by the terminal is responded to, the method further includes:
receiving, by the second headset, a second operation input by the user;
in response to the second operation, sending, by the second headset, the second scene change instruction to the first headset;
forwarding, by the first headset, the received second scene change instruction to the terminal.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a fifth aspect, an embodiment of this application provides a Bluetooth communication method applied to a terminal, the method including:
establishing a first ACL link between the terminal and a first headset, and establishing a second ACL link between the first headset and a second headset, where the first headset and the second headset are individual earbuds of a true wireless stereo (TWS) Bluetooth headset;
sending, by the terminal, first audio data to the first headset over the first ACL link, and forwarding, by the first headset, the first audio data to the second headset over the second ACL link;
receiving, by the terminal, a first scene change instruction;
in response to the first scene change instruction, establishing a first SCO link between the terminal and the first headset, and establishing a second SCO link between the terminal and the second headset;
transmitting second audio data between the terminal and the first headset over the first SCO link, and transmitting second audio data between the terminal and the second headset over the second SCO link.
Optionally, the first scene change instruction indicates that the application scenario of the TWS Bluetooth headset changes to a simultaneous translation scenario; and transmitting second audio data between the terminal and the first headset over the first SCO link and between the terminal and the second headset over the second SCO link includes:
receiving, by the terminal over the first SCO link, second audio data in a first language captured by the first headset; translating, by the terminal, the second audio data in the first language into second audio data in a second language; and sending the translated second audio data in the second language to the second headset over the second SCO link, so that the second headset plays it;
receiving, by the terminal over the second SCO link, second audio data in the second language captured by the second headset; translating, by the terminal, it into second audio data in the first language; and sending the translated second audio data in the first language to the first headset over the first SCO link, so that the first headset plays it.
Optionally, the method further includes: receiving, by the terminal, translation configuration information input by the user, where the translation configuration information indicates that the capture language of the first headset is the first language and the capture language of the second headset is the second language.
Optionally, after the first SCO link is established between the terminal and the first headset and the second SCO link is established between the terminal and the second headset, the method further includes:
receiving, by the terminal, a second scene change instruction;
in response to the second scene change instruction, disconnecting the first SCO link between the terminal and the first headset, and disconnecting the second SCO link between the terminal and the second headset.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a sixth aspect, an embodiment of this application provides a chip, including at least one communication interface, at least one processor, and at least one memory, where the communication interface, the memory, and the processor are interconnected through a bus, and the processor executes the instructions stored in the memory to perform the Bluetooth communication method according to any one of the fourth aspect or the Bluetooth communication method according to any one of the fifth aspect.
In a seventh aspect, an embodiment of this application provides a storage medium for storing a computer program, where the computer program, when executed by a computer or a processor, implements the Bluetooth communication method according to any one of the fourth aspect or the Bluetooth communication method according to any one of the fifth aspect.
In an eighth aspect, an embodiment of this application provides a computer program product, including instructions that, when executed by a computer or a processor, implement the Bluetooth communication method according to any one of the fourth aspect or the Bluetooth communication method according to any one of the fifth aspect.
According to the Bluetooth communication method, TWS Bluetooth headset, and terminal provided in the embodiments of this application, with the first ACL link established between the terminal and the primary headset and the second ACL link between the primary and secondary headsets, in response to the first scene change instruction received by the terminal, a first SCO link is established between the terminal and the primary headset and a second SCO link between the terminal and the secondary headset. The terminal can thus exchange voice data with the primary headset over the first SCO link and with the secondary headset over the second SCO link, so that the two headsets capture the voice data of two different users and transmit it to the terminal over separate SCO links for translation. This realizes simultaneous translation, enriches the application scenarios of Bluetooth headsets, and meets users' demand for multi-functional Bluetooth headsets.
Brief Description of the Drawings
FIG. 1 is a system architecture diagram according to an embodiment of this application;
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of this application;
FIG. 3A to FIG. 3E are schematic diagrams of terminal interfaces in an application scenario according to an embodiment of this application;
FIG. 4A is a schematic diagram of the links between the terminal and the Bluetooth headset in the music-listening scenario according to an embodiment of this application;
FIG. 4B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the music-listening scenario according to an embodiment of this application;
FIG. 5A is a schematic diagram of the links between the terminal and the Bluetooth headset in the phone-call scenario according to an embodiment of this application;
FIG. 5B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the phone-call scenario according to an embodiment of this application;
FIG. 6 is a schematic flowchart of a Bluetooth communication method according to an embodiment of this application;
FIG. 7A is a schematic diagram of the Bluetooth headset switching from the music-listening scenario to the simultaneous translation scenario according to an embodiment of this application;
FIG. 7B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the simultaneous translation scenario according to an embodiment of this application;
FIG. 8A is a schematic diagram of the Bluetooth headset switching from the single-user call scenario to the two-user call scenario according to an embodiment of this application;
FIG. 8B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the two-user call scenario according to an embodiment of this application;
FIG. 9 is a schematic flowchart of a way of entering the simultaneous translation mode according to an embodiment of this application;
FIG. 10 is a schematic flowchart of another way of entering the simultaneous translation mode according to an embodiment of this application;
FIG. 11 is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the simultaneous translation scenario according to an embodiment of this application;
FIG. 12 is a schematic diagram of a translation configuration interface of the terminal according to an embodiment of this application;
FIG. 13 is a schematic diagram of a translation configuration interface of the terminal according to an embodiment of this application;
FIG. 14 is a schematic flowchart of a Bluetooth communication method according to an embodiment of this application;
FIG. 15 is a schematic diagram of the links between the terminal and the Bluetooth headset after exiting the simultaneous translation mode according to an embodiment of this application;
FIG. 16 is a schematic structural diagram of a TWS Bluetooth headset according to an embodiment of this application;
FIG. 17 is a schematic structural diagram of a terminal according to an embodiment of this application.
Detailed Description
To facilitate understanding of this application, the system architecture and devices to which this application applies are first described with reference to FIG. 1 and FIG. 2.
FIG. 1 is a system architecture diagram according to an embodiment of this application. Referring to FIG. 1, the system includes a terminal 10 and a Bluetooth headset 20, which are connected via Bluetooth.
In this application, a Bluetooth headset is a headset that supports a Bluetooth communication protocol. The Bluetooth communication protocol may be a classic Bluetooth protocol (such as BR or EDR) or the BLE low-energy Bluetooth protocol, and may certainly also be another new Bluetooth protocol type released in the future. In terms of protocol version, the Bluetooth communication protocol may be any of the following: the 1.0 series, the 2.0 series, the 3.0 series, the 4.0 series, or other series released in the future.
The Bluetooth headset in this embodiment is a TWS Bluetooth headset, which includes a first headset and a second headset. For ease of description, in the embodiments of this application the first headset is referred to as the primary headset and the second headset as the secondary headset. Both the primary and secondary headsets are provided with Bluetooth modules and can transmit data to each other via the Bluetooth protocol. In appearance, a TWS headset has no wire between the primary and secondary headsets, making it portable and easy to use.
Both the primary and secondary headsets include microphones; that is, in addition to audio playback, both headsets can also capture audio.
The Bluetooth headset in this application may support one or more of the following profiles: the HSP (Headset Profile), the HFP (Hands-free Profile), the A2DP (Advanced Audio Distribution Profile), and the AVRCP (Audio/Video Remote Control Profile).
HSP is the headset profile, providing the basic functions required for communication between the terminal and the headset. The Bluetooth headset can serve as the terminal's audio input and output interface.
HFP is the hands-free profile, which adds certain extended functions on top of HSP; the Bluetooth headset can control the terminal's call process, for example answering, hanging up, rejecting a call, and voice dialing.
A2DP is the advanced audio distribution profile; A2DP can use the chip in the headset to stack data, achieving high-definition sound.
AVRCP is the audio/video remote control profile, which defines the features for controlling streaming media, including pause, stop, restart playback, volume control, and other types of remote control operations.
In this application, the terminal 100 may be any device with computing capabilities; the terminal may also have audio/video playback and display functions. For example, the terminal may be a mobile phone, a computer, a smart TV, a vehicle-mounted device, a wearable device, or an industrial device. The terminal 100 supports the Bluetooth communication protocol.
In this application, the terminal is an electronic device. The structure of the electronic device is described below with reference to FIG. 2.
FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of this application. As shown in FIG. 2, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identification module (SIM) card interface 195. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, an optical proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or integrated into one or more processors.
The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and execution.
A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache. This memory can hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory, avoiding repeated accesses, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses and may be separately coupled to the touch sensor 180K, a charger, a flash, the camera 193, and the like through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180K through an I2C interface so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, implementing the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses and may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transfer audio signals to the wireless communication module 160 through the I2S interface, implementing the function of answering calls through a Bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transfer audio signals to the wireless communication module 160 through the PCM interface, implementing the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel forms. In some embodiments, the UART interface is generally used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transfer audio signals to the wireless communication module 160 through the UART interface, implementing the function of playing music through a Bluetooth headset.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 110 and the camera 193 communicate through the CSI interface to implement the shooting function of the electronic device 100, and the processor 110 and the display 194 communicate through the DSI interface to implement the display function of the electronic device 100.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic illustrations and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may also adopt interface connection modes different from those in the above embodiment, or a combination of multiple interface connection modes.
The mobile communication module 150 can provide solutions for wireless communication, including 2G/3G/4G/5G, applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 can also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110, or in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator modulates a low-frequency baseband signal to be sent into a medium- or high-frequency signal; the demodulator demodulates a received electromagnetic wave signal into a low-frequency baseband signal and then transmits it to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B) or displays an image or video through the display 194. In some embodiments, the modem processor may be an independent device; in other embodiments, it may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide solutions for wireless communication applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 160 can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna 2.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The electronic device 100 implements the display function through the GPU, the display 194, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display 194 and the application processor. The GPU performs mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N displays 194, where N is a positive integer greater than 1.
The electronic device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when taking a photo, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP for processing, converting it into an image visible to the naked eye. The ISP can also perform algorithmic optimization on the image's noise, brightness, and skin tone, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor performs a Fourier transform and the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so the electronic device 100 can play or record videos in multiple encoding formats, for example moving picture experts group (MPEG) 1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, it processes input information quickly and can also learn continuously. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and videos on the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and application programs required for at least one function (such as a sound playback function or an image playback function); the data storage area may store data created during use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include high-speed random-access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS). The processor 110 performs various functional applications and data processing of the electronic device 100 by running the instructions stored in the internal memory 121 and/or the instructions stored in the memory provided in the processor.
The electronic device 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and the like.
The audio module 170 is used to convert digital audio information into an analog audio signal for output, and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110, or some functional modules of the audio module 170 may be provided in the processor 110.
For ease of understanding, the following uses a mobile phone as an example of the terminal to introduce the process of establishing a service connection between the terminal and the Bluetooth headset. The process includes three stages: a scanning stage, a pairing stage, and a service connection establishment stage, described in detail below with reference to FIG. 3A to FIG. 3E.
FIG. 3A to FIG. 3E are schematic diagrams of terminal interfaces in an application scenario according to an embodiment of this application. For example, as shown in FIG. 3A, "Bluetooth" is selected in the terminal settings interface to enter the Bluetooth settings interface shown in FIG. 3B. Referring to FIG. 3B, when the terminal receives an instruction corresponding to the user operating the Bluetooth enable option, the terminal turns on the Bluetooth function. In some scenarios, after Bluetooth is turned on, the terminal can discover nearby pairable Bluetooth devices and display the scanned devices in the "Available devices" list. For example, FIG. 3B shows the terminal device HUAWEI P30 having scanned the Bluetooth device HUAWEI Mate20 and Bluetooth headset 1. This stage is called the scanning stage.
In some scenarios, when the terminal detects that the user taps a Bluetooth device in the "Available devices" list, the terminal pairs with that device. For example, as shown in FIG. 3C, when the terminal detects that the user taps "Bluetooth headset 1" in the "Available devices" list, the terminal pairs with Bluetooth headset 1; if pairing succeeds, "Bluetooth headset 1" is displayed in the "Paired devices" list, as shown in FIG. 3D. This stage is called the pairing stage.
In some scenarios, if the terminal has previously established a service connection with another Bluetooth device, that device is displayed in the "Paired devices" list. Referring to FIG. 3D, the "Paired devices" list also shows the Bluetooth device HUAWEI free buds, with which a service connection was previously established.
In some scenarios, when the terminal detects that the user taps a Bluetooth device in the "Paired devices" list, the terminal establishes a service connection with that device. As shown in FIG. 3E, when the terminal detects that the user taps "Bluetooth headset 1", the terminal establishes a service connection with Bluetooth headset 1. If the service connection is established successfully, audio data can be transmitted between the terminal and Bluetooth headset 1. This stage is called the service connection establishment stage. The interactions between the terminal and the Bluetooth headset described in subsequent embodiments all take place after this service connection is established.
It should be noted that the terminal interfaces illustrated in FIG. 3A to FIG. 3E are only examples; the settings interfaces and operation methods of different terminal devices may differ.
After the terminal and the Bluetooth headset are connected, audio data can be transmitted between them. For a TWS Bluetooth headset, the terminal establishes a Bluetooth connection only with the primary headset; no Bluetooth connection is needed between the terminal and the secondary headset. The primary and secondary headsets communicate by data forwarding. For example, the terminal sends audio data to the primary headset, which, after receiving it, forwards the audio data to the secondary headset, so that the two headsets produce sound synchronously.
There are two types of Bluetooth physical links between the terminal and a TWS Bluetooth headset: the asynchronous connection-less (ACL) link and the synchronous connection-oriented (SCO) link.
The ACL link is the basic Bluetooth connection, generally used to transmit connection-related negotiation signaling and to maintain the Bluetooth connection. The ACL link also supports one-way transmission of audio data; for example, while the terminal is sending audio data to the primary headset over the ACL link, the primary headset cannot simultaneously send audio data to the terminal.
The SCO link is a connection technology supported by the Bluetooth baseband that transmits data in reserved time slots. The SCO link supports two-way transmission of audio data; for example, while the terminal is sending audio data to the primary headset over an SCO link, the primary headset can also send audio data to the terminal over the same SCO link.
Current TWS Bluetooth headsets generally provide the basic functions of listening to music and making or answering calls. The communication process between the terminal and the Bluetooth headset is described below for the two application scenarios of listening to music and making a phone call.
FIG. 4A is a schematic diagram of the links between the terminal and the Bluetooth headset in the music-listening scenario according to an embodiment of this application. As shown in FIG. 4A, in the music-listening scenario, the terminal establishes an ACL link with the primary headset of the TWS Bluetooth headset, and an ACL link is also established between the primary and secondary headsets.
FIG. 4B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the music-listening scenario according to an embodiment of this application. As shown in FIG. 4B, the terminal receives the music selected by the user for playback. When playing the music, the terminal sends the currently played audio data to the primary headset over the ACL link, and the primary headset forwards the received audio data to the secondary headset over the ACL link. The Bluetooth headset controls the primary and secondary headsets to play the audio data synchronously, so that the user hears the music through the worn primary headset and/or secondary headset.
In practice, the wearing states of the primary and secondary headsets may also be detected, and audio data transmitted according to those states. For example, when it is detected that the user is wearing both headsets, the terminal sends the currently played audio data to the primary headset over the ACL link, and the primary headset forwards it to the secondary headset over the ACL link so that the two headsets sound synchronously. When it is detected that the user is wearing only the primary headset, the terminal sends the audio data to the primary headset over the ACL link, and the primary headset does not forward it to the secondary headset. When it is detected that the user is wearing only the secondary headset, the roles of the two headsets may first be switched, that is, the primary headset becomes the secondary headset and the secondary becomes the primary; the terminal then sends the currently played audio data to the primary headset over the ACL link, with no forwarding to the secondary headset needed.
Various detection methods may be used to detect the wearing states of the primary and secondary headsets. For example, sensor-based wearing detection may be adopted: sensors are provided in the primary and secondary headsets to collect wearing-state signals, and whether the primary or secondary headset is being worn can be determined from the wearing-state signal collected by, for example, an optical proximity sensor. The sensor may be one or more of an optical proximity sensor, a pressure sensor, a thermal sensor, and a moisture sensor.
FIG. 5A is a schematic diagram of the links between the terminal and the Bluetooth headset in the phone-call scenario according to an embodiment of this application. As shown in FIG. 5A, in the phone-call scenario, the terminal establishes an SCO link with the primary headset of the TWS Bluetooth headset, and an SCO link is also established between the primary and secondary headsets.
FIG. 5B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the phone-call scenario according to an embodiment of this application. As shown in FIG. 5B, the user wears both headsets. During a voice call, the microphone of the primary headset captures speech while the microphone of the secondary headset does not. The primary headset's microphone captures the first voice data spoken by the user and sends the captured first voice data to the terminal over the SCO link; the terminal processes the voice and transmits it to the other party via wireless transmission. Meanwhile, after receiving the other party's second voice data, the terminal sends the second voice data to the primary headset over the SCO link, and the primary headset forwards it to the secondary headset over the SCO link. The Bluetooth headset controls the two headsets to play the second voice data synchronously, so that the user hears the other party's second voice data through both headsets.
It should be noted that in this application, the primary headset refers to the headset that establishes the connection link with the terminal; it is not limited to the left or right earbud. In some scenarios the primary headset is the left earbud and the secondary the right; in other scenarios the primary is the right earbud and the secondary the left. In still other scenarios, the primary and secondary roles can be switched. In one possible way, switching is based on battery levels: for example, if the primary headset's battery level is below a threshold and lower than the secondary's, the secondary headset is switched to be the primary. In another possible way, switching is based on wearing states: for example, in some application scenarios, if the Bluetooth headset detects that the primary headset has been taken out of the ear or dropped, the primary headset is switched to secondary and the secondary to primary.
Current TWS Bluetooth headsets usually provide only the basic functions of listening to music and making or answering calls, so their functionality is rather limited. To enrich the application scenarios of TWS Bluetooth headsets and meet users' demand for multiple functions, an embodiment of this application provides a Bluetooth communication method.
In this application, the microphones of the primary and secondary headsets of the TWS Bluetooth headset capture the voice data of different users, and the TWS Bluetooth headset and the terminal together realize a simultaneous translation function. For example, user A, who speaks a first language, wears the primary headset, and user B, who speaks a second language, wears the secondary headset. When users A and B converse, the primary headset's microphone captures user A's voice data in the first language and sends it to the terminal, which translates it into the second language and transmits it to the secondary headset. The secondary headset's microphone captures user B's voice data in the second language and sends it to the terminal, which translates it into the first language and transmits it to the primary headset. In this way, users A and B can converse without barriers.
The technical solutions of this application are described in detail below through specific embodiments. The following embodiments may exist independently or be combined with one another, and identical or similar content is not repeated in different embodiments.
FIG. 6 is a schematic flowchart of a Bluetooth communication method according to an embodiment of this application. As shown in FIG. 6, the method includes:
S601: A first ACL link is established between the terminal and the primary headset, and a second ACL link is established between the primary headset and the secondary headset.
S602: The terminal sends first audio data to the primary headset over the first ACL link, and the primary headset forwards the first audio data to the secondary headset over the second ACL link.
In this embodiment, in S601 and S602, the TWS Bluetooth headset is in the music-listening scenario. For example, the links between the terminal, the primary headset, and the secondary headset are as shown in FIG. 4A: ACL links are established both between the terminal and the primary headset and between the primary and secondary headsets. For ease of description, in this embodiment the ACL link between the terminal and the primary headset is called the first ACL link, and the ACL link between the primary and secondary headsets is called the second ACL link.
The audio transmission process between the terminal, the primary headset, and the secondary headset is as shown in FIG. 4B: the terminal sends the first audio data to the primary headset over the first ACL link, and the primary headset forwards the received first audio data to the secondary headset over the ACL link. The first audio data is media audio data, including but not limited to music, audio from films and television programs, and recorded audio.
S603: The terminal receives a first scene change instruction.
The first scene change instruction indicates that the application scenario of the TWS Bluetooth headset has changed. In this application, the application scenarios of the Bluetooth headset include but are not limited to: the music-listening scenario, the phone-call scenario, the simultaneous translation scenario, and the two-user call scenario. In the music-listening scenario, the links between the terminal and the Bluetooth headset are shown in FIG. 4A and the audio data transmission process in FIG. 4B; in the phone-call scenario, the links are shown in FIG. 5A and the audio data transmission process in FIG. 5B.
The wearing modes and working modes of the Bluetooth headset differ across application scenarios, as introduced below for several scenarios.
In the music-listening and phone-call scenarios, the primary and secondary headsets are worn by the same person. In the music-listening scenario, the two headsets only play audio, and the audio data they play is the same; neither headset's microphone captures audio.
In the phone-call scenario, the primary headset can both play and capture audio, while the secondary headset only plays audio and does not capture it. In this scenario, the audio data played by the two headsets is also the same.
In the simultaneous translation scenario, the primary and secondary headsets are worn by different people. Both headsets can play and capture audio simultaneously; the audio data they play differs, and the audio data they capture differs.
In the two-user call scenario, the primary and secondary headsets are worn by different people. Both headsets can play and capture audio simultaneously; the audio data they capture differs, while the audio data they play is the same.
In this embodiment of this application, the first scene change instruction received by the terminal may be input directly to the terminal by the user, or input by the user to the TWS Bluetooth headset and forwarded by the TWS Bluetooth headset to the terminal.
In some implementations, the TWS Bluetooth headset receives a first operation input by the user to the TWS Bluetooth headset, and in response to the first operation, the TWS Bluetooth headset sends a first scene change instruction to the terminal. For example, when the user needs to change the application scenario of the Bluetooth headset, the user can input the first operation to the Bluetooth headset in various ways. In one possible implementation, a mode-switching button is provided on the headset body, and the user can operate that button, for example by clicking, double-clicking, or touching it, to input the first operation. In another possible implementation, the Bluetooth headset can recognize preset voice commands, and the user can input a preset voice command to change the scenario: for example, speaking "enter music mode" to use the Bluetooth headset in the music-listening scenario, or "enter simultaneous translation mode" to use it in the simultaneous translation scenario.
In other implementations, the terminal receives the first scene change instruction input by the user. For example, similar to the way the user inputs the first scene change instruction to the TWS Bluetooth headset, the user can input it to the terminal in various ways, for example by clicking, double-clicking, touching, or sliding a mode-switching control on the terminal interface, or through voice interaction, for example speaking "enter simultaneous translation mode" to the terminal.
In still other implementations, sensors are provided in both the primary and secondary headsets to collect their wearing-state signals; the sensor may be one or more of an optical proximity sensor, a pressure sensor, a thermal sensor, and a moisture sensor. Whether the current application scenario has switched can be determined intelligently from the wearing-state signals collected by the sensors of the two headsets. Taking a thermal sensor as an example, the collected wearing-state signal indicates the wearer's body temperature. In the music-listening or phone-call scenario, the signals of the two headsets may be identical or close, or may differ greatly: for example, the difference between the temperatures indicated by the two headsets' wearing-state signals is less than a first threshold (the case where the user wears both headsets), or greater than a second threshold (the case where the user wears only the primary headset). When the difference between the wearer temperatures indicated by the two headsets' wearing-state signals is detected to lie between the first threshold and the second threshold, it can be determined that the primary and secondary headsets are worn by different people, that is, the application scenario of the TWS Bluetooth headset has changed.
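The thermal-sensor heuristic described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the threshold values and the function name are assumptions chosen for the example.

```python
# Hypothetical wearer-change heuristic: if the temperature difference
# between the two earbuds' thermal sensors falls strictly between a
# lower threshold (same wearer) and an upper threshold (one bud unworn),
# the buds are assumed to be worn by two different users.
FIRST_THRESHOLD = 0.5   # below this, both buds are on the same wearer (°C)
SECOND_THRESHOLD = 5.0  # above this, one bud is likely not worn at all (°C)

def different_wearers(primary_temp_c: float, secondary_temp_c: float) -> bool:
    """Return True when the two buds appear to be on two different people."""
    diff = abs(primary_temp_c - secondary_temp_c)
    return FIRST_THRESHOLD < diff < SECOND_THRESHOLD
```

When this returns True, the headset could emit a first scene change instruction toward the terminal, as in S603.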
S604: In response to the first scene change instruction, a first SCO link is established between the terminal and the primary headset, and a second SCO link is established between the terminal and the secondary headset.
S605: Second audio data is transmitted between the terminal and the primary headset over the first SCO link, and between the terminal and the secondary headset over the second SCO link.
In this application, the first scene change instruction may be an AT (attention) command, that is, a control command used in the Bluetooth communication protocol. For example, an existing AT command may be reused, with an information field in that command set to a value indicating the changed application scenario; alternatively, a new AT command may be added to indicate the scenario change.
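The AT-command idea above can be sketched as follows. Note this is an illustration only: the command name "+SCENE" and its values are invented for the example; neither the patent nor the Bluetooth HFP specification defines such a command.

```python
# Hypothetical AT-style scene-change command: the scenario value is
# carried in a single information field, as the text describes.
def build_scene_change(scene: str) -> str:
    """Encode a scene-change instruction as an AT-style command string."""
    return f"AT+SCENE={scene}\r"

def parse_scene_change(cmd: str) -> str:
    """Extract the scenario value from a scene-change command."""
    cmd = cmd.strip()
    if not cmd.startswith("AT+SCENE="):
        raise ValueError("not a scene-change command")
    return cmd[len("AT+SCENE="):]
```

A headset-side sender would call `build_scene_change("TRANSLATE")` and push the string over the ACL link; the terminal-side receiver would call `parse_scene_change` on arrival.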
Suppose the first scene change instruction indicates that the application scenario of the TWS Bluetooth headset changes to the simultaneous translation scenario. In this application, the links established between the terminal and the Bluetooth headset in the simultaneous translation scenario differ from those in the phone-call scenario of FIG. 5A: in the simultaneous translation scenario, the terminal establishes an SCO link not only with the primary headset but also with the secondary headset. For ease of description, in the embodiments of this application, the SCO link between the terminal and the primary headset is called the first SCO link, and the SCO link between the terminal and the secondary headset is called the second SCO link.
After the first and second SCO links are established, second audio data is transmitted between the terminal and the primary headset over the first SCO link, and between the terminal and the secondary headset over the second SCO link. The second audio data is voice call data, for example the user's voice data during a voice call or during a video call.
The first and second SCO links are both bidirectional links and can carry voice data in both directions. For example, while the terminal sends voice data to the primary headset over the first SCO link, the primary headset can simultaneously send voice data to the terminal over the first SCO link; while the terminal sends voice data to the secondary headset over the second SCO link, the secondary headset can simultaneously send voice data to the terminal over the second SCO link. The two SCO links are independent of each other and do not interfere.
It can thus be seen that, in response to the first scene change instruction, the Bluetooth communication method of this embodiment of this application can switch the application scenario of the TWS Bluetooth headset. The changes in links and in data transmission after switching from the music-listening scenario to the simultaneous translation scenario are described below with reference to FIG. 7A and FIG. 7B.
FIG. 7A is a schematic diagram of the Bluetooth headset switching from the music-listening scenario to the simultaneous translation scenario according to an embodiment of this application. As shown in FIG. 7A, in the music-listening scenario there is a first ACL link between the terminal and the primary headset and a second ACL link between the primary and secondary headsets. After the terminal receives the first scene change instruction, a first SCO link is established between the terminal and the primary headset and a second SCO link is established between the terminal and the secondary headset. Therefore, in the simultaneous translation scenario there are two links between the terminal and the primary headset, namely the first ACL link and the first SCO link; there is the second SCO link between the terminal and the secondary headset; and the second ACL link remains between the primary and secondary headsets.
In the simultaneous translation scenario, the terminal has a translation function. After the terminal establishes SCO links with both the primary and secondary headsets, the user can use the Bluetooth headset and the terminal for simultaneous translation. Specifically, the primary headset captures second audio data in the first language and sends it to the terminal over the first SCO link; the terminal translates the second audio data in the first language into second audio data in the second language and sends the translated data to the secondary headset over the second SCO link; and the secondary headset plays the second audio data in the second language. Meanwhile, the secondary headset captures second audio data in the second language and sends it to the terminal over the second SCO link; the terminal translates it into second audio data in the first language and sends the translated data to the primary headset over the first SCO link; and the primary headset plays the second audio data in the first language.
FIG. 7B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the simultaneous translation scenario according to an embodiment of this application. As shown in FIG. 7B, user A wears the primary headset and user B wears the secondary headset. As shown by the solid line in FIG. 7B, voice data in the first language spoken by user A is captured by the primary headset and sent to the terminal over the first SCO link; the terminal translates that voice data into the second language and sends the translated voice data in the second language to the secondary headset over the second SCO link, so that user B can hear the voice data in the second language. Meanwhile, as shown by the dashed line in FIG. 7B, voice data in the second language spoken by user B is captured by the secondary headset and sent to the terminal over the second SCO link; the terminal translates that voice data into the first language and sends the translated voice data in the first language to the primary headset over the first SCO link, so that user A can hear the voice data in the first language. Through the above process, simultaneous translation between users A and B is realized.
According to the Bluetooth communication method provided in this embodiment, with the first ACL link established between the terminal and the primary headset and the second ACL link between the primary and secondary headsets, in response to the first scene change instruction received by the terminal, the terminal establishes a first SCO link with the primary headset and a second SCO link with the secondary headset. The terminal can thus exchange voice data with the primary headset over the first SCO link and with the secondary headset over the second SCO link, so that the two headsets capture the voice data of two different users and transmit it to the terminal over separate SCO links for translation. This realizes simultaneous translation, enriches the application scenarios of Bluetooth headsets, and meets users' demand for multi-functional Bluetooth headsets.
It should be noted that FIG. 7A and FIG. 7B illustrate switching from the music-listening scenario to the simultaneous translation scenario. In practice, other scenario switches are possible, for example from a single-user call scenario to a two-user call scenario, described below with reference to FIG. 8A and FIG. 8B.
FIG. 8A is a schematic diagram of the Bluetooth headset switching from the single-user call scenario to the two-user call scenario according to an embodiment of this application. As shown in FIG. 8A, before switching, the Bluetooth headset is in single-user call mode; for example, user A wears both the primary and secondary headsets and is in a voice call with user C. During the call, user A determines that switching to the two-user call scenario is needed, for example because user A wishes that user B nearby also join the call. For example, user A puts the secondary headset on user B, and user A inputs a first scene change instruction to the primary headset or the terminal. That is, after switching to the two-user call scenario, user A wears the primary headset, user B wears the secondary headset, and users A and B use the same terminal to make a voice call with user C.
As shown in FIG. 8A, in the single-user call scenario there is a first SCO link between the terminal and the primary headset and a third SCO link between the primary and secondary headsets. In this scenario, only the primary headset's microphone captures audio. The audio transmission process is similar to FIG. 5B: the primary headset's microphone captures user A's voice data and sends that voice data to the terminal over the first SCO link, and the terminal sends the voice data on to user C. Meanwhile, the terminal receives user C's voice data and sends the received voice data to the primary headset over the first SCO link; the primary headset forwards the received voice data to the secondary headset over the third SCO link, so that the two headsets sound synchronously and user A hears user C's speech through both headsets.
Continuing with FIG. 8A, in this embodiment of this application, after receiving the first scene change instruction input by the user, the terminal establishes a second SCO link with the secondary headset. That is, after entering the two-user call scenario, the first SCO link remains between the terminal and the primary headset, the third SCO link remains between the primary and secondary headsets, and the second SCO link is newly added between the terminal and the secondary headset.
FIG. 8B is a schematic diagram of data transmission between the terminal and the Bluetooth headset in the two-user call scenario according to an embodiment of this application. As shown in FIG. 8B, in the two-user call scenario the microphones of both the primary and secondary headsets capture audio. Specifically, when user A speaks, the primary headset captures user A's audio data and sends it to the terminal over the first SCO link, and the terminal sends user A's audio data to user C. When user B speaks, the secondary headset captures user B's audio data and sends it to the terminal over the second SCO link, and the terminal sends user B's audio data to user C. Of course, in some scenarios users A and B may speak at the same time; in that case, the terminal mixes user A's audio data received over the first SCO link with user B's audio data received over the second SCO link, and then sends the mixed audio data to user C.
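The mixing step above, in which the two uplink streams are combined before being sent to user C, can be sketched as follows. The frame format (16-bit PCM samples) and the sum-with-clipping rule are assumptions for illustration; the patent does not specify the mixing algorithm.

```python
# Sketch: mix the two uplink voice frames (primary + secondary headset)
# sample-by-sample, clipping the sums to the signed 16-bit PCM range so
# the mixed frame stays valid for transmission to the remote party.
def mix_frames(frame_a: list[int], frame_b: list[int]) -> list[int]:
    """Sum two equal-length 16-bit PCM frames, clipping to int16 range."""
    return [max(-32768, min(32767, a + b)) for a, b in zip(frame_a, frame_b)]
```

A real implementation would likely mix in fixed-point on the audio DSP, but the clipping behavior shown here is the essential detail.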
In the two-user call scenario, after receiving user C's voice data, the terminal sends that voice data to the primary headset over the first SCO link and to the secondary headset over the second SCO link, so that both users A and B can hear user C's speech. Of course, in some scenarios, after receiving voice data from user C, the terminal may instead send that voice data only to the primary headset over the first SCO link, and the primary headset forwards the audio data to the secondary headset over the third SCO link.
In the foregoing embodiments, in the implementation in which the terminal receives the first scene change instruction from the TWS Bluetooth headset, the TWS Bluetooth headset sends the first scene change instruction to the terminal in response to a first operation input by the user. The first operation may be input to the primary headset by the user wearing the primary headset, or input to the secondary headset by the user wearing the secondary headset. The two possible implementations are described in detail below.
FIG. 9 is a schematic flowchart of a way of entering the simultaneous translation mode according to an embodiment of this application. As shown in FIG. 9, the method includes:
S901: The primary headset receives a first operation input by the user.
For example, the first operation indicates that the application scenario of the TWS Bluetooth headset changes to the simultaneous translation scenario.
For example, the user wearing the primary headset may operate the mode-switching button on the primary headset, or may input a preset voice command to the primary headset, for example speaking "enter simultaneous translation mode" into the primary headset's microphone.
S902: In response to the first operation, the primary headset sends a physical address request message to the secondary headset.
S903: The secondary headset sends the secondary headset's physical address to the primary headset.
For example, the primary headset may send the physical address request message to the secondary headset over the second ACL link. Correspondingly, the secondary headset may send its physical address to the primary headset over the second ACL link.
Both the physical address request message and the secondary headset's physical address may be transmitted via AT commands.
S904: The primary headset sends the first scene change instruction and the secondary headset's physical address to the terminal.
For example, the primary headset sends the first scene change instruction and the secondary headset's physical address to the terminal over the first ACL link. Correspondingly, the terminal receives the first scene change instruction and the secondary headset's physical address over the first ACL link.
The first scene change instruction and the secondary headset's physical address may be sent together or one after the other. For example, they may be carried in a single AT command; or the first scene change instruction may be sent first in a first AT command and the physical address then in a second AT command; or the physical address may be sent first in a first AT command and the first scene change instruction then in a second AT command.
S905: A first SCO link is established between the terminal and the primary headset, and, based on the secondary headset's physical address, a second SCO link is established between the terminal and the secondary headset.
Through the above process, SCO links are established both between the terminal and the primary headset and between the terminal and the secondary headset.
FIG. 10 is a schematic flowchart of another way of entering the simultaneous translation mode according to an embodiment of this application. As shown in FIG. 10, the method includes:
S1001: The secondary headset receives a first operation input by the user.
For example, the first operation indicates that the application scenario of the TWS Bluetooth headset changes to the simultaneous translation scenario.
For example, the user wearing the secondary headset may operate the mode-switching button on the secondary headset, or may input a preset voice command to the secondary headset, for example speaking "enter simultaneous translation mode" into the secondary headset's microphone.
S1002: In response to the first operation, the secondary headset sends a first scene change instruction and the secondary headset's physical address to the primary headset.
For example, the secondary headset sends the first scene change instruction and its physical address to the primary headset over the second ACL link. Correspondingly, the primary headset receives them over the second ACL link.
Both the first scene change instruction and the secondary headset's physical address sent by the secondary headset to the primary headset may be transmitted via AT commands, together or one after the other: they may be carried in a single AT command; or the first scene change instruction may be sent first in a first AT command and the physical address then in a second AT command; or the physical address may be sent first and the first scene change instruction then in a second AT command.
S1003: The primary headset forwards the first scene change instruction and the secondary headset's physical address to the terminal.
For example, the primary headset sends the first scene change instruction and the secondary headset's physical address to the terminal over the first ACL link. Correspondingly, the terminal receives them over the first ACL link.
The first scene change instruction and the secondary headset's physical address may be sent together or one after the other, in the same ways as described above.
S1004: A first SCO link is established between the terminal and the primary headset, and, based on the secondary headset's physical address, a second SCO link is established between the terminal and the secondary headset.
Through the above process, SCO links are established both between the terminal and the primary headset and between the terminal and the secondary headset.
上述各实施例中,终端与主耳机、副耳机之间均建立SCO链路后,终端可以采用多发的方式与主耳机和副耳机进行数据传输。示例性的,终端通过第一SCO链路接收主耳机发送的音频数据的同时,还可以通过第二SCO链路接收副耳机发送的音频数据。终端对通过第一SCO链路接收的音频数据进行翻译处理的同时,还可以对通过第二SCO链路接收的音频数据进行翻译处理。终端在通过第一SCO链路向主耳机发送翻译后的音频数据的同时,还可以通过第二SCO链路向副耳机发送翻译后的音频数据。
本申请实施例中,终端内设置有蓝牙芯片,蓝牙芯片为同时支持两路SCO链路的芯片。蓝牙芯片通过蓝牙驱动层与蓝牙软件协议栈层建立联系。蓝牙软件协议栈支持对两路SCO链路进行管理。示例性的,蓝牙软件协议栈支持对两路SCO链路进行状态维护。蓝牙软件协议栈还支持对从每个SCO链路上获取的音频数据进行协议处理,并将处理后的音频数据传输给音频处理装置。当然,蓝牙软件协议栈还支持从音频处理装置接收音频数据,并将接收到的音频数据传输到对应的SCO链路上。
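上述蓝牙软件协议栈对两路SCO链路的状态维护,可用如下草图示意(类名与接口均为本文假设,并非实际蓝牙协议栈API):

```python
class ScoLinkManager:
    """示意蓝牙软件协议栈对多路SCO链路的状态维护(假设性草图)。"""
    def __init__(self):
        self.links = {}

    def establish(self, link_id, peer):
        # 建立链路,记录对端与连接状态
        self.links[link_id] = {"peer": peer, "state": "connected"}

    def disconnect(self, link_id):
        # 断开链路时仅更新状态,保留记录便于排查
        if link_id in self.links:
            self.links[link_id]["state"] = "disconnected"

    def active_links(self):
        # 返回当前处于连接状态的链路标识
        return [i for i, l in self.links.items() if l["state"] == "connected"]
```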
其中,终端内可以设置两个音频处理装置。音频处理装置支持对音频数据进行翻译处理。每个音频处理装置对应一个SCO链路。示例性的,音频处理装置1用于对第一SCO链路传输的音频数据进行翻译处理。音频处理装置2用于对第二SCO链路传输的音频数据进行翻译处理。
一种可能的实施方式中,每个音频处理装置中包括:接收器、翻译器和发送器。示例性的,对于音频处理装置1而言,接收器用于从第一SCO链路接收第一语言的音频数据,翻译器用于对第一语言的音频数据进行翻译,得到第二语言的音频数据,发送器用于将翻译后的第二语言的音频数据传输至第二SCO链路。对于音频处理装置2而言,接收器用于从第二SCO链路接收第二语言的音频数据,翻译器用于对第二语言的音频数据进行翻译,得到第一语言的音频数据,发送器用于将翻译后的第一语言的音频数据传输至第一SCO链路。
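上述两个音频处理装置的"接收器-翻译器-发送器"通路及其交叉路由关系,可示意如下(此处用查表代替真实翻译器,词表与函数名均为本文假设,仅用于说明两条通路相互独立、方向相反):

```python
# 用查表模拟翻译器,仅为说明交叉路由关系(词表为假设)
zh_to_en = {"你好": "hello"}
en_to_zh = {"hello": "你好"}

def make_audio_processor(translate, send):
    """构造一个音频处理装置:接收到的音频经翻译器处理后交给发送器。"""
    def process(audio):
        send(translate(audio))
    return process

to_secondary, to_primary = [], []   # 分别代表第二、第一SCO链路的发送缓冲
# 音频处理装置1:第一SCO链路收中文 -> 译为英文 -> 发往第二SCO链路
processor1 = make_audio_processor(lambda a: zh_to_en.get(a, a), to_secondary.append)
# 音频处理装置2:第二SCO链路收英文 -> 译为中文 -> 发往第一SCO链路
processor2 = make_audio_processor(lambda a: en_to_zh.get(a, a), to_primary.append)
processor1("你好")
processor2("hello")
```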
通过在终端中设置两个音频处理装置,实现了两个SCO链路均能够同时双向传输数据,两个SCO链路相互独立,互不影响,使得两个使用不同语言的用户能够实时无障碍交流。下面结合一个具体的使用场景进行详细描述。
图11为本申请实施例提供的同声翻译场景下终端与蓝牙耳机之间的数据传输示意图。假设用户A使用汉语,用户B使用英文。用户A佩戴主耳机,用户B佩戴副耳机。
如图11所示,用户A说的中文语音被主耳机的麦克风采集,并由主耳机通过第一SCO链路传输至终端的接收器1。终端接收到中文语音后,通过翻译器1将中文语音翻译为英文语音,然后,发送器1将翻译后的英文语音通过第二SCO链路发送给副耳机。从而,佩戴副耳机的用户B收听到用户A的说话内容。
用户B说的英文语音被副耳机的麦克风采集,并由副耳机通过第二SCO链路传输至终端的接收器2。终端接收到英文语音后,通过翻译器2将英文语音翻译为中文语音,然后,发送器2将翻译后的中文语音通过第一SCO链路发送给主耳机。从而,佩戴主耳机的用户A收听到用户B的说话内容。
一种可能的实施方式中,翻译器还可以设置在云端服务器中。终端接收到待翻译的语音数据后,将待翻译的语音数据发送给云端服务器。云端服务器将翻译后的语音数据返回给终端。
传统的翻译装置,比如翻译棒,两个用户在使用翻译棒进行对话时,通常无法实时对话。具体的,用户A说话时,需要用户A在翻译棒上选择需要翻译的第一目标语言,并按下翻译棒的录音按钮进行录音。录音结束后,翻译棒将录音的数据翻译为该第一目标语言,并进行播放,从而用户B通过收听翻译后的第一目标语言了解用户A的语义。类似的,当用户B说话时,需要用户B在翻译棒上选择需要翻译的第二目标语言,并按下翻译棒的录音按钮进行录音。录音结束后,翻译棒将录音的数据翻译为该第二目标语言,并进行播放,从而用户A通过收听翻译后的第二目标语言了解用户B的语义。上述过程中,用户A和用户B需要交替使用翻译棒,分别进行录音、播放的操作,操作繁琐,且无法实时对话。
而本申请实施例中,终端与主耳机、副耳机之间均建立SCO链路,两条SCO链路以及两条翻译通路相互独立,互不干扰,且两条SCO链路均支持双向语音数据的传输,使得两个使用不同语言的用户能够实时无障碍交流,达到同声翻译的效果。
本申请实施例中,终端还可以接收用户输入的翻译配置信息。翻译配置信息指示的是主耳机和副耳机的采集语言和/或播放语言。示例性的,用户可以根据交谈场景以及交谈人员使用的语言,合理设置翻译配置信息,使得终端可以根据翻译配置信息进行翻译处理。下面结合两种可能的实施方式进行描述。
一种可能的实施方式中,终端接收用户输入的主耳机的采集语言和副耳机的采集语言。图12为本申请实施例提供的终端的翻译配置界面的示意图。如图12所示,用户设置主耳机的采集语言为中文,副耳机的采集语言为英文。这样配置之后,终端自动将中文作为主耳机的播放语言,将英文作为副耳机的播放语言。即,终端将从主耳机拾音的中文翻译为英文,由副耳机播放。终端将从副耳机拾音的英文翻译为中文,由主耳机播放。
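终端根据用户设置的采集语言自动推导各耳机的播放语言与两条翻译方向的逻辑,可示意如下(函数名与返回结构均为本文假设):

```python
def derive_config(capture_primary, capture_secondary):
    """用户仅设置两耳机的采集语言,终端推导播放语言与翻译方向(示意)。"""
    return {
        # 每个耳机的播放语言与其自身的采集语言一致
        "主耳机": {"采集": capture_primary, "播放": capture_primary},
        "副耳机": {"采集": capture_secondary, "播放": capture_secondary},
        # 主耳机拾音译为副耳机的播放语言,反之亦然
        "翻译方向": [(capture_primary, capture_secondary),
                     (capture_secondary, capture_primary)],
    }
```

若改为由用户设置播放语言(如图13所示的实施方式),推导关系对称,仅输入项不同。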
另一种可能的实施方式中,终端接收用户输入的主耳机的播放语言和副耳机的播放语言。图13为本申请实施例提供的终端的翻译配置界面的示意图。如图13所示,用户设置主耳机的播放语言为中文,副耳机的播放语言为英文。这样配置之后,终端自动将中文作为主耳机的采集语言,将英文作为副耳机的采集语言。即,终端将从主耳机拾音的中文翻译为英文,由副耳机播放。终端将从副耳机拾音的英文翻译为中文,由主耳机播放。
通过根据用户设置的翻译配置信息进行翻译处理,使得本申请的终端可灵活应用于各种应用场景。
当然,终端还可以自动识别主耳机和副耳机的拾音语言,从而智能地进行翻译处理,简化用户操作,提升用户体验。示例性的,假设用户A佩戴主耳机,用户B佩戴副耳机。在用户A和用户B进行正式交谈之前,用户A和用户B可以首先向终端输入一句测试语音。终端根据用户A输入的测试语音确定主耳机的采集语言。终端根据用户B输入的测试语音确定副耳机的采集语言。例如:用户A输入的测试语音为"我准备好了",用户B输入的测试语音为"I am ok"。则终端可以确定主耳机的采集语言为中文,副耳机的采集语言为英文。进而,终端自动将从主耳机拾音的中文翻译为英文,由副耳机播放。终端将从副耳机拾音的英文翻译为中文,由主耳机播放。
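根据测试语音自动确定采集语言的过程,可用如下极简草图示意(真实系统需经过语音识别与语种识别,此处仅按字符范围对转写文本做粗略判断,仅区分中英两种语言):

```python
def detect_language(transcript):
    """按字符范围粗略判断转写文本的语言:含汉字视为中文,否则视为英文(示意)。"""
    for ch in transcript:
        # CJK统一表意文字基本区
        if "\u4e00" <= ch <= "\u9fff":
            return "中文"
    return "英文"
```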
本申请实施例中,当翻译结束后,用户可以通过操作终端或者TWS蓝牙耳机退出同声翻译模式。下面结合图14进行说明。
图14为本申请实施例提供的蓝牙通信方法的流程示意图。本实施例的执行时机是终端与主耳机和副耳机均已建立SCO链路的情况。即,终端与主耳机之间建立了第一SCO链路,终端与副耳机之间建立了第二SCO链路。
如图14所示,该方法包括:
S1401:终端接收第二场景变更指令。
其中,所述第二场景变更指令用于指示所述TWS蓝牙耳机的应用场景变更为非同声翻译场景。
一些实施方式中,TWS蓝牙耳机接收用户向TWS蓝牙耳机输入的第二操作,并响应于第二操作,TWS蓝牙耳机向终端发送第二场景变更指令。示例性的,当用户需要结束同声翻译时,可以通过多种方式向蓝牙耳机输入第二操作。一种可能的实施方式中,用户可以操作模式切换按钮。示例性的,用户可以点击该模式切换按钮、或者双击该模式切换按钮、或者触摸该模式切换按钮。另一种可能的实施方式中,用户可以向蓝牙耳机输入预设的语音指令,例如,用户向蓝牙耳机输入语音"退出同声翻译模式"。
本申请实施例中,TWS蓝牙耳机接收到的第二操作,可以是佩戴主耳机的用户向主耳机输入的,还可以是佩戴副耳机的用户向副耳机输入的。
一种可能的场景中,主耳机接收用户输入的第二操作,响应于第二操作,主耳机通过第一ACL链路向终端发送第二场景变更指令。相应的,终端通过第一ACL链路接收该第二场景变更指令。
另一种可能的场景中,副耳机接收用户输入的第二操作,响应于第二操作,副耳机通过第二ACL链路向主耳机发送第二场景变更指令。主耳机将接收到的第二场景变更指令通过第一ACL链路转发给终端。
本申请中,第二场景变更指令可以为AT指令。示例性的,可以复用已有AT指令,将已有AT指令中的某个信息域设置为用于指示退出同声翻译模式的值。还可以新增AT指令,用于指示退出同声翻译模式。
另一些实施例中,终端接收用户输入的第二场景变更指令。与用户向TWS蓝牙耳机输入第一场景变更指令的方式类似,用户可以通过多种方式向终端输入第二场景变更指令。例如:在终端界面中,通过点击模式切换控件、双击模式切换控件、触摸模式切换控件、滑动模式切换控件等方式向终端输入第二场景变更指令。还可以通过语音交互的方式向终端输入第二场景变更指令,例如,向终端输入语音“退出同声翻译模式”等。
S1402:响应于所述第二场景变更指令,断开所述终端与所述第一耳机之间的第一SCO链路,并断开所述终端与所述第二耳机之间的第二SCO链路。
当终端接收到第二场景变更指令之后,断开与主耳机之间的第一SCO链路,并断开与副耳机之间的第二SCO链路。图15为本申请实施例提供的退出同声翻译模式后终端与蓝牙耳机之间的链路示意图。如图15所示,在同声翻译模式中,终端与主耳机之间存在两条链路,分别为第一ACL链路和第一SCO链路。终端与副耳机之间为第二SCO链路。主耳机与副耳机之间为第二ACL链路。在终端接收到第二场景变更指令之后,终端断开与主耳机之间的第一SCO链路,并断开与副耳机之间的第二SCO链路。因此,退出同声翻译模式之后,终端与主耳机之间为第一ACL链路,终端与副耳机之间无连接链路。主耳机与副耳机之间依然为第二ACL链路。即,结束同声翻译后,终端与蓝牙耳机之间的链路恢复为如图4所示的听音乐场景下的链路状态。从而,便于用户正常使用蓝牙耳机。
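退出同声翻译模式时仅断开两条SCO链路、保留ACL链路的处理可示意如下(链路以字典表示,为本文假设的简化模型,键名仅用于示意):

```python
def exit_translation_mode(links):
    """响应第二场景变更指令:断开全部SCO链路,保留ACL链路(简化模型)。"""
    # 过滤掉名称中含"SCO"的链路,即恢复为听音乐场景下的链路状态
    return {name: peer for name, peer in links.items() if "SCO" not in name}
```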
图16为本申请实施例提供的TWS蓝牙耳机的结构示意图,如图16所示,本实施例的TWS蓝牙耳机20包括第一耳机21和第二耳机22,所述第一耳机21包括处理器211、存储器212、以及存储在所述存储器212上并可在所述处理器211上运行的计算机程序。所述第二耳机22包括处理器221、存储器222、以及存储在所述存储器222上并可在所述处理器221上运行的计算机程序。示例性的,存储器212和处理器211可以通过通信总线通信。所述处理器211执行所述计算机程序时执行上述实施例中第一耳机21的技术方案。示例性的,存储器222和处理器221可以通过通信总线通信。所述处理器221执行所述计算机程序时执行上述实施例中第二耳机22的技术方案。其实现原理和技术效果类似,此处不再赘述。
图17为本申请实施例提供的终端的结构示意图。如图17所示,本实施例的终端10包括处理器11、存储器12、以及存储在所述存储器12上并可在所述处理器11上运行的计算机程序。示例性的,存储器12和处理器11可以通过通信总线13通信,所述处理器11执行所述计算机程序时执行上述任一方法实施例中的终端侧的技术方案,其实现原理和技术效果类似,此处不再赘述。
本申请实施例提供一种存储介质,所述存储介质用于存储计算机程序,所述计算机程序被计算机或处理器执行时用于实现TWS蓝牙耳机侧的蓝牙通信方法,或者,实现终端侧的蓝牙通信方法。
本申请实施例提供一种计算机程序产品,所述计算机程序产品包括指令,当所述指令被执行时,使得计算机执行上述的TWS蓝牙耳机侧的蓝牙通信方法,或者,实现终端侧的蓝牙通信方法。
本申请实施例提供一种芯片,所述芯片可应用于终端或者TWS蓝牙耳机,所述芯片包括:至少一个通信接口,至少一个处理器,至少一个存储器,所述通信接口、存储器和处理器通过总线互联,所述处理器通过执行所述存储器中存储的指令,使得终端可执行上述的蓝牙通信方法,或者使得TWS蓝牙耳机执行上述的蓝牙通信方法。
在本申请实施例中,处理器可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
在本申请实施例中,存储器可以是非易失性存储器,比如硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)等,还可以是易失性存储器(volatile memory),例如随机存取存储器(random-access memory,RAM)。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。本申请实施例中的存储器还可以是电路或者其它任意能够实现存储功能的装置,用于存储程序指令和/或数据。
本申请各实施例提供的方法中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,数字视频光盘(digital video disc,DVD))、或者半导体介质(例如,SSD)等。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (22)

  1. 一种蓝牙通信方法,其特征在于,应用于蓝牙通信系统,所述蓝牙通信系统包括真正无线立体声TWS蓝牙耳机和终端,所述TWS蓝牙耳机包括第一耳机和第二耳机,所述方法包括:
    所述终端与所述第一耳机之间建立第一ACL链路,所述第一耳机与所述第二耳机之间建立第二ACL链路;
    所述终端通过所述第一ACL链路向所述第一耳机发送第一音频数据,所述第一耳机将所述第一音频数据通过所述第二ACL链路转发至所述第二耳机;
    所述终端接收第一场景变更指令;
    响应于所述第一场景变更指令,所述终端与所述第一耳机之间建立第一SCO链路,所述终端与所述第二耳机之间建立第二SCO链路;
    所述终端和所述第一耳机之间通过所述第一SCO链路传输第二音频数据,所述终端和所述第二耳机之间通过所述第二SCO链路传输第二音频数据。
  2. 根据权利要求1所述的方法,其特征在于,所述第一场景变更指令用于指示所述TWS蓝牙耳机的应用场景变更为同声翻译场景;所述终端和所述第一耳机之间通过所述第一SCO链路传输第二音频数据,所述终端和所述第二耳机之间通过所述第二SCO链路传输第二音频数据,包括:
    所述第一耳机采集第一语言的第二音频数据,并将所述第一语言的第二音频数据通过所述第一SCO链路发送给所述终端,所述终端将所述第一语言的第二音频数据翻译为第二语言的第二音频数据,并将翻译后的第二语言的第二音频数据通过所述第二SCO链路发送给所述第二耳机,所述第二耳机对所述第二语言的第二音频数据进行播放;
    所述第二耳机采集第二语言的第二音频数据,并将所述第二语言的第二音频数据通过所述第二SCO链路发送给所述终端,所述终端将所述第二语言的第二音频数据翻译为第一语言的第二音频数据,并将翻译后的第一语言的第二音频数据通过所述第一SCO链路发送给所述第一耳机,所述第一耳机对所述第一语言的第二音频数据进行播放。
  3. 根据权利要求1或2所述的方法,其特征在于,所述终端接收第一场景变更指令之前,所述方法还包括:
    所述第一耳机接收用户输入的第一操作;
    响应于所述第一操作,所述第一耳机向所述第二耳机发送物理地址请求消息;
    所述第一耳机接收所述第二耳机发送的所述第二耳机的物理地址;
    所述第一耳机将第一场景变更指令以及所述第二耳机的物理地址发送给所述终端。
  4. 根据权利要求1或2所述的方法,其特征在于,所述终端接收第一场景变更指令之前,所述方法还包括:
    所述第二耳机接收用户输入的第一操作;
    响应于所述第一操作,所述第二耳机向所述第一耳机发送第一场景变更指令以及所述第二耳机的物理地址;
    所述第一耳机将所述第一场景变更指令以及所述第二耳机的物理地址转发给所述终端。
  5. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    所述终端接收用户输入的翻译配置信息,所述翻译配置信息用于指示所述第一耳机的采集语言为所述第一语言,所述第二耳机的采集语言为所述第二语言。
  6. 根据权利要求1至5任一项所述的方法,其特征在于,所述终端与所述第一耳机之间建立第一SCO链路,所述终端与所述第二耳机之间建立第二SCO链路之后,所述方法还包括:
    所述终端接收第二场景变更指令;
    响应于所述第二场景变更指令,断开所述终端与所述第一耳机之间的第一SCO链路,并断开所述终端与所述第二耳机之间的第二SCO链路。
  7. 根据权利要求6所述的方法,其特征在于,所述终端接收第二场景变更指令之前,所述方法还包括:
    所述第一耳机接收用户输入的第二操作;
    响应于所述第二操作,所述第一耳机向所述终端发送第二场景变更指令。
  8. 根据权利要求6所述的方法,其特征在于,所述终端接收第二场景变更指令之前,所述方法还包括:
    所述第二耳机接收用户输入的第二操作;
    响应于所述第二操作,所述第二耳机向所述第一耳机发送第二场景变更指令;
    所述第一耳机将接收到的所述第二场景变更指令转发给所述终端。
  9. 根据权利要求1至8任一项所述的方法,其特征在于,所述第一音频数据为媒体音频数据,所述第二音频数据为语音通话数据。
  10. 一种真正无线立体声TWS蓝牙耳机,其特征在于,所述TWS蓝牙耳机包括第一耳机和第二耳机,所述第一耳机和所述第二耳机均包括处理器、存储器、以及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时执行如下步骤:
    终端与所述第一耳机之间建立第一ACL链路,所述第一耳机与所述第二耳机之间建立第二ACL链路;
    所述第一耳机通过所述第一ACL链路接收所述终端发送的第一音频数据,所述第一耳机将所述第一音频数据通过所述第二ACL链路转发至所述第二耳机;
    响应于所述终端接收到的第一场景变更指令,所述终端与所述第一耳机之间建立第一SCO链路,所述终端与所述第二耳机之间建立第二SCO链路;
    所述终端和所述第一耳机之间通过所述第一SCO链路传输第二音频数据,所述终端和所述第二耳机之间通过所述第二SCO链路传输第二音频数据。
  11. 根据权利要求10所述的TWS蓝牙耳机,其特征在于,所述第一场景变更指令用于指示所述TWS蓝牙耳机的应用场景变更为同声翻译场景;所述终端和所述第一耳机之间通过所述第一SCO链路传输第二音频数据,所述终端和所述第二耳机之间通过所述第二SCO链路传输第二音频数据,包括:
    所述第一耳机采集第一语言的第二音频数据,并将所述第一语言的第二音频数据通过所述第一SCO链路发送给所述终端,以使所述终端将所述第一语言的第二音频数据翻译为第二语言的第二音频数据,并将翻译后的第二语言的第二音频数据通过所述第二SCO链路发送给所述第二耳机,所述第二耳机对所述第二语言的第二音频数据进行播放;
    所述第二耳机采集第二语言的第二音频数据,并将所述第二语言的第二音频数据通过所述第二SCO链路发送给所述终端,以使所述终端将所述第二语言的第二音频数据翻译为第一语言的第二音频数据,并将翻译后的第一语言的第二音频数据通过所述第一SCO链路发送给所述第一耳机,所述第一耳机对所述第一语言的第二音频数据进行播放。
  12. 根据权利要求10或11所述的TWS蓝牙耳机,其特征在于,所述响应于所述终端接收到的第一场景变更指令之前,还包括:
    所述第一耳机接收用户输入的第一操作;
    响应于所述第一操作,所述第一耳机向所述第二耳机发送物理地址请求消息;
    所述第一耳机接收所述第二耳机发送的所述第二耳机的物理地址;
    所述第一耳机将第一场景变更指令以及所述第二耳机的物理地址发送给所述终端。
  13. 根据权利要求10或11所述的TWS蓝牙耳机,其特征在于,所述响应于所述终端接收到的第一场景变更指令之前,还包括:
    所述第二耳机接收用户输入的第一操作;
    响应于所述第一操作,所述第二耳机向所述第一耳机发送第一场景变更指令以及所述第二耳机的物理地址;
    所述第一耳机将所述第一场景变更指令以及所述第二耳机的物理地址转发给所述终端。
  14. 根据权利要求10至13任一项所述的TWS蓝牙耳机,其特征在于,所述终端与所述第一耳机之间建立第一SCO链路,所述终端与所述第二耳机之间建立第二SCO链路之后,还包括:
    响应于所述终端接收到的第二场景变更指令,断开所述终端与所述第一耳机之间的第一SCO链路,并断开所述终端与所述第二耳机之间的第二SCO链路。
  15. 根据权利要求14所述的TWS蓝牙耳机,其特征在于,所述响应于所述终端接收到的第二场景变更指令之前,还包括:
    所述第一耳机接收用户输入的第二操作;
    响应于所述第二操作,所述第一耳机向所述终端发送第二场景变更指令。
  16. 根据权利要求14所述的TWS蓝牙耳机,其特征在于,所述响应于所述终端接收到的第二场景变更指令之前,还包括:
    所述第二耳机接收用户输入的第二操作;
    响应于所述第二操作,所述第二耳机向所述第一耳机发送第二场景变更指令;
    所述第一耳机将接收到的所述第二场景变更指令转发给所述终端。
  17. 根据权利要求10至16任一项所述的TWS蓝牙耳机,其特征在于,所述第一音频数据为媒体音频数据,所述第二音频数据为语音通话数据。
  18. 一种终端,其特征在于,所述终端包括处理器、存储器、以及存储在所述存储器上并可在所述处理器上运行的计算机程序,所述处理器执行所述计算机程序时执行如下步骤:
    所述终端与第一耳机之间建立第一ACL链路,所述第一耳机与第二耳机之间建立第二ACL链路,所述第一耳机和所述第二耳机为真正无线立体声TWS蓝牙耳机中的单体耳机;
    所述终端通过所述第一ACL链路向所述第一耳机发送第一音频数据,所述第一耳机将所述第一音频数据通过所述第二ACL链路转发至所述第二耳机;
    所述终端接收第一场景变更指令;
    响应于所述第一场景变更指令,所述终端与所述第一耳机之间建立第一SCO链路,所述终端与所述第二耳机之间建立第二SCO链路;
    所述终端和所述第一耳机之间通过所述第一SCO链路传输第二音频数据,所述终端和所述第二耳机之间通过所述第二SCO链路传输第二音频数据。
  19. 根据权利要求18所述的终端,其特征在于,所述第一场景变更指令用于指示所述TWS蓝牙耳机的应用场景变更为同声翻译场景;所述终端和所述第一耳机之间通过所述第一SCO链路传输第二音频数据,所述终端和所述第二耳机之间通过所述第二SCO链路传输第二音频数据,包括:
    所述终端通过所述第一SCO链路从所述第一耳机接收第一语言的第二音频数据,所述第一语言的第二音频数据是由所述第一耳机采集的,所述终端将所述第一语言的第二音频数据翻译为第二语言的第二音频数据,并将翻译后的第二语言的第二音频数据通过所述第二SCO链路发送给所述第二耳机,以使所述第二耳机对所述第二语言的第二音频数据进行播放;
    所述终端通过所述第二SCO链路从所述第二耳机接收第二语言的第二音频数据,所述第二语言的第二音频数据是由所述第二耳机采集的,所述终端将所述第二语言的第二音频数据翻译为第一语言的第二音频数据,并将翻译后的第一语言的第二音频数据通过所述第一SCO链路发送给所述第一耳机,以使所述第一耳机对所述第一语言的第二音频数据进行播放。
  20. 根据权利要求19所述的终端,其特征在于,还包括:
    所述终端接收用户输入的翻译配置信息,所述翻译配置信息用于指示所述第一耳机的采集语言为所述第一语言,所述第二耳机的采集语言为所述第二语言。
  21. 根据权利要求18至20任一项所述的终端,其特征在于,所述终端与所述第一耳机之间建立第一SCO链路,所述终端与所述第二耳机之间建立第二SCO链路之后,还包括:
    所述终端接收第二场景变更指令;
    响应于所述第二场景变更指令,断开所述终端与所述第一耳机之间的第一SCO链路,并断开所述终端与所述第二耳机之间的第二SCO链路。
  22. 根据权利要求18至21任一项所述的终端,其特征在于,所述第一音频数据为媒体音频数据,所述第二音频数据为语音通话数据。
PCT/CN2020/095872 2019-06-14 2020-06-12 蓝牙通信方法、tws蓝牙耳机及终端 WO2020249098A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910513900.9 2019-06-14
CN201910513900.9A CN110381485B (zh) 2019-06-14 2019-06-14 蓝牙通信方法、tws蓝牙耳机及终端

Publications (1)

Publication Number Publication Date
WO2020249098A1 true WO2020249098A1 (zh) 2020-12-17



