CN110381485B - Bluetooth communication method, TWS (True Wireless Stereo) Bluetooth headset and terminal - Google Patents


Info

Publication number
CN110381485B
Authority
CN
China
Prior art keywords
earphone
terminal
headset
audio data
change instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910513900.9A
Other languages
Chinese (zh)
Other versions
CN110381485A (en)
Inventor
高天星
郝一休
唐能福
宋业全
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201910513900.9A priority Critical patent/CN110381485B/en
Publication of CN110381485A publication Critical patent/CN110381485A/en
Priority to PCT/CN2020/095872 priority patent/WO2020249098A1/en
Application granted granted Critical
Publication of CN110381485B publication Critical patent/CN110381485B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/40Processing or translation of natural language
    • G06F40/58Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Abstract

In the Bluetooth communication method, the TWS Bluetooth headset, and the terminal provided in this embodiment, a first ACL link is established between the terminal and the primary headset, and a second ACL link is established between the primary headset and the secondary headset. In response to a first scene change instruction received by the terminal, a first SCO link is established between the terminal and the primary headset, and a second SCO link is established between the terminal and the secondary headset. The terminal can then exchange voice data with the primary headset over the first SCO link and with the secondary headset over the second SCO link, so that the primary headset and the secondary headset each collect the voice data of a different user and send it to the terminal over separate SCO links for translation. This realizes simultaneous interpretation, enriches the application scenarios of the Bluetooth headset, and meets users' demand for a multifunctional Bluetooth headset.

Description

Bluetooth communication method, TWS (True Wireless Stereo) Bluetooth headset and terminal
Technical Field
The embodiments of the application relate to the field of communication technology, and in particular to a Bluetooth communication method, a TWS (True Wireless Stereo) Bluetooth headset, and a terminal.
Background
With social progress and the development of communication technology, earphones have become an everyday necessity. A Bluetooth headset applies Bluetooth technology to a hands-free earphone, freeing the user from an earphone cable and allowing the headset to be used conveniently in many situations.
At present, a true wireless stereo (TWS) Bluetooth headset completely removes the constraint of a connecting wire. A TWS headset includes a primary earphone and a secondary earphone; the primary earphone connects to the terminal over Bluetooth, and the primary earphone connects to the secondary earphone over Bluetooth. Data is transmitted between the primary earphone and the secondary earphone by forwarding. Illustratively, the terminal sends audio data to the primary earphone, which forwards the audio data to the secondary earphone so that the two earphones play sound synchronously. Because no physical cable connects the primary earphone and the secondary earphone, the wearing experience of the TWS Bluetooth headset is improved.
However, existing TWS Bluetooth headsets provide only the basic functions of listening to music and making and receiving calls. Their functionality is therefore limited and cannot satisfy users' demand for a multifunctional TWS Bluetooth headset.
Disclosure of Invention
The embodiments of the application provide a Bluetooth communication method, a TWS Bluetooth headset, and a terminal, which enrich the functions of the TWS Bluetooth headset and meet users' demand for a multifunctional TWS Bluetooth headset.
In a first aspect, an embodiment of the present application provides a Bluetooth communication method applied to a Bluetooth communication system, where the Bluetooth communication system includes a true wireless stereo (TWS) Bluetooth headset and a terminal, the TWS Bluetooth headset includes a first headset and a second headset, and the method includes:
a first ACL link is established between the terminal and the first earphone, and a second ACL link is established between the first earphone and the second earphone;
the terminal sends first audio data to the first earphone through the first ACL link, and the first earphone forwards the first audio data to the second earphone through the second ACL link;
the terminal receives a first scene change instruction;
responding to the first scene change instruction, establishing a first SCO link between the terminal and the first earphone, and establishing a second SCO link between the terminal and the second earphone;
and second audio data is transmitted between the terminal and the first earphone through the first SCO link, and second audio data is transmitted between the terminal and the second earphone through the second SCO link.
With this scheme, the terminal can exchange voice data with the primary headset over the first SCO link and with the secondary headset over the second SCO link, so that the primary headset and the secondary headset can each collect the voice data of a different user and send it to the terminal over separate SCO links. Similarly, the terminal can send different voice data to the primary headset and the secondary headset over the two SCO links. This enriches the application scenarios of the Bluetooth headset and meets users' demand for a multifunctional Bluetooth headset.
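For illustration only, the following minimal Python sketch models the link management described above; the class names (Link, Terminal) and methods are hypothetical and do not correspond to any actual Bluetooth stack API or to the claimed implementation.

```python
class Link:
    def __init__(self, kind, a, b):
        self.kind, self.a, self.b = kind, a, b   # kind: "ACL" or "SCO"

    def __repr__(self):
        return f"{self.kind}({self.a}<->{self.b})"

class Terminal:
    def __init__(self):
        self.links = []

    def connect_tws(self, first, second):
        # First ACL link: terminal <-> first headset; second ACL link: first <-> second headset.
        self.links.append(Link("ACL", "terminal", first))
        self.links.append(Link("ACL", first, second))

    def on_scene_change(self, instruction, first, second):
        # In response to the first scene change instruction, establish one SCO link to each
        # headset so that voice data can flow to and from each headset independently.
        if instruction == "enter_simultaneous_interpretation":
            self.links.append(Link("SCO", "terminal", first))
            self.links.append(Link("SCO", "terminal", second))
        return self.links

t = Terminal()
t.connect_tws("first_headset", "second_headset")
print(t.on_scene_change("enter_simultaneous_interpretation", "first_headset", "second_headset"))
```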
Optionally, the first scene change instruction is used to instruct the application scene of the TWS bluetooth headset to change into a simultaneous interpretation scene; the transmitting second audio data between the terminal and the first earphone through the first SCO link, and the transmitting second audio data between the terminal and the second earphone through the second SCO link, includes:
the first earphone collects second audio data of a first language, the second audio data of the first language is sent to the terminal through the first SCO link, the terminal translates the second audio data of the first language into second audio data of a second language and sends the translated second audio data of the second language to the second earphone through the second SCO link, and the second earphone plays the second audio data of the second language;
the second earphone collects second audio data of a second language, sends the second audio data of the second language to the terminal through the second SCO link, the terminal translates the second audio data of the second language into second audio data of a first language and sends the translated second audio data of the first language to the first earphone through the first SCO link, and the first earphone plays the second audio data of the first language.
With this scheme, SCO links are established between the terminal and the primary headset and between the terminal and the secondary headset. The two SCO links and the two translation paths are independent of each other and do not interfere with each other, and each SCO link supports bidirectional voice data transmission, so two users speaking different languages can communicate in real time without a language barrier, achieving the effect of simultaneous interpretation.
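For illustration only, the following sketch models the two independent translation paths described above, under the assumption of a placeholder translate() helper; in a real system the terminal (or a cloud service it uses) would perform speech recognition, translation, and synthesis.

```python
def translate(audio, src, dst):
    # Placeholder: tag the audio with the language pair instead of performing real translation.
    return f"{audio}[{src}->{dst}]"

def route(frame, source_headset, lang_first="zh", lang_second="en"):
    """Route one captured voice frame along the correct SCO link."""
    if source_headset == "first":
        # Captured in the first language on the first headset -> played in the
        # second language on the second headset over the second SCO link.
        return ("second_sco", translate(frame, lang_first, lang_second))
    else:
        # Captured in the second language on the second headset -> played in the
        # first language on the first headset over the first SCO link.
        return ("first_sco", translate(frame, lang_second, lang_first))

print(route("ni hao", "first"))    # ('second_sco', 'ni hao[zh->en]')
print(route("hello", "second"))    # ('first_sco', 'hello[en->zh]')
```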
Optionally, before the terminal receives the first scene change instruction, the method further includes:
the first earphone receives a first operation input by a user;
in response to the first operation, the first earpiece sending a physical address request message to the second earpiece;
the first earphone receives the physical address of the second earphone sent by the second earphone;
and the first earphone sends a first scene change instruction and the physical address of the second earphone to the terminal.
Optionally, before the terminal receives the first scene change instruction, the method further includes:
the second earphone receives a first operation input by a user;
in response to the first operation, the second headset sends a first scene change instruction and a physical address of the second headset to the first headset;
and the first earphone forwards the first scene change instruction and the physical address of the second earphone to the terminal.
With this scheme, a user can trigger the scene change by operating either the first headset or the second headset, so the method can be applied in a variety of situations and is more flexible to use.
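For illustration only, the following sketch shows how the first scene change instruction and the second headset's physical address can reach the terminal regardless of which headset the user operates; the message format and the address value are hypothetical.

```python
SECOND_ADDR = "AA:BB:CC:DD:EE:02"  # hypothetical physical (BD) address of the second headset

def messages_to_terminal(operated_headset):
    """List the hops taken before the terminal receives the first scene change instruction."""
    hops = []
    if operated_headset == "first":
        # The first headset requests the second headset's address over the second ACL link.
        hops.append(("first -> second", "physical_address_request"))
        hops.append(("second -> first", SECOND_ADDR))
    else:
        # The second headset sends the instruction and its own address to the first headset.
        hops.append(("second -> first", ("first_scene_change", SECOND_ADDR)))
    # In both cases the first headset reports to the terminal over the first ACL link.
    hops.append(("first -> terminal", ("first_scene_change", SECOND_ADDR)))
    return hops

for hop in messages_to_terminal("first"):
    print(hop)
for hop in messages_to_terminal("second"):
    print(hop)
```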
Optionally, the method further includes:
the terminal receives translation configuration information input by a user, wherein the translation configuration information is used for indicating that the collection language of the first earphone is the first language, and the collection language of the second earphone is the second language.
By adopting the scheme, the terminal can be flexibly applied to various application scenes by carrying out translation processing according to the translation configuration information set by the user.
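For illustration only, the following sketch shows one possible shape of the translation configuration information, with hypothetical field names: the terminal records which language each headset collects and translates each captured stream into the other headset's language.

```python
translation_config = {
    "first_headset_language": "zh",   # first language, collected by the first headset
    "second_headset_language": "en",  # second language, collected by the second headset
}

def target_language(source_headset, config=translation_config):
    # Audio collected on one headset is translated into the other headset's language.
    if source_headset == "first":
        return config["second_headset_language"]
    return config["first_headset_language"]

print(target_language("first"))   # en
print(target_language("second"))  # zh
```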
Optionally, after a first SCO link is established between the terminal and the first headset and a second SCO link is established between the terminal and the second headset, the method further includes:
the terminal receives a second scene change instruction;
and responding to the second scene change instruction, disconnecting a first SCO link between the terminal and the first earphone and disconnecting a second SCO link between the terminal and the second earphone.
Optionally, before the terminal receives the second scene change instruction, the method further includes:
the first earphone receives a second operation input by a user;
and responding to the second operation, and sending a second scene change instruction to the terminal by the first earphone.
Optionally, before the terminal receives the second scene change instruction, the method further includes:
the second earphone receives a second operation input by the user;
in response to the second operation, the second headset sends a second scene change instruction to the first headset;
and the first earphone forwards the received second scene change instruction to the terminal.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a second aspect, embodiments of the present application provide a TWS bluetooth headset, the TWS bluetooth headset comprising a first headset and a second headset, the first headset and the second headset each comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, the processor executing the computer program to perform the steps of:
a first ACL link is established between the terminal and the first earphone, and a second ACL link is established between the first earphone and the second earphone;
the first earphone receives first audio data sent by the terminal through the first ACL link, and forwards the first audio data to the second earphone through the second ACL link;
responding to a first scene change instruction received by the terminal, establishing a first SCO link between the terminal and the first earphone, and establishing a second SCO link between the terminal and the second earphone;
and second audio data is transmitted between the terminal and the first earphone through the first SCO link, and second audio data is transmitted between the terminal and the second earphone through the second SCO link.
Optionally, the first scene change instruction is used to instruct the application scene of the TWS bluetooth headset to change into a simultaneous interpretation scene; the transmitting second audio data between the terminal and the first earphone through the first SCO link, and the transmitting second audio data between the terminal and the second earphone through the second SCO link, includes:
the first earphone collects second audio data of a first language, and sends the second audio data of the first language to the terminal through the first SCO link, so that the terminal translates the second audio data of the first language into second audio data of a second language and sends the translated second audio data of the second language to the second earphone through the second SCO link, and the second earphone plays the second audio data of the second language;
the second earphone collects second audio data of a second language, sends the second audio data of the second language to the terminal through the second SCO link, so that the terminal translates the second audio data of the second language into second audio data of a first language and sends the translated second audio data of the first language to the first earphone through the first SCO link, and the first earphone plays the second audio data of the first language.
Optionally, before responding to the first scene change instruction received by the terminal, the method further includes:
the first earphone receives a first operation input by a user;
in response to the first operation, the first earpiece sending a physical address request message to the second earpiece;
the first earphone receives the physical address of the second earphone sent by the second earphone;
and the first earphone sends a first scene change instruction and the physical address of the second earphone to the terminal.
Optionally, before responding to the first scene change instruction received by the terminal, the method further includes:
the second earphone receives a first operation input by a user;
in response to the first operation, the second headset sends a first scene change instruction and a physical address of the second headset to the first headset;
and the first earphone forwards the first scene change instruction and the physical address of the second earphone to the terminal.
Optionally, after a first SCO link is established between the terminal and the first earphone, and a second SCO link is established between the terminal and the second earphone, the method further includes:
and in response to a second scene change instruction received by the terminal, disconnecting a first SCO link between the terminal and the first earphone and disconnecting a second SCO link between the terminal and the second earphone.
Optionally, before responding to the second scene change instruction received by the terminal, the method further includes:
the first earphone receives a second operation input by a user;
and responding to the second operation, and sending a second scene change instruction to the terminal by the first earphone.
Optionally, before responding to the second scene change instruction received by the terminal, the method further includes:
the second earphone receives a second operation input by the user;
in response to the second operation, the second headset sends a second scene change instruction to the first headset;
and the first earphone forwards the received second scene change instruction to the terminal.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a third aspect, an embodiment of the present application provides a terminal, where the terminal includes a processor, a memory, and a computer program stored on the memory and executable on the processor, and the processor executes the computer program to perform the following steps:
a first ACL link is established between the terminal and a first earphone, a second ACL link is established between the first earphone and a second earphone, and the first earphone and the second earphone are the individual earphones of a true wireless stereo (TWS) Bluetooth headset;
the terminal sends first audio data to the first earphone through the first ACL link, and the first earphone forwards the first audio data to the second earphone through the second ACL link;
the terminal receives a first scene change instruction;
responding to the first scene change instruction, establishing a first SCO link between the terminal and the first earphone, and establishing a second SCO link between the terminal and the second earphone;
and second audio data is transmitted between the terminal and the first earphone through the first SCO link, and second audio data is transmitted between the terminal and the second earphone through the second SCO link.
Optionally, the first scene change instruction is used to instruct the application scene of the TWS bluetooth headset to change into a simultaneous interpretation scene; the transmitting second audio data between the terminal and the first earphone through the first SCO link, and the transmitting second audio data between the terminal and the second earphone through the second SCO link, includes:
the terminal receives second audio data of a first language from the first earphone through the first SCO link, the second audio data of the first language is collected by the first earphone, the terminal translates the second audio data of the first language into second audio data of a second language and sends the translated second audio data of the second language to the second earphone through the second SCO link, so that the second earphone plays the second audio data of the second language;
the terminal receives second audio data of a second language from the second earphone through the second SCO link, the second audio data of the second language is collected by the second earphone, the terminal translates the second audio data of the second language into second audio data of the first language and sends the translated second audio data of the first language to the first earphone through the first SCO link, so that the first earphone plays the second audio data of the first language.
Optionally, the method further includes: the terminal receives translation configuration information input by a user, wherein the translation configuration information is used for indicating that the collection language of the first earphone is the first language, and the collection language of the second earphone is the second language.
Optionally, after a first SCO link is established between the terminal and the first earphone, and a second SCO link is established between the terminal and the second earphone, the method further includes:
the terminal receives a second scene change instruction;
and responding to the second scene change instruction, disconnecting a first SCO link between the terminal and the first earphone and disconnecting a second SCO link between the terminal and the second earphone.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a fourth aspect, an embodiment of the present application provides a bluetooth communication method, which is applied to a TWS bluetooth headset, where the TWS bluetooth headset includes a first headset and a second headset, and the method includes:
a first ACL link is established between the terminal and the first earphone, and a second ACL link is established between the first earphone and the second earphone;
the first earphone receives first audio data sent by the terminal through the first ACL link, and forwards the first audio data to the second earphone through the second ACL link;
responding to a first scene change instruction received by the terminal, establishing a first SCO link between the terminal and the first earphone, and establishing a second SCO link between the terminal and the second earphone;
and second audio data is transmitted between the terminal and the first earphone through the first SCO link, and second audio data is transmitted between the terminal and the second earphone through the second SCO link.
Optionally, the first scene change instruction is used to instruct the application scene of the TWS bluetooth headset to change into a simultaneous interpretation scene; the transmitting second audio data between the terminal and the first earphone through the first SCO link, and the transmitting second audio data between the terminal and the second earphone through the second SCO link, includes:
the first earphone collects second audio data of a first language, and sends the second audio data of the first language to the terminal through the first SCO link, so that the terminal translates the second audio data of the first language into second audio data of a second language and sends the translated second audio data of the second language to the second earphone through the second SCO link, and the second earphone plays the second audio data of the second language;
the second earphone collects second audio data of a second language, sends the second audio data of the second language to the terminal through the second SCO link, so that the terminal translates the second audio data of the second language into second audio data of a first language and sends the translated second audio data of the first language to the first earphone through the first SCO link, and the first earphone plays the second audio data of the first language.
Optionally, before responding to the first scene change instruction received by the terminal, the method further includes:
the first earphone receives a first operation input by a user;
in response to the first operation, the first earpiece sending a physical address request message to the second earpiece;
the first earphone receives the physical address of the second earphone sent by the second earphone;
and the first earphone sends a first scene change instruction and the physical address of the second earphone to the terminal.
Optionally, before responding to the first scene change instruction received by the terminal, the method further includes:
the second earphone receives a first operation input by a user;
in response to the first operation, the second headset sends a first scene change instruction and a physical address of the second headset to the first headset;
and the first earphone forwards the first scene change instruction and the physical address of the second earphone to the terminal.
Optionally, after a first SCO link is established between the terminal and the first headset and a second SCO link is established between the terminal and the second headset, the method further includes:
and in response to a second scene change instruction received by the terminal, disconnecting a first SCO link between the terminal and the first earphone and disconnecting a second SCO link between the terminal and the second earphone.
Optionally, before responding to the second scene change instruction received by the terminal, the method further includes:
the first earphone receives a second operation input by a user;
and responding to the second operation, and sending a second scene change instruction to the terminal by the first earphone.
Optionally, before responding to the second scene change instruction received by the terminal, the method further includes:
the second earphone receives a second operation input by the user;
in response to the second operation, the second headset sends a second scene change instruction to the first headset;
and the first earphone forwards the received second scene change instruction to the terminal.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a fifth aspect, an embodiment of the present application provides a bluetooth communication method, which is applied to a terminal, and the method includes:
a first ACL link is established between the terminal and a first earphone, a second ACL link is established between the first earphone and a second earphone, and the first earphone and the second earphone are the individual earphones of a true wireless stereo (TWS) Bluetooth headset;
the terminal sends first audio data to the first earphone through the first ACL link, and the first earphone forwards the first audio data to the second earphone through the second ACL link;
the terminal receives a first scene change instruction;
responding to the first scene change instruction, establishing a first SCO link between the terminal and the first earphone, and establishing a second SCO link between the terminal and the second earphone;
and second audio data is transmitted between the terminal and the first earphone through the first SCO link, and second audio data is transmitted between the terminal and the second earphone through the second SCO link.
Optionally, the first scene change instruction is used to instruct the application scene of the TWS bluetooth headset to change into a simultaneous interpretation scene; the transmitting second audio data between the terminal and the first earphone through the first SCO link, and the transmitting second audio data between the terminal and the second earphone through the second SCO link, includes:
the terminal receives second audio data of a first language from the first earphone through the first SCO link, the second audio data of the first language is collected by the first earphone, the terminal translates the second audio data of the first language into second audio data of a second language and sends the translated second audio data of the second language to the second earphone through the second SCO link, so that the second earphone plays the second audio data of the second language;
the terminal receives second audio data of a second language from the second earphone through the second SCO link, the second audio data of the second language is collected by the second earphone, the terminal translates the second audio data of the second language into second audio data of the first language and sends the translated second audio data of the first language to the first earphone through the first SCO link, so that the first earphone plays the second audio data of the first language.
Optionally, the method further includes: the terminal receives translation configuration information input by a user, wherein the translation configuration information is used for indicating that the collection language of the first earphone is the first language, and the collection language of the second earphone is the second language.
Optionally, after a first SCO link is established between the terminal and the first headset and a second SCO link is established between the terminal and the second headset, the method further includes:
the terminal receives a second scene change instruction;
and responding to the second scene change instruction, disconnecting a first SCO link between the terminal and the first earphone and disconnecting a second SCO link between the terminal and the second earphone.
Optionally, the first audio data is media audio data, and the second audio data is voice call data.
In a sixth aspect, an embodiment of the present application provides a chip, where the chip includes at least one communication interface, at least one processor, and at least one memory, where the communication interface, the memory, and the processor are interconnected by a bus, and the processor executes instructions stored in the memory to perform the bluetooth communication method according to any one of the fourth aspects, or the bluetooth communication method according to any one of the fifth aspects.
In a seventh aspect, an embodiment of the present application provides a storage medium for storing a computer program, where the computer program is used to implement the bluetooth communication method according to any one of the fourth aspects or the bluetooth communication method according to any one of the fifth aspects when the computer program is executed by a computer or a processor.
In an eighth aspect, the present application provides a computer program product, which includes instructions that, when executed by a computer or a processor, implement the bluetooth communication method according to any one of the fourth aspects, or implement the bluetooth communication method according to any one of the fifth aspects.
According to the Bluetooth communication method, the TWS Bluetooth headset, and the terminal provided in the embodiments, with a first ACL link established between the terminal and the primary headset and a second ACL link established between the primary headset and the secondary headset, a first SCO link is established between the terminal and the primary headset and a second SCO link is established between the terminal and the secondary headset in response to a first scene change instruction received by the terminal. The terminal can then exchange voice data with the primary headset over the first SCO link and with the secondary headset over the second SCO link, so that the primary headset and the secondary headset each collect the voice data of a different user and send it to the terminal over separate SCO links for translation. This realizes simultaneous interpretation, enriches the application scenarios of the Bluetooth headset, and meets users' demand for a multifunctional Bluetooth headset.
Drawings
FIG. 1 is a diagram of a system architecture provided by an embodiment of the present application;
fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 3A to fig. 3E are schematic diagrams of a terminal interface in an application scenario according to an embodiment of the present application;
fig. 4A is a schematic diagram of a link between a terminal and a bluetooth headset in a music listening scene according to an embodiment of the present application;
fig. 4B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a music listening scene according to an embodiment of the present application;
fig. 5A is a schematic diagram of a link between a terminal and a bluetooth headset in a call dialing scenario according to an embodiment of the present application;
fig. 5B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a call dialing scenario according to an embodiment of the present application;
fig. 6 is a flowchart illustrating a bluetooth communication method according to an embodiment of the present application;
fig. 7A is a schematic diagram of a bluetooth headset switched from a music listening scene to a simultaneous interpretation scene according to an embodiment of the present application;
fig. 7B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a simultaneous interpretation scene according to an embodiment of the present application;
fig. 8A is a schematic diagram illustrating a bluetooth headset switched from a single-person call scenario to a double-person call scenario according to an embodiment of the present application;
fig. 8B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a double-person communication scenario according to an embodiment of the present application;
fig. 9 is a schematic flowchart illustrating a process of entering a simultaneous interpretation mode according to an embodiment of the present application;
fig. 10 is a schematic flowchart of another process for entering the simultaneous interpretation mode according to the embodiment of the present application;
fig. 11 is a schematic diagram of data transmission between a terminal and a bluetooth headset in a simultaneous interpretation scene according to an embodiment of the present application;
fig. 12 is a schematic diagram of a translation configuration interface of a terminal according to an embodiment of the present application;
fig. 13 is a schematic diagram of a translation configuration interface of a terminal according to an embodiment of the present application;
fig. 14 is a flowchart illustrating a bluetooth communication method according to an embodiment of the present application;
fig. 15 is a schematic diagram of a link between a terminal and a bluetooth headset after exiting the simultaneous interpretation mode according to an embodiment of the present application;
fig. 16 is a schematic structural diagram of a TWS bluetooth headset according to an embodiment of the present application;
fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
To facilitate understanding of the present application, a system architecture and devices to which the present application is applicable will be described first with reference to fig. 1 and 2.
Fig. 1 is a system architecture diagram provided in an embodiment of the present application. Referring to fig. 1, the system includes a terminal 10 and a Bluetooth headset 20. The terminal 10 and the Bluetooth headset 20 are connected via Bluetooth.
In this application, the Bluetooth headset is a headset that supports a Bluetooth communication protocol. The Bluetooth communication protocol may be the BR (basic rate) classic Bluetooth protocol, the EDR (enhanced data rate) classic Bluetooth protocol, or the BLE (Bluetooth low energy) protocol; of course, other new Bluetooth protocol types may be introduced in the future. In terms of protocol version, the Bluetooth communication protocol may be any of the 1.0-series, 2.0-series, 3.0-series, or 4.0-series versions, or other series versions released in the future.
The Bluetooth headset in this embodiment is a TWS Bluetooth headset. The TWS Bluetooth headset includes a first headset and a second headset. For convenience of description, in the embodiments of the present application the first earphone is referred to as the main (primary) earphone and the second earphone as the auxiliary (secondary) earphone. Both the main earphone and the auxiliary earphone contain Bluetooth modules, and data can be transmitted between them over the Bluetooth protocol. A TWS headset has no connecting wire between the main earphone and the auxiliary earphone, which makes it convenient to carry and easy to use.
Both the main earphone and the auxiliary earphone include a microphone in addition to a speaker; that is, both earphones have an audio playback function as well as an audio collection function.
The Bluetooth headset in this application may support one or more of the following applications: the HSP (Headset Profile) application, the HFP (Hands-Free Profile) application, the A2DP (Advanced Audio Distribution Profile) application, and the AVRCP (Audio/Video Remote Control Profile) application.
Among them, the HSP application represents an earphone application, providing basic functions required for communication between a terminal and an earphone. The bluetooth headset may serve as an audio input and output interface of the terminal.
The HFP application is a hands-free application. HFP adds extended functions on top of the HSP application, allowing the Bluetooth headset to control the terminal's call flow, for example answering, hanging up, rejecting a call, and voice dialing.
The A2DP application is an advanced audio distribution application; A2DP uses the chip in the headset to buffer and process audio data, achieving high-definition sound.
The AVRCP application is an audio/video remote control application that defines how features of streaming media are controlled, including pause, stop, start playback, volume control, and other remote control operations.
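For reference, the four profiles described above can be summarized as data; the capability strings below paraphrase this description and are not Bluetooth SIG definitions.

```python
PROFILES = {
    "HSP":   "basic headset use: audio input/output interface for the terminal",
    "HFP":   "extends HSP: answer, hang up, reject a call, voice dialing",
    "A2DP":  "advanced audio distribution: high-definition audio streaming",
    "AVRCP": "remote control of streaming media: pause, stop, start playback, volume",
}

for name, capability in PROFILES.items():
    print(f"{name}: {capability}")
```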
In the present application, the terminal 10 may be any device with computing capability. The terminal may also have audio and video playback and interface display functions. For example, the terminal may be a mobile phone, a computer, a smart TV, a vehicle-mounted device, a wearable device, an industrial device, or the like. The terminal 10 supports the Bluetooth communication protocol.
In the present application, the terminal is an electronic device, and the configuration of the electronic device will be described below with reference to fig. 2.
Fig. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 2, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a Universal Serial Bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, a Subscriber Identity Module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
It is to be understood that the illustrated structure of the embodiment of the present application does not specifically limit the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors.
The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110. If the processor 110 needs to reuse the instruction or data, it can be called directly from the memory. Avoiding repeated accesses reduces the latency of the processor 110, thereby increasing the efficiency of the system.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a Universal Serial Bus (USB) interface, etc.
The I2C interface is a bi-directional synchronous serial bus that includes a serial data line (SDA) and a Serial Clock Line (SCL). In some embodiments, processor 110 may include multiple sets of I2C buses. The processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces, respectively. For example: the processor 110 may be coupled to the touch sensor 180K via an I2C interface, such that the processor 110 and the touch sensor 180K communicate via an I2C bus interface to implement the touch functionality of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, processor 110 may include multiple sets of I2S buses. The processor 110 may be coupled to the audio module 170 via an I2S bus to enable communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may communicate audio signals to the wireless communication module 160 via the I2S interface, enabling answering of calls via a bluetooth headset.
The PCM interface may also be used for audio communication, sampling, quantizing and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled by a PCM bus interface. In some embodiments, the audio module 170 may also transmit audio signals to the wireless communication module 160 through the PCM interface, so as to implement a function of answering a call through a bluetooth headset. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communications. The bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication. In some embodiments, a UART interface is generally used to connect the processor 110 with the wireless communication module 160. For example: the processor 110 communicates with a bluetooth module in the wireless communication module 160 through a UART interface to implement a bluetooth function. In some embodiments, the audio module 170 may transmit the audio signal to the wireless communication module 160 through a UART interface, so as to realize the function of playing music through a bluetooth headset.
MIPI interfaces may be used to connect processor 110 with peripheral devices such as display 194, camera 193, and the like. The MIPI interface includes a Camera Serial Interface (CSI), a Display Serial Interface (DSI), and the like. In some embodiments, processor 110 and camera 193 communicate through a CSI interface to implement the capture functionality of electronic device 100. Processor 110 and display 194 communicate via the DSI interface to implement display functions of electronic device 100.
It should be understood that the interface connection relationship between the modules illustrated in the embodiments of the present application is only an illustration, and does not limit the structure of the electronic device 100. In other embodiments of the present application, the electronic device 100 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The mobile communication module 150 may provide a solution including 2G/3G/4G/5G wireless communication applied to the electronic device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive the electromagnetic wave from the antenna 1, filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.) or displays an image or video through the display 194. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may provide a solution for wireless communication applied to the electronic device 100, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), bluetooth (bluetooth, BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), Infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, perform frequency modulation and amplification on the signal, and convert the signal into electromagnetic waves through the antenna 2 to radiate the electromagnetic waves.
In some embodiments, antenna 1 of electronic device 100 is coupled to mobile communication module 150 and antenna 2 is coupled to wireless communication module 160 so that electronic device 100 can communicate with networks and other devices through wireless communication techniques. The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc. The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The electronic device 100 implements display functions via the GPU, the display 194, and the application processor. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
The display 194 is used to display images, video, and the like. The display 194 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N displays 194, N being a positive integer greater than 1.
The electronic device 100 may implement a photographing function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and the like.
The ISP is used to process the data fed back by the camera 193. For example, when a photo is taken, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing and converting into an image visible to naked eyes. The ISP can also carry out algorithm optimization on the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in camera 193.
The digital signal processor is used for processing digital signals, and can process digital image signals and other digital signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform fourier transform or the like on the frequency bin energy.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs. In this way, the electronic device 100 may play or record video in a variety of encoding formats, such as: moving Picture Experts Group (MPEG) 1, MPEG2, MPEG3, MPEG4, and the like.
The NPU is a neural-network (NN) computing processor that processes input information quickly by using a biological neural network structure, for example, by using a transfer mode between neurons of a human brain, and can also learn by itself continuously. Applications such as intelligent recognition of the electronic device 100 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to extend the memory capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. For example, files such as music, video, etc. are saved in an external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system and an application program required by at least one function (such as a sound playing function or an image playing function), and the like. The data storage area may store data (such as audio data, a phone book, etc.) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, universal flash storage (UFS), and the like. The processor 110 executes various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The electronic device 100 may implement audio functions via the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headphone interface 170D, and the application processor. Such as music playing, recording, etc.
The audio module 170 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
For convenience of understanding, the process of establishing a service connection between the terminal and the bluetooth headset is described below by taking the terminal as a mobile phone as an example. The process of establishing the service connection between the terminal and the Bluetooth headset comprises three stages, namely a scanning stage, a pairing stage and a service connection establishing stage. The following is a detailed description with reference to fig. 3A to 3E.
Fig. 3A to fig. 3E are schematic diagrams of a terminal interface in an application scenario provided in the embodiment of the present application. Illustratively, as shown in fig. 3A, selecting "Bluetooth" in the terminal settings interface opens the Bluetooth settings interface shown in fig. 3B. Referring to fig. 3B, when the terminal receives an instruction corresponding to the user operating the Bluetooth switch, the terminal turns on the Bluetooth function. In some scenarios, after the Bluetooth function is turned on, the terminal can discover nearby Bluetooth devices that can be paired and displays the scanned devices in the "available devices" list. Illustratively, fig. 3B shows the current terminal device HUAWEI P30 having scanned the Bluetooth device HUAWEI Mate20 and Bluetooth headset 1. This phase is called the scanning phase.
In some scenarios, when the terminal detects that the user clicks a bluetooth device in the "available device" list, the terminal will pair with the bluetooth device. For example, as shown in fig. 3C, when the terminal detects that the user clicks "bluetooth headset 1" in the "available device" list, the terminal pairs with the bluetooth headset 1, and if the pairing is successful, the "bluetooth headset 1" is displayed in the "paired device" list, as shown in fig. 3D. This phase is called the pairing phase.
In some scenarios, if the terminal has previously established a service connection with some other Bluetooth device, the Bluetooth device with which the service connection was established may be displayed in the "paired devices" list. Referring to fig. 3D, the "paired devices" list also shows the Bluetooth device HUAWEI FreeBuds, with which a service connection has been established.
In some scenarios, when the terminal detects that the user clicks a certain Bluetooth device in the "paired devices" list, the terminal establishes a service connection with that Bluetooth device. As shown in fig. 3E, when the terminal detects that the user clicks "Bluetooth headset 1" in the "paired devices" list, the terminal establishes a service connection with Bluetooth headset 1. If the service connection is established successfully, audio data can be transmitted between the terminal and Bluetooth headset 1. This phase is called the service connection establishment phase. The interaction processes between the terminal and the Bluetooth headset described in the following embodiments take place after the terminal and the Bluetooth headset have established a service connection.
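For illustration only, the three stages described above (scanning, pairing, service connection establishment) can be modeled as a simple state machine; the state and event names are hypothetical.

```python
def next_state(state, event):
    """Advance through the scanning / pairing / service-connection stages."""
    transitions = {
        ("idle", "bluetooth_enabled"): "scanning",                       # scanning phase
        ("scanning", "user_selects_available_device"): "paired",         # pairing phase
        ("paired", "user_selects_paired_device"): "service_connected",   # service connection phase
    }
    return transitions.get((state, event), state)

state = "idle"
for event in ["bluetooth_enabled", "user_selects_available_device", "user_selects_paired_device"]:
    state = next_state(state, event)
    print(event, "->", state)
```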
It should be noted that the schematic diagrams of the terminal interfaces illustrated in fig. 3A to 3E are only an example. The setting interfaces and operation modes corresponding to different terminal devices may be different.
After the terminal is connected with the Bluetooth headset, audio data can be transmitted between them. For a TWS Bluetooth headset, the terminal establishes a Bluetooth connection only with the main earphone; no Bluetooth connection between the terminal and the auxiliary earphone is required. The main earphone and the auxiliary earphone communicate by data forwarding. Illustratively, the terminal sends audio data to the main earphone, and the main earphone forwards the audio data to the auxiliary earphone after receiving it, so that the main earphone and the auxiliary earphone produce sound synchronously.
The bluetooth physical link between the terminal and the TWS bluetooth headset is divided into two types, one is an Asynchronous Connection Less (ACL) link, and the other is a Synchronous Connection Oriented (SCO) link.
ACL links are basic connections for bluetooth and are typically used to transmit connection-type negotiation signaling for maintaining a bluetooth connection. ACL links also support unidirectional transmission of audio data. Illustratively, when the terminal sends audio data to the master earpiece over the ACL link, the master earpiece cannot send audio data to the terminal at the same time.
The SCO link is a connection technology supported by the bluetooth baseband, and uses reserved time slots to transmit data. The SCO link supports bi-directional transfer of audio data. Illustratively, when the terminal sends audio data to the master earphone over the SCO link, the master earphone may also send audio data to the terminal over the SCO link.
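As an informal illustration of the distinction drawn above (not part of the Bluetooth specification text; field names and values reflect only the behavior stated in this document), the two link types can be modeled by their transfer properties:

```python
from dataclasses import dataclass

# Informal model of the two Bluetooth physical link types described above.
@dataclass(frozen=True)
class LinkType:
    name: str
    bidirectional_audio: bool   # can both ends send audio at the same time
    reserved_slots: bool        # uses reserved time slots on the baseband

ACL = LinkType(name="ACL", bidirectional_audio=False, reserved_slots=False)
SCO = LinkType(name="SCO", bidirectional_audio=True, reserved_slots=True)
```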
Current TWS bluetooth headsets typically have the basic functions of listening to music and making and receiving calls. The following describes the communication process between the terminal and the bluetooth headset with reference to two application scenarios: listening to music and making a call.
Fig. 4A is a schematic diagram of a link between a terminal and a bluetooth headset in a music listening scene according to an embodiment of the present application. As shown in fig. 4A, in a music listening scenario, the terminal establishes an ACL link with a primary one of the TWS bluetooth headsets, and an ACL link is also established between the primary and secondary headsets.
Fig. 4B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a music listening scene according to an embodiment of the present application. As shown in fig. 4B, the terminal receives music to be played selected by the user. When the terminal plays the music, the currently played audio data is sent to the main earphone through the ACL link, and the main earphone forwards the received audio data to the auxiliary earphone through the ACL link. The Bluetooth headset controls the main headset and the auxiliary headset to synchronously play the audio data, so that the user can listen to music through the main headset and/or the auxiliary headset.
In practical application, the wearing states of the main earphone and the auxiliary earphone can be detected, and audio data can be transmitted according to the wearing states of the main earphone and the auxiliary earphone. Illustratively, when detecting that the user wears the main earphone and the auxiliary earphone at the same time, the terminal sends the currently played audio data to the main earphone through the ACL link, and the main earphone forwards the received audio data to the auxiliary earphone through the ACL link, so that the main earphone and the auxiliary earphone synchronously sound. When detecting that the user only wears the main earphone, the terminal sends the currently played audio data to the main earphone through the ACL link, and the main earphone does not need to forward the audio data to the auxiliary earphone. When detecting that the user only wears the auxiliary earphone, the roles of the main earphone and the auxiliary earphone can be switched, namely the main earphone is switched into the auxiliary earphone, and the auxiliary earphone is switched into the main earphone. Then, the terminal sends the currently played audio data to the main earphone through the ACL link, and the main earphone does not need to forward the audio data to the auxiliary earphone.
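As a rough sketch of this routing logic (the device objects, methods such as `is_worn()` and `send_over_acl()`, and the frame-by-frame model are assumptions made for illustration only, not APIs defined by the patent):

```python
def route_media_audio(terminal, primary, secondary):
    """Route currently played media audio according to the wearing states
    described above; returns the possibly swapped (primary, secondary) roles."""
    data = terminal.current_audio_frame()
    if primary.is_worn() and secondary.is_worn():
        terminal.send_over_acl(primary, data)       # first ACL link
        primary.send_over_acl(secondary, data)      # forwarded on the second ACL link
    elif primary.is_worn():
        terminal.send_over_acl(primary, data)       # no forwarding to the secondary earphone
    elif secondary.is_worn():
        primary, secondary = secondary, primary     # role switch: worn earphone becomes primary
        terminal.send_over_acl(primary, data)
    return primary, secondary
```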
The wearing states of the main earphone and the auxiliary earphone can be detected in various ways. For example, a sensor-based wearing detection technique may be adopted: sensors are provided in the main earphone and the auxiliary earphone and are used to acquire wearing state signals. Whether the main earphone or the auxiliary earphone is in a wearing state can then be determined from the wearing state signals collected by these sensors. The sensor may be one or more of an optical proximity sensor, a pressure sensor, a heat sensor, and a moisture sensor.
Fig. 5A is a schematic diagram of a link between a terminal and a bluetooth headset in a call dialing scenario according to an embodiment of the present application. As shown in fig. 5A, in a call dialing scenario, the terminal establishes an SCO link with the main earphone of the TWS bluetooth headset, and an SCO link is also established between the main earphone and the auxiliary earphone.
Fig. 5B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a call dialing scenario according to an embodiment of the present application. As shown in fig. 5B, the user wears the primary and secondary earphones at the same time. When a user carries out voice communication, the microphone of the main earphone carries out voice collection, and the microphone of the auxiliary earphone does not carry out voice collection. The microphone of the main earphone collects first voice data spoken by a user, the collected first voice data is sent to the terminal through the SCO link, and the terminal performs voice processing and transmits the voice data to the other party through a wireless transmission technology. Meanwhile, after receiving second voice data of the other party of the call, the terminal sends the second voice data to the main earphone through the SCO link, and the main earphone forwards the received second voice data to the auxiliary earphone through the SCO link. The Bluetooth headset controls the main headset and the auxiliary headset to synchronously play the second voice data, so that the user can listen to the second voice data of the other party through the main headset and the auxiliary headset.
In the present application, the main earphone refers to the earphone that establishes a connection link with the terminal, and is not limited to being the left earphone or the right earphone. In some scenarios, the main earphone is the left earphone and the auxiliary earphone is the right earphone; in other scenarios, the main earphone is the right earphone and the auxiliary earphone is the left earphone. In still other scenarios, the roles of the main earphone and the auxiliary earphone may be switched. In one possible mode, the switching is performed according to the battery levels of the main earphone and the auxiliary earphone. For example, if the battery level of the current main earphone is lower than a certain threshold and lower than that of the auxiliary earphone, the auxiliary earphone is switched to become the main earphone. In another possible mode, the switching is performed according to the wearing states of the main earphone and the auxiliary earphone. For example, in some application scenarios, if the bluetooth headset detects that the main earphone has left the ear or has been dropped, the main earphone is switched to become the auxiliary earphone, and the auxiliary earphone is switched to become the main earphone.
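The battery-based switching rule can be written out as a small sketch; the threshold value and the `battery_level` attribute are illustrative assumptions, since the patent does not specify them:

```python
LOW_BATTERY_THRESHOLD = 0.2   # placeholder value; the actual threshold is not specified

def maybe_switch_roles_by_battery(primary, secondary):
    """Swap the primary/secondary roles when the current primary earphone is low
    on battery and has less charge than the secondary earphone, as described above."""
    if (primary.battery_level < LOW_BATTERY_THRESHOLD
            and primary.battery_level < secondary.battery_level):
        return secondary, primary    # the secondary earphone becomes the new primary
    return primary, secondary
```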
The current TWS Bluetooth headset usually has only the basic functions of listening to music and making and receiving calls, so its functionality is limited. In order to enrich the application scenarios of the TWS Bluetooth headset and meet users' multi-functional requirements for the TWS Bluetooth headset, the embodiment of the application provides a Bluetooth communication method.
In the application, the microphones of the main earphone and the auxiliary earphone of the TWS Bluetooth earphone respectively collect voice data of different users, and the TWS Bluetooth earphone and the terminal are utilized to realize the function of simultaneous interpretation. Illustratively, a user a speaking a first language wears a primary earpiece and a user B speaking a second language wears a secondary earpiece. When the user A and the user B have a conversation, the microphone of the main earphone collects voice data of the first language spoken by the user A and sends the voice data to the terminal, and the terminal translates the voice data into the second language and then transmits the second language to the auxiliary earphone. And the microphone of the auxiliary earphone collects the voice data of the second language spoken by the user B and sends the voice data to the terminal, and the terminal translates the voice data into the first language and then transmits the first language to the main earphone. Thus, an unobstructed conversation between user a and user B is achieved.
The technical means shown in the present application will be described in detail below with reference to specific examples. It should be noted that the following embodiments may exist alone or in combination with each other, and description of the same or similar contents is not repeated in different embodiments.
Fig. 6 is a flowchart illustrating a bluetooth communication method according to an embodiment of the present application. As shown in fig. 6, the method includes:
S601: A first ACL link is established between the terminal and the main earphone, and a second ACL link is established between the main earphone and the auxiliary earphone.
S602: The terminal sends first audio data to the main earphone through the first ACL link, and the main earphone forwards the first audio data to the auxiliary earphone through the second ACL link.
In this embodiment, in S601 and S602, the TWS bluetooth headset is in a music listening scene. Illustratively, the schematic diagram of the links between the terminal, the main headset and the sub-headset is shown in fig. 4A, that is, ACL links are established between the terminal and the main headset and between the main headset and the sub-headset. For convenience of description, in the present embodiment, an ACL link between the terminal and the main headset is referred to as a first ACL link, and an ACL link between the main headset and the sub-headset is referred to as a second ACL link.
The audio transmission process among the terminal, the main earphone and the auxiliary earphone is shown in fig. 4B. That is, the terminal sends the first audio data to the main earphone through the first ACL link, and the main earphone forwards the received first audio data to the auxiliary earphone through the second ACL link. The first audio data is media audio data, including but not limited to: music, audio data in a movie or television program, recorded audio data, and the like.
S603: the terminal receives a first scene change instruction.
Wherein the first scene change instruction is used for indicating that the application scene of the TWS Bluetooth headset is changed. In this application, the application scenarios of the bluetooth headset include but are not limited to: listening to music scenes, calling scenes, simultaneous interpretation scenes and double-person conversation scenes. Fig. 4A shows a link between the terminal and the bluetooth headset in a music listening scene, and fig. 4B shows a transmission process of audio data. The link between the terminal and the bluetooth headset in the call-making scenario is shown in fig. 5A, and the transmission process of the audio data is shown in fig. 5B.
In different application scenes, the wearing mode and the working mode of the Bluetooth headset are different. The wearing mode and the working mode in several application scenarios are respectively described below.
In the music listening scene and the call scene, the main earphone and the auxiliary earphone are worn by the same person. In the music listening scene, the main earphone and the auxiliary earphone only play audio, and they play the same audio data; the microphones of the main earphone and the auxiliary earphone do not perform audio acquisition.
In a calling scene, the main earphone can perform audio playing and audio acquisition, and the auxiliary earphone only performs audio playing and does not perform audio acquisition. Also, in the call placement scenario, the audio data played by the primary and secondary earpieces is the same.
In the simultaneous interpretation scenario, the primary and secondary earpieces correspond to different wearers. The main earphone and the auxiliary earphone can simultaneously play audio and collect audio, and the audio data played by the main earphone and the auxiliary earphone is different, and the audio data collected by the main earphone and the auxiliary earphone is different.
In a double-person conversation scene, the corresponding wearers of the main earphone and the auxiliary earphone are different. The main earphone and the auxiliary earphone can simultaneously play audio and collect audio, the audio data collected by the main earphone and the auxiliary earphone are different, and the audio data played by the main earphone and the auxiliary earphone are the same.
In the embodiment of the application, the terminal receives the first scene change instruction, and the first scene change instruction may be directly input to the terminal by the user, or may be input to the TWS bluetooth headset by the user and forwarded to the terminal by the TWS bluetooth headset.
In some embodiments, the TWS bluetooth headset receives a first operation input by a user to the TWS bluetooth headset, and in response to the first operation, the TWS bluetooth headset transmits a first scene change instruction to the terminal. For example, when the user needs to change the application scene of the bluetooth headset, the first operation may be input to the bluetooth headset. The user may input the first operation to the bluetooth headset in various ways. In one possible embodiment, a mode switching button is provided on the body of the bluetooth headset, and the user can operate the mode switching button, for example, the user can click the mode switching button, or double click the mode switching button, or touch the mode switching button to input the first operation to the bluetooth headset. In another possible implementation, the bluetooth headset may recognize a preset voice command, and the user may input the preset voice command to the bluetooth headset to implement scene change. For example, when a user needs to apply the bluetooth headset to a music listening scene, the user may input a voice "enter a music listening mode" to the bluetooth headset. When a user needs to apply the Bluetooth headset to a simultaneous interpretation scene, the user can input voice to the Bluetooth headset to enter a simultaneous interpretation mode.
In other embodiments, the terminal receives a first scene change instruction input by a user. Illustratively, the user may input the first scene change instruction to the terminal in various ways, similar to the way the user inputs the first scene change instruction to the TWS bluetooth headset. For example: in a terminal interface, a first scene change instruction is input to a terminal in a mode of clicking a mode switching control, double-clicking the mode switching control, touching the mode switching control, sliding the mode switching control and the like. The first scene change instruction may also be input to the terminal by means of voice interaction, for example, inputting a voice "enter simultaneous interpretation mode" to the terminal.
In some embodiments, sensors are disposed in the main earphone and the auxiliary earphone to collect their wearing state signals. The sensor may be one or more of an optical proximity sensor, a pressure sensor, a thermal sensor, and a moisture sensor. Whether the current application scene should be switched can be determined intelligently from the wearing state signal acquired by the sensor of the main earphone and the wearing state signal acquired by the sensor of the auxiliary earphone. Taking a thermal sensor as an example, the wearing state signal it acquires indicates the body temperature of the wearer. In a music listening scene or a call scene, the difference between the two readings is either small (the same user wears both earphones) or large (the user wears only the main earphone and the auxiliary earphone is not worn). For example, the difference between the body temperature indicated by the wearing state signal of the main earphone and that indicated by the wearing state signal of the auxiliary earphone is smaller than a first threshold when the same user wears both earphones, and larger than a second threshold when only the main earphone is worn. When the difference between the body temperature indicated by the wearing state signal of the main earphone and that indicated by the wearing state signal of the auxiliary earphone lies between the first threshold and the second threshold, it can be determined that the main earphone and the auxiliary earphone are worn by different people, i.e. the application scenario of the TWS bluetooth headset has changed.
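Read literally, this two-threshold rule amounts to the following check (a sketch under the assumption that the wearing state signal is a body-temperature reading; the threshold values are placeholders, not taken from the patent):

```python
FIRST_THRESHOLD = 0.5    # placeholder, in degrees Celsius: below this, likely the same wearer
SECOND_THRESHOLD = 5.0   # placeholder: above this, one earphone is likely not being worn

def wearers_differ(primary_temperature, secondary_temperature):
    """Return True when the body-temperature difference between the two earphones
    falls between the two thresholds, i.e. the earphones appear to be worn by
    different people and the application scenario has changed."""
    difference = abs(primary_temperature - secondary_temperature)
    return FIRST_THRESHOLD < difference < SECOND_THRESHOLD
```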
S604: In response to the first scene change instruction, a first SCO link is established between the terminal and the main earphone, and a second SCO link is established between the terminal and the auxiliary earphone.
S605: Second audio data is transmitted between the terminal and the main earphone through the first SCO link, and second audio data is transmitted between the terminal and the auxiliary earphone through the second SCO link.
In the present application, the first scene change instruction may be an AT command. An AT command is a control command used in the bluetooth communication protocol. For example, an existing AT command may be multiplexed, with an information field in that command set to a value indicating the changed application scenario; alternatively, a new AT command may be defined to indicate the change of application scenario.
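For instance, the changed scenario could be carried in one information field of an AT-style command. The sketch below is purely illustrative: neither the command name "+SCENE" nor the numeric codes come from the patent or from the Bluetooth HFP specification.

```python
# Hypothetical vendor-specific AT command carrying the changed application scenario.
SCENE_CODES = {"music": 0, "call": 1, "interpretation": 2, "two_person_call": 3}

def build_scene_change_command(scene: str) -> bytes:
    """Encode a first scene change instruction as an AT-style command string."""
    return f"AT+SCENE={SCENE_CODES[scene]}\r".encode("ascii")

# build_scene_change_command("interpretation") -> b'AT+SCENE=2\r'
```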
Assume that the first scene change instruction is used to indicate that the application scene of the TWS bluetooth headset changes to a simultaneous interpretation scene. In the present application, in the simultaneous interpretation scenario, the links established between the terminal and the bluetooth headset differ from those established in the call dialing scenario shown in fig. 5A. In the simultaneous interpretation scenario, SCO links are established between the terminal and the main earphone and between the terminal and the auxiliary earphone. For convenience of description, in the embodiment of the present application, the SCO link between the terminal and the main earphone is referred to as the first SCO link, and the SCO link between the terminal and the auxiliary earphone is referred to as the second SCO link.
After the first SCO link and the second SCO link are established, second audio data are transmitted between the terminal and the main earphone through the first SCO link, and second audio data are transmitted between the terminal and the auxiliary earphone through the second SCO link. The second audio data is voice call data. For example: the voice data can be the voice data in the voice call process of the user, and can also be the voice data in the video call process of the user.
The first SCO link and the second SCO link are both bidirectional links and can bidirectionally transmit voice data. For example, when the terminal sends voice data to the master earphone through the first SCO link, the master earphone may also send the voice data to the terminal through the first SCO link at the same time. When the terminal sends the voice data to the auxiliary earphone through the second SCO link, the auxiliary earphone can also send the voice data to the terminal through the second SCO link. The first SCO link and the second SCO link are independent and do not interfere with each other.
Therefore, the bluetooth communication method in the embodiment of the application can realize the switching of the application scenes of the TWS bluetooth headset in response to the first scene change instruction. The following describes a change of a link after switching from a listening music scene to a simultaneous interpretation scene, and a change of data transmission, with reference to fig. 7A and 7B.
Fig. 7A is a schematic diagram of a bluetooth headset switched from a music listening scene to a simultaneous interpretation scene according to an embodiment of the present application. As shown in fig. 7A, in a music listening scenario, a first ACL link is provided between the terminal and the main headset, and a second ACL link is provided between the main headset and the sub-headset. After the terminal receives the first scene change instruction, a first SCO link is established between the terminal and the main earphone, and a second SCO link is established between the terminal and the auxiliary earphone. Therefore, in the simultaneous interpretation scenario, two links exist between the terminal and the master earphone, which are the first ACL link and the first SCO link, respectively. And a second SCO link is arranged between the terminal and the auxiliary earphone. And a second ACL link is still formed between the main earphone and the auxiliary earphone.
In the simultaneous interpretation scene, the terminal has an interpretation function. After the terminal establishes SCO links with the main earphone and the auxiliary earphone, the user can use the Bluetooth earphone and the terminal to perform simultaneous interpretation. Specifically, the primary earphone collects second audio data of a first language, sends the second audio data of the first language to the terminal through the first SCO link, the terminal translates the second audio data of the first language into second audio data of a second language, sends the translated second audio data of the second language to the secondary earphone through the second SCO link, and the secondary earphone plays the second audio data of the second language. Meanwhile, the auxiliary earphone collects second audio data of a second language and sends the second audio data of the second language to the terminal through the second SCO link, the terminal translates the second audio data of the second language into second audio data of a first language and sends the translated second audio data of the first language to the main earphone through the first SCO link, and the main earphone plays the second audio data of the first language.
Fig. 7B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a simultaneous interpretation scene provided in the embodiment of the present application. As shown in fig. 7B, user a wears the primary earphone and user B wears the secondary earphone. As shown by the solid line in fig. 7B, after being collected by the primary earphone, the voice data in the first language spoken by the user a is transmitted to the terminal through the first SCO link, the terminal translates the voice data into the second language, and transmits the translated voice data in the second language to the secondary earphone through the second SCO link, so that the user B can hear the voice data in the second language. Meanwhile, as shown by the dotted line in fig. 7B, after the voice data of the second language spoken by the user B is collected by the sub-headset, the voice data is transmitted to the terminal through the second SCO link, the terminal translates the voice data into the first language, and transmits the translated voice data of the first language to the main headset through the first SCO link, so that the user a can hear the voice data of the first language. Through the process, the simultaneous interpretation process of the user A and the user B is realized.
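The two translation paths in fig. 7B are symmetric, which the following terminal-side sketch makes explicit (the link objects, `receive()`/`send()` and `translate()` are illustrative stand-ins, not APIs defined by the patent):

```python
def interpretation_pass(first_sco, second_sco, translate):
    """One pass of the terminal-side simultaneous interpretation loop:
    speech arriving on each SCO link is translated and sent out on the other.
    `translate(audio, src, dst)` stands in for the actual translation engine."""
    speech_a = first_sco.receive()        # first-language speech from the main earphone
    if speech_a is not None:
        second_sco.send(translate(speech_a, src="first_language", dst="second_language"))

    speech_b = second_sco.receive()       # second-language speech from the auxiliary earphone
    if speech_b is not None:
        first_sco.send(translate(speech_b, src="second_language", dst="first_language"))
```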
In the bluetooth communication method provided in this embodiment, a first ACL link is established between the terminal and the main earphone, and a second ACL link is established between the main earphone and the auxiliary earphone; in response to a first scene change instruction received by the terminal, a first SCO link is established between the terminal and the main earphone, and a second SCO link is established between the terminal and the auxiliary earphone. The terminal can thus exchange voice data with the main earphone over the first SCO link and with the auxiliary earphone over the second SCO link, so that the main earphone and the auxiliary earphone each collect the voice data of a different user and transmit it to the terminal over separate SCO links for translation. This realizes the simultaneous interpretation process, enriches the application scenarios of the bluetooth headset, and meets users' demand for a multi-functional bluetooth headset.
Fig. 7A and 7B are schematic diagrams illustrating switching from a listening music scene to a simultaneous interpretation scene, and in practical applications, other scene switching situations may exist, for example: the single-person conversation scenario is switched to the double-person conversation scenario, which is described below with reference to fig. 8A and 8B.
Fig. 8A is a schematic diagram of a bluetooth headset switched from a single-person call scenario to a double-person call scenario according to an embodiment of the present application. As shown in fig. 8A, before the switch the bluetooth headset is in single-person call mode. For example, user A wears both the main earphone and the auxiliary earphone and is in a voice call with user C. During the call, user A decides to switch to a double-person call scenario; assume that user A wants user B, who is nearby, to join the call. Illustratively, user A hands the auxiliary earphone to user B to wear, and inputs a first scene change instruction to the main earphone or to the terminal. That is, after switching to the double-person call scenario, user A wears the main earphone, user B wears the auxiliary earphone, and user A and user B use the same terminal to hold a voice call with user C.
As shown in fig. 8A, in the single-person call scenario, a first SCO link exists between the terminal and the main earphone, and a third SCO link exists between the main earphone and the auxiliary earphone. In this scenario, only the microphone of the main earphone collects audio data. The transmission process of the audio data is similar to that of fig. 5B: the microphone of the main earphone collects the voice data of user A and transmits it to the terminal through the first SCO link, and the terminal then transmits the voice data to user C. Meanwhile, the terminal receives the voice data of user C and sends it to the main earphone through the first SCO link, the main earphone forwards it to the auxiliary earphone through the third SCO link, the main earphone and the auxiliary earphone sound synchronously, and user A hears the speech of user C through the main earphone and the auxiliary earphone.
With reference to fig. 8A, in the embodiment of the present application, after receiving a first scene change instruction input by a user, the terminal and the sub-headset establish a second SCO link. That is to say, after entering the double-person conversation scene, the first SCO link is still set between the terminal and the main earphone, the third SCO link is still set between the main earphone and the auxiliary earphone, and the second SCO link is newly added between the terminal and the auxiliary earphone.
Fig. 8B is a schematic diagram of data transmission between a terminal and a bluetooth headset in a double-person call scenario provided in the embodiment of the present application. As shown in fig. 8B, in a double-person call scenario, the microphones of both the main earphone and the auxiliary earphone collect audio data. Specifically, when user A speaks, the main earphone collects the audio data of user A and sends it to the terminal through the first SCO link, so that the terminal sends the audio data of user A to user C. When user B speaks, the auxiliary earphone collects the audio data of user B and sends it to the terminal through the second SCO link, so that the terminal sends the audio data of user B to user C. Of course, in some scenarios, user A and user B may also speak simultaneously; in that case, the terminal mixes the audio data of user A received through the first SCO link with the audio data of user B received through the second SCO link, and then sends the mixed audio data to user C.
In a double-person conversation scene, after receiving the voice data of the user C, the terminal sends the voice data to the main earphone through the first SCO link and sends the voice data to the auxiliary earphone through the second SCO link, so that the user A and the user B can both receive and hear the speaking content of the user C. Of course, in some scenarios, after receiving the voice data from the user C, the terminal may also send the voice data to the primary earphone through the first SCO link only, and the primary earphone forwards the audio data to the secondary earphone through the third SCO link.
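When both local users speak at the same time, the terminal mixes the two uplink streams before handing them to the call. A minimal sketch follows, assuming sample-wise mixing of 16-bit PCM frames; the patent does not specify the mixing algorithm:

```python
def mix_uplink_frames(frame_a, frame_b):
    """Mix one PCM frame from each local user (user A over the first SCO link,
    user B over the second SCO link) before sending the result to user C.
    Frames are lists of signed 16-bit samples."""
    mixed = []
    for sample_a, sample_b in zip(frame_a, frame_b):
        total = sample_a + sample_b
        mixed.append(max(-32768, min(32767, total)))   # clip to the 16-bit range
    return mixed
```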
In the above embodiments, in an implementation where the terminal receives the first scene change instruction from the TWS bluetooth headset, the TWS bluetooth headset transmits the first scene change instruction to the terminal in response to a first operation input by the user. The first operation may be input to the primary earphone by a user wearing the primary earphone, or input to the secondary earphone by a user wearing the secondary earphone. In the following, a detailed description is given in connection with two possible embodiments, respectively.
Fig. 9 is a schematic flowchart of entering a simultaneous interpretation mode according to an embodiment of the present application. As shown in fig. 9, the method includes:
S901: The main earphone receives a first operation input by the user.
Illustratively, the first operation is to instruct an application scenario of the TWS bluetooth headset to change to a simultaneous interpretation scenario.
For example, the user wearing the main headset may operate a mode switching button on the main headset, or the user wearing the main headset may input a preset voice command to the main headset, such as: the user inputs speech "enter into simultaneous interpretation mode" into the microphone of the primary earpiece.
S902: in response to the first operation, the primary earpiece transmits a physical address request message to the secondary earpiece.
S903: The auxiliary earphone sends its physical address to the main earphone.
Illustratively, the primary earpiece may send a physical address request message to the secondary earpiece over the second ACL link. Accordingly, the secondary earpiece may send the physical address of the secondary earpiece to the primary earpiece over the second ACL link.
The physical address request message and the physical address of the sub-headset can be transmitted through an AT command.
S904: The main earphone sends the first scene change instruction and the physical address of the auxiliary earphone to the terminal.
Illustratively, the primary earpiece sends a first scene change instruction to the terminal through the first ACL link, along with the physical address of the secondary earpiece. Correspondingly, the terminal receives a first scene change instruction and a physical address of the auxiliary earphone through the first ACL link.
The first scene change instruction and the physical address of the auxiliary earphone may be sent simultaneously or one after the other. For example, both may be carried in a single AT command; alternatively, the first scene change instruction is sent in a first AT command and the physical address of the auxiliary earphone in a second AT command, or the physical address of the auxiliary earphone is sent in a first AT command and the first scene change instruction in a second AT command.
S905: A first SCO link is established between the terminal and the main earphone, and a second SCO link is established between the terminal and the auxiliary earphone according to the physical address of the auxiliary earphone.
Through the above process, the SCO links are established between the terminal and the main earphone and between the terminal and the auxiliary earphone.
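The exchange of S901 to S905 can be condensed into the following sketch, given only to show the ordering of the steps (object and method names are hypothetical; in practice the messages travel as AT commands over the existing ACL links and the SCO links are set up by the Bluetooth host stack):

```python
def enter_interpretation_mode_from_primary(terminal, primary, secondary):
    """Sequence sketch of S901-S905 in fig. 9, starting after the first
    operation has been received on the main earphone."""
    secondary_address = primary.request_peer_address(secondary)     # S902/S903, second ACL link
    terminal.on_scene_change(primary, secondary_address)            # S904, first ACL link
    first_sco = terminal.establish_sco(primary.address)             # S905
    second_sco = terminal.establish_sco(secondary_address)          # S905, using the received address
    return first_sco, second_sco
```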
Fig. 10 is a schematic flowchart of another process for entering the simultaneous interpretation mode according to the embodiment of the present application. As shown in fig. 10, the method includes:
S1001: The auxiliary earphone receives a first operation input by the user.
Illustratively, the first operation is to instruct an application scenario of the TWS bluetooth headset to change to a simultaneous interpretation scenario.
For example, the user wearing the sub-headset may operate a mode switching button on the sub-headset, or the user wearing the sub-headset may input a preset voice command to the sub-headset, such as: the user inputs the speech "enter the simultaneous interpretation mode" into the microphone of the sub-headset.
S1002: In response to the first operation, the auxiliary earphone sends a first scene change instruction and its physical address to the main earphone.
Illustratively, the secondary earpiece sends the first scene change instruction to the primary earpiece over the second ACL link along with the physical address of the secondary earpiece. Correspondingly, the main earphone receives the first scene change instruction and the physical address of the auxiliary earphone through the second ACL link.
The first scene change instruction and the physical address of the auxiliary earphone sent by the auxiliary earphone to the main earphone may be carried in AT commands, and may be sent simultaneously or one after the other. For example, both may be carried in a single AT command; alternatively, the first scene change instruction is sent in a first AT command and the physical address of the auxiliary earphone in a second AT command, or vice versa.
S1003: The main earphone forwards the first scene change instruction and the physical address of the auxiliary earphone to the terminal.
Illustratively, the primary earpiece sends a first scene change instruction and a physical address of the secondary earpiece to the terminal through the first ACL link. Correspondingly, the terminal receives the first scene change instruction and the physical address of the auxiliary earphone through the first ACL link.
Again, the first scene change instruction and the physical address of the auxiliary earphone may be sent simultaneously or one after the other: both may be carried in a single AT command, or each may be carried in its own AT command in either order.
S1004: A first SCO link is established between the terminal and the main earphone, and a second SCO link is established between the terminal and the auxiliary earphone according to the physical address of the auxiliary earphone.
Through the above process, the SCO links are established between the terminal and the main earphone and between the terminal and the auxiliary earphone.
In the above embodiments, after the SCO links are established between the terminal and the main earphone and between the terminal and the auxiliary earphone, the terminal can exchange data with the main earphone and the auxiliary earphone concurrently. For example, while the terminal receives audio data sent by the main earphone through the first SCO link, it may also receive audio data sent by the auxiliary earphone through the second SCO link. The terminal can translate the audio data received through the first SCO link as well as the audio data received through the second SCO link. The terminal can send translated audio data to the main earphone through the first SCO link and to the auxiliary earphone through the second SCO link.
In the embodiment of the application, a Bluetooth chip is provided in the terminal, and the Bluetooth chip supports two SCO links simultaneously. The Bluetooth chip communicates with the Bluetooth software protocol stack through a Bluetooth driver layer. The Bluetooth software protocol stack supports management of two SCO links; illustratively, it supports state maintenance for the two SCO links. The Bluetooth software protocol stack also supports protocol processing of the audio data acquired from each SCO link and transmits the processed audio data to the audio processing apparatus. Of course, the Bluetooth software protocol stack also supports receiving audio data from the audio processing apparatus and transmitting the received audio data onto the corresponding SCO link.
Two audio processing apparatuses may be provided in the terminal. The audio processing apparatus supports translation processing of audio data, and each audio processing apparatus corresponds to one SCO link. Illustratively, the audio processing apparatus 1 is configured to perform translation processing on the audio data transmitted over the first SCO link, and the audio processing apparatus 2 is configured to perform translation processing on the audio data transmitted over the second SCO link.
In one possible implementation, each audio processing device includes: a receiver, a translator, and a sender. Illustratively, for the audio processing apparatus 1, the receiver is configured to receive audio data in a first language from a first SCO link, the translator is configured to translate the audio data in the first language to obtain audio data in a second language, and the transmitter is configured to transmit the translated audio data in the second language to a second SCO link. For the audio processing apparatus 2, the receiver is configured to receive the audio data in the second language from the second SCO link, the translator is configured to translate the audio data in the second language to obtain the audio data in the first language, and the transmitter is configured to transmit the translated audio data in the first language to the first SCO link.
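A minimal sketch of this receiver/translator/sender decomposition follows; the class layout, method names, and the language codes in the usage comment are assumptions for illustration, and `translate` stands in for the real translation engine:

```python
class AudioProcessingApparatus:
    """One per SCO link: a receiver bound to one SCO link, a translator,
    and a sender bound to the other SCO link."""

    def __init__(self, rx_link, tx_link, src_lang, dst_lang, translate):
        self.rx_link = rx_link          # SCO link this apparatus receives from
        self.tx_link = tx_link          # SCO link the translated speech is sent on
        self.src_lang = src_lang
        self.dst_lang = dst_lang
        self.translate = translate

    def process_once(self):
        audio = self.rx_link.receive()                                        # receiver
        if audio is not None:
            translated = self.translate(audio, self.src_lang, self.dst_lang)  # translator
            self.tx_link.send(translated)                                     # sender

# Apparatus 1 serves the first SCO link, apparatus 2 the second SCO link:
# apparatus_1 = AudioProcessingApparatus(first_sco, second_sco, "zh", "en", translate)
# apparatus_2 = AudioProcessingApparatus(second_sco, first_sco, "en", "zh", translate)
```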
By providing two audio processing apparatuses in the terminal, both SCO links can transmit data bidirectionally at the same time, and the two SCO links are independent of and do not affect each other, so that two users speaking different languages can communicate in real time. The following is described in detail with reference to a specific use scenario.
Fig. 11 is a schematic diagram of data transmission between a terminal and a bluetooth headset in a simultaneous interpretation scenario provided in the embodiment of the present application. Suppose user a uses chinese and user B uses english. The user A wears the main earphone, and the user B wears the auxiliary earphone.
As shown in fig. 11, the chinese speech spoken by user a is picked up by the microphone of the main headset and transmitted by the main headset to the receiver 1 of the terminal through the first SCO link. After receiving the Chinese speech, the terminal translates the Chinese speech into English speech through the translator 1, and then the transmitter 1 transmits the translated English speech to the auxiliary earphone through the second SCO link. Thus, the user B wearing the sub-headset hears the content of the speech of the user a.
The english speech spoken by user B is picked up by the microphone of the sub-headset and transmitted by the sub-headset via the second SCO link to the receiver 2 of the terminal. After receiving the English speech, the terminal translates the English speech into Chinese speech through the translator 2, and then the transmitter 2 transmits the translated Chinese speech to the main earphone through the first SCO link. Thus, the user a wearing the main earphone hears the content of the speech of the user B.
In a possible implementation manner, the translator may also be disposed in the cloud server. And after receiving the voice data to be translated, the terminal sends the voice data to be translated to the cloud server. And the cloud server returns the translated voice data to the terminal.
When two users hold a conversation using a traditional translation device such as a translation stick, they usually cannot talk in real time. Specifically, when user A speaks, user A has to select the first target language on the translation stick and press its recording button to record. After the recording is finished, the translation stick translates the recorded data into the first target language and plays it, so that user B learns what user A meant by listening to the translated speech. Similarly, when user B speaks, user B has to select the second target language on the translation stick and press the recording button to record. After the recording is finished, the translation stick translates the recorded data into the second target language and plays it, so that user A learns what user B meant by listening to the translated speech. In this process, user A and user B have to take turns using the translation stick to record and play, which is cumbersome and prevents a real-time conversation.
In the embodiment of the application, SCO links are respectively established between the terminal and the main earphone and between the terminal and the auxiliary earphone. The two SCO links and the two translation paths are independent of each other and do not interfere with each other, and both SCO links support bidirectional transmission of voice data, so that two users speaking different languages can communicate in real time without barriers, achieving the effect of simultaneous interpretation.
In the embodiment of the application, the terminal can also receive translation configuration information input by a user. The translation configuration information indicates the capture language and/or the playback language of the primary and secondary earpieces. Illustratively, the user can reasonably set the translation configuration information according to the conversation scene and the language used by the conversation person, so that the terminal can perform translation processing according to the translation configuration information. In the following, a description is given in connection with two possible embodiments.
In one possible implementation, the terminal receives the collection language of the main earphone and the collection language of the auxiliary earphone input by the user. Fig. 12 is a schematic diagram of a translation configuration interface of a terminal according to an embodiment of the present application. As shown in fig. 12, the user sets the collection language of the main earphone to Chinese and the collection language of the auxiliary earphone to English. After this configuration, the terminal automatically takes Chinese as the playing language of the main earphone and English as the playing language of the auxiliary earphone. That is, the terminal translates the Chinese collected by the main earphone into English, which is played by the auxiliary earphone, and translates the English collected by the auxiliary earphone into Chinese, which is played by the main earphone.
In another possible implementation, the terminal receives the playing language of the main earphone and the playing language of the auxiliary earphone input by the user. Fig. 13 is a schematic diagram of a translation configuration interface of a terminal according to an embodiment of the present application. As shown in fig. 13, the user sets the playing language of the main earphone to Chinese and the playing language of the auxiliary earphone to English. After this configuration, the terminal automatically takes Chinese as the collection language of the main earphone and English as the collection language of the auxiliary earphone. That is, the terminal translates the Chinese collected by the main earphone into English, which is played by the auxiliary earphone, and translates the English collected by the auxiliary earphone into Chinese, which is played by the main earphone.
The terminal can be flexibly applied to various application scenes by performing translation processing according to the translation configuration information set by the user.
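Both configuration modes determine the same mapping, since fixing one side of an earphone's language fixes the other; a sketch with illustrative field names, not defined by the patent:

```python
def derive_translation_config(primary_language, secondary_language):
    """Given the language the user assigned to each earphone, either as its
    collection language (fig. 12) or as its playing language (fig. 13), derive
    the full configuration. Both modes reduce to the same mapping because each
    wearer speaks and hears the same language."""
    return {
        "primary":   {"collect": primary_language,   "play": primary_language},
        "secondary": {"collect": secondary_language, "play": secondary_language},
    }

# derive_translation_config("zh", "en") -> Chinese on the main earphone,
# English on the auxiliary earphone, whichever configuration interface was used.
```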
Certainly, the terminal can also automatically recognize the languages picked up by the main earphone and the auxiliary earphone, thereby performing intelligent translation processing, simplifying user operation and improving user experience. For example, assume that user A wears the main earphone and user B wears the auxiliary earphone. Before user A and user B hold a formal conversation, they may each first input a test voice to the terminal. The terminal determines the collection language of the main earphone from the test voice input by user A, and determines the collection language of the auxiliary earphone from the test voice input by user B. For example, the test voice input by user A is spoken in Chinese, and the test voice input by user B is "I am ok"; the terminal may then determine that the collection language of the main earphone is Chinese and the collection language of the auxiliary earphone is English. Further, the terminal automatically translates the Chinese collected by the main earphone into English, which is played by the auxiliary earphone, and translates the English collected by the auxiliary earphone into Chinese, which is played by the main earphone.
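The automatic recognition step might look as follows; the language-identification function is an assumed placeholder, not something the patent names:

```python
def detect_collection_languages(primary_test_audio, secondary_test_audio, identify_language):
    """Infer each earphone's collection language from a short test utterance.
    `identify_language` is a placeholder for whatever spoken-language
    identification the terminal (or a cloud service) actually performs."""
    primary_language = identify_language(primary_test_audio)      # e.g. "zh" for user A
    secondary_language = identify_language(secondary_test_audio)  # e.g. "en" for user B
    return primary_language, secondary_language   # then configured as in the sketch above
```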
In the embodiment of the application, after the translation is finished, the user can exit the simultaneous interpretation mode through the operation terminal or the TWS Bluetooth headset. This will be described with reference to fig. 14.
Fig. 14 is a flowchart illustrating a bluetooth communication method according to an embodiment of the present application. In the embodiment, the SCO link is established between the terminal and both the primary earphone and the secondary earphone. Namely, a first SCO link is established between the terminal and the primary earphone, and a second SCO link is established between the terminal and the secondary earphone.
As shown in fig. 14, the method includes:
S1401: The terminal receives a second scene change instruction.
Wherein the second scene change instruction is used for indicating that the application scene of the TWS Bluetooth headset changes into a non-simultaneous interpretation scene.
In some embodiments, the TWS bluetooth headset receives a second operation input by the user to the TWS bluetooth headset, and in response to the second operation, the TWS bluetooth headset transmits a second scene change instruction to the terminal. Illustratively, when the user needs to end the simultaneous interpretation, the second operation may be input to the bluetooth headset in various ways. In one possible embodiment, the user may operate a mode switch button. Illustratively, the user may click the mode switch button, or double click the mode switch button, or touch the mode switch button. In another possible embodiment, the user may input a preset voice command to the bluetooth headset, for example, the user inputs a voice "exit the simultaneous interpretation mode" to the bluetooth headset.
In this embodiment, the second operation received by the TWS bluetooth headset may be input into the primary headset by a user wearing the primary headset, or may be input into the secondary headset by a user wearing the secondary headset.
In one possible scenario, the primary earphone receives a second operation input by the user, and in response to the second operation, the primary earphone sends a second scene change instruction to the terminal through the first ACL link. Correspondingly, the terminal receives the second scene change instruction through the first ACL link.
In another possible scenario, the secondary earphone receives a second operation input by the user, and in response to the second operation, the secondary earphone sends a second scene change instruction to the primary earphone through the second ACL link. And the main earphone forwards the received second scene change instruction to the terminal through the first ACL link.
In this application, the second scene change instruction may be an AT command. For example, an existing AT command may be multiplexed, with an information field in that command set to a value indicating exit from the simultaneous interpretation mode; alternatively, a new AT command may be defined to indicate exit from the simultaneous interpretation mode.
In other embodiments, the terminal receives a second scene change instruction input by the user. The user may input the second scene change instruction to the terminal in various ways, similar to the way the user inputs the first scene change instruction to the TWS bluetooth headset. For example: and inputting a second scene change instruction to the terminal in a terminal interface by clicking a mode switching control, double-clicking a mode switching control, touching the mode switching control, sliding the mode switching control and the like. The second scene change instruction may also be input to the terminal by means of voice interaction, for example, inputting a voice "exit the simultaneous interpretation mode" to the terminal.
S1402: In response to the second scene change instruction, the first SCO link between the terminal and the main earphone is disconnected, and the second SCO link between the terminal and the auxiliary earphone is disconnected.
After the terminal receives the second scene change instruction, the first SCO link between the terminal and the main earphone is disconnected, and the second SCO link between the terminal and the auxiliary earphone is disconnected. Fig. 15 is a schematic diagram of the links between the terminal and the bluetooth headset after exiting the simultaneous interpretation mode according to an embodiment of the present application. As shown in fig. 15, in the simultaneous interpretation mode, two links exist between the terminal and the main earphone, namely the first ACL link and the first SCO link; a second SCO link exists between the terminal and the auxiliary earphone; and a second ACL link exists between the main earphone and the auxiliary earphone. After the terminal receives the second scene change instruction, it disconnects the first SCO link with the main earphone and the second SCO link with the auxiliary earphone. Therefore, after the simultaneous interpretation mode is exited, the first ACL link remains between the terminal and the main earphone, there is no connection link between the terminal and the auxiliary earphone, and the second ACL link still exists between the main earphone and the auxiliary earphone. That is, after the simultaneous interpretation ends, the links between the terminal and the bluetooth headset are restored to the link state of the music listening scene shown in fig. 4A, which is convenient for the user's subsequent normal use of the bluetooth headset.
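The teardown on receiving the second scene change instruction reduces to dropping the two SCO links; a sketch with illustrative method names:

```python
def exit_interpretation_mode(terminal, first_sco, second_sco):
    """Handle the second scene change instruction: tear down both SCO links while
    keeping the first ACL link (terminal <-> main earphone) and the second ACL
    link (main earphone <-> auxiliary earphone), restoring the fig. 4A topology."""
    terminal.disconnect(first_sco)     # drop the SCO link to the main earphone
    terminal.disconnect(second_sco)    # drop the SCO link to the auxiliary earphone
```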
Fig. 16 is a schematic structural diagram of a TWS bluetooth headset according to an embodiment of the present application. As shown in fig. 16, the TWS bluetooth headset 20 of the present embodiment includes a first headset 21 and a second headset 22. The first headset 21 includes a processor 211, a memory 212, and a computer program stored in the memory 212 and operable on the processor 211. The second headset 22 includes a processor 221, a memory 222, and a computer program stored in the memory 222 and operable on the processor 221. Illustratively, the memory 212 and the processor 211 may communicate via a communication bus, and the processor 211 executes the computer program to implement the technical solution of the first headset 21 in the above embodiments. Illustratively, the memory 222 and the processor 221 may communicate via a communication bus, and the processor 221 executes the computer program to implement the technical solution of the second headset 22 in the above embodiments. The implementation principles and technical effects are similar and are not described here again.
Fig. 17 is a schematic structural diagram of a terminal according to an embodiment of the present application. As shown in fig. 17, the terminal 10 of the present embodiment includes a processor 11, a memory 12, and a computer program stored on the memory 12 and operable on the processor 11. For example, the memory 12 and the processor 11 may communicate through a communication bus 13, and when the processor 11 executes the computer program, the processor 11 executes the technical solution on the terminal side in any of the above method embodiments, which has similar implementation principles and technical effects, and is not described herein again.
An embodiment of the present application provides a storage medium for storing a computer program, which is used to implement a bluetooth communication method on a TWS bluetooth headset side or a bluetooth communication method on a terminal side when the computer program is executed by a computer or a processor.
An embodiment of the present application provides a computer program product, which includes instructions that, when executed, cause a computer to execute the above-mentioned bluetooth communication method on the TWS bluetooth headset side, or implement the bluetooth communication method on the terminal side.
The embodiment of the application provides a chip, the chip can be applied to terminal or TWS bluetooth headset, the chip includes: the terminal comprises at least one communication interface, at least one processor and at least one memory, wherein the communication interface, the memory and the processor are interconnected through a bus, and the processor enables the terminal to execute the Bluetooth communication method or enables the TWS Bluetooth headset to execute the Bluetooth communication method by executing instructions stored in the memory.
In the embodiments of the present application, the processor may be a general-purpose processor, a digital signal processor, an application specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present application. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of a method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware processor, or may be implemented by a combination of hardware and software modules in a processor.
In the embodiments of the present application, the memory may be a nonvolatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), or may be a volatile memory, for example, a random-access memory (RAM). The memory may also be, but is not limited to, any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory in the embodiments of the present application may also be a circuit or any other device capable of implementing a storage function, for storing program instructions and/or data.
The methods provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, a network appliance, a user device, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (e.g., infrared, radio, or microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or a data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a digital video disc (DVD)), or a semiconductor medium (e.g., an SSD).
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (17)

1. A Bluetooth communication method, applied to a Bluetooth communication system comprising a true wireless stereo (TWS) Bluetooth headset and a terminal, the TWS Bluetooth headset comprising a first earphone and a second earphone, the method comprising:
a first synchronous connection-oriented (SCO) link is established between the terminal and the first earphone, and a third SCO link is established between the first earphone and the second earphone;
the first earphone collects first audio data and sends the first audio data to the terminal through the first SCO link;
the terminal sends second audio data to the first earphone through the first SCO link, and the first earphone forwards the second audio data to the second earphone through the third SCO link;
the terminal receives a first scene change instruction and a physical address of the second earphone, wherein the first scene change instruction is used for indicating that an application scene of the TWS Bluetooth headset is changed into a two-person call scene, and the physical address of the second earphone is a physical address sent to the terminal by the first earphone;
in response to the first scene change instruction, a newly added second SCO link is established between the terminal and the second earphone by using the physical address of the second earphone;
the second earphone collects third audio data and sends the third audio data to the terminal through the second SCO link;
and the terminal sends the second audio data to the second earphone through the second SCO link.
2. The method of claim 1, wherein before the terminal receives the first scene change instruction and the physical address of the second headset, the method further comprises:
the first earphone receives a first operation input by a user;
in response to the first operation, the first earphone sends a physical address request message to the second earphone;
the first earphone receives the physical address of the second earphone sent by the second earphone;
and the first earphone sends a first scene change instruction and the physical address of the second earphone to the terminal.
3. The method of claim 1, wherein before the terminal receives the first scene change instruction and the physical address of the second headset, the method further comprises:
the second earphone receives a first operation input by a user;
in response to the first operation, the second headset sends a first scene change instruction and a physical address of the second headset to the first headset;
and the first earphone forwards the first scene change instruction and the physical address of the second earphone to the terminal.
4. The method according to any of claims 1 to 3, wherein after the terminal establishes a second SCO link with the second headset using the physical address of the second headset, the method further comprises:
the terminal receives a second scene change instruction;
in response to the second scene change instruction, the second SCO link between the terminal and the second earphone is disconnected, wherein the second scene change instruction is used for indicating that the application scene of the TWS Bluetooth headset is changed into a single-person call scene.
5. The method according to claim 4, wherein before the terminal receives the second scene change instruction, the method further comprises:
the first earphone receives a second operation input by a user;
in response to the second operation, the first earphone sends a second scene change instruction to the terminal.
6. The method according to claim 4, wherein before the terminal receives the second scene change instruction, the method further comprises:
the second earphone receives a second operation input by the user;
in response to the second operation, the second headset sends a second scene change instruction to the first headset;
and the first earphone forwards the received second scene change instruction to the terminal.
7. The method according to any one of claims 1 to 3, wherein the first audio data is voice call data, the second audio data is voice call data, and the third audio data is voice call data.
8. A true wireless stereo (TWS) Bluetooth headset, comprising a first earphone and a second earphone, each of the first earphone and the second earphone comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps:
a first synchronous connection-oriented (SCO) link is established between a terminal and the first earphone, and a third SCO link is established between the first earphone and the second earphone;
the first earphone collects first audio data and sends the first audio data to the terminal through the first SCO link;
the first earphone receives second audio data sent by the terminal through the first SCO link, and forwards the second audio data to the second earphone through the third SCO link;
in response to a first scene change instruction and a physical address of the second earphone received by the terminal, a newly added second SCO link is established between the terminal and the second earphone by using the physical address of the second earphone, wherein the first scene change instruction is used for indicating that an application scene of the TWS Bluetooth headset is changed into a two-person call scene, and the physical address of the second earphone is a physical address sent by the first earphone to the terminal;
the second earphone collects third audio data and sends the third audio data to the terminal through the second SCO link;
and the second earphone receives the second audio data sent by the terminal through the second SCO link.
9. The TWS Bluetooth headset of claim 8, wherein before the terminal receives the first scene change instruction and the physical address of the second earphone, the following steps are further performed:
the first earphone receives a first operation input by a user;
in response to the first operation, the first earphone sends a physical address request message to the second earphone;
the first earphone receives the physical address of the second earphone sent by the second earphone;
and the first earphone sends a first scene change instruction and the physical address of the second earphone to the terminal.
10. The TWS Bluetooth headset of claim 8, wherein before the terminal receives the first scene change instruction and the physical address of the second earphone, the following steps are further performed:
the second earphone receives a first operation input by a user;
in response to the first operation, the second headset sends a first scene change instruction and a physical address of the second headset to the first headset;
and the first earphone forwards the first scene change instruction and the physical address of the second earphone to the terminal.
11. The TWS Bluetooth headset according to any one of claims 8 to 10, wherein after the terminal establishes the second SCO link with the second earphone using the physical address of the second earphone, the following step is further performed:
in response to a second scene change instruction received by the terminal, the second SCO link between the terminal and the second earphone is disconnected, wherein the second scene change instruction is used for indicating that the application scene of the TWS Bluetooth headset is changed into a single-person call scene.
12. The TWS Bluetooth headset of claim 11, wherein before the terminal receives the second scene change instruction, the following steps are further performed:
the first earphone receives a second operation input by a user;
in response to the second operation, the first earphone sends a second scene change instruction to the terminal.
13. The TWS Bluetooth headset of claim 11, wherein before the terminal receives the second scene change instruction, the following steps are further performed:
the second earphone receives a second operation input by the user;
in response to the second operation, the second headset sends a second scene change instruction to the first headset;
and the first earphone forwards the received second scene change instruction to the terminal.
14. A TWS bluetooth headset according to any of claims 8-10, wherein the first audio data is voice call data, the second audio data is voice call data, and the third audio data is voice call data.
15. A terminal, characterized in that the terminal comprises a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, performs the following steps:
a first synchronous connection-oriented (SCO) link is established between the terminal and a first earphone, and a third SCO link is established between the first earphone and a second earphone, where the first earphone and the second earphone are individual earphones of a true wireless stereo (TWS) Bluetooth headset;
the terminal receives first audio data sent by the first earphone through the first SCO link, wherein the first audio data is collected by the first earphone;
the terminal sends second audio data to the first earphone through the first SCO link, and the first earphone forwards the second audio data to the second earphone through the third SCO link;
the terminal receives a first scene change instruction and a physical address of the second earphone, wherein the first scene change instruction is used for indicating that an application scene of the TWS Bluetooth headset is changed into a two-person call scene, and the physical address of the second earphone is a physical address sent to the terminal by the first earphone;
in response to the first scene change instruction, a newly added second SCO link is established between the terminal and the second earphone by using the physical address of the second earphone;
the terminal receives third audio data sent by the second earphone through the second SCO link, wherein the third audio data are collected by the second earphone;
and the terminal sends the second audio data to the second earphone through the second SCO link.
16. The terminal of claim 15, wherein after the terminal establishes a second SCO link with the second headset using the physical address of the second headset, further comprising:
the terminal receives a second scene change instruction;
in response to the second scene change instruction, the second SCO link between the terminal and the second earphone is disconnected, wherein the second scene change instruction is used for indicating that the application scene of the TWS Bluetooth headset is changed into a single-person call scene.
17. The terminal according to claim 15 or 16, wherein the first audio data is voice call data, the second audio data is voice call data, and the third audio data is voice call data.
CN201910513900.9A 2019-06-14 2019-06-14 Bluetooth communication method, TWS (two way communication system) Bluetooth headset and terminal Active CN110381485B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910513900.9A CN110381485B (en) 2019-06-14 2019-06-14 Bluetooth communication method, TWS (two way communication system) Bluetooth headset and terminal
PCT/CN2020/095872 WO2020249098A1 (en) 2019-06-14 2020-06-12 Bluetooth communication method, tws bluetooth headset, and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910513900.9A CN110381485B (en) 2019-06-14 2019-06-14 Bluetooth communication method, TWS (two way communication system) Bluetooth headset and terminal

Publications (2)

Publication Number Publication Date
CN110381485A CN110381485A (en) 2019-10-25
CN110381485B true CN110381485B (en) 2021-01-29

Family

ID=68250292

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910513900.9A Active CN110381485B (en) 2019-06-14 2019-06-14 Bluetooth communication method, TWS (two way communication system) Bluetooth headset and terminal

Country Status (2)

Country Link
CN (1) CN110381485B (en)
WO (1) WO2020249098A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023014032A1 (en) * 2021-08-02 2023-02-09 Samsung Electronics Co., Ltd. System and method for establishing call audio sharing using bluetooth low energy audio technology

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109005480A (en) * 2018-07-19 2018-12-14 Oppo广东移动通信有限公司 Information processing method and related product
CN110381485B (en) * 2019-06-14 2021-01-29 华为技术有限公司 Bluetooth communication method, TWS (two way communication system) Bluetooth headset and terminal
CN110446192A (en) * 2019-06-26 2019-11-12 惠州迪芬尼声学科技股份有限公司 The method of subordinate earphone and audiogenic device pairing line
CN111063170B (en) * 2019-11-29 2021-12-07 歌尔股份有限公司 TWS earphone loss prevention method, TWS earphone and computer readable storage medium
CN113038436A (en) * 2019-12-25 2021-06-25 华为终端有限公司 Bluetooth communication method and device
CN111107463B (en) * 2019-12-30 2021-06-11 广州由我科技股份有限公司 Wireless earphone switching method and device, wireless earphone and medium
CN111314814A (en) * 2020-01-19 2020-06-19 湖南国声声学科技股份有限公司 TWS Bluetooth headset-based translation method, mobile terminal, TWS Bluetooth headset and storage medium
EP3890356A1 (en) * 2020-03-30 2021-10-06 Sonova AG Bluetooth audio exchange with transmission diversity
CN111428515B (en) * 2020-03-30 2022-07-15 浙江大学 Simultaneous interpretation equipment and method
CN111615036B (en) * 2020-04-17 2021-07-23 歌尔科技有限公司 Data processing method and device and electronic equipment
CN111526440B (en) * 2020-04-27 2022-03-01 歌尔科技有限公司 Method, device and medium for switching master ear and slave ear of TWS earphone in call scene
CN111698672B (en) * 2020-05-26 2022-09-13 展讯通信(上海)有限公司 Audio synchronization method of wireless earphone and wireless earphone
CN111696552B (en) * 2020-06-05 2023-09-22 北京搜狗科技发展有限公司 Translation method, translation device and earphone
CN111739538B (en) * 2020-06-05 2022-04-26 北京搜狗科技发展有限公司 Translation method and device, earphone and server
CN111988771B (en) * 2020-08-28 2023-09-15 维沃移动通信有限公司 Wireless connection control method and device and electronic equipment
CN112446223A (en) * 2020-11-23 2021-03-05 维沃移动通信有限公司 Translation method and device and electronic equipment
CN112511942A (en) * 2020-12-03 2021-03-16 歌尔科技有限公司 Voice translation method based on TWS earphone and TWS earphone
CN114697754B (en) * 2020-12-28 2024-02-13 深圳Tcl新技术有限公司 Method, system, medium and terminal equipment based on wireless access telephone conference
KR20220105402A (en) * 2021-01-20 2022-07-27 삼성전자주식회사 Electronic device for processing audio data and method of operating the same
CN113055868B (en) * 2021-03-12 2022-09-23 上海物骐微电子有限公司 Bluetooth rapid networking method and system and Bluetooth headset
CN112787742B (en) * 2021-03-16 2022-11-22 芯原微电子(成都)有限公司 Clock synchronization method and device, wireless earphone and readable storage medium
CN113115179B (en) * 2021-03-24 2023-03-24 维沃移动通信有限公司 Working state adjusting method and device
CN113329381B (en) * 2021-04-28 2022-03-11 荣耀终端有限公司 Method for establishing Bluetooth connection and electronic equipment
CN113271376B (en) * 2021-05-08 2023-08-22 维沃移动通信有限公司 Communication control method, electronic equipment and earphone
CN113382337A (en) * 2021-06-21 2021-09-10 紫优科技(深圳)有限公司 Equipment management system and method of intelligent earphone
CN113411712A (en) * 2021-06-29 2021-09-17 紫优科技(深圳)有限公司 Method and system for realizing double-ear separation mode based on intelligent earphone
CN113543101A (en) * 2021-07-09 2021-10-22 Oppo广东移动通信有限公司 Audio output method, Bluetooth device, mobile terminal and storage medium
CN115915037A (en) * 2021-09-24 2023-04-04 Oppo广东移动通信有限公司 Call control method and device, electronic equipment and computer readable storage medium
CN114710771B (en) * 2022-06-07 2022-08-23 成都市安比科技有限公司 Link switching method and device based on TWS (two way communication system) and Bluetooth communication system
CN115776628B (en) * 2023-02-13 2023-04-14 成都市安比科技有限公司 Method for accurate synchronization of two-ear recording of TWS Bluetooth headset

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107894881A (en) * 2017-10-18 2018-04-10 恒玄科技(上海)有限公司 The method that principal and subordinate link switching, conversation monitoring and the Mike of bluetooth earphone switch
CN109218883A (en) * 2018-08-27 2019-01-15 深圳市声临科技有限公司 A kind of interpretation method, translation system, TWS earphone and terminal

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9749766B2 (en) * 2015-12-27 2017-08-29 Philip Scott Lyren Switching binaural sound
US10631074B2 (en) * 2016-04-19 2020-04-21 Snik Llc Magnetic earphones holder
US10517111B2 (en) * 2016-09-21 2019-12-24 Apple Inc. Mitigating scheduling conflicts in wireless communication devices
CN106851450A (en) * 2016-12-26 2017-06-13 歌尔科技有限公司 A kind of wireless headset pair and electronic equipment
CN107333200B (en) * 2017-07-24 2023-10-20 歌尔科技有限公司 Translation earphone receiver, wireless translation earphone and wireless translation system
CN107708006B (en) * 2017-08-23 2020-08-28 广东思派康电子科技有限公司 Computer-readable storage medium, real-time translation system
CN108345591A (en) * 2018-01-26 2018-07-31 歌尔股份有限公司 Voice real time translating method based on mobile terminal ears wireless headset and system
CN109005480A (en) * 2018-07-19 2018-12-14 Oppo广东移动通信有限公司 Information processing method and related product
CN109391724B (en) * 2018-08-01 2020-12-22 展讯通信(上海)有限公司 Method for realizing double-ear call, mobile terminal and double-ear wireless earphone
CN109151789B (en) * 2018-09-30 2021-08-17 Oppo广东移动通信有限公司 Translation method, device and system and Bluetooth headset
CN110381485B (en) * 2019-06-14 2021-01-29 华为技术有限公司 Bluetooth communication method, TWS (two way communication system) Bluetooth headset and terminal

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107894881A (en) * 2017-10-18 2018-04-10 恒玄科技(上海)有限公司 The method that principal and subordinate link switching, conversation monitoring and the Mike of bluetooth earphone switch
CN109218883A (en) * 2018-08-27 2019-01-15 深圳市声临科技有限公司 A kind of interpretation method, translation system, TWS earphone and terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Uncovering the truth about the Bluetooth headset industry: in-depth teardown of four best-selling TWS true wireless Bluetooth headsets; 巴八安; 《https://piebbs.pconline.com.cntopic-292603.html》; 20190405; see Section 2 of the text and the accompanying figures *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023014032A1 (en) * 2021-08-02 2023-02-09 Samsung Electronics Co., Ltd. System and method for establishing call audio sharing using bluetooth low energy audio technology

Also Published As

Publication number Publication date
CN110381485A (en) 2019-10-25
WO2020249098A1 (en) 2020-12-17

Similar Documents

Publication Publication Date Title
CN110381485B (en) Bluetooth communication method, TWS (two way communication system) Bluetooth headset and terminal
US11653398B2 (en) Bluetooth connection method and device
CN110381345B (en) Screen projection display method and electronic equipment
WO2020133183A1 (en) Audio data synchronization method and device
CN112911566B (en) Bluetooth communication method and device
WO2021017909A1 (en) Method, electronic device and system for realizing functions through nfc tag
WO2020056684A1 (en) Method and device employing multiple tws earpieces connected in relay mode to realize automatic interpretation
CN113039822B (en) Method and equipment for establishing data channel
CN114079892A (en) Bluetooth communication method, wearable device and system
CN112335294B (en) Emergency call method and user terminal
CN114125789A (en) Communication method, terminal device and storage medium
CN114466324B (en) Calling method and related device
EP4195659A1 (en) Screen sharing method, electronic device and system
CN114697955A (en) Encrypted call method, device, terminal and storage medium
CN115580541B (en) Information synchronization method and electronic equipment
CN116981108B (en) Wireless screen-throwing connection method, mobile terminal and computer readable storage medium
CN114338891B (en) Communication method and device
CN113923528B (en) Screen sharing method, terminal and storage medium
WO2022095581A1 (en) Data transmission method and terminal device
CN116744275A (en) Communication method, electronic equipment and device
CN116709084A (en) Bluetooth headset, audio output method and audio output system
CN115884140A (en) Cross-device connection method, electronic device, and storage medium
CN113672187A (en) Data double-sided display method and device, electronic equipment and storage medium
CN114845078A (en) Communication method and electronic equipment
CN117354955A (en) Network connection method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant