WO2020140186A1 - Wireless audio system and audio communication method and device - Google Patents


Info

Publication number
WO2020140186A1
Authority
WO
WIPO (PCT)
Prior art keywords
audio
receiving device
data packet
wireless link
audio receiving
Prior art date
Application number
PCT/CN2018/126065
Other languages
English (en)
Chinese (zh)
Inventor
Wang Liang (王良)
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to CN201880100565.3A
Priority to PCT/CN2018/126065
Publication of WO2020140186A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00: Stereophonic arrangements
    • H04R5/033: Headphones for stereophonic communication
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80: Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication

Definitions

  • This application relates to the field of wireless technology, in particular to a wireless audio system, audio communication method and equipment.
  • Figure 1 shows an existing TWS audio solution (described in patent application document US 2012/0058727 A1).
  • the audio source sends stereo data containing two channels (CH1 and CH2) to the audio sink Sink1.
  • the audio receiving device Sink1 extracts the audio data of the channel CH1 from the stereo data of the two channels (CH1 and CH2), and forwards the audio data of the channel CH2 to the audio receiving device Sink2.
  • the audio source and the audio receiving device Sink1 are in the same sub-network 1 (piconet 1), and the audio receiving device Sink1 and the audio receiving device Sink2 are in the other sub-network 2 (piconet 2).
  • the sub-network 1 and the sub-network 2 may adopt, but are not limited to, wireless fidelity (Wi-Fi), Bluetooth (BT) and other wireless communication technologies.
  • the audio receiving device Sink1 is responsible for forwarding the audio data of the channel CH2 for the audio receiving device Sink2. However, the forwarding will cause the audio receiver Sink1 to consume a lot of power.
  • the present application provides a wireless audio system, audio communication method and device, which can save the power consumption of the audio receiving device and ensure that the audio receiving device can completely receive audio data.
  • the present application provides a wireless audio system.
  • the wireless audio system may include an audio source, a first audio receiving device, and a second audio receiving device.
  • a first wireless link may be established between the first audio receiving device and the second audio receiving device.
  • a second wireless link may be established between the first audio receiving device and the audio source.
  • No wireless link for transmitting audio data is established between the second audio receiving device and the audio source. Instead, the second audio receiving device listens on the second wireless link to the audio data that the audio source transmits to the first audio receiving device, and receives the audio data in this way.
  • a third wireless link for limited communication may be established between the second audio receiving device and the audio source. In some embodiments, the third wireless link may only be used by the second audio receiving device to feed back ACK/NACK to the audio source to inform whether audio data is successfully intercepted.
  • the audio source may be used to transmit audio data packets to the first audio receiving device through the second wireless link.
  • the first audio receiving device can be used to receive the audio data packet transmitted by the audio source through the second wireless link.
  • the first audio receiving device may also be used to determine whether the audio data packet transmitted by the audio source is successfully received through the second wireless link. If the audio data packet transmitted by the audio source is successfully received, the first audio receiving device can also be used to feed back an ACK to the audio source through the second wireless link; otherwise, the first audio receiving device can also be used to feed back a NACK to the audio source through the second wireless link.
  • the second audio receiving device may be used to listen to the audio data packet transmitted by the audio source to the first audio receiving device on the second wireless link.
  • the second audio receiving device may also be used to determine whether the audio data packet transmitted by the audio source is successfully intercepted. If the audio data packet transmitted by the audio source is successfully intercepted, the second audio receiving device can also be used to feed back an ACK for the audio data packet to the audio source through the third wireless link; otherwise, the second audio receiving device can also be used to feed back a NACK for the audio data packet to the audio source through the third wireless link.
  • the audio source may also be used to receive the ACK/NACK fed back by the first audio receiving device through the second wireless link and the ACK/NACK fed back by the second audio receiving device through the third wireless link. If both the first audio receiving device and the second audio receiving device feed back an ACK, the audio source continues to transmit the next audio data packet; otherwise, the audio source retransmits the audio data packet.
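The retransmission rule above can be sketched in a few lines of Python. This is a hypothetical illustration, not the patent's implementation; the function name and return strings are assumptions.

```python
# Sketch of the audio source's decision after collecting both feedbacks:
# advance to the next audio data packet only when BOTH receiving
# devices acknowledged the current one; otherwise retransmit it.

def next_action(first_device_ack: bool, second_device_ack: bool) -> str:
    """Return 'transmit_next' if both devices fed back an ACK,
    otherwise 'retransmit' the current audio data packet."""
    if first_device_ack and second_device_ack:
        return "transmit_next"
    return "retransmit"
```

Any single NACK (or a missing ACK treated as a NACK) forces a retransmission, which is what guarantees that both earbuds eventually hold a complete copy of the stream.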
  • the first audio receiving device may also be used to send communication information to the second audio receiving device through the first wireless link.
  • the communication information may be used by the second audio receiving device to listen to the audio data packet N transmitted by the audio source to the first audio receiving device on the second wireless link.
  • the communication information may specifically include, but is not limited to, one or more of the following communication parameters: the Bluetooth device address (BD_ADDR) of the audio source, the local clock (CLKN), the logical transport address (LT_ADDR) of the first audio receiving device, the clock offset, and encryption parameters of the second wireless link, such as a link key.
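The communication parameters listed above can be pictured as one small record handed from the first device to the second over the first wireless link. The following dataclass is only an illustrative container; the field names are assumptions and do not come from any real Bluetooth stack API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the listening parameters shared over the
# first wireless link so the second device can intercept traffic on
# the second wireless link.
@dataclass
class ListeningInfo:
    source_bd_addr: str           # Bluetooth device address (BD_ADDR) of the audio source
    source_clkn: int              # local clock (CLKN) of the audio source
    first_device_lt_addr: int     # logical transport address (LT_ADDR) of the first device
    clock_offset: int             # offset between the devices' clocks
    link_key: Optional[bytes] = None  # encryption parameter of the second wireless link
```

With these fields the second device can derive the hopping/timing of the second link (BD_ADDR, CLKN, clock offset), recognize packets addressed to the first device (LT_ADDR), and decrypt them (link key).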
  • the audio data packet transmitted by the audio source may contain stereo audio information, and the stereo audio information may be compressed or uncompressed stereo samples.
  • the stereo audio information may include stereo audio information of the first audio channel and stereo audio information of the second audio channel, for example, stereo audio information of the left channel and the right channel.
  • the first audio receiving device may be used to extract stereo audio information of the first channel (such as the left channel) from the audio data packet N after successfully receiving the audio data packet, And it can perform audio rendering and playback according to the stereo audio information of the first channel.
  • the second audio receiving device can be used to extract the stereo audio information of the second channel from the audio data packet N after successfully listening to the audio data packet, and can perform audio rendering and playback according to the stereo audio information of the second channel (such as the right channel). In this way, the first audio receiving device and the second audio receiving device can present a stereo playback experience.
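Since each packet carries both channels, each device only keeps its own. A minimal sketch, assuming uncompressed interleaved stereo samples (real packets would more likely carry compressed frames):

```python
# Hypothetical split of an interleaved stereo sample sequence
# [L0, R0, L1, R1, ...] into one channel's stream.

def extract_channel(stereo_samples, channel: int):
    """channel 0 = first channel (e.g. left), channel 1 = second (e.g. right)."""
    return stereo_samples[channel::2]
```

The first device would render `extract_channel(packet, 0)` and the second device `extract_channel(packet, 1)`, each discarding the other half.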
  • the audio source may transmit audio data packets in the first time slot.
  • the first audio receiving device and the second audio receiving device may feed back the ACK/NACK to the audio source in the second time slot.
  • the first time slot and the second time slot are continuous, and the second time slot may be the first time slot after the first time slot.
  • the second time slot may also be referred to as the next time slot of the first time slot.
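The slot pattern described above (audio in one slot, feedback in the very next slot) can be written out as a simple schedule. This generator is purely illustrative; the slot numbering and event labels are assumptions.

```python
# Hypothetical timeline: the source transmits audio data packet n in
# an even slot, and both devices feed back ACK/NACK in the slot
# immediately after it.

def schedule(num_packets: int):
    """Yield (slot, event) pairs for num_packets packets."""
    for n in range(num_packets):
        yield (2 * n, f"audio_packet_{n}")
        yield (2 * n + 1, f"ack_nack_for_packet_{n}")
```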
  • the first audio receiving device may be used to send the identification information (such as BD_ADDR) of the second audio receiving device to the audio source through the second wireless link.
  • the identification information of the second audio receiving device may be sent by the second audio receiving device to the first audio receiving device through the first wireless link.
  • the audio source may be used to establish a mapping relationship between the identification information of the second audio receiving device and the identification information of the first audio receiving device, and save the mapping relationship.
  • the mapping relationship may indicate that the second audio receiving device and the first audio receiving device are a pair of audio receiving devices with stereo audio information, such as a pair of left and right headphones.
  • the identification information of the first audio receiving device may be sent to the audio source when the first audio receiving device establishes the second wireless link with the audio source.
  • the audio source may be used to allocate an LT_ADDR to the first audio receiving device and the second audio receiving device, and there is a mapping relationship between the LT_ADDR of the first audio receiving device and the LT_ADDR of the second audio receiving device.
  • the mapping relationship may indicate that the first audio receiving device and the second audio receiving device are a pair of receivers of stereo information.
  • identification information such as BD_ADDR and LT_ADDR may be carried in the ACK/NACK fed back by the first audio receiving device, and identification information such as BD_ADDR and LT_ADDR may also be carried in the ACK/NACK fed back by the second audio receiving device.
  • the audio source may be used to determine whether the first audio receiving device and the second audio receiving device are a pair of audio receiving devices according to identification information such as BD_ADDR carried in the ACK/NACK.
  • the audio source can be used to determine, if the first audio receiving device and the second audio receiving device are a pair of audio receiving devices and both feed back an ACK, that both receivers of the stereo information have successfully received it, and can then continue to transmit the next audio data packet.
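The saved mapping relationship can be thought of as a symmetric lookup table keyed by device identifier. The class below is a hypothetical sketch of such a registry; the class and method names are assumptions.

```python
# Hypothetical registry on the audio source side mapping each device's
# identifier (e.g. BD_ADDR) to its stereo partner, used to decide
# whether two ACK/NACK senders form one pair of audio receiving devices.

class PairRegistry:
    def __init__(self):
        self._pairs = {}  # identifier -> identifier of its stereo partner

    def register_pair(self, first_addr: str, second_addr: str) -> None:
        """Save the mapping relationship in both directions."""
        self._pairs[first_addr] = second_addr
        self._pairs[second_addr] = first_addr

    def is_pair(self, addr_a: str, addr_b: str) -> bool:
        """True if the two identifiers belong to one stereo pair."""
        return self._pairs.get(addr_a) == addr_b
```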
  • the second audio receiving device receives the audio data packet transmitted by the audio source through interception, instead of receiving the audio data packet forwarded by the first audio receiving device, which can save power on the first audio receiving device. Moreover, since the second audio receiving device feeds back ACK/NACK directly to the audio source through a dedicated wireless link (the third wireless link), instead of feeding back ACK/NACK via the first audio receiving device, the second audio receiving device and the first audio receiving device do not need to complete a message interaction within a short idle time, which reduces the performance requirements on the chip. Moreover, the ACK/NACK feedback of the second audio receiving device does not depend on the first wireless link between it and the first audio receiving device: even if the first wireless link is disconnected, the second audio receiving device can still feed back ACK/NACK to the audio source normally.
  • the present application provides an audio communication method based on a wireless audio system, which is applied to the audio source side.
  • the wireless audio system may be the wireless audio system described in the first aspect.
  • the method may include: the audio source may transmit audio data packets to the first audio receiving device through the second wireless link. Then, the audio source may receive the ACK/NACK for the audio data packet fed back by the first audio receiving device through the second wireless link, and the audio source may also receive the ACK/NACK for the audio data packet fed back by the second audio receiving device through the third wireless link.
  • if either audio receiving device feeds back a NACK for the audio data packet, the audio source may retransmit the audio data packet to the first audio receiving device through the second wireless link.
  • if both audio receiving devices feed back an ACK for the audio data packet, the audio source may transmit the next audio data packet to the first audio receiving device through the second wireless link.
  • the present application provides an audio communication method based on a wireless audio system, which is applied to the first audio receiving device side.
  • the wireless audio system may be the wireless audio system described in the first aspect.
  • the method may include: the first audio receiving device may receive the audio data packet transmitted by the audio source through the second wireless link. If the audio data packet transmitted by the audio source is successfully received through the second wireless link, the first audio receiving device may feed back the ACK for the audio data packet to the audio source through the second wireless link. If the audio data packet transmitted by the audio source is not successfully received through the second wireless link, the first audio receiving device may feed back a NACK for the audio data packet to the audio source through the second wireless link.
  • the present application provides an audio communication method based on a wireless audio system, which is applied to the second audio receiving device side.
  • the wireless audio system may be the wireless audio system described in the first aspect.
  • the method may include: the second audio receiving device may listen to the audio data packet transmitted by the audio source to the first audio receiving device through the second wireless link. The second audio receiving device determines whether the audio data packet is successfully intercepted. If the audio data packet is successfully intercepted, the second audio receiving device can feed back the ACK for the audio data packet to the audio source through the third wireless link; otherwise, the second audio receiving device may feed back the NACK for the audio data packet to the audio source through the third wireless link.
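Whether an interception "succeeded" would in practice come down to an integrity check on the captured packet. The patent does not specify the check; the sketch below assumes, purely for illustration, a CRC32 trailer on each packet.

```python
import zlib

# Hypothetical integrity check the second device might run on an
# intercepted packet before choosing its feedback: recompute a CRC32
# over the payload and compare it with the 4-byte trailer.

def feedback_for(packet: bytes) -> str:
    """Return the feedback ('ACK' or 'NACK') to send on the third wireless link."""
    payload, trailer = packet[:-4], packet[-4:]
    ok = zlib.crc32(payload).to_bytes(4, "big") == trailer
    return "ACK" if ok else "NACK"
```

A corrupted or partially captured packet fails the check and triggers a NACK, which in turn makes the audio source retransmit.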
  • Implementing the methods described in the second to fourth aspects can save power consumption of the first audio receiving device. Moreover, since the second audio receiving device feeds back ACK/NACK directly to the audio source through a dedicated wireless link (the third wireless link), instead of feeding back ACK/NACK via the first audio receiving device, the second audio receiving device and the first audio receiving device do not need to complete a message interaction within a short idle time, which reduces the performance requirements on the chip.
  • the first audio receiving device may also send communication information to the second audio receiving device through the first wireless link.
  • the second audio receiving device can receive the communication information through the first wireless link.
  • the communication information may be used by the second audio receiving device to listen to the audio data packet N transmitted by the audio source to the first audio receiving device on the second wireless link.
  • the communication information may specifically include, but is not limited to, one or more of the following communication parameters: the Bluetooth device address (BD_ADDR) of the audio source, the local clock (CLKN), the logical transport address (LT_ADDR) of the first audio receiving device, the clock offset, and encryption parameters of the second wireless link, such as a link key.
  • the audio data packet transmitted by the audio source may include stereo audio information, and the stereo audio information may be compressed or uncompressed stereo samples.
  • the stereo audio information may include stereo audio information of the first audio channel and stereo audio information of the second audio channel, for example, stereo audio information of the left channel and the right channel.
  • the first audio receiving device may extract the stereo audio information of the first channel (such as the left channel) from the audio data packet N after successfully receiving the audio data packet, and can perform audio rendering and playback according to the stereo audio information of the first channel.
  • the second audio receiving device can extract the stereo audio information of the second channel from the audio data packet N, and can perform audio rendering and playback according to the stereo audio information of the second channel (such as the right channel). In this way, the first audio receiving device and the second audio receiving device can present a stereo playback experience.
  • the first audio receiving device may send the identification information (such as BD_ADDR) of the second audio receiving device to the audio source through the second wireless link.
  • the identification information of the second audio receiving device may be sent by the second audio receiving device to the first audio receiving device through the first wireless link.
  • the audio source may establish a mapping relationship between the identification information of the second audio receiving device and the identification information of the first audio receiving device, and save the mapping relationship.
  • the mapping relationship may indicate that the second audio receiving device and the first audio receiving device are a pair of audio receiving devices with stereo audio information, such as a pair of left and right headphones.
  • the identification information of the first audio receiving device may be sent to the audio source when the first audio receiving device establishes the second wireless link with the audio source.
  • the audio source may allocate an LT_ADDR to the first audio receiving device and the second audio receiving device, and there is a mapping relationship between the LT_ADDR of the first audio receiving device and the LT_ADDR of the second audio receiving device.
  • the mapping relationship may indicate that the first audio receiving device and the second audio receiving device are a pair of receivers of stereo information.
  • the audio source may send the LT_ADDR allocated to the first audio receiving device and the LT_ADDR allocated to the second audio receiving device to the first audio receiving device through the second wireless link, and the first audio receiving device may then send the LT_ADDR that the audio source allocated to the second audio receiving device to the second audio receiving device through the first wireless link. In this way, the first audio receiving device and the second audio receiving device can each carry their own LT_ADDR in the ACK/NACK fed back to the audio source.
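The two-hop distribution of the allocated addresses can be sketched as a pair of messages, one per link. The function and key names below are hypothetical; only the forwarding structure comes from the text above.

```python
# Hypothetical sketch of LT_ADDR distribution: the source sends both
# addresses to the first device on the second wireless link, and the
# first device forwards only the second device's address on the first
# wireless link.

def distribute_lt_addrs(source_alloc: dict) -> tuple:
    """source_alloc maps role ('first'/'second') -> allocated LT_ADDR.
    Returns (message on second link, message forwarded on first link)."""
    second_link_msg = dict(source_alloc)                    # both addresses, to the first device
    first_link_msg = {"second": source_alloc["second"]}     # forwarded to the second device
    return second_link_msg, first_link_msg
```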
  • identification information such as BD_ADDR and LT_ADDR may be carried in the ACK/NACK fed back by the first audio receiving device, and identification information such as BD_ADDR and LT_ADDR may also be carried in the ACK/NACK fed back by the second audio receiving device.
  • the audio source may determine whether the first audio receiving device and the second audio receiving device are a pair of audio receiving devices according to identification information such as the BD_ADDR carried in the ACK/NACK. If it is determined that the first audio receiving device and the second audio receiving device are a pair of audio receiving devices, and both feed back an ACK, the audio source may determine that both receivers of the stereo information have successfully received it, and can continue to transmit the next audio data packet.
  • an electronic device for performing the audio communication method described in the second aspect.
  • the electronic device may include a memory and a processor, a transmitter, and a receiver coupled to the memory, wherein the transmitter is used to transmit audio data to the audio receiving devices, and the receiver is used to receive data transmitted by the audio receiving devices, such as an ACK/NACK.
  • the memory is used to store the implementation code of the audio communication method described in the second aspect
  • the processor is used to execute the program code stored in the memory, that is, to execute the audio communication method described in the second aspect.
  • an electronic device which may have a function of implementing the audio communication method described in the second aspect.
  • This function can be realized by hardware, and can also be realized by hardware executing corresponding software.
  • the above hardware or software includes one or more modules corresponding to the above functions.
  • a computer device may include a memory, a processor, and a computer program stored on the memory and operable on the processor; when the processor executes the computer program, the computer device implements the audio communication method described in the second aspect.
  • an audio receiving device for performing the audio communication method described in the third aspect.
  • the audio receiving device may include a memory and a processor, a transmitter, and a receiver coupled to the memory, wherein the transmitter is used to send data to other audio receiving devices or the audio source, and the receiver is used to receive data, such as audio data, transmitted by other audio receiving devices or the audio source.
  • the memory is used to store the implementation code of the audio communication method described in the third aspect
  • the processor is used to execute the program code stored in the memory, that is, to perform the audio communication method described in the third aspect.
  • an audio receiving device which can have a function of implementing the audio communication method described in the third aspect.
  • This function can be realized by hardware, and can also be realized by hardware executing corresponding software.
  • the above hardware or software includes one or more modules corresponding to the above functions.
  • a computer device may include a memory, a processor, and a computer program stored on the memory and operable on the processor; when the processor executes the computer program, the computer device implements the audio communication method described in the third aspect.
  • an audio receiving device for performing the audio communication method described in the fourth aspect.
  • the audio receiving device may include a memory and a processor, a transmitter, and a receiver coupled to the memory, wherein the transmitter is used to send data to other audio receiving devices or the audio source, and the receiver is used to receive data, such as audio data, transmitted by other audio receiving devices or the audio source.
  • the memory is used to store the implementation code of the audio communication method described in the fourth aspect
  • the processor is used to execute the program code stored in the memory, that is, to perform the audio communication method described in the fourth aspect.
  • an audio receiving device which can have a function of implementing the audio communication method described in the fourth aspect.
  • This function can be realized by hardware, and can also be realized by hardware executing corresponding software.
  • the above hardware or software includes one or more modules corresponding to the above functions.
  • a computer device may include a memory, a processor, and a computer program stored on the memory and operable on the processor; when the processor executes the computer program, the computer device implements the audio communication method described in the fourth aspect.
  • a communication system comprising a first audio receiving device and a second audio receiving device, wherein the first audio receiving device may be the audio receiving device described in the eighth aspect, the ninth aspect, or the tenth aspect.
  • the second audio receiving device may be the audio receiving device described in the eleventh aspect, the twelfth aspect, or the thirteenth aspect.
  • a communication system includes an audio source, a first audio receiving device, and a second audio receiving device, where the audio source may be the electronic device described in the fifth aspect, the sixth aspect, or the seventh aspect.
  • the first audio receiving device may be the audio receiving device described in the eighth aspect, the ninth aspect, or the tenth aspect.
  • the second audio receiving device may be the audio receiving device described in the eleventh aspect, the twelfth aspect, or the thirteenth aspect.
  • a computer-readable storage medium having instructions stored on it which, when executed on a computer, cause the computer to perform the audio communication method described in the second aspect, the third aspect, or the fourth aspect.
  • a computer program product containing instructions, which when executed on a computer, causes the computer to execute the audio communication method described in the second aspect or the third aspect or the fourth aspect.
  • Figure 1 is a schematic diagram of an existing true wireless audio communication solution
  • FIG. 2 is a schematic structural diagram of a wireless audio system involved in this application.
  • FIGS. 3A-3C are schematic diagrams of an existing audio communication solution in the wireless audio system shown in FIG. 2;
  • FIG. 4 is a schematic structural diagram of a wireless audio system provided by an embodiment of the present application.
  • FIGS. 6A-6D are timing diagrams of several transmission situations of the audio communication method provided by this application.
  • FIG. 7A is a schematic diagram of a hardware architecture of an electronic device provided by an embodiment of the present application.
  • FIG. 7B is a schematic diagram of a software architecture implemented on the electronic device shown in FIG. 7A;
  • FIG. 8 is a schematic diagram of a hardware architecture of an audio receiving device provided by an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a wireless audio system and related devices provided by an embodiment of the present application.
  • FIG. 2 shows the wireless audio system 10 involved in the present application.
  • the wireless audio system 10 may include the following devices: an audio source 101, a first audio receiving device (audio sink) 102, and a second audio receiving device (audio sink) 103.
  • the audio source 101 can be implemented as any of the following electronic devices: mobile phones, portable game consoles, portable media playback devices, personal computers, vehicle-mounted media playback devices, and so on.
  • the first audio receiving device 102 and the second audio receiving device 103 can be configured as any type of electro-acoustic transducer for converting audio data into sound, such as speakers, in-ear headphones, over-ear headphones, and so on.
  • the physical form and size of the audio source 101, the first audio receiving device 102, and the second audio receiving device 103 may also be different, which is not limited in this application.
  • the audio source 101, the first audio receiving device 102, and the second audio receiving device 103 may all be configured with a wireless transceiver, and the wireless transceiver may be used to transmit and receive wireless signals.
  • the audio source 101 can transmit an audio stereo stream in the form of one or more data packets.
  • This data packet may be referred to as an audio data packet.
  • Each audio data packet may contain stereo audio information, which may be compressed or uncompressed stereo samples.
  • the stereo audio information may include stereo samples of the first audio channel and stereo samples of the second audio channel, for example, stereo samples of the left channel and the right channel.
  • a second wireless link 106 can be established between the audio source 101 and the first audio receiving device 102, and the two can communicate through the second wireless link 106.
  • the audio source 101 may transmit audio data packets to the first audio receiving device 102 through the second wireless link 106.
  • the first audio receiving device 102 can convert the received audio data into sound, so that the user wearing the first audio receiving device 102 can hear the sound.
  • a first wireless link 105 can be established between the first audio receiving device 102 and the second audio receiving device 103, and the two can communicate through the first wireless link 105 instead of a wired communication link.
  • the first audio receiving device 102 can send communication information (one or more communication parameters) to the second audio receiving device 103 through the first wireless link 105, so that the second audio receiving device 103 can listen to the audio source 101 according to the communication information Audio data transmitted to the first audio receiving device 102 on the second wireless link 106.
  • the communication information may include, but is not limited to: the Bluetooth device address (BD_ADDR) of the audio source 101, the local clock (CLKN), the logical transport address (LT_ADDR) of the first audio receiving device 102, the clock offset, and encryption parameters of the second wireless link 106, such as a link key.
  • the second audio receiving device 103 may listen on the second wireless link 106, according to the communication information sent by the first audio receiving device 102 (such as the BD_ADDR and CLKN of the audio source 101, the LT_ADDR of the first audio receiving device 102, the clock offset, etc.), to receive the audio data that the audio source 101 transmits to the first audio receiving device 102 on the second wireless link 106.
  • the first audio receiving device 102 may serve as a participant, and the second audio receiving device 103 may serve as an observer.
  • the second audio receiving device 103 receives audio data from the audio source 101 through listening, instead of having the first audio receiving device 102 forward the audio data, which can greatly reduce the power consumption of the first audio receiving device 102 and provide a longer battery life.
  • the first audio receiving device 102 may be configured with a sound collecting device such as a receiver/microphone.
  • in the transmission direction from the first audio receiving device 102 to the audio source 101, the first audio receiving device 102 can convert collected sound into audio data and send the audio data to the audio source 101 through the second wireless link 106.
  • the audio source 101 can process the received audio data, such as sending the audio data to other electronic devices (in a voice call scenario) and storing the audio data (in a recording scenario).
• the audio source 101 and the first audio receiving device 102 can exchange playback control messages (such as previous song, next song), call control messages (such as answering, hanging up), volume control messages (such as volume up, volume down), and the like based on the second wireless link 106.
  • the audio source 101 may send a playback control message and a call control message to the first audio receiving device 102 through the second wireless link 106, which may implement playback control and call control on the audio source 101 side.
  • the first audio receiving device 102 may send a playback control message and a call control message to the audio source 101 through the second wireless link 106, which may implement playback control and call control on the side of the first audio receiving device 102.
• the wireless audio system 10 shown in FIG. 2 may be a wireless audio system implemented based on the Bluetooth protocol. That is, the devices in the wireless audio system 10 can use Bluetooth communication technology to receive or send data. To support stereo audio applications, the devices in the wireless audio system 10 can implement some profiles of the Bluetooth protocol, such as the advanced audio distribution profile (A2DP), the audio/video remote control profile (AVRCP), and the hands-free profile (HFP). The wireless audio system 10 is not limited to Bluetooth communication technology, and may also use wireless communication technologies such as wireless fidelity (Wi-Fi) and ZigBee.
  • the following discusses how to solve the problem of audio packet loss and retransmission in the wireless audio system 10 shown in FIG. 2.
  • the second audio receiving device 103 does not directly feed back an acknowledgement (ACK) or non-acknowledgement (NACK) to the audio source 101.
• the second audio receiving device 103 can feed back an ACK to the first audio receiving device 102 (participant) through the first wireless link 105 to indicate that the audio data packet was successfully intercepted.
• the second audio receiving device 103 can feed back a NACK to the first audio receiving device 102 (participant) through the first wireless link 105 to indicate that the audio data packet was not successfully intercepted.
  • the time slot (slot(n+1) to slot(n+5)) for transmitting audio data includes two parts: audio data transmission time and idle time (idle period).
• in FIGS. 3A-3C, the first audio receiving device 102 is the participant and the second audio receiving device 103 is the observer. POLL means that the participant asks whether the observer successfully listened to the audio data packet, and ACK/NACK is the response returned by the observer to the participant (answering whether the audio data packet was successfully intercepted).
• FIG. 3B shows the case where the audio source 101 transmits the audio frame N, the participant successfully receives the audio frame N, and the observer successfully listens to the audio frame N.
  • the audio frame N is the audio data packet N.
• the first audio receiving device 102 may send an ACK to the audio source 101 in the next time slot (slot(n+6)) to indicate that both the participant and the observer successfully received the audio frame N.
  • FIG. 3C shows a situation where the audio source 101 transmits the audio frame N, the participant successfully receives the audio frame N, but the observer does not successfully listen to the audio frame N.
• the first audio receiving device 102 may send a NACK to the audio source 101 in the next time slot (slot(n+6)) to trigger the audio source 101 to retransmit the audio frame N.
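The relayed feedback of FIGS. 3B-3C can be sketched as follows (a minimal illustration; the function name and string values are assumptions, not part of the Bluetooth specification):

```python
def combined_feedback(participant_received: bool, observer_reply: str) -> str:
    """Single ACK/NACK the participant relays back to the audio source.

    The participant answers ACK only when it received the audio frame
    itself AND the observer replied ACK to the POLL during the idle
    period; otherwise it answers NACK, which triggers retransmission.
    """
    if participant_received and observer_reply == "ACK":
        return "ACK"
    return "NACK"
```

This makes explicit why the scheme is demanding: the POLL and the observer's reply must both fit inside the short idle period before the participant can compute its answer.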
• the scheme of FIGS. 3A-3C requires the participant and the observer to complete the message interaction within a relatively short idle time (idle period); in particular, when Bluetooth technology is used, the two need to complete the interaction within 200 us, which places high performance requirements on the chip.
  • this application provides another solution, which may refer to the wireless audio system 20 shown in FIG. 4.
• an atypical third wireless link 107 can be established between the audio source 101 and the second audio receiving device 103 (observer).
  • the third wireless link 107 may be used for limited interaction between the audio source 101 and the second audio receiving device 103 (observer). In some embodiments, the third wireless link 107 may be used only for the second audio receiving device 103 (observer) to feed back the ACK/NACK for the audio data packet to the audio source 101.
  • the third wireless link 107 is not used for the audio source 101 to transmit audio data to the second audio receiving device 103 (observer).
  • the second audio receiving device 103 (observer) can listen to the audio data transmitted by the audio source 101 to the first audio receiving device 102 on the second wireless link 106 according to the communication information.
• if both the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) feed back ACK, the audio source 101 can continue to transmit the next audio data packet; if either of them feeds back NACK, the audio source 101 retransmits the audio data packet.
• because the second audio receiving device 103 (observer) feeds back ACK/NACK directly to the audio source 101 through the atypical wireless link 107, instead of via the first audio receiving device 102 (participant), the second audio receiving device 103 (observer) and the first audio receiving device 102 (participant) do not need to complete the message interaction within the relatively short idle time shown in FIGS. 3B-3C.
• the ACK/NACK feedback of the second audio receiving device 103 does not depend on the first wireless link 105 between it and the first audio receiving device 102 (participant); even if the first wireless link 105 is disconnected, the second audio receiving device 103 can still feed back ACK/NACK to the audio source 101 normally.
• the first audio receiving device 102 (participant) may be referred to as SNK-1, the second audio receiving device 103 (observer) may be referred to as SNK-2, and the audio source 101 may be referred to as SRC.
  • SNK-1 may send the identification information of SNK-2 (such as BD_ADDR of SNK-2) to the SRC through the second wireless link 106.
  • the identification information of SNK-2 may be sent by SNK-2 to SNK-1 through the first wireless link 105.
  • the SRC can establish the mapping relationship between the identification information of SNK-2 and the identification information of SNK-1, and save the mapping relationship.
  • the mapping relationship may indicate that SNK-2 and SNK-1 are a pair of audio receiving devices for stereo audio information, such as a pair of left and right headphones.
  • the identification information of SNK-1 may be sent to SRC when SNK-1 establishes the second wireless link 106 with SRC.
• after receiving the ACK/NACK from SNK-1 (which can carry the identification information of SNK-1) and the ACK/NACK from SNK-2 (which can carry the identification information of SNK-2), the SRC can use the identification information carried in the ACK/NACK to determine whether SNK-1 and SNK-2 are a pair of audio receiving devices. If it determines that SNK-1 and SNK-2 are a pair of audio receiving devices and both feed back ACK, the SRC can determine that the pair of receivers of the stereo information has successfully received the stereo information, and can continue to transmit the next audio data packet.
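The SRC-side bookkeeping described above can be sketched as follows (the class name, method names, dictionary layout, and example addresses are all assumptions for illustration):

```python
class AudioSourcePairing:
    """Illustrative model of how the SRC might pair SNK-1 and SNK-2
    from the identification information carried in their ACK/NACK."""

    def __init__(self):
        # SNK-1 identification -> SNK-2 identification (e.g. BD_ADDRs),
        # established when SNK-1 forwards SNK-2's id over link 106.
        self.pairs = {}

    def register_pair(self, snk1_id: str, snk2_id: str) -> None:
        self.pairs[snk1_id] = snk2_id

    def pair_complete(self, feedback: dict) -> bool:
        """feedback maps a device id (carried in its ACK/NACK) to
        "ACK"/"NACK"; True means the next packet may be transmitted."""
        for snk1_id, snk2_id in self.pairs.items():
            if feedback.get(snk1_id) == "ACK" and feedback.get(snk2_id) == "ACK":
                return True
        return False
```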
  • the present application provides an audio communication method.
  • the main inventive idea may include: a third wireless link 107 may be established between the audio source 101 and the second audio receiving device 103 (observer).
• the third wireless link 107 can be used for limited communication between the audio source 101 and the second audio receiving device 103 (observer); for example, it can be used only for the second audio receiving device 103 (observer) to feed back ACK/NACK to the audio source 101. In this way, the observer does not need to feed back ACK/NACK via the participant.
• after the audio source 101 transmits an audio data packet, if the first audio receiving device 102 (participant) and/or the second audio receiving device 103 (observer) feeds back NACK, the audio source 101 retransmits the audio data packet.
  • FIG. 5 shows an audio communication method provided by this application.
  • the audio communication method shown in FIG. 5 is applied to the wireless audio system 20 shown in FIG. 4.
  • the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) can communicate through the first wireless link 105, the first audio receiving device 102 (participant) and the audio source 101 Can communicate via the second wireless link 106.
  • a third wireless link 107 may be established between the second audio receiving device 103 (observer) and the audio source 101.
• the third wireless link 107 can be used for limited communication between the second audio receiving device 103 (observer) and the audio source 101, for example, only for the second audio receiving device 103 (observer) to feed back ACK or NACK to the audio source 101.
  • the third wireless link 107 may be a unidirectional link.
  • the wireless audio system 20 shown in FIG. 4 may use Bluetooth communication technology.
  • the first wireless link 105, the second wireless link 106, and the third wireless link 107 may all be links established based on the Bluetooth protocol.
• the method shown in FIG. 5 is described in detail below:
  • Stage 1 (S101): The first audio receiving device 102 (participant) can send communication information to the second audio receiving device 103 (observer) through the first wireless link 105.
• the communication information can be used by the second audio receiving device 103 (observer) to listen to the audio data packet N transmitted by the audio source 101 to the first audio receiving device 102 (participant) on the second wireless link 106.
  • N is a positive integer.
  • the audio data packet N may be any audio data packet in the audio stream (audio stream) transmitted by the audio source 101.
• the communication information may specifically include, but is not limited to, one or more of the following communication parameters: the Bluetooth device address (BD_ADDR) and local clock (CLKN) of the audio source 101, the logical transport address (LT_ADDR) and clock offset of the first audio receiving device 102, and the encryption parameters of the second wireless link 106, such as a link key.
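Grouping these parameters, a minimal sketch of the communication information might look like this (field names mirror the Bluetooth terms above; the container itself and the types are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommunicationInfo:
    """Parameters the observer needs in order to listen on the second
    wireless link 106. Values shown in tests are illustrative only."""
    bd_addr: str                      # BD_ADDR of the audio source 101
    clkn: int                         # local clock (CLKN) of the audio source 101
    lt_addr: int                      # LT_ADDR of the first audio receiving device 102
    clock_offset: int                 # clock offset
    link_key: Optional[bytes] = None  # encryption parameter of link 106
```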
  • Phase 2 (S102, S105, S109, S113): The audio source 101 transmits the audio data packet N.
  • the audio source 101 may transmit the audio data packet N to the first audio receiving device 102 (participant) on the second wireless link 106.
  • the audio data packet N may contain stereo audio information, which may be compressed or uncompressed stereo samples.
  • the stereo audio information may include stereo audio information of the first audio channel and stereo audio information of the second audio channel, for example, stereo audio information of the left channel and the right channel.
• the first audio receiving device 102 (participant) can receive the audio data packet N transmitted by the audio source 101 through the second wireless link 106; for details, refer to S102-A, S105-A, S109-A, and S113-A.
• the second audio receiving device 103 (observer) can listen to the audio data packet N transmitted by the audio source 101 to the first audio receiving device 102 (participant) on the second wireless link 106 according to the aforementioned communication information; for details, refer to S102-B, S105-B, S109-B, and S113-B.
• after successfully receiving the audio data packet N, the first audio receiving device 102 (participant) can extract the stereo audio information of the first channel (such as the left channel) from the audio data packet N, and perform audio rendering and playback according to the stereo audio information of the first channel.
• after successfully listening to the audio data packet N, the second audio receiving device 103 (observer) can extract the stereo audio information of the second channel (such as the right channel) from the audio data packet N, and perform audio rendering and playback according to the stereo audio information of the second channel. In this way, the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) can present a stereo playback experience.
• Phase 3 (S103-S104, S106-S107, S110-S111, S114-S115): The participant and the observer feed back ACK/NACK.
• when the audio data packet N is successfully received, the first audio receiving device 102 (participant) may return an ACK for the audio data packet N to the audio source 101 through the second wireless link 106.
• when the audio data packet N is not successfully received, the first audio receiving device 102 (participant) may return a NACK for the audio data packet N to the audio source 101 through the second wireless link 106.
• when the audio data packet N transmitted by the audio source 101 is successfully intercepted according to the foregoing communication information, the second audio receiving device 103 (observer) may return an ACK for the audio data packet N to the audio source 101 through the third wireless link 107.
• when the audio data packet N transmitted by the audio source 101 is not successfully intercepted according to the foregoing communication information, the second audio receiving device 103 (observer) may return a NACK for the audio data packet N to the audio source 101 through the third wireless link 107.
• Stage 4: Determine whether to perform retransmission based on the feedback from the participant and the observer.
• if both the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) feed back ACK, the audio source 101 can determine that the pair of receivers of the stereo information has successfully received the stereo information, and can continue to transmit the next audio data packet. If the first audio receiving device 102 (participant) and/or the second audio receiving device 103 (observer) feeds back NACK, the audio source 101 may determine that the pair of receivers of the stereo information has not both successfully received the stereo information, and retransmits the audio data packet N; refer to S108, S112, or S116. This can ensure that the audio receiving devices receive complete audio data and avoid stuttering caused by data loss.
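The stage-4 decision rule can be summarized in a short sketch (illustrative only; the function name and return strings are assumptions):

```python
def stage4_action(participant_fb: str, observer_fb: str) -> str:
    """Decision at the audio source 101: continue with the next audio
    data packet only when both devices fed back ACK; a NACK from either
    device triggers retransmission of the audio data packet N."""
    if participant_fb == "ACK" and observer_fb == "ACK":
        return "transmit N+1"
    return "retransmit N"
```

Note that a single NACK from either side is sufficient to retransmit, which is what keeps both earbuds in sync on the same packet.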
• the audio source 101 transmits the audio data packet N in slot(n+1) to slot(n+5).
  • the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) can feed back ACK/NACK to the audio source 101 at slot (n+6).
• slot(n+1) to slot(n+5) may be referred to as the first time slot, and slot(n+6) may be referred to as the second time slot.
• the first time slot and the second time slot are continuous, and the second time slot may be the first slot after the first time slot. The first time slot is not limited to the 5 slots indicated by slot(n+1) to slot(n+5) in FIG. 6A; its time length may also be another value, such as 3 slots, which is not limited here.
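Assuming slots are numbered as in FIG. 6A, the role of each slot under this timing might be classified as follows (a sketch; the function and its labels are assumptions):

```python
def slot_role(first_slot_start: int, first_slot_len: int, slot: int) -> str:
    """Classify a slot index under the timing of FIG. 6A: the first
    time slot carries audio data, and the second time slot (the slot
    immediately after it) carries the ACK/NACK feedback.
    first_slot_len is 5 in FIG. 6A but may be another value (e.g. 3)."""
    if first_slot_start <= slot < first_slot_start + first_slot_len:
        return "audio data"
    if slot == first_slot_start + first_slot_len:
        return "ACK/NACK feedback"
    return "next transmission"
```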
• Case 1 (S102-S104): The audio source 101 transmits the audio data packet N, the participant 102 successfully receives the audio data packet N through the second wireless link 106 (refer to S102-A), and the observer 103 successfully listens to the audio data packet N (refer to S102-B).
  • FIG. 6A is an operation timing chart of each device in the wireless communication system in case 1.
• after successfully receiving the audio data packet N in the first time slot, the first audio receiving device 102 (participant) can send an ACK to the audio source 101 through the second wireless link 106 in the second time slot to indicate that the audio data packet N was successfully received.
• after successfully listening to the audio data packet N in the first time slot, the second audio receiving device 103 (observer) can send an ACK to the audio source 101 through the third wireless link 107 in the second time slot to indicate that the audio data packet N was successfully intercepted.
• the audio source 101 can determine that the pair of receivers of the stereo information has successfully received the stereo information, and can continue to transmit the next audio data packet (such as the audio data packet N+1) in the time slot after the second time slot.
• Case 2 (S105-S108): The audio source 101 transmits the audio data packet N, the participant 102 successfully receives the audio data packet N (refer to S105-A), but the observer 103 does not successfully listen to the audio data packet N (refer to S105-B).
  • FIG. 6B is an operation timing chart of each device in the wireless communication system in case 2.
• after successfully receiving the audio data packet N in the first time slot, the first audio receiving device 102 (participant) can send an ACK to the audio source 101 through the second wireless link 106 in the second time slot to indicate that the audio data packet N was successfully received.
• after failing to hear the audio data packet N in the first time slot, the second audio receiving device 103 (observer) can send a NACK to the audio source 101 through the third wireless link 107 in the second time slot to indicate that the audio data packet N was not successfully heard.
• the audio source 101 may determine that the second audio receiving device 103 (observer) did not successfully hear the audio data packet N, that is, the pair of receivers of the stereo information did not both successfully receive it, and the audio source 101 may retransmit the audio data packet N in the time slot after the second time slot.
• Case 3 (S109-S112): The audio source 101 transmits the audio data packet N, the participant 102 fails to receive the audio data packet N through the second wireless link 106 (refer to S109-A), and the observer 103 fails to hear the audio data packet N (refer to S109-B).
  • FIG. 6C is an operation timing chart of each device in the wireless communication system in case 3.
• after failing to receive the audio data packet N in the first time slot, the first audio receiving device 102 (participant) can send a NACK to the audio source 101 through the second wireless link 106 in the second time slot to indicate that the audio data packet N was not successfully received.
• after failing to hear the audio data packet N in the first time slot, the second audio receiving device 103 (observer) can send a NACK to the audio source 101 through the third wireless link 107 in the second time slot to indicate that the audio data packet N was not successfully heard.
• the audio source 101 may determine that neither the first audio receiving device 102 (participant) nor the second audio receiving device 103 (observer) successfully received the audio data packet N, that is, the pair of receivers of the stereo information did not both successfully receive it, and the audio source 101 may retransmit the audio data packet N in the time slot after the second time slot.
• Case 4 (S113-S116): The audio source 101 transmits the audio data packet N, the participant 102 does not successfully receive the audio data packet N (refer to S113-A), but the observer 103 successfully listens to the audio data packet N (refer to S113-B).
  • FIG. 6D is an operation timing chart of each device in the wireless communication system in case 4.
• after failing to receive the audio data packet N in the first time slot, the first audio receiving device 102 (participant) can send a NACK to the audio source 101 through the second wireless link 106 in the second time slot to indicate that the audio data packet N was not successfully received.
• after successfully listening to the audio data packet N in the first time slot, the second audio receiving device 103 (observer) can send an ACK to the audio source 101 through the third wireless link 107 in the second time slot to indicate that the audio data packet N was successfully intercepted.
• the audio source 101 may determine that the first audio receiving device 102 (participant) did not successfully receive the audio data packet N, that is, the pair of receivers of the stereo information did not both successfully receive it, and the audio source 101 may retransmit the audio data packet N in the time slot after the second time slot.
• the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) can perform ACK/NACK feedback one or more times in the second time slot.
  • the audio source 101 may determine that the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) are a pair of receivers of stereo information in the following ways.
• the first audio receiving device 102 (participant) can send the identification information of the second audio receiving device 103 (observer), such as its BD_ADDR, to the audio source 101 through the second wireless link 106.
  • the identification information of the second audio receiving device 103 (observer) may be sent by the second audio receiving device 103 (observer) to the first audio receiving device 102 (participant) through the first wireless link 105.
  • the audio source 101 can establish a mapping relationship between the identification information of the second audio receiving device 103 (observer) and the identification information of the first audio receiving device 102 (participant), and save the mapping relationship.
• the mapping relationship may indicate that the second audio receiving device 103 (observer) and the first audio receiving device 102 (participant) are a pair of audio receiving devices for stereo audio information, such as a pair of left and right headphones.
  • the identification information of the first audio receiving device 102 (participant) may be sent to the audio source 101 when the first audio receiving device 102 (participant) establishes the second wireless link 106 with the audio source 101.
• the audio source 101 can determine, based on the identification information (such as BD_ADDR) carried in the ACK/NACK, whether the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) are a pair of audio receiving devices.
• if they are a pair of audio receiving devices and both feed back ACK, the audio source 101 can determine that the pair of receivers of the stereo information has successfully received the stereo information, and can continue to transmit the next audio data packet.
• the audio source 101 may allocate LT_ADDRs to the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer), such that there is a mapping relationship between the LT_ADDR of the first audio receiving device 102 (participant) and the LT_ADDR of the second audio receiving device 103 (observer). The mapping relationship may indicate that the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) are a pair of receivers of stereo information.
• the audio source 101 can determine, according to the LT_ADDR carried in the ACK/NACK, whether the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) are a pair of audio receiving devices.
• if they are a pair of audio receiving devices and both feed back ACK, the audio source 101 can determine that the pair of receivers of the stereo information has successfully received the stereo information, and can continue to transmit the next audio data packet.
  • the audio source 101 may also determine that the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) are a pair of receivers of stereo information through other methods.
• for example, the ACK/NACK fed back by the participant and the observer may both carry the same field indicating the device name. This application does not limit this.
• the audio communication method described in FIG. 5 is applied to the wireless audio system 20 shown in FIG. 4, and the second audio receiving device 103 (observer) receives the audio data packet transmitted by the audio source 101 by listening, instead of having the first audio receiving device 102 (participant) forward the audio data packet, which can save power consumption of the first audio receiving device 102 (participant).
  • the second audio receiving device 103 (observer) directly feeds back ACK/NACK to the audio source 101 through the atypical wireless link 107, instead of feeding back ACK/NACK via the first audio receiving device 102 (participant), Therefore, the second audio receiving device 103 (observer) and the first audio receiving device 102 (participant) do not need to complete message interaction within a short idle time, which reduces the performance requirements of the chip.
• the ACK/NACK feedback of the second audio receiving device 103 does not depend on the first wireless link 105 between it and the first audio receiving device 102 (participant); even if the first wireless link 105 is disconnected, the second audio receiving device 103 can still feed back ACK/NACK to the audio source 101 normally.
• the electronic device 100 may be implemented as the audio source mentioned in the above embodiments, and may be the audio source 101 in the wireless audio system 10 shown in FIG. 1.
• the electronic device 100 can usually be used as an audio source, such as a mobile phone or a tablet computer, and can transmit audio data to other audio receiving devices (such as headphones, speakers, etc.), so that the other audio receiving devices can convert the audio data into sound.
• the electronic device 100 can also be used as an audio sink to receive audio data transmitted by another device acting as an audio source (such as a headset with a microphone), for example, audio data converted from the user's voice collected by the headset.
  • FIG. 7A shows a schematic structural diagram of the electronic device 100.
• the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, etc.
• the sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
• the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units may be independent devices, or may be integrated in one or more processors.
  • the electronic device 100 may also include one or more processors 110.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of fetching instructions and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
• the memory may store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated access and reduces the waiting time of the processor 110, thereby improving the efficiency of the electronic device 100.
  • the processor 110 may include one or more interfaces.
• the interfaces can include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
• the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, charger, flash, camera 193, etc., respectively through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface, and realize the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 can transfer audio signals to the wireless communication module 160 through the I2S interface, so as to realize the function of answering the call through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to realize the function of answering the phone call through the Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
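The serial/parallel conversion a UART performs can be sketched as framing one parallel byte into a serial 8N1 bit stream and back. This is a minimal model for illustration; real UARTs additionally handle baud-rate timing, optional parity, and error flags.

```python
def uart_frame(byte):
    """Serialize one parallel byte into an 8N1 serial frame: start bit, 8 data bits LSB-first, stop bit."""
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]
    return [0] + data_bits + [1]

def uart_deframe(bits):
    """Convert a received 8N1 frame back into a parallel byte."""
    assert len(bits) == 10 and bits[0] == 0 and bits[9] == 1
    return sum(b << i for i, b in enumerate(bits[1:9]))
```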
  • the MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193.
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI) and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured via software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard specifications, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the electronic device 100.
  • the electronic device 100 may also use different interface connection methods in the foregoing embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single communication frequency band or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna can be used in conjunction with a tuning switch.
  • the mobile communication module 150 may provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 may receive electromagnetic waves from the antenna 1 and filter, amplify, etc. the received electromagnetic waves, and transmit them to a modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor and convert it to electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the mobile communication module 150 may be implemented as one or more transceivers.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs an audio signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, and the like.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 can receive electromagnetic waves via the antenna 2, frequency-modulate and filter the electromagnetic wave signals, and send the processed signals to the processor 110.
  • the wireless communication module 160 may also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic waves through the antenna 2 to radiate it out.
  • the wireless communication module 160 may include a Bluetooth module, a Wi-Fi module, and the like.
  • the wireless communication module 160 may be implemented as one or more transceivers.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long-term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, and the like.
  • the GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 can realize a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute instructions to generate or change display information.
  • the display screen 194 is used for displaying images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), and the like.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, light is transmitted through the lens to the camera's photosensitive element, and the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also run algorithm optimizations for image noise, brightness, and skin tone, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be set in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
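As a minimal sketch of the DSP's format conversion, the fragment below converts a single pixel using the common BT.601 full-range YUV-to-RGB equations. These coefficients are assumed here for illustration; the actual conversion matrix used by the DSP is implementation-specific.

```python
def yuv_to_rgb(y, u, v):
    """Convert one 8-bit YUV pixel (chroma centered at 128) to RGB using BT.601 full-range coefficients."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clip = lambda x: max(0, min(255, int(round(x))))  # keep each channel in 0..255
    return clip(r), clip(g), clip(b)
```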
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the digital signal processor is used to process digital signals. In addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the energy at that frequency point.
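Evaluating the energy at a single frequency point does not require a full FFT; the Goertzel algorithm is one standard way to do it, sketched below. It is shown purely for illustration; the embodiment does not specify which algorithm the DSP uses.

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of `samples` at a single frequency point (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)          # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:                                 # second-order IIR recursion
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # |X[k]|^2 recovered from the final two filter states
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2
```

For a 440 Hz tone sampled at 8 kHz, the power at the 440 Hz bin is orders of magnitude larger than at an unrelated bin, which is the property a frequency-point detector relies on.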
  • the video codec is used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, such as: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent recognition of the electronic device 100, such as image recognition, face recognition, voice recognition, and text understanding.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, photos, videos and other data in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs including instructions.
  • the processor 110 may execute the above-mentioned instructions stored in the internal memory 121, so that the electronic device 100 executes the data sharing method, various functional applications, and data processing provided in some embodiments of the present application.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store the operating system; the storage program area can also store one or more application programs (such as gallery, contacts, etc.) and so on.
  • the storage data area may store data (such as photos, contacts, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and so on.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • the speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • the receiver 170B, also known as the "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be received by bringing the receiver 170B close to the ear.
  • the microphone 170C, also known as a "mic" or "sound transmitter", is used to convert sound signals into electrical signals.
  • the user can input a sound signal into the microphone 170C by speaking with the mouth close to the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C. In addition to collecting sound signals, it may also implement a noise reduction function. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headset interface 170D is used to connect wired headsets.
  • the earphone interface 170D may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • the capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the strength of the pressure according to the change in capacitance.
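The parallel-plate relationship behind this can be sketched with C = ε₀·εᵣ·A/d: pressing reduces the electrode gap d, which increases the capacitance, from which the gap (and hence the applied force) can be inferred. This is an idealized model for illustration; real sensors are calibrated empirically.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Idealized parallel-plate capacitance: C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def gap_from_capacitance(c_farads, area_m2, eps_r=1.0):
    """Invert the plate model: a larger measured C implies a smaller electrode gap (more force)."""
    return EPS0 * eps_r * area_m2 / c_farads
```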
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
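The two-threshold rule for the short-message icon can be sketched as a simple dispatch. The threshold value and instruction names below are hypothetical placeholders, not values from the embodiment.

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # hypothetical normalized intensity threshold

def dispatch_sms_icon_touch(intensity):
    """Map touch intensity on the short-message icon to an instruction per the two-threshold rule."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_short_message"        # light press: view the message
    return "create_new_short_message"      # press >= threshold: create a new message
```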
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for image stabilization.
  • the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to cancel the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude by using the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
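A common way to compute altitude from the measured air pressure is the international barometric formula, sketched below with standard-atmosphere constants. The embodiment may use a different model; this is only an illustrative assumption.

```python
def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """International barometric formula: approximate altitude (metres) from measured pressure (hPa)."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```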
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • when the electronic device 100 is a clamshell device, the electronic device 100 may detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • based on the detected opening or closing state of the holster or clamshell, characteristics such as automatic unlocking of the flip cover can then be set.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of electronic devices, and can be used in horizontal and vertical screen switching, pedometer and other applications.
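As a minimal sketch of posture recognition and portrait/landscape switching from the static gravity vector, the classifier below compares the per-axis components. The threshold and axis conventions are hypothetical illustration choices.

```python
import math

def device_orientation(ax, ay, az):
    """Classify a resting device as flat/portrait/landscape from its gravity vector (m/s^2)."""
    g = math.sqrt(ax * ax + ay * ay + az * az)
    if g == 0.0:
        return "unknown"
    if abs(az) / g > 0.9:          # gravity mostly along z: device lying flat
        return "flat"
    # otherwise compare in-plane components: gravity along y means upright (portrait)
    return "portrait" if abs(ay) >= abs(ax) else "landscape"
```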
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light outward through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100.
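The near/far decision from reflected light is often implemented with two thresholds (hysteresis) so the state does not flicker around a single boundary; a sketch with hypothetical ADC thresholds:

```python
NEAR_THRESHOLD = 60  # hypothetical ADC counts of reflected IR light
FAR_THRESHOLD = 40   # lower release threshold gives hysteresis and prevents flicker

def update_proximity(reflected, currently_near):
    """Return the new near/far state from the reflected-light reading."""
    if currently_near:
        return reflected > FAR_THRESHOLD    # stay "near" until the reflection clearly drops
    return reflected >= NEAR_THRESHOLD      # become "near" only on a strong reflection
```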
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • the proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access to application lock, fingerprint photo taking, fingerprint answering call, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 reduces the performance of a processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In some other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to prevent a low temperature from causing the electronic device 100 to shut down abnormally. In some other embodiments, when the temperature is below still another threshold, the electronic device 100 boosts the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
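The temperature processing strategy can be sketched as threshold checks that map a reported temperature to a set of mitigations. All threshold values and action names below are hypothetical placeholders, not values from the embodiment.

```python
# All thresholds are hypothetical illustration values.
THROTTLE_TEMP_C = 45.0        # above this: reduce performance of the nearby processor
HEAT_BATTERY_TEMP_C = 0.0     # below this: heat the battery 142
BOOST_VOLTAGE_TEMP_C = -10.0  # below this: also boost the battery output voltage

def thermal_actions(temp_c):
    """Return the set of mitigations for a temperature reported by sensor 180J."""
    actions = set()
    if temp_c > THROTTLE_TEMP_C:
        actions.add("reduce_cpu_performance")
    if temp_c < HEAT_BATTERY_TEMP_C:
        actions.add("heat_battery")
    if temp_c < BOOST_VOLTAGE_TEMP_C:
        actions.add("boost_battery_output_voltage")
    return actions
```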
  • the touch sensor 180K can also be called a touch panel or a touch-sensitive surface.
  • the touch sensor 180K may be provided on the display screen 194, and the touch sensor 180K and the display screen 194 constitute a touchscreen, also called a "touch screen".
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human vocal part.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive the blood pressure beating signal.
  • the bone conduction sensor 180M may also be provided in the earphone and combined into a bone conduction earphone.
  • the audio module 170 may parse out the voice signal based on the vibration signal of the vibrating bone block of the voice part acquired by the bone conduction sensor 180M to realize the voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M to implement the heart rate detection function.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • touch operations acting on different areas of the display screen 194 may also correspond to different vibration feedback effects of the motor 191.
  • different application scenarios (for example: time reminders, receiving messages, alarm clock, games, etc.) can also correspond to different vibration feedback effects.
  • Touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging state and changes in battery level, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to realize functions such as call and data communication.
  • the electronic device 100 uses eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the electronic device 100 exemplarily shown in FIG. 7A may display various user interfaces described in the following embodiments through the display screen 194.
  • the electronic device 100 can detect touch operations in each user interface through the touch sensor 180K, for example, a click operation in each user interface (such as a touch operation or a double-click operation on an icon), and gesture operations such as swiping up or down in each user interface, drawing a circle, and so on.
  • the electronic device 100 may detect a motion gesture performed by the user holding the electronic device 100, such as shaking the electronic device, through the gyro sensor 180B, the acceleration sensor 180E, or the like.
  • the electronic device 100 can detect non-touch gesture operations through the camera 193 (eg, 3D camera, depth camera).
  • the software system of the electronic device 100 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • the embodiment of the present invention takes the Android system with a layered architecture as an example to exemplarily explain the software structure of the electronic device 100.
  • FIG. 7B is a block diagram of the software structure of the electronic device 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
  • the application layer may include a series of application packages.
  • the application package may include applications such as games, voice assistants, music players, video players, mailboxes, calls, navigation, and file browsers.
  • the application framework layer provides an application programming interface (API) and programming frameworks for applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes an SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device 100. For example, the management of the call status (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables applications to display notification information in the status bar; it can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the completion of downloading, message reminders, etc.
  • the notification manager can also present notifications in the status bar at the top of the system in the form of a chart or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of a dialog window.
  • For example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.
  • Android Runtime includes core library and virtual machine. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in the virtual machine.
  • the virtual machine converts the Java files of the application layer and the application framework layer into binary files and executes them.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
  • the following describes the workflow of the software and hardware of the electronic device 100, taking a photographing scene as an example.
  • when a touch operation is received, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps and other information of touch operations).
  • the original input event is stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Take, for example, the touch operation being a tap and the corresponding control being the camera application icon.
  • the camera application calls the interface of the application framework layer to start the camera application, which in turn starts the camera driver by calling the kernel layer, and captures a still image or video through the camera 193.
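The event flow just described (touch operation, kernel raw input event, framework control lookup, application start) can be sketched as follows. This is a minimal illustrative sketch, not Android's actual implementation; the function names, the `CONTROLS` lookup table, and the event dictionary fields are all invented for illustration.

```python
import time

def kernel_make_raw_event(x, y):
    # kernel layer: package the touch operation as a raw input event
    # (touch coordinates, timestamp, and other information)
    return {"x": x, "y": y, "timestamp": time.time()}

# framework layer's mapping from touch coordinates to controls (invented)
CONTROLS = {(100, 200): "camera_app_icon"}

def start_camera_app():
    # the camera application calls the framework interface, which would in
    # turn invoke the kernel-layer camera driver to capture a photo or video
    return "camera_started"

def framework_dispatch(raw_event):
    # framework layer: identify the control corresponding to the input event
    control = CONTROLS.get((raw_event["x"], raw_event["y"]))
    if control == "camera_app_icon":
        return start_camera_app()
    return "ignored"

result = framework_dispatch(kernel_make_raw_event(100, 200))
```

A tap at a coordinate with no registered control falls through to `"ignored"`, mirroring the framework discarding events that match no control.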
  • the audio receiving device 300 may be implemented as the first audio receiving device or the second audio receiving device mentioned in the above embodiments, that is, the first audio receiving device 102 or the second audio receiving device 103 in the wireless audio system 20 shown in FIG. 4.
  • the audio receiving device 300 can generally serve as an audio sink, such as headphones or speakers: it can receive audio data transmitted by audio sources (such as mobile phones and tablet computers) and convert the received audio data into sound.
  • the audio receiving device 300 can also serve as an audio source, transmitting audio data to other devices serving as audio sinks (such as a mobile phone), for example, audio data converted from the user's speech collected by the headset.
  • FIG. 8 exemplarily shows a schematic structural diagram of an audio receiving device 300 provided by the present application.
  • the audio receiving device 300 may include a processor 302, a memory 303, a Bluetooth communication processing module 304, a power supply 305, a wear detector 306, a microphone 307, and an electric/acoustic converter 308. These components may be connected via a bus. Among them:
  • the processor 302 may be used to read and execute computer-readable instructions.
  • the processor 302 may mainly include a controller, an arithmetic unit, and a register.
  • the controller is mainly responsible for instruction decoding and issues control signals for the operations corresponding to the instructions.
  • the arithmetic unit is mainly responsible for performing fixed-point or floating-point arithmetic operations, shift operations, and logical operations, and can also perform address operations and conversions.
  • the register is mainly responsible for temporarily saving register operands and intermediate results during instruction execution.
  • the hardware architecture of the processor 302 may be an application-specific integrated circuit (ASIC) architecture, a MIPS architecture, an ARM architecture, an NP architecture, or the like.
  • the processor 302 may be used to parse signals received by the Bluetooth communication processing module 304, such as signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the processor 302 may be used to perform corresponding processing operations according to the analysis result, such as driving the electrical/acoustic converter 308 to start or pause or stop converting audio data into sound, and so on.
  • the processor 302 may also be used to generate signals sent out by the Bluetooth communication processing module 304, such as Bluetooth broadcast signals, beacon signals, and audio data converted from the collected sound.
  • the memory 303 is coupled to the processor 302 and is used to store various software programs and/or multiple sets of instructions.
  • the memory 303 may include a high-speed random access memory, and may also include a non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
  • the memory 303 may store an operating system, such as an embedded operating system such as uCOS, VxWorks, and RTLinux.
  • the memory 303 may also store a communication program that can be used to communicate with the electronic device 100, one or more servers, or additional devices.
  • the Bluetooth (BT) communication processing module 304 can receive signals transmitted by other devices (such as the electronic device 100), such as scan signals, broadcast signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the Bluetooth (BT) communication processing module 304 may also transmit signals, such as broadcast signals, scan signals, signals encapsulated with audio data, content control messages, flow control messages, and so on.
  • the power supply 305 can be used to supply power to the processor 302, the memory 303, the Bluetooth communication processing module 304, the wear detector 306, the electrical/acoustic converter 308, and other internal components.
  • the wear detector 306 may be used to detect the state in which the audio receiving device 300 is worn by the user, such as an unworn state, a worn state, or even a worn-tightly state.
  • the wear detector 306 may be implemented by one or more of a distance sensor, a pressure sensor, and the like.
  • the wear detector 306 can transmit the detected wearing state to the processor 302, so that the processor 302 can power the device on when the audio receiving device 300 is worn by the user and power it off when it is not worn, to save power.
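The wear-detection logic above (classify the state from distance/pressure sensor readings, then power on only when worn) might be sketched like this. The state names and the numeric thresholds are assumptions invented for illustration; the patent does not specify them.

```python
# Illustrative sketch of the wear detector's state classification and the
# processor's power decision described above. All thresholds are invented.

UNWORN, WORN, WORN_TIGHT = "unworn", "worn", "worn_tight"

def classify_wearing_state(distance_mm, pressure_kpa, near_mm=5.0, tight_kpa=2.0):
    """Classify the wearing state from distance- and pressure-sensor readings.

    near_mm / tight_kpa are hypothetical thresholds: if the distance sensor
    sees nothing nearby, the device is unworn; otherwise the pressure sensor
    distinguishes worn from worn-tightly.
    """
    if distance_mm > near_mm:
        return UNWORN
    return WORN_TIGHT if pressure_kpa >= tight_kpa else WORN

def power_decision(wearing_state):
    """Power on when worn (or worn tightly), power off when not worn."""
    return wearing_state in (WORN, WORN_TIGHT)
```

Using one or more sensors and reducing their readings to a small state enum keeps the power-management rule in the processor trivial, which matches the division of labor described above.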
  • the microphone 307 can be used to collect sounds, such as the voice of the user speaking, and can output the collected sounds to the electric/acoustic converter 308, so that the electric/acoustic converter 308 can convert the sound collected by the microphone 307 into audio data.
  • the electric/acoustic converter 308 can be used to convert sound into electrical signals (audio data), for example, convert the sound collected by the microphone 307 into audio data, and can transmit audio data to the processor 302. In this way, the processor 302 can trigger the Bluetooth (BT) communication processing module 304 to transmit the audio data.
  • the electrical/acoustic converter 308 may also be used to convert electrical signals (audio data) into sound, for example, audio data output by the processor 302 into sound.
  • the audio data output by the processor 302 may be received by the Bluetooth (BT) communication processing module 304.
  • the structure illustrated in FIG. 8 does not constitute a specific limitation on the audio receiving device 300.
  • the audio receiving device 300 may include more or fewer components than shown, or combine some components, or split some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • FIG. 9 shows a schematic structural diagram of a wireless audio system provided by an embodiment of the present application.
  • the wireless audio system 50 may include: an electronic device 51, a first audio receiving device 53 and a second audio receiving device 55.
  • the electronic device 51 may be the audio source 101 mentioned in the foregoing various embodiments.
  • the first audio receiving device 53 and the second audio receiving device 55 may be the first audio receiving device 102 (participant) and the second audio receiving device 103 (observer) in the wireless audio system 20 shown in FIG. 4, respectively.
  • the first audio receiving device 53 and the second audio receiving device 55 may be respectively implemented as left and right headphones (or audio receiving devices such as speakers).
  • a first wireless link 52 may be established between the first audio receiving device 53 and the second audio receiving device 55.
  • a second wireless link 54 may be established between the first audio receiving device 53 and the electronic device 51.
  • the first wireless link 52 and the second wireless link 54 may refer to the first wireless link 105 and the second wireless link 106 mentioned in the foregoing embodiments, respectively.
  • No wireless link for transmitting audio data is established between the second audio receiving device 55 and the electronic device 51; instead, the second audio receiving device 55 receives the audio data by listening on the second wireless link 54 to what the electronic device 51 transmits to the first audio receiving device 53.
  • a third wireless link 56 for limited communication may be established between the second audio receiving device 55 and the electronic device 51.
  • the third wireless link 56 may be used only for the second audio receiving device 55 to feed back the ACK/NACK to the electronic device 51 to inform whether the audio data is successfully intercepted.
  • the third wireless link 56 may refer to the third wireless link 107 mentioned in the foregoing embodiments.
  • the electronic device 51 may be used to transmit audio data packets to the first audio receiving device 53 through the second wireless link 54.
  • the first audio receiving device 53 can be used to receive the audio data packet transmitted by the electronic device 51 through the second wireless link 54.
  • the first audio receiving device 53 can also be used to determine whether the audio data packet transmitted by the electronic device 51 is successfully received through the second wireless link 54. If the audio data packet is successfully received, the first audio receiving device 53 feeds back an ACK to the electronic device 51 via the second wireless link 54; otherwise, it feeds back a NACK to the electronic device 51 via the second wireless link 54.
  • the second audio receiving device 55 can be used to listen to the audio data packets transmitted by the electronic device 51 to the first audio receiving device 53 on the second wireless link 54.
  • the second audio receiving device 55 can also be used to determine whether the audio data packet transmitted by the electronic device 51 is successfully intercepted. If it is, the second audio receiving device 55 feeds back an ACK for the audio data packet to the electronic device 51 through the third wireless link 56; otherwise, it feeds back a NACK for the audio data packet through the third wireless link 56.
  • the electronic device 51 may also be used to receive the ACK/NACK fed back by the first audio receiving device 53 via the second wireless link 54 and the ACK/NACK fed back by the second audio receiving device 55 via the third wireless link 56. If both the first audio receiving device 53 and the second audio receiving device 55 feed back an ACK, the electronic device 51 continues to transmit the next audio data packet; otherwise, the electronic device 51 retransmits the audio data packet.
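The source-side rule just described (advance to the next packet only when both receivers ACK, otherwise retransmit) can be sketched as follows. This is an illustrative model, not the patent's implementation: the function names, the callable-based feedback model, and the `max_retries` bound are all assumptions.

```python
# Illustrative sketch of the retransmission rule described above: the
# electronic device keeps sending the same audio data packet until BOTH the
# first device (second wireless link) and the second device (third wireless
# link) acknowledge it. Names and the retry bound are invented.

def transmit_packets(packets, participant_ack, observer_ack, max_retries=8):
    """Send each packet, retransmitting until both receivers ACK it.

    participant_ack / observer_ack model the ACK/NACK feedback on the second
    and third wireless links: each takes a packet id and returns True (ACK)
    or False (NACK). Returns the transmission log as (packet_id, attempt)
    pairs.
    """
    log = []
    for pkt in packets:
        for attempt in range(1, max_retries + 1):
            log.append((pkt, attempt))
            if participant_ack(pkt) and observer_ack(pkt):
                break  # both ACKed: continue with the next audio data packet
            # otherwise the same packet is retransmitted
    return log

# Example: the observer (second device) misses the first copy of packet 1.
observer_responses = {1: [False, True]}  # packet 1: NACK first, then ACK
def observer_ack(pkt):
    queue = observer_responses.get(pkt)
    return queue.pop(0) if queue else True

log = transmit_packets([0, 1, 2], lambda p: True, observer_ack)
# packet 1 goes out twice; packets 0 and 2 once each
```

The point of the sketch is that a NACK from either link forces a retransmission, which is what guarantees the listening device receives the audio data completely even though it has no dedicated audio link.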
  • the electronic device 51, the first audio receiving device 53, and the second audio receiving device 55 may be divided into functional modules according to the above method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated in one processing module.
  • the above integrated modules can be implemented in the form of hardware. It should be noted that the division of modules in this embodiment is schematic and is only a division of logical functions; in actual implementation, there may be another division manner.
  • the electronic device 51 may include a processing module 511 and a communication module 513. Among them:
  • the communication module 513 may be used to transmit audio data packets to the first audio receiving device 53 through the second wireless link 54.
  • the communication module 513 may also be used to receive the ACK/NACK fed back by the first audio receiving device 53 via the second wireless link 54 and the ACK/NACK fed back by the second audio receiving device 55 via the third wireless link 56.
  • the processing module 511 can be used to determine, according to the feedback received by the communication module 513, whether the first audio receiving device 53 and the second audio receiving device 55 have both successfully received the audio data packet; if not, it can be used to retransmit the audio data packet through the communication module 513.
  • the first audio receiving device 53 may include a processing module 531 and a communication module 533. Among them:
  • the communication module 533 may be used to receive the audio data packet transmitted by the electronic device 51 through the second wireless link 54.
  • the processing module 531 may be used to determine whether the audio data packet transmitted by the electronic device 51 is successfully received.
  • the communication module 533 can also be used to feed back an ACK/NACK to the electronic device 51 through the second wireless link 54. Specifically, if the audio data packet is successfully received, the communication module 533 feeds back an ACK to the electronic device 51 through the second wireless link 54; otherwise, it feeds back a NACK through the second wireless link 54.
  • the communication module 533 may also be used to send communication information to the second audio receiving device 55 through the first wireless link 52.
  • the communication information can be used for the second audio receiving device 55 to listen to the audio data packet transmitted by the electronic device 51.
  • For each functional module included in the first audio receiving device 53, reference may also be made to the foregoing method embodiments; details are not described herein again.
  • the second audio receiving device 55 may include a processing module 551 and a communication module 553. Among them:
  • the communication module 553 may be used to listen to the audio data packet transmitted by the electronic device 51 to the first audio receiving device 53 on the second wireless link 54.
  • the processing module 551 may be used to determine whether the audio data packet transmitted by the electronic device 51 is successfully intercepted.
  • the communication module 553 can also be used to feed back an ACK/NACK to the electronic device 51 through the third wireless link 56. Specifically, if the audio data packet is successfully intercepted, the communication module 553 feeds back an ACK to the electronic device 51 through the third wireless link 56; otherwise, it feeds back a NACK through the third wireless link 56.
  • the communication module 553 may also be used to receive, through the first wireless link 52, the communication information sent by the first audio receiving device 53.
  • the communication module 553 is specifically configured to listen to the audio data packet transmitted by the electronic device 51 according to the communication information.
  • For the specific implementation of the communication information, reference may be made to the foregoing method embodiments; details are not described herein again.
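The observer-side behavior above (use the communication information received over the first wireless link 52 to listen on the second wireless link 54, then produce per-packet ACK/NACK feedback) might be sketched like this. Since the patent defers the content of the communication information to the method embodiments, modeling it as the listened link's address is an assumption, as are all field names.

```python
# Illustrative sketch: the second audio receiving device filters the traffic
# it overhears using communication information from the first device, and
# produces ACK/NACK feedback for each intercepted audio data packet.

def listen_on_second_link(overheard_packets, communication_info):
    """Return (packet_id, 'ACK' or 'NACK') feedback for the listened link.

    communication_info is modeled here as carrying the address of the second
    wireless link; its real content is an assumption in this sketch.
    """
    feedback = []
    for pkt in overheard_packets:
        if pkt["link_addr"] != communication_info["link_addr"]:
            continue  # traffic on some other link: not listened to
        ok = pkt["checksum_ok"]  # was the packet intercepted intact?
        feedback.append((pkt["id"], "ACK" if ok else "NACK"))
    return feedback

info = {"link_addr": 0x2A}  # received from the first device over link 52
packets = [
    {"id": 1, "link_addr": 0x2A, "checksum_ok": True},
    {"id": 2, "link_addr": 0x7F, "checksum_ok": True},   # other traffic
    {"id": 3, "link_addr": 0x2A, "checksum_ok": False},  # corrupted copy
]
feedback = listen_on_second_link(packets, info)
```

Packets on other links are silently ignored, while a corrupted copy on the listened link yields a NACK over the third wireless link, which is what triggers the source's retransmission.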
  • the processing module may be a processor or a controller. It can implement or execute various exemplary logical blocks, modules and circuits described in conjunction with the disclosure of the present application.
  • the processor may also be a combination of computing functions, such as a combination of one or more microprocessors, a combination of digital signal processing (DSP) and a microprocessor, and so on.
  • the storage module may be a memory.
  • the communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, or a Wi-Fi chip.
  • When the processing module is a processor and the storage module is a memory, the electronic device 51 involved in this embodiment may be a mobile phone, and the first audio receiving device 53 and the second audio receiving device 55 may be the left and right earphones, respectively.
  • Embodiments of the present application also provide a computer storage medium that stores computer instructions. When the computer instructions run on an electronic device, the electronic device performs the above-mentioned related method steps to implement the audio communication method described in FIG. 5.
  • Embodiments of the present application also provide a computer storage medium that stores computer instructions. When the computer instructions run on the first audio receiving device, the first audio receiving device performs the above related method steps to implement the audio communication method described in FIG. 5.
  • Embodiments of the present application also provide a computer storage medium that stores computer instructions. When the computer instructions run on the second audio receiving device, the second audio receiving device performs the above related method steps to implement the audio communication method described in FIG. 5.
  • An embodiment of the present application also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the audio communication method described in FIG. 5 as performed by the electronic device in the foregoing embodiment.
  • An embodiment of the present application also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the audio communication method described in FIG. 5 as performed by the first audio receiving device in the foregoing embodiment.
  • An embodiment of the present application also provides a computer program product which, when run on a computer, causes the computer to perform the above-mentioned related steps to implement the audio communication method described in FIG. 5 as performed by the second audio receiving device in the foregoing embodiment.
  • the embodiments of the present application also provide an apparatus.
  • the apparatus may specifically be a chip, a component, or a module.
  • the apparatus may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions.
  • the processor may execute the computer-executable instructions stored in the memory to cause the chip to execute the audio communication method described in FIG. 5 as executed by the electronic device in the foregoing method embodiments.
  • An embodiment of the present application further provides an apparatus.
  • the apparatus may specifically be a chip, a component, or a module.
  • the apparatus may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions.
  • When the apparatus is running, the processor may execute the computer-executable instructions stored in the memory to cause the chip to execute the audio communication method described in FIG. 5 as executed by the first audio receiving device in the foregoing method embodiments.
  • An embodiment of the present application further provides an apparatus.
  • the apparatus may specifically be a chip, a component, or a module.
  • the apparatus may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions.
  • the processor may execute the computer-executable instructions stored in the memory to cause the chip to execute the audio communication method described in FIG. 5 as executed by the second audio receiving device in the foregoing method embodiments.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the module or unit is only a division of logical functions.
  • In actual implementation, there may be another division manner; for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product stored in a storage medium. The software product includes several instructions to enable a device (such as a single-chip microcomputer or a chip) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage media include various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention relates to a wireless audio system and an audio communication method and device. A bidirectional wireless link may be established between a first audio receiving device (for example, a left earphone) and an electronic device (for example, a mobile phone). The wireless link may be used by the electronic device to transmit audio data to the first audio receiving device, and may be used by the first audio receiving device to feed back an ACK to the electronic device when the audio data is successfully received. An atypical wireless link may be established between a second audio receiving device (for example, a right earphone) and the electronic device. That wireless link may be used by the second audio receiving device only to feed back an ACK to the electronic device when the audio data is successfully listened to. The solution can ensure that the second audio receiving device (for example, a right earphone) completely receives the audio data while saving power consumption of the first audio receiving device (for example, a left earphone).
PCT/CN2018/126065 2018-12-31 2018-12-31 Système audio sans fil et procédé et dispositif de communication audio WO2020140186A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880100565.3A CN113678481B (zh) 2018-12-31 2018-12-31 无线音频系统、音频通讯方法及设备
PCT/CN2018/126065 WO2020140186A1 (fr) 2018-12-31 2018-12-31 Système audio sans fil et procédé et dispositif de communication audio

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/126065 WO2020140186A1 (fr) 2018-12-31 2018-12-31 Système audio sans fil et procédé et dispositif de communication audio

Publications (1)

Publication Number Publication Date
WO2020140186A1 true WO2020140186A1 (fr) 2020-07-09

Family

ID=71406503

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/126065 WO2020140186A1 (fr) 2018-12-31 2018-12-31 Système audio sans fil et procédé et dispositif de communication audio

Country Status (2)

Country Link
CN (1) CN113678481B (fr)
WO (1) WO2020140186A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079537A (zh) * 2021-07-22 2022-02-22 珠海市杰理科技股份有限公司 音频丢包数据接收方法、装置、音频播放设备及系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180084456A1 (en) * 2016-09-21 2018-03-22 Apple Inc. Real-time Relay of Wireless Communications
CN108337074A (zh) * 2018-06-22 2018-07-27 恒玄科技(上海)有限公司 高可靠性的蓝牙耳机无线通信方法
CN108419228A (zh) * 2018-02-09 2018-08-17 恒玄科技(上海)有限公司 一种适于蓝牙耳机的无线通信方法
US10104474B2 (en) * 2010-09-02 2018-10-16 Apple Inc. Un-tethered wireless audio system
US10110984B2 (en) * 2014-04-21 2018-10-23 Apple Inc. Wireless earphone
CN108901004A (zh) * 2018-08-08 2018-11-27 易兆微电子(杭州)有限公司 一种同步传输蓝牙耳机的方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10104474B2 (en) * 2010-09-02 2018-10-16 Apple Inc. Un-tethered wireless audio system
US10110984B2 (en) * 2014-04-21 2018-10-23 Apple Inc. Wireless earphone
US20180084456A1 (en) * 2016-09-21 2018-03-22 Apple Inc. Real-time Relay of Wireless Communications
CN108419228A (zh) * 2018-02-09 2018-08-17 恒玄科技(上海)有限公司 一种适于蓝牙耳机的无线通信方法
CN108337074A (zh) * 2018-06-22 2018-07-27 恒玄科技(上海)有限公司 高可靠性的蓝牙耳机无线通信方法
CN108901004A (zh) * 2018-08-08 2018-11-27 易兆微电子(杭州)有限公司 一种同步传输蓝牙耳机的方法

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114079537A (zh) * 2021-07-22 2022-02-22 珠海市杰理科技股份有限公司 音频丢包数据接收方法、装置、音频播放设备及系统
CN114079537B (zh) * 2021-07-22 2023-09-12 珠海市杰理科技股份有限公司 音频丢包数据接收方法、装置、音频播放设备及系统

Also Published As

Publication number Publication date
CN113678481A (zh) 2021-11-19
CN113678481B (zh) 2023-12-15

Similar Documents

Publication Publication Date Title
WO2020133183A1 (fr) Dispositif et procédé de synchronisation de données audio
CN113169915B (zh) 无线音频系统、音频通讯方法及设备
WO2021027666A1 (fr) Procédé de reconnexion bluetooth et appareil associé
EP3846402B1 (fr) Procédé de communication vocale, dispositif électronique et système
WO2021185141A1 (fr) Procédé et système d'établissement de liaison sensible au wi-fi, dispositif électronique et support de stockage
WO2022033320A1 (fr) Procédé de communication bluetooth, équipement terminal et support d'enregistrement lisible par ordinateur
CN112119641B (zh) 通过转发模式连接的多tws耳机实现自动翻译的方法及装置
WO2021129521A1 (fr) Procédé et appareil de communication bluetooth
WO2021000817A1 (fr) Procédé et dispositif de traitement de son ambiant
WO2021031865A1 (fr) Procédé et appareil d'appel
WO2022042637A1 (fr) Procédé de transmission de données à base de bluetooth et appareil associé
WO2020019533A1 (fr) Procédé de transmission de données et dispositif électronique
WO2022262492A1 (fr) Procédé et appareil de téléchargement de données, et dispositif terminal
WO2022257563A1 (fr) Procédé de réglage de volume, et dispositif électronique et système
WO2020134868A1 (fr) Procédé d'établissement de connexion, et appareil terminal
WO2022170856A1 (fr) Procédé d'établissement de connexion et dispositif électronique
WO2021043250A1 (fr) Procédé de communication bluetooth, et dispositif associé
CN113132959B (zh) 无线音频系统、无线通讯方法及设备
WO2020140186A1 (fr) Système audio sans fil et procédé et dispositif de communication audio
EP4280596A1 (fr) Procédé d'appel vidéo et dispositif associé
WO2022161006A1 (fr) Procédé et appareil de synthèse de photographie, et dispositif électronique et support de stockage lisible
WO2021129453A1 (fr) Procédé de capture d'écran et dispositif associé
WO2022267917A1 (fr) Procédé et système de communication bluetooth
WO2024001773A1 (fr) Procédé de migration de données, dispositif électronique et système de mise en réseau
WO2024093614A1 (fr) Procédé et système d'entrée de dispositif, dispositif électronique et support de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18945391

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18945391

Country of ref document: EP

Kind code of ref document: A1