CN115022766B - Audio data playing method and equipment - Google Patents

Audio data playing method and equipment

Info

Publication number
CN115022766B
CN115022766B (application number CN202210856413.4A)
Authority
CN
China
Prior art keywords
audio data
electronic device
mute
playing
wireless headset
Prior art date
Legal status
Active
Application number
CN202210856413.4A
Other languages
Chinese (zh)
Other versions
CN115022766A (en)
Inventor
赵成
王家天
江思源
李焯
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202210856413.4A
Publication of CN115022766A
Application granted
Publication of CN115022766B
Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1041 Mechanical or electronic switches, or control elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/60 Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033 Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041 Portable telephones adapted for handsfree use
    • H04M1/6058 Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066 Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 Stereophonic arrangements
    • H04R5/033 Headphones for stereophonic communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2420/00 Details of connection covered by H04R, not provided for in its groups
    • H04R2420/07 Applications of wireless loudspeakers or wireless microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2430/00 Signal processing covered by H04R, not provided for in its groups
    • H04R2430/01 Aspects of volume control, not necessarily automatic, in sound systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Telephone Function (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The application provides an audio data playing method and device, addressing the problem in the related art that, when the audio data currently played by a wireless headset is mute data, headset resources are easily wasted and the user experience is degraded. In the method, a first electronic device maintains a mute flag while it is in the playing state, and the method includes the following steps: the first electronic device receives and plays first audio data from a second electronic device; while playing the first audio data, if the first electronic device determines that the first audio data is mute data, it sets the mute flag to a first flag. The first electronic device receives a first play request from a third electronic device, the first play request requesting that the first electronic device play audio data from the third electronic device. In response to the first play request, the first electronic device reads the mute flag; if the mute flag is the first flag, the first electronic device receives and plays second audio data from the third electronic device.

Description

Audio data playing method and equipment
Technical Field
The present application relates to the field of communications technologies, and in particular, to an audio data playing method and device.
Background
With the development of wireless headset technology, a single wireless headset can support simultaneous connections to two or more electronic devices. After the wireless headset is connected to an electronic device, it can receive and play audio data from that electronic device.
When the wireless headset is simultaneously connected to two or more electronic devices and is playing audio data from one of them (e.g., electronic device 1), it usually cannot play audio data from another (e.g., electronic device 2). In some cases, however, the audio data from electronic device 1 that the wireless headset is currently playing is mute data. Headset resources are then easily wasted: from the user's perspective the headset produces no sound, yet it cannot be used to play audio data from electronic device 2, which degrades the user experience.
Disclosure of Invention
The embodiments of the present application provide an audio data playing method and device, which address the problem that when the audio data currently played by a wireless headset is mute data, headset resources are easily wasted and the user experience is degraded.
In order to achieve the above purpose, the embodiment of the present application adopts the following technical solutions:
in a first aspect, an audio data playing method is provided. The method is applied to a first electronic device that maintains a mute flag while it is in the playing state, the mute flag indicating whether the audio data currently played by the first electronic device is mute data. The first electronic device establishes wireless communication connections with a second electronic device and a third electronic device, respectively. The method includes the following steps:
the first electronic device receives and plays first audio data from the second electronic device; while playing the first audio data, if the first electronic device determines that the first audio data is mute data, it sets the mute flag to a first flag. The first electronic device receives a first play request from the third electronic device, the first play request requesting that the first electronic device play audio data from the third electronic device. In response to the first play request, the first electronic device reads the mute flag; if the mute flag is the first flag, the first electronic device receives and plays second audio data from the third electronic device.
In this way, when the first electronic device receives a play request from the third electronic device, it can still receive and play audio data from the third electronic device even though it is in the playing state, because what it is currently playing is mute data. This reduces the waste of headset resources and improves the user experience.
In one possible implementation, in response to the first play request, the first electronic device first determines whether it is in the playing state and only then reads its mute flag. If, when receiving the first play request, the first electronic device determines that it is in the idle (not playing) state, it may directly receive and play the audio data from the third electronic device. This improves how quickly the first electronic device responds to play requests.
In one possible implementation, the method further includes: while playing the first audio data, if the first electronic device determines that the first audio data is not mute data, it sets the mute flag to a second flag, the second flag indicating that the first audio data is not mute data.
In one possible implementation, after the first electronic device reads the mute flag in response to the first play request, the method further includes: if the mute flag is the second flag, the first electronic device sends a first play response to the third electronic device, the first play response indicating that the first electronic device refuses to play the audio data from the third electronic device. A mute flag equal to the second flag means that the first electronic device is currently playing non-mute data, so it no longer allows audio data from other devices to be played. This prevents the first electronic device's playback from being preempted by other audio data and improves the user experience.
In one possible implementation, if the mute flag is the first flag, then in addition to receiving and playing the second audio data from the third electronic device, the first electronic device stops playing the first audio data and sends a first pause-playing instruction to the second electronic device. The first pause-playing instruction instructs the second electronic device to pause playing the audio data. This prevents the first electronic device from playing the first audio data and the second audio data at the same time: even if a later portion of the first audio data is not silent, it does not affect the playback of the second audio data, and the noise that could result from mixing the two streams is avoided.
In one possible implementation, the first electronic device determining that the first audio data is mute data may specifically include: the first electronic device periodically acquires the PCM values of the first audio data currently being played; if the average of the PCM values acquired within a first preset duration is smaller than a first preset value, the first electronic device determines that the first audio data is mute data. This reduces the chance that audio data that is silent only briefly is classified as mute data, and thus reduces detection errors.
In one possible implementation, the first electronic device periodically acquires the PCM values of the first audio data currently being played; if the average of the PCM values acquired within a second preset duration is smaller than a second preset value, the first electronic device sends a second pause-playing instruction to the second electronic device, instructing it to pause playing the audio data. In other words, if the played audio data remains silent for long enough, the first electronic device automatically stops playing it. This prevents mute audio data from occupying the first electronic device, avoids wasted resources, and saves power on the first electronic device.
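A sketch of this automatic-pause behavior, reusing the same periodic PCM averaging, is shown below. The second preset value, second preset duration, and check period are assumed example numbers, not values given in the patent.

```python
SECOND_PRESET_VALUE = 50       # average-amplitude threshold for a silent window (assumed)
SECOND_PRESET_DURATION = 30.0  # seconds of silence before auto-pausing the source (assumed)
CHECK_PERIOD = 0.1             # seconds between PCM acquisitions (assumed)

def should_send_pause(silent_seconds: float) -> bool:
    """True once the played audio has stayed silent for the second preset duration."""
    return silent_seconds >= SECOND_PRESET_DURATION

silent_seconds = 0.0
for avg_pcm in [0.0] * 400:  # simulated per-window PCM averages of a silent stream
    silent_seconds = silent_seconds + CHECK_PERIOD if avg_pcm < SECOND_PRESET_VALUE else 0.0
    if should_send_pause(silent_seconds):
        print("send second pause-playing instruction to the second electronic device")
        break
```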
In one possible implementation, if the mute flag is the first flag, then before receiving and playing the second audio data from the third electronic device, the first electronic device sends a second play response to the third electronic device, the second play response indicating that the first electronic device can play the audio data from the third electronic device. After receiving the second play response, the third electronic device may send the second audio data to the first electronic device.
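To make the flow of the first aspect easier to follow, the following is a minimal sketch, in Python, of how a first electronic device could combine the play-state check, the mute flag, the play responses, and the pause-playing instruction described above. The class and method names (Headset, handle_play_request, and so on) are illustrative assumptions, not part of the patent.

```python
from enum import Enum

class MuteFlag(Enum):
    FIRST = 1   # currently played audio data is mute data
    SECOND = 2  # currently played audio data is not mute data

class Headset:
    """Hypothetical model of the first electronic device (e.g. a wireless headset)."""

    def __init__(self):
        self.playing = False     # play state: True while audio is being played
        self.mute_flag = None    # only meaningful while playing
        self.source = None       # device whose audio is currently played

    def handle_play_request(self, requester):
        """Decide how to answer a play request from a connected device."""
        if not self.playing:
            # Idle: accept directly, no need to read the mute flag.
            return self._accept(requester)
        if self.mute_flag is MuteFlag.FIRST:
            # Currently playing mute data: tell the old source to pause,
            # then accept audio from the requester.
            self._send_pause_instruction(self.source)
            return self._accept(requester)
        # Playing non-mute data: refuse, so current playback is not preempted.
        return {"to": requester, "response": "refuse"}

    def _accept(self, requester):
        self.playing = True
        self.source = requester
        self.mute_flag = MuteFlag.SECOND  # one possible default until silence detection runs
        return {"to": requester, "response": "can play"}

    def _send_pause_instruction(self, device):
        print(f"pause-playing instruction -> {device}")

headset = Headset()
print(headset.handle_play_request("second electronic device"))  # accepted while idle
headset.mute_flag = MuteFlag.FIRST                               # silence detection flags mute data
print(headset.handle_play_request("third electronic device"))   # accepted, old source is paused
```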
In a second aspect, an electronic device is provided, comprising: a processor and a memory; the memory is configured to store computer executable instructions, and when the electronic device is running, the processor executes the computer executable instructions stored in the memory, so as to enable the electronic device to execute the audio data playing method according to any one of the above first aspects.
In a third aspect, a computer-readable storage medium is provided, which stores instructions that, when executed on a computer, enable the computer to execute the audio data playing method of any one of the above first aspects.
In a fourth aspect, a computer program product containing instructions is provided, which when run on an electronic device, enables the electronic device to perform the audio data playback method of any one of the above first aspects.
In a fifth aspect, an apparatus is provided (for example, the apparatus may be a chip system) that includes a processor configured to enable an electronic device to implement the functions referred to in the first aspect, such as receiving and playing first audio data from a second electronic device, determining whether the first audio data is mute data, and setting a mute flag. In one possible design, the apparatus further includes a memory for storing the program instructions and data necessary for the electronic device. When the apparatus is a chip system, it may consist of a chip, or it may include a chip and other discrete devices.
For the technical effects of any design in the second to fifth aspects, refer to the technical effects of the corresponding designs in the first aspect; details are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an audio data playing method in the related art according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a TWS headset according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of an audio data playing method according to an embodiment of the present application;
fig. 6 is a schematic flowchart of another audio data playing method according to an embodiment of the present application;
fig. 7 is a schematic flowchart of another audio data playing method according to an embodiment of the present application;
fig. 8 is a schematic flowchart of another audio data playing method according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a chip system according to an embodiment of the present disclosure.
Detailed Description
The application provides an audio data playing method that can be applied to a first electronic device. The first electronic device can establish wireless communication connections with two or more other electronic devices (such as a second electronic device and a third electronic device) at the same time, and can receive and play audio data from the second electronic device or the third electronic device.
For example, the first electronic device may specifically be an electronic device that supports receiving and playing audio data from other electronic devices, such as a wireless headset or a smart speaker. The second electronic device and the third electronic device may each be a desktop computer, a laptop, a handheld computer, a notebook, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, a media player, a television, or the like.
Taking as an example the case where the first electronic device is the wireless headset 1 shown in fig. 1, the second electronic device is the tablet computer 2 shown in fig. 1, and the third electronic device is the mobile phone 3 shown in fig. 1, wireless communication connections are established between the wireless headset 1 and the tablet computer 2 and between the wireless headset 1 and the mobile phone 3. Through these wireless communication connections, the wireless headset 1 can receive and play audio data sent by the tablet computer 2 or the mobile phone 3.
As shown in fig. 2, after the wireless headset 1 receives a play request from the tablet computer 2, the wireless headset 1 determines its current play state. If the wireless headset 1 is currently in the idle (not playing) state, it sends a play response a to the tablet computer 2, the play response indicating that the wireless headset 1 can currently play the audio data from the tablet computer 2. After receiving play response a, the tablet computer 2 sends audio data A to the wireless headset 1. The wireless headset 1 receives and plays audio data A from the tablet computer 2.
The wireless headset 1 then receives a play request from the mobile phone 3 and determines that it is currently in the playing state (it is playing audio data A from the tablet computer 2). Because the wireless headset 1 is currently playing, it cannot play audio data from another electronic device (such as the mobile phone 3). The wireless headset 1 therefore sends a play response b to the mobile phone 3, the play response b indicating that the wireless headset 1 currently refuses to play the audio data from the mobile phone 3.
However, in the scenario shown in fig. 2, the following may occur: audio data A from the tablet computer 2, currently played by the wireless headset 1, is mute data. That is, the wireless headset 1 is occupied by the tablet computer 2, yet the user hears no sound through it. Headset resources are thus easily wasted, and for the user the wireless headset 1 plays no sound but still cannot be used to play audio data from the mobile phone 3, which degrades the user experience.
The present application therefore provides an audio data playing method in which, when the wireless headset 1 is in the playing state, the wireless headset 1 maintains a mute flag indicating whether audio data A that it is currently playing is mute data.
In the embodiment of the present application, while the wireless headset 1 is playing audio data A from the tablet computer 2 and receives a play request from the mobile phone 3, the wireless headset 1 reads the mute flag and decides, according to the mute flag, whether to allow audio data B from the mobile phone 3 to be received and played. Specifically, if the wireless headset 1 determines from the mute flag that the currently played audio data A is mute data, the wireless headset 1 allows the audio data from the mobile phone 3 to be played. The wireless headset 1 then sends a play response c to the mobile phone 3, the play response c indicating that the wireless headset 1 can currently play the audio data from the mobile phone 3. After receiving play response c, the mobile phone 3 sends audio data B to the wireless headset 1, and the wireless headset 1 receives and plays it.
In this way, when the wireless headset 1 receives the play request from the mobile phone 3, it can still receive and play audio data from the mobile phone 3 even though it is in the playing state, because what it is currently playing is mute data. This reduces the waste of headset resources and improves the user experience.
The wireless headset 1 may be, for example, a True Wireless Stereo (TWS) headset.
Please refer to fig. 3, which is a schematic structural diagram of an earplug (a left earplug or a right earplug) of a TWS headset 300 according to an embodiment of the present application. As shown in fig. 3, the earplugs of the TWS headset 300 may include: a processor 301, a memory 302, a sensor 303, a wireless communication module 304, at least one receiver 305, at least one microphone 306, a power supply 307, and an input/output interface 308.
The memory 302 may be used for storing application program codes, such as application program codes for establishing a bluetooth connection with another earpiece of the TWS headset 300 and for enabling the earpieces to pair with the tablet 2 and the mobile phone 3. The processor 301 may control the execution of the above-mentioned application program codes to realize the functions of the earpieces of the TWS headset 300 in the embodiment of the present application.
The memory 302 may also store a Bluetooth address used to uniquely identify the earplug, as well as the Bluetooth address of the other earplug of the TWS headset 300. In addition, the memory 302 may store connection data for electronic devices with which the earplug has previously been successfully paired, for example the Bluetooth address of such an electronic device. Based on this connection data, the earplug can automatically reconnect to the electronic device without the connection having to be configured again, for example without repeating legitimacy verification. The Bluetooth address may be a media access control (MAC) address.
The sensor 303 may be a distance sensor or a proximity light sensor. The processor 301 of the earplug may use the sensor 303 to determine whether the earplug is being worn by the user. For example, the processor 301 may use a proximity light sensor to detect whether an object is near the earplug, and thereby determine whether the earplug is worn. Upon determining that the earplug is worn, the processor 301 may turn on the receiver 305. In some embodiments, the earplug may further include a bone conduction sensor, forming a bone conduction headset; the bone conduction sensor can acquire the vibration signal of the vibrating bone of the vocal part, from which the processor 301 parses a voice signal and implements the control function corresponding to that voice signal. In other embodiments, the earplug may further include a touch sensor or a pressure sensor for detecting the user's touch and press operations, respectively. In other embodiments, the earplug may further include a fingerprint sensor for detecting the user's fingerprint, identifying the user's identity, and so on. In other embodiments, the earplug may further include an ambient light sensor, and the processor 301 may adaptively adjust parameters such as volume according to the brightness of the ambient light sensed by the ambient light sensor.
A wireless communication module 304, configured to support short-distance data interaction between the left and right earpieces of the TWS headset 300 and between the earpieces and various electronic devices, such as the tablet pc 2 and the mobile phone 3. In some embodiments, the wireless communication module 304 may be a bluetooth transceiver. The earplugs of the TWS headset 300 can establish bluetooth connection with the tablet pc 2 and the mobile phone 3 through the bluetooth transceiver to realize short-distance data interaction therebetween.
The receiver 305, which may also be referred to as an "earpiece", may be used to convert audio electrical signals into sound signals and play them. For example, when the earplugs of the TWS headset 300 are used as the audio output device of the tablet computer 2 or the mobile phone 3, the receiver 305 can convert the received audio electrical signals into sound signals and play them.
The microphone 306, which may also be referred to as a "mic", is used to convert sound signals into audio electrical signals. For example, when the earplug of the TWS headset 300 is used as the audio input device of the tablet computer 2 or the mobile phone 3, the microphone 306 can collect the user's voice signal while the user speaks (e.g., during a call or when sending a voice message) and convert it into an audio electrical signal. This audio electrical signal is the audio data in the embodiment of the present application.
A power supply 307 may be used to supply power to the various components contained in the earplugs of the TWS headset 300. In some embodiments, the power source 307 may be a battery, such as a rechargeable battery.
The input/output interface 308 may be used to provide a wired connection between the earplug and the headset case of the TWS headset 300. In some embodiments, the input/output interface 308 may be an electrical connector. For example, when the earplugs of the TWS headset 300 are placed in the cavity 301-1 of the headset case, they may be electrically connected to the headset case through the electrical connector. After this electrical connection is established, the headset case may charge the power supply 307 of the earplugs, and the earplugs may also communicate data with the headset case. For example, the processor 301 of the earplugs may receive a pairing command from the headset case through the electrical connection; the pairing command instructs the processor 301 to turn on the wireless communication module 304 so that the earplugs can be paired with the tablet computer 2 and the mobile phone 3 using a corresponding wireless communication protocol (e.g., Bluetooth).
Of course, the earplugs of the TWS headset 300 described above may also not include an input/output interface 308. In this case, the ear plug may implement a charging or data communication function based on the bluetooth connection established with the earphone box through the above-described wireless communication module 304.
Additionally, in some embodiments, the headset case may further include a processor, a memory, and the like. The memory may be used to store application program code whose execution is controlled by the processor of the headset case to implement the functions of the headset case. For example, when the user opens the lid of the headset case, the processor of the headset case may, by executing application program code stored in the memory, send a pairing command or the like to the earplugs of the TWS headset 300 in response to the lid being opened.
It is to be understood that the structure illustrated in the embodiments of the present application does not constitute a specific limitation on the earplug of the TWS headset 300. It may have more or fewer components than shown in fig. 3, may combine two or more components, or may have a different arrangement of components. For example, the earplug may further include an indicator light (which may indicate the status of the earplug, such as its battery level), a dust screen (which may be used with the receiver), and the like. The various components shown in fig. 3 may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing or application specific integrated circuits.
It should be noted that the left and right earplugs of the TWS headset 300 may be identical in structure. For example, the left and right earplugs of the TWS headset 300 may both include the components shown in fig. 3. Alternatively, the left and right earplugs of the TWS headset 300 may differ in structure. For example, one earplug (e.g., the right earplug) of the TWS headset 300 may include the components shown in fig. 3, while the other earplug (e.g., the left earplug) may include the components shown in fig. 3 other than the microphone 306.
In some embodiments, fig. 4 is a schematic structural diagram of the electronic device 400. As shown in fig. 4, the electronic device 400 may include a processor 410, a memory 420, a Universal Serial Bus (USB) interface 430, a charge management module 440, a battery 441, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a sensor module 480, and a display screen 490.
Illustratively, the electronic device may be the tablet computer 2 or the mobile phone 3 shown in fig. 1. It is to be understood that the illustrated structure of the embodiment of the invention is not to be construed as a specific limitation to the electronic device 400. In other embodiments of the present application, electronic device 400 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Processor 410 may include one or more processing units, such as: the processor 410 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a controller, a memory, a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural-Network Processing Unit (NPU), etc. The different processing units may be separate devices or may be integrated into one or more processors. For example, in the embodiment of the present application, the processor 410 may be an application processor AP.
The controller may be, among other things, a neural center and a command center of the electronic device 400. The controller can generate an operation control signal according to the instruction operation code and the timing signal to complete the control of instruction fetching and instruction execution.
A memory may also be provided in processor 410 for storing instructions and data. In some embodiments, the memory in the processor 410 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 410. If the processor 410 needs to reuse the instruction or data, it can be called directly from memory. Avoiding repeated accesses reduces the latency of the processor 410, thereby increasing the efficiency of the system.
In some embodiments, processor 410 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface, and/or a USB interface, etc.
The USB interface 430 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 430 may be used to connect a charger to charge the electronic device 400, and may also be used to transmit data between the electronic device 400 and a peripheral device. It may also be used to connect a headset and play audio through the headset. The interface may further be used to connect other electronic devices, such as AR devices.
It should be understood that the connection relationship between the modules according to the embodiment of the present invention is only illustrative, and is not limited to the structure of the electronic device 400. In other embodiments of the present application, the electronic device 400 may also adopt different interface connection manners or a combination of multiple interface connection manners in the above embodiments.
The charging management module 440 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some embodiments, electronic device 400 may support wired charging. Specifically, the charging management module 440 may receive a charging input of the wired charger through the USB interface 430. In other embodiments, electronic device 400 may support wireless charging.
The charging management module 440 may also supply power to the electronic device 400 while charging the battery 441. The charging management module 440 receives input from the battery 441 and provides power to the processor 410, the memory 420, the display 490, and the wireless communication module 460. The charge management module 440 may also be used to monitor parameters such as battery capacity, battery cycle number, and battery state of health (leakage, impedance) of the battery 441. In some other embodiments, the charging management module 440 may also be disposed in the processor 410.
The wireless communication function of the electronic device 400 may be implemented by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor, the baseband processor, and the like.
The antennas 1 and 2 are used for transmitting and receiving electromagnetic wave signals. Each antenna in electronic device 400 may be used to cover a single or multiple communication bands. Different antennas can also be multiplexed to improve the utilization of the antennas. For example: the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 450 may provide a solution including 2G/3G/4G/5G wireless communication applied on the electronic device 400. The mobile communication module 450 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 450 may receive the electromagnetic wave from the antenna 1, and filter, amplify, etc. the received electromagnetic wave, and transmit the electromagnetic wave to the modem processor for demodulation. The mobile communication module 450 can also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 1 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 450 may be disposed in the processor 410. In some embodiments, at least some of the functional blocks of the mobile communication module 450 may be disposed in the same device as at least some of the blocks of the processor 410.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then transferred to the application processor. The application processor outputs a sound signal through an audio device or displays an image or video through the display screen 490. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be separate from the processor 410, and may be located in the same device as the mobile communication module 450 or other functional modules. In some embodiments, the modem processor is further configured to limit the transmit power of the electronic device according to a maximum transmit power of the electronic device; in particular the modem processor controls the antenna 1 to transmit according to the maximum transmit power of the electronic device. The modem processor is a modem in the embodiment of the present application.
The wireless communication module 460 may provide a solution for wireless communication applied to the electronic device 400, including Wireless Local Area Networks (WLANs) (e.g., wireless fidelity (Wi-Fi) networks), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), NFC, Infrared (IR), and the like. In some embodiments, antenna 1 of the electronic device 400 is coupled to the mobile communication module 450 and antenna 2 is coupled to the wireless communication module 460, such that the electronic device 400 may communicate with networks and other devices through wireless communication technologies.
The electronic device 400 implements display functions via the GPU, the display screen 490, and the application processor, among other things. The GPU is a microprocessor for image processing, and is connected to the display screen 490 and an application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 410 may include one or more GPUs that execute program instructions to generate or alter display information.
The display screen 490 is used to display images, video, etc. The display screen 490 includes a display panel. In some embodiments, the electronic device 400 may include 1 or N display screens 490, N being a positive integer greater than 1.
The electronic device 400 may implement a shooting function via the ISP, the camera, the video codec, the GPU, the display screen 490, the application processor, and the like. And the ISP is used for processing data fed back by the camera. In some embodiments, the ISP may be provided in a camera. The camera is used to capture still images or video. In some embodiments, electronic device 400 may include 1 or N cameras, N being a positive integer greater than 1.
Memory 420 may be used to store computer-executable program code, which includes instructions. The processor 410 executes various functional applications of the electronic device 400 and data processing by executing instructions stored in the memory 420. Further, the memory 420 may include a high-speed random access memory, and may further include a nonvolatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (UFS), and the like.
The electronic device 400 may implement audio functions through the audio module 470, as well as an application processor, etc. Such as music playing, recording, etc. The audio module 470 is used to convert digital audio information into an analog audio signal output and also used to convert an analog audio input into a digital audio signal. In some embodiments, the audio module 470 may be disposed in the processor 410, or some functional modules of the audio module 470 may be disposed in the processor 410.
The methods in the following embodiments may be implemented in the wireless headset 1 having the hardware structure shown in fig. 3.
The wireless headset 1 can establish wireless communication connections with two or more electronic devices (such as the tablet computer 2 and the mobile phone 3) at the same time. Illustratively, the wireless communication connection may be a Bluetooth connection. For the specific process of establishing the Bluetooth connections between the wireless headset 1 and the tablet computer 2 and between the wireless headset 1 and the mobile phone 3, reference may be made to descriptions in the related art, which are not detailed in the embodiments of the present application.
After the wireless headset 1 has established Bluetooth connections with the tablet computer 2 and the mobile phone 3, respectively, it can receive and play audio data from the tablet computer 2 or the mobile phone 3 whenever it is in the idle (not playing) state.
After the wireless headset 1 receives and starts playing audio data from the tablet computer 2 or the mobile phone 3, it may determine whether the currently played audio data is mute data and set a mute flag in the wireless headset 1 according to the result. The mute flag indicates whether the audio data currently played by the wireless headset 1 is mute data. In some embodiments, the mute flag is set to a first flag to indicate that the currently played audio data is mute data, and to a second flag to indicate that it is not.
Technical terms that may be involved in the present application are explained below.
Pulse code modulation (PCM) is a method of digitally representing a sampled analog signal. Its main process is to sample an analog signal, such as voice or image, at regular intervals so as to discretize it, quantize each sampled value to a discrete level, and then represent the amplitude of each sampled pulse with a group of binary codes.
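As a small illustration of the sampling and quantization just described (a sketch only; the 16 kHz sampling rate, 16-bit depth, and 440 Hz test tone are assumed values, not taken from the patent), the following Python snippet turns an analog tone into PCM values:

```python
import math

SAMPLE_RATE = 16000   # samples per second (assumed)
BIT_DEPTH = 16        # signed 16-bit quantization (assumed)
FULL_SCALE = 2 ** (BIT_DEPTH - 1) - 1  # 32767

def pcm_encode(duration_s=0.001, freq_hz=440.0, amplitude=0.5):
    """Sample a sine tone at regular intervals and quantize each sample
    to a signed 16-bit integer, i.e. produce PCM values."""
    samples = []
    n = int(SAMPLE_RATE * duration_s)
    for i in range(n):
        analog = amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        samples.append(int(round(analog * FULL_SCALE)))  # quantize to a discrete level
    return samples

print(pcm_encode()[:8])  # first few PCM values of the tone
```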
The Advanced Audio Distribution Profile (A2DP), a protocol for high-quality audio data transmission, defines the protocol and procedures for transmitting high-quality audio information such as mono or stereo audio. A2DP defines two roles: the audio source (Audio Source), which encodes the audio data at the audio input end and transmits it to the audio receiving end, and the audio sink (Audio Sink), which receives the audio data and decodes it to restore the audio. In the embodiment of the present application, the tablet computer 2 or the mobile phone 3 is the audio source, and the wireless headset 1 is the audio sink.
Web Real-Time Communication (WebRTC) is an interface that enables a web browser to conduct real-time voice or video conversations.
Voice over Internet Protocol (VoIP) is a voice call technology that carries voice calls and multimedia conferences over the Internet Protocol (IP), that is, it communicates via the Internet.
Fig. 5 is a schematic flow chart of an audio data playing method according to an embodiment of the present application. Specifically, the method includes steps S501 to S513. Wherein:
S501, the tablet computer 2 sends a second play request to the wireless headset 1.
The second play request is used to request that the wireless headset 1 play the first audio data from the tablet computer 2. When the tablet computer 2 has audio data that it wants to play through the wireless headset 1, it can send a play request to the wireless headset 1.
S502, the wireless headset 1 receives the second play request from the tablet computer 2.
S503, in response to the second play request, the wireless headset 1 judges whether it is currently in the playing state.
Depending on whether it is playing audio data, the wireless headset 1 can be in at least two states. One is the playing state, which indicates that the wireless headset 1 is currently playing audio data from another electronic device. The other is the idle (not playing) state, which indicates that the wireless headset 1 is not currently playing audio data from any other electronic device.
In some embodiments, the wireless headset 1 maintains a play state identifier that indicates whether the wireless headset 1 is currently in the playing state. Illustratively, when the play state identifier of the wireless headset 1 is a first state identifier, the wireless headset 1 is currently in the playing state; when the play state identifier is a second state identifier, the wireless headset 1 is currently in the idle (not playing) state.
Further, in S503 above, the wireless headset 1 may determine whether it is currently in the playing state by reading its play state identifier.
In some embodiments, the playing state of the wireless headset 1 may specifically be its A2DP playing state. Illustratively, an A2DP state of "playing" indicates that the wireless headset 1 is in the playing state, and an A2DP state of "not playing" indicates that the wireless headset 1 is in the idle state.
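A minimal sketch of the play-state check in S503 and the reply in S504, assuming the state is taken from the A2DP playing state (the function names below are illustrative, not from the patent):

```python
def is_playing(a2dp_state: str) -> bool:
    """Hypothetical check of the A2DP playing state ('playing' / 'not playing')."""
    return a2dp_state == "playing"

def handle_second_play_request(a2dp_state: str) -> str:
    # S503: judge whether the headset is currently in the playing state.
    if not is_playing(a2dp_state):
        # S504: idle, so answer tablet computer 2 that the first audio data can be played.
        return "third play response: can play"
    # Otherwise the decision depends on the mute flag, handled in S512.
    return "defer to mute flag"

print(handle_second_play_request("not playing"))  # third play response: can play
```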
S504, after determining that it is currently in the idle state, the wireless headset 1 sends a third play response to the tablet computer 2.
Wherein the third play response is used to indicate that the wireless headset 1 can play the first audio data from the tablet computer 2. Specifically, the specific process of determining that the wireless headset 1 is in the idle state may refer to the description of the above embodiment.
S505, the tablet computer 2 receives the third play response.
S506, the tablet computer 2 sends the first audio data to the wireless headset 1.
S507, the wireless headset 1 receives and plays the first audio data from the tablet computer 2.
Once the wireless headset 1 starts playing the first audio data from the tablet computer 2, it has entered the playing state. In some embodiments, after S507, the wireless headset 1 also needs to modify its play state identifier. Illustratively, after S507, the wireless headset 1 changes its play state identifier from the second state identifier to the first state identifier, indicating that it is now in the playing state.
In some embodiments, the first audio data from the tablet computer 2 may be mute data. For example, if the tablet computer 2, automatically or triggered by the user, plays silent audio data, the first audio data will be mute data, meaning that the wireless headset 1 is playing audio data that produces no sound. In the related art, if the wireless headset 1 remains occupied by such mute data, other electronic devices cannot play audio data through it, which wastes headset resources.
To avoid this situation, in the embodiment of the present application the wireless headset 1, while in the playing state, detects the currently played audio data and determines whether it is mute data. Specifically, the wireless headset 1 may perform S508.
S508, while playing the first audio data from the tablet computer 2, the wireless headset 1 judges whether the first audio data is mute data.
In some embodiments, the wireless headset 1 may perform silence detection on the first audio data during its playback to determine whether the first audio data is mute data.
In some embodiments, the wireless headset 1 may perform silence detection on the first audio data after starting to play it, and then set the corresponding mute flag in the wireless headset 1 according to the detection result. In other embodiments, the wireless headset 1 may set a default mute flag after the first audio data starts to be played, then perform silence detection on the first audio data, and modify the mute flag if the detection result does not match the default. For example, after the wireless headset 1 enters the playing state, it first sets the mute flag to the second flag, i.e., it assumes by default that the currently played audio data is not mute data. If the silence detection result then indicates that the currently played audio data is mute data, the wireless headset 1 changes the mute flag to the first flag, indicating that the currently played audio data is mute data. It is understood that if the silence detection result matches the default mute flag, the mute flag does not need to be modified.
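The "default flag, then correct after detection" behavior described above might look like the following sketch (the dictionary-based state and the function names are assumptions for illustration):

```python
MUTE_FIRST = "first flag"    # currently played audio data is mute data
MUTE_SECOND = "second flag"  # currently played audio data is not mute data

def on_playback_started(state: dict) -> None:
    # Default when entering the playing state: assume the audio is not mute data.
    state["mute_flag"] = MUTE_SECOND

def on_silence_result(state: dict, is_silent: bool) -> None:
    # Modify the mute flag only if the detection result disagrees with it.
    expected = MUTE_FIRST if is_silent else MUTE_SECOND
    if state["mute_flag"] != expected:
        state["mute_flag"] = expected

state = {}
on_playback_started(state)
on_silence_result(state, is_silent=True)  # detection says mute -> flag becomes the first flag
print(state["mute_flag"])                 # first flag
```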
Further, the audio data that the wireless headset 1 plays from another electronic device may be silent during one period and not silent during another. Therefore, in some embodiments, while the wireless headset 1 is in the playing state it may continuously perform silence detection on the currently played audio data, determine whether it is mute data, and update the mute flag according to the detection result.
The wireless headset 1 may perform silence detection on the currently played audio data in any manner, and determine whether the currently played audio data is silence data.
In some embodiments, the wireless headset 1 determines that the first audio data is mute data, including: the wireless headset 1 periodically acquires the PCM value of the first audio data currently played. If the average value of the PCM values acquired within the first preset time period is smaller than the first preset value, the wireless headset 1 determines that the first audio data is mute data.
In some embodiments, the wireless headset 1 may detect whether the first audio data is muted. If it is determined that the first audio data is muted and the duration of the muting reaches a certain threshold (e.g., a first preset duration), the wireless headset 1 determines that the first audio data is muted data.
Further, in some embodiments, each time the wireless headset 1 acquires PCM values of the first audio data, it may acquire the PCM values within a preset time window. For example, with the preset time window set to 5 frames, the wireless headset 1 averages the PCM values of those 5 frames of audio data; if the average is smaller than the first preset value, the first audio data is silent within those 5 frames. The first preset value can be set according to the actual situation.
In other embodiments, the wireless headset 1 may also determine that the first audio data is muted within the preset time window when detecting that the average value of the PCM values within the preset time window is 0.
When the duration for which the first audio data remains silent reaches a first preset duration, the wireless headset 1 determines the first audio data as silent data.
In some embodiments, the wireless headset 1 may acquire the PCM value of the first audio data at a preset cycle. The preset period can be set according to actual conditions.
The wireless headset 1 thus periodically obtains the PCM values of the first audio data and judges them, determining that the first audio data is mute data only when the average PCM value remains smaller than the first preset value for a certain time (the first preset duration). This reduces the chance that audio data that is silent only briefly is classified as mute data, and thus reduces detection errors.
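Putting S508 together, a minimal sketch of this detection loop might look as follows. The threshold, check period, and required silence duration are illustrative assumptions (the patent only calls them the first preset value, the preset period, and the first preset duration), and acquire_window is a hypothetical callback returning the PCM samples of one preset time window (e.g., 5 frames).

```python
FIRST_PRESET_VALUE = 50       # average-amplitude threshold for a silent window (assumed)
FIRST_PRESET_DURATION = 2.0   # seconds of sustained silence before flagging mute data (assumed)
CHECK_PERIOD = 0.1            # preset period between PCM acquisitions, in seconds (assumed)

def window_is_silent(pcm_window) -> bool:
    """The window (e.g. 5 frames of PCM samples) counts as silent when the
    average absolute PCM value is below the first preset value."""
    return sum(abs(s) for s in pcm_window) / len(pcm_window) < FIRST_PRESET_VALUE

def detect_mute(acquire_window) -> bool:
    """Return True once the played audio has stayed silent for the first preset
    duration; return False when playback ends (acquire_window yields None)."""
    silent_for = 0.0
    while True:
        window = acquire_window()      # in a real device this runs every CHECK_PERIOD
        if window is None:
            return False
        if window_is_silent(window):
            silent_for += CHECK_PERIOD
            if silent_for >= FIRST_PRESET_DURATION:
                return True            # judged to be mute data -> set mute flag to first flag
        else:
            silent_for = 0.0           # silence interrupted; restart the count

silent_windows = iter([[0, 1, -1, 0, 2]] * 25 + [None])
print(detect_mute(lambda: next(silent_windows)))  # True after 2.0 s of silent windows
```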
In other embodiments, the wireless headset 1 may also perform silence detection in other ways. For example, the wireless headset 1 may determine whether the currently played audio data is mute data by means of WebRTC mute detection. As another example, the wireless headset 1 may implement silence detection using VoIP technology. For the specific processes of these detection methods, reference may be made to descriptions in the related art, which are not detailed in this embodiment of the present application.
Further, after S508, if it is determined that the first audio data is mute data, the wireless headset 1 may perform S509. Specifically, the method comprises the following steps:
and S509, the wireless earphone 1 sets the mute identifier as a first identifier.
It is to be understood that when the mute flag is set as the first flag, it indicates that the first audio data currently played by the wireless headset 1 is mute data. In other embodiments, the wireless headset 1 may also set the mute flag to another flag when it is determined that the first audio data is mute data.
In other embodiments, as shown in fig. 6, after S508, if it is determined that the first audio data is not mute data, the wireless headset 1 may perform S601.
S601, the wireless headset 1 sets the mute flag to the second flag. In this embodiment, a mute flag set to the second flag indicates that the first audio data currently played by the wireless headset 1 is not mute data.
When it is detected that the first audio data is not mute data, the wireless headset 1 sets the mute flag to the second flag to indicate that the audio data it is currently playing is not mute data. In some embodiments, when the wireless headset 1 is in the playing state and the mute flag is the second flag, the wireless headset 1 no longer receives or plays audio data from other electronic devices. Within the playing state, the wireless headset 1 thus distinguishes between playing mute data and playing non-mute data, which determines whether it currently allows audio data from other electronic devices to be played and reduces the waste of headset resources.
S501 to S509 above describe how the wireless headset 1, starting in the idle state, receives and plays the first audio data from the tablet computer 2 in response to the second play request, performs silence detection to judge whether the first audio data is mute data, and sets the mute flag in the wireless headset 1 according to the detection result.
It is to be understood that after S509 the wireless headset 1 is in the playing state. If the wireless headset 1 then receives a play request from another electronic device (e.g., the mobile phone 3), it needs to decide, using the mute flag, whether to allow the audio data from the mobile phone 3 to be played. With reference to fig. 5, this process may include S510-S513, where:
and S510, the mobile phone 3 sends a first playing request to the wireless earphone 1.
The first play request is for requesting the wireless headset 1 to play the second audio data from the handset 3.
S511, the wireless headset 1 receives a first play request from the handset 3.
And S512, the wireless earphone 1 responds to the first play request and reads the mute identification.
In some embodiments, similar to the wireless headset 1 receiving the second play request from the tablet pc 2, the wireless headset 1 may first determine whether the wireless headset 1 is in the play state in response to the first play request. If the wireless headset 1 is in the playing state, the wireless headset 1 needs to determine whether the audio data from the mobile phone 3 can be received and played according to whether the currently played audio data is the mute data.
As can be seen from the above description, after the wireless headset 1 plays the first audio data from the tablet pc 2, the mute flag is set in the wireless headset 1 according to whether the first audio data is mute data. Therefore, after S511, the wireless headset 1 may read the mute flag and determine whether to allow the audio data from the handset 3 to be played according to the mute flag.
It should be noted that, when S510 and S511 in the above embodiment occur after S509, the wireless headset 1 may perform S512. In other embodiments, the sending of the first play request by the mobile phone 3 to the wireless headset 1 may occur before S509, before S501, or at any other time. Therefore, in response to the first play request, the wireless headset 1 needs to determine its own playing status, and only after determining that it is in the playing state does it read the mute flag. As shown in fig. 7, S512 includes S512a and S512b. Wherein:
S512a, the wireless headset 1 determines, in response to the first play request, whether the wireless headset 1 is in the playing state.
In some embodiments, the wireless headset 1 may determine whether the wireless headset 1 is in the play state by reading the play state identification. In other embodiments, the wireless headset 1 can set the mute flag when entering the play state. Therefore, the wireless headset 1 can also determine whether the current wireless headset 1 is in the playing state by detecting whether the mute flag exists.
Further, if it is determined that the wireless headset 1 is currently in the playing state, the wireless headset 1 may perform S512b.
S512b, the wireless headset 1 reads the mute flag.
After receiving the first play request, the wireless headset 1 first determines whether it is in the playing state. If the wireless headset 1 is in the idle state, it does not need to read the mute flag; instead, it directly responds to the first play request, and receives and plays the audio data from the mobile phone 3. In this way, the response speed of the wireless headset 1 can be improved.
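As a rough sketch of the decision in S512a and S512b, and assuming the HeadsetState and MuteFlag classes from the earlier sketch are in scope, the request handling could look as follows; handle_play_request is a hypothetical helper introduced only for this example.

```python
# Sketch of S512a/S512b under the assumptions stated above.
def handle_play_request(state: HeadsetState) -> bool:
    """Return True if the headset will receive and play the requested audio data."""
    # S512a: in the idle state, accept directly without reading the mute flag
    # (the S801-S803 branch), which improves the response speed.
    if not state.playing:
        return True
    # S512b: in the playing state, read the mute flag and decide.
    return state.mute_flag is MuteFlag.FIRST   # S513 accept / S602 refuse
```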
Further, in some embodiments, if the mute flag is the first flag, which indicates that the currently played first audio data is mute data, the wireless headset 1 may receive and play the audio data from the other electronic device.
S513, if the mute flag is the first flag, the wireless headset 1 receives and plays the second audio data from the mobile phone 3.
In some embodiments, if the mute flag is the first flag, before the wireless headset 1 receives and plays the second audio data from the mobile phone 3, the method further includes: the wireless headset 1 sends a second play response to the handset 3, the second play response being used to indicate that the wireless headset 1 is capable of playing the audio data from the handset 3. And after receiving the second play response from the wireless headset 1, the mobile phone 3 sends second audio data to the wireless headset 1. The wireless headset 1 receives and plays the second audio data from the handset 3.
In some embodiments, the wireless headset 1 may stop playing the first audio data from the tablet computer 2 before it receives and plays the second audio data from the mobile phone 3. Specifically, after S512, the method further includes: if the mute flag is the first flag, the wireless headset 1 sends a first pause playing instruction to the tablet computer 2, where the first pause playing instruction is used to instruct the tablet computer 2 to pause playing the audio data. In the embodiment of the present application, when the wireless headset 1 starts playing the second audio data, it first pauses the playing of the first audio data. This prevents the wireless headset 1 from playing the first audio data and the second audio data at the same time: even if a later portion of the first audio data is not silent, it will not affect the playing of the second audio data, and noise that might be caused by mixing the two streams is avoided. Meanwhile, since the playing of the first audio data from the tablet computer 2 has already been paused before the wireless headset 1 plays the second audio data from the mobile phone 3, the wireless headset 1 does not need to support an audio mixing function, and can still allow other electronic devices to preempt it to play audio data when it is in the playing state and the mute flag is the first flag.
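The accept path, including the first pause playing instruction, might be sketched as follows; send_pause, send_play_response and start_stream are hypothetical transport helpers invented for this example and are not interfaces defined in the embodiment.

```python
# Sketch of the accept path (S513 together with the optional first pause
# playing instruction); all helper names are assumptions for this example.
def accept_and_preempt(state: HeadsetState, current_source, new_source, transport) -> None:
    if state.mute_flag is MuteFlag.FIRST:
        # Pause the source whose mute audio is currently being played, so the
        # two streams are never played at the same time (no mixing needed).
        transport.send_pause(current_source)
        # Tell the new source that its audio can be played (second play response).
        transport.send_play_response(new_source, accepted=True)
        # Receive and play the second audio data from the new source.
        transport.start_stream(new_source)
```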
In the technical solution provided in the embodiment of the present application, when the wireless headset 1 receives the play request from the mobile phone 3, even if the wireless headset 1 is currently in the play state, the wireless headset 1 can still receive and play the audio data from the mobile phone 3 because the wireless headset 1 currently plays the mute data. Therefore, the waste of earphone resources can be reduced, and the use experience of a user is improved.
In other embodiments, the wireless headset 1 may refuse to play audio data from other electronic devices if the mute flag is the second flag indicating that the currently playing audio data is not mute data.
With reference to fig. 6, after S512, the method further includes S602.
S602, if the mute flag is the second flag, the wireless headset 1 sends a first play response to the mobile phone 3.
The first play response is used to indicate that the wireless headset 1 refuses to play the audio data from the mobile phone 3.
In the technical solution provided in the embodiment of the present application, when the wireless headset 1 receives a play request from the mobile phone 3 while it is in the playing state and the currently played audio data is not mute data, the wireless headset 1 does not allow other electronic devices to play audio data through the wireless headset 1. In this way, the audio data being played by the wireless headset 1 is prevented from being preempted by other audio data, which improves the user experience.
In other embodiments, when the wireless headset 1 is in the playing state and the mute flag is the second flag, a specified list may be set so that the electronic devices or application programs in the specified list are allowed to preempt the wireless headset 1 to play audio data. For example, if the specified list includes an electronic device A, when the wireless headset 1 receives a play request from the electronic device A while the wireless headset 1 is in the playing state and the mute flag is the second flag, the wireless headset 1 allows the audio data from the electronic device A to be received and played. As another example, if the specified list includes an application M (e.g., a phone application), when the wireless headset 1 receives a play request from the application M while the wireless headset 1 is in the playing state and the mute flag is the second flag, the wireless headset 1 allows the audio data of the application M to preempt the wireless headset 1 for playing. The specified list may be set according to actual conditions, or may be customized by the user.
Further, in some embodiments, a priority may be set for the electronic devices or applications included in the specified list. When the wireless headset 1 is playing audio data that is not mute data, the wireless headset 1 may allow an electronic device or application with a higher priority to preempt the wireless headset 1 for playing. For example, if the priority of the mobile phone 3 is set higher than that of the tablet computer 2 in the specified list, then when the wireless headset 1 is playing audio data from the tablet computer 2, the audio data from the mobile phone 3 is allowed to preempt the wireless headset 1 for playing, regardless of whether the currently played audio data is mute data. As another example, if the priority of the phone application is set higher than that of the music player in the specified list, then when the wireless headset 1 is playing audio data from the music player of the mobile phone 3, the phone application of the mobile phone 3 is allowed to preempt the wireless headset 1 for playing, and so on. It should be understood that the above are only some examples, and the specified list may be set in other forms according to actual situations.
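One possible form of the specified list with priorities is sketched below; the mapping from device or application names to numeric priorities and the example entries are assumptions, since the embodiment does not prescribe any particular data structure.

```python
# Sketch of the specified-list check; the entries and priority values are
# illustrative assumptions only.
SPECIFIED_LIST = {
    "electronic device A": 1,
    "phone application M": 2,   # e.g. an incoming call
}

def may_preempt(requester: str, current_source: str) -> bool:
    """Decide whether `requester` may preempt the headset while it is playing
    non-mute audio data from `current_source`."""
    if requester not in SPECIFIED_LIST:
        return False
    # A listed requester may preempt an unlisted source; between two listed
    # entries, the one with the higher priority wins.
    return SPECIFIED_LIST[requester] > SPECIFIED_LIST.get(current_source, 0)
```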
In the technical solution provided in the embodiment of the present application, by setting the specified list, the electronic devices or application programs in the specified list are allowed to preempt the wireless headset 1 for playing while the wireless headset 1 is playing data that is not silent. In this way, the resources of the wireless headset 1 are used more reasonably, the behavior of the wireless headset 1 better matches user habits, and the user experience is improved.
In some embodiments, after S507, the wireless headset 1 may detect whether the first audio data is silent. If the first audio data is silent and the silence lasts for a long time, the first audio data currently played by the wireless headset 1 may be invalid audio data. If the wireless headset 1 plays invalid audio data for a long time, headset resources are easily wasted. Therefore, when it is determined that the duration of the silence of the first audio data reaches a certain threshold, the wireless headset 1 may pause the playing of the first audio data from the tablet computer 2.
Referring to fig. 8, after S507, the method further includes S800.
S800, when it is determined that the first audio data is silent within a second preset duration, the wireless headset 1 sends a second pause playing instruction to the tablet computer 2.
The second pause playing instruction is used to instruct the tablet computer 2 to pause playing the first audio data.
In some embodiments, the wireless headset 1 may determine whether the first audio data is silent by acquiring the PCM values of the first audio data. In some embodiments, a frame of the first audio data is determined to be silent if the PCM value of that frame is less than a second preset value. In other embodiments, if the average of the PCM values of the first audio data within a preset time window is smaller than the second preset value, the first audio data is determined to be silent within that time window. The second preset value may be set according to actual conditions. In other embodiments, the wireless headset 1 may also determine that the first audio data is silent within the preset time window when detecting that the average of the PCM values within the preset time window is 0.
As can be seen from the above description of the embodiment, when the wireless headset 1 detects that the average value of the PCM values of the first audio data is smaller than the first preset value within the first preset time period, the wireless headset 1 determines that the first audio data is mute data, and sets the mute flag as the first flag.
In some embodiments, the first preset value and the second preset value may be the same or different. Taking the case where the first preset value and the second preset value are the same as an example, the second preset duration is greater than the first preset duration. That is, when detecting that the duration for which the average of the PCM values remains smaller than the first preset value reaches the first preset duration, the wireless headset 1 determines that the first audio data is mute data and sets the mute flag to the first flag. At this time, the wireless headset 1 continues to play the first audio data and continues to detect it; if the duration for which the average of the PCM values remains smaller than the first preset value (i.e., the second preset value) reaches the second preset duration, the wireless headset 1 sends a pause playing instruction to the tablet computer 2 and stops playing the first audio data.
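As a sketch of the two-stage PCM check described above, assuming the PCM average is evaluated periodically, the flag is updated whenever the detection result changes, and the HeadsetState class from the earlier sketch is in scope; the threshold constants and the MuteMonitor name are placeholder assumptions, not values from the embodiment.

```python
# Illustrative two-stage silence check; the constants are placeholder values.
FIRST_PRESET_VALUE = 50.0         # PCM average below this counts as silent
FIRST_PRESET_DURATION_S = 2.0     # silence long enough to set the first flag (S509)
SECOND_PRESET_DURATION_S = 30.0   # silence long enough to pause the source (S800)

class MuteMonitor:
    def __init__(self, state: HeadsetState, pause_source) -> None:
        self.state = state
        self.pause_source = pause_source   # callback sending the second pause playing instruction
        self.silent_since = None
        self.pause_sent = False

    def feed(self, pcm_average: float, now_s: float) -> None:
        if pcm_average >= FIRST_PRESET_VALUE:
            # Non-silent audio: clear the silence timer and set the second flag (S601).
            self.silent_since = None
            self.pause_sent = False
            self.state.on_mute_detection(False)
            return
        if self.silent_since is None:
            self.silent_since = now_s
        elapsed = now_s - self.silent_since
        if elapsed >= FIRST_PRESET_DURATION_S:
            self.state.on_mute_detection(True)       # S509: set the first flag
        if elapsed >= SECOND_PRESET_DURATION_S and not self.pause_sent:
            self.pause_sent = True
            self.pause_source()                      # S800: pause the first audio data
```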
In the technical solution provided in the embodiment of the present application, if the played audio data remains silent for a certain duration, the wireless headset 1 automatically stops playing the audio data. This prevents silent audio data from occupying the wireless headset 1 and wasting headset resources, and reduces the power consumption of the wireless headset 1.
It is to be understood that the above-mentioned S512b, S513 and S602 are the response flow of the wireless headset 1 to the first play request when the wireless headset 1 determines that it is currently in the play state. If it is determined that the wireless headset 1 is in the idle state after S512a, the wireless headset 1 may directly receive and play the audio data from the cellular phone 3 in response to the first play request.
With continued reference to fig. 8, the method further includes S801-S803 after S512a.
S801, the wireless headset 1 sends a fourth play response to the mobile phone 3.
The fourth play response is used to indicate that the wireless headset 1 is currently capable of playing audio data from the mobile phone 3.
S802, the mobile phone 3 sends the second audio data to the wireless headset 1.
S803, the wireless headset 1 receives and plays the second audio data from the mobile phone 3.
In the technical solution provided in the embodiment of the present application, if the wireless headset 1 is in the idle state, the wireless headset 1 does not need to read the mute identifier, but directly responds to the first play request, sends a fourth play response to the mobile phone 3, and receives and plays the audio data from the mobile phone 3. Thus, the response speed of the wireless headset 1 can be improved.
Other embodiments of the present application provide an electronic device, which is the first electronic device (e.g., the wireless headset 1) described above. The first electronic device supports establishing wireless communication connections with at least two electronic devices simultaneously. The first electronic device may include: a memory, a Bluetooth module, and one or more processors. The memory, the Bluetooth module, and the processor are coupled. The Bluetooth module is used for wireless communication with other electronic devices. The memory may be used to store computer program code, where the computer program code includes computer instructions. When the processor executes the computer instructions, the first electronic device may perform the functions or steps performed by the wireless headset 1 in the above method embodiments. For the structure of the first electronic device, reference may be made to the structure of the TWS headset 300 shown in fig. 3.
The embodiment of the present application further provides a chip system, as shown in fig. 9, the chip system 90 includes at least one processor 901 and at least one interface circuit 902. The processor 901 and the interface circuit 902 may be interconnected by wires. For example, the interface circuit 902 may be used to receive signals from other devices (e.g., a memory of an electronic device). Also for example, the interface circuit 902 may be used to transmit signals to other devices (e.g., the processor 901). Illustratively, the interface circuit 902 may read instructions stored in the memory and send the instructions to the processor 901. The instructions, when executed by the processor 901, may cause the electronic device to perform the various steps in the embodiments described above. Of course, the chip system may further include other discrete devices, which is not specifically limited in this embodiment of the present application.
Embodiments of the present application further provide a computer-readable storage medium, where the computer-readable storage medium includes computer instructions, and when the computer instructions are executed on the first electronic device (for example, the wireless headset 1), the first electronic device is caused to perform various functions or steps performed by the wireless headset 1 in the above method embodiments.
The present embodiment also provides a computer program product, which when running on a computer, causes the computer to execute the functions or steps performed by the wireless headset 1 in the above method embodiments. The computer may be a first electronic device, such as a wireless headset 1.
Through the description of the above embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, the foregoing division of the functional modules is merely used as an example, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, and for example, the division of the modules or units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another device, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may be one physical unit or a plurality of physical units, that is, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a readable storage medium. Based on such understanding, the technical solutions of the embodiments of the present application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (9)

1. The method for playing the audio data is characterized in that the method is applied to a first electronic device, a mute identifier is arranged on the first electronic device in a playing state, and the mute identifier is used for indicating whether the audio data currently played by the first electronic device is mute data or not; the first electronic equipment establishes wireless communication connection with second electronic equipment and third electronic equipment respectively; the method comprises the following steps:
the first electronic equipment receives first audio data from the second electronic equipment and plays the first audio data;
in the process of playing the first audio data, if it is determined that the first audio data is the mute data, the first electronic device sets the mute identifier as a first identifier, where the first identifier is used to indicate that the first audio data is the mute data;
the first electronic device receives a first playing request from the third electronic device, wherein the first playing request is used for requesting the first electronic device to play audio data from the third electronic device;
the first electronic equipment responds to the first playing request, and reads the mute identification after the first electronic equipment determines that the first electronic equipment is in a playing state;
and if the mute identifier is the first identifier, the first electronic equipment receives and plays second audio data from the third electronic equipment.
2. The method of claim 1, further comprising:
in the process of playing the first audio data, if it is determined that the first audio data is not the mute data, the first electronic device sets the mute identifier as a second identifier, where the second identifier is used to indicate that the first audio data is not the mute data.
3. The method of claim 2, wherein after the first electronic device reads the mute flag in response to the first play request, the method further comprises:
if the mute identifier is the second identifier, the first electronic device sends a first play response to the third electronic device, where the first play response is used to instruct the first electronic device to refuse to play audio data from the third electronic device.
4. The method according to any one of claims 1-3, further comprising:
if the mute identifier is the first identifier, the first electronic device stops playing the first audio data and sends a first pause playing instruction to the second electronic device, wherein the first pause playing instruction is used for indicating the second electronic device to pause playing the audio data.
5. The method of any of claims 1-3, wherein the determining that the first audio data is the silence data comprises:
the first electronic equipment periodically acquires a Pulse Code Modulation (PCM) value of the currently played first audio data;
if the average value of the PCM values acquired within the first preset time is smaller than a first preset value, the first electronic device determines that the first audio data is the mute data.
6. The method according to any one of claims 1-3, further comprising:
the first electronic equipment periodically acquires a PCM value of the first audio data which is currently played;
and if the average value of the PCM values acquired within a second preset time length is smaller than a second preset value, the first electronic equipment sends a second pause playing instruction to the second electronic equipment, and the second pause playing instruction is used for indicating the second electronic equipment to pause playing of the audio data.
7. The method of any of claims 1-3, wherein if the mute flag is the first flag, before the first electronic device receives and plays the second audio data from the third electronic device, the method further comprises:
and the first electronic equipment sends a second play response to the third electronic equipment, wherein the second play response is used for indicating that the first electronic equipment can play the audio data from the third electronic equipment.
8. A computer device comprising a processor and a memory coupled to the processor, the memory for storing computer program code, the computer program code comprising computer instructions that, when executed by the processor, cause the computer device to perform the audio data playback method of any of claims 1-7.
9. A computer-readable storage medium having stored therein instructions that, when executed in a computer device, cause the computer device to execute the audio data playback method according to any one of claims 1 to 7.