CN112954528B - Method for transmitting audio data - Google Patents

Method for transmitting audio data Download PDF

Info

Publication number
CN112954528B
CN112954528B CN202110193355.7A
Authority
CN
China
Prior art keywords
audio
event
source
gesture
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110193355.7A
Other languages
Chinese (zh)
Other versions
CN112954528A (en)
Inventor
边文裕
胡胜雄
張景嵩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inventec Appliances Shanghai Corp
Inventec Appliances Pudong Corp
Inventec Appliances Corp
Original Assignee
Inventec Appliances Shanghai Corp
Inventec Appliances Pudong Corp
Inventec Appliances Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Appliances Shanghai Corp, Inventec Appliances Pudong Corp, Inventec Appliances Corp filed Critical Inventec Appliances Shanghai Corp
Priority to CN202110193355.7A priority Critical patent/CN112954528B/en
Priority to TW110112977A priority patent/TWI841832B/en
Publication of CN112954528A publication Critical patent/CN112954528A/en
Application granted granted Critical
Publication of CN112954528B publication Critical patent/CN112954528B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00Details of transducers, loudspeakers or microphones
    • H04R1/10Earpieces; Attachments therefor ; Earphones; Monophonic headphones
    • H04R1/1041Mechanical or electronic switches, or control elements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16Vocoder architecture
    • G10L19/167Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W4/00Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W56/00Synchronisation arrangements
    • H04W56/001Synchronization between nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Telephone Function (AREA)
  • Stereophonic System (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Headphones And Earphones (AREA)

Abstract

A method for transmitting audio data is used for an audio transmission system, which comprises a transmission end and a receiving end. The method includes judging whether the transmission end is set to a first mode; when the transmission end is set to the first mode, a plurality of audio sources are converted into an audio stream; wherein the audio sources include a first audio source comprising a plurality of first audio packets and a second audio source comprising a plurality of second audio packets, and the audio stream comprises a plurality of audio samples; wherein each of the plurality of audio samples comprises one of the plurality of first audio packets and one of the plurality of second audio packets.

Description

Method for transmitting audio data
Technical Field
The present invention relates to a method for transmitting audio data, and more particularly, to a method for transmitting audio data capable of transmitting audio data of a plurality of audio sources simultaneously.
Background
As the functions of electronic devices (e.g., smart phones, tablet computers, desktop computers, etc.) become more powerful, users can obtain diversified services (e.g., making calls, surfing the internet, listening to music, watching movies, playing games, etc.) through their phones; however, if these services cannot be provided at the same time, a service may be forcibly interrupted or important information may be missed. For example, when a user is watching a live broadcast of an athletic competition, if there is an incoming call, the mobile phone will automatically switch to the incoming call display and play a ringtone. When the live broadcast is forcibly interrupted in this way, the user is likely to miss an exciting moment of the game.
In addition, current external audio playing devices (such as wireless earphone sets and wired earphone sets) can only play audio from a single sound source at any one time. Some external audio playing devices also have a noise reduction function to provide better audio quality. To meet different usage requirements, more and more users carry multiple mobile phones at the same time, such as a business phone and a private phone. Thus, in practice, a user may wear the headset and use the private phone to watch a live broadcast while setting the business phone aside on standby. However, because the user is engrossed in the live broadcast, or because the noise reduction function of the headset blocks out ambient sound, the user is likely to miss important business information.
In view of the above-mentioned shortcomings, how to simultaneously transmit audio data of multiple audio sources to provide diversified services for users has become an emerging issue in the field.
Disclosure of Invention
To solve the above problems, the present invention provides a method for transmitting audio data. The method is used for an audio transmission system which comprises a transmission end and a receiving end. The method includes judging whether the transmission end is set to a first mode; when the transmission end is set to the first mode, converting a plurality of audio sources into an audio stream; wherein the audio sources include a first audio source and a second audio source, the first audio source includes a plurality of first audio packets, and the second audio source includes a plurality of second audio packets. The step of converting the audio sources into the audio stream comprises generating the audio stream comprising a plurality of audio samples according to a setting signal, the first audio packets and the second audio packets; wherein each of the plurality of audio samples comprises one of the plurality of first audio packets and one of the plurality of second audio packets; wherein the first audio source and the second audio source are two of voice data, first music data and second music data.
By the method for transmitting audio data of the present invention, a transmission end (e.g., an electronic device) can convert a plurality of audio sources into a single audio stream, and a receiving end (e.g., a wireless earphone set or a wired earphone set) can receive the audio data of the plurality of audio sources, so as to provide diversified services for users. Furthermore, the invention applies the multi-point pairing and gesture detection functions to audio data transmission and provides a preferred-ear setting service to adapt to the requirements of users.
Drawings
These and other objects, features, advantages and embodiments of the present invention will become more apparent from the following detailed description of the preferred embodiments of the invention when taken in conjunction with the accompanying drawings in which:
fig. 1 is a functional block diagram of an audio transmission system according to a first embodiment of the present invention.
FIG. 2 is a diagram illustrating an audio processing unit operating in a first scenario according to a first embodiment of the present invention.
FIG. 3 is a diagram illustrating an electronic device operating in a first scenario according to a first embodiment of the invention.
FIG. 4 is a diagram illustrating an audio processing unit operating in a second scenario according to the first embodiment of the invention.
FIG. 5 is a diagram illustrating an electronic device operating in a second scenario according to the first embodiment of the invention.
FIG. 6 is a flowchart illustrating a process of transmitting audio data according to a first embodiment of the present invention.
Fig. 7 is a block diagram of an audio transmission system according to a second embodiment of the invention.
Fig. 8 is a diagram illustrating the audio transmission system operating in a third scenario according to the second embodiment of the invention.
FIG. 9 is a flowchart illustrating a process for transmitting audio data according to a second embodiment of the present invention.
Reference numerals
1: audio transmission system
10, 70: electronic device
11: wireless earphone set
12: wired earphone set
100: processor with a memory having a plurality of memory cells
101: voice modem
102: audio processing unit
103: wireless transceiver
104: audio output circuit
105: memory device
151: earphone service suite
152: application program
202: mixing unit
600 to 607, 901 to 907: Steps
711, 721: micro-controller
712, 722: gesture sensor
AS1, AS2: audio source
AU1, AU2, AU3: Output audio streams
CTRL1, CTRL2: control signal
EB1, EB2, EB3, EB4: Earphones
PN1, PN2: personal area network
MD1, MD2: music data
SM1, SM2: Audio streams
ST: setting signal
U1, U2: Users
VD: voice data
WSM1: Wireless transmission data
A1 to AN, C1 to CN: First audio packets
B1 to BN: Second audio packets
SP1 to SPN, SP1' to SPN': Audio samples
Detailed Description
The following embodiments are described in detail with reference to the accompanying drawings, but the embodiments are only used to explain the present disclosure, not to limit it; the description of structural operations does not limit their execution order, and any apparatus with equivalent functions is covered within the scope of the present disclosure.
Fig. 1 is a block diagram of an audio transmission system 1 according to a first embodiment of the present invention. The audio transmission system 1 includes a transmission end (Source), which may be an electronic device 10, and a receiving end (Sink), which may be an earphone set. The earphone set may be a wireless earphone set 11 or a wired earphone set 12. When the earphone set is the wireless earphone set 11, the wireless earphone set 11 may be paired and connected with the electronic device 10 through Bluetooth wireless communication to receive an output audio stream AU1 from the electronic device 10. On the other hand, when the earphone set is the wired earphone set 12, the wired earphone set 12 is connected to the electronic device 10 by a wire to receive an output audio stream AU2 from the electronic device 10.
The electronic device 10 includes a processor 100, a voice modem 101, an audio processing unit 102, a wireless transceiver 103, an audio output circuit 104, and a memory 105. Structurally, the processor 100 is coupled to the voice modem 101, the audio processing unit 102, the wireless transceiver 103, and the memory 105, and is configured to generate the audio sources AS1 and AS2. The audio processing unit 102 is coupled to the processor 100 and the audio output circuit 104, and is configured to generate an audio stream SM1 or SM2 according to at least one of the audio sources AS1 and AS2.
In one embodiment, the processor 100 is further configured to generate a setting signal ST to the audio processing unit 102, wherein the setting signal ST indicates a wireless transmission path or a wired transmission path. For example, when the setting signal ST indicates the wireless transmission path, the audio processing unit 102 generates an audio stream SM1 to the processor 100, and the processor 100 generates wireless transmission data WSM1 to the wireless transceiver 103 according to the audio stream SM1. The wireless transceiver 103 is coupled to the processor 100 for generating the output audio stream AU1 according to the wireless transmission data WSM1, and transmits the output audio stream AU1 to the wireless earphone set 11 through Bluetooth wireless communication. In an embodiment of the invention, the wireless transceiver 103 transmits the output audio stream AU1 to the wireless earphone set 11 based on the Bluetooth Low Energy (BT LE) communication standard version 5.2, but is not limited thereto. On the other hand, when the setting signal ST indicates the wired transmission path, the audio processing unit 102 generates an audio stream SM2 to the audio output circuit 104. The audio output circuit 104 is coupled to the audio processing unit 102 for generating an output audio stream AU2 to the wired earphone set 12 according to the audio stream SM2.
In one embodiment, when the electronic device 10 and the wireless headset 11 are successfully paired, the processor 100 generates a setting signal ST indicating a wireless transmission path to the audio processing unit 102; on the other hand, when the audio output circuit 104 detects the wired earphone set 12, it may notify the processor 100 to generate a setting signal ST indicating the wired transmission path to the audio processing unit 102.
In one embodiment, the setting signal ST is also used to indicate a general mode or an audio multiplexing mode. For example, when the setting signal ST indicates the general mode and the wireless transmission path, the audio processing unit 102 generates the audio stream SM1 for the wireless earphone set 11 according to a single audio source (e.g., the audio source AS1 or the audio source AS2). When the setting signal ST indicates the general mode and the wired transmission path, the audio processing unit 102 generates the audio stream SM2 for the wired earphone set 12 from the single audio source. When the setting signal ST indicates the wireless transmission path and the audio multiplexing mode, the audio processing unit 102 generates the audio stream SM1 for the wireless earphone set 11 from a plurality of audio sources (e.g., the audio sources AS1 and AS2). When the setting signal ST indicates the wired transmission path and the audio multiplexing mode, the audio processing unit 102 generates the audio stream SM2 for the wired earphone set 12 from a plurality of audio sources.
The voice modem 101 is coupled to the processor 100 for generating a voice data VD. In practical applications, the voice modem 101 can generate the voice data VD when a user makes a call through the electronic device 10. Therefore, the voice data VD can be used as an audio source.
The memory 105 is coupled to the processor 100 and is used for storing a headset service kit 151 and a plurality of application programs 152. The plurality of application programs 152 include at least one of a voice call program, a web call program, a music playing program, a video playing program, and an FM broadcast program, but are not limited thereto. When two of the plurality of application programs 152 are executed, the processor 100 retrieves the music data MD1 and MD2 from the memory 105. Therefore, the music data MD1 and MD2 can be used as different audio sources. In one embodiment, the processor 100 generates the audio source AS1 according to the voice data VD, and generates the audio source AS2 according to the music data MD1 (or MD2). In one embodiment, the processor 100 generates the audio sources AS1 and AS2 from the music data MD1 and MD2, respectively.
In other words, the audio sources AS1 and AS2 may be two of the voice data VD, the music data MD1 and MD2. Therefore, the audio processing unit 102 of the present invention can convert a plurality of audio sources into a single audio stream, so that the earphone set can receive a plurality of audio sources at the same time. In this way, the user can listen to the audio data of the audio sources through the earphones EB1 and EB2 (see fig. 3) of the earphone set (e.g., the wireless earphone set 11 or the wired earphone set 12).
FIG. 2 is a diagram illustrating the audio processing unit 102 operating in a first scenario according to the first embodiment of the invention. Assuming that the first scenario is that the user is talking and listening to music at the same time, the audio source AS1 includes the voice data VD and the audio source AS2 includes the music data MD2. The audio source AS1 includes a plurality of first audio packets A1-AN, and the audio source AS2 includes a plurality of second audio packets B1-BN. The audio processing unit 102 includes a mixing unit (Mixer) 202 for generating an audio stream SM1 according to the setting signal ST, the plurality of first audio packets A1 to AN and the plurality of second audio packets B1 to BN. The audio stream SM1 includes a plurality of audio samples (Samples) SP1-SPN, and each of the plurality of audio samples SP1-SPN includes a first audio packet and a second audio packet. For example, the audio sample SP1 includes the first audio packet A1 and the second audio packet B1; the audio sample SP2 includes the first audio packet A2 and the second audio packet B2; by analogy, the audio sample SPN includes the first audio packet AN and the second audio packet BN. Each of the audio samples SP1 to SPN is, for example, a Pulse Code Modulation (PCM) packet, but is not limited thereto. In an embodiment, the mixing unit 202 may be implemented by a hardware circuit or a software program.
In one embodiment, the setting signal ST is used to indicate that the audio source AS1 corresponds to a first channel or a second channel, wherein the first channel is, for example, a right channel, and the second channel is, for example, a left channel, but not limited thereto. Therefore, assuming that the setting signal ST indicates that the audio source AS1 corresponds to the first channel, the mixing unit 202 of the audio processing unit 102 writes the first audio packets A1 to AN of the audio source AS1 into the data format of the audio samples SP1 to SPN corresponding to the first channel, and writes the second audio packets B1 to BN of the audio source AS2 into the data format of the audio samples SP1 to SPN corresponding to the second channel.
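As a minimal illustration of the mixing described above, the following Python sketch (the function and variable names are hypothetical, not from the patent) interleaves packets from two audio sources into two-channel samples, placing the source that the setting signal assigns to the first channel into the first slot of every sample and the other source into the second slot:

```python
from typing import List, Tuple

def mix_sources(first_packets: List[bytes],
                second_packets: List[bytes],
                first_source_on_first_channel: bool = True) -> List[Tuple[bytes, bytes]]:
    """Build audio samples SP1..SPN, each carrying one packet per channel.

    first_source_on_first_channel models the setting signal ST: when True,
    packets A1..AN occupy the first-channel slot and B1..BN the second-channel
    slot of every sample; when False, the slots are swapped.
    """
    samples = []
    for a, b in zip(first_packets, second_packets):
        if first_source_on_first_channel:
            samples.append((a, b))   # (first channel, second channel)
        else:
            samples.append((b, a))
    return samples

# Example: voice packets A1..A3 (audio source AS1) and music packets B1..B3 (AS2)
voice = [b"A1", b"A2", b"A3"]
music = [b"B1", b"B2", b"B3"]
stream_sm1 = mix_sources(voice, music)   # [(A1, B1), (A2, B2), (A3, B3)]
```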
Fig. 3 is a schematic diagram illustrating the electronic device 10 operating in the first scenario according to the first embodiment of the invention. In the first scenario, a user U1 wears the wireless earphone set 11 while listening to music and receiving a call, so the electronic device 10 executes the music playing process and the call process simultaneously. The wireless earphone set 11 includes an earphone EB1 and an earphone EB2. The wireless earphone set 11 is, for example, a True Wireless Stereo (TWS) Bluetooth headset and may support the Bluetooth Low Energy communication standard version 5.2, but is not limited thereto. It should be noted that, since the Bluetooth Low Energy communication standard version 5.2 introduces the Connected Isochronous Stream (CIS) and the Connected Isochronous Group (CIG) in its multi-stream isochronous channel technology, the earphone EB1 and the earphone EB2 may respectively receive a first connected isochronous stream and a second connected isochronous stream from the electronic device 10, where the first connected isochronous stream and the second connected isochronous stream form one connected isochronous group.
Therefore, the earphones EB1 and EB2 can synchronously receive the output audio stream AU1 and retrieve the required first audio packets A1 to AN and second audio packets B1 to BN from the audio samples SP1 to SPN according to their corresponding channels. Specifically, when the earphone EB1 corresponds to the first channel, the microcontroller built in the earphone EB1 can retrieve the first audio packets A1 to AN written in the data format corresponding to the first channel from the audio samples SP1 to SPN, thereby playing the voice data VD of the audio source AS1. Meanwhile, when the earphone EB2 corresponds to the second channel, the microcontroller built in the earphone EB2 can retrieve the second audio packets B1 to BN written in the data format corresponding to the second channel from the audio samples SP1 to SPN, thereby playing the music data MD2 of the audio source AS2.
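On the receiving side, each earphone only needs to read the slot that matches its own channel. A short sketch under the same assumptions (the sample layout is illustrative, not the patent's exact data format):

```python
# Samples as produced by the mixer: (first-channel packet, second-channel packet)
samples = [(b"A1", b"B1"), (b"A2", b"B2"), (b"A3", b"B3")]

def extract_channel(samples, channel):
    """channel 0: first channel (e.g., earphone EB1); channel 1: second channel (EB2)."""
    return [sample[channel] for sample in samples]

eb1_packets = extract_channel(samples, 0)   # voice packets A1, A2, A3
eb2_packets = extract_channel(samples, 1)   # music packets B1, B2, B3
```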
Therefore, in the audio multiplexing mode, the earphones EB1 and EB2 of the earphone set of the present invention can simultaneously and respectively play audio data of a plurality of audio sources. In practical applications, when the user U1 receives an incoming call while listening to music (or the soundtrack of a broadcast, video, or game, etc.), the user can listen to the telephone voice through the earphone EB1 and continue listening to the music through the earphone EB2 at the same time. Therefore, the user U1 does not miss the content being focused on (i.e., the music, or the soundtrack of a broadcast, video, or game, etc.) while answering the incoming call. Then, after the user U1 hangs up, the processor 100 stops executing the call process and the voice modem 101 stops generating the voice data VD, so that the user U1 listens to the music through both the earphone EB1 and the earphone EB2.
FIG. 4 is a diagram illustrating the audio processing unit 102 operating in a second scenario according to the first embodiment of the invention. Assume that the second scenario is that there are multiple users listening to different music data, so the audio source AS1 is the music data MD1 and the audio source AS2 is the music data MD2. The audio source AS1 comprises a plurality of first audio packets C1-CN, and the audio source AS2 comprises a plurality of second audio packets B1-BN. The mixing unit 202 of the audio processing unit 102 is configured to generate an audio stream SM2 according to the setting signal ST, the plurality of first audio packets C1 to CN and the plurality of second audio packets B1 to BN. The audio stream SM2 comprises a plurality of audio samples SP1'-SPN', wherein the audio sample SP1' comprises the first audio packet C1 and the second audio packet B1; the audio sample SP2' includes the first audio packet C2 and the second audio packet B2; by analogy, the audio sample SPN' comprises the first audio packet CN and the second audio packet BN. Assuming that the setting signal ST indicates that the audio source AS1 corresponds to the first channel, the mixing unit 202 of the audio processing unit 102 writes the first audio packets C1 to CN of the audio source AS1 into the data format of the audio samples SP1' to SPN' corresponding to the first channel, and writes the second audio packets B1 to BN of the audio source AS2 into the data format of the audio samples SP1' to SPN' corresponding to the second channel.
FIG. 5 is a diagram illustrating the electronic device 10 operating in the second scenario according to the first embodiment of the invention. In the second scenario, the users U1 and U2 wear the earphones EB3 and EB4 of the wired earphone set 12, respectively. While the user U1 listens to the music data MD1 of a first music playing program, the user U2 listens to the music data MD2 of a second music playing program at the same time, so the electronic device 10 executes the first and second music playing programs simultaneously.
The earphones EB3 and EB4 can synchronously receive the output audio stream AU2 and retrieve the required first audio packets C1-CN and second audio packets B1-BN from the audio samples SP1'-SPN' according to their corresponding channels. Specifically, when the earphone EB3 corresponds to the first channel, the earphone EB3 can play the music data MD1 of the audio source AS1. Meanwhile, when the earphone EB4 corresponds to the second channel, the earphone EB4 can play the music data MD2 of the audio source AS2.
In one embodiment, the user U1 (or U2) can set a general mode or an audio multiplexing mode, a first channel or a second channel through the headset service kit 151. For example, when a plurality of users U1 and U2 want to watch video together, the user U1 (or U2) can set the normal mode through the headset service kit 151, and the audio stream SM2 has only a single audio source. When the user U1 wants to watch the video and the user U2 wants to listen to the broadcast program, the user U1 (or U2) can set the audio multiplexing mode through the headset service kit 151, designate the headset EB3 corresponding to the first channel for playing the music data of the video and designate the headset EB4 corresponding to the second channel for playing the music data of the broadcast program, and the audio stream SM2 includes a plurality of audio sources. Similarly, in the embodiment of fig. 3 and 4, when the user U1 wants to listen to a call and listen to music simultaneously, the user U1 can set the audio multiplexing mode through the headset service kit 151 and designate one of the headset EB1 and the headset EB2 to play voice and the other to play music.
Briefly, in the first embodiment of the present invention, the electronic device 10 can support an audio multiplexing mode, and the audio processing unit 102 converts the audio sources AS1 and AS2 into a single audio stream (audio stream SM1 or audio stream SM 2), so that the headphone set can receive audio data (e.g., voice or music) of the audio sources AS1 and AS2. In addition, the headset service kit 151 can be used to set a general mode or an audio multiplexing mode, a first channel or a second channel, so that a user can select a desired audio playing service. In this way, the electronic device 10 of the present invention can provide diversified audio playing services to improve user experience.
It should be noted that the present invention mixes the audio data of the multiple audio sources AS1 and AS2 into a single audio stream through the audio processing unit 102 of the electronic device 10, without involving any change to the design of existing earphone sets. Similarly, the headset service kit 151 is installed on the electronic device 10 and is used to control the audio processing unit 102, likewise without involving the design of existing earphone sets. Therefore, any wireless earphone set or wired earphone set that supports the multi-stream isochronous channel technique can support the audio multiplexing mode of the present invention.
The operation of the electronic device 10 can be summarized as a process of transmitting audio data. As shown in FIG. 6, the process of transmitting audio data includes the following steps.
Step 600: Start.
Step 601: Determine whether the first mode is set. If yes, go to step 602; if not, go to step 605.
Step 602: Convert the plurality of audio sources into an audio stream.
Step 603: Determine whether the first path is set. If yes, go to step 604; if not, go to step 606.
Step 604: Transmit the audio stream through the first path. Proceed to step 607.
Step 605: Convert the single audio source into an audio stream. Proceed to step 603.
Step 606: Transmit the audio stream through the second path.
Step 607: End.
In step 601, the audio processing unit 102 determines whether the electronic device 10 is set to the first mode (e.g., audio multiplexing mode) according to the setting signal ST. In step 602, when the setting signal ST indicates that the electronic device 10 is set to the first mode, the audio processing unit 102 converts the audio sources into an audio stream; alternatively, in step 605, when the setting signal ST indicates that the electronic device 10 is not set to the first mode, the electronic device 10 is set to the second mode (e.g. the normal mode), and the audio processing unit 102 converts the single audio source into an audio stream. In step 603, the audio processing unit 102 determines whether the electronic device 10 is set as the first path (e.g., wireless transmission path) according to the setting signal ST. In step 604, when the setting signal ST indicates that the electronic device 10 is set to the first path, the electronic device 10 transmits the audio stream through the first path; alternatively, in step 606, when the setting signal ST indicates that the electronic device 10 is not set as the first path, the electronic device 10 is set as the second path (e.g., wired transmission path), and the electronic device 10 transmits the audio stream through the second path.
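The decision logic of steps 600 to 607 can be condensed into a short sketch; the flag names and the way sources are represented are illustrative assumptions, not the patent's data structures:

```python
def transmit_audio(multiplex_mode: bool, wireless_path: bool, sources: list) -> str:
    """Rough model of the flow in FIG. 6."""
    if multiplex_mode:                        # step 601 -> step 602: first mode
        stream = {"sources": sources}         # mix every audio source into one stream
    else:                                     # step 601 -> step 605: second (normal) mode
        stream = {"sources": sources[:1]}     # keep only a single audio source
    if wireless_path:                         # step 603 -> step 604: first path
        return f"sent over Bluetooth: {stream}"
    return f"sent over wire: {stream}"        # step 603 -> step 606: second path

print(transmit_audio(multiplex_mode=True, wireless_path=True,
                     sources=["voice VD", "music MD2"]))
```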
Therefore, in the first embodiment, through the process of transmitting audio data according to the present invention, the electronic device 10 of the present invention can convert the audio sources AS1 and AS2 into a single audio stream SM1 or SM2, so that the receiving end (e.g. the wireless earphone set 11 or the wired earphone set 12) can receive the audio data of the audio sources AS1 and AS2. Therefore, the invention can simultaneously transmit the audio data of a plurality of audio sources so as to provide diversified services for users.
Fig. 7 is a block diagram of an audio transmission system 7 according to a second embodiment of the present invention. The audio transmission system 7 comprises the electronic devices 10 and 70 and the wireless earphone set 11. It should be noted that, in view of the fact that wireless headsets on the market already have Multi-Point Pairing and gesture detection functions, the present invention further applies these functions to audio data transmission.
The electronic devices 10 and 70 have substantially the same structure and function; for a detailed description of the electronic device 70, refer to the first embodiment of FIGS. 1 to 6. The wireless earphone set 11 includes the earphone EB1 and the earphone EB2. The earphone EB1 comprises a first microcontroller 711 and a first gesture sensor 712. The first gesture sensor 712 is coupled to the first microcontroller 711 for detecting a gesture of a user. The earphone EB2 includes a second microcontroller 721 and a second gesture sensor 722. The second gesture sensor 722 is coupled to the second microcontroller 721 and is used for detecting a gesture of a user. In one embodiment, the first gesture sensor 712 and the second gesture sensor 722 may be a pressure sensor or a capacitive sensor and are mounted on the handles of the earphone EB1 and the earphone EB2, or at another location where a user's touch can be sensed. In an embodiment, the first gesture sensor 712 and the second gesture sensor 722 may also be an input interface including a plurality of keys, which respectively correspond to instructions such as play, pause, previous/next track, answer, and hang up, but are not limited thereto.
The first microcontroller 711 is configured to pair and connect with the electronic device 10 through bluetooth wireless communication to receive the output audio stream AU1 from the electronic device 10. The first microcontroller 711 is further configured to generate a control signal CTRL1 corresponding to the gesture of the user to the electronic device 10. The second microcontroller 721 is used for pairing and connecting with the electronic device 10 through bluetooth wireless communication to receive the output audio stream AU1 from the electronic device 10. The second microcontroller 721 is further configured to generate a control signal CTRL2 corresponding to the gesture of the user to the electronic device 10. For example, table 1 illustrates the function definitions and corresponding gestures, but is not so limited.
(Table 1 is reproduced as an image in the original publication.)
Therefore, when the earphone EB1 transmits the control signal CTRL1 (or the earphone EB2 transmits the control signal CTRL2) to the electronic device 10, the processor 100 generates, according to the control signal CTRL1, the setting signal ST indicating the general mode or the audio multiplexing mode and the first channel or the second channel, and provides it to the audio processing unit 102. For example, when the electronic device 10 is set to the normal mode, the user may press the handle of the earphone EB1 once, so that the earphone EB1 generates the control signal CTRL1 corresponding to the "switch mode" gesture to the electronic device 10, thereby setting the electronic device 10 to the audio multiplexing mode.
Further, the user may press the handle of the earphone EB1 twice, so that the earphone EB1 generates the control signal CTRL1 corresponding to the "select/turn on" gesture to the electronic device 10, and the electronic device 10 transmits the audio data according to the first channel of the earphone EB1. For example, when the user receives an incoming call while listening to music, the electronic device 10 transmits the voice and the music according to the channel of the earphone EB1 that generated the "select/turn on" gesture, so the earphone EB1 plays the incoming call ringtone, the earphone EB2 continues to play the music, and the user can hang up the phone by pressing the handle of the earphone EB1 once.
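Table 1 itself is only published as an image, but the examples above suggest a simple mapping from touch gestures on an earphone handle to control functions. A hypothetical sketch (the gesture names and counts are assumptions for illustration; the actual mapping is context dependent, e.g., a single press hangs up during a call):

```python
from typing import Optional

# Hypothetical gesture-to-function table inferred from the examples above.
GESTURE_FUNCTIONS = {
    ("EB1", "single press"): "switch mode",      # toggle normal / audio multiplexing mode
    ("EB1", "double press"): "select/turn on",   # answer on this earphone's channel
    ("EB2", "single press"): "switch mode",
    ("EB2", "double press"): "select/turn on",
}

def gesture_to_control_signal(earphone: str, gesture: str) -> Optional[str]:
    """Translate a gesture detected by a gesture sensor into the content of the
    control signal (CTRL1 or CTRL2) sent back to the electronic device."""
    return GESTURE_FUNCTIONS.get((earphone, gesture))

print(gesture_to_control_signal("EB1", "double press"))   # -> 'select/turn on'
```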
In addition, the second microcontroller 721 is used for pairing and connecting with the electronic device 70 through Bluetooth wireless communication to receive an output audio stream AU3 from the electronic device 70. The second microcontroller 721 is also used to generate the control signal CTRL2 corresponding to the gesture of the user to the electronic device 70. Briefly, the earphone EB2 (i.e., the second microcontroller 721) can be paired and connected to the electronic devices 10 and 70 at the same time to realize the multi-point pairing function; moreover, the electronic device 10 can receive the control signals CTRL1 and CTRL2 generated by the earphones EB1 and EB2, and the electronic device 70 can receive the control signal CTRL2 generated by the earphone EB2, so as to implement the gesture detection function. In one embodiment, the user can set the designated earphone corresponding to each audio event through the headset service kit 151. Table 2 lists audio events and the corresponding designated earphones, but is not limited thereto.
(Table 2 is reproduced as an image in the original publication.)
In the normal mode, the earphone set (i.e., the wireless earphone set 11 or the wired earphone set 12 of FIG. 1) designates both earphones to play a single audio source when any audio event occurs. In the audio multiplexing mode, when a "dual audio coexistence" event occurs, the user may designate the primary earphone (e.g., the preferred ear) to play the primary audio source (e.g., voice) and designate the secondary earphone (e.g., the non-preferred ear) to play the secondary audio source (e.g., music). Since a user's left and right ears may have different hearing abilities due to congenital factors or accidents, the invention provides a preferred-ear setting service to suit the needs of the user.
In addition, in the audio multiplexing mode, when an "external device call" event occurs, the user can designate the primary earphone to play a single audio source. For example, when a plurality of users listen to different music respectively using the earphones EB1 and EB2, if the electronic device 70 receives an incoming call, the primary earphone (e.g., the earphone EB1) plays the ring tone and voice, and the secondary earphone (e.g., the earphone EB2) continues to play the music.
When performing multi-point pairing, the wireless earphone set 11 needs to determine from which electronic device (10 or 70) it should receive the audio stream. In one embodiment, the wireless earphone set 11 makes this determination according to the priority of the audio event. Table 3 lists audio events and corresponding priorities, but is not limited thereto.
(Table 3 is reproduced as an image in the original publication.)
For example, the user may set the priority of each audio event through the headset service kit 151, or the priority of the audio event may be set at the factory. Assuming that the electronic device 10 is a main device and the electronic device 70 is a first secondary device, the electronic devices 10 and 70 may determine the priority of an audio event according to their own roles. Therefore, when the earphone EB2 receives the output audio streams AU1 and AU3 at the same time, it may receive the output audio stream with the higher priority and ignore the one with the lower priority.
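A minimal sketch of the priority rule: when an earphone receives streams from both paired devices at once, it keeps the stream whose audio event has the higher priority. The event names and priority values below are illustrative only, since Table 3 is published as an image:

```python
# Illustrative priorities: a larger number means a more important audio event.
EVENT_PRIORITY = {
    "external device incoming call": 3,
    "incoming call": 2,
    "music": 1,
}

def choose_stream(streams):
    """streams: list of (source_device, audio_event); return the stream to play."""
    return max(streams, key=lambda s: EVENT_PRIORITY.get(s[1], 0))

# Earphone EB2 receives AU1 (music from device 10) and AU3 (a call on device 70)
selected = choose_stream([("electronic device 10", "music"),
                          ("electronic device 70", "external device incoming call")])
print(selected)   # -> ('electronic device 70', 'external device incoming call')
```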
Fig. 8 is a schematic diagram of the audio transmission system 7 operating in a third scenario according to the second embodiment of the present invention. Assume that the third scenario is that the user listens to music through the electronic device 10 and puts the electronic device 70 aside on standby. First, after the earphones EB1 and EB2 are paired and connected with the electronic device 10, they join a Personal Area Network (PAN) PN1 of the electronic device 10 to receive the output audio stream AU1. Then, after the earphone EB2 and the electronic device 70 complete pairing and connection, the earphone EB2 joins a personal area network PN2 of the electronic device 70 to receive the output audio stream AU3. Since the electronic devices 10 and 70 transmit the output audio streams AU1 and AU3 according to the priority of the audio events, the earphone EB2 can determine from which electronic device the audio stream should be received according to the priority of the audio events (as shown in Table 3). For example, according to the embodiment of Table 3, when the user listens to music through the electronic device 10, if the electronic device 70 receives an incoming call, the earphone EB2 receives the output audio stream AU3 from the electronic device 70 in both the normal mode and the audio multiplexing mode, because the incoming call of the external device has a higher priority than the music. In detail, in the general mode, the earphones EB1 and EB2 receive the output audio stream AU1 from the electronic device 10. On the other hand, in the audio multiplexing mode, the earphone EB1 receives the output audio stream AU1 from the electronic device 10, and the earphone EB2 can determine whether to receive the output audio stream AU1 from the electronic device 10 or the output audio stream AU3 from the electronic device 70 according to the user setting.
Therefore, in the second embodiment, the invention applies the multi-point pairing and gesture detection functions to audio data transmission and provides a preferred-ear setting service to meet the requirements of users.
The operation of the electronic devices 10 and 70 can be summarized as a process of transmitting audio data. As shown in FIG. 9, the process of transmitting audio data includes the following steps.
Step 901: a first audio event is performed.
Step 902: it is determined whether a second audio event has occurred. If yes, go to step 903; if not, go back to step 901.
Step 903: it is determined whether the first mode is set. If yes, go to step 904; if not, go to step 906.
Step 904: an audio stream including a first audio source of a first audio event and a second audio source of a second audio event is transmitted according to the audio event priority.
Step 905: Determine whether a first gesture is received. If yes, go back to step 904; if not, go back to step 901.
Step 906: an audio stream of an audio source including a second audio event is transmitted.
Step 907: Determine whether the first gesture is received. If yes, go back to step 906; if not, go back to step 901.
Taking the electronic device 10 as an example, in step 901, the electronic device 10 performs a first audio event (e.g., transmitting music data to the wireless earphone set 11). At step 902, the electronic device 10 determines whether a second audio event has occurred (e.g., whether there is an incoming call). In step 903, when the second audio event occurs, the electronic device 10 determines whether it is set to the first mode (e.g., the audio multiplexing mode). At step 904, when the electronic device 10 is set to the first mode, the electronic device 10 transmits an audio stream comprising a first audio source of the first audio event (e.g., music) and a second audio source of the second audio event (e.g., a ring tone) to the wireless earphone set 11 according to the audio event priority. In step 905, the electronic device 10 determines whether a first gesture (e.g., a gesture corresponding to the 'select/turn on' function) is received from the wireless earphone set 11. When the electronic device 10 receives the first gesture, the electronic device 10 returns to step 904. When the electronic device 10 does not receive the first gesture but receives a second gesture (e.g., a gesture corresponding to the 'reject/hang up' function), the electronic device 10 returns to step 901. In one embodiment, when the electronic device 10 does not receive any gesture within a predetermined time, the incoming call is not answered, and the electronic device 10 returns to step 901. In one embodiment, when the incoming call is ended, the electronic device 10 returns to step 901.
On the other hand, in step 906, when the electronic device 10 is not set to the first mode, the electronic device 10 is set to the second mode (e.g., the normal mode), and the electronic device 10 transmits an audio stream including a second audio source (e.g., a ring tone) of a second audio event to the wireless headset 11. In step 907, the electronic device 10 determines whether the first gesture is received. When the electronic device 10 receives the first gesture, the electronic device 10 returns to step 906, i.e., transmits the audio stream of the second audio source including the second audio event to the wireless headset 11. When the electronic device 10 does not receive the first gesture, the electronic device 10 receives the second gesture, and the electronic device 10 returns to step 901 to perform the first audio event.
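The branching of steps 903 to 907 can be summarized in a simplified sketch that returns which audio sources keep being transmitted after a second audio event (e.g., an incoming call) occurs during a first event (e.g., music); the mode flag and gesture strings are illustrative:

```python
def handle_second_audio_event(multiplex_mode: bool, gesture: str) -> list:
    """Rough model of steps 903 to 907 in FIG. 9 for one transmitting device."""
    if multiplex_mode:                                   # step 903: first mode?
        if gesture == "select/turn on":                  # step 905: call accepted
            return ["first event source", "second event source"]   # step 904
        return ["first event source"]                    # rejected: back to step 901
    if gesture == "select/turn on":                      # step 907
        return ["second event source"]                   # step 906: only the new event
    return ["first event source"]                        # back to step 901

print(handle_second_audio_event(True, "select/turn on"))
# -> ['first event source', 'second event source']
```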
It is noted that, at step 904, the electronic devices 10 and 70 transmit audio streams including the audio sources of the first audio event and the second audio event according to the audio event priority. In this way, when the electronic devices 10 and 70 transmit audio streams simultaneously, the earphone EB2 can determine from which electronic device it should receive the audio stream according to the priority of the audio events. For example, when the audio stream transmitted by the electronic device 70 includes incoming call voice with a higher priority and the audio stream transmitted by the electronic device 10 includes music with a lower priority, the earphone EB2 receives the audio stream from the electronic device 70 and ignores the audio stream from the electronic device 10.
In one embodiment, after step 905, the electronic devices 10 and 70 determine whether the first gesture is received from a first output terminal (e.g., the earphone EB1) or a second output terminal (e.g., the earphone EB2) of the receiving end, and transmit the audio stream including the audio sources of the first audio event and the second audio event according to the channel of the first output terminal or the second output terminal.
In summary, through the process of transmitting audio data of the present invention, the electronic device of the present invention can convert a plurality of audio sources into a single audio stream, so that a receiving end (e.g., a wireless earphone set or a wired earphone set) can receive the audio data of the plurality of audio sources, thereby providing diversified services for users. Furthermore, the invention applies the multi-point pairing and gesture detection functions to audio data transmission and provides a preferred-ear setting service to meet users' requirements.
Although the present disclosure has been described with reference to particular embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the disclosure, and therefore the scope of the disclosure is to be determined by the appended claims.

Claims (9)

1. A method for transmitting audio data in an audio transmission system, the audio transmission system including a transmitting end and a receiving end, the method comprising:
judging whether the transmission terminal is set to a first mode; and
when the transmission end is set to the first mode, generating an audio stream comprising a plurality of audio samples according to a setting signal, a plurality of first audio packets and a plurality of second audio packets, and transmitting the audio stream to the receiving end;
wherein the first audio packets are from a first audio source and the second audio packets are from a second audio source; and
each of the plurality of audio samples comprises one of the plurality of first audio packets and one of the plurality of second audio packets;
when the transmission terminal carries out a first audio event, judging whether a second audio event occurs;
when the second audio event occurs and the transmission terminal is set to the first mode, transmitting the audio stream including a first audio source of the first audio event and a second audio source of the second audio event according to the priority of the first audio event and the priority of the second audio event;
the method further comprises the following steps:
judging whether a first gesture is received from the receiving end; and
when the first gesture is received from the receiving end, the audio stream including the first audio source of the first audio event and the second audio source of the second audio event is transmitted according to the priority of the first audio event and the priority of the second audio event.
2. The method of claim 1, wherein the first audio source and the second audio source are two of a speech data, a first music data and a second music data.
3. The method of claim 1, wherein the setting signal indicates a primary audio source corresponding to a first channel and a secondary audio source corresponding to a second channel, the step of converting the audio sources into the audio stream comprises:
when the first audio source is the main audio source, writing the first audio packets into the data format of the audio samples corresponding to the first channel, and writing the second audio packets into the data format of the audio samples corresponding to the second channel; or
When the first audio source is the secondary audio source, the first audio packets are respectively written into the data format of the second channel in the audio samples, and the second audio packets are respectively written into the data format of the first channel in the audio samples.
4. The method of claim 1, further comprising:
when the second audio event occurs and the transmitting end is not set to the first mode, transmitting the audio stream including the second audio source of the second audio event.
5. The method of claim 1, wherein when the second audio event occurs and the transmitting end is set to the first mode, the method further comprises:
when the first gesture is not received from the receiving end and a second gesture is received from the receiving end, the audio stream including the first audio source of the first audio event is transmitted to perform the first audio event.
6. The method of claim 1, wherein when the second audio event ends, the method further comprises:
transmitting the audio stream including the first audio source of the first audio event to perform the first audio event.
7. The method of claim 1, wherein when the second audio event occurs and the transmitting end is set to the first mode, the method further comprises:
judging that the first gesture is received from a first output end or a second output end of the receiving end;
transmitting the audio stream including the first audio source of the first audio event and the second audio source of the second audio event according to a channel of the first output terminal when the first gesture is received from the first output terminal of the receiving terminal; and
when the first gesture is received from the second output of the receiving end, the audio stream of the first audio source including the first audio event and the second audio source including the second audio event is transmitted according to the channel of the second output.
8. The method as claimed in claim 4, wherein the transmitter is set to a second mode when the second audio event occurs and the transmitter is not set to the first mode, the method further comprising:
judging whether a first gesture is received from the receiving end; and
transmitting the audio stream including the second audio source of the second audio event when the first gesture is received from the receiving end.
9. The method of claim 1, further comprising:
when a third gesture is received from the receiving end and the transmitting end is set to be in the first mode, the transmitting end is set to be in a second mode; or
When the third gesture is received from the receiving end and the transmitting end is set to the second mode, the transmitting end is set to the first mode.
CN202110193355.7A 2021-02-20 2021-02-20 Method for transmitting audio data Active CN112954528B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110193355.7A CN112954528B (en) 2021-02-20 2021-02-20 Method for transmitting audio data
TW110112977A TWI841832B (en) 2021-02-20 2021-04-09 Method of transmitting audio data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110193355.7A CN112954528B (en) 2021-02-20 2021-02-20 Method for transmitting audio data

Publications (2)

Publication Number Publication Date
CN112954528A CN112954528A (en) 2021-06-11
CN112954528B true CN112954528B (en) 2023-01-24

Family

ID=76244824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110193355.7A Active CN112954528B (en) 2021-02-20 2021-02-20 Method for transmitting audio data

Country Status (2)

Country Link
CN (1) CN112954528B (en)
TW (1) TWI841832B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6937724B1 (en) * 1999-03-29 2005-08-30 Siemens Information & Communication Networks, Inc. Apparatus and method for delivery of ringing and voice calls through a workstation
CN102227917A (en) * 2008-12-12 2011-10-26 高通股份有限公司 Simultaneous mutli-source audio output at wireless headset
CN106162446A (en) * 2016-06-28 2016-11-23 乐视控股(北京)有限公司 Audio frequency playing method, device and earphone
CN106358126A (en) * 2016-09-26 2017-01-25 宇龙计算机通信科技(深圳)有限公司 Multi-audio frequency playing method, system and terminal
CN109445740A (en) * 2018-09-30 2019-03-08 Oppo广东移动通信有限公司 Audio frequency playing method, device, electronic equipment and storage medium
CN109862475A (en) * 2019-01-28 2019-06-07 Oppo广东移动通信有限公司 Audio-frequence player device and method, storage medium, communication terminal
CN110856068A (en) * 2019-11-05 2020-02-28 南京中感微电子有限公司 Communication method of earphone device
CN111770408A (en) * 2020-07-07 2020-10-13 Oppo(重庆)智能科技有限公司 Control method, control device, wireless headset and storage medium
CN112218197A (en) * 2019-07-12 2021-01-12 络达科技股份有限公司 Audio compensation method and wireless audio output device using same

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003347956A (en) * 2002-05-28 2003-12-05 Toshiba Corp Audio output apparatus and control method thereof
CN105635903A (en) * 2014-11-05 2016-06-01 淇誉电子科技股份有限公司 Method and system used for wireless connection and control of wireless sound box
KR102648190B1 (en) * 2016-12-20 2024-03-18 삼성전자주식회사 Content output system, display apparatus and control method thereof
WO2019006588A1 (en) * 2017-07-03 2019-01-10 深圳市汇顶科技股份有限公司 Audio system and headphone
CN109218873A (en) * 2017-07-03 2019-01-15 中兴通讯股份有限公司 Wireless headset and the method for playing audio
CN108847012A (en) * 2018-04-26 2018-11-20 Oppo广东移动通信有限公司 Control method and relevant device
CN112068794A (en) * 2020-07-27 2020-12-11 湖北亿咖通科技有限公司 Audio mixing control method, device, electronic device and storage medium


Also Published As

Publication number Publication date
CN112954528A (en) 2021-06-11
TW202234867A (en) 2022-09-01
TWI841832B (en) 2024-05-11

Similar Documents

Publication Publication Date Title
JP3905509B2 (en) Apparatus and method for processing audio signal during voice call in mobile terminal for receiving digital multimedia broadcast
US6678362B2 (en) System and method for effectively managing telephone functionality by utilizing a settop box
WO2020132839A1 (en) Audio data transmission method and device applied to monaural and binaural modes switching of tws earphone
US20100064329A1 (en) Communication system and method
US10425758B2 (en) Apparatus and method for reproducing multi-sound channel contents using DLNA in mobile terminal
US20170195817A1 (en) Simultaneous Binaural Presentation of Multiple Audio Streams
GB2460219A (en) Interaction between Audio/Visual Display Appliances and Mobile Devices
US20220286538A1 (en) Earphone device and communication method
US12052556B2 (en) Terminal, audio cooperative reproduction system, and content display apparatus
CN112804610B (en) Method for controlling Microsoft Teams on PC through TWS Bluetooth headset
CN117296348A (en) Method and electronic device for Bluetooth audio multi-streaming
CN111190568A (en) Volume adjusting method and device
CN110149620A (en) A kind of control method of intelligent earphone, device, intelligent earphone and storage medium
CN118741471A (en) Display device and device pairing method
CN112954528B (en) Method for transmitting audio data
CN113115290B (en) Method for receiving audio data
CN102196107A (en) Telephone system
US10206031B2 (en) Switching to a second audio interface between a computer apparatus and an audio apparatus
CN103988516A (en) Communication method, system and mobile terminal in application of mobile broadcast television
WO2017215118A1 (en) Earphone having information processing function, and terminal device and information processing system
CN114501401A (en) Audio transmission method and device, electronic equipment and readable storage medium
CN109818979A (en) A kind of method, apparatus that realizing audio return, equipment and storage medium
CN112423197A (en) Method and device for realizing multipath Bluetooth audio output
JP3144831U (en) Wireless audio system with stereo output
KR20080029415A (en) Apparatus and method for palying multimedia using local area wireless communication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant