CN112954528A - Method for transmitting audio data - Google Patents

Method for transmitting audio data

Info

Publication number
CN112954528A
CN112954528A (application number CN202110193355.7A)
Authority
CN
China
Prior art keywords
audio
event
source
mode
gesture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110193355.7A
Other languages
Chinese (zh)
Other versions
CN112954528B (en)
Inventor
边文裕
胡胜雄
張景嵩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inventec Appliances Shanghai Corp
Original Assignee
Inventec Appliances Shanghai Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Appliances Shanghai Corp filed Critical Inventec Appliances Shanghai Corp
Priority to CN202110193355.7A priority Critical patent/CN112954528B/en
Priority to TW110112977A priority patent/TWI841832B/en
Publication of CN112954528A publication Critical patent/CN112954528A/en
Application granted granted Critical
Publication of CN112954528B publication Critical patent/CN112954528B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/10 Earpieces; Attachments therefor; Earphones; Monophonic headphones
    • H04R1/1041 Mechanical or electronic switches, or control elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/04 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using predictive techniques
    • G10L19/16 Vocoder architecture
    • G10L19/167 Audio streaming, i.e. formatting and decoding of an encoded audio signal representation into a data stream for transmission or storage purposes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W56/00 Synchronisation arrangements
    • H04W56/001 Synchronization between nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Telephone Function (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Stereophonic System (AREA)

Abstract

A method for transmitting audio data is provided for an audio transmission system that comprises a transmission end and a receiving end. The method includes determining whether the transmission end is set to a first mode and, when it is, converting a plurality of audio sources into an audio stream. The audio sources include a first audio source comprising a plurality of first audio packets and a second audio source comprising a plurality of second audio packets, and the audio stream comprises a plurality of audio samples, each of which contains one of the first audio packets and one of the second audio packets.

Description

Method for transmitting audio data
Technical Field
The present invention relates to a method for transmitting audio data, and more particularly, to a method for transmitting audio data capable of transmitting audio data of a plurality of audio sources simultaneously.
Background
As electronic devices (e.g., smart phones, tablet computers, desktop computers) grow more capable, users obtain diversified services through them (e.g., calls, web browsing, music, movies, games). If those services cannot be provided at the same time, one of them may be forcibly interrupted, or the user may miss important information. For example, when a user is watching a live broadcast of an athletic competition and a call comes in, the phone automatically switches to the incoming-call screen and plays a ringtone. With the live broadcast forcibly interrupted, the user is likely to miss a highlight of the game.
In addition, current external audio playing devices (such as wireless and wired earphone sets) can only play audio from a single source at any one time. Some of these devices offer a noise reduction function to provide better audio quality. Meanwhile, more and more users carry multiple phones for different purposes, such as a work phone and a personal phone. In practice, a user may wear headphones connected to the personal phone to watch a live broadcast while leaving the work phone aside on standby. However, because the user is absorbed in the broadcast, or because the headphones' noise reduction blocks out ambient sound, the user is likely to miss important work-related information.
In view of these shortcomings, how to transmit audio data from multiple audio sources simultaneously, so as to provide diversified services to users, has become an important issue in the field.
Disclosure of Invention
To solve the above problems, the present invention provides a method for transmitting audio data. The method is used in an audio transmission system that comprises a transmission end and a receiving end, and includes determining whether the transmission end is set to a first mode and, when it is, converting a plurality of audio sources into an audio stream. The audio sources include a first audio source comprising a plurality of first audio packets and a second audio source comprising a plurality of second audio packets. Converting the audio sources into the audio stream comprises generating, according to a setting signal, the first audio packets and the second audio packets, an audio stream comprising a plurality of audio samples, each of which contains one of the first audio packets and one of the second audio packets. The first audio source and the second audio source are two of a voice data, a first music data and a second music data.
With the method for transmitting audio data of the present invention, a transmitting end (e.g., an electronic device) can convert a plurality of audio sources into a single audio stream, so that a receiving end (e.g., a wireless or wired earphone set) receives the audio data of all of those sources, providing diversified services to users. Furthermore, the invention applies multipoint pairing and gesture detection to audio data transmission and provides a per-earphone channel assignment service to meet users' needs.
Drawings
In order to make the aforementioned and other objects, features, advantages and embodiments of the invention more comprehensible, the following description is given:
fig. 1 is a functional block diagram of an audio transmission system according to a first embodiment of the present invention.
FIG. 2 is a diagram illustrating an audio processing unit operating in a first scenario according to a first embodiment of the invention.
FIG. 3 is a diagram illustrating an electronic device operating in a first scenario according to a first embodiment of the invention.
FIG. 4 is a diagram illustrating an audio processing unit operating in a second scenario according to the first embodiment of the invention.
FIG. 5 is a diagram illustrating an electronic device operating in a second scenario according to the first embodiment of the invention.
FIG. 6 is a flowchart illustrating a process of transmitting audio data according to a first embodiment of the present invention.
Fig. 7 is a functional block diagram of an audio transmission system according to a second embodiment of the invention.
Fig. 8 is a schematic diagram of the audio transmission system operating in a third scenario according to the second embodiment of the present invention.
FIG. 9 is a flowchart illustrating a process of transmitting audio data according to a second embodiment of the present invention.
Reference numerals
1: audio transmission system
10, 70: electronic device
11: wireless earphone set
12: wired earphone set
100: processor
101: voice modem
102: audio processing unit
103: wireless transceiver
104: audio output circuit
105: memory
151: headset service kit
152: application program
202: mixing unit
600 to 607, 901 to 907: steps
711, 721: microcontroller
712, 722: gesture sensor
AS1, AS2: audio source
AU1, AU2, AU3: output audio stream
CTRL1, CTRL2: control signal
EB1, EB2, EB3, EB4: earphone
PN1, PN2: personal area network
MD1, MD2: music data
SM1, SM2: audio stream
ST: setting signal
U1, U2: user
VD: voice data
WSM1: wireless transmission data
A1-AN, C1-CN: first audio packet
B1-BN: second audio packet
SP1-SPN, SP1'-SPN': audio sample
Detailed Description
The following embodiments are described in detail with reference to the accompanying drawings, but they serve only to explain the present invention, not to limit it; the description of structural operations does not limit their order of execution, and any device with equivalent functions is covered by the present invention.
Fig. 1 is a block diagram of an audio transmission system 1 according to a first embodiment of the present invention. The audio transmission system 1 includes a transmission end (Source), which may be an electronic device 10, and a receiving end (Sink), which may be an earphone set. The earphone set may be a wireless earphone set 11 or a wired earphone set 12. When the earphone set is the wireless earphone set 11, it may be paired and connected with the electronic device 10 through Bluetooth wireless communication to receive an output audio stream AU1 from the electronic device 10. On the other hand, when the earphone set is the wired earphone set 12, it is connected to the electronic device 10 by a wire to receive an output audio stream AU2 from the electronic device 10.
The electronic device 10 includes a processor 100, a voice modem 101, an audio processing unit 102, a wireless transceiver 103, an audio output circuit 104, and a memory 105. Structurally, the processor 100 is coupled to the voice modem 101, the audio processing unit 102, the wireless transceiver 103, and the memory 105, and generates the audio sources AS1 and AS2. The audio processing unit 102 is coupled to the processor 100 and the audio output circuit 104, and generates an audio stream SM1 or SM2 according to at least one of the audio sources AS1 and AS2.
In one embodiment, the processor 100 is further configured to generate a setting signal ST to the audio processing unit 102, wherein the setting signal ST indicates a wireless transmission path or a wired transmission path. For example, when the setting signal ST indicates the wireless transmission path, the audio processing unit 102 generates the audio stream SM1 to the processor 100, and the processor 100 generates wireless transmission data WSM1 to the wireless transceiver 103 according to the audio stream SM1. The wireless transceiver 103 is coupled to the processor 100 and configured to generate an output audio stream AU1 according to the wireless transmission data WSM1 and transmit it to the wireless earphone set 11 through Bluetooth wireless communication. In an embodiment of the invention, the wireless transceiver 103 delivers the output audio stream AU1 to the wireless earphone set 11 based on the Bluetooth Low Energy (BT LE) communication standard version 5.2, but is not limited thereto. On the other hand, when the setting signal ST indicates the wired transmission path, the audio processing unit 102 generates an audio stream SM2 to the audio output circuit 104. The audio output circuit 104 is coupled to the audio processing unit 102 and generates an output audio stream AU2 to the wired earphone set 12 according to the audio stream SM2.
In one embodiment, when the electronic device 10 and the wireless earphone set 11 are successfully paired, the processor 100 generates a setting signal ST indicating the wireless transmission path to the audio processing unit 102; on the other hand, when the audio output circuit 104 detects the wired earphone set 12, it notifies the processor 100 to generate a setting signal ST indicating the wired transmission path.
In one embodiment, the setting signal ST also indicates a general mode or an audio multiplexing mode. When the setting signal ST indicates the general mode and the wireless transmission path, the audio processing unit 102 generates the audio stream SM1 for the wireless earphone set 11 according to a single audio source (e.g., the audio source AS1 or AS2). When the setting signal ST indicates the general mode and the wired transmission path, the audio processing unit 102 generates the audio stream SM2 for the wired earphone set 12 according to the single audio source. When the setting signal ST indicates the audio multiplexing mode and the wireless transmission path, the audio processing unit 102 generates the audio stream SM1 for the wireless earphone set 11 according to a plurality of audio sources (e.g., the audio sources AS1 and AS2). When the setting signal ST indicates the audio multiplexing mode and the wired transmission path, the audio processing unit 102 generates the audio stream SM2 for the wired earphone set 12 according to the plurality of audio sources.
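The four mode/path combinations that the setting signal ST can indicate might be modeled as follows (an illustrative Python sketch; the class and function names are assumptions, not part of the patent):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SettingSignal:
    multiplex: bool  # True: audio multiplexing mode; False: general mode
    wireless: bool   # True: wireless path (SM1); False: wired path (SM2)

def describe(st: SettingSignal) -> str:
    # Map the two flags to the behavior described in the paragraph above.
    mode = "multiple sources" if st.multiplex else "single source"
    path = "SM1 -> wireless earphone set" if st.wireless else "SM2 -> wired earphone set"
    return f"mix {mode}, output {path}"

print(describe(SettingSignal(multiplex=True, wireless=False)))
# mix multiple sources, output SM2 -> wired earphone set
```

The two boolean flags are orthogonal, which is why the paragraph above enumerates exactly four cases.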
The voice modem 101 is coupled to the processor 100 for generating voice data VD. In practice, the voice modem 101 generates the voice data VD when a user makes a call through the electronic device 10, so the voice data VD can serve as an audio source.
The memory 105 is coupled to the processor 100 and stores a headset service kit 151 and a plurality of application programs 152. The application programs 152 include at least one of a voice call program, a web call program, a music playing program, a video playing program, and an FM broadcast program, but are not limited thereto. When two of the application programs 152 are executed, the processor 100 retrieves music data MD1 and MD2 from the memory 105, so the music data MD1 and MD2 can serve as different audio sources. In one embodiment, the processor 100 generates the audio source AS1 from the voice data VD and the audio source AS2 from the music data MD1 (or MD2). In another embodiment, the processor 100 generates the audio sources AS1 and AS2 from the music data MD1 and MD2, respectively.
In other words, the audio sources AS1 and AS2 may be two of the voice data VD and the music data MD1 and MD2. Therefore, the audio processing unit 102 of the present invention can convert multiple audio sources into a single audio stream, so that the earphone set can receive multiple audio sources simultaneously. In this way, a user can listen to audio data of multiple audio sources through the earphones EB1 and EB2 (see fig. 3) of an earphone set (e.g., the wireless earphone set 11 or the wired earphone set 12).
FIG. 2 is a diagram illustrating the audio processing unit 102 operating in a first scenario according to the first embodiment of the invention. Assume the first scenario is that the user is talking on a call and listening to music at the same time, so the audio source AS1 includes the voice data VD and the audio source AS2 includes the music data MD2. The audio source AS1 includes a plurality of first audio packets A1-AN, and the audio source AS2 includes a plurality of second audio packets B1-BN. The audio processing unit 102 includes a mixing unit 202 for generating an audio stream SM1 according to the setting signal ST, the first audio packets A1-AN and the second audio packets B1-BN. The audio stream SM1 includes a plurality of audio samples SP1-SPN, each of which includes one first audio packet and one second audio packet. For example, the audio sample SP1 includes the first audio packet A1 and the second audio packet B1; the audio sample SP2 includes the first audio packet A2 and the second audio packet B2; and so on, until the audio sample SPN includes the first audio packet AN and the second audio packet BN. Each of the audio samples SP1-SPN is, for example, a Pulse Code Modulation (PCM) packet, but is not limited thereto. In an embodiment, the mixing unit 202 may be implemented by a hardware circuit or a software program.
In one embodiment, the setting signal ST indicates whether the audio source AS1 corresponds to a first channel or a second channel, where the first channel is, for example, the right channel and the second channel the left channel, but not limited thereto. Assuming the setting signal ST indicates that the audio source AS1 corresponds to the first channel, the mixing unit 202 of the audio processing unit 102 writes the first audio packets A1-AN of the audio source AS1 into the first-channel data fields of the audio samples SP1-SPN, respectively, and writes the second audio packets B1-BN of the audio source AS2 into the second-channel data fields of the audio samples SP1-SPN, respectively.
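Purely as an illustration and not part of the patent disclosure, the per-channel packet interleaving described above can be sketched in Python; the dictionary-based sample type and the function name are hypothetical stand-ins for the PCM sample format the patent describes:

```python
def mix_to_stream(first_packets, second_packets, first_source_channel="right"):
    """Interleave packets of two audio sources into stereo PCM-like samples.

    Each output sample carries one packet from each source, written into the
    data field of the channel that the setting signal assigns to it.
    """
    stream = []
    for a_pkt, b_pkt in zip(first_packets, second_packets):
        if first_source_channel == "right":
            sample = {"right": a_pkt, "left": b_pkt}
        else:
            sample = {"right": b_pkt, "left": a_pkt}
        stream.append(sample)
    return stream

# AS1 = voice packets A1..AN, AS2 = music packets B1..BN (cf. Fig. 2)
sm1 = mix_to_stream(["A1", "A2", "A3"], ["B1", "B2", "B3"])
# sm1[0] == {"right": "A1", "left": "B1"}
```

The key design point is that the two sources never overwrite each other: each occupies a distinct channel slot of every sample, so a receiver only needs to know its own channel.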
Fig. 3 is a schematic diagram illustrating the electronic device 10 operating in the first scenario according to the first embodiment of the invention. In the first scenario, a user U1 wears the wireless earphone set 11 while listening to music and receiving a call, so the electronic device 10 executes the music playing program and the call program simultaneously. The wireless earphone set 11 includes an earphone EB1 and an earphone EB2, and is, for example, a True Wireless Stereo (TWS) Bluetooth earphone set that may support Bluetooth Low Energy communication standard version 5.2, but is not limited thereto. It should be noted that, since Bluetooth Low Energy version 5.2 introduces the Connected Isochronous Stream (CIS) and the Connected Isochronous Group (CIG) in its multi-stream isochronous channel technology, the earphone EB1 and the earphone EB2 may respectively receive a first isochronous stream and a second isochronous stream from the electronic device 10, where the two streams belong to the same connected isochronous group.
Therefore, the earphone EB1 and the earphone EB2 can synchronously receive the output audio stream AU1 and extract the required first audio packets A1-AN and second audio packets B1-BN from the audio samples SP1-SPN according to their corresponding channels. Specifically, when the earphone EB1 corresponds to the first channel, the microcontroller built into the earphone EB1 retrieves the first audio packets A1-AN written in the first-channel data fields of the audio samples SP1-SPN, thereby playing the voice data VD of the audio source AS1. Meanwhile, when the earphone EB2 corresponds to the second channel, the microcontroller built into the earphone EB2 retrieves the second audio packets B1-BN written in the second-channel data fields of the audio samples SP1-SPN, thereby playing the music data MD2 of the audio source AS2.
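As a minimal sketch (with assumed names, not taken from the patent), each earphone's microcontroller could recover its own channel's packets from the shared sample stream as follows:

```python
def extract_channel(samples, channel):
    """Return the packets written into the given channel field of each sample."""
    return [sample[channel] for sample in samples]

samples = [{"right": "A1", "left": "B1"}, {"right": "A2", "left": "B2"}]
# EB1 (first channel) recovers the voice packets A1, A2;
# EB2 (second channel) recovers the music packets B1, B2.
print(extract_channel(samples, "right"))  # ['A1', 'A2']
print(extract_channel(samples, "left"))   # ['B1', 'B2']
```

Extraction is the exact inverse of the mixing step: both earphones receive the same stream, and the channel assignment alone decides which source each one plays.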
Therefore, in the audio multiplexing mode, the earphones EB1 and EB2 of the earphone set can simultaneously play audio data from different audio sources. In practice, when the user U1 receives an incoming call while listening to music (or to the soundtrack of a broadcast, video, or game), the user can listen to the call through the earphone EB1 and continue listening to the music through the earphone EB2, and thus does not miss the content being followed by answering the call. After the user U1 hangs up, the processor 100 stops executing the call program, the voice modem 101 stops generating the voice data VD, and the user U1 listens to the music through both the earphone EB1 and the earphone EB2.
FIG. 4 is a diagram illustrating the audio processing unit 102 operating in a second scenario according to the first embodiment of the invention. Assume the second scenario is that multiple users are listening to different music data, so the audio source AS1 is the music data MD1 and the audio source AS2 is the music data MD2. The audio source AS1 includes a plurality of first audio packets C1-CN, and the audio source AS2 includes a plurality of second audio packets B1-BN. The mixing unit 202 of the audio processing unit 102 generates the audio stream SM2 according to the setting signal ST, the first audio packets C1-CN and the second audio packets B1-BN. The audio stream SM2 includes a plurality of audio samples SP1'-SPN', where the audio sample SP1' includes the first audio packet C1 and the second audio packet B1; the audio sample SP2' includes the first audio packet C2 and the second audio packet B2; and so on, until the audio sample SPN' includes the first audio packet CN and the second audio packet BN. Assuming the setting signal ST indicates that the audio source AS1 corresponds to the first channel, the mixing unit 202 writes the first audio packets C1-CN into the first-channel data fields of the audio samples SP1'-SPN', respectively, and writes the second audio packets B1-BN into the second-channel data fields of the audio samples SP1'-SPN', respectively.
FIG. 5 is a diagram illustrating the electronic device 10 operating in the second scenario according to the first embodiment of the invention. In the second scenario, users U1 and U2 wear the earphones EB3 and EB4 of the wired earphone set 12, respectively. The user U1 listens to the music data MD1 of a first music playing program while the user U2 listens to the music data MD2 of a second music playing program, so the electronic device 10 executes the first and second music playing programs simultaneously.
The earphones EB3 and EB4 synchronously receive the output audio stream AU2 and extract the required audio packets C1-CN and B1-BN from the audio samples SP1'-SPN' according to their corresponding channels. Specifically, when the earphone EB3 corresponds to the first channel, the earphone EB3 plays the music data MD1 of the audio source AS1; meanwhile, when the earphone EB4 corresponds to the second channel, the earphone EB4 plays the music data MD2 of the audio source AS2.
In one embodiment, the user U1 (or U2) can select the general mode or the audio multiplexing mode, and the first channel or the second channel, through the headset service kit 151. For example, when the users U1 and U2 want to watch a video together, the user U1 (or U2) can set the general mode through the headset service kit 151, and the audio stream SM2 then carries only a single audio source. When the user U1 wants to watch a video while the user U2 wants to listen to a broadcast program, the user U1 (or U2) can set the audio multiplexing mode through the headset service kit 151, designating the earphone EB3, corresponding to the first channel, to play the video's audio and the earphone EB4, corresponding to the second channel, to play the broadcast program's audio; the audio stream SM2 then carries multiple audio sources. Similarly, in the embodiment of figs. 3 and 4, when the user U1 wants to listen to a call and to music simultaneously, the user U1 can set the audio multiplexing mode through the headset service kit 151 and designate one of the earphones EB1 and EB2 to play the voice and the other to play the music.
Briefly, in the first embodiment, the electronic device 10 supports the audio multiplexing mode: the audio processing unit 102 converts the audio sources AS1 and AS2 into a single audio stream (SM1 or SM2), so that the earphone set can receive the audio data (e.g., voice or music) of both audio sources. In addition, the headset service kit 151 can be used to set the general mode or the audio multiplexing mode and the first or second channel, so that a user can select the desired audio playing service. In this way, the electronic device 10 provides diversified audio playing services to improve the user experience.
It should be noted that the audio processing unit 102 of the electronic device 10 mixes the audio data of the audio sources AS1 and AS2 and regenerates them as a single audio stream, so the invention requires no change to the design of existing earphone sets. Similarly, the headset service kit 151 is installed on the electronic device 10 and controls the audio processing unit 102, again without involving the design of existing earphone sets. Therefore, any wireless or wired earphone set supporting multi-stream synchronous channel technology can support the audio multiplexing mode of the invention.
The operation of the electronic device 10 can be summarized as a process of transmitting audio data. As shown in fig. 6, the process includes the following steps.
Step 600: Start.
Step 601: Determine whether the first mode is set. If yes, go to step 602; if not, go to step 605.
Step 602: Convert a plurality of audio sources into an audio stream. Proceed to step 603.
Step 603: Determine whether the first path is set. If yes, go to step 604; if not, go to step 606.
Step 604: Transmit the audio stream through the first path. Proceed to step 607.
Step 605: Convert a single audio source into an audio stream. Proceed to step 603.
Step 606: Transmit the audio stream through the second path.
Step 607: End.
In step 601, the audio processing unit 102 determines whether the electronic device 10 is set to the first mode (e.g., audio multiplexing mode) according to the setting signal ST. In step 602, when the setting signal ST indicates that the electronic device 10 is set to the first mode, the audio processing unit 102 converts the audio sources into an audio stream; alternatively, in step 605, when the setting signal ST indicates that the electronic device 10 is not set to the first mode, the electronic device 10 is set to the second mode (e.g., the normal mode), and the audio processing unit 102 converts the single audio source into an audio stream. In step 603, the audio processing unit 102 determines whether the electronic device 10 is set as the first path (e.g., wireless transmission path) according to the setting signal ST. In step 604, when the setting signal ST indicates that the electronic device 10 is set to the first path, the electronic device 10 transmits the audio stream through the first path; alternatively, in step 606, when the setting signal ST indicates that the electronic device 10 is not set as the first path, the electronic device 10 is set as the second path (e.g., the wired transmission path), and the electronic device 10 transmits the audio stream through the second path.
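The decision logic of steps 601 to 606 can be paraphrased as ordinary control flow (an illustrative sketch with assumed helper names, not the patent's implementation):

```python
def convert(sources):
    # Stand-in for the audio processing unit 102: interleave the packet
    # lists so each sample holds one packet per source.
    return list(zip(*sources))

def transmit_audio(sources, first_mode, first_path):
    """Steps 601-606: pick how many sources to mix, then where to send them."""
    if first_mode:                     # step 602: audio multiplexing mode
        stream = convert(sources)
    else:                              # step 605: general mode, single source
        stream = convert(sources[:1])
    path = "wireless" if first_path else "wired"  # steps 604 / 606
    return path, stream

path, stream = transmit_audio([["A1", "A2"], ["B1", "B2"]], True, True)
# path == "wireless"; stream == [("A1", "B1"), ("A2", "B2")]
```

Note that the mode decision and the path decision are independent, which is why the flowchart routes both branches of step 601 through the same path check in step 603.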
Therefore, in the first embodiment, through the above process the electronic device 10 converts the audio sources AS1 and AS2 into a single audio stream SM1 or SM2, so that the receiving end (e.g., the wireless earphone set 11 or the wired earphone set 12) receives the audio data of both sources. The invention can thus transmit audio data of multiple audio sources simultaneously, providing diversified services to users.
Fig. 7 is a block diagram of an audio transmission system 7 according to a second embodiment of the present invention. The audio transmission system 7 comprises electronic devices 10 and 70 and a wireless earphone set 11. It should be noted that, since wireless headsets on the market already support Multi-Point Pairing and gesture detection, the present invention further applies these functions to audio data transmission.
The electronic devices 10 and 70 have substantially the same structure and function; for a detailed description of the electronic device 70, refer to the first embodiment of fig. 1 to 6. The wireless earphone set 11 includes the headsets EB1 and EB2. The headset EB1 includes a first microcontroller 711 and a first gesture sensor 712. The first gesture sensor 712 is coupled to the first microcontroller 711 for detecting a gesture of a user. The headset EB2 includes a second microcontroller 721 and a second gesture sensor 722. The second gesture sensor 722 is coupled to the second microcontroller 721 for detecting a gesture of the user. In one embodiment, the first gesture sensor 712 and the second gesture sensor 722 may each be a pressure sensor or a capacitive sensor mounted on the handle of the headset EB1 or EB2, or at another location sensitive to the user's touch. In an embodiment, the first gesture sensor 712 and the second gesture sensor 722 may also be an input interface including a plurality of keys, the keys respectively corresponding to instructions such as play, pause, previous/next track, and answer/hang up, but are not limited thereto.
The first microcontroller 711 is configured to pair and connect with the electronic device 10 through bluetooth wireless communication to receive the output audio stream AU1 from the electronic device 10. The first microcontroller 711 is also used for generating a control signal CTRL1 corresponding to the gesture of the user to the electronic device 10. The second microcontroller 721 is used for pairing and connecting with the electronic device 10 through bluetooth wireless communication to receive the output audio stream AU1 from the electronic device 10. The second microcontroller 721 is further configured to generate a control signal CTRL2 corresponding to the gesture of the user to the electronic device 10. For example, table 1 illustrates the function definitions and corresponding gestures, but is not so limited.
[Table 1: function definitions and corresponding gestures — published as an image in the original document; content not reproduced here.]
Therefore, when the headset EB1 transmits the control signal CTRL1 (or the headset EB2 transmits the control signal CTRL2) to the electronic device 10, the processor 100 generates the setting signal ST to the audio processing unit 102 according to the control signal, indicating the normal mode or the audio multiplexing mode and the first channel or the second channel. For example, when the electronic device 10 is set to the normal mode, the user may press the earphone handle of the headset EB1 once, so that the headset EB1 generates the control signal CTRL1 corresponding to the "switch mode" gesture to the electronic device 10, which sets the electronic device 10 to the audio multiplexing mode.
Further, the user may press the earphone handle of the headset EB1 twice, so that the headset EB1 generates the control signal CTRL1 corresponding to the "select/turn on" gesture to the electronic device 10, and the electronic device 10 transmits audio data through the channel of the headset EB1. For example, when the user receives an incoming call while listening to music, the electronic device 10 transmits voice and music according to the channel of the headset EB1 that generated the "select/turn on" gesture: the headset EB1 plays the incoming-call ringtone while the headset EB2 continues to play the music, and the user can answer the call by pressing the earphone handle of the headset EB1 once.
In addition, the second microcontroller 721 is used for pairing and connecting with the electronic device 70 through Bluetooth wireless communication to receive an output audio stream AU3 from the electronic device 70. The second microcontroller 721 is also used to generate the control signal CTRL2 corresponding to the user's gesture to the electronic device 70. Briefly, the headset EB2 (i.e., the second microcontroller 721) can be paired and connected to the electronic devices 10 and 70 at the same time to realize the multi-point pairing function; moreover, the electronic device 10 can receive the control signals CTRL1 and CTRL2 generated by the headsets EB1 and EB2, and the electronic device 70 can receive the control signal CTRL2 generated by the headset EB2, so as to implement the gesture detection function. In one embodiment, the user can designate a specific earphone for each audio event through the earphone service kit 151. Table 2 lists audio events and the corresponding designated earphones, but is not limited thereto.
[Table 2: audio events and corresponding designated earphones — published as an image in the original document; content not reproduced here.]
In the normal mode, the earphone sets (i.e., the wireless earphone set 11 and the wired earphone set 12 of fig. 1) play a single audio on both earphones when any audio event occurs. In the audio multiplexing mode, when a "dual audio coexistence" event occurs, the user may designate the primary earphone (e.g., on the dominant ear) to play the primary audio source (e.g., speech) and the secondary earphone (e.g., on the non-dominant ear) to play the secondary audio source (e.g., music). Since the hearing of the user's left and right ears may differ due to congenital factors or injury, the invention provides a dominant-ear setting service to suit the user's needs.
In addition, in the audio multiplexing mode, when an "external device call" event occurs, the user can designate the primary earphone to play a single audio stream. For example, when multiple users listen to different music using the headsets EB1 and EB2 respectively, if the electronic device 70 receives an incoming call, the primary earphone (e.g., the headset EB1) plays the incoming-call ringtone and voice, while the secondary earphone (e.g., the headset EB2) continues to play the music.
When performing multi-point pairing, the wireless earphone set 11 needs to determine which of the electronic devices 10 and 70 it should receive the audio stream from. In one embodiment, the wireless earphone set 11 makes this determination according to the priority of the audio event. Table 3 lists audio events and the corresponding priorities, but is not limited thereto.
[Table 3: audio events and corresponding priorities — published as an image in the original document; content not reproduced here.]
For example, the user may set the priority of each audio event through the earphone service kit 151, or the priorities may be set at the factory. Assuming that the electronic device 10 is the main device and the electronic device 70 is the first secondary device, the electronic devices 10 and 70 may determine the priority of an audio event according to their own roles. Thus, when the headset EB2 receives the output audio streams AU1 and AU3 at the same time, it may receive the output audio stream with the higher priority and ignore the one with the lower priority.
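The priority-based selection described above can be sketched as follows. Because Table 3 is published only as an image, the priority values here are illustrative assumptions (an external-device call outranking music, as the text's example implies); the names `PRIORITY` and `pick_stream` are not from the patent.

```python
# Assumed priorities: higher value wins; the text's example implies an
# external-device incoming call outranks music playback.
PRIORITY = {"external_call": 3, "local_call": 2, "music": 1}

def pick_stream(streams):
    # Given (device, audio_event) pairs received simultaneously, the
    # earphone receives the stream whose audio event has the highest
    # priority and ignores the rest.
    return max(streams, key=lambda s: PRIORITY.get(s[1], 0))
```

With these assumed values, a headset receiving music from one device and an incoming-call stream from another would select the incoming-call stream.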
Fig. 8 is a schematic diagram of the audio transmission system 7 operating in a third scenario according to the second embodiment of the present invention. Assume that in the third scenario the user listens to music through the electronic device 10 while the electronic device 70 is set aside on standby. First, after the headsets EB1 and EB2 are paired and connected with the electronic device 10, they join a personal area network (PAN) PN1 of the electronic device 10 to receive the output audio stream AU1. Then, after the headset EB2 is paired and connected with the electronic device 70, it joins a personal area network PN2 of the electronic device 70 to receive the output audio stream AU3. Since the electronic devices 10 and 70 transmit the output audio streams AU1 and AU3 according to the priority of the audio events, the headset EB2 can determine which electronic device it should receive the audio stream from according to that priority (as shown in Table 3). For example, according to the embodiment of Table 3, when the user listens to music through the electronic device 10 and the electronic device 70 receives an incoming call, the headset EB2 receives the output audio stream AU3 from the electronic device 70 in both the normal mode and the audio multiplexing mode, because the incoming call of the external device has higher priority than the music. In detail, in the normal mode, the headsets EB1 and EB2 both receive the output audio stream AU1 from the electronic device 10; in the audio multiplexing mode, the headset EB1 receives the output audio stream AU1 from the electronic device 10, while the headset EB2 may receive either the output audio stream AU1 from the electronic device 10 or the output audio stream AU3 from the electronic device 70 according to the user setting.
Therefore, in the second embodiment, the invention applies multi-point pairing and gesture detection to audio data transmission and provides a dominant-ear setting service to meet users' needs.
Regarding the operation of the electronic devices 10 and 70, it can be summarized as a process of transmitting audio data. As shown in fig. 9, the process includes the following steps.
Step 901: a first audio event is performed.
Step 902: it is determined whether a second audio event has occurred. If yes, go to step 903; if not, go back to step 901.
Step 903: it is determined whether the first mode is set. If yes, go to step 904; if not, go to step 906.
Step 904: an audio stream including a first audio source of a first audio event and a second audio source of a second audio event is transmitted according to the audio event priority.
Step 905: it is determined whether the first gesture is received. If yes, go back to step 904; if not, go back to step 901.
Step 906: an audio stream including a second audio source of the second audio event is transmitted.
Step 907: it is determined whether the first gesture is received. If yes, go back to step 906; if not, go back to step 901.
Taking the electronic device 10 as an example: in step 901, the electronic device 10 performs a first audio event (e.g., transmitting music data to the wireless earphone set 11). In step 902, the electronic device 10 determines whether a second audio event occurs (e.g., whether there is an incoming call). In step 903, when the second audio event occurs, the electronic device 10 determines whether it is set to the first mode (e.g., the audio multiplexing mode). In step 904, when the electronic device 10 is set to the first mode, it transmits an audio stream including a first audio source of the first audio event (e.g., music) and a second audio source of the second audio event (e.g., a ringtone) to the wireless earphone set 11 according to the audio event priority. In step 905, the electronic device 10 determines whether a first gesture (e.g., the gesture corresponding to the "select/turn on" function) is received from the wireless earphone set 11. When the electronic device 10 receives the first gesture, it returns to step 904. When the electronic device 10 does not receive the first gesture but receives a second gesture (e.g., the gesture corresponding to the "reject/hang up" function), it returns to step 901. In one embodiment, when the electronic device 10 does not receive any gesture within a predetermined time, the incoming call is not answered and the electronic device 10 returns to step 901. In one embodiment, when the incoming call ends, the electronic device 10 returns to step 901.
On the other hand, in step 906, when the electronic device 10 is not set to the first mode, it is set to the second mode (e.g., the normal mode) and transmits an audio stream including the second audio source (e.g., the ringtone) of the second audio event to the wireless earphone set 11. In step 907, the electronic device 10 determines whether the first gesture is received. When the electronic device 10 receives the first gesture, it returns to step 906, i.e., it transmits the audio stream including the second audio source of the second audio event to the wireless earphone set 11. When the electronic device 10 does not receive the first gesture but receives the second gesture, it returns to step 901 to perform the first audio event.
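The branching of steps 901 to 907 for a single occurrence of a second audio event can be sketched as below. This is a simplified illustrative model, not the patent's implementation; the gesture labels follow the text ("first gesture" = select/turn on, "second gesture" = reject/hang up), and all other names are assumptions.

```python
def handle_second_event(mode, gesture, first_src="music", second_src="ring"):
    # Return the audio source(s) streamed after a second audio event
    # occurs while the first audio event (first_src) is in progress.
    if mode == "audio_multiplexing":          # step 903 -> step 904
        if gesture == "select_turn_on":       # step 905: keep mixing both
            return [first_src, second_src]
        return [first_src]                    # reject: back to step 901
    # Second (normal) mode: step 906 streams only the new event's source.
    if gesture == "select_turn_on":           # step 907: keep the new event
        return [second_src]
    return [first_src]                        # reject: back to step 901
```

Under this sketch, the audio multiplexing mode lets the user keep the music while answering the call, whereas the normal mode replaces the music entirely with the incoming-call audio.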
Notably, in step 904, the electronic devices 10 and 70 transmit audio streams including the audio sources of the first audio event and the second audio event according to the audio event priority. In this way, when the electronic devices 10 and 70 transmit audio streams simultaneously, the headset EB2 can determine which electronic device it should receive the audio stream from according to the priority of the audio event. For example, when the audio stream transmitted by the electronic device 70 includes incoming-call voice with higher priority and the audio stream transmitted by the electronic device 10 includes music with lower priority, the headset EB2 receives the audio stream from the electronic device 70 and ignores the one from the electronic device 10.
In one embodiment, after step 905, the electronic devices 10 and 70 determine whether the first gesture is received from a first output terminal (e.g., the headset EB1) or a second output terminal (e.g., the headset EB2) of the receiving end, and transmit the audio stream including the audio sources of the first audio event and the second audio event according to the channel of that output terminal.
In summary, through the process of transmitting audio data of the present invention, the electronic device can convert a plurality of audio sources into a single audio stream, so that a receiving end (e.g., a wireless or wired earphone set) can receive the audio data of the plurality of audio sources, thereby providing diversified services to users. Furthermore, the invention applies multi-point pairing and gesture detection to audio data transmission and provides a dominant-ear setting service to meet users' needs.
Although the present disclosure has been described with reference to the above embodiments, it should be understood that various changes and modifications can be made by those skilled in the art without departing from the spirit and scope of the disclosure, and therefore, the scope of the disclosure is to be determined by the appended claims.

Claims (10)

1. A method for transmitting audio data in an audio transmission system, the audio transmission system including a transmitting end and a receiving end, the method comprising:
determining whether the transmitting end is set to a first mode; and
when the transmitting end is set to the first mode, generating an audio stream comprising a plurality of audio samples according to a setting signal, a plurality of first audio packets and a plurality of second audio packets, and transmitting the audio stream to the receiving end;
wherein the first audio packets are from a first audio source and the second audio packets are from a second audio source; and
each of the plurality of audio samples includes one of the plurality of first audio packets and one of the plurality of second audio packets.
2. The method of claim 1, wherein the first audio source and the second audio source are two of speech data, first music data and second music data.
3. The method of claim 1, wherein the setting signal indicates a primary audio source corresponding to a first channel and a secondary audio source corresponding to a second channel, the step of converting the audio sources into the audio stream comprises:
when the first audio source is the primary audio source, writing the first audio packets into the data format of the first channel in the audio samples respectively, and writing the second audio packets into the data format of the second channel in the audio samples respectively; or
when the first audio source is the secondary audio source, writing the first audio packets into the data format of the second channel in the audio samples respectively, and writing the second audio packets into the data format of the first channel in the audio samples respectively.
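The channel assignment of claim 3, combined with claim 1's sample structure (one packet from each source per audio sample), can be sketched as follows. This is an illustrative sketch only; the function name `build_samples` and the `ch1`/`ch2` keys are assumptions, not the patent's data format.

```python
def build_samples(first_pkts, second_pkts, first_is_primary):
    # Each audio sample carries one first-source packet and one
    # second-source packet (claim 1). The primary audio source is
    # written to the first channel and the secondary source to the
    # second channel (claim 3).
    samples = []
    for p1, p2 in zip(first_pkts, second_pkts):
        if first_is_primary:
            samples.append({"ch1": p1, "ch2": p2})
        else:
            samples.append({"ch1": p2, "ch2": p1})
    return samples
```

For example, with speech as the first (primary) source and music as the second, each sample would place the speech packet on the first channel and the music packet on the second; swapping the primary flag reverses the channels.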
4. The method of claim 1, further comprising:
when the transmitting end performs a first audio event, determining whether a second audio event occurs;
when the second audio event occurs and the transmitting end is set to the first mode, transmitting the audio stream including a first audio source of the first audio event and a second audio source of the second audio event according to the priority of the first audio event and the priority of the second audio event; and
when the second audio event occurs and the transmitting end is not set to the first mode, transmitting the audio stream of the second audio source including the second audio event.
5. The method of claim 4, wherein when the second audio event occurs and the transmitting end is set to the first mode, the method further comprises:
judging whether a first gesture is received from the receiving end; and
when the first gesture is received from the receiving end, the audio stream of the first audio source including the first audio event and the second audio source including the second audio event is transmitted according to the priority of the first audio event and the priority of the second audio event.
6. The method of claim 4, wherein when the second audio event occurs and the transmitting end is set to the first mode, the method further comprises:
when the first gesture is not received from the receiving end and a second gesture is received from the receiving end, the audio stream of the first audio source comprising the first audio event is transmitted to carry out the first audio event.
7. The method of claim 4, wherein when the second audio event ends, the method further comprises:
transmitting the audio stream of the first audio source including the first audio event to perform the first audio event.
8. The method of claim 5, wherein when the second audio event occurs and the transmitting end is set to the first mode, the method further comprises:
judging whether the first gesture is received from a first output terminal or a second output terminal of the receiving end;
when the first gesture is received from the first output terminal of the receiving end, transmitting the audio stream including the first audio source of the first audio event and the second audio source of the second audio event according to a channel of the first output terminal; and
when the first gesture is received from the second output terminal of the receiving end, transmitting the audio stream including the first audio source of the first audio event and the second audio source of the second audio event according to a channel of the second output terminal.
9. The method of claim 4, wherein when the second audio event occurs and the transmitting end is not set to the first mode, the transmitting end is set to a second mode, the method further comprising:
judging whether a first gesture is received from the receiving end; and
transmitting the audio stream of the second audio source including the second audio event when the first gesture is received from the receiving end.
10. The method of claim 1, further comprising:
when a third gesture is received from the receiving end and the transmitting end is set to the first mode, setting the transmitting end to a second mode; or
when the third gesture is received from the receiving end and the transmitting end is set to the second mode, setting the transmitting end to the first mode.
CN202110193355.7A 2021-02-20 2021-02-20 Method for transmitting audio data Active CN112954528B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110193355.7A CN112954528B (en) 2021-02-20 2021-02-20 Method for transmitting audio data
TW110112977A TWI841832B (en) 2021-02-20 2021-04-09 Method of transmitting audio data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110193355.7A CN112954528B (en) 2021-02-20 2021-02-20 Method for transmitting audio data

Publications (2)

Publication Number Publication Date
CN112954528A true CN112954528A (en) 2021-06-11
CN112954528B CN112954528B (en) 2023-01-24

Family

ID=76244824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110193355.7A Active CN112954528B (en) 2021-02-20 2021-02-20 Method for transmitting audio data

Country Status (1)

Country Link
CN (1) CN112954528B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6937724B1 (en) * 1999-03-29 2005-08-30 Siemens Information & Communication Networks, Inc. Apparatus and method for delivery of ringing and voice calls through a workstation
CN102227917A (en) * 2008-12-12 2011-10-26 高通股份有限公司 Simultaneous mutli-source audio output at wireless headset
CN105635903A (en) * 2014-11-05 2016-06-01 淇誉电子科技股份有限公司 Method and system used for wireless connection and control of wireless sound box
CN106162446A (en) * 2016-06-28 2016-11-23 乐视控股(北京)有限公司 Audio frequency playing method, device and earphone
CN106358126A (en) * 2016-09-26 2017-01-25 宇龙计算机通信科技(深圳)有限公司 Multi-audio frequency playing method, system and terminal
CN107455009A (en) * 2017-07-03 2017-12-08 深圳市汇顶科技股份有限公司 Audio system and earphone
CN109218873A (en) * 2017-07-03 2019-01-15 中兴通讯股份有限公司 Wireless headset and the method for playing audio
CN109445740A (en) * 2018-09-30 2019-03-08 Oppo广东移动通信有限公司 Audio frequency playing method, device, electronic equipment and storage medium
CN109862475A (en) * 2019-01-28 2019-06-07 Oppo广东移动通信有限公司 Audio-frequence player device and method, storage medium, communication terminal
CN110856068A (en) * 2019-11-05 2020-02-28 南京中感微电子有限公司 Communication method of earphone device
CN111770408A (en) * 2020-07-07 2020-10-13 Oppo(重庆)智能科技有限公司 Control method, control device, wireless headset and storage medium
CN112218197A (en) * 2019-07-12 2021-01-12 络达科技股份有限公司 Audio compensation method and wireless audio output device using same


Also Published As

Publication number Publication date
CN112954528B (en) 2023-01-24
TW202234867A (en) 2022-09-01

Similar Documents

Publication Publication Date Title
JP3905509B2 (en) Apparatus and method for processing audio signal during voice call in mobile terminal for receiving digital multimedia broadcast
US6678362B2 (en) System and method for effectively managing telephone functionality by utilizing a settop box
WO2020132839A1 (en) Audio data transmission method and device applied to monaural and binaural modes switching of tws earphone
US20100064329A1 (en) Communication system and method
KR20110054609A (en) Method and apparatus for remote controlling of bluetooth device
US10425758B2 (en) Apparatus and method for reproducing multi-sound channel contents using DLNA in mobile terminal
JP2011125015A (en) Apparatus and method for recognizing earphone mounting in portable terminal
US20170195817A1 (en) Simultaneous Binaural Presentation of Multiple Audio Streams
GB2460219A (en) Interaction between Audio/Visual Display Appliances and Mobile Devices
CN112804610B (en) Method for controlling Microsoft Teams on PC through TWS Bluetooth headset
CN117296348A (en) Method and electronic device for Bluetooth audio multi-streaming
US20220286538A1 (en) Earphone device and communication method
WO2010102469A1 (en) Mobile terminal and method for performing call during cell phone television service
CN112954528B (en) Method for transmitting audio data
CN111190568A (en) Volume adjusting method and device
CN113115290B (en) Method for receiving audio data
TWI841832B (en) Method of transmitting audio data
CN102196107A (en) Telephone system
US10206031B2 (en) Switching to a second audio interface between a computer apparatus and an audio apparatus
CN109818979A (en) A kind of method, apparatus that realizing audio return, equipment and storage medium
JP3144831U (en) Wireless audio system with stereo output
CN103988516A (en) Communication method, system and mobile terminal in application of mobile broadcast television
JP2005229492A (en) Television receiver
KR100693586B1 (en) a Mobile Communication Terminal having a Singing Establishment Function
KR20040026909A (en) Mobile communication terminal capable of listening music and calling telephone at the same time

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant