CN110933216A - Audio data transmission method and device, readable storage medium and mobile terminal

Info

Publication number
CN110933216A
CN110933216A
Authority
CN
China
Prior art keywords
audio data
sub
type
data
mobile terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010085690.0A
Other languages
Chinese (zh)
Inventor
何殿超
徐彬彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Thunder Shark Information Technology Co Ltd
Original Assignee
Nanjing Thunder Shark Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Thunder Shark Information Technology Co Ltd filed Critical Nanjing Thunder Shark Information Technology Co Ltd
Priority to CN202010085690.0A priority Critical patent/CN110933216A/en
Publication of CN110933216A publication Critical patent/CN110933216A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/60 Substation equipment, e.g. for use by subscribers including speech amplifiers
    • H04M1/6033 Substation equipment, e.g. for use by subscribers including speech amplifiers for providing handsfree use or a loudspeaker mode in telephone sets
    • H04M1/6041 Portable telephones adapted for handsfree use
    • H04M1/6058 Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone
    • H04M1/6066 Portable telephones adapted for handsfree use involving the use of a headset accessory device connected to the portable telephone including a wireless connection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/80 Responding to QoS
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00 Reducing energy consumption in communication networks
    • Y02D30/70 Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Telephone Function (AREA)

Abstract

An audio data transmission method and device, a readable storage medium, and a mobile terminal are provided. The audio data transmission method comprises the following steps: acquiring audio data from an application system in real time, extracting each piece of sub-audio data from the audio data according to its sounding body, and determining the sounding body identifier corresponding to each piece of sub-audio data; querying the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, wherein the delay type comprises a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data; sending the sub-audio data of the first type to a Bluetooth headset connected to the mobile terminal via a Bluetooth Low Energy protocol; and sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol. The invention achieves low-latency transmission of the first type of audio data and efficient transmission of the second type of audio data.

Description

Audio data transmission method and device, readable storage medium and mobile terminal
Technical Field
The present invention relates to the field of electronic technologies, and in particular, to an audio data transmission method, an audio data transmission device, a readable storage medium, and a mobile terminal.
Background
With the development of electronic technology, mobile terminals such as mobile phones and tablet computers have become increasingly widespread, and game applications on them increasingly popular. In an FPS (first-person shooter) game, for example, a player needs to anticipate an enemy player's actions from gunshots, footsteps, vehicle sounds and the like before the enemy comes into view, so that the player can make the right decision. When a user plays such a game with a Bluetooth headset, the timeliness of the headset's data transmission is a key concern for the user and an important factor affecting the user experience.
In a mobile terminal, WiFi and Bluetooth usually share the same IC and the same antenna, and are typically switched in a time-division multiplexing manner; that is, WiFi and Bluetooth cannot use the radio at the same moment. Classic Bluetooth (BR/EDR) offers a larger bandwidth and energy-efficient data transmission, but it has no fixed connection interval. Because WiFi has the higher priority, Bluetooth may not send data promptly, which increases the latency of data reception at the Bluetooth headset.
Mobile terminals and Bluetooth headsets currently on the market communicate over the classic Bluetooth A2DP protocol, which ensures energy-efficient data transmission. However, when the mobile terminal and the headset need to exchange data with strict latency requirements, this Bluetooth communication mode introduces considerable delay, which fails to meet user requirements and degrades the user experience.
Disclosure of Invention
In view of the above, it is necessary to provide an audio data transmission method and device, a readable storage medium, and a mobile terminal to solve the problem in the prior art that a Bluetooth headset receives terminal data with excessive delay.
An audio data transmission method is applied to a mobile terminal that is currently connected with a Bluetooth headset, and comprises the following steps:
acquiring audio data from an application system in real time, extracting each piece of sub-audio data from the audio data according to its sounding body, and determining the sounding body identifier corresponding to each piece of sub-audio data;
querying the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, wherein the delay type comprises a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data;
sending the sub-audio data of the first type to the Bluetooth headset via a Bluetooth Low Energy protocol;
sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
Further, in the above audio data transmission method, the step of extracting each piece of sub-audio data from the audio data according to its sounding body and determining the sounding body identifier corresponding to each piece of sub-audio data comprises:
classifying the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
Further, in the above audio data transmission method, the step of classifying the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier comprises:
establishing a neural network model and training it with labelled data, wherein the labelled data comprises a plurality of pieces of sub-audio data carrying sounding body identifiers;
classifying the audio data with the trained neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
Further, in the above audio data transmission method, the application system packages the sub-audio data generated by each sounding body and the corresponding sounding body identifier into a data packet, the audio data comprises the data packet corresponding to each sounding body, and the step of extracting each piece of sub-audio data from the audio data according to its sounding body and determining the sounding body identifier corresponding to each piece of sub-audio data comprises:
extracting the sub-audio data and the corresponding sounding body identifier from each data packet.
Further, in the above audio data transmission method, the step of sending the sub-audio data of the first type to the Bluetooth headset via a Bluetooth Low Energy protocol comprises:
encoding and compressing the sub-audio data of the first type, and sending the encoded and compressed sub-audio data to the Bluetooth headset via the GATT protocol.
Further, in the above audio data transmission method, the step of sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol comprises:
encoding and compressing the sub-audio data of the second type, and sending the encoded and compressed sub-audio data to the Bluetooth headset via the A2DP protocol.
An embodiment of the invention also provides an audio data transmission device, applied to a mobile terminal that is currently connected with a Bluetooth headset, the audio data transmission device comprising:
an extraction module, configured to acquire audio data from an application system in real time, extract each piece of sub-audio data from the audio data according to its sounding body, and determine the sounding body identifier corresponding to each piece of sub-audio data;
a query module, configured to query the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, wherein the delay type comprises a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data;
a first sending module, configured to send the sub-audio data of the first type to the Bluetooth headset via a Bluetooth Low Energy protocol;
a second sending module, configured to send the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
Further, in the audio data transmission device, the extraction module is specifically configured to:
classify the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
An embodiment of the invention further provides a readable storage medium on which a program is stored, where the program, when executed by a processor, implements any one of the methods described above.
An embodiment of the invention further provides a mobile terminal, comprising a memory, a processor, and a program stored in the memory and executable on the processor, where the processor implements any one of the methods described above when executing the program.
In the invention, audio data in the application system is acquired in real time, each piece of sub-audio data is extracted from the audio data according to its sounding body, and the sounding body identifier corresponding to each piece of sub-audio data is determined. The type of each piece of sub-audio data is then looked up according to the sounding body identifier, and the data is sent over different Bluetooth protocols according to its real-time requirement. The sub-audio data in the application system is divided into two types, the first type having a stricter delay requirement than the second type; the sub-audio data of the first type is therefore sent to the Bluetooth headset via a Bluetooth Low Energy protocol to guarantee timely transmission, while the sub-audio data of the second type is sent to the headset via a classic Bluetooth protocol to guarantee efficient transmission.
Drawings
FIG. 1 is a flow chart of a method for transmitting audio data according to a first embodiment of the present invention;
FIG. 2 is a flow chart of a method for transmitting audio data according to a second embodiment of the present invention;
FIG. 3 is a flowchart illustrating a method for transmitting audio data according to a third embodiment of the present invention;
FIG. 4 is a block diagram of an audio data transmission device according to a first embodiment of the present invention.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
These and other aspects of embodiments of the invention will be apparent with reference to the following description and attached drawings. In the description and drawings, particular embodiments of the invention have been disclosed in detail as being indicative of some of the ways in which the principles of the embodiments of the invention may be practiced, but it is understood that the scope of the embodiments of the invention is not limited correspondingly. On the contrary, the embodiments of the invention include all changes, modifications and equivalents coming within the spirit and terms of the claims appended hereto.
The audio data transmission method in the embodiments of the invention is applied to a mobile terminal, which is an electronic device such as a mobile phone or a tablet computer. The mobile terminal can be connected to a Bluetooth receiving device, such as a Bluetooth headset, over both the classic Bluetooth (BT) and Bluetooth Low Energy (BLE) protocols at the same time.
Referring to fig. 1, a method for transmitting audio data according to a first embodiment of the present invention includes steps S11-S14.
Step S11, obtaining audio data from the application system in real time, extracting each piece of sub-audio data from the audio data according to its sounding body, and determining the sounding body identifier corresponding to each piece of sub-audio data.
Step S12, querying the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, where the delay type includes a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data.
The mobile terminal in this embodiment is installed with at least one application (APP), typically a game application such as an FPS (first-person shooter) game, for example the currently popular games "peace elite" and "elite troops". The audio data produced by the application system during use comprises various pieces of sub-audio data, such as footsteps, vehicle sounds, gunshots, background music and game sound effects. Each piece of sub-audio data has a different sounding body, such as a human body, a firearm or a vehicle on the in-game interface, or the background music module and sound effect module of the application system.
In the application system, sounds such as footsteps and gunshots, which the player relies on to judge other players' movements, are critical to winning or losing the game, so the user has a high real-time requirement for receiving such data, whereas the real-time requirement for data such as background music and game sound effects is low. Therefore, in this embodiment the audio data in the application system is divided into two types, where the real-time requirement of the first type is higher than that of the second type.
The sub-audio data of each sounding body extracted by the mobile terminal from the audio data is, for example, gunshot audio data, footstep audio data or background music data generated by the application system. The mobile terminal stores a preset data delay type table, which contains each sounding body identifier and its corresponding type. The mobile terminal looks up the type of each acquired piece of sub-audio data in this data delay type table.
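As an illustration of the table lookup in step S12, the following Python sketch models the preset data delay type table as a simple dictionary. The concrete sounding body identifiers and the two type constants are hypothetical examples; the patent only requires that each identifier map to one of the two delay types.

```python
# Minimal sketch of the preset data delay type table and the lookup of step S12.
# The identifiers ("footsteps", "gunshot", ...) are illustrative assumptions.

FIRST_TYPE = 1   # high real-time requirement (latency-critical)
SECOND_TYPE = 2  # low real-time requirement (bandwidth/efficiency preferred)

DELAY_TYPE_TABLE = {
    "footsteps": FIRST_TYPE,
    "gunshot": FIRST_TYPE,
    "vehicle": FIRST_TYPE,
    "background_music": SECOND_TYPE,
    "sound_effect": SECOND_TYPE,
}

def query_delay_type(sounding_body_id: str) -> int:
    """Return the delay type of a sounding body; unknown bodies default to the second type."""
    return DELAY_TYPE_TABLE.get(sounding_body_id, SECOND_TYPE)
```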
Step S13, sending the sub-audio data of the first type to the Bluetooth headset connected with the mobile terminal via a Bluetooth Low Energy protocol.
Step S14, sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
After determining the type of each piece of sub-audio data, the mobile terminal sends the sub-audio data of the first type to the connected Bluetooth headset via a Bluetooth Low Energy protocol. First-type sub-audio data has a high real-time requirement but a low bandwidth requirement, so it is sent to the Bluetooth headset over a Bluetooth Low Energy protocol. The Bluetooth Low Energy protocol used here, the GATT protocol, communicates at a fixed connection interval, which can be as short as 7.5 ms per packet; that is, whether or not there is data to send, the mobile terminal's Bluetooth must transmit to the peer device when the fixed interval expires. Compared with classic Bluetooth, Bluetooth Low Energy therefore offers better real-time behaviour and lower latency.
Second-type audio data has a low latency requirement but a high bandwidth requirement, so the sub-audio data of the second type is sent to the connected Bluetooth headset via a classic Bluetooth protocol, for example the A2DP protocol, whose larger bandwidth enables energy-efficient data transmission.
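The routing performed in steps S13 and S14 can be pictured with the short sketch below, which reuses the query_delay_type helper and type constants from the table sketch above. The send_over_ble_gatt and send_over_a2dp functions are hypothetical stand-ins for the platform's BLE (GATT) and classic Bluetooth (A2DP) transmit paths, not real APIs.

```python
# Sketch of steps S13/S14: route each piece of sub-audio data to the transport
# that matches its delay type (assumed transport functions).

def send_over_ble_gatt(payload: bytes) -> None:
    ...  # write the payload to a GATT characteristic of the connected headset (assumed)

def send_over_a2dp(payload: bytes) -> None:
    ...  # stream the payload over the classic Bluetooth A2DP channel (assumed)

def dispatch(sub_audio: bytes, sounding_body_id: str) -> None:
    if query_delay_type(sounding_body_id) == FIRST_TYPE:
        send_over_ble_gatt(sub_audio)   # latency-critical: fixed connection interval
    else:
        send_over_a2dp(sub_audio)       # bandwidth-heavy: larger A2DP bandwidth
```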
In this embodiment, audio data in the application system is acquired in real time, each piece of sub-audio data is extracted from the audio data according to its sounding body, and the sounding body identifier corresponding to each piece of sub-audio data is determined. The type of each piece of sub-audio data is then looked up according to the sounding body identifier, and the data is sent over different Bluetooth protocols according to its real-time requirement. The sub-audio data in the application system is divided into two types, the first type having a stricter delay requirement than the second type; the sub-audio data of the first type is therefore sent to the Bluetooth headset via a Bluetooth Low Energy protocol to guarantee timely transmission, while the sub-audio data of the second type is sent to the headset via a classic Bluetooth protocol to guarantee efficient transmission.
Referring to fig. 2, a method for transmitting audio data according to a second embodiment of the present invention includes steps S21-S24.
Step S21, obtaining audio data from the application system in real time, and extracting the sub-audio data and the corresponding sounding body identifier from each data packet in the audio data.
In this embodiment, the application system defines a plurality of sounding bodies, and each sounding body can generate corresponding sub-audio data while the application runs. A sounding body is, for example, a human body, a firearm or a vehicle on the in-game interface, or the background music module or sound effect module of the application system, and the sub-audio data it generates is footstep data, gunshot data, vehicle sound data, background music data, sound effect data and so on. The sub-audio data generated by each sounding body in the application system exists as a separate data packet, which also contains the sounding body identifier of that sounding body. The sounding body identifier is the unique identifier used by the mobile terminal to identify the sounding body. The audio data of the application system acquired by the mobile terminal consists of the data packets corresponding to these sounding bodies, and after acquiring the data packets the mobile terminal extracts the sub-audio data and the sounding body identifiers.
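One possible packet layout consistent with this embodiment is sketched below: a length-prefixed sounding body identifier followed by the raw sub-audio payload. The 2-byte length prefix and UTF-8 identifier encoding are illustrative assumptions; the patent does not define a concrete packet format.

```python
# Sketch of packing/unpacking a data packet that carries one sounding body's
# identifier together with its sub-audio data (layout is an assumption).
import struct

def pack_sub_audio(sounding_body_id: str, samples: bytes) -> bytes:
    id_bytes = sounding_body_id.encode("utf-8")
    return struct.pack("<H", len(id_bytes)) + id_bytes + samples

def unpack_sub_audio(packet: bytes) -> tuple[str, bytes]:
    (id_len,) = struct.unpack_from("<H", packet, 0)
    sounding_body_id = packet[2:2 + id_len].decode("utf-8")
    samples = packet[2 + id_len:]
    return sounding_body_id, samples
```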
Step S22, querying the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, where the delay type includes a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data.
The mobile terminal stores a data delay type table containing each sounding body identifier and the delay type corresponding to it. In this embodiment, the sub-audio data generated by each sounding body in the application system is divided into two types, a first type and a second type, where the first type has a higher real-time requirement than the second type. In a specific implementation, the first type of sub-audio data includes, but is not limited to, sub-audio data generated by a human body, a firearm or a vehicle; the second type of sub-audio data includes, but is not limited to, background music and sound effect data.
Step S23, sending the sub-audio data of the first type to the Bluetooth headset connected with the mobile terminal via a Bluetooth Low Energy protocol.
When the mobile terminal sends audio data to the Bluetooth headset, the audio data stream needs to be encoded and compressed because of the limited Bluetooth bandwidth; common codecs include the SBC algorithm. In a specific implementation, the mobile terminal encodes and compresses the sub-audio data of the first type with the SBC algorithm and sends the encoded and compressed sub-audio data to the Bluetooth headset via the GATT protocol. The GATT protocol is a Bluetooth Low Energy protocol and offers good real-time behaviour and low latency during data transmission.
Step S24, sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
In a specific implementation, the mobile terminal encodes and compresses the sub-audio data of the second type with the SBC algorithm and sends it to the Bluetooth headset via the A2DP protocol. The A2DP protocol is a classic Bluetooth protocol with a large bandwidth, which ensures energy-efficient data transmission.
The Bluetooth headset decodes and plays the sub-audio data in the order in which it arrives. If the Bluetooth headset receives sub-audio data transmitted over the A2DP protocol and over the GATT protocol at the same time, it decodes and plays the sub-audio data transmitted over the GATT protocol first, and then the sub-audio data transmitted over the A2DP protocol, so as to guarantee the timeliness of first-type data transmission.
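The playback priority described above can be sketched as two queues on the headset side, with the GATT queue always drained first. The queues and the decode_and_play helper are illustrative assumptions, not the headset's actual firmware interfaces.

```python
# Sketch of the headset-side priority: frames received over GATT (first type)
# are decoded and played before frames received over A2DP (second type).
from collections import deque

gatt_queue: deque = deque()   # latency-critical frames (BLE / GATT)
a2dp_queue: deque = deque()   # bulk audio frames (classic Bluetooth / A2DP)

def decode_and_play(frame: bytes) -> None:
    ...  # hand the frame to the codec and the audio output (assumed)

def playback_step() -> None:
    # Drain the low-latency GATT queue before touching the A2DP queue.
    if gatt_queue:
        decode_and_play(gatt_queue.popleft())
    elif a2dp_queue:
        decode_and_play(a2dp_queue.popleft())
```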
In this embodiment, the application system in the mobile terminal packages the sub-audio data generated by each sounding body together with the sounding body identifier into a data packet. The mobile terminal obtains each data packet from the application system, extracts the sub-audio data and the sounding body identifier from the packet, and determines the delay type of the sub-audio data from the identifier. Sub-audio data with a high real-time requirement is sent to the Bluetooth headset via the GATT protocol, which guarantees that the user receives the data in time and improves the user experience; sub-audio data with a low real-time requirement is sent to the Bluetooth headset via the A2DP protocol, which ensures energy-efficient data output.
Referring to fig. 3, a method for transmitting audio data according to a third embodiment of the present invention includes steps S31-S34.
Step S31, obtaining audio data from the application system in real time, and classifying the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
In this embodiment, the audio data output by the application system is a mixture of the sounds generated by the various sounding bodies, and unlike the second embodiment, the application system does not need to package the sub-audio data of each sounding body separately. After acquiring the audio data from the application system, the mobile terminal separates it through a neural network model to obtain the sub-audio data generated by each sounding body and the corresponding sounding body identifier. The specific steps of classifying the audio data through the neural network model are as follows:
Step S311, establishing a neural network model and training it with labelled data, where the labelled data comprises a plurality of pieces of sub-audio data carrying sounding body identifiers;
Step S312, classifying the audio data with the trained neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
In this embodiment, the neural network model is trained on pieces of sub-audio data labelled with their sounding bodies; the trained neural network model can then classify the audio data output by the application system and output each piece of sub-audio data together with the corresponding sounding body identifier.
The neural network model may adopt an existing architecture, for example a DNN (deep neural network) model. Viewed by the position of its layers, the network inside a DNN can be divided into three parts: an input layer, hidden layers and an output layer. In a specific implementation, a DNN model is constructed and trained; the DNN model may be trained on audio data using the TensorFlow framework to determine its parameters, and the trained DNN model is then used to classify the audio data.
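A minimal TensorFlow/Keras sketch of such a three-layer DNN classifier is given below. The feature dimension, hidden-layer width, class list and the randomly generated training data are placeholder assumptions; in practice the model would be trained on labelled sub-audio features as described in steps S311 and S312.

```python
# Sketch of a DNN classifier (input layer, hidden layer, output layer) trained
# with TensorFlow/Keras; sizes and classes below are placeholder assumptions.
import numpy as np
import tensorflow as tf

NUM_FEATURES = 128  # e.g. one spectral feature vector per audio frame (assumed)
CLASSES = ["footsteps", "gunshot", "vehicle", "background_music", "sound_effect"]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),                # input layer
    tf.keras.layers.Dense(64, activation="relu"),                # hidden layer
    tf.keras.layers.Dense(len(CLASSES), activation="softmax"),   # output layer
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training on labelled sub-audio data (features + sounding body labels);
# random arrays stand in for a real labelled data set here.
x_train = np.random.rand(1000, NUM_FEATURES).astype("float32")
y_train = np.random.randint(0, len(CLASSES), size=1000)
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)

# Classifying a new frame yields the sounding body identifier that is then used
# for the delay type table lookup.
frame = np.random.rand(1, NUM_FEATURES).astype("float32")
predicted_id = CLASSES[int(np.argmax(model.predict(frame, verbose=0)))]
```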
Step S32, querying the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, where the delay type includes a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data.
In this embodiment, the sub-audio data generated by each sounding body in the application system, such as footstep data, gunshot data, vehicle sound data, background music data and sound effect data, is divided into two delay types. The first type of sub-audio data includes, but is not limited to, sub-audio data generated by human bodies, firearms, vehicles and the like; the second type includes, but is not limited to, background music, sound effect data and the like. The delay type of each piece of sub-audio data is determined by looking up its sounding body in the table.
Step S33, sending the sub-audio data of the first type to the Bluetooth headset connected with the mobile terminal via a Bluetooth Low Energy protocol.
Step S34, sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
In this embodiment, no modification of the application is required: the mobile terminal directly classifies the audio data output by the application system through the neural network to obtain each piece of sub-audio data and the corresponding sounding body identifier, and determines the delay type of each piece of sub-audio data by looking up the sounding body identifier in the table.
Referring to fig. 4, an audio data transmission device according to a fourth embodiment of the present invention is applied to a mobile terminal that is currently connected with a Bluetooth headset, and comprises:
an extraction module 41, configured to acquire audio data from an application system in real time, extract each piece of sub-audio data from the audio data according to its sounding body, and determine the sounding body identifier corresponding to each piece of sub-audio data;
a query module 42, configured to query the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, where the delay type includes a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data;
a first sending module 43, configured to send the sub-audio data of the first type to the Bluetooth headset via a Bluetooth Low Energy protocol;
a second sending module 44, configured to send the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
Further, in the audio data transmission device, the extraction module 41 is specifically configured to:
classify the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
The audio data transmission device provided by the embodiment of the present invention has the same implementation principle and technical effect as the foregoing method embodiments; for brevity, where the device embodiment does not mention a detail, reference may be made to the corresponding content of the foregoing method embodiments.
An embodiment of the present invention further provides a readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to implement the audio data transmission method.
The embodiment of the invention also provides a mobile terminal, which comprises a memory, a processor and a program which is stored on the memory and can be run on the processor, wherein the processor realizes the audio data transmission method when executing the program.
The logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An audio data transmission method, applied to a mobile terminal that is currently connected with a Bluetooth headset, characterized by comprising the following steps:
acquiring audio data from an application system in real time, extracting each piece of sub-audio data from the audio data according to its sounding body, and determining the sounding body identifier corresponding to each piece of sub-audio data;
querying the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, wherein the delay type comprises a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data;
sending the sub-audio data of the first type to the Bluetooth headset via a Bluetooth Low Energy protocol;
sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
2. The audio data transmission method according to claim 1, wherein the step of extracting each piece of sub-audio data from the audio data according to its sounding body and determining the sounding body identifier corresponding to each piece of sub-audio data comprises:
classifying the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
3. The method according to claim 2, wherein the step of classifying the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier comprises:
establishing a neural network model and training it with labelled data, wherein the labelled data comprises a plurality of pieces of sub-audio data carrying sounding body identifiers;
classifying the audio data with the trained neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
4. The audio data transmission method according to claim 1, wherein the application system packages the sub-audio data generated by each sounding body and the corresponding sounding body identifier into a data packet, the audio data comprises the data packet corresponding to each sounding body, and the step of extracting each piece of sub-audio data from the audio data according to its sounding body and determining the sounding body identifier corresponding to each piece of sub-audio data comprises:
extracting the sub-audio data and the corresponding sounding body identifier from each data packet.
5. The audio data transmission method according to claim 1, wherein the step of sending the sub-audio data of the first type to the Bluetooth headset via a Bluetooth Low Energy protocol comprises:
encoding and compressing the sub-audio data of the first type, and sending the encoded and compressed sub-audio data to the Bluetooth headset via the GATT protocol.
6. The audio data transmission method according to claim 1, wherein the step of sending the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol comprises:
encoding and compressing the sub-audio data of the second type, and sending the encoded and compressed sub-audio data to the Bluetooth headset via the A2DP protocol.
7. An audio data transmission device, applied to a mobile terminal that is currently connected with a Bluetooth headset, characterized by comprising:
an extraction module, configured to acquire audio data from an application system in real time, extract each piece of sub-audio data from the audio data according to its sounding body, and determine the sounding body identifier corresponding to each piece of sub-audio data;
a query module, configured to query the delay type of each piece of sub-audio data in a preset data delay type table according to the sounding body identifier, wherein the delay type comprises a first type and a second type, and the real-time requirement of first-type data is higher than that of second-type data;
a first sending module, configured to send the sub-audio data of the first type to the Bluetooth headset via a Bluetooth Low Energy protocol;
a second sending module, configured to send the sub-audio data of the second type to the Bluetooth headset via a classic Bluetooth protocol.
8. The audio data transmission device according to claim 7, wherein the extraction module is specifically configured to:
classify the audio data through a neural network model to obtain the sub-audio data of each sounding body in the audio data and the corresponding sounding body identifier.
9. A readable storage medium on which a program is stored, which program, when executed by a processor, carries out the method according to any one of claims 1-6.
10. A mobile terminal comprising a memory, a processor and a program stored on the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1-6 when executing the program.
CN202010085690.0A 2020-02-11 2020-02-11 Audio data transmission method and device, readable storage medium and mobile terminal Pending CN110933216A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010085690.0A CN110933216A (en) 2020-02-11 2020-02-11 Audio data transmission method and device, readable storage medium and mobile terminal

Publications (1)

Publication Number Publication Date
CN110933216A true CN110933216A (en) 2020-03-27

Family

ID=69854808

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010085690.0A Pending CN110933216A (en) 2020-02-11 2020-02-11 Audio data transmission method and device, readable storage medium and mobile terminal

Country Status (1)

Country Link
CN (1) CN110933216A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004187706A (en) * 2002-12-06 2004-07-08 Nintendo Co Ltd Game music performing program, game device, and game music performing method
CN103578516A (en) * 2012-07-24 2014-02-12 创杰科技股份有限公司 System to deliver prioritized game audio wirelessly with a minimal latency
CN105324943A (en) * 2013-06-11 2016-02-10 美加狮有限公司 Systems and methods for transmitting data using selected transmission technology from among other transmission technologies
CN109799975A (en) * 2018-12-20 2019-05-24 武汉西山艺创文化有限公司 A kind of action game production method neural network based and system
CN109712631A (en) * 2019-03-28 2019-05-03 南昌黑鲨科技有限公司 Audio data transfer control method, device, system and readable storage medium storing program for executing
CN110270096A (en) * 2019-06-19 2019-09-24 杭州绝地科技股份有限公司 Audio resource configuration method, device, equipment and storage medium in game application

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111970668A (en) * 2020-08-17 2020-11-20 努比亚技术有限公司 Bluetooth audio control method, equipment and computer readable storage medium
CN111970668B (en) * 2020-08-17 2023-06-27 努比亚技术有限公司 Bluetooth audio control method, device and computer readable storage medium
CN112423105A (en) * 2020-10-27 2021-02-26 深圳Tcl新技术有限公司 Data transmission method, device and medium
CN112423105B (en) * 2020-10-27 2024-03-15 深圳Tcl新技术有限公司 Data transmission method, device and medium
CN112333674A (en) * 2020-10-30 2021-02-05 展讯半导体(成都)有限公司 Data transmission method, device and equipment
CN112333674B (en) * 2020-10-30 2022-06-24 展讯半导体(成都)有限公司 Data transmission method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327