CN112468841B - Audio transmission method and device, intelligent equipment and computer readable storage medium - Google Patents


Info

Publication number
CN112468841B
CN112468841B (application CN202011349527.7A)
Authority
CN
China
Prior art keywords
audio
audio data
storage space
cache region
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011349527.7A
Other languages
Chinese (zh)
Other versions
CN112468841A (en)
Inventor
Yang Liu (杨柳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202011349527.7A priority Critical patent/CN112468841B/en
Publication of CN112468841A publication Critical patent/CN112468841A/en
Priority to PCT/CN2021/123834 priority patent/WO2022111111A1/en
Application granted granted Critical
Publication of CN112468841B publication Critical patent/CN112468841B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/61Indexing; Data structures therefor; Storage structures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4331Caching operations, e.g. of an advertisement for later insertion during playback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4392Processing of audio elementary streams involving audio buffer management

Abstract

Embodiments of the present application provide an audio transmission method and apparatus, a smart device, and a computer-readable storage medium, relating to the field of software technology. They can reduce the delay in audio data transmission, increase the transmission speed of audio data, and improve the user experience. The audio transmission method includes: acquiring audio data and storing it in a target audio buffer, where the target audio buffer is one of a plurality of audio buffers included in a hardware abstraction layer and the storage space of the target audio buffer is smaller than a preset storage space; monitoring the remaining storage space of the target audio buffer; when the remaining storage space of the target audio buffer is less than or equal to a storage space threshold, sending the audio data in the target audio buffer to a server; and acquiring new audio data and storing it in another audio buffer of the hardware abstraction layer.

Description

Audio transmission method and device, intelligent equipment and computer readable storage medium
Technical Field
The present application relates to the field of software technology, and in particular to an audio transmission method, an audio transmission apparatus, a smart device, and a computer-readable storage medium.
Background
With the rapid development of Internet technology, audio transmission can be used to support functions such as communication, search, office work, and chat. In audio transmission, a smart device may send the words spoken by a user to a server, or the server may send audio data to the smart device.
However, the audio transmission process is complex, which leads to a long delay, slow transmission of audio data, and a degraded user experience.
Disclosure of Invention
Embodiments of the present application provide an audio transmission method, an audio transmission apparatus, a smart device, and a computer-readable storage medium to address the above problems.
In a first aspect, an audio transmission method applied to a smart device is provided, including: acquiring audio data and storing it in a target audio buffer, where the target audio buffer is one of a plurality of audio buffers included in a hardware abstraction layer and the storage space of the target audio buffer is smaller than a preset storage space; monitoring the remaining storage space of the target audio buffer; when the remaining storage space of the target audio buffer is less than or equal to a storage space threshold, sending the audio data in the target audio buffer to a server; and acquiring new audio data and storing it in another audio buffer of the hardware abstraction layer.
In a second aspect, an audio transmission apparatus is provided, including an obtaining module, a processing module, and a sending module. The obtaining module is configured to acquire audio data and store it in a target audio buffer, where the target audio buffer is one of a plurality of audio buffers included in the hardware abstraction layer and the storage space of the target audio buffer is smaller than the preset storage space. The processing module is configured to monitor the remaining storage space of the target audio buffer. The sending module is configured to send the audio data in the target audio buffer to the server when the remaining storage space of the target audio buffer is less than or equal to the storage space threshold. The obtaining module is further configured to acquire new audio data and store it in another audio buffer of the hardware abstraction layer.
In a third aspect, an audio transmission method applied to a smart device is provided, including: receiving audio data sent by a server and storing it in a target audio buffer, where the target audio buffer is one of a plurality of audio buffers included in a hardware abstraction layer and the storage space of the target audio buffer is smaller than a preset storage space; monitoring the remaining storage space of the target audio buffer; when the remaining storage space of the target audio buffer is less than or equal to the storage space threshold, sending the audio data in the target audio buffer to a sound sending device; and receiving new audio data and storing it in another audio buffer of the hardware abstraction layer.
In a fourth aspect, an audio transmission apparatus is provided, including a receiving module, a processing module, and a sending module. The receiving module is configured to receive the audio data sent by the server and store it in a target audio buffer, where the target audio buffer is one of a plurality of audio buffers included in the hardware abstraction layer and the storage space of the target audio buffer is smaller than the preset storage space. The processing module is configured to monitor the remaining storage space of the target audio buffer. The sending module is configured to send the audio data in the target audio buffer to a sound sending device when the remaining storage space of the target audio buffer is less than or equal to the storage space threshold. The receiving module is further configured to receive new audio data and store it in another audio buffer of the hardware abstraction layer.
In a fifth aspect, a smart device is provided, comprising: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of the first or third aspect.
In a sixth aspect, a computer-readable storage medium is provided, having program code stored therein, the program code being callable by a processor to perform the method of the first or third aspect.
In the audio transmission method, the audio transmission apparatus, the smart device, and the computer-readable storage medium provided by the embodiments of the present application, as soon as the remaining storage space of the target audio buffer is less than or equal to the storage space threshold, the audio data in the target audio buffer is actively sent to the server. Compared with a scheme that waits for an upper layer to pull the audio data, by which time the remaining storage space of the target audio buffer may already have fallen below the threshold, this sends the audio data to the server promptly and reduces its transmission delay. In addition, after the smart device acquires the audio data, it stores the data in a target audio buffer of the hardware abstraction layer whose storage space is smaller than a preset storage space. The preset storage space may be, for example, the storage space of a conventional audio buffer, meaning the target audio buffer is smaller than a conventional one; the smaller the target audio buffer, the earlier the smart device sends the audio data to the server and the smaller the transmission delay of the audio data.
Drawings
To describe the technical solutions of the embodiments of the present application more clearly, the drawings used in the embodiments are briefly introduced below. It should be understood that the following drawings show only some embodiments of the present application and should not be regarded as limiting its scope; a person skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a working architecture of an intelligent device provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of an audio transmission method according to an embodiment of the present application;
fig. 3 is a scene diagram of an audio transmission process according to an embodiment of the present application;
fig. 4 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 5 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 6 is a schematic flowchart of an audio transmission method according to an embodiment of the present application;
fig. 7 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 8 is a scene diagram of an audio transmission process according to an embodiment of the present application;
fig. 9 is a scene diagram of an audio transmission process according to an embodiment of the present application;
fig. 10 is a block diagram of an audio transmission apparatus provided in an embodiment of the present application;
fig. 11 is a flowchart illustrating an audio transmission method according to an embodiment of the present application;
fig. 12 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 13 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 14 is a flowchart illustrating an audio transmission method according to an embodiment of the present application;
fig. 15 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 16 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 17 is a scene diagram of an audio transmission process provided in an embodiment of the present application;
fig. 18 is a block diagram of an audio transmission device provided in an embodiment of the present application;
fig. 19 is a timing diagram of interaction between an intelligent device and a server according to an embodiment of the present application;
fig. 20 is a relational block diagram of each module in the intelligent device according to the embodiment of the present application;
fig. 21 is a block diagram illustrating a relationship between a computer-readable storage medium and an application program according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. The features of the embodiments may be combined with one another provided there is no conflict.
As noted in the background, the audio transmission process is complex, which leads to a long delay, slow transmission of audio data, and a degraded user experience.
Take several users teaming up to play a game as an example: the users can enable a voice function to communicate within the team. When user A speaks, smart device A transmits the sound to the server as audio data, and the server then forwards it to smart device B, so that user B can hear what user A said. If smart device A introduces a large delay when transmitting the audio data to the server, and the server introduces a large delay when transmitting it to smart device B, user B hears user A's words only after a long time, which greatly degrades the gaming experience.
In view of these problems, the inventor proposes an audio transmission method that reduces the delay in audio data transmission, increases the transmission speed, and improves the user experience.
Taking the case where the smart device sends audio data to the server, the smart device captures the words spoken by the user, records, transfers, processes, and encodes them as audio data, and then sends the audio data to the server.
Taking the case where the server sends audio data to the smart device, after receiving the audio data from the server, the smart device decodes and processes it and finally plays it.
As shown in fig. 1, in the interaction process between the intelligent device and the server, the working architecture of the intelligent device may include a hardware abstraction layer (Audio HAL), a service layer (Audio Flinger), and an application layer.
The hardware abstraction layer bridges the hardware driver layer and the upper-layer framework. It interacts with the audio hardware device, so it can access audio data from below and pass it up to the framework.
The service layer formulates and executes audio policies; it is responsible for managing input and output audio data, and it processes and forwards audio data according to the audio policy.
The application layer provides an audio recording interface (Audio Record) and an audio playback interface (Audio Track). When the smart device interacts with the server, it sends audio data to the server through the Audio Record interface of the application layer and obtains audio data from the server through the Audio Track interface.
As shown in fig. 2, an embodiment of the present application provides an audio transmission method, which is applied to an intelligent device, and the embodiment of the present application describes a process of steps at an intelligent device side, where the method includes:
s110, audio data are obtained, and the audio data are stored in a target audio cache region, wherein the target audio cache region is one of a plurality of audio cache regions included in a hardware abstraction layer, and the storage space of the target audio cache region is smaller than a preset storage space.
When permitted, the smart device may monitor for audio in real time. As shown in fig. 3, when sound is detected, the audio hardware device 11 records it, converts the captured analog audio signal into digital audio data, and then passes the audio data, in digital form, to the hardware abstraction layer.
As shown in fig. 4, after the audio data is transferred to the hardware abstraction layer, the audio data may be stored in a target audio buffer of the hardware abstraction layer. The hardware abstraction layer comprises a plurality of audio cache regions, the target audio cache region is one of the plurality of audio cache regions, and the target audio cache region is used for storing audio data currently acquired from the microphone.
In some embodiments, in addition to the target audio buffer, other audio buffers in the hardware abstraction layer may also be used as the target audio buffer during the process of storing audio data. It should be noted that the number of the target audio buffers at the same time is one.
In some embodiments, the preset storage space may be, for example, the storage space of a conventional audio buffer. For example, a conventional audio buffer can store at most 20 ms of playable audio data; under the same recording and playback conditions for the same content, the target audio buffer in the embodiments of the present application can store less than 20 ms of playable audio data at most, for example 5 ms to 10 ms, and optionally 5 ms, 8 ms, or 10 ms.
The specific storage space of the audio buffer area is related to the duration, processing degree, recording effect, playing effect, and the like of the audio data, which is not limited in the embodiments of the present application.
For example, audio data with a playback duration of 5 ms may require 1000 bits of storage.
In some embodiments, since the target audio buffer is one of the multiple audio buffers included in the hardware abstraction layer, when the storage space of the target audio buffer is smaller than the preset storage space, the storage spaces of the other audio buffers are also smaller than the preset storage space.
In some embodiments, the number of the audio buffers included in the hardware abstraction layer is not limited, as long as the number of the audio buffers is multiple, for example, the number of the audio buffers is two.
In some embodiments, the audio hardware device 11 may be, for example, a microphone (Mic) or the like, which may convert audio data of an analog signal into a digital signal form.
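To make the size relationship concrete, the following minimal sketch computes how many bytes a 20 ms (preset) buffer and a 5 ms (target) buffer would occupy for an assumed PCM format; the sample rate, channel count, and sample width are illustrative assumptions, not values fixed by the embodiments:

```cpp
#include <cstddef>
#include <cstdio>

// Illustrative PCM parameters only; the embodiments do not fix a format.
constexpr int kSampleRateHz   = 16000; // assumed capture rate
constexpr int kChannels       = 1;     // assumed mono
constexpr int kBytesPerSample = 2;     // assumed 16-bit PCM

// Storage needed to hold `ms` milliseconds of audio in one HAL buffer.
constexpr size_t BufferBytesForMs(int ms) {
    return static_cast<size_t>(kSampleRateHz) * kChannels * kBytesPerSample * ms / 1000;
}

int main() {
    const size_t presetBytes = BufferBytesForMs(20); // conventional buffer: ~20 ms of audio
    const size_t targetBytes = BufferBytesForMs(5);  // target buffer of this embodiment: ~5 ms
    std::printf("preset (20 ms): %zu bytes, target (5 ms): %zu bytes\n",
                presetBytes, targetBytes);
    return 0;
}
```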
And S120, monitoring the residual storage space of the target audio buffer area.
The smart device 10 may monitor the remaining storage space of the target audio buffer in real-time.
And S130, when the residual storage space of the target audio cache region is smaller than or equal to the storage space threshold value, sending the audio data of the target audio cache region to the server.
As shown in fig. 5, once the smart device 10 monitors that the remaining storage space of the target audio buffer is smaller than or equal to the storage space threshold, the smart device 10 actively sends the audio data of the target audio buffer to the server 20.
In some embodiments, it takes a certain time to transmit the audio data in the target audio buffer to the server 20, and all the audio data in the target audio buffer need to be transmitted to the server 20 according to a certain sequence. Optionally, in this embodiment of the application, after the audio data is transmitted to the target audio buffer area of the hardware abstraction layer, the audio data that is First stored in the target audio buffer area may be First sent to the server 20 by using a First-in First-out (FIFO) principle.
In some embodiments, the size of the storage space threshold may be set according to a working principle of a target audio buffer of the hardware abstraction layer, which is not particularly limited in the embodiment of the present application.
For example, the memory space threshold may be 0bit, 5bit, or the like.
In other embodiments, if the smart device 10 detects that audio acquisition from the audio hardware device has finished while the remaining storage space of the target audio buffer is still greater than the storage space threshold, it may stop acquiring audio data and send the remaining audio data in the target audio buffer to the server 20.
In some embodiments, the server 20 may be a traditional server or a cloud server or the like.
And S140, acquiring new audio data, and storing the new audio data in another audio buffer area of the hardware abstraction layer.
As shown in fig. 5, if the remaining storage space of the previous target audio buffer is less than or equal to the storage space threshold and new audio data can still be obtained from the audio hardware device, the smart device 10 may, while sending the audio data of the previous target audio buffer to the server 20, store the new audio data in another audio buffer of the hardware abstraction layer; that buffer becomes the target audio buffer while it is storing the audio data.
Once the remaining storage space of the target audio buffer is less than or equal to the storage space threshold, the audio data in the target audio buffer is immediately and actively sent to the server 20. Compared with a scheme that waits for the upper layer to pull the audio data, by which time the remaining storage space of the target audio buffer may already have fallen below the threshold, the audio data can be sent to the server 20 in time, reducing its transmission delay. In addition, after the smart device 10 acquires the audio data, it stores the data in a target audio buffer of the hardware abstraction layer whose storage space is smaller than a preset storage space; the preset storage space may be, for example, the storage space of a conventional audio buffer, so the target audio buffer is smaller than a conventional one. The smaller the target audio buffer, the earlier the smart device 10 sends the audio data to the server 20 and the smaller the transmission delay of the audio data.
For example, suppose that when the target audio buffer is filled to the storage space threshold it holds 5 ms of playable audio data; then as soon as the target audio buffer is filled with the audio data corresponding to 5 ms, that data can be sent to the server 20. By contrast, suppose a conventional target audio buffer holds 20 ms of playable audio data when filled to the threshold: if the upper layer happens to pull exactly when the 20 ms of audio data is full, the audio is sent to the server 20 only once the buffer holds 20 ms of data, so in this case the smart device 10 of the present application reduces the delay of a single send to the server 20 by 15 ms; and if, in the conventional scheme, the remaining storage space is already below the threshold when the upper layer pulls, the buffer holds 20 ms of audio data and must still wait an additional interval before the data can be sent to the server 20.
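A minimal sketch of steps S110–S140 follows. The helpers ReadFromMic and SendToServer, the buffer sizes, and the threshold value are illustrative assumptions rather than part of any real HAL interface; the sketch only shows the threshold-triggered flush and the switch to the other small HAL buffer:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Stub helpers standing in for the audio hardware device and the network path
// to the server; real implementations would wrap the microphone and a socket.
size_t ReadFromMic(uint8_t* dst, size_t maxBytes) {
    std::fill_n(dst, maxBytes, 0);  // pretend we captured maxBytes of silence
    return maxBytes;
}
void SendToServer(const uint8_t* /*data*/, size_t bytes) {
    std::printf("flushed %zu bytes to the server\n", bytes);
}

// One small HAL audio buffer whose capacity is below the preset storage space.
struct AudioBuffer {
    std::vector<uint8_t> data;
    size_t used = 0;
    explicit AudioBuffer(size_t capacity) : data(capacity) {}
    size_t Remaining() const { return data.size() - used; }
};

// S110-S140: fill the target buffer, watch its remaining space, flush it to the
// server once the remaining space drops to the threshold, then switch to the
// other buffer so capture continues while the flushed data is in flight.
void CaptureLoop(size_t bufferBytes, size_t thresholdBytes, int iterations) {
    AudioBuffer buffers[2] = {AudioBuffer(bufferBytes), AudioBuffer(bufferBytes)};
    int target = 0;  // index of the current target audio buffer

    for (int i = 0; i < iterations; ++i) {
        AudioBuffer& buf = buffers[target];
        // S110: acquire audio data and store it in the target audio buffer.
        size_t chunk = std::min<size_t>(64, buf.Remaining());
        buf.used += ReadFromMic(buf.data.data() + buf.used, chunk);

        // S120/S130: send the buffered audio as soon as the remaining space is
        // less than or equal to the threshold (oldest data goes first, FIFO).
        if (buf.Remaining() <= thresholdBytes) {
            SendToServer(buf.data.data(), buf.used);
            buf.used = 0;
            target = 1 - target;  // S140: new audio now goes to the other buffer
        }
    }
}

int main() {
    // Illustrative sizes only: a 1024-byte target buffer with a 0-byte threshold.
    CaptureLoop(/*bufferBytes=*/1024, /*thresholdBytes=*/0, /*iterations=*/64);
    return 0;
}
```

In practice the capture and upload would typically run on separate threads so that filling one buffer can overlap with flushing the other; the single loop above keeps the sketch short.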
As shown in fig. 6, an embodiment of the present application provides an audio transmission method applied to an intelligent device 10, where the embodiment of the present application describes a flow of steps on the intelligent device side, and the method includes:
s110, audio data are obtained, and the audio data are stored in a target audio cache region, wherein the target audio cache region is one of a plurality of audio cache regions included in a hardware abstraction layer, and the storage space of the target audio cache region is smaller than a preset storage space.
And S120, monitoring the residual storage space of the target audio buffer area.
S131, when the remaining storage space of the target audio buffer is less than or equal to the storage space threshold, transmitting the audio data in the target audio buffer to the service layer.
As shown in fig. 7, when the remaining storage space of the target audio buffer is less than or equal to the storage space threshold, the smart device 10 actively transmits the audio data in the target audio buffer to the service layer.
S132, processing the received audio data according to the audio strategy in the service layer.
As shown in fig. 7, a policy executor (Audio Flinger Service) of the service layer may process the received audio data, for example applying sound effects and resampling, according to the audio policy, and then store the processed audio data in a buffer. The received audio data may be processed by an internal thread (threadLoop) of the Audio Flinger Service.
In some embodiments, the audio policy is formulated by a policy maker (Audio Policy Service) of the service layer, and the Audio Flinger Service of the service layer executes the audio policy formulated by the Audio Policy Service. The Audio Policy Service may formulate the audio policy in advance, and the Audio Flinger Service executes it directly.
And S133, transmitting the processed audio data to a shared buffer area, wherein the application layer and the service layer share the shared buffer area.
As shown in fig. 8, a shared buffer may also be set between the underlying system and the upper application. The service layer may transmit the processed audio data to the shared buffer for the application layer to obtain.
In some embodiments, it takes a certain time for the smart device 10 to transmit the audio data in the buffer of the service layer to the shared buffer, and the audio data stored in the buffer of the service layer needs to be transmitted to the shared buffer in a certain sequence. Optionally, in this embodiment of the present application, the audio data stored in the buffer of the service layer may be stored in the buffer first and transmitted to the shared buffer first by using a FIFO principle. As shown in fig. 8, in the buffer, the audio data corresponding to the filled portion is already transmitted to the shared buffer, and the audio data corresponding to the unfilled portion is not yet transmitted to the shared buffer.
S134, acquiring the audio data from the shared buffer through the audio recording interface of the application layer, and sending the audio data to the server.
As shown in fig. 9, when the server 20 obtains audio data from the smart device 10, the smart device 10 may enable interaction with the server 20 through an application layer. The smart device 10 may send the audio data in the shared buffer to the server 20.
In some embodiments, the process of sending the audio data in the shared buffer to the server 20 through the application layer by the smart device 10 needs to take a certain time, and the audio data stored in the shared buffer needs to be sent to the server 20 according to a certain sequence. Optionally, in this embodiment of the application, the audio data stored in the shared buffer may be first stored in the shared buffer by using the FIFO principle, and then sent to the server 20. As shown in fig. 9, in the shared buffer, the audio data corresponding to the filled portion has been sent to the server 20, and the audio data corresponding to the unfilled portion has not been sent to the server 20.
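The shared buffer described in steps S133–S134 behaves as a FIFO between the service layer (producer) and the application layer (consumer). The sketch below is a minimal single-threaded stand-in for such a buffer; the class name and sizes are assumptions, and a real shared buffer between the two layers would also need synchronization:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Minimal FIFO standing in for the shared buffer between the service layer
// (producer) and the application layer (consumer).
class SharedAudioFifo {
public:
    explicit SharedAudioFifo(size_t capacity) : buf_(capacity) {}

    // Service-layer side: append processed audio data behind what is already queued.
    size_t Write(const uint8_t* src, size_t n) {
        size_t can = std::min(n, buf_.size() - size_);
        for (size_t i = 0; i < can; ++i)
            buf_[(head_ + size_ + i) % buf_.size()] = src[i];
        size_ += can;
        return can;
    }

    // Application-layer side (audio recording interface): pop the oldest audio
    // data first, so it is sent to the server in FIFO order.
    size_t Read(uint8_t* dst, size_t n) {
        size_t can = std::min(n, size_);
        for (size_t i = 0; i < can; ++i)
            dst[i] = buf_[(head_ + i) % buf_.size()];
        head_ = (head_ + can) % buf_.size();
        size_ -= can;
        return can;
    }

    size_t Size() const { return size_; }

private:
    std::vector<uint8_t> buf_;
    size_t head_ = 0;
    size_t size_ = 0;
};

int main() {
    SharedAudioFifo fifo(/*capacity=*/256);
    uint8_t processed[64] = {0};
    fifo.Write(processed, sizeof(processed));       // service layer pushes processed audio
    uint8_t toSend[64];
    size_t got = fifo.Read(toSend, sizeof(toSend)); // application layer pulls it for upload
    std::printf("application layer pulled %zu bytes for upload\n", got);
    return 0;
}
```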
And S140, acquiring new audio data, and storing the new audio data in another audio buffer area of the hardware abstraction layer.
The explanation of steps S110, S120, and S140 in this embodiment is the same as the explanation of steps S110, S120, and S140 in the foregoing embodiment, and is not repeated here.
The embodiments of the present application provide an audio transmission method in which the smart device 10 transmits the audio data stored in the target audio buffer to the service layer, where it is processed (for example, with sound effects and resampling) to improve its subsequent playback quality; the processed audio data is then sent to the shared buffer so that, when the server 20 obtains the audio data, it can be fetched directly from the shared buffer through the application layer.
As shown in fig. 10, which shows a block diagram of an audio transmission apparatus 100 according to another embodiment of the present application, the audio transmission apparatus 100 may include an obtaining module 101, a processing module 102, and a sending module 103.
The obtaining module 101 is configured to obtain audio data, and store the audio data in a target audio cache region, where the target audio cache region is one of multiple audio cache regions included in a hardware abstraction layer, and a storage space of the target audio cache region is smaller than a preset storage space.
The processing module 102 is configured to monitor a remaining storage space of the target audio buffer.
A sending module 103, configured to send the audio data in the target audio buffer to the server when the remaining storage space in the target audio buffer is less than or equal to the storage space threshold.
The obtaining module 101 is further configured to obtain new audio data, and store the new audio data in another audio buffer of the hardware abstraction layer.
On this basis, the processing module 102 is further configured to transmit the audio data in the target audio buffer to the service layer when the remaining storage space of the target audio buffer is less than or equal to the storage space threshold, and to process the received audio data at the service layer according to the audio policy; the sending module 103 is further configured to send the processed audio data to the server 20 through the application layer.
The processing module 102 is further configured to transmit the processed audio data to a shared buffer, which is shared by the application layer and the service layer; the sending module 103 is further configured to obtain the audio data from the shared buffer through the audio recording interface of the application layer and send the audio data to the server.
The embodiment of the present application provides an audio transmission apparatus 100, which has the same explanation and advantages as the audio transmission method of the foregoing embodiment, and is not repeated herein.
As shown in fig. 11, an audio transmission method provided in the embodiment of the present application is applied to an intelligent device 10, and the embodiment of the present application describes a flow of steps on the intelligent device side, where the method includes:
s210, receiving audio data sent by a server, and storing the audio data in a target cache region, wherein the target audio cache region is one of a plurality of audio cache regions included in a hardware abstraction layer, and the storage space of the target audio cache region is smaller than a preset storage space.
As shown in fig. 12, the audio data transmitted by the server 20 may be stored in a target audio buffer of the hardware abstraction layer. The hardware abstraction layer comprises a plurality of audio cache regions, the target audio cache region is one of the plurality of audio cache regions, and the target audio cache region is used for storing audio data currently acquired from the microphone.
In some embodiments, in addition to the target audio buffer, other audio buffers in the hardware abstraction layer may also serve as the target audio buffer during the process of storing audio data. It should be noted that the number of the target audio buffers at the same time is one.
In some embodiments, the predetermined storage space may be, for example, a storage space of a prior art audio buffer. For example, in the prior art, the audio data that can be stored in the audio buffer at most may be played for 20ms, and when the same content of the same sound is played under the same recording and playing effects, the playing time of the audio data that can be stored in the target audio buffer at most in the embodiment of the present application is less than 20ms, for example, the playing time of the audio data that can be stored in the target audio buffer at most in the embodiment of the present application is 5ms to 10ms, and optionally, the playing time is 5ms, 8ms, or 10ms.
The specific storage space of the audio buffer area is related to the duration, processing degree, recording effect, playing effect, and the like of the audio data, which is not limited in the embodiments of the present application.
For example, the storage space required for playing audio data with a duration of 5ms is 1000 bits.
In some embodiments, since the target audio buffer is one of the multiple audio buffers included in the hardware abstraction layer, when the storage space of the target audio buffer is smaller than the preset storage space, the storage spaces of the other audio buffers are also smaller than the preset storage space.
In some embodiments, the audio data sent by the server 20 to the smart device 10 may include audio data sent by other smart devices 10 to the server 20, and may also include sound in the current scene of the smart device 10.
For example, if the current scene of the smart device 10 is a game in team with other smart devices 10, the smart device 10 may receive the sound emitted by the users using other smart devices 10 and may also receive the background sound of the game.
In some embodiments, the number of the audio buffers included in the hardware abstraction layer is not limited, as long as the number of the audio buffers is multiple, for example, the number of the audio buffers is two.
In some embodiments, the server 20 may be a traditional server or a cloud server or the like.
And S220, monitoring the residual storage space of the target audio buffer area.
The smart device 10 may monitor the remaining storage space of the target audio buffer in real-time.
And S230, when the residual storage space of the target audio buffer area is smaller than or equal to the storage space threshold value, sending the audio data of the target audio buffer area to the sound sending equipment.
As shown in fig. 13, once the smart device 10 monitors that the remaining storage space of the target audio buffer is smaller than or equal to the storage space threshold, the smart device 10 actively sends the audio data of the target audio buffer to the sound sending device 12.
In some embodiments, a certain time is consumed in the process of sending the audio data in the target audio buffer to the sound sending device 12, and all the audio data in the target audio buffer need to be sent to the sound sending device 12 according to a certain sequence. Optionally, in this embodiment of the application, after the audio data is transmitted to the target audio buffer area of the hardware abstraction layer, the audio data stored in the target audio buffer area first may be sent to the sound sending device 12 first by using the FIFO principle.
In some embodiments, the size of the storage space threshold may be set according to a working principle of a target audio buffer of the hardware abstraction layer, which is not particularly limited in the embodiment of the present application.
For example, the storage space threshold may be 0bit, 5bit, or the like.
In other embodiments, if the smart device 10 detects that the server 20 has finished sending audio data while the remaining storage space of the target audio buffer is still greater than the storage space threshold, it may stop receiving audio data from the server 20 and send the remaining audio data in the target audio buffer to the sound sending device 12.
In some embodiments, the sound sending device 12 may be, for example, a sound cavity assembly including a sound cavity body, a speaker, and the like. After the audio data is sent to the sound sending device 12, it can be played through the speaker.
And S240, receiving new audio data, and storing the new audio data in another audio buffer area of the hardware abstraction layer.
As shown in fig. 13, if the remaining storage space of the previous target audio buffer is less than or equal to the storage space threshold and new audio data can still be received from the server 20, the smart device 10 may, while sending the audio data of the previous target audio buffer to the sound sending device 12, store the new audio data in another audio buffer of the hardware abstraction layer; that buffer becomes the target audio buffer while it is storing the audio data.
Once the remaining storage space of the target audio buffer is less than or equal to the storage space threshold, the audio data in the target audio buffer is immediately and actively sent to the sound sending device 12. Compared with a scheme that waits for a lower layer to pull the audio data, by which time the remaining storage space of the target audio buffer may already have fallen below the threshold, this audio transmission method sends the audio data to the sound sending device 12 in time, reducing its transmission delay. In addition, after the smart device 10 receives the audio data, it stores the data in a target audio buffer of the hardware abstraction layer whose storage space is smaller than a preset storage space; the preset storage space may be, for example, the storage space of a conventional audio buffer, so the target audio buffer is smaller than a conventional one. The smaller the target audio buffer, the earlier the audio data is sent to the sound sending device 12 and played, and the smaller the playback delay of the audio data.
For example, suppose that when the target audio buffer is filled to the storage space threshold it holds 5 ms of playable audio data; then as soon as the target audio buffer is filled with the audio data corresponding to 5 ms, that data can be sent to the sound sending device 12. By contrast, suppose a conventional target audio buffer holds 20 ms of playable audio data when filled to the threshold: if the lower layer happens to pull exactly when the 20 ms of audio data is full, the audio is sent to the sound sending device 12 only once the buffer holds 20 ms of data, so in this case the smart device 10 of the present application reduces the delay of a single send to the sound sending device 12 by 15 ms; and if, in the conventional scheme, the remaining storage space is already below the threshold when the lower layer pulls, the buffer holds 20 ms of audio data and must still wait an additional interval before the data can be sent to the sound sending device 12.
As shown in fig. 14, an embodiment of the present application provides an audio transmission method applied to an intelligent device 10, where the embodiment of the present application describes a flow of steps on the intelligent device side, and the method includes:
s211, receiving the audio data sent by the server through an audio playing interface of the application layer, and transmitting the audio data to a shared cache region, wherein the application layer and the service layer share the shared cache region.
As shown in fig. 15, when the server 20 transmits audio data to the smart device 10, the smart device 10 may enable interaction with the server 20 through an application layer. A shared cache region can be set between the bottom system and the upper application. The smart device 10 may receive the audio data sent by the server 20 through the audio playing interface of the application layer, and store the audio data in the shared buffer.
In some embodiments, as shown in fig. 15, when the server 20 sends audio data of different sources to the smart device 10, the smart device 10 may store the audio data of different sources in different shared buffers, respectively.
For example, if the smart device 10 receives the sound emitted by the user using another smart device 10 and the background sound of the game at the same time, the audio data corresponding to the sound emitted by the user using another smart device 10 and the audio data corresponding to the background sound of the game may be stored in different shared buffers.
S212, transmitting the audio data in the shared buffer area to a service layer.
As shown in fig. 16, the smart device 10 may transmit the audio data in the shared buffer to the service layer.
In some embodiments, the process of transmitting the audio data in the shared buffer to the service layer by the smart device 10 needs to consume a certain time, and the audio data stored in the shared buffer needs to be transmitted to the service layer according to a certain sequence. Optionally, in this embodiment of the present application, the audio data stored in the shared buffer area may be first stored in the shared buffer area and then transmitted to the service layer by using a FIFO principle. As shown in fig. 16, in the shared buffer, the audio data corresponding to the filled portion is already transmitted to the service layer, and the audio data corresponding to the unfilled portion is not yet transmitted to the service layer.
And S213, processing the received audio data according to the audio strategy in the service layer.
As shown in fig. 16, the Audio Flinger Service of the service layer may process the received audio data, for example applying sound effects and resampling, according to the audio policy, and then store the processed audio data in a buffer. The received audio data may be processed by an internal thread (threadLoop) of the Audio Flinger Service.
In some embodiments, the Audio Policy established by Audio Policy Service may skip at least one of the mixing process, the sound effect process, and the resampling process to further reduce the delay.
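One way to express such a policy is as a set of skip flags checked by the service layer before each processing stage. The structure, flag names, and stub stages below are illustrative assumptions and do not reproduce the real Audio Policy Service or Audio Flinger Service interfaces:

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical representation of an audio policy. Each flag marks a processing
// stage the policy allows the service layer to skip in order to reduce delay.
struct AudioPolicy {
    bool skipMixing     = false;
    bool skipEffects    = false;
    bool skipResampling = false;
};

using Frame = std::vector<int16_t>;

// Trivial stand-ins for the processing stages (assumptions, not AudioFlinger internals).
Frame Mix(const std::vector<Frame>& sources) {
    Frame out = sources.empty() ? Frame{} : Frame(sources.front().size(), 0);
    for (const Frame& s : sources)                       // naive sum; clipping ignored
        for (size_t i = 0; i < out.size() && i < s.size(); ++i)
            out[i] = static_cast<int16_t>(out[i] + s[i]);
    return out;
}
Frame ApplyEffects(Frame in) { return in; }                            // no-op stub
Frame Resample(Frame in, int /*fromHz*/, int /*toHz*/) { return in; }  // no-op stub

// Service-layer processing: each stage runs only if the policy does not skip
// it, so a low-latency policy can pass the audio through largely untouched.
Frame ProcessPerPolicy(std::vector<Frame> sources, const AudioPolicy& policy,
                       int srcRateHz, int sinkRateHz) {
    Frame out = (policy.skipMixing && !sources.empty()) ? std::move(sources.front())
                                                        : Mix(sources);
    if (!policy.skipEffects)    out = ApplyEffects(std::move(out));
    if (!policy.skipResampling) out = Resample(std::move(out), srcRateHz, sinkRateHz);
    return out;
}

int main() {
    AudioPolicy lowLatency;               // skip effects and resampling to reduce delay
    lowLatency.skipEffects = true;
    lowLatency.skipResampling = true;
    std::vector<Frame> sources = {Frame(160, 100), Frame(160, -50)};
    Frame out = ProcessPerPolicy(sources, lowLatency, 16000, 48000);
    return out.empty() ? 1 : 0;
}
```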
In some embodiments, as shown in fig. 16, when the server 20 sends audio data of different sources to the smart device 10, the smart device 10 may store the audio data of different sources in different buffers, respectively.
S214, transmitting the processed audio data to the hardware abstraction layer, and storing the audio data in a target audio cache region of the hardware abstraction layer.
As shown in fig. 17, the smart device 10 may transmit the processed audio data to the hardware abstraction layer, and store the audio data in a target audio buffer of the hardware abstraction layer.
In some embodiments, as shown in fig. 17, when the server 20 sends audio data with different sources to the smart device 10, the smart device 10 may mix the audio data with different sources and transmit the mixed audio data to the target audio buffer of the hardware abstraction layer.
In some embodiments, the process of transmitting the audio data in the buffer of the service layer to the target audio buffer by the smart device 10 needs to take a certain time, and the audio data stored in the buffer of the service layer needs to be transmitted to the target audio buffer in a certain sequence. Optionally, in this embodiment of the present application, the audio data stored in the buffer of the service layer may be stored in the buffer first and transmitted to the target audio buffer first by using a FIFO principle. As shown in fig. 17, in the buffer, the audio data corresponding to the filled portion has been transmitted to the target audio buffer, and the audio data corresponding to the unfilled portion has not been transmitted to the target audio buffer.
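The playback direction mirrors the capture-side sketch given earlier: the small HAL target buffer is filled from the service layer and handed to the sound sending device as soon as its remaining space reaches the threshold, after which the other buffer takes over. The helpers DrainServiceBuffer and SendToSoundDevice, and the sizes used, are hypothetical stand-ins rather than real HAL calls:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Stub helpers; real implementations would pull from the service-layer buffer
// and hand audio to the speaker path of the sound sending device.
size_t DrainServiceBuffer(uint8_t* dst, size_t maxBytes) {
    std::fill_n(dst, maxBytes, 0);  // pretend the service layer supplied maxBytes
    return maxBytes;
}
void SendToSoundDevice(const uint8_t* /*data*/, size_t bytes) {
    std::printf("handed %zu bytes to the sound sending device\n", bytes);
}

struct HalBuffer {
    std::vector<uint8_t> data;
    size_t used = 0;
    explicit HalBuffer(size_t capacity) : data(capacity) {}
    size_t Remaining() const { return data.size() - used; }
};

// S214-S240 (playback direction): fill the small target buffer from the service
// layer, flush it to the sound sending device once the remaining space is at or
// below the threshold, and switch to the other buffer for the next audio data.
void PlaybackLoop(size_t bufferBytes, size_t thresholdBytes, int iterations) {
    HalBuffer buffers[2] = {HalBuffer(bufferBytes), HalBuffer(bufferBytes)};
    int target = 0;
    for (int i = 0; i < iterations; ++i) {
        HalBuffer& buf = buffers[target];
        size_t chunk = std::min<size_t>(64, buf.Remaining());
        buf.used += DrainServiceBuffer(buf.data.data() + buf.used, chunk);  // S214
        if (buf.Remaining() <= thresholdBytes) {
            SendToSoundDevice(buf.data.data(), buf.used);                   // S230
            buf.used = 0;
            target = 1 - target;                                            // S240
        }
    }
}

int main() {
    PlaybackLoop(/*bufferBytes=*/1024, /*thresholdBytes=*/0, /*iterations=*/64);
    return 0;
}
```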
And S220, monitoring the residual storage space of the target audio buffer area.
And S230, when the residual storage space of the target audio buffer area is smaller than or equal to the storage space threshold value, sending the audio data of the target audio buffer area to the sound sending equipment.
And S240, receiving new audio data, and storing the new audio data in another audio buffer area of the hardware abstraction layer.
The explanation of steps S220, S230, and S240 in the embodiment of the present application is the same as the explanation of steps S220, S230, and S240 in the foregoing embodiment, and is not repeated herein.
When the server 20 sends audio data to the smart device 10, the smart device 10 may store the received audio data in a shared buffer and then transmit it to the service layer, where processing such as sound effects and resampling can be applied to improve the subsequent playback quality of the audio data.
As shown in fig. 18, which shows a block diagram of an audio transmission apparatus 200 according to another embodiment of the present application, the audio transmission apparatus 200 may include a receiving module 201, a processing module 202, and a sending module 203.
The receiving module 201 is configured to receive the audio data sent by the server 20, and store the audio data in a target audio cache region, where the target audio cache region is one of multiple audio cache regions included in the hardware abstraction layer, and a storage space of the target audio cache region is smaller than a preset storage space.
And the processing module 202 is configured to monitor a remaining storage space of the target audio buffer.
And the sending module 203 is configured to send the audio data of the target audio buffer to the sound sending device when the remaining storage space of the target audio buffer is smaller than or equal to the storage space threshold.
The receiving module 201 is further configured to receive new audio data, and store the new audio data in another audio buffer of the hardware abstraction layer.
On this basis, the receiving module 201 is further configured to receive, by the application layer, the audio data sent by the server, and store the audio data in the service layer; the processing module 202 is further configured to, at the service layer, process the received audio data according to the audio policy, transmit the processed audio data to the hardware abstraction layer, and store the audio data in a target audio buffer of the hardware abstraction layer.
The receiving module 201 is further configured to receive audio data sent by the server through an audio playing interface of the application layer, and transmit the audio data to the shared cache region, where the application layer and the service layer share the shared cache region; the sending module 203 is further configured to transmit the audio data in the shared buffer to the service layer.
The audio transmission apparatus 200 provided by the embodiment of the present application has the same explanations and beneficial effects as the audio transmission method of the previous embodiments, which are not repeated here.
As shown in fig. 19, another embodiment of the present application provides an audio transmission method applicable to interaction among a first smart device, a second smart device, and the server 20. This embodiment describes the interaction flow among the first smart device, the second smart device, and the server 20, and the method may include:
s310, the first intelligent device obtains audio data and stores the audio data in a first target audio cache region, the first target audio cache region is one of a plurality of first audio cache regions included in a first hardware abstraction layer, and the storage space of the first target audio cache region is smaller than a first preset storage space.
S320, monitoring the first residual storage space of the first target audio buffer area by the first intelligent device.
S330, when monitoring that the residual storage space of the first target audio cache region is smaller than or equal to the first storage space threshold value, the first intelligent device sends the first audio data of the first target audio cache region to the server.
S340, the first intelligent device obtains new first audio data and stores the new first audio data in another first audio cache region of the first hardware abstraction layer.
S350, the server receives the first audio data and sends second audio data to the second intelligent device, wherein the second audio data comprise the first audio data.
And S360, the second intelligent device receives second audio data sent by the server and stores the second audio data in a second target cache region, the second target audio cache region is one of a plurality of second audio cache regions included in a second hardware abstraction layer, and the storage space of the second target audio cache region is smaller than a second preset storage space.
And S370, the second intelligent device monitors the second residual storage space of the second target audio buffer area.
And S380, when the second residual storage space of the second target audio buffer area is smaller than or equal to the second storage space threshold value, sending the second audio data of the second target audio buffer area to the sound sending equipment.
And S390, receiving the new second audio data, and storing the new second audio data in another second audio cache region of the second hardware abstraction layer.
For the explanation and beneficial effects of the interaction process between the first smart device, the second smart device, and the server 20, reference may be made to the foregoing embodiments, and details are not described herein again.
As shown in fig. 20, a block diagram of a smart device 10 according to another embodiment of the present application is shown. The smart device 10 includes: one or more processors 101; a memory 102; and one or more applications 103, wherein the one or more applications 103 are stored in the memory and configured to be executed by the one or more processors 101, the one or more applications 103 being configured to perform the methods described in the foregoing embodiments.
The processor 101 may include one or more processing cores. The processor 101 connects the various components of the smart device 10 using various interfaces and lines, and performs the various functions of the smart device 10 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 102 and by invoking data stored in the memory 102. Optionally, the processor 101 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), or Programmable Logic Array (PLA). The processor 101 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, applications, and the like; the GPU renders and draws display content; the modem handles wireless communication. It is understood that the modem may also not be integrated into the processor 101 and may instead be implemented by a separate communication chip.
The memory 102 may include Random Access Memory (RAM) or Read-Only Memory (ROM). The memory 102 may be used to store instructions, programs, code sets, or instruction sets. The memory 102 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the method embodiments described herein, and the like. The data storage area may store data created by the smart device 10 during use (for example, a phone book, audio and video data, or chat logs).
Fig. 21 is a block diagram illustrating a computer-readable storage medium 300 according to another embodiment of the present application. The computer-readable storage medium 300 has stored therein program code that can be called by a processor to execute the method described in the above-described method embodiments.
The computer-readable storage medium 300 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer-readable storage medium 300 comprises a non-transitory computer-readable medium.
The computer-readable storage medium 300 has storage space for the application 103 that performs any of the steps of the methods described above. The application 103 may be read from or written to one or more computer program products, and may be compressed, for example, in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications and substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the embodiments of the present application.

Claims (11)

1. An audio transmission method applied to intelligent equipment is characterized by comprising the following steps:
acquiring audio data, and storing the audio data in a target audio cache region, wherein the target audio cache region is one of a plurality of audio cache regions included in a hardware abstraction layer, the storage spaces of the plurality of audio cache regions are all smaller than a preset storage space, and the preset storage space is the storage space of the audio cache region in the prior art;
monitoring the residual storage space of the target audio cache region;
when the residual storage space of the target audio cache region is smaller than or equal to a storage space threshold value, sending the audio data of the target audio cache region to a server;
and acquiring new audio data, and storing the new audio data in another audio cache region of the hardware abstraction layer, wherein the another audio cache region is used as the target audio cache region in the process of storing the audio data.
2. The method according to claim 1, wherein the sending the audio data of the target audio cache region to a server when the residual storage space of the target audio cache region is smaller than or equal to a storage space threshold value comprises:
when the residual storage space of the target audio cache region is smaller than or equal to the storage space threshold value, transmitting the audio data in the target audio cache region to a service layer;
processing the received audio data according to an audio strategy in the service layer;
and sending the processed audio data to the server through an application layer.
3. The method of claim 2, wherein sending the processed audio data to the server through an application layer comprises:
transmitting the processed audio data to a shared cache region, wherein the application layer and the service layer share the shared cache region;
and acquiring the audio data from the shared cache region through an audio recording interface of the application layer, and sending the audio data to the server.
4. An audio transmission device, comprising:
the device comprises an acquisition module, a storage module and a processing module, wherein the acquisition module is used for acquiring audio data and storing the audio data in a target audio cache region, the target audio cache region is one of a plurality of audio cache regions included in a hardware abstraction layer, the storage spaces of the plurality of audio cache regions are all smaller than a preset storage space, and the preset storage space is the storage space of the audio cache region in the prior art;
the processing module is used for monitoring the residual storage space of the target audio cache region;
the sending module is used for sending the audio data of the target audio cache region to a server when the residual storage space of the target audio cache region is smaller than or equal to a storage space threshold value;
the obtaining module is further configured to obtain new audio data, and store the new audio data in another audio cache region of the hardware abstraction layer, where the another audio cache region is used as the target audio cache region in a process of storing audio data.
5. An audio transmission method is applied to intelligent equipment and is characterized by comprising the following steps:
receiving audio data sent by a server, and storing the audio data in a target audio cache region, wherein the target audio cache region is one of a plurality of audio cache regions included in a hardware abstraction layer, the storage spaces of the plurality of audio cache regions are all smaller than a preset storage space, and the preset storage space is the storage space of the audio cache region in the prior art;
monitoring the residual storage space of the target audio cache region;
when the residual storage space of the target audio cache region is smaller than or equal to a storage space threshold value, sending the audio data of the target audio cache region to sound sending equipment;
and receiving new audio data, and storing the new audio data in another audio cache region of the hardware abstraction layer, wherein the another audio cache region is used as the target audio cache region in the process of storing the audio data.
6. The method of claim 5, wherein the receiving the audio data sent by the server and storing the audio data in the target audio cache region comprises:
receiving audio data sent by a server through an application layer, and storing the audio data in a service layer;
processing the received audio data according to an audio strategy in the service layer;
transmitting the processed audio data to the hardware abstraction layer, and storing the audio data in the target audio cache region of the hardware abstraction layer.
7. The method of claim 6, wherein the audio strategy skips at least one of a mixing process, a sound effect process, and a resampling process.
8. The method according to claim 6 or 7, wherein the receiving, by the application layer, the audio data sent by the server, and storing the audio data in the service layer comprises:
receiving the audio data sent by the server through an audio playing interface of the application layer, and transmitting the audio data to a shared cache region, wherein the application layer and the service layer share the shared cache region;
and transmitting the audio data in the shared cache region to the service layer.
9. An audio transmission device, comprising:
the receiving module is used for receiving audio data sent by a server and storing the audio data in a target audio cache region, wherein the target audio cache region is one of a plurality of audio cache regions included in a hardware abstraction layer, the storage spaces of the plurality of audio cache regions are all smaller than a preset storage space, and the preset storage space is the storage space of the audio cache region in the prior art;
the processing module is used for monitoring the residual storage space of the target audio cache region;
the sending module is used for sending the audio data of the target audio cache region to the sound sending equipment when the residual storage space of the target audio cache region is smaller than or equal to the storage space threshold value;
the receiving module is further configured to receive new audio data, and store the new audio data in another audio cache region of the hardware abstraction layer, where the another audio cache region is used as the target audio cache region in a process of storing audio data.
10. A smart device, comprising:
one or more processors;
a memory; and
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-3 or any of claims 5-8.
11. A computer-readable storage medium having program code stored therein, the program code being callable by a processor to perform the method of any one of claims 1-3 or any one of claims 5-8.
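As a rough illustration of the capture path recited in claims 1 to 3, the following C++ sketch rotates small cache regions in the hardware abstraction layer, hands the data to a service-layer stand-in once the residual storage space reaches the threshold, and lets an application-layer stand-in drain a cache region shared with the service layer before upload. Names (CaptureHal, SharedCache, readFromSharedCache), sizes, thresholds, and the omission of the audio strategy are assumptions made for this sketch, not details of the claimed method.

#include <algorithm>
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <mutex>
#include <vector>

// Cache region shared by the service layer and the application layer (claim 3 analogue).
struct SharedCache {
    std::mutex lock;
    std::vector<uint8_t> bytes;
};

// Hypothetical capture-side hardware abstraction layer with several small cache regions.
class CaptureHal {
public:
    static constexpr size_t kCapacity  = 4 * 1024;  // assumed per-region capacity
    static constexpr size_t kThreshold = 512;       // assumed storage space threshold

    explicit CaptureHal(SharedCache& shared) : shared_(shared) {}

    // Store captured audio data in the current target cache region.
    void onCapturedAudio(const uint8_t* data, size_t len) {
        Region& target = regions_[current_];
        size_t copied = std::min(len, kCapacity - target.used);
        std::memcpy(target.bytes.data() + target.used, data, copied);
        target.used += copied;

        // Monitor the residual storage space; hand off when the threshold is reached.
        if (kCapacity - target.used <= kThreshold) {
            passToServiceLayer(target);   // claim 2 analogue: transmit to the service layer
            target.used = 0;
            current_ = 1 - current_;      // new audio data goes to another cache region
        }
    }

private:
    struct Region {
        std::array<uint8_t, kCapacity> bytes{};
        size_t used = 0;
    };

    // Service-layer stand-in: a real implementation would apply the audio strategy here;
    // this sketch only copies the bytes into the shared cache region.
    void passToServiceLayer(const Region& r) {
        std::lock_guard<std::mutex> guard(shared_.lock);
        shared_.bytes.insert(shared_.bytes.end(),
                             r.bytes.begin(), r.bytes.begin() + r.used);
    }

    SharedCache& shared_;
    std::array<Region, 2> regions_{};
    size_t current_ = 0;
};

// Application-layer stand-in for the audio recording interface: drains the shared
// cache region; in a full implementation the returned bytes would be sent to the server.
std::vector<uint8_t> readFromSharedCache(SharedCache& shared) {
    std::lock_guard<std::mutex> guard(shared.lock);
    std::vector<uint8_t> out;
    out.swap(shared.bytes);
    return out;
}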
CN202011349527.7A 2020-11-26 2020-11-26 Audio transmission method and device, intelligent equipment and computer readable storage medium Active CN112468841B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011349527.7A CN112468841B (en) 2020-11-26 2020-11-26 Audio transmission method and device, intelligent equipment and computer readable storage medium
PCT/CN2021/123834 WO2022111111A1 (en) 2020-11-26 2021-10-14 Audio transmission method and apparatus, smart device, and computer-readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011349527.7A CN112468841B (en) 2020-11-26 2020-11-26 Audio transmission method and device, intelligent equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN112468841A CN112468841A (en) 2021-03-09
CN112468841B true CN112468841B (en) 2023-04-07

Family

ID=74808742

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011349527.7A Active CN112468841B (en) 2020-11-26 2020-11-26 Audio transmission method and device, intelligent equipment and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN112468841B (en)
WO (1) WO2022111111A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112468841B (en) * 2020-11-26 2023-04-07 Oppo广东移动通信有限公司 Audio transmission method and device, intelligent equipment and computer readable storage medium
CN114727128B (en) * 2022-03-30 2024-04-12 恒玄科技(上海)股份有限公司 Data transmission method and device of playing terminal, playing terminal and storage medium
CN114979726B (en) * 2022-06-30 2023-09-26 重庆紫光华山智安科技有限公司 Code rate adjusting method, device, server and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992282A (en) * 2017-11-29 2018-05-04 珠海市魅族科技有限公司 Audio data processing method and device, computer installation and readable storage devices
CN109600700A (en) * 2018-11-16 2019-04-09 珠海市杰理科技股份有限公司 Audio data processing method, device, computer equipment and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5901149A (en) * 1994-11-09 1999-05-04 Sony Corporation Decode and encode system
CN1983907A (en) * 2005-12-13 2007-06-20 中兴通讯股份有限公司 Method for controlling flow media transmitting rate
CN101778204B (en) * 2010-02-06 2012-01-18 大连科迪视频技术有限公司 3G-SDI high-definition digital audio/video delay system
US9406341B2 (en) * 2011-10-01 2016-08-02 Google Inc. Audio file processing to reduce latencies in play start times for cloud served audio files
CN102547435B (en) * 2011-12-16 2014-06-25 Tcl集团股份有限公司 System and method for playing and processing multimedia file
CN104811789B (en) * 2014-01-24 2019-03-22 宇龙计算机通信科技(深圳)有限公司 The management method and device of multimedia file
TWI534618B (en) * 2015-07-13 2016-05-21 群聯電子股份有限公司 Mapping table updating method, memory control circuit unit and memory storage device
US9634947B2 (en) * 2015-08-28 2017-04-25 At&T Mobility Ii, Llc Dynamic jitter buffer size adjustment
CN108011845A (en) * 2016-10-28 2018-05-08 深圳市中兴微电子技术有限公司 A kind of method and apparatus for reducing time delay
CN108259931B (en) * 2018-01-22 2020-06-16 司马大大(北京)智能系统有限公司 Video file playing method and device
CN109151194B (en) * 2018-08-14 2021-03-02 Oppo广东移动通信有限公司 Data transmission method, device, electronic equipment and storage medium
CN109359091B (en) * 2018-09-26 2021-03-26 Oppo广东移动通信有限公司 File management method, device, terminal and computer readable storage medium
CN110620793B (en) * 2019-10-31 2022-03-15 苏州浪潮智能科技有限公司 Method, device and medium for improving audio quality
CN112468841B (en) * 2020-11-26 2023-04-07 Oppo广东移动通信有限公司 Audio transmission method and device, intelligent equipment and computer readable storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107992282A (en) * 2017-11-29 2018-05-04 珠海市魅族科技有限公司 Audio data processing method and device, computer installation and readable storage devices
CN109600700A (en) * 2018-11-16 2019-04-09 珠海市杰理科技股份有限公司 Audio data processing method, device, computer equipment and storage medium

Also Published As

Publication number Publication date
WO2022111111A1 (en) 2022-06-02
CN112468841A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
CN112468841B (en) Audio transmission method and device, intelligent equipment and computer readable storage medium
CN109525853B (en) Live broadcast room cover display method and device, terminal, server and readable medium
CN101001362B (en) Method and terminal of on-line playing flow media
CN112422873B (en) Frame insertion method and device, electronic equipment and storage medium
US5434797A (en) Audio communication system for a computer network
US20230037913A1 (en) Server-side processing method and server for actively initiating dialogue, and voice interaction system capable of initiating dialogue
CN109348464B (en) Data transmission method for low-power-consumption Bluetooth receiving end equipment and receiving end equipment
CN112492372B (en) Comment message display method and device, electronic equipment, system and storage medium
CN111083508A (en) Message processing method and device, electronic equipment and storage medium
CN110784858A (en) Bluetooth device control method and device, electronic device and storage medium
CN109151194A (en) Data transmission method, device, electronic equipment and storage medium
WO2022017007A1 (en) Audio data processing method, server, and storage medium
CN107682752A (en) Method, apparatus, system, terminal device and the storage medium that video pictures are shown
CN110694267A (en) Cloud game implementation method and device
CN109062537A (en) A kind of reduction method, apparatus, medium and the equipment of audio frequency delay
JP2022517562A (en) How to run standalone programs, appliances, devices and computer programs
US11626140B2 (en) Audio data processing method, electronic device, and storage medium
WO2022116709A1 (en) Audio playback method, apparatus, head-mounted display device, and storage medium
CN113961484A (en) Data transmission method and device, electronic equipment and storage medium
CN114639392A (en) Audio processing method and device, electronic equipment and storage medium
CN112511407B (en) Self-adaptive voice playing method and system
CN114615381A (en) Audio data processing method and device, electronic equipment, server and storage medium
CN114040189A (en) Multimedia test method, device, storage medium and electronic equipment
CN115237367A (en) Audio playing method and device, mobile terminal and storage medium
US11876849B2 (en) Media service processing method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant