CN115480728A - Audio playing method and device, terminal equipment and storage medium
- Publication number
- CN115480728A, CN202211230546.7A
- Authority
- CN
- China
- Prior art keywords
- audio
- queue
- frame data
- audio frame
- new
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/165—Management of the audio stream, e.g. setting of volume, audio stream path
Abstract
The present application relates to the field of audio processing, and in particular to an audio playing method and device, a terminal device, and a storage medium. The method comprises the following steps: if a first preset condition is met, creating a new audio queue; acquiring audio frame data to be buffered in the new audio queue, and buffering the audio frame data in the new audio queue; and calling audio frame data to be output from the new audio queue and outputting the audio frame data to be output. The first preset condition comprises any one of the following: no audio frame data is called from the current audio queue; the number of audio frame data remaining in the current audio queue is less than a first preset threshold. The method and the device reduce the probability that sound disappears during audio playback.
Description
Technical Field
The present application relates to the field of audio processing, and in particular, to an audio playing method, an audio playing device, a terminal device, and a storage medium.
Background
In iOS and Mac OS X systems, when playing a continuous piece of audio, the terminal device usually places the audio frame data corresponding to the audio into a buffer queue in playing order, and then calls the audio frame data from the buffer queue on a first-in first-out basis to play the audio.
In the related art, once all audio frame data currently stored in the buffer queue have been played, that is, once the number of audio frame data in the buffer queue reaches 0, audio frame data subsequently written into the buffer queue are no longer played. Sound may therefore disappear during playback, which degrades the user experience.
Disclosure of Invention
In order to solve at least one of the above technical problems, embodiments of the present application provide an audio playing method, an audio playing apparatus, a terminal device, and a storage medium.
In a first aspect, the present application provides an audio playing method, which adopts the following technical solution:
an audio playback method, comprising:
if the first preset condition is met, a new audio queue is created;
acquiring audio frame data to be cached in the new audio queue, and caching the audio frame data in the new audio queue;
calling audio frame data to be output from the new audio queue and outputting the audio frame data to be output;
wherein the first preset condition comprises any one of:
audio frame data are not called from the current audio queue;
the number of the audio frame data left in the current audio queue is smaller than a first preset threshold value.
By adopting this technical solution, when no audio frame data can be called from the current audio queue, or when the number of audio frame data remaining in the current audio queue is less than the first preset threshold, even audio frame data buffered in the current audio queue can no longer be played. Therefore, when either condition is met, a new audio queue is created and the audio frame data to be buffered are temporarily stored in the new audio queue, so that audio frame data can be called from the new audio queue to continue playback. The remaining or subsequent audio frame data can thus be played normally, the situations in which sound disappears are reduced, and the user experience is improved.
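As an illustration of this flow (not part of the claimed solution), the following Swift sketch models the method with a simple in-memory FIFO; the names `AudioFrame`, `AudioFrameQueue`, `AudioPlayer`, `firstPresetConditionMet`, and `recoverPlayback`, as well as the threshold value, are all hypothetical.

```swift
import Foundation

// Hypothetical model of one audio frame (e.g. a block of decoded PCM samples).
struct AudioFrame {
    let index: Int
    let samples: [Int16]
}

// A minimal FIFO "audio queue" as described in the method.
final class AudioFrameQueue {
    private var frames: [AudioFrame] = []
    var remainingCount: Int { frames.count }

    func buffer(_ frame: AudioFrame) { frames.append(frame) }   // cache a frame
    func callFrame() -> AudioFrame? {                           // call (dequeue) a frame
        frames.isEmpty ? nil : frames.removeFirst()
    }
}

final class AudioPlayer {
    var currentQueue = AudioFrameQueue()
    let firstPresetThreshold = 1   // hypothetical value of the first preset threshold

    // First preset condition: no frame could be called, or too few frames remain.
    func firstPresetConditionMet(lastCallFailed: Bool) -> Bool {
        lastCallFailed || currentQueue.remainingCount < firstPresetThreshold
    }

    // Core flow: create a new queue, buffer the pending frames into it,
    // then call frames from it for output; the new queue becomes current.
    func recoverPlayback(pendingFrames: [AudioFrame], output: (AudioFrame) -> Void) {
        let newQueue = AudioFrameQueue()              // create a new audio queue
        pendingFrames.forEach { newQueue.buffer($0) } // cache the frames to be buffered
        currentQueue = newQueue                       // playback continues from the new queue
        while let frame = currentQueue.callFrame() {  // call frames to be output
            output(frame)                             // output (play) the frame
        }
    }
}

let player = AudioPlayer()
// The queue is empty and the last call returned nothing, so the condition is met.
if player.firstPresetConditionMet(lastCallFailed: true) {
    let pending = (31...45).map { AudioFrame(index: $0, samples: []) }
    player.recoverPlayback(pendingFrames: pending) { print("playing frame \($0.index)") }
}
```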
In a possible implementation manner, the obtaining of the audio frame data to be buffered in the new audio queue includes any one of:
if the remaining audio frame data exist in the current audio queue, acquiring the remaining audio frame data and/or the audio frame data to be written into the audio queue as the audio frame data to be cached into the new audio queue;
and acquiring audio frame data to be written into an audio queue as the audio frame data to be cached in the new audio queue.
By adopting this technical solution, on the one hand, when remaining audio frame data exist in the current audio queue, the remaining audio frame data (alone, or together with the audio frame data to be written into an audio queue) are used as the audio frame data to be buffered in the new audio queue, so that the audio can be played continuously and completely, improving the user experience. On the other hand, regardless of whether remaining audio frame data exist in the current audio queue, only the audio frame data to be written into an audio queue may be used as the audio frame data to be buffered in the new audio queue, which saves the overhead of moving audio frame data from the current queue to the new audio queue while still avoiding the disappearance of sound during playback, further improving the user experience.
In another possible implementation, the method further comprises at least one of:
clearing queue creation data of the current audio queue;
and clearing the audio frame data buffered in the current audio queue.
Because the current audio queue can no longer perform audio playing work, on the one hand, audio frame data that have not been called may remain temporarily stored in the current audio queue and still occupy part of the buffer space, and on the other hand, the queue creation data of the current audio queue also occupy part of the resources. Clearing either or both of them therefore releases the corresponding buffer space and resources.
In another possible implementation manner, if the first preset condition is not met, the method further includes:
calling target audio frame data from the current audio queue;
outputting the target audio frame data;
if the first preset condition is not met and a second preset condition is met, cyclically performing the steps of calling target audio frame data from the current audio queue and outputting the target audio frame data, until the second preset condition is not met or the first preset condition is met;
wherein the second preset condition comprises: the total number of audio frame data that has been currently called is less than the total number of received audio frame data.
By adopting this technical solution, when the first preset condition is not met, that is, when audio frame data can be called normally from the current audio queue to play audio, the target audio frame data are called normally from the current queue and output, until the total number of audio frame data that have been called equals the total number of received audio frame data, so that audio frame data are called and output under normal conditions.
In another possible implementation, the creating a new audio queue includes:
determining the queue length of the new audio queue, wherein the queue length is used for representing the maximum amount of audio frame data buffered by the new audio queue;
based on the queue length, a new audio queue is created.
By adopting this technical solution, the queue length of the audio queue determines how many audio frame data the new audio queue can hold; when the utilization rate of the audio queue is low, unnecessary space is wasted. Creating the new audio queue according to a determined queue length therefore helps improve the utilization rate of the audio queue.
In another possible implementation manner, the determining the queue length of the new audio queue includes any one of:
determining the queue length of the new audio queue as a preset length;
determining the queue length of the new audio queue according to the queue length of the current audio queue;
determining the queue length of the new audio queue according to the total amount of the received audio frame data;
and determining the queue length of the new audio queue according to the total number of audio frame data to be cached in the audio queue, wherein the audio queue is the new audio queue.
By adopting this technical solution, when the queue length of the new audio queue is determined, the preset length may be adopted directly as the queue length of the new audio queue; the queue length may be determined according to the queue length of the current audio queue; it may be determined according to the total number of received audio frame data; or it may be determined according to the total number of audio frame data to be buffered in the new audio queue. In other words, four possible implementations are provided for determining the queue length of the new audio queue. Furthermore, determining the queue length according to the total number of received audio frame data, or according to the total number of audio frame data to be buffered, improves the utilization rate of the new audio queue and the accuracy of the created queue length.
In another possible implementation manner, the retrieving audio frame data to be output from the new audio queue includes:
and when the number of the audio frame data cached in the new audio queue is greater than a second preset threshold value, calling the audio frame data to be output from the new audio queue.
By adopting this technical solution, audio frame data are called and output from the new audio queue only when the number of audio frame data buffered in the new audio queue is greater than a certain number. This reduces the situation in which audio playback is interrupted because the number of audio frame data temporarily stored in the new audio queue drops to 0 immediately after calling from the new audio queue has started, and the user experience can be further improved.
In a second aspect, the present application provides an audio playing apparatus, which adopts the following technical solutions:
an audio playback apparatus comprising:
the queue creating module is used for creating a new audio queue when a first preset condition is met;
the data acquisition module is used for acquiring audio frame data to be cached in the new audio queue and caching the audio frame data in the new audio queue;
the data calling module is used for calling audio frame data to be output from the new audio queue and outputting the audio frame data to be output;
wherein the first preset condition comprises any one of:
audio frame data are not called from the current audio queue;
the number of audio frame data remaining in the current audio queue is less than a first preset threshold.
By adopting this technical solution, when no audio frame data can be called from the current audio queue, or when the number of audio frame data remaining in the current audio queue is less than the first preset threshold, even audio frame data buffered in the current audio queue can no longer be played. Therefore, when either condition is met, the queue creating module creates a new audio queue and the data acquisition module temporarily stores the audio frame data to be buffered in the new audio queue, so that the data calling module can call audio frame data from the new audio queue to continue playback. The remaining or subsequent audio frame data can thus be played normally, the situations in which sound disappears are reduced, and the user experience is improved.
In a possible implementation manner, when the data obtaining module obtains the audio frame data to be buffered in the new audio queue, the data obtaining module is specifically configured to:
if the remaining audio frame data exist in the current audio queue, acquiring the remaining audio frame data and/or the audio frame data to be written into the audio queue as the audio frame data to be cached into the new audio queue; or,
and acquiring audio frame data to be written into an audio queue as the audio frame data to be cached in the new audio queue.
In another possible implementation manner, the apparatus further includes a first clearing module and/or a second clearing module, where:
the first clearing module is used for clearing queue creating data of the current audio queue;
and the second clearing module is used for clearing the audio frame data cached in the current audio queue.
In another possible implementation manner, when the first preset condition is not satisfied, the apparatus further includes:
the normal calling module is used for calling target audio frame data from the current audio queue;
the normal data module is used for outputting the target audio frame data;
the normal data module is used for circularly executing the calling of target audio frame data from the current audio queue when the first preset condition is not met and the second preset condition is met, and outputting the target audio frame data until the second preset condition is not met or the first preset condition is met;
wherein the second preset condition comprises: the total number of audio frame data that has been currently called up is less than the total number of received audio frame data.
In another possible implementation manner, when creating a new audio queue, the queue creating module is specifically configured to:
determining the queue length of the new audio queue, wherein the queue length is used for representing the maximum amount of audio frame data buffered by the new audio queue;
based on the queue length, a new audio queue is created.
In another possible implementation manner, when determining the queue length of the new audio queue, the queue creating module is specifically configured to:
determining the queue length of the new audio queue as a preset length; or,
determining the queue length of the new audio queue according to the queue length of the current audio queue; or,
determining the queue length of the new audio queue according to the total number of the received audio frame data; or,
and determining the queue length of the new audio queue according to the total number of audio frame data to be cached in the audio queue, wherein the audio queue is the new audio queue.
In another possible implementation manner, when the data retrieving module retrieves the audio frame data to be output from the new audio queue, the data retrieving module is specifically configured to:
and when the number of the audio frame data cached in the new audio queue is greater than a second preset threshold value, calling the audio frame data to be output from the new audio queue.
In a third aspect, the present application provides a terminal device, which adopts the following technical solution:
a terminal device, the terminal device comprising:
at least one processor;
a memory;
at least one application, wherein the at least one application is stored in the memory and configured to be executed by the at least one processor, the at least one application being configured to perform the audio playing method described above.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer-readable storage medium storing a computer program that can be loaded by a processor to perform the audio playing method described above.
In summary, the present application provides the following beneficial technical effects:
1. When no audio frame data can be called from the current audio queue, or when the number of audio frame data remaining in the current audio queue is less than the first preset threshold, even audio frame data buffered in the current audio queue can no longer be played. Therefore, when either condition is met, a new audio queue is created and the audio frame data to be buffered are temporarily stored in the new audio queue, so that audio frame data can be called from the new audio queue to continue playback, the remaining or subsequent audio frame data can be played normally, the situations in which sound disappears are reduced, and the user experience is improved.
2. When remaining audio frame data exist in the current audio queue, the remaining audio frame data (alone, or together with the audio frame data to be written into an audio queue) are used as the audio frame data to be buffered in the new audio queue, so that the audio can be played continuously and completely, improving the user experience. In addition, regardless of whether remaining audio frame data exist in the current audio queue, only the audio frame data to be written into an audio queue may be used as the audio frame data to be buffered in the new audio queue, which saves the overhead of moving audio frame data from the current queue to the new audio queue while still avoiding the disappearance of sound during playback, further improving the user experience.
Drawings
FIG. 1 is a schematic flowchart illustrating an audio playing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an example of audio playing according to an embodiment of the present application;
FIG. 3 is a block diagram of an audio playback apparatus according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a terminal device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to figures 1-4.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.

While using a terminal device, a user often needs to play audio. The audio may be background music of a game, a specific sound effect, a song, or audio that was recorded and stored in the terminal device in advance, wherein one piece of audio includes at least one audio frame data.
When a piece of audio needs to be played, the terminal device writes the received audio frame data into an audio queue in sequence. Taking a user playing a game as an example, if the background music of scene A in the game, which includes a plurality of audio frame data, needs to be played, the server sends each audio frame data corresponding to the background music to the terminal device in sequence, and the terminal device writes each audio frame data into the audio queue as it is received. The audio queue may be created in advance, before the audio frame data are received, or may be created when the first audio frame data is received; this is not limited in the embodiments of the present application.
The audio frame data arranged in time order thus form the audio queue. While audio frame data continue to be written, the terminal device calls audio frame data from the audio queue in sequence to play the audio, until a call to the audio queue returns no audio frame data, at which point audio playback stops.
However, during playback, writing of audio frame data may be delayed, or a delay may occur when audio frame data are received, so that for a short time no audio frame data are written into the audio queue. In this case, when the terminal device calls audio frame data from the audio queue, no audio frame data may be obtained, that is, the number of audio frame data temporarily stored in the audio queue is 0 at that moment. Audio playback then stops abnormally, and even after audio frame data are written into the audio queue again, the audio is not played any more, so that part of the sound disappears and the user experience is poor.
To solve the problem of sound disappearing in the above situation, a related technical solution inserts a preset number of substitute audio frame data into the audio queue whenever a call finds that the number of audio frame data temporarily stored in the audio queue is 0, so that the next call to the audio queue succeeds. However, the substitute audio frame data are usually blank audio frame data, so when they are called the user hears nothing for a period of time until audio frame data other than blank frames are called again, and the user experience may still be poor.
Therefore, in order to reduce the probability that sound is interrupted or even disappears when an abnormality occurs, and thereby further improve the user experience, an embodiment of the present application provides an audio playing method executed by a terminal device. Referring to fig. 1, the method may include:
step S101, if a first preset condition is met, a new audio queue is created.
Specifically, each time audio frame data have been called from the current audio queue, whether the first preset condition is met is judged. When the first preset condition is met, the current audio queue is no longer used to play audio; instead, a new audio queue is created, and audio frame data are buffered in and called from the new audio queue for playback.
Wherein, the first preset condition comprises: audio frame data are not called from the current audio queue; or the number of the audio frame data left in the current audio queue is smaller than the first preset threshold value.
In a possible implementation manner of the embodiment of the present application, the first preset condition includes that no audio frame data is called from the current audio queue. That is, after audio frame data were last called from the current audio queue and played, audio frame data are called from the current audio queue again, and if no audio frame data are obtained, the first preset condition is determined to be met. For example, the server of a game sends a piece of audio with a length of 45 frames to the terminal device. After several calls, 30 frames of audio frame data have been buffered in the current audio queue, called, and played, so that 0 frames of audio frame data remain in the current audio queue. After the currently called audio frame data finish playing, the 31st frame of audio frame data is delayed in being enqueued; when audio frame data are called again, none can be obtained because the current audio queue is empty, and at this moment it is judged that the first preset condition is met.
In another possible implementation manner of the embodiment of the present application, the first preset condition may further include that the number of audio frame data remaining in the current audio queue is less than a first preset threshold. In this embodiment of the application, the remaining audio frame data are the audio frame data in the current audio queue that have not yet been called; for example, if 10 audio frame data exist in the current audio queue and have not yet been called, these 10 audio frame data are the remaining audio frame data and their number is 10. Specifically, after audio frame data are first called from the current audio queue, the number of remaining audio frame data in the current audio queue is determined in real time. When the number of remaining audio frame data is less than the first preset threshold, the current audio queue cannot provide enough audio frame data for normal calling, that is, there is a high possibility that no audio frame data can be called from the current audio queue, and the first preset condition is therefore considered to be met.
It should be noted that, in the embodiment of the present application, the first preset threshold is not less than 1, and when the first preset threshold is 1, that is, the first preset condition may include: the current audio queue does not contain the remaining audio frame data. That is, the audio frame data cannot be retrieved from the current audio queue at this time.
Further, the first preset threshold may also be determined according to the number of audio frame data called each time, where the first preset threshold may be equal to the number of audio frame data called each time, for example, one frame of audio frame data is called each time, that is, the first preset threshold may be 1; for another example, each time the audio frame data is called up, the number of the called audio frame data is 2, and the first preset threshold may be set to 2. When the amount of audio frame data remaining is insufficient to provide the amount of audio data to be called at one time, i.e., there is not enough audio frame data in the current audio queue for calling, a new audio queue will be created.
Whenever it is detected that the number of audio frame data remaining in the current audio queue is less than the first preset threshold, or that no audio frame data can be called from the current audio queue, a new audio queue is created, and audio frame data are then buffered, called, and played based on the new audio queue.
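A minimal Swift sketch of this condition check follows, with the first preset threshold tied to the number of frames called per fetch as suggested above; the `QueueState` type and all values are illustrative assumptions.

```swift
// Sketch of the first preset condition check; names and values are illustrative.
struct QueueState {
    var remainingFrames: Int        // frames buffered in the current queue but not yet called
    var lastCallReturnedFrame: Bool // whether the most recent call obtained a frame
}

// The first preset threshold may equal the number of frames called per fetch (e.g. 1 or 2).
func firstPresetConditionMet(_ state: QueueState, framesPerCall: Int = 1) -> Bool {
    let firstPresetThreshold = framesPerCall
    // Condition branch 1: no audio frame data was called from the current audio queue.
    if !state.lastCallReturnedFrame { return true }
    // Condition branch 2: remaining frames are fewer than the first preset threshold.
    return state.remainingFrames < firstPresetThreshold
}

// Example from the description: 45 frames sent, 30 buffered and played, frame 31 delayed.
print(firstPresetConditionMet(QueueState(remainingFrames: 0, lastCallReturnedFrame: false)))
// true -> a new audio queue should be created
```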
Step S102, audio frame data to be buffered in the new audio queue are obtained, and the audio frame data are buffered in the new audio queue.
It should be noted that, when the first preset condition is met, the new audio queue may be created first, the audio frame data to be buffered in the new audio queue obtained next, and the audio frame data then buffered in the new audio queue. Alternatively, the audio frame data to be buffered may be obtained first and the new audio queue created afterwards, or the new audio queue may be created while the audio frame data to be buffered are obtained, and the audio frame data are then buffered in the new audio queue. Fig. 1 is merely an example and does not limit the embodiments of the present application.
The audio frame data to be buffered in the new audio queue represent the audio frame data that still need to be played. For example, the audio frame data to be buffered into the new audio queue may include audio frame data not yet buffered in any audio queue. Continuing the example in step S101, 45 frames of audio frame data are received in total and 30 frames have been buffered and called; the remaining 15 frames have not been buffered in the current audio queue, and these 15 frames are the audio frame data to be buffered in the new audio queue.
Specifically, after the new audio queue is created, if the audio frame data to be buffered were still written into the original audio queue, they would not be played, because the original audio queue can no longer perform normal playback. The audio frame data are therefore buffered directly in the new audio queue rather than in the original audio queue, which reduces the probability that received audio frame data cannot be played and that sound disappears.
Step S103, calling audio frame data to be output from the new audio queue and outputting the audio frame data to be output.
Specifically, after audio frame data are buffered in the new audio queue, audio frame data are called from the new audio queue and the called audio frame data are output, so that they are played. Audio frame data are no longer called from, or stored in, the original audio queue, so the situation in which nothing is played after calling from the original audio queue no longer arises; the new audio queue takes over and starts working normally as soon as audio frame data are buffered in it, so the remaining audio frame data can be played normally.
In the embodiment of the application, when no audio frame data can be called from the current audio queue, or when the number of audio frame data remaining in the current audio queue is less than the first preset threshold, even audio frame data buffered in the current audio queue can no longer be played. Therefore, when either of the above conditions is met, a new audio queue is created and the audio frame data to be buffered are temporarily stored in the new audio queue, so that audio frame data can be called from the new audio queue to continue playback, the remaining or subsequent audio frame data can be played normally, the situations in which sound disappears are reduced, and the user experience is improved.
It is noted that, in this embodiment of the present application, the new audio queue may also be created in advance, before it is detected that the first preset condition is met; when it is determined that the first preset condition is met, the pre-created audio queue is directly used as the new audio queue for buffering, calling, and playing audio frame data. For example, an audio queue a and an audio queue b are pre-created in the terminal device, the audio queue a is the current audio queue and the audio queue b is not yet enabled; when it is determined that the first preset condition is met, the audio queue b can be used directly as the new audio queue, which reduces the time consumed in creating a new audio queue, shortens the duration of the audio interruption, and further improves the user experience, as sketched below.
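For illustration only, the following Swift sketch shows the pre-created spare queue being enabled in place of creating a new queue; the `TerminalPlayer` type and its members are hypothetical.

```swift
// Sketch of the pre-created spare queue variant: queue b already exists, so it is
// enabled directly when the first preset condition is met, saving creation time.
struct TerminalPlayer {
    var queueA: [Int] = []      // current audio queue
    var queueB: [Int] = []      // spare audio queue, created in advance but not yet enabled
    var currentIsA = true

    mutating func switchToSpare(pendingFrames: [Int]) {
        if currentIsA {
            queueB = pendingFrames   // buffer the pending frames into queue b
            queueA.removeAll()       // queue a is no longer used
        } else {
            queueA = pendingFrames
            queueB.removeAll()
        }
        currentIsA.toggle()          // the spare queue becomes the current queue
    }
}

var spareDemo = TerminalPlayer()
spareDemo.switchToSpare(pendingFrames: Array(32...45))
print(spareDemo.currentIsA ? "queue a active" : "queue b active")   // "queue b active"
```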
Further, while audio frame data are being called from the new audio queue, if it is determined again that the first preset condition is met, another new audio queue is created (or another pre-created audio queue is selected), the audio frame data to be buffered in that audio queue are obtained and buffered in it, and audio frame data are called from it for output.
Further, the obtaining of the audio frame data to be buffered in the new audio queue may include a first mode or a second mode, wherein,
the method I comprises the following steps: and if the remaining audio frame data exist in the current audio queue, acquiring the remaining audio frame data and/or the audio frame data to be written into the audio queue as the audio frame data to be cached into the new audio queue.
The audio frame data to be written into an audio queue are data that have not been buffered in any audio queue, and the remaining audio frame data are the audio frame data still in the current audio queue when the first preset condition is met. For example, when it is detected that the first preset condition is met, 15 frames have not been buffered, 30 frames have been buffered, 29 frames have been called, and 1 frame of audio frame data still waits in the current audio queue to be called. The 15 unbuffered frames then correspond to the audio frame data to be written, and the 1 frame remaining in the current audio queue corresponds to the remaining audio frame data.
Specifically, in a possible implementation manner, only the audio frame data to be written into an audio queue are used as the audio frame data to be buffered in the new audio queue, and the remaining audio frame data are not acquired. The number of audio frame data remaining in the current audio queue is at most the first preset threshold, which is small; not playing one or a few frames of audio frame data is hardly perceptible to human ears. Therefore, only the audio frame data to be written into an audio queue, that is, the audio frame data received after the new audio queue is established, may be used as the audio frame data to be buffered in the new audio queue.
In another possible implementation manner, both the audio frame data to be written into an audio queue and the audio frame data remaining in the current audio queue are used as the audio frame data to be buffered in the new audio queue. In this embodiment of the application, the remaining audio frame data in the current audio queue are first acquired and buffered in the new audio queue, and then, when audio frame data to be written are received, they are also buffered in the new audio queue. For example, the number of received audio frame data is 10; when the first preset condition is met, 3 frames have been called and output, and one frame of audio frame data still remains in the current audio queue. That frame is obtained from the current audio queue and used as audio frame data to be buffered in the new audio queue, and the remaining 6 frames, that is, the audio frame data to be written into an audio queue, are also used as audio frame data to be buffered in the new audio queue. In this case, both the audio frame data in the current audio queue and the audio frame data to be written into an audio queue are used.
The second method comprises the following steps: and acquiring audio frame data to be written into the audio queue as audio frame data to be cached into the new audio queue.
In a possible implementation manner, when there is no remaining audio frame data in the current audio queue, the audio frame data to be written into the audio queue is directly used as the audio frame data to be buffered in the new audio queue. For example, the number of the remaining audio frame data in the current audio queue is 0, and at this time, only the audio frame data to be written into the audio queue is taken as the audio frame data to be buffered into the new audio queue.
In another possible implementation manner, regardless of whether remaining audio frame data exist in the current audio queue, the step of judging whether remaining audio frame data exist is not executed; immediately after the new audio queue is created, the audio frame data to be written into an audio queue are used directly as the audio frame data to be buffered in the new audio queue. In this embodiment of the application, using the audio frame data to be written directly, without judging whether remaining audio frames exist in the current audio queue, speeds up the resumption of audio playback, shortens the time during which the audio disappears, and improves the user experience.
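The two modes above can be summarized in a short Swift sketch; the `BufferSource` enum and the frame-index representation are illustrative assumptions, and frames are identified only by their indices.

```swift
// Sketch of the two modes for selecting the audio frame data to buffer into the
// new audio queue.
enum BufferSource {
    case remainingAndToBeWritten   // mode one: leftover frames plus frames not yet written
    case toBeWrittenOnly           // mode two: only frames not yet written to any queue
}

func framesToBufferIntoNewQueue(remainingInCurrentQueue: [Int],
                                toBeWritten: [Int],
                                source: BufferSource) -> [Int] {
    switch source {
    case .remainingAndToBeWritten:
        // Keeps playback continuous and complete.
        return remainingInCurrentQueue + toBeWritten
    case .toBeWrittenOnly:
        // Skips the few leftover frames and avoids copying from the old queue.
        return toBeWritten
    }
}

// Example from the description: frame 31 remains, frames 32-45 are not yet written.
print(framesToBufferIntoNewQueue(remainingInCurrentQueue: [31],
                                 toBeWritten: Array(32...45),
                                 source: .remainingAndToBeWritten).count)   // 15
```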
Further, when creating a new audio queue, the length of the new audio queue may be the same as or different from the length of the original audio queue, wherein the manner of creating the new audio queue may specifically include step S1011 (not shown in the figure) and step S1012 (not shown in the figure), in which:
step S1011 determines the queue length of the new audio queue.
Wherein the queue length is used to characterize the maximum number of audio frame data buffered by the new audio queue.
Specifically, the queue length of the audio queue determines the maximum amount of audio frame data that the audio queue can accommodate, for example, setting the queue length of the audio queue to 45, which indicates that the audio queue can buffer only 45 audio frame data when no audio frame data is called.
After the queue length of the new audio queue is determined, a new audio queue will be created based on the queue length.
Step S1012 creates a new audio queue based on the queue length.
Specifically, while audio frame data are being buffered, audio frame data are also continuously called from the audio queue; writing and calling proceed in parallel, so within one period the number of audio frame data written may be greater than, equal to, or less than the number called. The length of the audio queue therefore determines the maximum number of audio frame data the queue can buffer, and further determines the fluency of playback during this cyclic writing and calling.
Before establishing a new audio queue, the mode for determining the queue length of the new audio queue includes any one of a mode one, a mode two, a mode three and a mode four, wherein:
the first method is as follows: and determining the queue length of the new audio queue as a preset length.
The preset length is a preset queue length. For example, the queue length is set to 45. And when the queue length of the audio queue is determined, directly taking the preset queue length as the queue length of a new audio queue to create the new audio queue.
The second method comprises the following steps: and determining the queue length of the new audio queue according to the queue length of the current audio queue.
Specifically, the queue length of the new audio queue may be determined with reference to the queue length of the current audio queue. In this embodiment of the application, the length corresponding to the unused part of the buffer queue may be subtracted from the queue length of the current audio queue to obtain the queue length of the new audio queue; or, when the queue length of the current audio queue is small and smooth buffering is difficult to achieve, part of the queue length may be added on the basis of the queue length of the current audio queue to serve as the queue length of the new audio queue, so as to improve the smoothness of audio playing; or the queue length of the new audio queue may be set directly to the queue length of the current audio queue.
For example, if the length of the current audio queue is 45 and the unused length in the buffer queue is 10, the length of the new audio queue may be 45 or 35; further, if the length to be added is 15, the length of the new audio queue may also be 60.
The third method comprises the following steps: and determining the queue length of the new audio queue according to the total number of the received audio frame data.
Specifically, the total amount of the received audio frame data is the total amount of the audio frame data corresponding to the audio sent by the external device to the terminal device, for example, if the server sends a section of audio with an audio length of 45 frames to the terminal device, the total amount of the received audio frame data is 45, and at this time, it is determined that the queue length of the new audio queue is 45.
Specifically, in this embodiment of the present application, the total amount of the received audio frame data is sent to the terminal device by the external device, where the total amount of the received audio frame data may be carried in the audio frame data and sent to the terminal device along with the audio frame data, or the external device may be sent to the terminal device separately, which is not limited in this embodiment of the present application.
When the total number of received audio frame data is small, a queue length that is too long means that the redundant part takes a long time to create, which lengthens the interval between the interruption and the resumption of the audio. For example, if the total number of received audio frame data is 5 and the queue length of the new audio queue is set to 45, the redundant portion corresponding to 40 frames occupies a large amount of creation time.
When the total amount of the received audio frame data is large, if the queue length of the audio queue is small, the audio playing is easily interrupted by other influences such as a network. For this reason, the influence of the total number of currently received audio frame data needs to be considered when establishing the audio queue.
Specifically, each total number of received audio frame data corresponds to a first preset length, and the preset queue lengths corresponding to different total numbers may be the same or different. For example, the total number of received audio frame data is 35, the corresponding first preset length is 50, the total number of received audio frame data is 40, the corresponding first preset length is 50, the total number of received audio frame data is 10, and the corresponding first preset length is 20.
When determining the queue length of the new audio queue, taking a first preset length corresponding to the total amount of the received audio frame data as the queue length of the new audio queue, for example, if the corresponding first preset length is 50, determining the first preset length of the new audio queue to be 50.
Further, since part of the audio frame data have already been called and output from the current audio queue, the new audio queue only needs to buffer, call, and output the audio frame data that have not yet been called. For this reason, the queue length of the new audio queue may be determined according to the total number of audio frame data that have not been called; the specific determination manner is detailed in the fourth mode below.
The method four comprises the following steps: and determining the queue length of the new audio queue according to the total amount of the audio frame data to be cached in the audio queue.
Wherein the audio queue is a new audio queue.
Specifically, the audio frame data to be buffered to the new audio queue includes audio frame data that has not been written to any audio queue and/or audio frame data remaining in the current audio queue, for example, the number of audio frame data that has not been written to any audio queue is 10, the number of audio frame data remaining in the current audio queue is 1, the number of audio frame data that has been output is 9, and the number of audio frame data to be written to the new audio queue may be 10 (including only the number of audio frame data that has not been written to any audio queue), and may also be 11 (including the number of audio frame data that has not been written to any audio queue and the number of audio frame data remaining in the current audio queue).
The total number of audio frame data to be buffered in the new audio queue is first determined, and the queue length of the new audio queue is then determined according to the correspondence between that total number and the queue length: each total number of audio frame data to be buffered in the new audio queue corresponds to a second preset length, and the second preset length corresponding to the total number is used as the queue length of the new audio queue. For example, if the total number of audio frames to be written to the new audio queue is 10 and the corresponding preset queue length is 15, the queue length of the created new audio queue is 15.
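The four modes for determining the queue length can be sketched as follows in Swift; the combining rule used for mode two and the margin used in mode four are illustrative assumptions rather than values prescribed by the description.

```swift
// Sketch of the four modes for determining the new queue's length.
enum QueueLengthMode {
    case preset(Int)                                                          // mode one
    case fromCurrentQueue(currentLength: Int, unusedLength: Int, extra: Int)  // mode two
    case fromTotalReceived(Int)                                               // mode three
    case fromFramesToBuffer(Int)                                              // mode four
}

func newQueueLength(_ mode: QueueLengthMode) -> Int {
    switch mode {
    case .preset(let length):
        return length                                   // e.g. a fixed 45
    case .fromCurrentQueue(let current, let unused, let extra):
        return current - unused + extra                 // e.g. 45 - 10 = 35, or 45 + 15 = 60
    case .fromTotalReceived(let total):
        return total                                    // e.g. 45 received frames -> length 45
    case .fromFramesToBuffer(let total):
        return total + 5                                // e.g. 10 frames to buffer -> length 15
    }
}

print(newQueueLength(.fromTotalReceived(45)))           // 45
print(newQueueLength(.fromFramesToBuffer(10)))          // 15
```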
After a new audio queue is created according to the queue length, the audio frame data to be buffered in the new audio queue are buffered in it and called for output based on the new audio queue. In this embodiment of the application, the calling, from the new audio queue, of the audio frame data to be output in step S103 may specifically include: when the number of audio frame data buffered in the new audio queue is greater than a second preset threshold, calling the audio frame data to be output from the new audio queue.
The second preset threshold is a preset value, for example, the second preset threshold is set to 5.
Specifically, if calling and playing began as soon as a single audio frame data was buffered in the newly created audio queue, it would be very likely that no further audio frame data could be called immediately afterwards, and yet another new audio queue would have to be established, wasting resources and causing instability. Therefore, after the new audio queue is created, audio frame data to be output are called from it for playback only when the number of audio frame data buffered in the new audio queue is greater than the second preset threshold. For example, after the new audio queue is created, 2 audio frame data are obtained from the current audio queue and buffered in the new audio queue; when 4 more audio frame data have been received and buffered in the new audio queue, 6 audio frame data exist in the new audio queue, which is greater than the second preset threshold, and at this time the audio frame data to be output are called from the new audio queue and playback continues.
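A minimal Swift sketch of this gating on the second preset threshold follows; the `GatedQueue` type is hypothetical, and the threshold value 5 is taken from the example above.

```swift
// Sketch of gating calls on the second preset threshold.
final class GatedQueue {
    private var frames: [Int] = []
    private let secondPresetThreshold: Int
    private var started = false

    init(secondPresetThreshold: Int = 5) {
        self.secondPresetThreshold = secondPresetThreshold
    }

    func buffer(_ frame: Int) { frames.append(frame) }

    // Frames are only handed out once the buffered count has exceeded the threshold.
    func callFrameIfReady() -> Int? {
        if !started {
            guard frames.count > secondPresetThreshold else { return nil }
            started = true
        }
        return frames.isEmpty ? nil : frames.removeFirst()
    }
}

let newQueue = GatedQueue()
(1...4).forEach { newQueue.buffer($0) }
print(newQueue.callFrameIfReady() as Any)   // nil: only 4 frames buffered, threshold not exceeded
(5...6).forEach { newQueue.buffer($0) }
print(newQueue.callFrameIfReady() as Any)   // Optional(1): 6 > 5, so playback may resume
```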
Further, in order to start calling and playing audio frame data as soon as possible after the new audio queue is created, so as to shorten the audio interruption and further improve the user experience, before audio frame data to be output are called from the new audio queue once the number of audio frame data buffered in it exceeds the second preset threshold, the method further includes:
and determining audio frame data waiting for copying according to the audio frame data remaining in the current audio queue.
Specifically, the audio frame data waiting to be copied are determined from the remaining audio frame data according to their order: a preset number of audio frame data, where the preset number is greater than or equal to 1, are selected in sequence from the remaining audio frame data. For example, if the preset number is 1, the first of the remaining audio frame data is the audio frame data waiting to be copied.
After the audio frame data remaining in the current audio queue are cached in the new audio queue, if the number of the audio frame data in the new audio queue is lower than a second preset threshold value, the audio frame data waiting for being copied are copied, and the copied audio frame data are cached in the new audio queue.
Specifically, after the audio frame data remaining in the current audio queue have been buffered in the new audio queue, if the number of audio frame data in the new audio queue is still lower than the second preset threshold, the new audio queue has to wait for further audio frame data to be buffered, and audio frame data cannot be called and played until the number reaches the second preset threshold.
In order to save this waiting time, shorten the audio interruption, and improve the fluency of audio playing, after the remaining audio frame data have been buffered in the new audio queue, if the number of audio frame data in the new audio queue is lower than the second preset threshold, the audio frame data waiting to be copied are copied and the copies are also buffered in the new audio queue, which reduces the time spent waiting for playback to resume and thus helps improve the user experience.
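One possible reading of this padding step is sketched below in Swift: copies of the frame waiting to be copied are appended until the second preset threshold is reached; the function name and the fill-to-threshold rule are assumptions.

```swift
// Sketch of padding the new queue with copies of the frame waiting to be copied,
// so that the second preset threshold is reached sooner.
func padNewQueue(remainingFromCurrentQueue: [Int], secondPresetThreshold: Int) -> [Int] {
    var newQueue = remainingFromCurrentQueue
    // Frame waiting to be copied: the first of the remaining frames (preset number = 1).
    guard let frameToCopy = remainingFromCurrentQueue.first else { return newQueue }
    while newQueue.count < secondPresetThreshold {
        newQueue.append(frameToCopy)   // the copied frame is also buffered in the new queue
    }
    return newQueue
}

// Example: 2 leftover frames and a threshold of 5 -> three copies of frame 31 are added.
print(padNewQueue(remainingFromCurrentQueue: [31, 32], secondPresetThreshold: 5))
// [31, 32, 31, 31, 31]
```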
In the above manner, while audio frame data to be output are called from the new audio queue and played, some remaining audio frame data may still be buffered in the original audio queue, or no audio frame data may remain but the queue creation data generated when the original audio queue was created have not yet been cleared. In order to free the memory space of the terminal device occupied by such unrelated data and release resources, in step S102, after the audio frame data to be buffered in the new audio queue are acquired, the method further includes step Sa1 (not shown in the figure) and/or step Sa2 (not shown in the figure), where:
and step Sa1, clearing queue creating data of the current audio queue.
The queue creation data are the data generated when the audio queue is created. Deleting the queue creation data from the terminal device clears the corresponding audio queue. Since buffering and output of audio frame data are no longer performed based on the current audio queue, the current audio queue is no longer necessary, and deleting it saves memory space on the terminal device.
Step Sa2, clearing the audio frame data buffered in the current audio queue.
Specifically, when the first preset condition is met, earlier audio frame data may still be buffered in the current audio queue without having been called. These audio frame data occupy the queue space of the current audio queue, and they either have already been buffered in the new audio queue or no longer need to be called and played, so they amount to unrelated data that nevertheless occupy memory space on the terminal device. Clearing the audio frame data buffered in the current audio queue therefore helps save memory space on the terminal device.
If the audio frame data to be buffered in the new audio queue do not include the audio frame data buffered in the current audio queue, the audio frame data buffered in the current audio queue can be cleared as soon as the new audio queue is created; if they do include the audio frame data buffered in the current audio queue, the audio frame data buffered in the current audio queue are cleared after the audio frame data to be buffered in the new audio queue have been obtained.
Note that, when step Sa1 and step Sa2 are both included, step Sa2 may be executed before step Sa1 or at the same time as step Sa1; this is not limited in the embodiments of the present application. Clearing the queue creation data corresponding to the original audio queue together with the audio frame data buffered in it saves memory space to a greater extent.
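A minimal Swift sketch of steps Sa1 and Sa2 follows; the `OldAudioQueue` type and its fields are illustrative placeholders for the queue creation data and buffered frames.

```swift
// Sketch of steps Sa1 and Sa2: clear the old queue's creation data and buffered
// frames to free memory.
final class OldAudioQueue {
    var creationData: [String: Any]? = ["format": "pcm", "length": 45]
    var bufferedFrames: [Int] = [31]

    func clearBufferedFrames() { bufferedFrames.removeAll() }   // step Sa2
    func clearCreationData() { creationData = nil }             // step Sa1
}

var oldQueue: OldAudioQueue? = OldAudioQueue()
oldQueue?.clearBufferedFrames()   // Sa2 may run before or together with Sa1
oldQueue?.clearCreationData()
oldQueue = nil                    // dropping the last reference releases the queue itself
```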
The above embodiments describe how to continue playing the remaining audio when it is detected that the first preset condition is met, that is, when it is determined that the audio is about to disappear. When the first preset condition is not met, that is, when the audio is played normally based on the current audio queue, the method may further include step Sb1 (not shown in the figure), step Sb2 (not shown in the figure), and step Sb3 (not shown in the figure), where:
and step Sb1, calling target audio frame data from the current audio queue.
Specifically, when the audio is played normally, a preset number of audio frame data are called from the current audio queue each time and played, where the preset number may be one or at least two. After each call, whether the first preset condition is met is judged; if it is not met, audio frame data continue to be called from the current audio queue, and the audio frame data being called are the target audio frame data. For example, with the preset number being 1, the 30th audio frame data is called and output from the current audio queue; it is determined that the first preset condition is not met, so the 31st audio frame data is called from the current audio queue, and the 31st audio frame data is the target audio frame data of this call. When audio frame data are called again, the 32nd audio frame data becomes the target audio frame data.
Step Sb2, outputting the target audio frame data.
Specifically, after the target audio frame data is retrieved from the current audio queue, the target audio frame data is output to play the audio corresponding to the target audio frame data.
Step Sb3, if the first preset condition is not met and the second preset condition is met, cyclically performing the calling of target audio frame data from the current audio queue and the outputting of the target audio frame data, until the second preset condition is not met or the first preset condition is met.
Wherein the second preset condition comprises: the total number of audio frame data that has been currently called up is less than the total number of received audio frame data.
After the target audio frame data is called, it is judged whether this segment of audio has been completely called, that is, whether the second preset condition is met.
Specifically, when the first audio frame data is acquired, the total number of audio frame data corresponding to the audio segment is carried along with it and acquired by the terminal device; this total number represents how many audio frame data of the segment are to be played. For example, if the server sends a segment of audio with a length of 45 frames, 45 is the total number of received audio frame data.
After the target audio frame data is called and output, if the total number of audio frame data called so far is smaller than the total number of received audio frame data, that is, the second preset condition is met, it indicates that some audio frame data still remain to be called and played after the target audio frame data is output. If the number of audio frame data called so far equals the total number of received audio frame data, it indicates that all received audio frame data have been played once the target audio frame data is output. For example, suppose 45 audio frame data are received in total. After the 44th audio frame data is called, the number of called audio frame data is 44, which is less than 45, so the second preset condition is met; after the 44th audio frame data is played, the 45th audio frame data is called from the current audio queue, and the number of called audio frame data becomes 45, which equals 45, so the second preset condition is no longer met.
If it is detected that the first preset condition is not met and the second preset condition is met, that is, the segment has not finished playing and playback is proceeding normally, target audio frame data continues to be cyclically called from the current audio queue. For example, if the total number of received audio frame data is 45 and, after the 30th audio frame data is called, 1 audio frame data (the 31st) still exists in the current audio queue, the 31st audio frame data is then called and output from the current audio queue, and the loop continues. The loop ends in one of two ways. If the first preset condition is detected to be met, a new audio queue is created and audio frame data is cached in and called from the new audio queue; for example, after the 31st audio frame data is called, no audio frame data remains in the current audio queue, so the first preset condition is met and a new audio queue is created. Alternatively, the loop ends when the second preset condition is no longer met, that is, when all audio frame data have been played; for example, after the 45th audio frame data is called, the total number of called audio frame data equals 45, the second preset condition is no longer met, the playing of the current audio segment is complete, the loop ends, and no more audio frame data is called from the current audio queue.
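The Sb1-Sb3 loop described above can be sketched as follows, purely as a hedged illustration; the threshold value, counters, and helper names are assumptions, and a real implementation would hand frames to an audio device rather than print them.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal sketch of the Sb1-Sb3 playback loop under the first/second preset conditions.
// All names (PlaybackLoop, FIRST_PRESET_THRESHOLD, output) are assumptions for illustration.
public class PlaybackLoop {

    static final int FIRST_PRESET_THRESHOLD = 1; // first preset condition: remaining < threshold
    static int totalReceived = 45;               // total number of received audio frame data
    static int totalCalled = 0;                  // total number of audio frame data called so far

    static boolean firstPresetConditionMet(Queue<byte[]> currentQueue) {
        // No frame can be called, or fewer frames remain than the first preset threshold.
        return currentQueue.isEmpty() || currentQueue.size() < FIRST_PRESET_THRESHOLD;
    }

    static boolean secondPresetConditionMet() {
        return totalCalled < totalReceived;      // some received frames remain unplayed
    }

    static void output(byte[] frame) {
        // Placeholder for handing the frame to the audio device.
        System.out.println("playing frame of " + frame.length + " bytes");
    }

    public static void main(String[] args) {
        Queue<byte[]> currentQueue = new ArrayDeque<>();
        for (int i = 0; i < 45; i++) {
            currentQueue.add(new byte[1024]);    // pretend all 45 frames were buffered
        }

        // Sb1 + Sb2 + Sb3: call and output target frames while playback is normal.
        while (!firstPresetConditionMet(currentQueue) && secondPresetConditionMet()) {
            byte[] target = currentQueue.poll(); // Sb1: call target audio frame data
            totalCalled++;
            output(target);                      // Sb2: output target audio frame data
        }

        if (firstPresetConditionMet(currentQueue) && secondPresetConditionMet()) {
            // Frames remain to be played but the current queue has run dry:
            // the method would create a new audio queue at this point.
            System.out.println("first preset condition met: create a new audio queue");
        } else {
            System.out.println("segment finished playing");
        }
    }
}
```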
Further, when the number of remaining audio frame data in the current audio queue is smaller than the first preset threshold but larger than zero, the number of times a new audio queue must be created can be reduced, so as to lessen the adverse effect of interruptions on the user. Specifically, when it is determined that the number of remaining audio frame data in the current audio queue is smaller than the second preset threshold, the audio frame data to be copied currently is determined according to the remaining audio frame data in the current audio queue, and the audio frame data to be copied currently is copied and buffered in the current audio queue.
The audio frame data to be copied currently can be determined according to the arrangement order of the remaining audio frame data. In one specific manner, the first-ranked audio frame data among the remaining audio frame data is used as the audio frame data to be copied currently. In another specific manner, a preset number of audio frame data is determined sequentially according to the arrangement order, and each of the determined audio frame data is used as the audio frame data to be copied currently. For example, if the number of remaining audio frame data is 2 and the preset number is 1, the first-ranked audio frame data among the remaining audio frame data is taken as the audio frame data to be copied currently.
In addition, each of the remaining audio frame data may also be regarded as the audio frame data to be copied currently. For example, if the number of remaining audio frame data is 2, specifically audio frame data a1 and audio frame data a2, then both a1 and a2 are the audio frame data to be copied currently.
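As a hedged illustration of the copy-and-buffer idea, the sketch below copies the first-ranked remaining frame back into the current queue; the threshold value and all identifiers are assumptions for this example only.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Sketch: when few frames remain, copy the first remaining frame and buffer the copy
// back into the current queue so playback is not starved while new data arrives.
public class FrameCopy {

    static void copyFirstRemainingFrame(Deque<byte[]> currentQueue) {
        byte[] first = currentQueue.peekFirst();          // first-ranked remaining frame
        if (first != null) {
            byte[] copy = Arrays.copyOf(first, first.length);
            currentQueue.addLast(copy);                   // buffer the copy in the current queue
        }
    }

    public static void main(String[] args) {
        Deque<byte[]> currentQueue = new ArrayDeque<>();
        currentQueue.add(new byte[]{1, 2, 3});            // audio frame data a1
        currentQueue.add(new byte[]{4, 5, 6});            // audio frame data a2

        int secondPresetThreshold = 3;                    // illustrative threshold
        if (currentQueue.size() < secondPresetThreshold) {
            copyFirstRemainingFrame(currentQueue);
        }
        System.out.println("frames buffered: " + currentQueue.size()); // prints 3
    }
}
```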
Further, consider the case where the terminal device calls a preset number of audio frame data each time, the preset number is greater than or equal to two, and the first preset threshold is set equal to the preset number. If the terminal device judges that the number of audio frame data in the current audio queue is not greater than the preset number, the step of creating a new audio queue may be skipped, and the preset number, that is, the number of audio frame data called each time, is re-determined instead, so as to reduce the number of times a new audio queue is created. For example, if the preset number is 2 and 2 frames of audio frame data remain un-output in the current audio queue, which equals the first preset threshold (preset to 2), the number of audio frame data to be called each time is re-determined instead of creating a new audio queue.
When re-determining the number of audio frame data to be called each time, the number of audio frame data remaining in the current audio queue is taken into account. For example, if 2 audio frame data remain, the preset number may be set to 1; if 4 audio frame data remain, the preset number may be re-determined as 2.
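A small sketch of re-determining the per-call number is given below; the halving rule reproduces the two examples above (2 becomes 1, 4 becomes 2) but is only an assumed interpretation, not a rule stated in the disclosure.

```java
// Sketch: re-determine the per-call batch size instead of creating a new queue.
// Halving the remaining count matches the examples in the text (2 -> 1, 4 -> 2),
// but this rule is an assumption, not prescribed by the disclosure.
public class BatchSizeAdjust {

    static int redeterminePresetNumber(int remainingFrames) {
        // Call at most half of what is left so at least two more calls remain possible.
        return Math.max(1, remainingFrames / 2);
    }

    public static void main(String[] args) {
        int presetNumber = 2;          // frames called per batch
        int firstPresetThreshold = 2;  // equal to the preset number in this example
        int remainingFrames = 2;       // frames still buffered in the current queue

        if (remainingFrames <= presetNumber && remainingFrames <= firstPresetThreshold) {
            presetNumber = redeterminePresetNumber(remainingFrames);
            System.out.println("new preset number: " + presetNumber); // prints 1
        }
    }
}
```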
Based on the above embodiment, the present application describes an audio playing process through a specific application scenario with reference to fig. 2. A server sends an audio segment with a length of 45 frames to a terminal device on which a client is installed, so the total number of audio frame data received by the terminal device is 45. The terminal device has pre-created an audio queue, audio queue 1. By the time 5 audio frame data have been buffered in audio queue 1, the terminal device has also finished calling 5 audio frame data, so 0 audio frame data remain in audio queue 1. After the fifth audio frame data is called and output, it is determined that the number of audio frame data buffered in audio queue 1 is 0, so the first preset condition is met; at this time a new audio queue, audio queue 2, is created, and audio queue 1 is deleted. When the sixth audio frame data is received, it is buffered in audio queue 2, and once the number of audio frame data in audio queue 2 reaches the second preset threshold, calling from audio queue 2 begins. If, during subsequent playback, it is again determined that no audio frame data remains in audio queue 2, that is, the first preset condition is met again, a further new audio queue, audio queue 3, may be created in the same way and audio queue 2 deleted. Playback continues until all 45 audio frame data have been called and output, at which point the second preset condition is no longer met and no further audio frame data is called.
The foregoing embodiments describe the audio playing method from the perspective of the method flow; the following embodiments describe an audio playing apparatus from the perspective of virtual modules or virtual units, as detailed below.
Referring to fig. 3, an audio playback apparatus 300 includes:
a queue creating module 301, configured to create a new audio queue when a first preset condition is met;
a data obtaining module 302, configured to obtain audio frame data to be cached in a new audio queue, and cache the audio frame data in the new audio queue;
a data retrieving module 303, configured to retrieve audio frame data to be output from the new audio queue and output the audio frame data to be output;
wherein the first preset condition comprises any one of:
audio frame data are not called from the current audio queue;
the number of audio frame data remaining in the current audio queue is less than a first preset threshold.
By adopting the technical scheme, when no audio frame data can be called from the current audio queue, or when the number of audio frame data remaining in the current audio queue is smaller than the first preset threshold, it indicates that audio frame data cached in the current audio queue cannot be played. Therefore, when either condition is met, the queue creating module 301 creates a new audio queue, and the data obtaining module 302 caches the audio frame data to be buffered in the new audio queue, so that the data retrieving module 303 can call audio frame data from the new audio queue and continue playing. The remaining or subsequent audio frame data can thus be played normally, the occurrence of sound disappearance is reduced, and the use experience of the user is improved.
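To make the module split concrete, a hedged sketch of the three modules as plain classes follows; the class names, method signatures, and the use of an ArrayDeque are assumptions, since the apparatus is defined here only in functional terms.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the apparatus as three cooperating modules (queue creating, data obtaining,
// data retrieving). Names and structure are illustrative, not prescribed by the disclosure.
public class AudioPlaybackDevice {

    static class QueueCreatingModule {
        Deque<byte[]> createNewQueue(int capacity) {
            return new ArrayDeque<>(capacity);            // the new audio queue
        }
    }

    static class DataObtainingModule {
        void cache(Deque<byte[]> newQueue, byte[] frame) {
            newQueue.addLast(frame);                      // cache frame data in the new queue
        }
    }

    static class DataRetrievingModule {
        byte[] retrieveAndOutput(Deque<byte[]> newQueue) {
            byte[] frame = newQueue.pollFirst();          // call the frame data to be output
            if (frame != null) {
                System.out.println("outputting " + frame.length + " bytes");
            }
            return frame;
        }
    }

    public static void main(String[] args) {
        QueueCreatingModule creator = new QueueCreatingModule();
        DataObtainingModule obtainer = new DataObtainingModule();
        DataRetrievingModule retriever = new DataRetrievingModule();

        Deque<byte[]> newQueue = creator.createNewQueue(8); // first preset condition assumed met
        obtainer.cache(newQueue, new byte[1024]);
        retriever.retrieveAndOutput(newQueue);
    }
}
```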
In a possible implementation manner of this embodiment of the present application, when the data obtaining module 302 obtains audio frame data to be cached in a new audio queue, the data obtaining module is specifically configured to:
if the remaining audio frame data exist in the current audio queue, acquiring the remaining audio frame data and/or the audio frame data to be written into the audio queue as the audio frame data to be cached into the new audio queue; or,
and acquiring audio frame data to be written into the audio queue as the audio frame data to be cached in the new audio queue.
In a possible implementation manner of this embodiment of the present application, the apparatus 300 further includes a first clearing module and/or a second clearing module, where:
the first clearing module is used for clearing queue creating data of the current audio queue;
and the second clearing module is used for clearing the audio frame data cached in the current audio queue.
In a possible implementation manner of this embodiment of the present application, when the first preset condition is not satisfied, the apparatus 300 further includes:
the normal calling module is used for calling target audio frame data from the current audio queue;
the normal data module is used for outputting target audio frame data;
the normal data module is used for circularly executing the calling of the target audio frame data from the current audio queue when the first preset condition is not met and the second preset condition is met, and outputting the target audio frame data until the second preset condition is not met or the first preset condition is met;
wherein the second preset condition comprises: the total number of audio frame data that has been currently called up is less than the total number of received audio frame data.
In a possible implementation manner of the embodiment of the present application, when creating a new audio queue, the queue creating module 301 is specifically configured to:
determining the queue length of a new audio queue, wherein the queue length is used for representing the maximum number of audio frame data cached in the new audio queue;
based on the queue length, a new audio queue is created.
In a possible implementation manner of the embodiment of the present application, when determining the queue length of a new audio queue, the queue creating module 301 is specifically configured to:
determining the queue length of a new audio queue as a preset length; or,
determining the queue length of a new audio queue according to the queue length of the current audio queue; or,
determining the queue length of a new audio queue according to the total number of the received audio frame data; or,
and determining the queue length of a new audio queue according to the total amount of the audio frame data to be cached in the audio queue, wherein the audio queue is the new audio queue.
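The four length-determination options above can be illustrated with a hedged sketch such as the one below; the concrete return values and the assumption that each option maps one input directly to the queue length are illustrative choices, not requirements of the disclosure.

```java
// Sketch of the four queue-length strategies listed above. The specific values and
// formulas are assumptions; the disclosure only names the inputs each strategy uses.
public class QueueLengthPolicy {

    static final int PRESET_LENGTH = 16;

    static int byPresetLength() {
        return PRESET_LENGTH;                             // option 1: fixed preset length
    }

    static int byCurrentQueueLength(int currentQueueLength) {
        return currentQueueLength;                        // option 2: reuse the current queue length
    }

    static int byTotalReceived(int totalReceivedFrames) {
        return totalReceivedFrames;                       // option 3: fit all received frame data
    }

    static int byFramesToBuffer(int framesToBufferInNewQueue) {
        return framesToBufferInNewQueue;                  // option 4: fit the frames to be cached
    }

    public static void main(String[] args) {
        System.out.println(byPresetLength());             // 16
        System.out.println(byCurrentQueueLength(10));     // 10
        System.out.println(byTotalReceived(45));          // 45
        System.out.println(byFramesToBuffer(40));         // 40
    }
}
```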
In a possible implementation manner of the embodiment of the present application, when the data retrieving module 303 retrieves audio frame data to be output from a new audio queue, the data retrieving module is specifically configured to:
and when the number of the audio frame data cached in the new audio queue is greater than a second preset threshold value, calling the audio frame data to be output from the new audio queue.
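A hedged sketch of this threshold-gated start of retrieval follows; the threshold value, polling structure, and names are assumptions made for illustration.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: only start calling frames from the new queue once more than a second preset
// threshold of frames has been buffered, so playback does not immediately run dry again.
public class ThresholdGatedRetrieval {

    static final int SECOND_PRESET_THRESHOLD = 3;

    static byte[] retrieveIfReady(Deque<byte[]> newQueue) {
        if (newQueue.size() > SECOND_PRESET_THRESHOLD) {
            return newQueue.pollFirst();   // call the audio frame data to be output
        }
        return null;                       // keep buffering; do not output yet
    }

    public static void main(String[] args) {
        Deque<byte[]> newQueue = new ArrayDeque<>();
        for (int i = 1; i <= 5; i++) {
            newQueue.add(new byte[512]);
            byte[] frame = retrieveIfReady(newQueue);
            System.out.println("after buffering " + i + " frame(s): "
                    + (frame == null ? "still buffering" : "output one frame"));
        }
    }
}
```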
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The embodiment of the present application further describes a terminal device from the perspective of a physical apparatus. As shown in fig. 4, the terminal device 400 includes a processor 401 and a memory 403, with the processor 401 coupled to the memory 403, for example via a bus 402. Optionally, the terminal device 400 may further include a transceiver 404. It should be noted that in practical applications the transceiver 404 is not limited to one, and the structure of the terminal device 400 does not constitute a limitation on the embodiment of the present application.
The processor 401 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof, and may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 401 may also be a combination of computing functions, for example a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The memory 403 may be a ROM (Read-Only Memory) or other type of static storage device capable of storing static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device capable of storing information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 403 is used for storing application program code for executing the solution of the present application, and execution is controlled by the processor 401. The processor 401 is configured to execute the application program code stored in the memory 403 to implement the aspects illustrated in the foregoing method embodiments.
The terminal device includes, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The terminal device shown in fig. 4 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
The present application provides a computer-readable storage medium on which a computer program is stored; when the program runs on a computer, the computer can execute the corresponding content of the foregoing method embodiments. In the embodiment of the present application, when no audio frame data can be called from the current audio queue, or when the number of audio frame data remaining in the current audio queue is smaller than the first preset threshold, it indicates that audio frame data cached in the current audio queue cannot be played. Therefore, when either condition is met, a new audio queue is created, and the audio frame data to be cached is buffered in the new audio queue, so that audio frame data can be called from the new audio queue to continue playing. The remaining or subsequent audio frame data can thus be played normally, the occurrence of sound disappearance is reduced, and the use experience of the user is improved.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not bound to a strict order and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and which need not be performed sequentially but may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principle of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.
Claims (10)
1. An audio playing method, comprising:
if the first preset condition is met, a new audio queue is created;
acquiring audio frame data to be cached in the new audio queue, and caching the audio frame data in the new audio queue;
calling audio frame data to be output from the new audio queue, and outputting the audio frame data to be output;
wherein the first preset condition comprises any one of:
audio frame data are not called from the current audio queue;
the number of the audio frame data left in the current audio queue is smaller than a first preset threshold value.
2. The method according to claim 1, wherein the obtaining audio frame data to be buffered in the new audio queue comprises any one of:
if the current audio queue has the remaining audio frame data, acquiring the remaining audio frame data and/or the audio frame data to be written into the audio queue as the audio frame data to be cached into the new audio queue;
and acquiring audio frame data to be written into an audio queue as the audio frame data to be cached in the new audio queue.
3. The method of claim 1, further comprising at least one of:
clearing the queue creation data of the current audio queue;
and clearing the audio frame data buffered in the current audio queue.
4. The method according to claim 1, wherein if the first preset condition is not satisfied, the method further comprises:
calling target audio frame data from the current audio queue;
outputting the target audio frame data;
if the first preset condition is not met and the second preset condition is met, circularly executing the calling of target audio frame data from the current audio queue and outputting the target audio frame data until the second preset condition is not met or the first preset condition is met;
wherein the second preset condition comprises: the total number of audio frame data that has been currently called is less than the total number of received audio frame data.
5. The method of claim 1, wherein creating the new audio queue comprises:
determining the queue length of the new audio queue, wherein the queue length is used for representing the maximum number of audio frame data buffered in the new audio queue;
based on the queue length, a new audio queue is created.
6. The method of claim 5, wherein determining the queue length of the new audio queue comprises any one of:
determining the queue length of the new audio queue as a preset length;
determining the queue length of the new audio queue according to the queue length of the current audio queue;
determining the queue length of the new audio queue according to the total number of the received audio frame data;
and determining the queue length of the new audio queue according to the total number of audio frame data to be cached in the audio queue, wherein the audio queue is the new audio queue.
7. The method of claim 1, wherein the retrieving the audio frame data to be output from the new audio queue comprises:
and when the number of the audio frame data cached in the new audio queue is greater than a second preset threshold value, calling the audio frame data to be output from the new audio queue.
8. An audio playback apparatus, comprising:
the queue creating module is used for creating a new audio queue when a first preset condition is met;
the data acquisition module is used for acquiring audio frame data to be cached in the new audio queue and caching the audio frame data in the new audio queue;
the data calling module is used for calling audio frame data to be output from the new audio queue and outputting the audio frame data to be output;
wherein the first preset condition comprises any one of:
audio frame data are not called from the current audio queue;
the number of audio frame data remaining in the current audio queue is less than a first preset threshold.
9. A terminal device, characterized in that the terminal device comprises:
at least one processor;
a memory;
at least one application, wherein the at least one application is stored in the memory and configured to be executed by the at least one processor, the at least one application being configured to: perform the audio playing method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed in a computer, it causes the computer to execute the audio playback method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211230546.7A CN115480728A (en) | 2022-09-30 | 2022-09-30 | Audio playing method and device, terminal equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211230546.7A CN115480728A (en) | 2022-09-30 | 2022-09-30 | Audio playing method and device, terminal equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115480728A true CN115480728A (en) | 2022-12-16 |
Family
ID=84394534
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211230546.7A Pending CN115480728A (en) | 2022-09-30 | 2022-09-30 | Audio playing method and device, terminal equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115480728A (en) |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5628013A (en) | Apparatus and method for allocating processing time in a frame-based computer system | |
TWI335512B (en) | Technique for using memory attributes | |
WO2021238265A1 (en) | File pre-reading method, apparatus and device, and storage medium | |
CN111427859B (en) | Message processing method and device, electronic equipment and storage medium | |
KR20060129873A (en) | Method for executing garbage collection of mobile terminal | |
US11907164B2 (en) | File loading method and apparatus, electronic device, and storage medium | |
CN111913807B (en) | Event processing method, system and device based on multiple storage areas | |
CN109599133B (en) | Language audio track switching method and device, computer equipment and storage medium | |
US6993598B2 (en) | Method and apparatus for efficient sharing of DMA resource | |
CN110928574A (en) | Microcontroller, interrupt processing chip, device and interrupt processing method | |
CN108650306A (en) | A kind of game video caching method, device and computer storage media | |
CN115480728A (en) | Audio playing method and device, terminal equipment and storage medium | |
US20080320176A1 (en) | Prd (physical region descriptor) pre-fetch methods for dma (direct memory access) units | |
US20150312369A1 (en) | Checkpoints for media buffering | |
CN116737084A (en) | Queue statistics method and device, electronic equipment and storage medium | |
US20070079109A1 (en) | Simulation apparatus and simulation method | |
CN113986849A (en) | Music caching method and device, electronic equipment and storage medium | |
CN110784756B (en) | File reading method and device, computing equipment and storage medium | |
CN110825652B (en) | Method, device and equipment for eliminating cache data on disk block | |
CN114064681A (en) | Configuration parameter updating method, device and equipment | |
US7213107B2 (en) | Dedicated cache memory | |
JP7073737B2 (en) | Communication log recording device, communication log recording method, and communication log recording program | |
JP2021189719A (en) | Information processing apparatus and reception processing program | |
JPH0728677A (en) | File management system of storage device | |
KR100815618B1 (en) | Apparatusand method for reproducing moving picture of external storage in mobile communication terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB03 | Change of inventor or designer information | |

Inventor after: Li Zhongtao; Zhao Qiang; Li Xin; Guo Jianjun
Inventor before: Li Zhongtao; Zhao Qiang; Zhang Hexiang; Li Xin; Guo Jianjun