CN115633215A - Audio playing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN115633215A
CN115633215A (application CN202211181989.1A)
Authority
CN
China
Prior art keywords
audio
playing
data
played
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211181989.1A
Other languages
Chinese (zh)
Inventor
刘卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202211181989.1A
Publication of CN115633215A
Legal status: Pending

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure relates to an audio playing method, an audio playing apparatus, an electronic device and a storage medium, applied to a client, including: acquiring audio data to be played of a preset page and live-stream audio data; creating an audio context component when the client satisfies the compatibility condition of the audio context interface; calling the audio context component to convert the audio data to be played into buffer data and play the buffer data, thereby achieving the playing effect of the audio data to be played; and calling a preset audio playing interface, which is independent of the audio context component, to play the live-stream audio data. In this way, the audio playing processes of the preset page and the live stream inside the client do not interfere with each other, which reduces cases where audio playback on the preset page blocks the live stream. Audio elements can therefore be placed in preset pages in live-broadcast scenarios, enriching the presentation form of the preset page, enhancing interest, and improving user experience.

Description

Audio playing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of audio processing, and in particular, to an audio playing method and apparatus, an electronic device, and a storage medium.
Background
In a live broadcast service scenario, an H5 (HTML5, HyperText Markup Language 5.0) page can be displayed to the user. Elements such as dynamic effects, games and vibration in the H5 page can make viewing more engaging for the user and satisfy the user's expectation of visual impact.
In the prior art, an H5 page is mostly presented in a half-screen web view inside a live broadcast room, and its audio data is usually played with an Audio component. Because the same Audio component also plays the audio data of the live stream, audio playback on the H5 page can block the live stream during broadcasting, seriously affecting the live-broadcast function.
Therefore, in current live broadcast service scenarios, H5 pages usually contain no audio elements, so their presentation form is monotonous and the presentation effect is poor.
Disclosure of Invention
The present disclosure provides an audio playing method, an audio playing apparatus, an electronic device, and a storage medium, so as to at least solve the problems in the related art that, in a live broadcast service scenario, an H5 page usually contains no audio element, its presentation form is monotonous, and its presentation effect is poor. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided an audio playing method, applied to a client, including:
acquiring audio data to be played and live streaming audio data of a preset page;
creating an audio context component if the client meets the compatibility condition of the audio context interface;
calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data to realize the playing effect of the audio data to be played;
and calling a preset audio playing interface to play the live streaming audio data, wherein the preset audio playing interface is independent from the audio context component.
Optionally, the invoking the audio context component, converting the audio data to be played into buffer data, and playing the buffer data includes:
calling the audio context component, and creating an audio conversion module and an audio playing module;
converting the audio data to be played into buffer data by using the audio conversion module;
and connecting the buffer data to an outlet of the audio context component by utilizing the audio playing module so as to play the buffer data.
Optionally, the method further includes:
and in response to the unloading instruction of the preset page, stopping running the audio conversion module and the audio playing module, and deleting the buffer data to close the audio context component.
Optionally, the method further includes:
and under the condition that the client does not meet the compatibility condition of the audio context interface, if the client is in a live broadcast room state, suspending playback of the audio data to be played after the audio data to be played is acquired.
Optionally, the method further includes:
if the client is not in the live broadcast room state, after the audio data to be played is acquired, calling the preset audio playing interface to play the audio data to be played.
Optionally, the method further includes:
and responding to the unloading instruction of the preset page, stopping playing the audio data to be played, and closing the preset audio playing interface.
Optionally, the invoking the audio context component, converting the audio data to be played into buffer data, and playing the buffer data includes:
in response to a first instruction for switching the preset page to a foreground, calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data;
and in response to a second instruction for switching the preset page to the background, pausing the playing of the buffer data.
Optionally, the invoking the audio context component, converting the audio data to be played into buffer data, and playing the buffer data includes:
and calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data according to a preset offset.
Optionally, the method further includes:
after the preset interaction operation is monitored, setting an interaction variable as a preset value;
and under the condition that the interaction variable is a preset value, executing the step of creating the audio context component under the condition that the client side meets the compatible condition of the audio context interface.
According to a second aspect of the embodiments of the present disclosure, there is provided an audio playing apparatus applied to a client, including:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is configured to acquire audio data to be played and live streaming audio data of a preset page;
a creating unit configured to perform creating an audio context component in a case where the client complies with a compatible condition of an audio context interface;
the first playing unit is configured to execute calling of the audio context component, convert the audio data to be played into buffer data, and play the buffer data to realize the playing effect of the audio data to be played;
and the second playing unit is configured to execute calling of a preset audio playing interface, and play the live streaming audio data, wherein the preset audio playing interface and the audio context component are independent.
Optionally, the first playing unit is specifically configured to perform:
calling the audio context component, and creating an audio conversion module and an audio playing module;
converting the audio data to be played into buffer data by using the audio conversion module;
and connecting the buffer data to an outlet of the audio context component by utilizing the audio playing module so as to play the buffer data.
Optionally, the apparatus further comprises:
and the first closing unit is configured to execute an unloading instruction responding to the preset page, stop running the audio conversion module and the audio playing module, and delete the buffer data so as to close the audio context component.
Optionally, the apparatus further comprises:
and the third playing unit is configured to execute, under the condition that the client does not meet the compatible condition of the audio context interface, if the client is in a live broadcast state, pause playing the audio data to be played after acquiring the audio data to be played.
Optionally, the third playing unit is further configured to perform:
if the client is not in a live broadcast state, after the audio data to be played is obtained, the preset audio playing interface is called to play the audio data to be played.
Optionally, the apparatus further comprises:
and the second closing unit is configured to execute an unloading instruction responding to the preset page, stop playing the audio data to be played and close the preset audio playing interface.
Optionally, the first playing unit is configured to perform:
in response to a first instruction for switching the preset page to a foreground, calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data;
and in response to a second instruction for switching the preset page to the background, pausing the playing of the buffer data.
Optionally, the first playing unit is configured to perform:
and calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data according to a preset offset.
Optionally, the apparatus further comprises:
the interaction unit is configured to set an interaction variable to a preset value after monitoring a preset interaction operation; and under the condition that the interaction variable is a preset value, executing the step of creating the audio context component under the condition that the client side meets the compatible condition of the audio context interface.
According to a third aspect of the embodiments of the present disclosure, there is provided an audio playback electronic device, including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the audio playing method according to the first aspect.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium, wherein when instructions of the computer-readable storage medium are executed by a processor of an audio playing electronic device, the audio playing electronic device is enabled to execute the audio playing method according to the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer programs/instructions which, when executed by a processor, implement the audio playing method according to the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
acquiring audio data to be played and live streaming audio data of a preset page; creating an audio context component under the condition that the client side meets the compatible condition of the audio context interface; calling an audio context component, converting audio data to be played into buffer data, and playing the buffer data to realize the playing effect of the audio data to be played; and calling a preset audio playing interface to play the audio data of the live streaming, wherein the preset audio playing interface is independent from the audio context component.
Therefore, the client plays the audio data to be played of the preset page through the audio context component and plays the live-stream audio data through the preset audio playing interface. Because the audio context component and the preset audio playing interface are independent of each other, the audio playing processes of the preset page and the live stream do not interfere with each other, which reduces cases where audio playback on the preset page blocks the live stream. Consequently, in live broadcast service scenarios, audio elements can be set in the preset page, enriching the presentation form of the preset page, enhancing interest, and improving user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow chart illustrating an audio playback method according to an exemplary embodiment.
Fig. 2 is a schematic diagram illustrating a playback logic for audio data to be played back according to an exemplary embodiment.
FIG. 3 is a diagram illustrating a design of an audio context component in accordance with an exemplary embodiment.
FIG. 4 is a diagram illustrating invocation of a component in accordance with an exemplary embodiment.
Fig. 5 is a block diagram illustrating an audio playback device according to an example embodiment.
FIG. 6 is a block diagram illustrating an electronic device for audio playback in accordance with an exemplary embodiment.
Fig. 7 is a block diagram illustrating an apparatus for audio playback in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in other sequences than those illustrated or described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an audio playing method applied to a client according to an exemplary embodiment. As shown in fig. 1, the audio playing method includes the following steps.
In step S11, audio data to be played and live stream audio data of a preset page are obtained.
In some scenarios, an H5 page can be displayed to the user during a live broadcast service. The H5 page contains elements such as dynamic effects, games and vibration, which make viewing more engaging and satisfy the user's expectation of visual impact. The H5 page is mostly presented in a half-screen web view inside a live broadcast room, in which case the client needs to play the audio data of the H5 page and the audio data of the live stream at the same time. The audio data to be played acquired in this step is the audio data of the H5 page, and the live-stream audio data is the audio data in the live stream; either may be audio data in any form, which is not limited in this disclosure.
The client may acquire the audio data to be played in different manners. For example, it may be requested from the server via an XMLHttpRequest issued by the preset page, or the server may push the audio data to be played automatically after the client accesses the preset page; this is not specifically limited.
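The patent text itself contains no code, but the XMLHttpRequest path described above might look like the following sketch. The function and callback names are illustrative, not part of the disclosure; the XHR constructor is injected so the same logic can be exercised outside a browser.

```javascript
// Sketch: fetch the preset page's to-be-played audio as an ArrayBuffer.
// In a browser, pass the global XMLHttpRequest as XHRCtor.
function fetchAudioData(url, XHRCtor, onLoaded, onError) {
  const xhr = new XHRCtor();
  xhr.open('GET', url, true);
  xhr.responseType = 'arraybuffer'; // raw bytes, ready for later decoding
  xhr.onload = function () {
    if (xhr.status >= 200 && xhr.status < 300) {
      onLoaded(xhr.response); // hand the bytes to the audio pipeline
    } else {
      onError(new Error('HTTP ' + xhr.status));
    }
  };
  xhr.onerror = function () { onError(new Error('network error')); };
  xhr.send();
}
```

In a browser this would be called as `fetchAudioData('/audio.mp3', XMLHttpRequest, ...)`, with the URL being whatever resource the preset page declares.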
In step S12, in the case where the client complies with the compatibility condition of the audio context interface, an audio context component is created.
In this step, the compatibility between the client and the audio context interface needs to be determined. It can be understood that, depending on the client system's settings and the application-scenario limits of the audio context interface, the client and the audio context interface may be incompatible; if the client does not satisfy the compatibility condition of the audio context interface, it cannot call that interface. The audio context interface can be represented by the AudioContext API, a component of the Web Audio API that represents an audio-processing graph built from audio modules. The audio context component controls the creation of the functional modules it contains as well as audio processing and decoding; before any other interface of the Web Audio API is used, an audio context component must be created, and all subsequent operations take place within that audio context environment.
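One plausible way to express the compatibility condition is plain feature detection, sketched below. The global object is passed in explicitly; the `webkit` prefix check reflects older WebKit browsers and is an assumption about the check's exact form, not something the disclosure specifies.

```javascript
// Sketch: resolve the AudioContext constructor the client supports, if any.
// Older WebKit-based browsers expose it as webkitAudioContext.
function getAudioContextCtor(globalObj) {
  return globalObj.AudioContext || globalObj.webkitAudioContext || null;
}

// The client "satisfies the compatibility condition" when a constructor exists.
function isAudioContextCompatible(globalObj) {
  return getAudioContextCtor(globalObj) !== null;
}
```

In a browser one would call `isAudioContextCompatible(window)` and, when it returns true, create the component with `new (getAudioContextCtor(window))()`.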
The step of creating the audio context component may be performed after the client acquires the audio data to be played, that is, after the preset page is acquired and it is determined that the preset page has the corresponding audio data to be played, the audio context component is created, otherwise, if the preset page is not acquired currently or the preset page does not include the audio data to be played, the audio context component does not need to be created, thereby saving system resources; or, the audio context component may be created in advance after the client enters the live broadcast state, so that the created audio context component may be directly called after the audio data to be played of the preset page is acquired, thereby improving the response speed of the system.
In one implementation, when the client does not meet the compatibility condition of the audio context interface: if the client is in a live broadcast state, playback of the audio data to be played is suspended after the audio data is acquired. If the client is not in a live broadcast state, after the audio data to be played is acquired, the preset audio playing interface is called and an audio playing component is created; the audio playing component then plays the audio data to be played.
That is to say, when the client does not meet the compatibility condition of the audio context interface, it may first be detected whether the client is in the live broadcast state. If it is, the preset audio playing interface (Audio) is already playing the live-stream audio data; the interface then keeps its original state, continues playing the live-stream audio data, and does not play the audio data to be played. If the client is not in the live broadcast state, the preset audio playing interface is idle at that moment, so a preset audio playing component can be called and loaded with the audio data to be played of the preset page to realize audio playback.
Therefore, on the one hand, when the preset audio playing interface is playing the live-stream audio data it does not also have to play the audio data to be played, which reduces audio blocking; on the other hand, when the preset audio playing interface is not playing live-stream audio data, it can be called to play the audio data to be played, which broadens the scenarios in which the preset page's audio can be played.
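The fallback decision described in the preceding paragraphs can be captured as a small pure function; the route names below are illustrative labels, not terminology from the disclosure.

```javascript
// Sketch of the fallback decision for the preset page's audio:
//  - compatible client             -> play through an audio context component
//  - incompatible, in live room    -> mute the page (the preset Audio interface
//                                     is busy with the live-stream audio)
//  - incompatible, outside a room  -> the preset Audio interface is idle, reuse it
function resolvePlaybackRoute(isCompatible, isInLiveRoom) {
  if (isCompatible) return 'audio-context';
  return isInLiveRoom ? 'mute' : 'audio-element';
}
```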
Based on the above embodiment, the client may also, in response to the unloading instruction of the preset page, stop playing the audio data to be played and close the preset audio playing interface. In this way, after the preset page is unloaded, the audio data to be played is destroyed, avoiding memory leaks.
In one implementation, the client may set the interaction variable to a preset value after monitoring the preset interaction operation; and under the condition that the interaction variable is a preset value, executing the step of creating the audio context component under the condition that the client side meets the compatible condition of the audio context interface.
That is to say, the audio playing scheme in the present disclosure is executed only after the preset interactive operation has been monitored, where the preset interactive operation may be any human-computer interaction. This improves the security of audio playback and reduces resource leakage.
For example, a client based on iOS may require at least one human-computer interaction before the audio playing scheme in the present disclosure is executed, while a client based on Android may impose no such limitation.
For example, before the compatibility between the client and the audio context interface is determined, a hasUserAction variable of the system may be obtained; its value is usually initialized to true on Android and to false on iOS. A touchstart (touch) event is then monitored as the preset interactive operation, and when a touchstart event is detected the hasUserAction value is set to true, after which the audio playing scheme in the present disclosure may be executed.
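The interaction gate above could be sketched as follows; the function names are illustrative, and the platform strings are assumptions standing in for however the client detects its operating system.

```javascript
// Sketch of the interaction gate: on Android the variable starts as true;
// on iOS it starts as false and flips to true only after a touchstart event,
// so the audio context is created no earlier than the first user gesture.
function createInteractionGate(platform) {
  return { hasUserAction: platform === 'android' };
}

function onTouchStart(gate) {
  gate.hasUserAction = true; // preset interactive operation observed
}

function mayCreateAudioContext(gate) {
  return gate.hasUserAction === true;
}
```

In a browser this would be wired up with something like `document.addEventListener('touchstart', () => onTouchStart(gate), { once: true })`.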
In step S13, an audio context component is called, the audio data to be played is converted into buffer data, and the buffer data is played, so as to achieve the playing effect of the audio data to be played.
In this step, an audio context component may be called to play the audio data to be played, where the audio context component is an AudioContext instance created from the audio context interface, and the buffer data may be data in buffer format.
Specifically, calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data includes: calling the audio context component to create an audio conversion module and an audio playing module; converting the audio data to be played into buffer data with the audio conversion module; and connecting the buffer data to the outlet of the audio context component with the audio playing module, so as to play the buffer data.
For example, first, an audio conversion module and an audio playing module may be created through the createBufferSource function of the AudioContext instance; then, buffer-format data can be designated as the container of the audio conversion module, and the audio data to be played is converted into buffer-format data through the decodeAudioData function of the AudioContext instance; finally, the buffer data is connected to the outlet of the AudioContext instance through its pipeline capability, i.e., by calling the connect function, and playback is realized through the start function.
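The decode-and-play pipeline in the preceding paragraph could look like the sketch below. The context is passed in so the same logic runs against a real AudioContext or a stub; the callback form of decodeAudioData is used, and the function name is illustrative.

```javascript
// Sketch: decode the fetched bytes into an AudioBuffer, hang it on a buffer
// source node, connect the node to the context's outlet (destination), and
// start playback from the beginning.
function playBufferData(ctx, arrayBuffer, onStarted) {
  ctx.decodeAudioData(arrayBuffer, function (audioBuffer) {
    const source = ctx.createBufferSource(); // the "audio playing module"
    source.buffer = audioBuffer;             // buffer-format container
    source.connect(ctx.destination);         // pipe to the context outlet
    source.start(0);                         // begin playback now
    if (onStarted) onStarted(source);
  });
}
```

In a browser this would be called as `playBufferData(new AudioContext(), bytes)` with the ArrayBuffer obtained from the XMLHttpRequest step.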
Based on the above embodiment, the client may, in response to the unloading instruction of the preset page, stop running the audio conversion module and the audio playing module and delete the buffer data, so as to close the audio context component. In this way, after the preset page is unloaded, the audio data to be played is destroyed, avoiding memory leaks.
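The teardown on unload might be sketched as below; `state` is an illustrative bag holding the objects created at play time, not a structure named in the disclosure.

```javascript
// Sketch of the unload teardown: stop the source node, drop the decoded
// buffer so it can be garbage-collected, and close the audio context.
function teardownAudio(state) {
  if (state.source) {
    try { state.source.stop(0); } catch (e) { /* already stopped */ }
    state.source.disconnect();
    state.source = null;      // stop running the playing module
  }
  state.buffer = null;        // delete the buffer data
  if (state.ctx && typeof state.ctx.close === 'function') {
    state.ctx.close();        // close the audio context component
    state.ctx = null;
  }
}
```

In a browser this would typically run from a `pagehide` or unload handler of the preset page.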
In one implementation, invoking an audio context component, converting audio data to be played into buffer data, and playing the buffer data includes: in response to a first instruction for switching a preset page to a foreground, calling an audio context component, converting audio data to be played into buffer data, and playing the buffer data; and in response to a second instruction for switching the preset page to the background, pausing the playing of the buffer data.
That is to say, in the present disclosure, the client can automatically control playing and pausing of the audio data to be played based on the scene: when the preset page is switched to the foreground, the client starts playing the audio data to be played; when the preset page is switched to the background, the client pauses it. Audio playback is thus more flexible and reasonable, improving user experience.
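One plausible realization of this foreground/background switch ties the page's visibility state to the context's suspend/resume capability; the function names are illustrative.

```javascript
// Sketch: derive the desired playback state from page visibility and apply
// it by resuming or suspending the audio context.
function shouldPlayInState(visibilityState) {
  return visibilityState === 'visible'; // foreground -> play, background -> pause
}

function applyVisibility(ctx, visibilityState) {
  if (shouldPlayInState(visibilityState)) {
    ctx.resume();  // first instruction: preset page switched to foreground
  } else {
    ctx.suspend(); // second instruction: preset page switched to background
  }
}
```

In a browser this would hang off `document.addEventListener('visibilitychange', () => applyVisibility(ctx, document.visibilityState))`.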
In another implementation manner, invoking an audio context component, converting audio data to be played into buffer data, and playing the buffer data includes: and calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data according to the preset offset.
That is to say, by adjusting the preset offset, the starting point of playback of the audio data to be played can be controlled. In this way, during loop playback, successive passes of the audio data join more smoothly, achieving seamless looping and further improving user experience.
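The offset mechanism plausibly maps onto the second argument of the buffer source node's start function, which positions the read head inside the decoded buffer. The helper below, with illustrative names, wraps the elapsed time by the buffer duration to pick a matching restart point for a loop.

```javascript
// Sketch: compute a preset offset for seamless looped playback by wrapping
// the elapsed time into the buffer's duration.
function loopOffsetSeconds(elapsedSeconds, bufferDurationSeconds) {
  if (bufferDurationSeconds <= 0) return 0;
  return elapsedSeconds % bufferDurationSeconds;
}

function startWithOffset(source, offsetSeconds) {
  // start(when, offset): when = 0 means "now"; offset positions the read head
  source.start(0, offsetSeconds);
}
```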
In step S14, a preset audio playing interface is called to play the live streaming audio data, and the preset audio playing interface and the audio context component are independent from each other.
In this step, the preset audio playing interface is the Audio playing interface. The audio context component and the preset audio playing interface are independent of each other, so the audio playing processes of the preset page and the live stream do not interfere with each other, and the preset page's audio plays consistently both inside and outside the live broadcast room.
Fig. 2 is a schematic diagram illustrating the playing logic of the audio data to be played in the present disclosure. First, the compatibility between the client and the audio context interface is detected. When the client supports the audio context interface, an audio context component is created, the audio data to be played of the preset page is acquired through an XMLHttpRequest request, and the acquired audio data is converted into buffer-format data. Then an audio playing module is created, the buffer-format data is designated as its container, and the buffer data is connected to the outlet of the audio context component through the pipeline capability, thereby realizing audio playback. When the client does not support the audio context interface, it is detected whether the client is in a live broadcast state: if so, the audio data to be played is not played and the preset page is muted; if not, a preset audio playing component can be created and called to load the audio data to be played, thereby realizing audio playback.
Fig. 3 is a design schematic of the audio context component in the present disclosure. By introducing a preset component tool, a caller may invoke the pipeline tools in the component to obtain audio resources, set the play mode, and implement the fallback ("bottom-of-pocket") policy. The fallback policy is the processing policy for the preset page's audio data to be played when the client is incompatible with the audio context interface: if the client is in a live broadcast state, playback of the audio data to be played is suspended after the audio data is acquired; if the client is not in a live broadcast state, the preset audio playing interface is called after the audio data to be played is acquired, an audio playing component is created, and the audio playing component then plays the audio data to be played.
Fig. 4 is a schematic diagram of invoking the components. After the audio context component is introduced, it needs to be registered to obtain an instance of the audio context component, and invocation of the audio context component is then implemented based on the parameters passed between components.
It can be seen from the above that, in the technical scheme provided by the embodiments of the present disclosure, the client plays the audio data to be played of the preset page through the audio context component and plays the live-stream audio data through the preset audio playing interface. Since the audio context component and the preset audio playing interface are independent of each other, the audio playing processes of the preset page and the live stream do not interfere with each other, and cases where audio playback on the preset page blocks the live stream are reduced.
Fig. 5 is a block diagram of an audio playing apparatus according to an exemplary embodiment, applied to a client, the apparatus including:
the device comprises an acquisition unit, a display unit and a display unit, wherein the acquisition unit is configured to acquire audio data to be played and live streaming audio data of a preset page;
a creating unit configured to perform creating an audio context component in a case where the client complies with a compatible condition of an audio context interface;
the first playing unit is configured to execute calling of the audio context component, convert the audio data to be played into buffer data, and play the buffer data to realize the playing effect of the audio data to be played;
and the second playing unit is configured to execute calling of a preset audio playing interface, and play the live streaming audio data, wherein the preset audio playing interface and the audio context component are independent.
In one implementation, the first playing unit is specifically configured to perform:
calling the audio context component, and creating an audio conversion module and an audio playing module;
converting the audio data to be played into buffer data by using the audio conversion module;
and connecting the buffer data to an outlet of the audio context component by utilizing the audio playing module so as to play the buffer data.
In one implementation, the apparatus further includes:
a first closing unit configured to, in response to an unloading instruction for the preset page, stop running the audio conversion module and the audio playing module and delete the buffer data, so as to close the audio context component.
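The first closing unit's teardown can be sketched the same way. `stop()` on a source node and `close()` on an AudioContext are real Web Audio methods; the minimal interfaces and the function name are our assumptions.

```typescript
// Illustrative structural types for the objects being torn down.
interface StoppableSource {
  buffer: unknown;
  stop(): void;
  disconnect(): void;
}

interface ClosableContext {
  close(): Promise<void>;
}

// On the page's unloading instruction: stop the playing module, drop
// the buffer data, and close the audio context component.
function teardownPageAudio(
  ctx: ClosableContext,
  source: StoppableSource
): Promise<void> {
  source.stop();        // stop running the playing module
  source.disconnect();
  source.buffer = null; // delete the buffer data
  return ctx.close();   // close the audio context component
}
```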
In one implementation, the apparatus further includes:
a third playing unit configured to, when the client does not meet the compatibility condition of the audio context interface and the client is in a live broadcast state, pause playing the audio data to be played after the audio data to be played is acquired.
In one implementation, the third playing unit is further configured to:
if the client is not in a live broadcast state, call the preset audio playing interface to play the audio data to be played after the audio data to be played is acquired.
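The fallback policy of the third playing unit reduces to a small decision: without the audio context interface, page audio would have to share the preset interface with the live stream, so it is played only when no live broadcast is active. A sketch (function and type names are ours):

```typescript
type FallbackAction = "pause" | "play-via-preset-interface";

// Fallback when the client fails the audio context compatibility
// check: during a live broadcast the page audio is paused so it
// cannot contend with the live stream on the preset interface;
// otherwise the preset interface is free to play it.
function fallbackAction(isLiveBroadcast: boolean): FallbackAction {
  return isLiveBroadcast ? "pause" : "play-via-preset-interface";
}
```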
In one implementation, the apparatus further includes:
a second closing unit configured to, in response to an unloading instruction for the preset page, stop playing the audio data to be played and close the preset audio playing interface.
In one implementation, the first playing unit is configured to:
call the audio context component, convert the audio data to be played into buffer data, and play the buffer data in response to a first instruction for switching the preset page to the foreground;
and pause playing the buffer data in response to a second instruction for switching the preset page to the background.
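One plausible way to realize the foreground/background behaviour, assuming a Web Audio context: `AudioContext.suspend()` and `resume()` pause and restart the context's output clock, so pausing the buffer data does not lose the playback position. The wiring below is our sketch, not code from the disclosure.

```typescript
// Illustrative subset of AudioContext used for pause/resume.
interface SuspendableContext {
  suspend(): Promise<void>;
  resume(): Promise<void>;
}

// First instruction (page switched to foreground): play/continue the
// buffer data. Second instruction (page switched to background):
// pause it without losing the playback position.
function onPageForegroundChange(
  ctx: SuspendableContext,
  inForeground: boolean
): Promise<void> {
  return inForeground ? ctx.resume() : ctx.suspend();
}
```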
In one implementation, the first playing unit is configured to:
call the audio context component, convert the audio data to be played into buffer data, and play the buffer data from a preset offset.
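Playing the buffer data "according to a preset offset" matches the standard `AudioBufferSourceNode.start(when, offset)` signature: start immediately (`when = 0`) but begin `offset` seconds into the buffer. A sketch with an illustrative interface of our own:

```typescript
// Illustrative subset of AudioBufferSourceNode.
interface OffsetStartSource {
  start(when?: number, offset?: number): void;
}

// Begin playback immediately, but from `presetOffsetSeconds` into the
// buffer data rather than from its beginning.
function playFromPresetOffset(
  source: OffsetStartSource,
  presetOffsetSeconds: number
): void {
  source.start(0, presetOffsetSeconds);
}
```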
In one implementation, the apparatus further includes:
an interaction unit configured to set an interaction variable to a preset value after a preset interaction operation is detected, wherein the step of creating the audio context component when the client meets the compatibility condition of the audio context interface is executed only when the interaction variable equals the preset value.
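The interaction unit is plausibly a workaround for browser autoplay policies, under which an AudioContext may not start producing sound before a user gesture. A minimal sketch of the gating (variable and function names are ours):

```typescript
// The "preset value" the interaction variable is set to; the concrete
// value is our assumption.
const PRESET_VALUE = 1;

let interactionVariable = 0;

// Called from a click/touch handler: record that the preset
// interaction operation has happened.
function onPresetInteraction(): void {
  interactionVariable = PRESET_VALUE;
}

// The audio context component is created only once the interaction
// variable holds the preset value AND the compatibility check passes.
function mayCreateAudioContext(meetsCompatibility: boolean): boolean {
  return interactionVariable === PRESET_VALUE && meetsCompatibility;
}
```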
FIG. 6 is a block diagram illustrating an electronic device for audio playback, according to an example embodiment.
In an exemplary embodiment, a computer-readable storage medium including instructions, such as a memory including instructions executable by a processor of an electronic device to perform the above-described method, is also provided. Alternatively, the computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided which, when run on a computer, causes the computer to perform the audio playing method described above.
Fig. 7 is a block diagram illustrating an apparatus 800 for audio playback according to an example embodiment.
For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, or the like.
Referring to fig. 7, the apparatus 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the device 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 806 provides power to the various components of the device 800. The power component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the apparatus 800. For example, the sensor assembly 814 may detect the open/closed state of the apparatus 800 and the relative positioning of components, such as the display and keypad of the apparatus 800. The sensor assembly 814 may also detect a change in position of the apparatus 800 or a component of the apparatus 800, the presence or absence of user contact with the apparatus 800, the orientation or acceleration/deceleration of the apparatus 800, and a change in temperature of the apparatus 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communication between the apparatus 800 and other devices in a wired or wireless manner. The apparatus 800 may access a wireless network based on a communication standard, such as WiFi, an operator network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-described method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions, such as the memory 804 including instructions executable by the processor 820 of the device 800 to perform the above-described method, is also provided. For example, the storage medium may be a non-transitory computer-readable storage medium such as a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the audio playing method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An audio playing method, applied to a client, comprising:
acquiring audio data to be played of a preset page and audio data of a live stream;
creating an audio context component when the client meets a compatibility condition of an audio context interface;
calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data, so as to achieve the playing effect of the audio data to be played;
and calling a preset audio playing interface to play the audio data of the live stream, wherein the preset audio playing interface is independent of the audio context component.
2. The audio playing method according to claim 1, wherein the calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data comprises:
calling the audio context component to create an audio conversion module and an audio playing module;
converting the audio data to be played into buffer data using the audio conversion module;
and connecting the buffer data to an outlet of the audio context component using the audio playing module, so as to play the buffer data.
3. The audio playing method of claim 2, further comprising:
in response to an unloading instruction for the preset page, stopping running the audio conversion module and the audio playing module and deleting the buffer data, so as to close the audio context component.
4. The audio playing method according to claim 1, wherein the method further comprises:
when the client does not meet the compatibility condition of the audio context interface and the client is in a live broadcast state, pausing playing the audio data to be played after the audio data to be played is acquired.
5. The audio playing method according to claim 4, wherein the method further comprises:
if the client is not in a live broadcast state, calling the preset audio playing interface to play the audio data to be played after the audio data to be played is acquired.
6. The audio playing method according to claim 5, wherein the method further comprises:
in response to an unloading instruction for the preset page, stopping playing the audio data to be played and closing the preset audio playing interface.
7. The audio playing method according to claim 1, wherein the calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data comprises:
calling the audio context component, converting the audio data to be played into buffer data, and playing the buffer data in response to a first instruction for switching the preset page to the foreground;
and pausing playing the buffer data in response to a second instruction for switching the preset page to the background.
8. An audio playing apparatus, applied to a client, comprising:
an acquisition unit configured to acquire audio data to be played of a preset page and audio data of a live stream;
a creating unit configured to create an audio context component when the client meets a compatibility condition of an audio context interface;
a first playing unit configured to call the audio context component, convert the audio data to be played into buffer data, and play the buffer data, so as to achieve the playing effect of the audio data to be played;
and a second playing unit configured to call a preset audio playing interface to play the audio data of the live stream, wherein the preset audio playing interface is independent of the audio context component.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the audio playback method of any of claims 1 to 7.
10. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an audio-playing electronic device, enable the audio-playing electronic device to perform the audio playing method of any of claims 1 to 7.
Publication: CN115633215A, published 2023-01-20.



Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination