CN109101166B - Audio control method, device and storage medium - Google Patents

Audio control method, device and storage medium

Info

Publication number
CN109101166B
Authority
CN
China
Prior art keywords
audio
webpage
playing
control
instruction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811004981.1A
Other languages
Chinese (zh)
Other versions
CN109101166A (en)
Inventor
苏卓斌
吴娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201811004981.1A priority Critical patent/CN109101166B/en
Publication of CN109101166A publication Critical patent/CN109101166A/en
Application granted granted Critical
Publication of CN109101166B publication Critical patent/CN109101166B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0485Scrolling or panning
    • G06F3/04855Interaction with scrollbars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/165Management of the audio stream, e.g. setting of volume, audio stream path

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses an audio control method, an audio control device and a storage medium, and belongs to the technical field of audio processing. The method comprises the following steps: a first webpage receives an audio playing instruction from a second webpage, wherein the audio playing instruction is sent by the second webpage after an audio playing operation is detected and carries audio information and playing state information of the audio played in the second webpage; the first webpage takes the audio information and the playing state information as parameters of control elements and determines a plurality of control elements; and the first webpage generates an audio playing control according to the plurality of control elements and displays the audio playing control, wherein the audio playing control is used for performing associated control of the audio. The invention avoids the need to switch repeatedly among a plurality of webpages, improves operational convenience, and thereby improves audio control efficiency.

Description

Audio control method, device and storage medium
Technical Field
The present invention relates to the field of audio processing technologies, and in particular, to an audio control method, an audio control device, and a storage medium.
Background
Currently, audio can be played not only by using an APP (Application) such as QQ Music or Kugou Music, but also by using a webpage. During audio playing, the user may need to control the audio, for example, to adjust the volume, adjust the playing progress, or collect (favorite) the audio.
In the related art, when a user plays audio using a webpage and needs to control a piece of audio, the user generally has to click the audio to enter the webpage where the audio is located. The user can then operate the audio in that webpage, thereby controlling the audio.
However, in the above implementation, the user can operate the audio only by entering the webpage where the audio is located. When the user is browsing another webpage and wants to operate the currently played audio, the user has to switch to the webpage where the audio is located and, after the operation is completed, switch back to the webpage being browsed.
Disclosure of Invention
The embodiment of the invention provides an audio control method, an audio control device and a storage medium, which can solve the problem in the related art that a user needs to switch repeatedly among a plurality of webpages, making the operation cumbersome and the audio control efficiency low. The technical solutions are as follows:
in a first aspect, an audio control method is provided, the method comprising:
the method comprises the steps that a first webpage receives an audio playing instruction from a second webpage, wherein the audio playing instruction is sent by the second webpage after an audio playing operation is detected and carries audio information and playing state information of audio played in the second webpage;
the first webpage takes the audio information and the playing state information as parameters of control elements, and a plurality of control elements are determined;
and the first webpage generates an audio playing control according to the plurality of control elements and displays the audio playing control, wherein the audio playing control is used for performing associated control of the audio.
Optionally, after the displaying the audio playing control, the method further includes:
and when the first webpage detects audio control operation based on the audio playing control, sending an audio control instruction to the second webpage, wherein the audio control instruction is used for indicating the second webpage to control the audio.
Optionally, the audio control instruction includes any one of an audio pause instruction, an audio accelerated playing instruction, an audio decelerated playing instruction, an audio volume adjustment instruction, an audio collection instruction, and an audio purchasing instruction.
Optionally, after the displaying the audio playing control, the method further includes:
when the first webpage receives an audio closing instruction or a webpage refreshing instruction from the second webpage, an audio playing component is established;
and the first webpage plays the audio through the audio playing component according to the audio information and the playing state information of the audio.
Optionally, the first webpage receives an audio playing instruction from a second webpage, including:
the first webpage receives an audio playing instruction sent by a webpage browser, and the webpage browser is a browser for opening the first webpage and the second webpage and is used for receiving the audio playing instruction sent by the second webpage and forwarding the audio playing instruction to the first webpage.
In a second aspect, there is provided an audio control apparatus, the apparatus comprising:
a receiving module, configured to receive an audio playing instruction from a second webpage, where the audio playing instruction is sent by the second webpage after detecting an audio playing operation, and carries audio information and playing state information of an audio played in the second webpage;
the determining module is used for determining a plurality of control elements by taking the audio information and the playing state information as parameters of the control elements;
and the generating and displaying module is used for generating an audio playing control according to the control elements and displaying the audio playing control, and the audio playing control is used for associating control over the audio.
Optionally, the apparatus further comprises:
and the sending module is used for sending an audio control instruction to the second webpage when the first webpage detects an audio control operation based on the audio playing control, wherein the audio control instruction is used for indicating the second webpage to control the audio.
Optionally, the audio control instruction includes any one of an audio pause instruction, an audio accelerated playing instruction, an audio decelerated playing instruction, an audio volume adjustment instruction, an audio collection instruction, and an audio purchasing instruction.
Optionally, the apparatus further comprises:
the establishing module is used for establishing an audio playing component when receiving an audio closing instruction or a webpage refreshing instruction from the second webpage;
and the playing module is used for playing the audio through the audio playing component according to the audio information and the playing state information of the audio.
Optionally, the receiving module is configured to:
and receiving an audio playing instruction sent by a web browser, wherein the web browser is a browser for opening the first web page and the second web page, and is used for receiving the audio playing instruction sent by the second web page and forwarding the audio playing instruction to the first web page.
In a third aspect, a computer-readable storage medium is provided, the computer-readable storage medium having stored thereon instructions, which when executed by a processor, implement the audio control method of the first aspect.
In a fourth aspect, there is provided a computer program product containing instructions which, when run on a computer, cause the computer to perform the audio control method of the first aspect described above.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
and after detecting the audio playing operation, the second webpage sends an audio playing instruction to the first webpage, wherein the audio playing instruction carries audio information and playing state information of the audio played in the second webpage. The first webpage takes the audio information and the playing state information as parameters of the control elements, and determines a plurality of control elements. The first webpage generates and displays the audio playing control used for correlating the control of the audio according to the control elements, so that a user can control and operate the audio played in the second webpage in the first webpage based on the audio playing control, the situation that the user needs to repeatedly switch among the plurality of webpages is avoided, convenience of operation is improved, and audio control efficiency is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a schematic diagram of an implementation environment shown in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of audio control according to an exemplary embodiment;
FIG. 3 is a flow chart illustrating a method of audio control according to another exemplary embodiment;
FIG. 4 is a schematic illustration of a display of an audio playback control, according to an exemplary embodiment;
FIG. 5 is a schematic diagram illustrating the structure of an audio control device according to an exemplary embodiment;
FIG. 6 is a schematic diagram illustrating the structure of an audio control device according to another exemplary embodiment;
FIG. 7 is a schematic diagram illustrating the structure of an audio control device according to another exemplary embodiment;
FIG. 8 is a block diagram illustrating the structure of a terminal 700 according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Before describing the audio control method provided by the embodiment of the present invention in detail, the application scenario and the implementation environment related to the embodiment of the present invention are briefly described.
First, a brief description is given of an application scenario related to the embodiment of the present invention.
With the rapid development of computer technology, users can play audio using webpages. Suppose a user opens a webpage to play audio and then browses other webpages. If the user wants to operate the currently played audio while browsing, for example to collect a piece of audio that catches the user's interest, the user needs to switch to the webpage where the audio is located, operate the audio, and then return to the webpage being browsed. If the user later wants to operate the audio again, for example to adjust its volume, the user needs to switch back to the webpage where the audio is located once more. The user therefore needs to switch among a plurality of webpages repeatedly, which makes the operation cumbersome and reduces audio control efficiency. Moreover, such repeated switching is likely to interrupt the user's train of thought and degrade the user experience, for example by dampening the user's desire to purchase a product (e.g., paid audio). Therefore, an embodiment of the present invention provides an audio control method that avoids the need for such repeated operations, improves audio control efficiency, and improves the user experience. For its specific implementation, refer to the embodiments shown in FIG. 2 and FIG. 3 below.
Next, a brief description will be given of an implementation environment related to an embodiment of the present invention.
Referring to fig. 1, fig. 1 is a schematic diagram illustrating an implementation environment according to an exemplary embodiment, where the implementation environment mainly includes a first web page and a second web page, and information interaction between the first web page and the second web page is possible. The first webpage and the second webpage can be provided with executable programs, so that the audio control method provided by the embodiment of the invention can be realized through the executable programs.
Further, a web browser may be included in the implementation environment, where the web browser may be a browser for opening the first web page and the second web page, and in one embodiment, the web browser may be configured to implement information interaction between the first web page and the second web page.
After the application scenarios and the implementation environments related to the embodiments of the present invention are described, an audio control method provided by the embodiments of the present invention will be described with reference to the accompanying drawings.
Fig. 2 is a flowchart illustrating an audio control method according to an exemplary embodiment, which may be applied to the implementation environment shown in fig. 1, and which may include the following implementation steps:
step 201: the first webpage receives an audio playing instruction from a second webpage, wherein the audio playing instruction is sent by the second webpage after detecting an audio playing operation and carries audio information and playing state information of audio played in the second webpage.
Step 202: and the first webpage takes the audio information and the playing state information as parameters of control elements, and determines a plurality of control elements.
Step 203: the first webpage generates an audio playing control according to the plurality of control elements and displays the audio playing control, wherein the audio playing control is used for performing associated control of the audio.
In the embodiment of the present invention, after detecting an audio playing operation, the second webpage sends an audio playing instruction to the first webpage, where the audio playing instruction carries audio information and playing state information of an audio played in the second webpage. The first webpage takes the audio information and the playing state information as parameters of the control elements, and determines a plurality of control elements. The first webpage generates and displays the audio playing control used for correlating the control of the audio according to the control elements, so that a user can control and operate the audio played in the second webpage in the first webpage based on the audio playing control, the situation that the user needs to repeatedly switch among the plurality of webpages is avoided, convenience of operation is improved, and audio control efficiency is improved.
Optionally, after the displaying the audio playing control, the method further includes:
and when the first webpage detects audio control operation based on the audio playing control, sending an audio control instruction to the second webpage, wherein the audio control instruction is used for indicating the second webpage to control the audio.
Optionally, the audio control instruction includes any one of an audio pause instruction, an audio accelerated playing instruction, an audio decelerated playing instruction, an audio volume adjustment instruction, an audio collection instruction, and an audio purchasing instruction.
Optionally, after the displaying the audio playing control, the method further includes:
when the first webpage receives an audio closing instruction or a webpage refreshing instruction from the second webpage, an audio playing component is established;
and the first webpage plays the audio through the audio playing component according to the audio information and the playing state information of the audio.
Optionally, the first webpage receives an audio playing instruction from a second webpage, including:
the first webpage receives an audio playing instruction sent by a webpage browser, and the webpage browser is a browser for opening the first webpage and the second webpage and is used for receiving the audio playing instruction sent by the second webpage and forwarding the audio playing instruction to the first webpage.
All the above optional technical solutions can be combined arbitrarily to form an optional embodiment of the present invention, which is not described in detail herein.
Fig. 3 is a flowchart illustrating an audio control method according to an exemplary embodiment, which may be applied to the implementation environment shown in fig. 1, and which may include the following implementation steps:
step 301: and the second webpage sends an audio playing instruction to the first webpage.
The audio playing instruction is sent by the second webpage after detecting the audio playing operation, and carries the audio information and the playing state information of the audio played in the second webpage.
The second webpage may be any webpage capable of playing audio, and the first webpage may be any webpage different from the second webpage. In some embodiments, the user has opened both the first webpage and the second webpage; the user may select, according to the user's preference, audio to be played in the second webpage, and after detecting the audio playing operation, the second webpage may send the audio playing instruction to the first webpage, where the audio playing instruction carries the audio information and the playing state information of the played audio.
The audio information may include at least one of a name of the audio, a name of a singer, a name of an album to which the singer belongs, and lyrics. In addition, the playing state information may include at least one of a playing time point and a playing volume of the audio.
In a possible implementation manner, a specific implementation that the second webpage sends the audio playing instruction to the first webpage may include: the second webpage sends the audio playing instruction to the webpage browser, and the webpage browser can forward the audio playing instruction to the first webpage after receiving the audio playing instruction. The web browser is a browser for opening the first web page and the second web page.
Further, the second webpage may send the audio playing instruction to a storage space of the web browser, and when detecting that the second webpage writes data into the storage space, the web browser sends the written data to the opened first webpage.
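The patent does not name the concrete storage mechanism or browser API used for this forwarding. A minimal TypeScript sketch is given below under the assumption that the two same-origin webpages share data through localStorage, whose writes raise a 'storage' event in every other open page; the storage key and field names are illustrative only.

```typescript
// Hypothetical sketch only: all key and field names below are illustrative assumptions.

interface AudioPlayInstruction {
  audioInfo: { name: string; singer: string };        // audio information
  playState: { timePoint: number; volume: number };   // playing state information
}

// Second webpage: after detecting the audio playing operation, write the
// instruction into the shared storage space.
function sendAudioPlayInstruction(instruction: AudioPlayInstruction): void {
  localStorage.setItem('audio-play-instruction', JSON.stringify(instruction));
}

// First webpage: receive the forwarded instruction when the shared storage
// space is written to by another page.
window.addEventListener('storage', (event: StorageEvent) => {
  if (event.key !== 'audio-play-instruction' || !event.newValue) return;
  const instruction: AudioPlayInstruction = JSON.parse(event.newValue);
  console.log('received audio playing instruction', instruction);
});
```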
Further, in some embodiments, the websites to which the first webpage and the second webpage belong may be the same, for example, the first webpage and the second webpage both belong to audio-related webpages. In another embodiment, the second webpage may belong to a different website from the first webpage, which is not limited in the embodiment of the present invention.
Step 302: the first webpage receives an audio playing instruction from the second webpage.
In a possible implementation manner, a specific implementation that the first webpage receives the audio playing instruction from the second webpage may include: the first webpage receives an audio playing instruction sent by a webpage browser, and the webpage browser is a browser for opening the first webpage and the second webpage and is used for receiving the audio playing instruction sent by the second webpage and forwarding the audio playing instruction to the first webpage.
As described above, in some embodiments, the second web page may send the audio playing instruction to the web browser, and the web browser forwards the audio playing instruction to the first web page, and at this time, the first web page receives the audio playing instruction forwarded by the web browser.
Further, after receiving an audio playing instruction from a second webpage, the first webpage analyzes the audio playing instruction to obtain carried data, wherein the data includes audio information and playing state information of the audio played by the second webpage.
Step 303: the first webpage takes the audio information and the playing state information as parameters of control elements, and determines a plurality of control elements.
For example, when the audio information includes the name of the audio and the name of the singer, and the playing state information includes the playing time point and the playing volume of the audio, the first webpage determines the name of the audio, the name of the singer, the playing time point and the playing volume of the audio respectively as parameters of control elements to obtain a plurality of control elements, where the control elements are used for indicating the name of the audio, the name of the singer, the playing progress and the playing volume of the audio. Further, the first webpage can also determine a collection control element, a purchase control element and the like according to the audio information and the playing state information.
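As an illustration of step 303, the following sketch maps the parsed audio information and playing state information onto control-element parameters. The ControlElement shape and the element kinds are assumptions introduced for the example; the patent describes the elements only at a high level.

```typescript
// Hypothetical shape of a control element; the kinds and fields are illustrative.
type ControlElementKind =
  | 'audioName' | 'singerName' | 'playProgress' | 'playVolume'
  | 'collect' | 'purchase';

interface ControlElement {
  kind: ControlElementKind;
  value?: string | number;   // parameter taken from the instruction, if any
}

// Step 303 sketch: use the audio information and playing state information as
// parameters of control elements and determine a plurality of control elements.
function determineControlElements(
  audioInfo: { name: string; singer: string },
  playState: { timePoint: number; volume: number },
): ControlElement[] {
  return [
    { kind: 'audioName', value: audioInfo.name },
    { kind: 'singerName', value: audioInfo.singer },
    { kind: 'playProgress', value: playState.timePoint },
    { kind: 'playVolume', value: playState.volume },
    { kind: 'collect' },     // collection control element
    { kind: 'purchase' },    // purchase control element
  ];
}
```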
Step 304: the first webpage generates an audio playing control according to the plurality of control elements and displays the audio playing control, wherein the audio playing control is used for performing associated control of the audio.
Further, the specific implementation of the first webpage generating the audio playing control according to the multiple control elements may include: the first webpage combines the control elements to obtain an audio playing control, and the audio playing control comprises the control elements.
After the audio playing control is generated by the first web page, the audio playing control may be displayed at a specified position, where the specified position may be set according to actual requirements, for example, the specified position may be a bottom area of a page of the first web page, as shown in fig. 4, where fig. 4 is a schematic display diagram of an audio playing control according to an exemplary embodiment. In this way, the user can control the audio in the second webpage based on the audio playing control.
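A possible rendering of step 304 is sketched below, reusing the ControlElement shape from the previous sketch: the control elements are combined into one audio playing control that is attached to the bottom area of the first webpage. The DOM structure, element id and styling are placeholders, not prescribed by the patent.

```typescript
// Hypothetical sketch: combine the control elements into a single audio playing
// control and display it at a specified position (here, the bottom of the page).
function renderAudioPlayControl(elements: ControlElement[]): HTMLElement {
  const bar = document.createElement('div');
  bar.id = 'audio-play-control';
  bar.style.cssText = 'position:fixed;bottom:0;left:0;right:0;display:flex;gap:8px;';
  for (const element of elements) {
    const node = document.createElement('span');
    node.dataset.kind = element.kind;   // lets later code locate each element
    node.textContent =
      element.value !== undefined ? String(element.value) : element.kind;
    bar.appendChild(node);
  }
  document.body.appendChild(bar);       // display at the specified position
  return bar;
}
```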
Further, after the audio playing control is displayed, when the first webpage detects an audio control operation based on the audio playing control, an audio control instruction is sent to the second webpage, and the audio control instruction is used for instructing the second webpage to control the audio.
The audio control instruction comprises any one of an audio pause instruction, an audio accelerated playing instruction, an audio decelerated playing instruction, an audio volume adjusting instruction, an audio collection instruction and an audio purchasing instruction.
That is to say, when a user who is browsing the first webpage wants to control the audio played in the second webpage, the user may operate the audio based on the audio playing control displayed on the first webpage; in other words, the user may control the audio based on the plurality of control elements included in the audio playing control.
For example, when a user wants to adjust the volume of the audio played by the second web page while browsing the first web page, the volume adjustment control included in the audio playing control displayed on the first web page may be slid to a target position, and when the first web page detects an operation on the volume adjustment control, an audio volume adjustment instruction is generated, where the audio volume adjustment instruction carries the target volume. Further, the first webpage may send the audio volume adjustment instruction to a storage space of a web browser, and after receiving the audio volume adjustment instruction, the web browser forwards the audio volume adjustment instruction to a second webpage, where the second webpage adjusts the volume of the currently played audio to a target volume.
For another example, when the user wants to pause the audio played by the second web page while browsing the first web page, the user may click a pause control included in the audio play control displayed on the first web page, and when the first web page detects an operation on the pause control, an audio pause instruction is generated. Further, the first webpage may send the audio pause instruction to a storage space of a web browser, and after receiving the audio pause instruction, the web browser forwards the audio pause instruction to a second webpage, where the second webpage pauses the currently played audio. Therefore, the control of the audio played by the second webpage on the first webpage is realized.
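The following sketch illustrates how such an audio control instruction could travel back over the same assumed localStorage channel, using the pause and volume cases as examples; the storage key, payload shape and the way the second webpage locates its audio element are all assumptions for illustration.

```typescript
// First webpage (hypothetical sketch): write an audio control instruction into
// the shared storage space when an operation on the audio playing control is detected.
function sendAudioControlInstruction(type: 'pause' | 'volume', value?: number): void {
  localStorage.setItem(
    'audio-control-instruction',
    JSON.stringify({ type, value, sentAt: Date.now() }),  // timestamp keeps repeated writes distinct
  );
}

// Second webpage: apply the forwarded control instruction to the playing audio.
window.addEventListener('storage', (event: StorageEvent) => {
  if (event.key !== 'audio-control-instruction' || !event.newValue) return;
  const instruction = JSON.parse(event.newValue) as { type: string; value?: number };
  const audio = document.querySelector<HTMLAudioElement>('audio');
  if (!audio) return;
  if (instruction.type === 'pause') audio.pause();
  if (instruction.type === 'volume' && instruction.value !== undefined) {
    audio.volume = instruction.value;   // HTMLAudioElement.volume expects a value in [0, 1]
  }
});
```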
It should be noted that, the above description is only made by taking an example that the audio control instruction includes any one of an audio pause instruction, an audio speed-up playing instruction, an audio speed-down playing instruction, an audio volume adjusting instruction, an audio collection instruction, and an audio purchasing instruction. In another embodiment, the audio control command may further include other commands, such as an audio sharing command, which is not limited in the embodiments of the present invention.
Further, after the audio playing control is displayed, when the first webpage receives an audio closing instruction or a webpage refreshing instruction from the second webpage, an audio playing component is established, and the audio is played through the audio playing component according to the audio information and the playing state information of the audio.
In a possible implementation manner, after the first webpage displays the audio playing control, if an audio closing instruction sent by the second webpage is received, this indicates that the second webpage has stopped playing the audio. In this case, the first webpage may continue playing the audio from the position where the second webpage stopped. Therefore, the first webpage establishes an audio playing component in the first webpage, and continues playing the audio through the audio playing component according to the audio information and the playing state information of the audio.
In addition, when the first webpage receives a webpage refreshing instruction from the second webpage, the first webpage can replace the second webpage to play audio. Therefore, the first webpage establishes an audio playing component in the first webpage, so that the audio is continuously played through the audio playing component according to the audio information and the playing state information of the audio.
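One way the first webpage could establish the audio playing component and resume playback is sketched below. The audio URL parameter is an assumption; the patent only refers to the audio information and playing state information carried in the instruction.

```typescript
// Hypothetical sketch: on receiving an audio closing or webpage refreshing
// instruction from the second webpage, establish an audio playing component in
// the first webpage and continue playing from the recorded playing state.
function takeOverPlayback(
  audioUrl: string,                                  // assumption: a resolvable audio URL
  playState: { timePoint: number; volume: number },
): HTMLAudioElement {
  const player = new Audio(audioUrl);                // the audio playing component
  player.volume = playState.volume;                  // keep the same playing volume (0..1)
  player.addEventListener('loadedmetadata', () => {
    player.currentTime = playState.timePoint;        // seek once the duration is known
    void player.play();                              // play() returns a Promise in modern browsers
  });
  return player;
}
```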
It should be noted that the above description only takes as an example the case in which, after the first webpage receives the webpage refreshing instruction from the second webpage, the first webpage takes over from the second webpage and continues playing the audio. In some embodiments, the second webpage may instead continue to play the audio itself after the refresh is completed: when the second webpage detects the refresh operation, it performs the refresh and sends the audio information and playing state information of the audio being played at that moment to the storage space of the web browser for storage. After the second webpage is refreshed, it can read the stored data from the storage space of the web browser and continue playing the audio according to the read data.
Further, in the audio playing process of the second webpage, the playing state information of the played audio can be written into the storage space of the web browser in real time. And the webpage browser forwards the written data to the first webpage every time the webpage browser detects that the data are written in the storage space. The first webpage receives the data forwarded by the webpage browser, and updates the playing state of the audio displayed by the audio playing control according to the data so as to achieve the purpose of synchronizing with the audio played by the second webpage.
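A sketch of this real-time synchronization is given below, again assuming the localStorage channel and the control rendered in the earlier sketch (identified by its data-kind attributes); the one-second publishing interval and the storage key are illustrative choices.

```typescript
// Second webpage (hypothetical sketch): publish the playing state in real time
// by writing it into the shared storage space.
function publishPlayState(audio: HTMLAudioElement): void {
  setInterval(() => {
    localStorage.setItem(
      'audio-play-state',
      JSON.stringify({ timePoint: audio.currentTime, volume: audio.volume }),
    );
  }, 1000);
}

// First webpage: keep the displayed audio playing control in sync with the
// audio played by the second webpage.
window.addEventListener('storage', (event: StorageEvent) => {
  if (event.key !== 'audio-play-state' || !event.newValue) return;
  const state = JSON.parse(event.newValue) as { timePoint: number; volume: number };
  const progress = document.querySelector('#audio-play-control [data-kind="playProgress"]');
  if (progress) progress.textContent = String(Math.floor(state.timePoint));
});
```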
Further, after sending the audio playing instruction to the first webpage, the second webpage can also generate a plurality of control elements according to the audio information and the playing state information of the audio, generate an audio playing control according to the plurality of control elements, and display the audio playing control at a specified position of the second webpage. In this way, the user can also perform control operations on the played audio in the second webpage based on the audio playing control.
In the embodiment of the present invention, after detecting an audio playing operation, the second webpage sends an audio playing instruction to the first webpage, where the audio playing instruction carries audio information and playing state information of an audio played in the second webpage. The first webpage takes the audio information and the playing state information as parameters of the control elements, and determines a plurality of control elements. The first webpage generates and displays the audio playing control used for correlating the control of the audio according to the control elements, so that a user can control and operate the audio played in the second webpage in the first webpage based on the audio playing control, the situation that the user needs to repeatedly switch among the plurality of webpages is avoided, convenience of operation is improved, and audio control efficiency is improved.
Fig. 5 is a schematic diagram illustrating the structure of an audio control device according to an exemplary embodiment, which may be implemented by software, hardware, or a combination of both. The audio control apparatus may include:
a receiving module 410, configured to receive an audio playing instruction from a second webpage, where the audio playing instruction is sent by the second webpage after detecting an audio playing operation, and carries audio information and playing state information of an audio played in the second webpage;
a determining module 420, configured to determine multiple control elements by using the audio information and the play status information as parameters of the control elements;
and a generating and displaying module 430, configured to generate an audio playing control according to the multiple control elements, and display the audio playing control, where the audio playing control is used to associate with the control of the audio.
Optionally, referring to fig. 6, the apparatus further includes:
a sending module 440, configured to send an audio control instruction to the second webpage when the first webpage detects an audio control operation based on the audio playing control, where the audio control instruction is used to instruct the second webpage to control the audio.
Optionally, the audio control instruction includes any one of an audio pause instruction, an audio accelerated playing instruction, an audio decelerated playing instruction, an audio volume adjustment instruction, an audio collection instruction, and an audio purchasing instruction.
Optionally, referring to fig. 7, the apparatus further includes:
the establishing module 450 is configured to establish an audio playing component when receiving an audio closing instruction or a webpage refreshing instruction from the second webpage;
the playing module 460 is configured to play the audio through the audio playing component according to the audio information and the playing state information of the audio.
Optionally, the receiving module 410 is configured to:
and receiving an audio playing instruction sent by a web browser, wherein the web browser is a browser for opening the first web page and the second web page, and is used for receiving the audio playing instruction sent by the second web page and forwarding the audio playing instruction to the first web page.
In the embodiment of the present invention, after detecting an audio playing operation, the second webpage sends an audio playing instruction to the first webpage, where the audio playing instruction carries audio information and playing state information of an audio played in the second webpage. The first webpage takes the audio information and the playing state information as parameters of the control elements, and determines a plurality of control elements. The first webpage generates and displays the audio playing control used for correlating the control of the audio according to the control elements, so that a user can control and operate the audio played in the second webpage in the first webpage based on the audio playing control, the situation that the user needs to repeatedly switch among the plurality of webpages is avoided, convenience of operation is improved, and audio control efficiency is improved.
It should be noted that: in the audio control apparatus provided in the foregoing embodiment, when implementing the audio control method, only the division of the functional modules is illustrated, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the functions described above. In addition, the audio control apparatus provided in the above embodiment and the embodiment of the method for implementing audio control belong to the same concept, and specific implementation processes thereof are described in the method embodiment and are not described herein again.
Fig. 8 is a block diagram illustrating a terminal 700 according to an exemplary embodiment of the present invention. The terminal 700 can be used to run a web browser, and the terminal 700 can be: a smartphone, a tablet, a laptop, or a desktop computer. Terminal 700 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, desktop terminal, and so on.
In general, terminal 700 includes: a processor 701 and a memory 702.
The processor 701 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 701 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 701 may also include a main processor and a coprocessor, where the main processor is a processor for Processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 701 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 701 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 702 may include one or more computer-readable storage media, which may be non-transitory. Memory 702 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 702 is used to store at least one instruction for execution by processor 701 to implement the audio control method provided by the method embodiments herein.
In some embodiments, the terminal 700 may further optionally include: a peripheral interface 703 and at least one peripheral. The processor 701, the memory 702, and the peripheral interface 703 may be connected by buses or signal lines. Various peripheral devices may be connected to peripheral interface 703 via a bus, signal line, or circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 704, touch screen display 705, camera 706, audio circuitry 707, positioning components 708, and power source 709.
The peripheral interface 703 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 701 and the memory 702. In some embodiments, processor 701, memory 702, and peripheral interface 703 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 701, the memory 702, and the peripheral interface 703 may be implemented on a separate chip or circuit board, which is not limited in this embodiment.
The Radio Frequency circuit 704 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 704 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 704 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 704 includes: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 704 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 704 may also include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 705 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 705 is a touch display screen, the display screen 705 also has the ability to capture touch signals on or over the surface of the display screen 705. The touch signal may be input to the processor 701 as a control signal for processing. At this point, the display 705 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 705 may be one, providing the front panel of the terminal 700; in other embodiments, the display 705 can be at least two, respectively disposed on different surfaces of the terminal 700 or in a folded design; in still other embodiments, the display 705 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 700. Even more, the display 705 may be arranged in a non-rectangular irregular pattern, i.e. a shaped screen. The Display 705 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), or the like.
The camera assembly 706 is used to capture images or video. Optionally, camera assembly 706 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 706 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 707 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 701 for processing or inputting the electric signals to the radio frequency circuit 704 to realize voice communication. For the purpose of stereo sound collection or noise reduction, a plurality of microphones may be provided at different portions of the terminal 700. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 701 or the radio frequency circuit 704 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, the audio circuitry 707 may also include a headphone jack.
The positioning component 708 is used to locate the current geographic location of the terminal 700 for navigation or LBS (Location Based Service). The positioning component 708 can be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
Power supply 709 is provided to supply power to various components of terminal 700. The power source 709 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 709 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, terminal 700 also includes one or more sensors 710. The one or more sensors 710 include, but are not limited to: acceleration sensor 711, gyro sensor 712, pressure sensor 713, fingerprint sensor 714, optical sensor 715, and proximity sensor 716.
The acceleration sensor 711 can detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 700. For example, the acceleration sensor 711 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 701 may control the touch screen 705 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 711. The acceleration sensor 711 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 712 may detect a body direction and a rotation angle of the terminal 700, and the gyro sensor 712 may cooperate with the acceleration sensor 711 to acquire a 3D motion of the terminal 700 by the user. From the data collected by the gyro sensor 712, the processor 701 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensors 713 may be disposed on a side bezel of terminal 700 and/or an underlying layer of touch display 705. When the pressure sensor 713 is disposed on a side frame of the terminal 700, a user's grip signal on the terminal 700 may be detected, and the processor 701 performs right-left hand recognition or shortcut operation according to the grip signal collected by the pressure sensor 713. When the pressure sensor 713 is disposed at a lower layer of the touch display 705, the processor 701 controls the operability control on the UI interface according to the pressure operation of the user on the touch display 705. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 714 is used for collecting a fingerprint of a user, and the processor 701 identifies the identity of the user according to the fingerprint collected by the fingerprint sensor 714, or the fingerprint sensor 714 identifies the identity of the user according to the collected fingerprint. When the user identity is identified as a trusted identity, the processor 701 authorizes the user to perform relevant sensitive operations, including unlocking a screen, viewing encrypted information, downloading software, paying, changing settings, and the like. The fingerprint sensor 714 may be disposed on the front, back, or side of the terminal 700. When a physical button or a vendor Logo is provided on the terminal 700, the fingerprint sensor 714 may be integrated with the physical button or the vendor Logo.
The optical sensor 715 is used to collect the ambient light intensity. In one embodiment, the processor 701 may control the display brightness of the touch display 705 based on the ambient light intensity collected by the optical sensor 715. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 705 is increased; when the ambient light intensity is low, the display brightness of the touch display 705 is turned down. In another embodiment, processor 701 may also dynamically adjust the shooting parameters of camera assembly 706 based on the ambient light intensity collected by optical sensor 715.
A proximity sensor 716, also referred to as a distance sensor, is typically disposed on a front panel of the terminal 700. The proximity sensor 716 is used to collect the distance between the user and the front surface of the terminal 700. In one embodiment, when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually decreases, the processor 701 controls the touch display 705 to switch from the bright-screen state to the screen-off state; when the proximity sensor 716 detects that the distance between the user and the front surface of the terminal 700 gradually increases, the processor 701 controls the touch display 705 to switch from the screen-off state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting of terminal 700 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
An embodiment of the present application further provides a non-transitory computer-readable storage medium, and when instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to execute the audio control method provided in the embodiment shown in fig. 2 or fig. 3.
Embodiments of the present application further provide a computer program product containing instructions, which when run on a computer, cause the computer to execute the audio control method provided in the embodiment shown in fig. 2 or fig. 3.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (9)

1. An audio control method, characterized in that the method comprises:
the method comprises the steps that a first webpage receives an audio playing instruction from a second webpage, wherein the audio playing instruction is sent by the second webpage after an audio playing operation is detected and carries audio information and playing state information of audio played in the second webpage;
the first webpage takes the audio information and the playing state information as parameters of control elements, and a plurality of control elements are determined;
the first webpage combines the plurality of control elements to obtain an audio playing control and displays the audio playing control, wherein the audio playing control comprises the plurality of control elements and is used for performing associated control of the audio;
the first webpage takes the audio information and the playing state information as parameters of control elements, and determines a plurality of control elements, wherein the determining comprises the following steps:
when the audio information comprises an audio name and a singer name, and the playing state information comprises an audio playing time point and an audio playing volume, the first webpage determines the audio name, the singer name, the audio playing time point and the audio playing volume as parameters of control elements respectively to obtain a plurality of control elements, and the control elements are used for indicating the audio name, the singer name, the audio playing progress and the audio playing volume;
after the displaying the audio playing control, the method further includes:
when the first webpage receives an audio closing instruction or a webpage refreshing instruction from the second webpage, an audio playing component is established;
and the first webpage plays the audio through the audio playing component according to the audio information and the playing state information of the audio.
2. The method of claim 1, wherein after displaying the audio playback control, further comprising:
and when the first webpage detects audio control operation based on the audio playing control, sending an audio control instruction to the second webpage, wherein the audio control instruction is used for indicating the second webpage to control the audio.
3. The method of claim 2, wherein the audio control instructions comprise any one of audio pause instructions, audio speed-up play instructions, audio speed-down play instructions, audio volume adjustment instructions, audio collection instructions, audio purchase instructions.
4. The method of any of claims 1-3, wherein the first web page receives an audio playback instruction from a second web page, comprising:
the first webpage receives an audio playing instruction sent by a webpage browser, and the webpage browser is a browser for opening the first webpage and the second webpage and is used for receiving the audio playing instruction sent by the second webpage and forwarding the audio playing instruction to the first webpage.
5. An audio control apparatus, characterized in that the apparatus comprises:
the receiving module is used for receiving an audio playing instruction from a second webpage through a first webpage, wherein the audio playing instruction is sent by the second webpage after detecting an audio playing operation and carries audio information and playing state information of audio played in the second webpage;
the determining module is used for determining a plurality of control elements by taking the audio information and the playing state information as parameters of the control elements;
a generating and displaying module, configured to combine the multiple control elements to obtain an audio playing control, and display the audio playing control, where the audio playing control includes the multiple control elements, and the audio playing control is used to associate control over the audio;
the determining module is configured to:
when the audio information comprises an audio name and a singer name, and the playing state information comprises an audio playing time point and an audio playing volume, the first webpage determines the audio name, the singer name, the audio playing time point and the audio playing volume as parameters of control elements respectively to obtain a plurality of control elements, and the control elements are used for indicating the audio name, the singer name, the audio playing progress and the audio playing volume;
the device further comprises:
the establishing module is used for establishing an audio playing component when receiving an audio closing instruction or a webpage refreshing instruction from the second webpage;
and the playing module is used for playing the audio through the audio playing component according to the audio information and the playing state information of the audio.
6. The apparatus of claim 5, wherein the apparatus further comprises:
and the sending module is used for sending an audio control instruction to the second webpage when the first webpage detects an audio control operation based on the audio playing control, wherein the audio control instruction is used for indicating the second webpage to control the audio.
7. The apparatus of claim 6, wherein the audio control instructions comprise any one of audio pause instructions, audio speed-up play instructions, audio speed-down play instructions, audio volume adjustment instructions, audio collection instructions, audio purchase instructions.
8. The apparatus of any one of claims 5-7, wherein the receiving module is to:
and receiving an audio playing instruction sent by a web browser, wherein the web browser is a browser for opening the first web page and the second web page, and is used for receiving the audio playing instruction sent by the second web page and forwarding the audio playing instruction to the first web page.
9. A computer-readable storage medium having instructions stored thereon, wherein the instructions, when executed by a processor, implement the steps of the method of any of claims 1-4.
CN201811004981.1A 2018-08-30 2018-08-30 Audio control method, device and storage medium Active CN109101166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811004981.1A CN109101166B (en) 2018-08-30 2018-08-30 Audio control method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811004981.1A CN109101166B (en) 2018-08-30 2018-08-30 Audio control method, device and storage medium

Publications (2)

Publication Number Publication Date
CN109101166A CN109101166A (en) 2018-12-28
CN109101166B true CN109101166B (en) 2021-06-22

Family

ID=64864511

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811004981.1A Active CN109101166B (en) 2018-08-30 2018-08-30 Audio control method, device and storage medium

Country Status (1)

Country Link
CN (1) CN109101166B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109858237A (en) * 2019-03-05 2019-06-07 广州酷狗计算机科技有限公司 Audio data collecting method, apparatus, terminal and storage medium

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246665A (en) * 2012-02-08 2013-08-14 腾讯科技(深圳)有限公司 Method and apparatus for keeping music playing during web page switching

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103279321A (en) * 2013-05-07 2013-09-04 青岛海信电器股份有限公司 Audio and video control device and audio and video control method
CN106155507A (en) * 2015-03-31 2016-11-23 北京搜狗科技发展有限公司 A kind of page content display method and electronic equipment
US10387104B2 (en) * 2015-06-07 2019-08-20 Apple Inc. Audio control for web browser
CN106980446A (en) * 2016-01-19 2017-07-25 阿里巴巴集团控股有限公司 A kind of condition control method and device of page application
CN108388628B (en) * 2018-02-12 2022-02-22 腾讯科技(深圳)有限公司 Webpage audio playing method and device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103246665A (en) * 2012-02-08 2013-08-14 腾讯科技(深圳)有限公司 Method and apparatus for keeping music playing during web page switching

Also Published As

Publication number Publication date
CN109101166A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN108391171B (en) Video playing control method and device, and terminal
CN110602321B (en) Application program switching method and device, electronic device and storage medium
CN107908929B (en) Method and device for playing audio data
CN108449641B (en) Method, device, computer equipment and storage medium for playing media stream
CN108965922B (en) Video cover generation method and device and storage medium
CN110368689B (en) Game interface display method, system, electronic equipment and storage medium
CN109327608B (en) Song sharing method, terminal, server and system
CN109144346B (en) Song sharing method and device and storage medium
CN109068008B (en) Ringtone setting method, device, terminal and storage medium
CN110288689B (en) Method and device for rendering electronic map
WO2022134632A1 (en) Work processing method and apparatus
CN112965683A (en) Volume adjusting method and device, electronic equipment and medium
CN110868636B (en) Video material intercepting method and device, storage medium and terminal
CN110868642B (en) Video playing method, device and storage medium
CN111092991B (en) Lyric display method and device and computer storage medium
WO2020253129A1 (en) Song display method, apparatus and device, and storage medium
CN108664300B (en) Application interface display method and device in picture-in-picture mode
CN107943484B (en) Method and device for executing business function
CN112616082A (en) Video preview method, device, terminal and storage medium
CN109032492B (en) Song cutting method and device
CN109101166B (en) Audio control method, device and storage medium
CN111464829B (en) Method, device and equipment for switching media data and storage medium
CN111241334B (en) Method, device, system, equipment and storage medium for displaying song information page
CN109189525B (en) Method, device and equipment for loading sub-page and computer readable storage medium
CN114388001A (en) Multimedia file playing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant