CN115220625A - Audio playing method and device, electronic equipment and computer readable storage medium

Audio playing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN115220625A
Authority
CN
China
Prior art keywords
moving object
audio
display interface
user
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210855118.7A
Other languages
Chinese (zh)
Inventor
王志杰
揭凯雯
黎锦昌
黎法鸿
徐超
陈传艺
陈宙炜
党正军
刘松
杨亚斌
吴鸿琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd filed Critical Guangzhou Kugou Computer Technology Co Ltd
Priority to CN202210855118.7A priority Critical patent/CN115220625A/en
Publication of CN115220625A publication Critical patent/CN115220625A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488: Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G: PHYSICS
    • G11: INFORMATION STORAGE
    • G11C: STATIC STORES
    • G11C7/00: Arrangements for writing information into, or reading information out from, a digital store
    • G11C7/16: Storage of analogue signals in digital stores using an arrangement comprising analogue/digital [A/D] converters, digital memories and digital/analogue [D/A] converters

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an audio playing method and apparatus, an electronic device, and a computer-readable storage medium. The method comprises the following steps: determining a display interface during audio playing; in response to an arrival event of a beat node of the audio, displaying a moving object corresponding to the beat node on the display interface; and adjusting the display state of the moving object based on an interactive instruction issued by the user for the moving object. The rhythm information of the audio is expressed through the moving objects, so the rhythm information is displayed in a visual manner. In addition, the user can issue interactive instructions for the moving objects, so that the user can interact with the audio in real time and interactivity is enhanced.

Description

Audio playing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of audio information processing technologies, and in particular, to an audio playing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Currently, users often play music through electronic devices. While music is playing, the playing page often shows a dynamically rotating circular image, or a ring of dynamic ripples around the circular image.
In the course of carrying out the present application, the inventors found that the only association a user establishes with the music is to listen to the song, watch the rotating circular image or the dynamic ripples around it, and perceive the rhythm of the song through this dynamic effect alone. That is, it is difficult for the user to interact with the song in real time.
Disclosure of Invention
A first objective of the present application is to provide an audio playing method and apparatus, an electronic device, and a computer-readable storage medium.
To meet the various purposes of the application, the following technical solutions are adopted in the application:
an audio playing method provided in accordance with one of the objects of the present application may include:
determining a display interface during audio playing;
in response to an arrival event of a beat node of the audio, displaying a moving object corresponding to the beat node on a display interface;
and adjusting the display state of the moving object based on an interactive instruction issued by the user for the moving object.
Optionally, the step of displaying, on the display interface, the moving object corresponding to the beat node in response to the arrival event of the beat node of the audio may include:
in response to an arrival event of a beat node of the audio, invoking a moving object corresponding to the beat node so as to display it at a first preset position of the display interface;
controlling the moving object to move in the display interface according to the motion control data corresponding to the moving object, where the movable range of the moving object covers any point in the display interface.
Optionally, the step of controlling the motion of the moving object in the display interface according to the motion control data corresponding to the moving object may include:
and controlling the moving object to move in the display interface according to the motion trajectory data and/or deformation data corresponding to the moving object.
Optionally, the step of controlling the motion of the moving object in the display interface may include:
when the distance between the boundary of the moving object and a boundary of the display interface is smaller than a preset distance, controlling the moving object to rebound, or to re-enter and continue moving from any other boundary of the display interface;
and when the movement duration of the moving object reaches a preset duration, or the moving object is located at a second preset position, clearing the moving object.
Optionally, the display state may include at least one of a display size, a display position, a display shape, a retention state, and a motion state (moving or static).
Optionally, the step of adjusting the display state of the moving object based on an interactive instruction issued by the user for the moving object may include:
in response to a touch event issued by the user for the moving object, playing an animation special effect representing that the touched moving object is destroyed;
displaying the single score corresponding to the destroyed moving object;
and accumulating the single score into the user's total score.
Optionally, after adjusting the display state of the moving object based on an interactive instruction issued by a user for the moving object, the method may further include:
in response to a playing end event of the audio, displaying the user's total score;
and displaying a ranking list corresponding to the audio, wherein the ranking list comprises the user identifiers and corresponding total scores of the top N users, ranked from highest to lowest total score among the users synchronously listening to the audio, where N is a positive integer greater than or equal to 1.
Optionally, the moving objects are semitransparent three-dimensional figures, and at least some of the different moving objects are configured with colors different from each other.
An audio playing device adapted to one of the objectives of the present application may include an interface determination module, a motion execution module, and a state adjustment module, wherein:
the interface determination module is used for determining a display interface during audio playing;
the motion execution module is used for displaying, in response to an arrival event of a beat node of the audio, a moving object corresponding to the beat node on the display interface;
and the state adjustment module is used for adjusting the display state of the moving object based on an interactive instruction issued by the user for the moving object.
An electronic device adapted to one of the objectives of the present application includes a central processing unit and a memory, wherein the central processing unit is used for invoking and running a computer program stored in the memory to execute the steps of the audio playing method described in the present application.
A computer-readable storage medium adapted to one of the objectives of the present application stores, in the form of computer-readable instructions, a computer program implementing the audio playing method; when the computer program is invoked by a computer to run, the steps included in the method are executed.
A computer program product provided in accordance with another object of the present application includes a computer program/instructions, which when executed by a processor, implement the steps of the audio playing method described in any one of the embodiments of the present application.
Compared with the prior art, the application has the following advantages:
according to the embodiments of the application, during audio playing, a moving object that corresponds to a beat node and has interaction capability can be displayed on the display interface in response to each beat node reached during playback. Thus, the rhythm information of the audio can be effectively expressed in a dynamic form. Moreover, the user can issue interactive instructions for the moving object, so that the user can interact with the audio in real time. The user is thus deeply involved in the display of the audio rhythm information, and the interaction between the user and the song is enhanced.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic diagram of a network deployment architecture related to implementing the technical solution of the present application;
FIG. 2 is a flowchart illustrating an embodiment of an audio playing method according to the present application;
FIGS. 3 and 4 illustrate display interfaces of the present application, wherein FIG. 3 shows no moving objects and FIG. 4 shows a plurality of moving objects that have been generated;
FIGS. 5 and 6 are schematic flow diagrams of two different embodiments of enabling audio playback in the present application;
FIG. 7 is a schematic flowchart illustrating the process of controlling a moving object to move according to a beat node in an embodiment of the present application;
FIG. 8 is a flow chart illustrating the controlling and cleaning of moving objects in an embodiment of the present application;
FIG. 9 is a schematic flow chart illustrating the cleaning of a moving object according to the motion data of an acceleration sensor in an embodiment of the present application;
FIG. 10 is a schematic flow chart of cleaning a moving object and calculating a score in an embodiment of the present application;
FIG. 11 is a schematic flow diagram illustrating the loading of a leaderboard in an embodiment of the present application;
FIGS. 12 and 13 are display interfaces of the present application, with FIG. 12 displaying a leaderboard and FIG. 13 showing a first image indicating an initial display position of a moving object;
FIG. 14 is a schematic block diagram of an audio playing apparatus according to the present application;
FIG. 15 is a schematic structural diagram of an electronic device used in the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any combination of one or more of the associated listed items.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As will be appreciated by those skilled in the art, "client," "terminal," and "terminal device" as used herein include both devices having only a wireless signal receiver without transmit capability and devices having receive-and-transmit hardware capable of two-way communication over a two-way communication link. Such devices may include: cellular or other communication devices, such as personal computers and tablets, with or without a single-line or multi-line display; a PCS (Personal Communications Service) device, which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, Internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; and a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "client" or "terminal device" can be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "client" or "terminal device" used herein may also be a communication terminal, an Internet access terminal, or an audio/video playing terminal, for example a PDA, an MID (Mobile Internet Device), and/or a mobile phone with an audio/video playing function, or a smart television, a set-top box, and the like.
The hardware referred to by the names "server", "client", "service node", etc. in the present application is essentially an electronic device with the capabilities of a personal computer: a hardware device having the necessary components disclosed by the von Neumann principle, such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, and an output device. A computer program is stored in the memory; the central processing unit loads the program from external storage into internal memory, runs it, executes its instructions, and interacts with the input and output devices to accomplish specific functions.
It should be noted that the concept of "server" in the present application can be extended to the case of a server cluster. According to network deployment principles understood by those skilled in the art, the servers should be logically divided; in physical space they may be independent of each other yet callable through interfaces, or integrated into one physical computer or a set of computer clusters. Those skilled in the art will appreciate such variations, which should not restrict the implementation of the network deployment of the present application.
Referring to fig. 1, in an embodiment, a hardware basis required for implementing the related art embodiment of the present application may be deployed according to the architecture shown in the figure, but is not limited thereto. The server 70 is deployed at the cloud end, and serves as a business server, and is responsible for further connecting to a related data server and other servers providing related support, so as to form a logically associated server cluster to provide services for related terminal devices, such as a smart phone 71 and a personal computer 72 shown in the figure, or a third-party server (not shown). Both the smart phone and the personal computer can access the internet through a known network access method, and establish a data communication link with the cloud server 70 so as to run terminal applications related to the services provided by the server.
For the server, the application program is usually constructed as a service process, and a corresponding program interface is opened for the remote call of the application program running on various terminal devices.
The application program here refers to an application program running on a server or a terminal device that implements the relevant technical solution of the application programmatically. Its program code can be saved, in the form of computer-executable instructions, in a nonvolatile storage medium recognizable by a computer, and is called into memory by the central processing unit to run; the relevant apparatus of the application is constructed by running this application program on the computer.
Various data related to the present application can be stored in a server remotely or in a local terminal device, as long as the data is suitable for being called by the technical scheme of the present application.
Those skilled in the art will appreciate that, although the various methods of the present application are described based on the same concept and thus share common features, they may be performed independently unless otherwise specified. Likewise, the embodiments disclosed in the present application are proposed under the same inventive concept; therefore, concepts expressed in the same way are to be understood identically, and concepts expressed differently are merely adaptations made for convenience.
Unless a mutually exclusive relationship between related technical features is explicitly stated, the embodiments disclosed herein can be flexibly constructed by combining related technical features across embodiments, as long as the combination does not depart from the inventive spirit of the present application and meets the needs of the prior art or remedies its deficiencies. Those skilled in the art will appreciate such variations.
The audio playing method of the present application can be programmed into a computer program product that is mainly deployed to run in a terminal device, so that the method can be executed by accessing the interface opened after the computer program product runs and by human-computer interaction with it through a graphical user interface.
Referring to fig. 2, in an embodiment, the audio playing method of the present application includes the following steps:
Step S1100, determining a display interface during audio playing:
When a user needs to play audio, an APP or a webpage implementing the technical solution of the application can be opened in the terminal device to run the corresponding application program, thereby carrying out the method of the application.
It is understood that the display interface during audio playing may be an audio playing page, i.e. an interface dedicated to playing audio. It may also be a non-audio-playing page, such as a mobile phone home screen, a browser interface, a shopping website interface, and so forth, although it is not limited thereto.
For example, the user may select a piece of audio to play from the audio list provided by a song list. Generally, audio is composed of an accompaniment and a main melody, and the main melody may be played by instruments alone or sung by a human voice. Accordingly, the audio in the present application may be a song whose melody is sung, a song whose melody has no vocals, or simple accompaniment audio. Generally, the accompaniment carries the rhythm information of the whole piece, and when this rhythm information and the main melody are presented in the same audio, the audio exhibits a corresponding sense of rhythm when played.
In terms of carrier form, the playing file, in any coding format suitable for being invoked and played by the audio player of the terminal device, may be packaged as a pure audio file, such as MP3 (MPEG Audio Layer III), WAV (Waveform Audio File Format), or any other known or unknown pure audio format, or as a video format file, known or unknown, such as MPEG (Moving Picture Experts Group), 3GP (3rd Generation Partnership Project), WMV (Windows Media Video), and the like. In terms of transmission, the audio may be delivered as streaming media or downloaded as a complete file. Such carrier forms do not affect the embodiment of the inventive spirit of the present application, as long as the file is suitable for being played in a terminal device.
In one embodiment, after the audio starts playing, the application program may automatically switch to the display interface on which the audio is being played. As shown in fig. 3, song information corresponding to the audio is displayed in the display interface, including the lyrics of the audio, the lyricist, the singer, the song duration, the current playing time, and the like. In addition, a playback control area can be provided in the display interface, so that the user can control playback of the audio through the function controls it provides, such as a play/pause control, a playing progress bar, a previous-track control, and a next-track control.
In another embodiment, slightly different from the previous one, the display interface is not switched to automatically after the audio starts playing; instead, the audio simply plays in the background, and the display interface is entered and displayed only when the user triggers a preset operation event for the audio, for example clicking the title of the audio in the song list.
Step S1200, responding to an arrival event of a beat node of the audio, and displaying a moving object corresponding to the beat node on a display interface:
in the display interface, the audio is normally played, and correspondingly, the playing progress bar provided by the display interface can correspondingly display the current playing progress of the audio, so that the playing process is more visual.
As mentioned above, the audio contains rhythm information, which is mainly embodied by its beats. Audio beats usually include strong beats and weak beats, and sometimes secondary strong beats; when a beat arrives, what essentially arrives is a beat node at the corresponding playing time of the audio. In this embodiment, the beat nodes may be only the strong beats, only the weak beats, or only the secondary strong beats, or may be both the strong beats and the weak beats, both the strong beats and the secondary strong beats, and so on, which those skilled in the art can determine flexibly. Generally, the strong beat is also the accented beat, and when the beat node corresponding to a strong beat is reached during audio playing, a drum point is said to occur; for ease of understanding, the present application can be simplified to consider only the beat nodes corresponding to drum points.
The essence of identifying a beat node is to identify the timestamp corresponding to that beat node within the audio, so that the arrival event of the beat node can be triggered when that timestamp is reached during playback.
During the playing of the audio, the beat nodes are identified, and each beat node that arrives triggers a corresponding arrival event. In response to these events, a number of moving objects 8 are displayed on the display interface, as shown in fig. 4 in contrast to fig. 3. A moving object 8 is visually recognizable and is endowed with a human-computer interaction function: it can respond to a user's operation by executing corresponding pre-programmed background instructions to realize a visual display effect.
A moving object can carry image content through a loaded picture, and the image content of the moving objects corresponding to different arrival events may be the same or different. For example, the image content of one moving object may be displayed as a balloon, while that of another may be displayed as a chocolate candy.
Arrival events and moving objects may correspond one to one: when one beat node arrives, one arrival event is triggered and one corresponding moving object is displayed. In other embodiments, one arrival event is allowed to pop up two or more moving objects. Likewise, some arrival events may each trigger only a single moving object while others synchronously trigger two or more, and so on; these can be implemented in flexible combinations.
Step S1300, adjusting the display state of the moving object based on an interactive instruction issued by the user for the moving object:
The moving object is carried by a control capable of implementing motion, and its motion path may be random or preset, or even adaptively adjusted in real time to avoid the moving objects corresponding to other beat nodes. The motion path can be confined within the display interface, or allowed to escape beyond it. In this regard, those skilled in the art can program an implementation according to the principles disclosed herein. In summary, when a beat node arrives, the display of a corresponding moving object is triggered, and that moving object starts moving on the display interface.
The motion of a moving object may be a bouncing motion following the audio rhythm, or a drifting motion, but is not limited thereto.
In an alternative embodiment, a moving object may be configured to automatically transform its size while it moves, for example continuously zooming in or continuously zooming out, so as to enhance the visual effect. Thus, after a moving object is created, its display state can be adjusted appropriately, in particular by adjusting its display size and/or display position and display shape, thereby presenting a dynamic effect.
In one embodiment, a moving object allows its display state to be adjusted in response to interactive instructions issued by the user, where the display state comprises any one or more of display size, display position, display shape, retention state, and motion state. The retention state is one of two mutually exclusive states: the display of the moving object is either eliminated or maintained. The motion state is likewise one of two states: the moving object is either kept still or set in motion. An interactive instruction may be generated by the user's touch, including any of a drag-and-drop instruction, a zoom instruction, and the like, for which corresponding response functions are predefined, or it may be generated through an acceleration sensor, and so on.
It is therefore easy to understand that, as the audio plays, the display interface continuously emits moving objects as the beat nodes of the audio arrive. The beat nodes are expressed by the moving objects, converting the auditory effect of the audio into a visual effect on the display interface; the rhythm information of the audio is shown more vividly, the user captures the rhythm more effectively, and the immersion of listening to the song is enhanced. Thus, during audio playing, a moving object with interaction capability corresponding to each reached beat node can be displayed on the display interface, so that the rhythm information of the audio is effectively expressed in a dynamic form. Moreover, the user can issue interactive instructions for the moving objects and thereby interact with the song in real time. The user is deeply involved in the display of the audio rhythm information, and the interaction between the user and the song is enhanced.
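For orientation only, a minimal sketch of the three steps above might look as follows; the names BeatNode, MovingObject, AudioPlayInterface and all their members are hypothetical illustrations, not terms of the patent:

```kotlin
// Hypothetical sketch of steps S1100-S1300; all identifiers are illustrative.
data class BeatNode(val timestampMs: Long)

class MovingObject(val node: BeatNode) {
    var visible = true
    fun startMoving() { /* begin trajectory animation in the display interface */ }
    fun playDestroyEffect() { visible = false /* e.g. a "popped bubble" animation */ }
}

class AudioPlayInterface {
    private val movingObjects = mutableListOf<MovingObject>()

    // Step S1200: in response to a beat-node arrival event, display a moving object.
    fun onBeatNodeArrival(node: BeatNode) {
        val obj = MovingObject(node)
        movingObjects += obj
        obj.startMoving()
    }

    // Step S1300: adjust the display state based on the user's interactive instruction.
    fun onTouch(target: MovingObject) {
        target.playDestroyEffect()
        movingObjects -= target
    }
}
```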
Referring to fig. 5, in an embodiment, the step S1100 of determining the display interface during audio playing includes the following steps:
Step S1111, loading a playing file and a rhythm file of the audio, where the rhythm file contains a distribution sequence of the beat nodes of the audio and the timestamps of the beat nodes, the timestamps being used to locate the corresponding beat nodes during audio playing so as to trigger the corresponding arrival events:
When any user of the terminal device taps the audio, a corresponding audio playing instruction is generated, and the playing file and the rhythm file of the audio are then loaded.
The playing file of the audio corresponds to a rhythm file prepared in advance, which contains a distribution sequence of the beat nodes of the audio, that is, the timestamps of those beat nodes within the playback of the audio. According to each timestamp in the distribution sequence, the arrival event corresponding to a beat node can be identified when playback reaches the progress corresponding to that timestamp. The distribution sequence can be prepared in advance by an online audio platform and stored in association with the playing file of the audio, so that when the playing file is downloaded, the corresponding rhythm file is synchronously downloaded to the terminal device for local storage according to the technical solution of the application, and can later be called directly to identify beat-node arrival events during playback.
Step S1112, displaying the display interface and playing the audio:
After the playing file and the rhythm file of the audio are loaded, the application program can pop up the display interface and start playing the audio; correspondingly, the beat-node distribution sequence in the rhythm file is parsed to determine the arrival events of the beat nodes of the audio.
In this embodiment, the rhythm file is provided in association with the playing file of the audio to represent each beat node of the audio. This avoids the computational overhead of having the terminal device identify beat nodes automatically, can improve the accuracy of beat-node identification, and makes the use and expression of the audio's rhythm information more timely and accurate.
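To make this concrete, a hedged sketch: the rhythm file can be represented as a sorted list of beat-node timestamps that a scheduler compares against the current playback position; RhythmFile, BeatScheduler, and the callback shape are assumptions, not the patent's actual data format:

```kotlin
// Assumed layout of a pre-prepared rhythm file: a distribution sequence of
// beat-node timestamps, in milliseconds, sorted in ascending order.
data class RhythmFile(val beatTimestampsMs: List<Long>)

class BeatScheduler(
    private val rhythm: RhythmFile,
    private val onArrival: (timestampMs: Long) -> Unit  // arrival-event handler
) {
    private var nextIndex = 0

    // Call periodically (e.g. once per frame) with the player's current position;
    // every timestamp the playback has passed triggers its arrival event once.
    fun update(playbackPositionMs: Long) {
        while (nextIndex < rhythm.beatTimestampsMs.size &&
            rhythm.beatTimestampsMs[nextIndex] <= playbackPositionMs
        ) {
            onArrival(rhythm.beatTimestampsMs[nextIndex])
            nextIndex++
        }
    }
}
```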
Referring to fig. 6, in another embodiment, the step S1100 of determining the display interface during audio playing includes the following steps:
Step S1121, loading the playing file of the audio, displaying the display interface, and playing the audio:
In this embodiment, similarly, when any user of the terminal device taps an audio item, a corresponding audio playing instruction is generated, so that the playing file of the audio is loaded directly and the display interface pops up. The playing file may be downloaded from a remote server or called from local storage.
Step S1122, detecting the beat nodes reached by the audio during playback:
This embodiment differs from the previous one in that beat-node detection is performed on the audio itself during playback. In one approach, a preset acoustic model suitable for determining beat nodes can directly evaluate the playing file; in another, the audio data corresponding to the audio can be collected in real time from the sound card of the terminal device and detected there. Whichever approach is adopted, since audio beats generally recur cyclically, both can determine the beat nodes of the audio with errors too small to notice.
As for the algorithm for extracting beat nodes from the audio data of the playing file or the sound card, it can be implemented using an interface provided by existing professional software, by various known algorithms, or by a pre-trained neural network model based on deep learning. The implementations are known to those skilled in the art and are not repeated here.
Step S1123, determining the timestamps of the beat nodes within the playback of the audio, and triggering the corresponding arrival events according to the timestamps:
After the beat nodes of the audio are determined by detection, the timestamp of each beat node within the audio can be determined from the information output by the detection, and the corresponding arrival event is automatically triggered when each timestamp is reached.
In this embodiment, the terminal device is allowed to detect the beat nodes of the audio from the playing file itself, or from the audio data formed in the sound card while the playing file is played, without depending on predetermined beat nodes. This provides compatibility with all kinds of unpreprocessed audio: even audio appearing for the first time can have its beat nodes determined, ensuring that the technical solution of the present application can be applied to every audio.
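The patent leaves the detection algorithm open (an interface of professional software, known algorithms, or a trained neural network model). Purely as one illustrative known approach, a crude short-time-energy onset detector is sketched below; the frame size and threshold are arbitrary assumptions and not values from the patent:

```kotlin
// Toy onset detector: a frame whose energy jumps well above its predecessor's
// is taken as a drum point. Real systems use far more robust methods.
fun detectBeatTimestampsMs(
    samples: FloatArray, sampleRate: Int,
    frameSize: Int = 1024, threshold: Float = 1.5f
): List<Long> {
    // Short-time energy of each non-overlapping frame.
    val energies = (0 until samples.size - frameSize step frameSize).map { start ->
        var e = 0f
        for (i in start until start + frameSize) e += samples[i] * samples[i]
        e
    }
    val beats = mutableListOf<Long>()
    for (i in 1 until energies.size) {
        if (energies[i] > threshold * energies[i - 1] + 1e-6f) {
            beats += i.toLong() * frameSize * 1000L / sampleRate  // frame start in ms
        }
    }
    return beats
}
```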
Referring to fig. 7, in an embodiment deepened on the basis of any embodiment of the present application, the step S1200 of displaying, on the display interface, the moving object corresponding to a beat node in response to the arrival event of the beat node of the audio includes the following steps:
Step S1210, in response to an arrival event of a beat node of the audio, invoking the moving object corresponding to the beat node so as to display it at a first preset position of the display interface:
When a beat node arrives during the playing of the audio, for example, according to the foregoing embodiments, when the timestamp of the beat node is reached during playback, a corresponding arrival event is triggered. In response to each arrival event, a corresponding control, which may be an image control, is created in memory; it is responsible for invoking and loading a preset moving object, which is then displayed in the display interface.
In one embodiment, a plurality of moving objects of different styles may be prepared. In another embodiment, the moving objects in different controls are colored differently on the basis of the same moving object, so that the moving objects appear in different styles when displayed by different controls. As an intuitive example, the image content of the moving object loaded by a control is a semitransparent three-dimensional figure, which may be a bubble or a balloon; following the principles herein, at least some moving objects in the display interface are then configured to display colors different from one another, expressing different styles. By enriching the styles of the moving objects, the display interface becomes more colorful as more and more moving objects appear, creating a more immersive atmosphere.
The position where each moving object first appears in the display interface may be a first preset position, such as the center, the upper left corner, or the lower right corner of the display interface. It is understood that the first preset position may also be a random position; those skilled in the art can implement this flexibly, but it is not limited thereto.
Step S1220, controlling the moving object to move in the display interface according to the motion control data corresponding to the moving object, where the movable range of the moving object covers any point in the display interface:
The control used for carrying the moving object, as a computer memory object, comprises various attribute data, among which is a class of motion control data. The motion control data comprises any one or more items of motion trajectory data, which describe the motion path and display-position changes of the moving object in the display interface, and any one or more items of deformation data, which describe changes in the moving object's display size, display shape, display style, and the like.
For example, through the constraints of its motion control data, a moving object may be constrained to any one or more of the following behaviors: appearing at a random position of the display interface; moving along a randomly determined path within the display interface; triggering an event that automatically cleans up the moving object, thereby changing its retention state, when its time in the display interface reaches a preset maximum duration; continuously increasing its display size during motion until a preset condition is reached that triggers the automatic cleanup event; and so on.
In one embodiment, when the time of the moving object in the display interface reaches the preset maximum duration, the event that automatically cleans up the moving object is triggered: the moving object disappears from the display interface, and its retention state switches from visible to invisible, thereby realizing switching control of the moving object's retention state.
Similarly, the control carrying the moving object is configured with various attribute data and also encapsulates various method interfaces: for example, a method interface for controlling the carried moving object to make a bouncing or drifting motion along the path defined by the motion attribute data; a method interface enabling the carried moving object to clean itself up; a method interface for adjusting the display size of the carried moving object; a method interface invoked to play other animation special effects; and so on. The motion control data of the control can thus be parsed and executed, so that the corresponding moving object performs its bouncing or drifting motion in the display interface under the constraints of its motion attribute data.
The motion control data configured for the successively appearing moving objects may differ, so that different moving objects produce different interface effects, further enriching their styles.
In this way, during audio playing, a moving object with interaction capability corresponding to each reached beat node can be displayed on the display interface, so that the rhythm information of the audio is effectively expressed in a dynamic form. The user can issue interactive instructions for the moving objects and thereby interact with the audio in real time, becoming deeply involved in the display of the audio rhythm information; the interaction between the user and the song is enhanced.
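The sketch below shows one hypothetical shape such motion control data could take, with trajectory data driving the display position and deformation data driving the display size; all field and class names are illustrative assumptions:

```kotlin
// Assumed layout of the motion control data attached to a moving object's control.
data class MotionControlData(
    var velocityX: Float, var velocityY: Float,  // motion trajectory data
    val growthPerSecond: Float                   // deformation data, e.g. keep zooming in
)

class MovingObjectView(
    var x: Float, var y: Float, var size: Float,
    val control: MotionControlData
) {
    var ageMs = 0L  // time this object has persisted in the display interface

    // Advance one frame under the constraints of the motion control data.
    fun update(dtMs: Long) {
        val dt = dtMs / 1000f
        x += control.velocityX * dt
        y += control.velocityY * dt
        size += control.growthPerSecond * dt
        ageMs += dtMs
    }
}
```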
On the basis of any of the above embodiments, the step S1220 of controlling the moving object to move in the display interface according to the motion control data corresponding to the moving object includes the following step: controlling the moving object to move in the display interface according to the motion trajectory data and/or deformation data corresponding to the moving object.
In this embodiment, the motion control data includes motion trajectory data and/or deformation data, which may be preset when the control of the moving object is created, or generated when the user triggers a corresponding interactive instruction. For example, the user may generate motion trajectory data by triggering a drag-and-drop instruction acting on the moving object, whereupon the background process controls the moving object to move along the actual trajectory dragged by the user. The user may generate deformation data by triggering a zoom instruction acting on the moving object, whereupon the background process controls the moving object to produce a corresponding zoom effect, displaying the deformation process through that effect. Other deformations may apply a mirror transform to the moving object to produce a mirrored effect, or a compression transform to produce a locally compressed image effect.
This embodiment allows finer control over the motion process of the moving object and enriches its forms of interaction.
On the basis of the above embodiment, the step of controlling the moving object to move in the display interface according to the motion trajectory data and/or deformation data corresponding to the moving object includes the following steps:
Step S1221, when the distance between the boundary of the moving object and a boundary of the display interface is smaller than a preset distance, controlling the moving object to rebound, or to re-enter and continue moving from any other boundary of the display interface;
In this embodiment, if the moving object would cross any boundary of the display interface during its motion, specifically, when the distance between its boundary and that of the display interface becomes smaller than a preset distance, the moving object is controlled to rebound from that boundary and start a rebound movement, or to move directly in from the opposite boundary and continue, or to continue from any other boundary. This expands the motion space corresponding to the moving object's path, and it ensures that the user can follow the complete motion path of the same moving object, so that the user can quickly and accurately touch the intended object. As an intuitive example, suppose the user wants to puncture a moving object displayed as a bubble, and the bubble is moving: if at one moment the bubble is at the left boundary of the screen, at the next moment it can be expected to appear at the right side of the display interface, so the user's attention shifts to the right boundary and the bubble can be quickly located and punctured in time.
This not only extends the length of the moving object's motion path, but also helps the user effectively follow the motion path of the moving object corresponding to a beat node, facilitating timely touches and improving human-computer interaction efficiency.
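A minimal sketch of this boundary rule, reusing the MovingObjectView type from the earlier sketch; whether to rebound or to re-enter from the opposite boundary is passed in as a flag, and the preset distance is an assumed parameter:

```kotlin
// Rebound from, or wrap across, a boundary once the object's edge comes within
// presetDistance of the display interface's edge. Treats the object as a circle.
fun handleBoundary(
    obj: MovingObjectView, interfaceWidth: Float, interfaceHeight: Float,
    presetDistance: Float, wrapToOtherBoundary: Boolean
) {
    val half = obj.size / 2
    val nearLeft = obj.x - half < presetDistance
    val nearRight = obj.x + half > interfaceWidth - presetDistance
    if (nearLeft || nearRight) {
        if (wrapToOtherBoundary) {
            obj.x = if (nearLeft) interfaceWidth - half else half  // re-enter opposite side
        } else {
            obj.control.velocityX = -obj.control.velocityX          // rebound
        }
    }
    val nearTop = obj.y - half < presetDistance
    val nearBottom = obj.y + half > interfaceHeight - presetDistance
    if (nearTop || nearBottom) {
        if (wrapToOtherBoundary) {
            obj.y = if (nearTop) interfaceHeight - half else half
        } else {
            obj.control.velocityY = -obj.control.velocityY
        }
    }
}
```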
Step S1222, when the movement duration of the moving object reaches a preset duration, or the moving object is located at a second preset position, clearing the moving object.
The cleanup of a moving object can be implemented in any one or more of the following ways:
In one way, the cleanup is triggered when the movement duration of a moving object in the display interface, that is, the time it has persisted there, reaches its corresponding preset condition. The preset condition may be a uniformly set maximum duration, or a maximum duration determined individually for each moving object; either way, the maximum duration can be encapsulated in the motion attribute data of the corresponding moving object. When a moving object's duration in the display interface reaches the maximum duration, the event that automatically cleans up the moving object is triggered, and this event clears the instance of the control carrying the moving object from memory, so that the moving object disappears from the display interface. As an intuitive example, when the moving object loaded by the control is a balloon or a bubble and its floating time in the display interface reaches the maximum duration, it may simply disappear, or disappear after bursting.
In another way, when a moving object reaches a second preset position during its motion (this preset position can likewise be encapsulated in its motion attribute data), the position is regarded as the corresponding preset condition; for example, upon reaching the position of a preset trash-bin pattern, or moving beyond the display interface, the event that automatically cleans up the moving object is triggered, and this event clears the instance of the carrying control from memory so that the moving object disappears from the display interface. As an intuitive example, when the moving object is a balloon or a bubble, it may disappear, or disappear after bursting, upon entering the trash bin or escaping beyond the display interface.
In yet another way, when a moving object whose display size is constrained by its motion attribute data to increase continuously is identified as having reached the maximum display size constrained therein, that maximum size is regarded as the corresponding preset condition; the automatic cleanup event is likewise triggered, clearing the instance of the carrying control from memory so that the moving object disappears from the display interface. As an intuitive example, when the moving object is a balloon or a bubble that keeps expanding in the display interface, it may disappear, or disappear after bursting, upon reaching a preset size.
The above discloses the motion process and the automatic cleanup mechanism of the moving objects. These make the moving objects more expressive of rhythm information and improve their ability to create audio immersion; at the same time, the cleanup mechanism effectively controls memory overhead and keeps the display interface clean, avoiding the accumulation of memory and interface-information garbage.
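The three cleanup triggers just described can be illustrated in a single predicate; every preset value here (maximum duration, maximum size, trash-bin position and radius) is an assumption of the sketch, again reusing MovingObjectView:

```kotlin
// True when any of the three cleanup conditions is met; the caller then clears
// the instance of the carrying control so the object leaves the display interface.
fun shouldClear(
    obj: MovingObjectView, maxLifetimeMs: Long, maxSize: Float,
    interfaceWidth: Float, interfaceHeight: Float,
    binX: Float, binY: Float, binRadius: Float
): Boolean {
    val overMaxDuration = obj.ageMs >= maxLifetimeMs                 // way 1
    val dx = obj.x - binX
    val dy = obj.y - binY
    val atBin = dx * dx + dy * dy <= binRadius * binRadius           // way 2: trash-bin pattern
    val escaped = obj.x < 0 || obj.x > interfaceWidth ||
            obj.y < 0 || obj.y > interfaceHeight                     // way 2: beyond the interface
    val overMaxSize = obj.size >= maxSize                            // way 3
    return overMaxDuration || atBin || escaped || overMaxSize
}
```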
Referring to fig. 9, in order to enrich the human-computer interaction, an embodiment of the present application includes the following steps after step S1300:
step S2100, monitoring and acquiring motion data generated by an acceleration sensor:
the user can simulate the touch of a moving object by using the acceleration sensor of the terminal device, so that an application program on the terminal device can determine whether to implement cleaning of the moving object by monitoring whether the motion data generated by the acceleration sensor can identify the touch behavior aiming at the moving object. For this purpose, the application program listens in the background to acquire the motion data generated by the acceleration sensor continuously.
Step S2200, judging whether the motion data is matched with a preset motion model, wherein the motion model is a mathematical model used for identifying whether the motion data corresponds to a condition meeting a preset numerical value, and if the motion data is matched with the preset numerical value, correspondingly removing the motion object with the longest duration.
In order to identify whether the motion data constitutes a touch control for the moving object, in this embodiment, a motion model is preset correspondingly, where the motion model is a mathematical model used for identifying whether the motion data corresponds to a data that satisfies a preset numerical condition, and if necessary, a parameter corresponding relationship of three-axis data of the acceleration sensor may be specified, and the parameter corresponding relationship may be implemented flexibly by a person skilled in the art.
In one example of a figure, a mathematical model may be modeled from acceleration sensor motion data generated by a user shaking his terminal device hard, such that the mathematical model is used to characterize what is considered to be a triggered touched event whenever the terminal device is shaken.
And matching and comparing the motion data with the motion model, and when the motion data is matched with the motion model, considering the motion data as a touched event aiming at the motion object with the longest duration in the current display interface.
Accordingly, the moving object with the longest duration can be correspondingly cleared in response to the matching event distinguished according to the motion data of the acceleration sensor. It is easy to understand that, after a user operates the terminal device to generate motion data through the acceleration sensor and is judged to trigger a matching event, the instance of the control which lasts the longest in the display interface is correspondingly cleaned. An example of an avatar is that the user may shake his terminal device with the rhythm of the audio, thereby corresponding to the elimination of a moving object that has just emerged, deeply immersing the user in the rhythm atmosphere of the audio.
The embodiment further develops the participation capability of the acceleration sensor of the terminal equipment, enriches the rhythm way of the audio participated by the user, and is not limited to pointing touch operation, thereby further deepening the immersion effect in the audio playing process.
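As a hedged sketch of such a motion model: a shake can be recognized when the acceleration magnitude, net of gravity, exceeds a preset threshold. The threshold, the gravity constant, and the function names are assumptions, and on a match the longest-lived moving object is cleared:

```kotlin
import kotlin.math.sqrt

const val GRAVITY = 9.81f
const val SHAKE_THRESHOLD = 12f  // assumed preset numerical condition

// The motion model: does this three-axis sample satisfy the preset condition?
fun matchesShakeModel(ax: Float, ay: Float, az: Float): Boolean =
    sqrt(ax * ax + ay * ay + az * az) - GRAVITY > SHAKE_THRESHOLD

// On a match, clear the moving object with the longest duration on screen.
fun onSensorData(objects: MutableList<MovingObjectView>, ax: Float, ay: Float, az: Float) {
    if (matchesShakeModel(ax, ay, az)) {
        objects.maxByOrNull { it.ageMs }?.let { objects.remove(it) }
    }
}
```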
Referring to fig. 10, in another embodiment that may be deepened by combining with any embodiment of the present application, the step S1300 of adjusting the display state of the moving object based on the interactive instruction for the moving object issued by the user includes the following steps:
step S1310, in response to a touch event for the moving object sent by the user, playing an animation special effect representing that the touched moving object is damaged:
the moving object moving in the display interface can respond to the touch operation of the user because of being endowed with the human-computer interaction function. For example, a user may touch any moving object in the display interface at any time, so as to trigger a touch event corresponding to the moving object, and when any moving object triggers its corresponding touch event, execute a background business logic preset by the touch event, and correspondingly clear the touched moving object, so that an instance of its background control is cleared from the memory, and thus the instance disappears in the display interface. Therefore, the moving object can disappear in response to touch operation, so that a user can clean the moving object in the display interface in various pointing control modes such as mouse clicking, finger touch and the like, on one hand, the situation that the moving object appearing successively blocks the display interface to keep the interface clean and legibility of information display in the interface can be avoided, on the other hand, the user can be guided to click the moving object appearing successively along with the audio rhythm to realize the audio immersion effect, and the user experience is upgraded.
In this embodiment, when any moving object triggers its touch event, the touched moving object triggers and executes a pre-configured event for automatically cleaning up the moving object, and within this event, code is implemented for calling and playing the animation special effect corresponding to the touched moving object. Referring to the foregoing example in which bubbles serve as the moving objects, the corresponding animation special effect can visually display the effect of a bubble being punctured. Accordingly, when a moving object is punctured by the user, the corresponding touch event is triggered, and the corresponding background instruction calls the animation special effect to play the animation of the bubble being punctured, completing the human-computer interaction display.
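As a hedged sketch only — the class names, the animation call, and the cleanup flow below are illustrative assumptions, not the application's prescribed API — the pre-configured cleanup event of this step might be organized as follows:

```python
from dataclasses import dataclass

@dataclass
class Bubble:
    """Hypothetical stand-in for the control instance bearing a moving object."""
    spawn_time: float
    popped: bool = False

    def play_pop_animation(self) -> None:
        # In a real UI framework this call would start the "punctured"
        # special effect; here it only records that the effect was triggered.
        self.popped = True
        print("playing puncture animation")

class Scene:
    """Hypothetical display interface holding the live moving objects."""
    def __init__(self) -> None:
        self.bubbles: list[Bubble] = []

    def on_touch(self, bubble: Bubble) -> None:
        """Pre-configured cleanup event bound to each moving object."""
        bubble.play_pop_animation()  # play the animation special effect first,
        self.bubbles.remove(bubble)  # then release the instance from memory
```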
Step S1320, playing the single score corresponding to the damaged moving object:
the code corresponding to the event of automatically cleaning the moving object of the moving object may further include a process of determining a single score corresponding to the touched moving object, where the single score may be pre-packaged in attribute data of the moving object, and different moving objects may randomly determine different single scores, or the single score may decrease with an increase in the duration of the moving object in the display interface, and may be flexibly set.
When the single score corresponding to a moving object is determined during its cleanup, the score can be played and displayed in the display interface.
After the animation special effect finishes playing, a final cleanup procedure for the touched moving object may be entered. Specifically, the instance of the control bearing the moving object is removed from the computer memory, and the single score is removed from the display interface, indicating that the touched moving object has been cleared completely; visually, for example, the bubble has been fully punctured and the current user has successfully obtained the single score corresponding to the punctured bubble.
Step S1330, accumulating the single score into the total point value of the user:
In the background, the single score corresponding to the touched moving object can be accumulated into the total point value of the current user, so that the total score the user earns by interactively touching each moving object during audio playback is accumulated on the user's behalf.
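For illustration only — the decay policy is just one of the options the text names, and the constants and names here are assumptions of the sketch — steps S1320 and S1330 might be drafted together as:

```python
class ScoreKeeper:
    """Accumulates each single score into the user's running total."""

    def __init__(self) -> None:
        self.total: int = 0

    @staticmethod
    def single_score(display_duration: float) -> int:
        # One possible policy: the single score shrinks the longer the
        # moving object survived in the display interface.
        return max(1, 100 - int(display_duration) * 10)

    def on_object_cleared(self, display_duration: float) -> int:
        score = self.single_score(display_duration)
        self.total += score  # step S1330: accumulate into the total point value
        return score         # returned so the interface can display it (S1320)
```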
This embodiment further converts the human-computer interaction events on the moving objects into user scores, effectively quantifying the user's participation in the rhythm of the audio. The quantified total point value can measure how accurately and effectively the user grasps and responds to the rhythm information of the audio, serving to evaluate rhythm-sense training, and it expands the application boundary of the technical solution into areas such as audio training and audio games.
Referring to fig. 11, in an embodiment expanded in combination with the previous embodiment, after the step S1100 of determining the display interface during audio playing, the method further includes the following steps:
Step S1400, in response to the play-end event of the audio, displaying the total point value of the user;
In this embodiment, combined with the previous embodiment, the technical solution of the present application may be further applied to scenarios such as user score competitions in an online music hall or online music hall rankings.
In the online music hall ranking scenario, the online music hall is composed of a user group listening to the same audio, and each listener user in the group listens to the audio synchronously. The technical solution of the present application can therefore be implemented on the terminal device of each listener user, enabling the corresponding listener user to participate in the human-computer interaction activity based on the moving objects.
Step S1500, displaying a ranking list corresponding to the audio, wherein the ranking list includes: the user identifications ranked top N by total point value, from largest to smallest, among the users synchronously listening to the audio, together with their corresponding total point values, where N is a positive integer greater than or equal to 1:
after the audio playing is finished, the server obtains the total score value of each listener user, the user group can be subjected to reverse ranking according to the total score value, and finally the server selects the first N (N > = 1) listeners, such as the first ten listener users, to construct a ranking list formed by the current playing of the first audio. Each listener user may obtain a ranking list after the play end event, where the obtained ranking list may include the integral value of the listener user, the user names of the previous listener users, and the total integral values corresponding to the user names.
In the scenario of user score competitions in the online music hall, the current audience user can designate one or more other audience users for a music-sense competition based on the same audio. During audio playback each party generates its own total point value and submits it to the server. When audio playback ends and the corresponding play-end event is triggered, each audience user, acting as the current audience user, obtains the corresponding ranking list from the server; this ranking list includes only the user name of each user participating in the music-sense competition, the corresponding total point value of that user, and the total point value of the current audience user. Similarly, if more users participate, only the top-ranked audience users and their corresponding total point values are included.
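As a server-side sketch under obvious assumptions — the data shapes and names below are introduced only for this illustration — the reverse ranking and top-N selection could be as simple as:

```python
def build_ranking_list(totals: dict[str, int], n: int = 10) -> list[tuple[str, int]]:
    """Reverse-rank the user group by total point value and keep the
    top N (N >= 1) entries for the ranking list."""
    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:n]

# Example: build_ranking_list({"alice": 420, "bob": 660, "carol": 515}, n=2)
# returns [("bob", 660), ("carol", 515)]
```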
In either scenario, after receiving the ranking list, the application program on the terminal device parses it and displays a pop-up window constructed from the parsed data above the display interface. As shown in fig. 12, the application program displays the pop-up window over the display interface of the online music hall, shows the total point value of the current user in the pop-up window, and at the same time provides a list displaying each top-ranked user in the ranking list together with the corresponding total point value.
This embodiment further applies the technical solution of the present application to supporting collective activities held in the online music hall, providing a basic technical framework for the online music hall to hold music-sense competitions based on the associated audio. It enriches the social atmosphere around the audio, helps activate the user traffic of an online audio platform, and further improves the economic value of the online audio platform.
In an improved preferred embodiment, the display interface includes a first image for indicating the initial display position of the moving objects throughout the playing of the audio. As shown in fig. 13, the first image is visually presented as a horn: the center position of the horn image 9 serves as the preset position defining the initial display position of the moving objects, and the first image may be an animated image so that it changes regularly with the beat nodes of the audio. Accordingly, when a beat node arrives, a moving object is displayed at the center position of the horn image as an emitted bubble, and the bubble then begins to move in the display interface; when the next beat node arrives, another bubble corresponding to another moving object is emitted from the center position, and so on, cycling until the audio ends.
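Illustratively — the coordinates, the record layout, and the callback name are all assumptions introduced for this sketch — the per-beat emission from the fixed horn position might be expressed as:

```python
import time

HORN_CENTER = (0.5, 0.8)  # assumed normalized coordinates of the horn image

def on_beat_node(active_bubbles: list[dict]) -> None:
    """On each beat-node arrival, emit one moving object (a bubble) from
    the preset position at the center of the horn image."""
    active_bubbles.append({
        "position": HORN_CENTER,         # relatively fixed first appearance
        "velocity": (0.05, -0.10),       # illustrative initial motion
        "spawned_at": time.monotonic(),  # later usable for duration and scoring
    })
```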
According to the principles disclosed in the present application, a person skilled in the art can flexibly combine this embodiment with the other embodiments. Indicating the initial display position of the moving objects through the first image keeps their first appearance relatively fixed, allowing the user to capture a moving object quickly and interact with it, for example quickly identifying and puncturing a bubble. This avoids, as far as possible, unnecessary interaction delay caused by factors outside the user's control, and improves human-computer interaction efficiency.
Referring to fig. 14, together with the related GUIs cited above, an audio playing apparatus adapted to one of the purposes of the present application is provided. It is a functional deployment of the audio playing method and includes an interface determination module 1100, a motion execution module 1200, and a state adjustment module 1300, wherein: the interface determination module 1100 is configured to determine the display interface during audio playing; the motion execution module 1200 is configured to respond to an arrival event of a beat node of the audio and display a moving object corresponding to the beat node on the display interface; and the state adjustment module 1300 is configured to adjust the display state of the moving object based on an interaction instruction for the moving object issued by the user.
Optionally, the motion execution module comprises: the response calling sub-module is used for responding to an arrival event of a beat node of the audio, calling the moving object corresponding to the beat node and displaying the moving object at a first preset position of a display interface; and the motion sub-module is used for controlling the motion of the motion object in the display interface according to the motion control data corresponding to the motion object.
Optionally, the motion sub-module includes: a detail control unit, configured to control the motion of the moving object in the display interface according to the motion trajectory data and/or deformation data corresponding to the moving object, wherein the movable range of the moving object covers any point in the display interface.
Optionally, the detail control unit includes: a boundary control subunit, configured to control the moving object to rebound, or to start moving again from any other boundary of the display interface, when the distance between the moving object and a boundary of the display interface is smaller than a preset distance; and a movement cleaning subunit, configured to clean up the moving object when its movement duration reaches a preset duration or its position reaches a second preset position (a sketch of these subunits follows the module listing below).
Optionally, the display state includes at least one of a display size, a display position, a display shape, a persistence state, and a dynamic-static state.
Optionally, the state adjustment module includes: the special effect playing submodule is used for responding to a touch event aiming at the moving object sent by a user and playing an animation special effect representing that the touched moving object is damaged; the score playing sub-module is used for playing the single score corresponding to the damaged moving object; and the score accumulation sub-module is used for accumulating the single score to the total score value of the user.
Optionally, the state adjustment module further includes: a point display module, configured to display the total point value of the user in response to the play-end event of the audio; and a ranking display module, configured to display the ranking list corresponding to the audio, the ranking list including: the user identifications ranked top N by total point value, from largest to smallest, among the users synchronously listening to the audio, and the corresponding total point values, wherein N is a positive integer greater than or equal to 1.
Optionally, the moving object is a semitransparent three-dimensional figure, and at least some of the different moving objects are configured to be different colors from each other.
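Referring back to the boundary control subunit and the movement cleaning subunit described above, a hedged per-frame sketch — the margins, lifetime, and record layout are assumptions of this illustration, and only the rebound branch of the boundary behavior is shown — might read:

```python
def step_bubble(b: dict, dt: float,
                width: float = 1.0, height: float = 1.0,
                margin: float = 0.02, max_age: float = 8.0) -> bool:
    """Advance one moving object by dt seconds, rebounding near a boundary;
    returns False once the object's lifetime expires and it should be cleaned."""
    x, y = b["position"]
    vx, vy = b["velocity"]
    x, y = x + vx * dt, y + vy * dt
    if x < margin or x > width - margin:   # within the preset distance of a
        vx = -vx                           # vertical boundary: rebound
    if y < margin or y > height - margin:  # same for a horizontal boundary
        vy = -vy
    b["position"], b["velocity"] = (x, y), (vx, vy)
    b["age"] = b.get("age", 0.0) + dt
    return b["age"] < max_age              # False: the cleaning subunit fires
```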
In order to solve the above technical problem, an embodiment of the present application further provides an electronic device, which may be a computer device. Fig. 15 schematically illustrates the internal structure of the computer device. The computer device includes a processor, a computer-readable storage medium, a memory, and a network interface connected by a system bus. The computer-readable storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database can store control information sequences, and the computer-readable instructions, when executed by the processor, can cause the processor to implement an audio playing method. The processor of the computer device provides computation and control capability and supports the operation of the whole computer device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, cause the processor to perform the audio playing method of the present application. The network interface of the computer device is used for connecting and communicating with terminals. It will be appreciated by those skilled in the art that the structure shown in fig. 15 is a block diagram of only the portion of the structure related to the present application and does not limit the computer device to which the present application is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange the components differently.
In this embodiment, the processor is configured to execute the specific functions of each module and its sub-modules in fig. 14, and the memory stores the program codes and various data required for executing those modules or sub-modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in this embodiment stores the program codes and data required for executing all modules/sub-modules in the audio playing apparatus of the present application, and the processor can call these program codes and data to execute the functions of all the sub-modules.
The present application also provides a storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the audio playback method of any of the embodiments of the present application.
The present application also provides a computer program product comprising computer program/instructions which, when executed by one or more processors, implement the steps of the audio playback method of any of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments of the present application can be implemented by a computer program, which can be stored in a computer-readable storage medium; when the computer program is executed, the processes of the method embodiments described above are carried out. The storage medium may be a computer-readable storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
In conclusion, the present application enriches audio playing forms, effectively expresses the rhythm information of the played audio, uses that rhythm information to create an immersive human-computer interaction experience while listening, stimulates the user traffic of an online audio platform, and improves the economic value of the online audio platform.
Those of skill in the art will appreciate that the various operations, methods, and steps in the processes, actions, or solutions discussed in this application can be interchanged, modified, combined, or removed. Likewise, the steps, measures, and schemes in the prior-art operations, methods, and flows disclosed in the present application can also be interchanged, modified, rearranged, decomposed, combined, or removed.
The foregoing describes only some embodiments of the present application. It should be noted that a person skilled in the art can make various improvements and modifications without departing from the principles of the present application, and these improvements and modifications shall also fall within the protection scope of the present application.

Claims (11)

1. An audio playing method, comprising:
determining a display interface during audio playing;
responding to an arrival event of the beat node of the audio, and displaying a moving object corresponding to the beat node on the display interface;
and adjusting the display state of the moving object based on an interactive instruction aiming at the moving object issued by a user.
2. The method of claim 1, wherein in response to an arrival event of a beat node of the audio, displaying a moving object corresponding to the beat node on the display interface comprises:
responding to an arrival event of the beat node of the audio, and calling a moving object corresponding to the beat node to display the moving object at a first preset position of the display interface;
controlling the moving object to move in the display interface according to the corresponding motion control data of the moving object; the movable range of the moving object is any point in the display interface.
3. The method of claim 2, wherein controlling the motion of the moving object in the display interface according to the motion control data corresponding to the moving object comprises:
and controlling the moving object to move in the display interface according to the corresponding motion trail data and/or deformation data of the moving object.
4. The method according to claim 2 or 3, wherein the controlling the moving object to move in the display interface comprises:
when the boundary of the moving object and the boundary of the display interface are smaller than a preset distance, controlling the moving object to rebound or start moving from any other boundary of the display interface;
and when the movement time length of the moving object reaches a preset time length or the position of the moving object is located at a second preset position, clearing the moving object.
5. The method of any of claims 1-3, wherein the display state comprises at least one of a display size, a display position, a display shape, a persistence state, and a dynamic-static state.
6. The method according to any one of claims 1 to 3, wherein the adjusting the display state of the moving object based on the interactive instruction for the moving object issued by the user comprises:
responding to a touch event which is sent by a user and aims at the moving object, and playing an animation special effect which represents that the touched moving object is damaged;
playing the single score corresponding to the damaged moving object;
and accumulating the single score into the total point value of the user.
7. The method according to claim 6, further comprising, after the adjusting the display state of the moving object based on the interactive instruction issued by the user for the moving object, the step of:
responding to the playing end event of the audio, and displaying the total point value of the user;
displaying a ranking list corresponding to the audio, wherein the ranking list comprises: the user identifications ranked top N by total integral value, from large to small, among the users synchronously listening to the audio, and the corresponding total integral values, wherein N is a positive integer greater than or equal to 1.
8. The method according to claim 1, wherein the moving object is a translucent solid figure, and at least some of the different moving objects are configured to be different colors from each other.
9. An audio playback apparatus, comprising:
the interface determining module is used for determining a display interface during audio playing;
the motion execution module is used for responding to an arrival event of the beat node of the audio and displaying a motion object corresponding to the beat node on the display interface;
and the state adjusting module is used for adjusting the display state of the moving object based on an interactive instruction which is sent by a user and aims at the moving object.
10. An electronic device comprising a central processing unit and a memory, wherein the central processing unit is configured to invoke execution of a computer program stored in the memory to perform the steps of the method according to any one of claims 1 to 8.
11. A computer-readable storage medium, characterized in that it stores, in the form of computer-readable instructions, a computer program implemented according to the method of any one of claims 1 to 8, which, when invoked by a computer, performs the steps comprised by the corresponding method.
CN202210855118.7A 2022-07-19 2022-07-19 Audio playing method and device, electronic equipment and computer readable storage medium Pending CN115220625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210855118.7A CN115220625A (en) 2022-07-19 2022-07-19 Audio playing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN115220625A true CN115220625A (en) 2022-10-21

Family

ID=83612899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210855118.7A Pending CN115220625A (en) 2022-07-19 2022-07-19 Audio playing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115220625A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112883223A (en) * 2019-11-29 2021-06-01 阿里巴巴集团控股有限公司 Audio display method and device, electronic equipment and computer storage medium
CN113284523A (en) * 2020-02-20 2021-08-20 腾讯数码(天津)有限公司 Dynamic effect display method and device, computer equipment and storage medium
CN114073854A (en) * 2020-08-14 2022-02-22 上海哔哩哔哩科技有限公司 Game method and system based on multimedia file
CN112044053A (en) * 2020-09-03 2020-12-08 腾讯科技(深圳)有限公司 Information processing method, device, equipment and storage medium in virtual scene
CN112165628A (en) * 2020-09-29 2021-01-01 广州繁星互娱信息科技有限公司 Live broadcast interaction method, device, equipment and storage medium
CN114422814A (en) * 2022-01-14 2022-04-29 广州虎牙科技有限公司 Live audio and video processing method and device, server and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination