CN111753125A - Song audio frequency display method and device - Google Patents

Song audio frequency display method and device

Info

Publication number
CN111753125A
CN111753125A (application CN202010575301.2A)
Authority
CN
China
Prior art keywords
audio
song
playing
frequency spectrum
display
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010575301.2A
Other languages
Chinese (zh)
Inventor
刘培
曾义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Music Entertainment Technology Shenzhen Co Ltd filed Critical Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority to CN202010575301.2A priority Critical patent/CN111753125A/en
Publication of CN111753125A publication Critical patent/CN111753125A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 — Information retrieval of audio data
    • G06F16/64 — Browsing; Visualisation therefor
    • G06F16/68 — Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/683 — Retrieval using metadata automatically derived from the content
    • G06F16/685 — Retrieval using an automatically derived transcript of audio data, e.g. lyrics
    • G — PHYSICS
    • G11 — INFORMATION STORAGE
    • G11B — INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 — Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 — Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/34 — Indicating arrangements

Abstract

The application discloses a method and a device for displaying song audio, belonging to the technical field of information processing. The method comprises the following steps: when a display instruction for song audio is received, acquiring display information of the song audio, wherein the display information comprises audio-related information and a preview audio spectrum, the image attribute of the preview audio spectrum is a target image attribute matched with the genre type of the song audio, and the image attribute comprises one or both of color and style; and displaying the audio-related information and the preview audio spectrum. With the method and the device, before the song audio is played, the user can get a rough sense of the genre of the song audio from the displayed preview audio spectrum and its color and/or style, so the user can screen songs before playing them, which saves the user's time and improves the listening experience.

Description

Song audio frequency display method and device
Technical Field
The application relates to the technical field of information processing, in particular to a method and a device for displaying song audio.
Background
Nowadays, as people's demand for entertainment grows, more and more music-listening software is available. Such software usually provides a library containing a large number of songs, in which a user can look for songs they like.
When searching for songs they like, a user has to start the songs one by one and listen to each for a while before deciding whether they like it.
In the course of implementing the present application, the inventors found that the related art has at least the following problem:
a user cannot get even a rough sense of a song's genre before the song is played, and therefore cannot screen songs in advance; every song must be opened and auditioned one by one, which consumes a large amount of the user's time and makes for a poor listening experience.
Disclosure of Invention
The embodiments of the application provide a method and a device for displaying song audio, which can solve the technical problems in the related art. The technical solution is as follows:
In a first aspect, a method for displaying song audio is provided, the method comprising:
when a display instruction for song audio is received, acquiring display information of the song audio, wherein the display information comprises audio-related information and a preview audio spectrum, the image attribute of the preview audio spectrum is a target image attribute matched with the genre type of the song audio, and the image attribute comprises one or both of color and style;
and displaying the audio-related information and the preview audio spectrum.
In one possible implementation manner, after displaying the preview audio spectrum, the method further includes:
when a playing instruction of the song audio is received, playing the song audio;
according to a first playing time of the song audio, acquiring first lyrics corresponding to the first playing time and a first audio spectrum corresponding to the first playing time, wherein the image attribute of the first audio spectrum is the target image attribute;
and displaying the first lyrics and the first audio spectrum.
In a possible implementation manner, after displaying the first lyrics and the first audio spectrum, the method further includes:
when a pause instruction for the song audio is received, stopping playing the song audio;
acquiring a complete audio spectrum of the song audio, wherein the image attribute of the complete audio spectrum is the target image attribute;
and displaying the complete audio spectrum, and displaying a drag icon representing the playing progress on the complete audio spectrum.
In one possible implementation manner, after displaying the drag icon representing the playing progress on the complete audio spectrum, the method further includes:
when an adjustment instruction for adjusting the playing progress of the song audio to a target progress is received, acquiring second lyrics corresponding to a second playing time according to the second playing time of the target progress;
and displaying the second lyrics.
In a possible implementation manner, after displaying the second lyrics, the method further includes:
when an instruction to start playing the song audio at the target progress is received, starting to play the song audio from the second playing time;
acquiring a second audio spectrum corresponding to the second playing time, wherein the image attribute of the second audio spectrum is the target image attribute;
and displaying the second audio spectrum.
In one possible implementation, the name of the song audio and the lyrics are displayed in the same display area, but not at the same time.
In one possible implementation, the genre types include one or more of rock, metal, classical, pop, and folk music.
In one possible implementation, the audio-related information includes an audio cover and audio text information, and displaying the preview audio spectrum includes:
displaying the preview audio spectrum in the audio cover.
In a second aspect, there is provided an apparatus for displaying song audio, the apparatus comprising:
an acquisition module, configured to acquire display information of song audio when a display instruction for the song audio is received, wherein the display information comprises audio-related information and a preview audio spectrum, the image attribute of the preview audio spectrum is a target image attribute matched with the genre type of the song audio, and the image attribute comprises one or both of color and style;
and a display module, configured to display the audio-related information and the preview audio spectrum.
In a possible implementation manner, the apparatus further includes a playing module, configured to play the song audio when a playing instruction for the song audio is received;
the acquisition module is further configured to acquire, according to a first playing time of the song audio, first lyrics corresponding to the first playing time and a first audio spectrum corresponding to the first playing time, where the image attribute of the first audio spectrum is the target image attribute;
and the display module is further configured to display the first lyrics and the first audio spectrum.
In a possible implementation manner, the playing module is further configured to stop playing the song audio when a pause instruction for the song audio is received;
the acquisition module is further configured to acquire a complete audio spectrum of the song audio, where the image attribute of the complete audio spectrum is the target image attribute;
and the display module is further configured to display the complete audio spectrum, and to display a drag icon representing the playing progress on the complete audio spectrum.
In a possible implementation manner, the acquisition module is further configured to, when an adjustment instruction for adjusting the playing progress of the song audio to a target progress is received, acquire second lyrics corresponding to a second playing time according to the second playing time of the target progress;
and the display module is further configured to display the second lyrics.
In a possible implementation manner, the playing module is configured to, when an instruction to start playing the song audio at the target progress is received, start playing the song audio from the second playing time;
the acquisition module is configured to acquire a second audio spectrum corresponding to the second playing time, where the image attribute of the second audio spectrum is the target image attribute;
and the display module is further configured to display the second audio spectrum.
In one possible implementation, the name of the song audio and the lyrics are displayed in the same display area, but not at the same time.
In one possible implementation, the genre types include one or more of rock, metal, classical, pop, and folk music.
In one possible implementation, the audio-related information includes an audio cover and audio text information, and the display module is configured to:
display the preview audio spectrum in the audio cover.
In a third aspect, a terminal is provided, which includes a memory and a processor, wherein the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the method for displaying song audio according to any implementation of the first aspect.
In a fourth aspect, there is provided a computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the method for displaying song audio according to any implementation of the first aspect.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
the embodiment of the application provides a method for displaying song audio, which can display audio related information and preview audio frequency spectrum when the song audio is displayed. Since the image attributes of the preview audio spectrum are target image attributes that match the genre of the song audio, including one or both of color and pattern, the user can have a general idea of the genre of the song audio by viewing the waveform and image attributes of the preview audio spectrum. Therefore, the user can know the style of the song audio without playing the song audio, the user can perform preliminary screening on the song before playing, the time of the user is saved, and the song listening experience of the user is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a method for audio display of songs provided by an embodiment of the present application;
FIG. 2 is a schematic structural diagram of an apparatus for audio display of songs according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a terminal according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a display interface of song audio provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a display interface of song audio provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a display interface of song audio provided by an embodiment of the present application;
FIG. 7 is a schematic diagram of a display interface of song audio provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a display interface for song audio provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of a display interface for song audio provided by an embodiment of the present application;
FIG. 10 is a schematic diagram of a display interface for song audio provided by an embodiment of the present application;
FIG. 11 is a flowchart of a method for updating lyrics and an audio spectrogram during playing according to an embodiment of the present application;
fig. 12 is a flowchart of updating lyrics and an audio frequency spectrum after adjusting the playing progress according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
Referring to fig. 4, a schematic diagram of an implementation environment of the method for displaying song audio provided by an embodiment of the present application is shown. The implementation environment may include a server 01 and one or more terminals 02 (e.g., 2 terminals 02 are shown in fig. 4). Each terminal 02 may establish a wired or wireless communication connection with the server 01.
The terminal 02 may be a smartphone, a tablet computer, an MP4 player (Moving Picture Experts Group Audio Layer IV player), a laptop portable computer, or a desktop computer. An audio playing client 021 (which may also be referred to as an application) capable of playing audio, such as a karaoke client, may be installed in each terminal 02. The server 01 may be a single server, a server cluster composed of a plurality of servers, or a cloud computing service center. Moreover, the server 01 may be a background server of the audio playing client 021 installed in the terminal 02.
In the embodiments of the present application, when a user wants to listen to a song, the user may open the audio playing client 021 on the terminal 02, or perform a refresh operation on an interface of the audio playing client 021, so that the audio playing client 021 receives a display instruction for song audio. After receiving the display instruction for the song audio, the client acquires the display information of the song audio. The display information comprises audio-related information and a preview audio spectrum, and both are displayed in the display interface.
Because the preview audio spectrum of the song audio is displayed in the display interface before the song audio is played, the user can get a rough sense of the genre of the song audio from the color and/or style of the displayed preview audio spectrum, which makes it easy to screen songs before playing them, saves the user's time, and improves the listening experience.
As shown in fig. 1, the processing flow of the method for audio display of songs may include the following steps:
in step 101, when a display instruction of song audio is received, display information of the song audio is acquired.
Wherein the display information comprises a preview audio spectrum, and may further comprise audio-related information such as one or both of an audio cover and audio text information. The image attribute of the preview audio spectrum is a target image attribute matching the genre type of the song audio, and the image attribute comprises one or both of color and style. An audio spectrum may also be referred to as an audio pitch waveform, and the preview audio spectrum may correspondingly be referred to as a preview audio pitch waveform.
Genre types include one or more of rock, metal, classical, pop, and folk music.
In implementation, a display instruction for song audio can be triggered when a user opens the audio playing client and enters the song audio display interface, or when, after the audio playing client has been opened, a refresh operation is performed on the song audio display interface.
After receiving the display instruction for the song audio, the audio playing client needs to acquire the display information of the song audio. In one possible implementation, the display information of the song audio is cached in the terminal in advance, and the audio playing client can acquire it directly from the terminal. In another possible implementation, the audio playing client sends a request for the display information of the song audio to the server, and the server returns the display information to the audio playing client.
As shown in fig. 5, the display information of the song audio includes at least an audio cover 51, audio text information 52, and a preview audio spectrum 53:
the audio cover 51 of the song audio may include a cover picture that may be uploaded simultaneously when the audio playback client uploads the song audio, or may be distributed by the server for the song audio.
As shown in fig. 5, the audio text information 52 may include: the song name 521 and description information 522 of the song audio. The description information 522 may be information published together with the song audio when the user publishes it, and may be text. Of course, the audio text information may also include a score and an on-demand rate; the content of the audio text information is not limited in the embodiments of the present application. The song name 521 may be displayed in the audio cover 51.
The image attributes of the preview audio spectrum 53 are target image attributes that match the genre of the song audio, so that the user can get a more intuitive feel for the genre of the song audio by observing the preview audio spectrum before playing it. The image attributes include color and/or style. The style may include a line type and an image type: the line type may include thick lines, thin lines, dashed lines, solid lines, dotted lines, and the like, and the image type may include a bar pattern, a polyline pattern, a combined bar-and-polyline pattern, and the like.
For example, if the image attributes include color, song audio of the rock genre may be represented by a red preview audio spectrum, and the folk genre by a blue preview audio spectrum. The color of the preview audio spectrum evokes a psychological feeling in the viewer that is consistent with the feeling evoked by the genre of the song audio it represents.
As another example, where the image attributes include style, song audio of the rock genre may be represented by a bar-graph preview audio spectrum (as shown in figs. 6, 7, and 9), the folk genre by a polyline preview audio spectrum, and the classical genre by a combined bar-and-polyline graph (as shown in figs. 5, 8, and 10). Additionally, the image attributes may be a combination of style and color.
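The genre-to-attribute matching described in the examples above can be pictured as a simple lookup table. The sketch below is illustrative only: the genre names, colors, and styles are assumptions drawn from the examples, not values fixed by the application.

```python
# Hypothetical mapping from genre type to spectrum image attributes.
# Entries follow the examples above (rock -> red, folk -> blue, etc.);
# the remaining values are illustrative assumptions.
GENRE_ATTRIBUTES = {
    "rock":      {"color": "red",   "style": "bar"},
    "folk":      {"color": "blue",  "style": "polyline"},
    "classical": {"color": "gold",  "style": "bar+polyline"},
    "pop":       {"color": "pink",  "style": "bar"},
    "metal":     {"color": "black", "style": "bar"},
}

def target_image_attributes(genre: str) -> dict:
    """Return the target image attributes for a genre, with a neutral fallback."""
    return GENRE_ATTRIBUTES.get(genre, {"color": "gray", "style": "bar"})
```

A renderer would then draw the preview audio spectrum using the returned color and style, so that every song of the same genre shares the same look.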
As shown in fig. 5, the display information for the song audio may also include author information 54. The author information 54 may include: a nickname 541 and an avatar 542 of the user who issued the song audio.
In step 102, the audio related information is displayed and the preview audio spectrum is displayed.
In implementation, after the display information is acquired, the audio related information may be displayed in a display interface of the song audio, and the preview audio spectrum is displayed. Optionally, the preview audio spectrum may be displayed in an audio cover.
For example, as shown in figs. 5 and 6, the audio cover, the audio text information, and the preview audio spectrum are displayed in the display interface of the song audio: the song name in the audio text information is "song name", the description information of the song audio is "I just recorded a new song; friends who like it, please send me flowers~", and the preview audio spectrum is displayed in the audio cover. The display interface also displays the nickname and avatar of the user who uploaded the audio.
The embodiments of the application provide a method for displaying song audio, which can display audio-related information and a preview audio spectrum when the song audio is displayed. Since the image attributes of the preview audio spectrum are target image attributes that match the genre of the song audio, including one or both of color and style, the user can get a general idea of the genre of the song audio by viewing the waveform and image attributes of the preview audio spectrum. The user can thus learn the genre of the song audio without playing it and screen songs before playing them, which saves the user's time and improves the listening experience.
As shown in figs. 7 and 8, after the preview audio spectrum is displayed, when the user wants to play the song audio, the user may click a play button (which may be located in the audio cover) to trigger a playing instruction for the song audio. The corresponding processing may be as follows: when a playing instruction for the song audio is received, the song audio is played, and according to a first playing time of the song audio, first lyrics corresponding to the first playing time and a first audio spectrum corresponding to the first playing time are acquired, where the image attribute of the first audio spectrum is the target image attribute. The first lyrics and the first audio spectrum are then displayed.
Wherein the first lyrics may be displayed at the position in the audio cover where the song name is displayed, and the song name is not displayed while the first lyrics are displayed. The first audio spectrum may also be referred to as a first audio pitch waveform.
In implementation, during playback of the song audio, the audio playing client may acquire the first lyrics corresponding to the first playing time once every first time period, and acquire the first audio spectrum corresponding to the first playing time once every second time period. The second time period may be the same as or different from the first time period; this is not limited in the present application.
Optionally, the first time period may be 50 ms (milliseconds), that is, the audio playing client may acquire the first lyrics corresponding to the first playing time of the song audio every 50 ms. The second time period may be 30 ms, that is, the audio playing client may acquire the first audio spectrum corresponding to the first playing time of the song audio every 30 ms.
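The two refresh periods above behave like two independent timers. The sketch below (a hypothetical helper, not part of the application) simulates which updates would fire during the first portion of playback, using the 50 ms lyric period and 30 ms spectrum period from the example:

```python
def updates_fired(elapsed_ms: int, lyric_period: int = 50,
                  spectrum_period: int = 30) -> list:
    """Simulate which refresh events fire within the first elapsed_ms
    milliseconds of playback: lyrics every lyric_period ms, the audio
    spectrum every spectrum_period ms."""
    events = []
    for t in range(1, elapsed_ms + 1):
        if t % lyric_period == 0:
            events.append((t, "lyric"))
        if t % spectrum_period == 0:
            events.append((t, "spectrum"))
    return events
```

For example, within the first 100 ms the lyrics refresh twice (at 50 and 100 ms) while the spectrum refreshes three times (at 30, 60, and 90 ms), illustrating why the two periods need not match.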
The first lyrics of the song audio may include one or more character units. The character unit can be a character, a word, an English word or a phrase and the like. For example, the first lyric of the song audio may be a lyric of a sentence of the song audio.
The first audio spectrum of the song audio may be computed in real time by the audio playing client through a Fast Fourier Transform (FFT), may be pre-stored by the audio playing client, or may be acquired directly from the server by the audio playing client.
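A minimal sketch of the real-time FFT computation mentioned above, assuming one frame of PCM samples and a 60-bar display; the Hann windowing and band-averaging choices are illustrative assumptions, not steps prescribed by the application.

```python
import numpy as np

def spectrum_frame(samples: np.ndarray, n_bars: int = 60) -> np.ndarray:
    """Compute an n_bars-bin magnitude spectrum for one frame of PCM samples.

    A Hann window reduces spectral leakage; the FFT magnitudes are then
    grouped into n_bars bands so they can be drawn as the bar graph
    described above.
    """
    windowed = samples * np.hanning(len(samples))
    mags = np.abs(np.fft.rfft(windowed))      # magnitude spectrum
    bands = np.array_split(mags, n_bars)      # group FFT bins into bars
    return np.array([band.mean() for band in bands])
```

With a 50 ms or 30 ms refresh period, the client would call such a function on the most recent frame of decoded audio and redraw the bars with the genre-matched color and style.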
It should be noted that if the first audio spectrum of the song audio is relatively flat, the audio it corresponds to is relatively calm; if the first audio spectrum has a relatively large vibration amplitude, the audio it corresponds to is relatively intense.
Before acquiring the first lyrics corresponding to the first playing time, the audio playing client may first acquire the complete lyrics of the song audio, parse them, and determine each first lyric in the complete lyrics and the playing time range corresponding to each first lyric. The playing time range corresponding to each first lyric can be represented by the start time and end time of that lyric.
For example, referring to Table 1, assume the total playing time of the song audio is 2 minutes and the complete lyrics include four first lyrics (A1, A2, A3 and A4). If the start time of the first lyric "A1" is 0 minutes 0 seconds and its end time is 0 minutes 30 seconds, the playing time range corresponding to "A1" is 0 minutes 0 seconds to 0 minutes 30 seconds. The range for "A2" is 0 minutes 31 seconds to 0 minutes 58 seconds, for "A3" 0 minutes 59 seconds to 1 minute 25 seconds, and for "A4" 1 minute 26 seconds to 2 minutes 0 seconds.
TABLE 1
First lyric    Start time      End time
A1             0 min 0 s       0 min 30 s
A2             0 min 31 s      0 min 58 s
A3             0 min 59 s      1 min 25 s
A4             1 min 26 s      2 min 0 s
During playback of the song audio, the audio playing client can determine the first lyrics corresponding to the first playing time by checking which playing time range the first playing time falls into. For example, if the first playing time is greater than or equal to the start time of one playing time range and less than the start time of the next adjacent playing time range, the client can determine that the first lyrics corresponding to the first playing time are the lyrics of that first range.
For example, assuming the first playing time of the song audio determined by the audio playing client is 0 minutes 20 seconds, the client acquires the first lyric "A1" corresponding to that time. When, after an interval of at least 50 ms, the client determines that the playing progress of the song audio has reached 0 minutes 31 seconds, it acquires the first lyric "A2" corresponding to that time.
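The lookup described above — finding the lyric whose playing time range contains the current playing time — can be sketched as a binary search over the lyric start times, using the values from Table 1 (converted to seconds):

```python
import bisect

# (start_seconds, text) pairs in time order, taken from Table 1.
LYRICS = [
    (0,  "A1"),   # 0 min 0 s
    (31, "A2"),   # 0 min 31 s
    (59, "A3"),   # 0 min 59 s
    (86, "A4"),   # 1 min 26 s
]

def current_lyric(playing_time: float) -> str:
    """Return the lyric whose start time is <= playing_time and whose
    successor's start time is > playing_time."""
    starts = [start for start, _ in LYRICS]
    i = bisect.bisect_right(starts, playing_time) - 1
    return LYRICS[max(i, 0)][1]
```

For example, a playing time of 20 seconds falls in A1's range, while 31 seconds already falls in A2's range, matching the walkthrough above.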
It should be noted that, when the audio playing client receives a playing instruction from the user for the song audio, if it detects that the terminal has stored neither the song audio nor its complete lyrics, it may send an acquisition request for the song audio and its complete lyrics to the server. The server may then, in response, send the audio link of the song audio and the complete lyrics to the audio playing client. The audio playing client may initiate a Hypertext Transfer Protocol (HTTP) request to the corresponding resource server according to the audio link, so as to obtain the song audio sent by the resource server. The audio playing client can then store the song audio and its complete lyrics in the terminal. During playback, the audio playing client may acquire the first lyrics corresponding to the first playing time from the terminal once every first time period, and acquire the first audio spectrum corresponding to the first playing time once every second time period.
As shown in fig. 9, during playback of the song audio, when the user wants to pause the song, the user may trigger a pause instruction for the song audio. The corresponding processing may be as follows: when the pause instruction for the song audio is received, playback of the song audio is stopped; a complete audio spectrum of the song audio is acquired, where the image attribute of the complete audio spectrum is the target image attribute; and the complete audio spectrum is displayed, with a drag icon representing the playing progress displayed on it.
Wherein the complete audio spectrum may be determined by a plurality of first audio spectra of the song audio. For example, the height of each bar in the full audio spectrum may be determined by the height of each bar in the first audio spectrum of the song audio.
For example, assuming that the total playing time of the song audio is 2 minutes, the complete audio spectrum displayed on the audio cover of the song audio may include 60 bars, each representing 2 seconds of the song. Since one bar in the complete audio spectrum needs to represent 2 seconds' worth of first audio spectra, the height of that bar may be determined based on the heights of all bars in the first audio spectra within its 2-second window, and may optionally be their average. Assuming that the second time period is 30 ms, the first audio spectrum is refreshed approximately 66 times within 2 seconds (i.e., there are about 66 first audio spectra in each 2-second window). Assuming further that each first audio spectrum also comprises 60 bars, the height of each bar in the complete audio spectrum may be the average of 66 × 60 = 3,960 bar heights (66 first audio spectra in the window, each containing 60 bars).
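The windowed averaging above can be sketched in a few lines. This is a hedged illustration of the described computation, not the patent's code; the function name and the toy bar heights are hypothetical:

```python
# Each bar of the complete spectrum is the average of every bar height
# in its window of first spectra (e.g. one 2 s window holds ~66 spectra
# of 60 bars when the refresh period is 30 ms).
def complete_spectrum(first_spectra, spectra_per_bar):
    """Average all bar heights inside each window of first spectra."""
    bars = []
    for i in range(0, len(first_spectra), spectra_per_bar):
        window = first_spectra[i:i + spectra_per_bar]
        heights = [h for spectrum in window for h in spectrum]
        bars.append(sum(heights) / len(heights))
    return bars

# Two 2-second windows: a quiet one and a loud one.
spectra = [[1.0] * 60] * 66 + [[3.0] * 60] * 66
print(complete_spectrum(spectra, 66))  # [1.0, 3.0]
```

With 120 seconds of audio and 66 spectra per 2-second window, this produces the 60 bar heights of the complete spectrum.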
The full audio spectrum may also be referred to as a full audio pitch waveform.
It should be noted that the complete audio spectrum and the preview audio spectrum may be determined in the same manner, that is, the preview audio spectrum may also be determined from a plurality of first audio spectra of the song audio. The complete audio spectrum may be more detailed than the preview audio spectrum; for example, the complete audio spectrum may comprise 60 bars, while the preview audio spectrum may comprise 30 bars chosen uniformly from those 60 bars.
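Uniformly choosing 30 preview bars out of the 60 complete-spectrum bars is a straightforward downsampling step. A minimal sketch, with a hypothetical function name:

```python
def preview_from_complete(complete_bars, preview_count):
    """Uniformly pick preview_count bars out of the complete spectrum."""
    n = len(complete_bars)
    step = n / preview_count
    return [complete_bars[int(i * step)] for i in range(preview_count)]

# 60-bar complete spectrum -> 30-bar preview (every second bar here).
print(preview_from_complete(list(range(60)), 30))  # [0, 2, 4, ..., 58]
```

The same routine works for any bar counts, so the preview and complete spectra stay visually consistent.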
In an implementation, the pause instruction may be triggered by touching the play button, or by continuously touching the audio cover for a set length of time.
To make it easier for the user to adjust the playing progress of the song audio, a drag icon representing the playing progress may be displayed on the complete audio spectrum.
As shown in fig. 9, after the song audio is paused, the user may drag the icon on the complete audio spectrum, or otherwise adjust the playing progress. The corresponding processing may be as follows: when an adjustment instruction for adjusting the playing progress of the song audio to a target progress is received, second lyrics corresponding to the second playing time are obtained according to the second playing time of the target progress, and the second lyrics are displayed.
In implementation, there are various ways to adjust the playing progress; the following three specific ways are given as examples:
As an alternative implementation, referring to fig. 9, a drag icon for the song audio is displayed on the complete audio spectrum of the audio cover. When the user wants to adjust the playing progress of the song audio, the user can drag this icon, and the audio playing client then receives the user's adjustment operation on the playing progress. That is, in this implementation, the user triggers the adjustment instruction for the playing progress by dragging the drag icon displayed on the complete audio spectrum.
As another optional implementation, a time axis of the song audio is displayed on the audio cover, and when the user wants to adjust the playing progress, the user may click a position on the time axis; the audio playing client then receives the user's adjustment operation on the playing progress. That is, in this implementation, the user clicks on the time axis to trigger the adjustment instruction for the playing progress.
As another optional implementation, a time axis of the song audio is displayed on the audio cover with a drag icon displayed on the time axis; the user can drag this icon, and the audio playing client then receives the user's adjustment operation on the playing progress. That is, in this implementation, the user drags the drag icon displayed on the time axis to trigger the adjustment instruction for the playing progress.
In addition, in the implementations where the adjustment instruction is triggered by dragging a drag icon, the lyrics displayed on the audio cover can change as the user's drag operation proceeds, making it easier for the user to locate the desired playing progress of the song audio.
The process of displaying the second lyrics corresponding to the second playing time may refer to the above description of displaying the first lyrics, and is not described herein again in this embodiment of the application.
As shown in fig. 10, after adjusting to the target progress, the user may control the song audio to start playing at the target progress. The corresponding processing may be as follows: when an instruction to start playing the song audio at the target progress is received, a second audio spectrum corresponding to the second playing time is obtained, where the image attribute of the second audio spectrum is the target image attribute, and the second audio spectrum is displayed. The second audio spectrum may also be referred to as a second audio pitch waveform.
In an implementation, the instruction to start playing the song audio at the target progress may be triggered by clicking a playing control displayed on the drag icon after the drag icon has been dragged to the position representing the target progress. Alternatively, after the drag icon is dragged to the position representing the target progress (with the finger remaining in contact with the touch screen throughout the drag), lifting the finger off the touch screen automatically triggers the instruction to start playing the song audio at the target progress.
The process of displaying the second audio spectrum corresponding to the second playing time may refer to the related description of displaying the first audio spectrum, and is not repeated here. In addition, when the second audio spectrum is displayed, the complete audio spectrum and the drag icon are no longer displayed.
It should be noted that, in the above solution, the name and the lyrics of the song audio are displayed in the same display area (which may be located in the audio cover), and the name and the lyrics are not displayed at the same time. The lyrics here include the first lyrics and the second lyrics.
The preview audio frequency spectrum, the first audio frequency spectrum, the second audio frequency spectrum, the complete audio frequency spectrum, the name and lyrics of the song audio can all be displayed in the audio cover.
The embodiment of the application provides a more intuitive method for displaying song audio: by displaying a preview audio spectrum with a specific image attribute before playing, the user can get a rough sense of the genre of the song audio. In addition, during playing, the first audio spectrum (or second audio spectrum) and the first lyrics (or second lyrics) share the same image attribute, so the playing effect is presented to the user more visually and the user can enjoy the work more immersively.
The following describes functions used in the implementation of the present application:
In an embodiment of the present application, the LyricItem function may be used to store the lyric content of each sentence and its corresponding time. The Lyric function may be used to store the content of all lyrics. The getLyricItem(int time) function may obtain the lyrics corresponding to a playing time. The WaveItem function may be used to store the audio data of one frame. The Wave function may store all audio data of a song. The getWaveItem(int time) function may obtain the audio spectrum corresponding to a playing time. The SongInfo function is used to store song information, including the color of the audio spectrum, the song storage path, lyrics, audio, the lyric file path, author information, work information, and the like. The ProgressListener function is a playing progress callback interface that can notify the audio playing client of the current playing position. SongPlayer is the playing interface of the terminal and is used to play audio. The startPlay(int time) function may cause the playing interface to start playing from the given time. The pause() function may switch the playing interface from the playing state to the paused state. The getTotalTime() function may obtain the total playing time of the song audio. The setProgressListener(ProgressListener) function may set the playing progress callback interface so that the current playing position is called back every first time period or second time period. The AuthorInfoView function is used to present display information, including the author avatar, the author nickname, and the like. The SongDetailView function is used to display audio detail information, including a play icon, the work name, the work registration, and the like. The OperationView is used to present audio operation buttons, including a gift-giving button, a sharing button, and the like.
The LyricView function is used to draw the currently played lyric information: lyricItem stores the lyrics, the setLyricItem(LyricItem item) function updates the lyrics, and the draw(canvas) function obtains the lyrics in lyricItem and draws them on the audio cover through the canvas. The WaveView function is used to draw the audio spectrum of the song audio: waveItem stores the audio spectrum, the setWaveItem(WaveItem time) function updates the audio spectrum, and the draw(canvas) function obtains the audio spectrum in waveItem and draws it on the audio cover through the canvas. The SongInfoView function is used to display the audio spectrum; it is composed of several basic components, listens for the progress callback of the playing interface, obtains the lyrics and the audio spectrum corresponding to the current playing time from the lyric and wave in songInfo according to the playing progress, and notifies the LyricView and WaveView components to update the lyrics and the audio spectrum, respectively.
Display of the song audio before playing:
The audio cover and the audio text information are displayed according to the display information returned by the server, and the preview audio spectrum is displayed in the audio cover, so that the user can get a preliminary feel for the style of the song from the style and color of the audio spectrum.
Display of the song audio during playing:
When the user clicks the play button of the SongDetailView, a fade-out animation in which the transparency changes from 100% to 0% is set for the SongDetailView; the animation duration may be 2 seconds. Attribute animations are also set for the LyricView and the WaveView, respectively, with the width and height changing from 200 px to 400 px; the animation duration may likewise be 2 seconds. Meanwhile, the player is initialized, audio playing starts, and the lyrics and the first audio spectrum are updated according to the playing progress.
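The two concurrent animations (transparency 100% → 0%, size 200 px → 400 px, both over 2 seconds) amount to linear interpolation over elapsed time. A minimal sketch under that assumption; the function and key names are hypothetical, and a real Android implementation would use property animators instead:

```python
def lerp(start, end, t):
    """Linear interpolation between start and end for t in [0, 1]."""
    return start + (end - start) * t

def animation_state(elapsed_ms, duration_ms=2000):
    """State of the 2 s play-start animation at a given elapsed time."""
    t = min(max(elapsed_ms / duration_ms, 0.0), 1.0)
    return {
        "detail_alpha_pct": lerp(100.0, 0.0, t),  # SongDetailView fades out
        "view_size_px": lerp(200.0, 400.0, t),    # LyricView / WaveView grow
    }

print(animation_state(1000))  # halfway: alpha 50.0 %, size 300.0 px
```

Sampling this state on each UI frame reproduces the simultaneous fade-out and grow effect described above.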
Note: a playing progress listener is set on the work interface, and the player calls back the current playing progress to the work interface every 20 ms. The work interface queries the lyrics and the audio spectrum according to the current playing progress, and notifies the lyric interface and the note interface, respectively, to refresh their data. For the specific process, refer to fig. 11.
Display during playing after the playing progress of the song audio is adjusted:
The player is paused when the user touches the play button or long-presses the audio cover, after which the playing progress can be adjusted. As the user drags, the x coordinate of each touch point is obtained; the percentage position of the touched point on the audio cover is calculated from the starting and ending x coordinates of the display interface; this percentage is multiplied by the total playing duration of the work to obtain the progress to which the user has dragged; the work interface is notified of the current playing progress; and finally the work interface notifies the lyric interface and the note interface to update the lyrics and the audio spectrum. For the specific process, refer to fig. 12.
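The x-coordinate-to-progress calculation above can be sketched directly. This is an illustrative Python version of the described arithmetic, with hypothetical names and example screen coordinates:

```python
def drag_to_progress_ms(x, start_x, end_x, total_ms):
    """Map a touch x coordinate on the audio cover to a playing time."""
    fraction = (x - start_x) / (end_x - start_x)
    fraction = min(max(fraction, 0.0), 1.0)  # clamp to the cover bounds
    return int(fraction * total_ms)

# Touch in the middle of a 1080 px wide cover on a 2-minute song.
print(drag_to_progress_ms(540, 0, 1080, 120_000))  # 60000 ms -> 1:00
```

The resulting time is what the work interface passes on so that the lyric and note interfaces can refresh to the dragged position.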
Based on the same technical concept, an embodiment of the present application further provides an apparatus for displaying audio of a song, as shown in fig. 2, the apparatus includes:
the acquisition module 201 is configured to acquire display information of a song audio when a display instruction of the song audio is received, where the display information includes audio related information and a preview audio spectrum, an image attribute of the preview audio spectrum is a target image attribute matched with a genre type of the song audio, and the image attribute includes one or two of a color and a style;
the display module 202 is configured to display the audio related information and display the preview audio spectrum.
In a possible implementation manner, the apparatus further includes a playing module, configured to play the song audio when a playing instruction for the song audio is received;
the obtaining module 201 is further configured to obtain, according to a first playing time of a song audio, first lyrics corresponding to the first playing time and a first audio frequency spectrum corresponding to the first playing time, where an image attribute of the first audio frequency spectrum is a target image attribute;
the display module 202 is further configured to display the first lyric and the first audio frequency spectrum.
In a possible implementation manner, the playing module is further configured to stop playing the song audio when a pause instruction for the song audio is received;
the obtaining module 201 is further configured to obtain a complete audio frequency spectrum of the song audio, where an image attribute of the complete audio frequency spectrum is a target image attribute;
the display module 202 is further configured to display the complete audio frequency spectrum, and display a drag icon representing the playing progress on the complete audio frequency spectrum.
In a possible implementation manner, the obtaining module 201 is further configured to, when an adjustment instruction for adjusting the playing progress of the song audio to the target progress is received, obtain second lyrics corresponding to a second playing time according to the second playing time of the target progress;
the display module 202 is further configured to display the second lyrics.
In a possible implementation manner, the playing module is further configured to start playing the song audio from the second playing time when receiving an instruction to start playing the song audio at the target progress;
an obtaining module 201, configured to obtain a second audio frequency spectrum corresponding to a second playing time, where an image attribute of the second audio frequency spectrum is a target image attribute;
the display module 202 is further configured to display the second audio spectrum.
In one possible implementation, the name and lyrics of the song audio are displayed in the same display area, and the name and lyrics are not displayed at the same time.
In one possible implementation, the genre types include one or more of rock music, metallic music, classical music, pop music, and national music.
In one possible implementation, the audio-related information includes an audio cover and audio text information; a display module 202 for:
the preview audio spectrum is displayed in the audio cover.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
It should be noted that: in the device for displaying song audio provided by the above embodiment, when displaying song audio, only the division of the above functional modules is taken as an example, and in practical application, the above function distribution can be completed by different functional modules according to needs, that is, the internal structure of the device is divided into different functional modules, so as to complete all or part of the above described functions. In addition, the device for displaying the song audio and the method embodiment for displaying the song audio provided by the above embodiment belong to the same concept, and the specific implementation process is described in the method embodiment and is not described herein again.
Fig. 3 is a block diagram of a terminal according to an embodiment of the present disclosure. The terminal 300 may be a portable mobile terminal such as a smart phone, a tablet computer, or a smart camera. The terminal 300 may also be referred to by other names such as user equipment, portable terminal, etc.
Generally, the terminal 300 includes: a processor 301 and a memory 302.
The processor 301 may include one or more processing cores, such as a 4-core processor, an 8-core processor, and so on. The processor 301 may be implemented in at least one hardware form of a DSP (Digital Signal Processing), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 301 may also include a main processor and a coprocessor, where the main processor is a processor for processing data in an awake state, and is also called a Central Processing Unit (CPU); a coprocessor is a low power processor for processing data in a standby state. In some embodiments, the processor 301 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content required to be displayed on the display screen. In some embodiments, the processor 301 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 302 may include one or more computer-readable storage media, which may be tangible and non-transitory. Memory 302 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 302 is used to store at least one instruction for execution by processor 301 to implement the method of audio display of songs provided herein.
In some embodiments, the terminal 300 may further include: a peripheral interface 303 and at least one peripheral. Specifically, the peripheral device includes: at least one of radio frequency circuitry 304, display screen 305, camera assembly 306, audio circuitry 307, positioning assembly 308, and power supply 309.
The peripheral interface 303 may be used to connect at least one peripheral related to I/O (Input/Output) to the processor 301 and the memory 302. In some embodiments, processor 301, memory 302, and peripheral interface 303 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 301, the memory 302 and the peripheral interface 303 may be implemented on a separate chip or circuit board, which is not limited by the embodiment.
The Radio Frequency circuit 304 is used for receiving and transmitting RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuitry 304 communicates with communication networks and other communication devices via electromagnetic signals. The rf circuit 304 converts an electrical signal into an electromagnetic signal to transmit, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 304 comprises: an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuitry 304 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocols include, but are not limited to: the world wide web, metropolitan area networks, intranets, generations of mobile communication networks (2G, 3G, 4G, and 5G), Wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the rf circuit 304 may further include NFC (Near Field Communication) related circuits, which are not limited in this application.
The display screen 305 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. The display screen 305 also has the ability to capture touch signals on or over the surface of the touch display screen 305. The touch signal may be input to the processor 301 as a control signal for processing. The display screen 305 is used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 305 may be one, providing the front panel of the terminal 300; in other embodiments, the display screens 305 may be at least two, respectively disposed on different surfaces of the terminal 300 or in a folded design; in still other embodiments, the display 305 may be a flexible display disposed on a curved surface or on a folded surface of the terminal 300. Even further, the display screen 305 may be arranged in a non-rectangular irregular figure, i.e. a shaped screen. The Display screen 305 may be made of LCD (Liquid Crystal Display), OLED (Organic Light-Emitting Diode), and the like.
The camera assembly 306 is used to capture images or video. Optionally, camera assembly 306 includes a front camera and a rear camera. Generally, a front camera is used for realizing video call or self-shooting, and a rear camera is used for realizing shooting of pictures or videos. In some embodiments, the number of the rear cameras is at least two, and each of the rear cameras is any one of a main camera, a depth-of-field camera and a wide-angle camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize a panoramic shooting function and a VR (Virtual Reality) shooting function. In some embodiments, camera assembly 306 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
Audio circuit 307 is used to provide an audio interface between the user and terminal 300. Audio circuitry 307 may include a microphone and a speaker. The microphone is used for collecting sound waves of a user and the environment, converting the sound waves into electric signals, and inputting the electric signals to the processor 301 for processing or inputting the electric signals to the radio frequency circuit 304 to realize voice communication. The microphones may be provided in plural numbers, respectively, at different portions of the terminal 300 for the purpose of stereo sound collection or noise reduction. The microphone may also be an array microphone or an omni-directional pick-up microphone. The speaker is used to convert electrical signals from the processor 301 or the radio frequency circuitry 304 into sound waves. The loudspeaker can be a traditional film loudspeaker or a piezoelectric ceramic loudspeaker. When the speaker is a piezoelectric ceramic speaker, the speaker can be used for purposes such as converting an electric signal into a sound wave audible to a human being, or converting an electric signal into a sound wave inaudible to a human being to measure a distance. In some embodiments, audio circuitry 307 may also include a headphone jack.
The positioning component 308 is used to locate the current geographic location of the terminal 300 to implement navigation or LBS (Location Based Service). The positioning component 308 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, or the Galileo system of the European Union.
The power supply 309 is used to supply power to the various components in the terminal 300. The power source 309 may be alternating current, direct current, disposable batteries, or rechargeable batteries. When the power source 309 includes a rechargeable battery, the rechargeable battery may be a wired rechargeable battery or a wireless rechargeable battery. The wired rechargeable battery is a battery charged through a wired line, and the wireless rechargeable battery is a battery charged through a wireless coil. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 300 also includes one or more sensors 310. The one or more sensors 310 include, but are not limited to: acceleration sensor 311, gyro sensor 312, pressure sensor 313, fingerprint sensor 314, optical sensor 315, and proximity sensor 316.
The acceleration sensor 311 may detect the magnitude of acceleration in three coordinate axes of a coordinate system established with the terminal 300. For example, the acceleration sensor 311 may be used to detect components of the gravitational acceleration in three coordinate axes. The processor 301 may control the display screen 305 to display the user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 311. The acceleration sensor 311 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 312 may detect a body direction and a rotation angle of the terminal 300, and the gyro sensor 312 may cooperate with the acceleration sensor 311 to acquire a 3D motion of the user on the terminal 300. The processor 301 may implement the following functions according to the data collected by the gyro sensor 312: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
The pressure sensor 313 may be disposed on a side bezel of the terminal 300 and/or on a lower layer of the display screen 305. When the pressure sensor 313 is disposed at the side frame of the terminal 300, a user's grip signal of the terminal 300 can be detected, and left-right hand recognition or shortcut operation can be performed according to the grip signal. When the pressure sensor 313 is disposed at the lower layer of the display screen 305, the operability control on the UI interface can be controlled according to the pressure operation of the user on the display screen 305. The operability control comprises at least one of a button control, a scroll bar control, an icon control and a menu control.
The fingerprint sensor 314 is used for collecting a fingerprint of a user to identify the identity of the user according to the collected fingerprint. Upon identifying that the user's identity is a trusted identity, processor 301 authorizes the user to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying, and changing settings, etc. The fingerprint sensor 314 may be disposed on the front, back, or side of the terminal 300. When a physical button or a vendor Logo is provided on the terminal 300, the fingerprint sensor 314 may be integrated with the physical button or the vendor Logo.
The optical sensor 315 is used to collect the ambient light intensity. In one embodiment, the processor 301 may control the display brightness of the display screen 305 based on the ambient light intensity collected by the optical sensor 315. Specifically, when the ambient light intensity is high, the display brightness of the display screen 305 is increased; when the ambient light intensity is low, the display brightness of the display screen 305 is reduced. In another embodiment, the processor 301 may also dynamically adjust the shooting parameters of the camera head assembly 306 according to the ambient light intensity collected by the optical sensor 315.
A proximity sensor 316, also known as a distance sensor, is typically provided on the front face of the terminal 300. The proximity sensor 316 is used to collect the distance between the user and the front surface of the terminal 300. In one embodiment, when the proximity sensor 316 detects that the distance between the user and the front surface of the terminal 300 gradually decreases, the processor 301 controls the display screen 305 to switch from the bright-screen state to the off-screen state; when the proximity sensor 316 detects that the distance between the user and the front surface of the terminal 300 gradually increases, the processor 301 controls the display screen 305 to switch from the off-screen state to the bright-screen state.
Those skilled in the art will appreciate that the configuration shown in fig. 3 is not intended to be limiting of terminal 300 and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components may be used.
In an exemplary embodiment, a computer-readable storage medium is also provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the method for audio display of a song in the above-described embodiment. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for audio display of a song, the method comprising:
when a display instruction for a song audio is received, acquiring display information of the song audio, wherein the display information comprises audio-related information and a preview audio spectrum, an image attribute of the preview audio spectrum is a target image attribute matched to the style type of the song audio, and the image attribute comprises one or both of color and style;
and displaying the audio-related information and displaying the preview audio spectrum.
2. The method of claim 1, wherein, after displaying the preview audio spectrum, the method further comprises:
when a playing instruction for the song audio is received, playing the song audio;
according to a first playing time of the song audio, acquiring first lyrics corresponding to the first playing time and a first audio spectrum corresponding to the first playing time, wherein an image attribute of the first audio spectrum is the target image attribute;
and displaying the first lyrics and the first audio spectrum.
3. The method of claim 2, wherein, after displaying the first lyrics and the first audio spectrum, the method further comprises:
when a pause instruction for the song audio is received, stopping playing the song audio;
acquiring a complete audio spectrum of the song audio, wherein an image attribute of the complete audio spectrum is the target image attribute;
and displaying the complete audio spectrum, and displaying, on the complete audio spectrum, a drag icon indicating the playing progress.
4. The method of claim 3, wherein, after displaying the drag icon indicating the playing progress on the complete audio spectrum, the method further comprises:
when an adjustment instruction for adjusting the playing progress of the song audio to a target progress is received, acquiring, according to a second playing time of the target progress, second lyrics corresponding to the second playing time;
and displaying the second lyrics.
5. The method of claim 4, wherein, after displaying the second lyrics, the method further comprises:
when an instruction to start playing the song audio at the target progress is received, starting to play the song audio from the second playing time;
acquiring a second audio spectrum corresponding to the second playing time, wherein an image attribute of the second audio spectrum is the target image attribute;
and displaying the second audio spectrum.
6. The method of any one of claims 1-5, wherein a name of the song audio and lyrics are displayed in the same display area, and the name and the lyrics are not displayed at the same time.
7. The method of any one of claims 1-5, wherein the audio-related information comprises an audio cover and audio text information, and the displaying the preview audio spectrum comprises:
displaying the preview audio spectrum on the audio cover.
8. An apparatus for audio display of a song, the apparatus comprising:
an acquisition module, configured to acquire display information of a song audio when a display instruction for the song audio is received, wherein the display information comprises audio-related information and a preview audio spectrum, an image attribute of the preview audio spectrum is a target image attribute matched to the style type of the song audio, and the image attribute comprises one or both of color and style;
and a display module, configured to display the audio-related information and display the preview audio spectrum.
9. A terminal, characterized in that the terminal comprises a memory and a processor, the memory storing at least one instruction, the at least one instruction being loaded and executed by the processor to implement the method for audio display of a song according to any one of claims 1-7.
10. A computer-readable storage medium having stored therein at least one instruction, the at least one instruction being loaded and executed by a processor to implement the method for audio display of a song according to any one of claims 1-7.
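The display flow of claims 1-5 can be sketched as a set of handlers that map playback events (display, play, pause) to the elements shown on screen. Everything below is an illustrative assumption: the function names, the dictionary layout, the style-to-attribute palette, and the lyric timing format are not from the patent, which only claims the abstract method.

```python
# Hypothetical sketch of claims 1-5: each handler returns what the UI
# would display. The style palette and data shapes are invented for
# illustration only.

def get_style_attrs(style_type: str) -> dict:
    """Target image attribute (color/style) matched to the song's style type."""
    palette = {"rock": {"color": "red", "style": "bars"},
               "ballad": {"color": "blue", "style": "wave"}}
    return palette.get(style_type, {"color": "gray", "style": "bars"})

def on_display(song: dict) -> dict:
    """Claim 1: show audio-related info and a styled preview spectrum."""
    return {"info": song["title"],
            "spectrum": {"kind": "preview", **get_style_attrs(song["style"])}}

def on_play(song: dict, t: float) -> dict:
    """Claim 2: at playing time t, show the lyric line and spectrum for t."""
    # lyrics is a list of (start_time, line) pairs sorted by start_time
    line = next(l for start, l in reversed(song["lyrics"]) if start <= t)
    return {"lyrics": line,
            "spectrum": {"kind": "frame", "t": t, **get_style_attrs(song["style"])}}

def on_pause(song: dict, t: float) -> dict:
    """Claim 3: show the complete spectrum with a drag icon at progress t."""
    return {"spectrum": {"kind": "full", **get_style_attrs(song["style"])},
            "drag_icon_at": t}
```

Claims 4-5 then reuse `on_play` at the second playing time chosen by dragging the icon; the key invariant across all states is that every spectrum carries the same target image attribute derived from the song's style type.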
CN202010575301.2A 2020-06-22 2020-06-22 Song audio frequency display method and device Pending CN111753125A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010575301.2A CN111753125A (en) 2020-06-22 2020-06-22 Song audio frequency display method and device


Publications (1)

Publication Number Publication Date
CN111753125A true CN111753125A (en) 2020-10-09

Family

ID=72674859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010575301.2A Pending CN111753125A (en) 2020-06-22 2020-06-22 Song audio frequency display method and device

Country Status (1)

Country Link
CN (1) CN111753125A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112419458A (en) * 2020-11-20 2021-02-26 青岛以萨数据技术有限公司 User interaction method, server, medium and system based on android animation
CN112435643A (en) * 2020-11-20 2021-03-02 腾讯音乐娱乐科技(深圳)有限公司 Method, device, equipment and storage medium for generating electronic style song audio
CN112818163A (en) * 2021-01-22 2021-05-18 惠州Tcl移动通信有限公司 Song display processing method, device, terminal and medium based on mobile terminal
CN114296669A (en) * 2021-03-11 2022-04-08 海信视像科技股份有限公司 Display device
CN114579017A (en) * 2022-02-10 2022-06-03 优视科技(中国)有限公司 Method and device for displaying audio
WO2023061281A1 (en) * 2021-10-12 2023-04-20 维沃移动通信有限公司 Display method and apparatus, and electronic device
WO2023088183A1 (en) * 2021-11-17 2023-05-25 维沃移动通信有限公司 Image display method and apparatus, and electronic device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination