WO2023185387A1 - Music information display method, apparatus, device, program product and storage medium - Google Patents

Music information display method, apparatus, device, program product and storage medium

Info

Publication number
WO2023185387A1
Authority
WO
WIPO (PCT)
Prior art keywords
beat
music
target
line
track
Prior art date
Application number
PCT/CN2023/079797
Other languages
English (en)
French (fr)
Inventor
胡建丰
苏泓程
李旺成
Original Assignee
腾讯科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 (Tencent Technology (Shenzhen) Co., Ltd.)
Publication of WO2023185387A1


Classifications

    • G - PHYSICS
      • G06 - COMPUTING; CALCULATING OR COUNTING
        • G06F - ELECTRIC DIGITAL DATA PROCESSING
          • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
            • G06F 16/60 - of audio data
              • G06F 16/63 - Querying
                • G06F 16/638 - Presentation of query results
              • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
                • G06F 16/683 - using metadata automatically derived from the content
                • G06F 16/686 - using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
          • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
                • G06F 3/0481 - based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
                • G06F 3/0484 - for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                  • G06F 3/04842 - Selection of displayed objects or displayed text elements
      • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
        • G10H - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
          • G10H 1/00 - Details of electrophonic musical instruments

Definitions

  • the present application relates to the field of computer technology, and specifically to a music information display method, device, equipment, program product and storage medium.
  • the existing music information display method mainly identifies and displays the vocal pitch in the music.
  • However, the vocal pitch alone cannot represent a complete piece of music and cannot accurately express the content of the music; therefore, the accuracy of such music information display is low.
  • Embodiments of the present application provide a music information display method, device and computer-readable storage medium, which can improve the accuracy of music information display.
  • the embodiment of the present application provides a music information display method, which is executed by an electronic device, including:
  • in response to a triggering operation on a music information display control in the music playback page of the currently played target music, a music information display page of the target music is displayed, where the music information display page includes an audio track corresponding to at least one music element in the target music and a beat line corresponding to the audio track, the beat line being used to indicate the beat of the target music;
  • based on the playback progress of the target music, the audio track and the beat line following the movement of the audio track are scrolled and displayed on the music information display page.
  • An embodiment of the present application provides a music information display device, including:
  • a first display unit configured to display a music playback page of the currently played target music, where the music playback page includes a music information display control
  • a second display unit configured to display a music information display page of the target music in response to a triggering operation on the music information display control, where the music information display page includes an audio track corresponding to at least one music element in the target music and a beat line corresponding to the audio track, the beat line being used to indicate the beat of the target music;
  • a scrolling display unit configured to scroll and display the audio track and the beat line following the movement of the audio track on the music information display page based on the playback progress of the target music.
  • An embodiment of the present application also provides an electronic device, including a processor and a memory.
  • the memory stores an application program.
  • the processor is configured to run the application program in the memory to implement the music information display method provided by the embodiments of the present application.
  • Embodiments of the present application also provide a computer-readable storage medium that stores a plurality of instructions, the instructions being suitable for loading by a processor to execute the steps in any music information display method provided by the embodiments of the present application.
  • An embodiment of the present application also provides a computer program product, which includes computer-executable instructions or computer programs.
  • when the computer-executable instructions or computer program are executed by a processor, the steps in any music information display method provided by the embodiments of the present application are implemented.
  • The embodiments of the present application can display, while playing the target music, the audio track corresponding to at least one music element in the target music and the beat line corresponding to that audio track; moreover, the audio track and the beat line can be scrolled according to the playback progress of the target music. This scrolling display accurately presents the content contained in the target music, thus improving the accuracy of music information display.
  • Figure 1 is a schematic scene diagram of a music information display method provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a music information display method provided by an embodiment of the present application.
  • Figure 3 is a schematic diagram of a music to be played page provided by an embodiment of the present application.
  • Figure 4 is a schematic diagram of the first page of the music playback page provided by the embodiment of the present application.
  • Figure 5 is a schematic diagram of a local to-be-uploaded content page provided by an embodiment of the present application.
  • Figure 6 is a schematic diagram of the second page of the music playback page provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of the music information display page provided by the embodiment of the present application.
  • Figure 8 is a schematic diagram of the waveforms of different musical instruments provided by the embodiment of the present application.
  • Figure 9 is a schematic diagram of the sound waveform provided by the embodiment of the present application.
  • Figure 10 is a schematic flowchart of uploading audio files of target music to a server provided by an embodiment of the present application
  • Figure 11 is a schematic flow chart of visual processing of audio by the server provided by the embodiment of the present application.
  • Figure 12 is a schematic flow chart of the audio server parsing the audio file of the target music provided by the embodiment of the present application;
  • Figure 13 is a schematic flowchart of dynamic playback of drum tracks provided by an embodiment of the present application.
  • Figure 14 is a schematic diagram of the player callback log provided by the embodiment of the present application.
  • Figure 15 is a schematic diagram of the callback interval data provided by the embodiment of the present application.
  • Figure 16 is a schematic diagram of bucketing drum beats provided by an embodiment of the present application.
  • Figure 17 is a schematic diagram of calculating the sequence number of the beat line corresponding to the beat to be played according to the embodiment of the present application.
  • Figure 18 is a schematic diagram of the beat display area provided by the embodiment of the present application.
  • Figure 19 is a schematic flow chart of the metronome and drum track synchronization algorithm provided by the embodiment of the present application.
  • Figure 20 is a schematic diagram of a displayed beat identifier provided by an embodiment of the present application.
  • Figure 21 is a schematic diagram of a music information display page after adding a mask layer provided by an embodiment of the present application.
  • Figure 22 is a schematic diagram of prompt information in the music information display page provided by the embodiment of the present application.
  • Figure 23 is a schematic diagram of visualizing music through two visualization paths provided by the embodiment of the present application.
  • Figure 24 is another schematic flow chart of the music information display method provided by the embodiment of the present application.
  • Figure 25 is a schematic structural diagram of a music information display device provided by an embodiment of the present application.
  • Figure 26 is another structural schematic diagram of the music information display device provided by the embodiment of the present application.
  • Figure 27 is another structural schematic diagram of the music information display device provided by the embodiment of the present application.
  • Figure 28 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Embodiments of the present application provide a music information display method, device and computer-readable storage medium.
  • the music information display device can be integrated in an electronic device, and the electronic device can be a server, a terminal or other equipment.
  • the server can be an independent physical server, or a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, content delivery networks (CDN), big data, and artificial intelligence platforms.
  • the terminal can be a smartphone, tablet, laptop, desktop computer, smart speaker, smart watch, etc., but is not limited to this.
  • the terminal and the server can be connected directly or indirectly through wired or wireless communication methods, which is not limited in this application.
  • After displaying the music playback page of the currently played target music, the electronic device responds to a triggering operation on the music information display control in the music playback page and displays the music information display page of the target music; the music information display page includes an audio track corresponding to at least one music element in the target music and a beat line corresponding to the audio track. Then, based on the playback progress of the target music, the audio track and the beat line following the movement of the audio track are scrolled on the music information display page, thereby improving the accuracy of the music information display.
  • In this application, "in response to" is used to represent the condition or state on which a performed operation depends. When the dependent condition or state is satisfied, the one or more operations performed may be in real time or may have a set delay; unless otherwise specified, there is no restriction on the execution order of multiple operations performed.
  • the music information display device can be integrated in an electronic device.
  • the electronic device can be a server or a terminal.
  • the terminal can include a tablet computer, a laptop computer, a personal computer (PC), a wearable device, a virtual reality device, or another smart device capable of displaying music information.
  • Step 101 Display the music play page of the currently played target music.
  • the music playback page is a page that plays the current target music
  • the music playback page includes a music information display control for the target music.
  • the music information display control is used to trigger the display of music information of the target music.
  • the so-called music information can be music elements in the target music.
  • Music elements can be sound melodies in the target music, and they make up the content of the target music. There can be many types of music elements; for example, they can include a variety of sound melodies such as vocal pitch, accompaniment, and musical instruments, and instrument melodies can include melodies for drums, bass, guitar, or other instruments.
  • the music playback page can also include information such as the basic melody, playback time and lyrics of the target music when playing the target music.
  • For example, a music to-be-played page can be displayed, where the music to-be-played page includes a to-be-played music list; in response to a selection operation on the to-be-played music list, the target music corresponding to the selection operation is played, and the music playback page of the currently played target music is displayed.
  • the music to be played page 31 may be a display page of the music to be played in the music library.
  • the music to be played page may include a music to be played list 32, and may also include a music classification list 33 and a search control 34.
  • the to-be-played music list can be a list of popular music in the current music library.
  • the user operation page includes a creation control; the user can trigger the creation control on the user operation page, and in response to the trigger operation on the creation control, the music to-be-played page corresponding to the music library is displayed.
  • the to-be-played music list may be replaced with the to-be-played music list of the target music category corresponding to the selection operation in response to the selection operation on the music category list.
  • the target music corresponding to the selection operation can be played in response to the selection operation on the music to be played list, and the music playing page of the currently played target music is displayed.
  • the music playback page can display a playback control 41 for pausing and resuming the music, and can also display a playback progress bar 42 for indicating the current playback progress of the target music.
  • the music to be played page may also include an upload control 51, which is used to upload the local content of the terminal and play the target music in the local content of the terminal. Therefore, after displaying the music to-be-played page, a local to-be-uploaded content page may also be displayed in response to the triggering operation of the upload control 51.
  • the local to-be-uploaded content page includes at least one piece of content to be uploaded; in response to a selection operation on the content to be uploaded, the target music of the content to be uploaded corresponding to the selection operation is played, and the music playback page of the currently played target music is displayed.
  • the local to-be-uploaded content page is used to display the local to-be-uploaded content of the terminal.
  • the content to be uploaded can be in various forms, such as audio, video, and other content containing audio or music. Taking the content to be uploaded as a video as an example, in response to the triggering operation of the upload control of the music to be played page, the local content to be uploaded page 52 is displayed. There may be many ways to display the local to-be-uploaded content page 52.
  • For example, the local photo album or local content database of the terminal is pulled up, the local to-be-uploaded content page 52 is generated based on the local photo album or local content database, and the local to-be-uploaded content page 52 is displayed.
  • the target music of the content to be uploaded corresponding to the selection operation is played, and the music playback page of the currently played target music is displayed, for example, as shown in Figure 6
  • the music playback page includes a control 62 indicating that playback is currently in the playing state, a playback progress bar 63, and a music information display control 64.
  • the target content to be uploaded corresponding to the selection operation is filtered out from the content to be uploaded.
  • When the target content to be uploaded is audio, the target content to be uploaded is used directly as the audio file of the target music; when the target content to be uploaded is video or another type of content, the audio file can be extracted from the target content to be uploaded and used as the audio file of the target music. This eliminates the need for users to manually convert formats and makes playback compatible with content in various formats.
  • Step 102 In response to a triggering operation on the music information display control, display the music information display page of the target music.
  • the music information display page includes audio tracks corresponding to at least one music element in the target music (a vocal track 71, a drum beat track 72, an accompaniment track 73 and a bass track 74) and a beat line 79 corresponding to the audio tracks, and may also include a music element identifier indicating a music element of the target music.
  • the so-called audio track may be a visual track that displays a music element in the target music, and the beat line is used to indicate the beat of the target music. Taking the music elements of the target music being vocals, drum beats, bass and accompaniment as an example, the details of each audio track on the music information display page can be as follows:
  • the vocal track 71 is represented by the vocal identifier 75; the vocal track file separated by the track separation algorithm is used as input, the vocal pitch is extracted by an algorithm (for example, obtaining 120 pitch values), the vocal pitch is normalized, and it is displayed on the interface as a scale diagram with 24 pitch layers, achieving the effect of drawing the scale as the vocal pitch rises and falls in the current song;
  • the drum beat track 72 is represented by the drum beat identifier 76; the drum beat track file separated by the track separation algorithm is used as input, the drum beats in the basic rhythm of the music are identified by an algorithm and divided into heavy drums and light drums, with the heavy drum using one identifier (such as a large blue circle) and the light drum using another identifier (such as a small green circle); when a drum beat is played, a drum-hit effect is simulated with an amplification and diffusion animation, allowing users to understand the rhythm of the current drum beat more clearly;
  • the accompaniment track 73 is represented by the accompaniment identifier 77; the accompaniment track file separated by the track separation algorithm is used as input, and the pitch of the accompaniment melody is drawn through a waveform drawing algorithm, making it easy for the user to visually follow the ups and downs of the accompaniment melody;
  • the bass track 74 is represented by the bass identifier 78; the bass track file separated by the track separation algorithm is used as input, and the presence or absence of the bass sound is drawn through a sound recognition algorithm. Since the bass itself is at a low frequency and is difficult for users to hear clearly, drawing the bass track makes it easier for users to understand the overall composition of the music.
  • the music information display page may also include a lyrics display area, which is used to display the lyrics information of the target music currently being played, etc.
  • There can be multiple ways to display the music information display page of the target music, specifically as follows: in response to the trigger operation on the music information display control, obtain the audio track data of each music element in the target music; based on the audio track data, draw the audio track corresponding to each music element on a preset music information display page; and draw the beat line on the audio track according to the current playing time of the target music to obtain the music information display page of the target music, that is, the updated preset music information display page, which is then displayed.
  • the track data can be understood as the data corresponding to a music element in the track. Taking the music element as a drum beat as an example, the track data can be understood as all the drum beat information in the music.
  • There can be many ways to obtain the audio track data of each music element in the target music. For example, you can obtain historical audio track data and query it for the target historical audio track data corresponding to the target music; when the target historical audio track data does not exist, the target music is segmented into the audio track data corresponding to each music element, and when the target historical audio track data exists, the audio track data of each music element is filtered out from the target historical audio track data.
  • the historical track data can be understood as the track data obtained after segmenting the music track before the current moment.
  • When the target historical track data does not exist in the historical track data, it means that the target music has not been segmented before the current moment; at this time, the target music needs to be segmented into tracks.
  • Figure 8 is a schematic diagram of the waveforms of different musical instruments, and Figure 9 is a schematic diagram of a sound waveform. From these it can be seen that the final sound waveform is a combination of different waveforms; that is, the currently playing target music can also be understood as a combination of multiple different waveforms. Segmenting the target music means separating these different waveforms and presenting them in the form of audio tracks.
  • There can be many ways to segment the target music into the audio track data corresponding to its music elements. For example, a trained track segmentation network can be used to separate the frequency-domain matrix of the target music into multiple track matrices, and each track matrix is used as the track data of one music element. The network structure of the track segmentation network can adopt various image segmentation networks from related technologies, such as neural network models. During training, the segmentation results can be compared with the original corresponding single-track data, thereby obtaining segmentation loss information.
  • There are many ways to segment the target music into tracks on the server. When the target music comes from uploaded content, the audio file needs to be extracted from the content to be uploaded and sent to the server, and the server uses the trained track segmentation network to segment the audio file. When the target music is music the user selected from the to-be-played music list, there is no need to upload the audio file of the target music; the server can directly segment the audio file of the target music into tracks to obtain the track data corresponding to each music element.
  • the process of uploading the audio file of the target music to the server may include: filtering out the target permission information of the client from preset permission information (which includes the permissions granted to the client), and sending the audio file of the target music together with the target permission information to the server.
  • the process of uploading the audio file of the target music to the server can be shown in Figure 10.
  • the user enters the local to-be-uploaded content page and selects a video to confirm the import; the music information display device extracts the audio file of the target music and obtains the client's permission package.
  • the audio file and permission package of the target music are sent to the server's upload center through the client.
  • the upload center unpacks the permission package through the business-side upload module and obtains the client's permission information.
  • the upload center sends the permission information to the server's login center through the business-side upload module.
  • In step 204, the login center verifies the client's permissions based on the permission information, that is, it compares the client's permissions with the permissions granted to the client in the preset permission information; if they are consistent, the verification result is success.
  • In step 205, the login center returns the verification result to the upload center.
  • In step 206, if the verification result is success, the upload center generates the file identifier of the target music through the business-side upload module, and then, in step 207, stores the audio file of the target music in the cloud database. After storing the audio file of the target music, the cloud database can return the storage address of the target music, such as an address on a content delivery network (CDN).
  • the upload center can return the file identifier and storage address of the target music to the terminal in step 209, and in step 210, when the client receives the file identifier and storage address of the target music, the terminal plays the target music.
  • the target music can be visualized through the server, and the target music can be parsed into discrete data of different tracks, thereby obtaining the track data of the target music.
  • in response to the trigger operation on the music information display control, a visualization task of the target music is created, and the task parameters of the visualization task are sent to the server so that the server can query the historical track data of the target music; when there is no historical track data on the server, the audio track data returned by the server after segmenting the target music into tracks is received; based on the audio track data, the audio track is drawn on the preset music information display page.
  • When parsing the target music into track data of different tracks, in step 301, the terminal creates a visualization task in response to the triggering operation on the music information display control.
  • the logical server in the server performs parameter verification on the visualization task.
  • the logical server queries whether historical track data of the target music exists; when there is no historical track data, in step 304, the logical server creates a visualization record.
  • the logical server sends the file identification of the target music to the audio server.
  • the audio server creates an audio track segmentation task based on the file identification.
  • the audio server returns the task identifier to the logical server.
  • the logical server stores the task identifier in the cache.
  • when the audio server creates the track segmentation task, it may also create a step sub-pipeline corresponding to each step of the track segmentation task.
  • after completing the track segmentation, the audio server sends the audio track data of the target music to the logical server.
  • the logical server sends the audio track data of the target music to the terminal.
  • according to the audio track data, the terminal draws the audio track corresponding to each music element of the target music.
  • In step 313, the logical server can also send the step sub-pipeline to the Wormhole. In step 314, the pipeline server runs the pipeline task, and in step 315, sends a pipeline acquisition request to the Wormhole. In step 316, the Wormhole returns the step sub-pipeline to the pipeline server, and in step 317, the Wormhole stores the step sub-pipeline in the pipeline database.
  • Wormhole is a streaming data processing platform for real-time big data project implementers, which can replace other streaming processing platforms, such as Spark, Storm, etc.
  • the pipeline server and pipeline database are used to ensure the security of audio track data. Once the audio track data is abnormal or lost, the audio track data can be restored through the pipeline database.
  • the audio server processes the audio file of the target music, mainly segmenting the tracks through the audio service, and in step 402, uploads the processing result to the track database.
  • After the audio server splits the audio file, it can also return the processing result to the logical server in step 403, so that the logical server can update the visualization record in step 404 and then perform the next visualization task in step 406.
  • the audio server can send the visualization flow of the target music to the wormhole in step 407.
  • In step 408, the terminal generates a track segmentation progress polling request based on the target music, and in step 409, sends the track segmentation progress polling request to the logical server. In step 410, the logical server queries the visualization record and obtains the track segmentation progress; in step 411, the track segmentation progress is packaged (that is, encapsulated into a data packet), and in step 412, the track segmentation progress of the target music is returned to the terminal.
  • In step 413, when the track segmentation progress indicates that the track segmentation of the target music is completed, the terminal downloads the track data of the target music. In step 414, the terminal draws the audio track of each music element of the target music on the preset music information display page based on the audio track data, obtains the music information display page, and displays it. It takes a certain amount of time for the server to separate the audio tracks, and the time varies depending on the music.
  • Therefore, the music information display device, or the client in the music information display device, can send a query request to the logical server through a polling interface to query the track segmentation progress of the target music; the logical server queries the visualization record of the target music in the track database; when segmentation is complete, the segmented track data is packaged, and the progress and the packaged track data are returned; the terminal then downloads the track data and draws the audio tracks based on it to display the music information display page.
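  • As a rough illustration of the client side of this polling flow, the sketch below assumes a hypothetical progress endpoint returning a hypothetical TrackProgress payload (neither the endpoint nor the schema is specified here) and hands over the packaged track-data URL once segmentation is reported as finished.

```swift
import Foundation

// Hypothetical response shape for the track-segmentation progress query.
struct TrackProgress: Decodable {
    let finished: Bool
    let trackDataURL: URL?
}

final class TrackSegmentationPoller {
    private var timer: Timer?

    /// Polls `progressURL` every few seconds; once the server reports the
    /// segmentation as finished, hands the track-data URL to the caller.
    func startPolling(progressURL: URL,
                      intervalSeconds: TimeInterval = 2,
                      onFinished: @escaping (URL) -> Void) {
        timer = Timer.scheduledTimer(withTimeInterval: intervalSeconds, repeats: true) { [weak self] _ in
            URLSession.shared.dataTask(with: progressURL) { data, _, _ in
                guard let data = data,
                      let progress = try? JSONDecoder().decode(TrackProgress.self, from: data),
                      progress.finished,
                      let url = progress.trackDataURL
                else { return }                     // not finished yet: keep polling
                DispatchQueue.main.async {
                    self?.timer?.invalidate()
                    onFinished(url)                 // caller downloads the packaged track data
                }
            }.resume()
        }
    }
}
```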
  • After the audio server splits the audio file of the target music, the segmented track data is uploaded to the track database for storage. After obtaining the audio track data of at least one music element in the target music, the audio track corresponding to the music element can be drawn on the preset music information display page based on that track data. There can be many ways to draw the audio track, for example:
  • extract the audio data of each track element in the audio track from the audio track data to obtain an audio data set;
  • filter the audio data in the audio data set to obtain a filtered audio data set
  • draw an audio track on the preset music information display page based on the filtered audio data set.
  • the audio track element can be understood as each element that makes up the audio track data.
  • For example, the track elements in the drum beat track data can be drum beats. Track elements can also include many types; taking drum beats as an example, they can be further divided into light drums, heavy drums, and so on. Filtering the audio data in the audio data set is mainly intended to filter out abnormal data, and there can be many ways to do so.
  • For example, the audio data in the audio data set can be sorted; based on the sorting information, the time interval between adjacent audio data items is calculated, and audio data whose time interval does not exceed a preset time interval threshold is deleted from the audio data set to obtain the filtered audio data set.
  • the preset time interval threshold can be 0 or other thresholds. When the preset time interval threshold is 0, audio data with a time interval of 0 can be deleted from the audio data set.
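  • A minimal sketch of this interval-based filtering step, assuming a hypothetical DrumBeat model (timestamp in milliseconds plus a heavy/light flag), is:

```swift
// Hypothetical drum-beat model used in the sketches that follow.
struct DrumBeat {
    let timestampMs: Int   // trigger time of the beat, in milliseconds
    let isHeavy: Bool      // heavy drum vs. light drum
}

/// Sorts the beats by time and drops any beat whose gap to the previously
/// kept beat does not exceed `thresholdMs` (a threshold of 0 removes exact duplicates).
func filterBeats(_ beats: [DrumBeat], thresholdMs: Int = 0) -> [DrumBeat] {
    let sorted = beats.sorted { $0.timestampMs < $1.timestampMs }
    var result: [DrumBeat] = []
    for beat in sorted {
        if let last = result.last, beat.timestampMs - last.timestampMs <= thresholdMs {
            continue   // interval too small: treat as abnormal or duplicate data
        }
        result.append(beat)
    }
    return result
}
```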
  • After filtering the audio data, you can draw the audio track on the preset music information display page based on the filtered audio data set. There are many ways to draw the audio track. Taking the drum beat track data as an example, since there is a large amount of drum beat data, lightweight CALayer objects (a UI drawing tool) can be used instead of heavyweight UIView objects when drawing drum beats, to improve rendering performance; a sketch of this approach is shown below.
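  • The sketch below illustrates the CALayer approach under assumed layout constants and the hypothetical DrumBeat model from the previous sketch; it is not the patent's actual drawing code.

```swift
import UIKit

// Drum-beat markers drawn as CAShapeLayer sublayers rather than UIView
// subviews, to keep per-beat rendering cost low.
final class DrumTrackView: UIView {
    var pixelsPerMs: CGFloat = 0.05   // assumed horizontal scale: points per millisecond

    func render(beats: [DrumBeat]) {
        layer.sublayers?.forEach { $0.removeFromSuperlayer() }
        for beat in beats {
            let radius: CGFloat = beat.isHeavy ? 8 : 4
            let x = CGFloat(beat.timestampMs) * pixelsPerMs
            let circle = CAShapeLayer()
            circle.path = UIBezierPath(
                ovalIn: CGRect(x: x - radius,
                               y: bounds.midY - radius,
                               width: radius * 2,
                               height: radius * 2)).cgPath
            // heavy drums as larger blue circles, light drums as smaller green circles
            circle.fillColor = (beat.isHeavy ? UIColor.systemBlue : UIColor.systemGreen).cgColor
            layer.addSublayer(circle)
        }
    }
}
```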
  • the same processing method can be used for different types of audio tracks, or the audio track can be drawn using a processing method corresponding to the audio track type according to the audio track type.
  • the obtained drum track data is json data.
  • the json data can be as follows:
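  • Purely as an illustration of how such JSON might be decoded on the terminal, the model below uses hypothetical field names; it is not the actual schema.

```swift
import Foundation

// Hypothetical Codable model for drum-beat track JSON; field names are assumptions.
struct DrumTrackJSON: Decodable {
    struct Beat: Decodable {
        let time: Int        // trigger time in milliseconds
        let type: String     // e.g. "heavy" or "light"
    }
    let beats: [Beat]
}

// let drumTrack = try JSONDecoder().decode(DrumTrackJSON.self, from: jsonData)
```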
  • When the target historical track data exists, it means that the target music has already been divided into multiple pieces of track data; at this time, there is no need to re-segment the tracks of the target music. You can directly filter out the track data corresponding to each music element from the target historical track data and draw the audio tracks based on it; the method of drawing audio tracks is described above and is not repeated here.
  • After drawing the audio tracks, you can draw beat lines on the audio tracks according to the current playing time of the target music to obtain the music information display page of the target music.
  • There are many ways to draw beat lines. For example, you can filter out the drum beat track among the audio tracks and determine the beat information of the target music based on the drum beat track; based on the current playing time and the beat information of the target music, identify the beat line positions of the played beats on the audio track; and based on those beat line positions, draw the beat lines of the played beats on the audio track to obtain the music information display page of the target music.
  • the beat information can be understood as information indicating the beat in the target music; the beat information can include information such as the number of beats contained in the target music, beat type, beat spacing, and the beat line position corresponding to each beat.
  • There are many ways to determine the beat information of the target music based on the drum beat track. For example, the drum beat type and time information of each drum beat are identified in the drum beat track; based on the time information, the time intervals between drum beats of different drum beat types are calculated to obtain a time interval set, and the beat information of the target music is determined according to the drum beat types and the time interval set.
  • the drum beat type is used to indicate the type of drum beats in the drum beat track.
  • There can be multiple types of drum beats; for example, they can include light drums and heavy drums.
  • Time information can be understood as the trigger time or playback time of the drum beats in the drum track. According to the drum beat type and time interval set, there are many ways to determine the beat information of the target music.
  • For example, sort the drum beat track and, based on the sorting information, filter out the drum beat with the earliest time information in the drum beat track, that is, the drum beat that sounds first, to obtain the candidate drum beat; when there is one candidate drum beat, use it as the target drum beat, and when there are multiple candidate drum beats, use the drum beat following the candidate drum beats as the target drum beat.
  • the beat type and time position of each beat line of the target music can be determined based on the time interval and the time information of the target drum beat, thereby obtaining the beat information of the target music.
  • There are many ways to determine the beat type and time position of the beat lines. For example, according to the drum beat type of the target drum beat, determine the beat type of the beat line corresponding to the target drum beat; by combining the time information of the target drum beat with the time interval, obtain the time position of each beat line of the target music; then, according to the beat type of the beat line corresponding to the target drum beat, determine the beat type of each beat line of the target music.
  • the timing sequence of the drum beats of the target music is as follows:
  • Light drum (PuckingDrum): 10740, 11330, 11920, 12520, 13110, 13710...
  • 10740 ms is the time position of the beat line corresponding to the first drum beat, and the beat type of this beat line can be a heavy beat; 11330 ms is the time position of the beat line corresponding to the second drum beat (the target drum beat), and the beat type of this beat line can be a tap. Extrapolating backwards by one time interval, 10150 ms is the time position of the beat line preceding the one corresponding to the first drum beat, with beat type tap; extrapolating backwards by another time interval, 9560 ms is the time position of the beat line before that, with beat type heavy beat; and so on. The time positions are not necessarily divisible by whole seconds, and that does not matter; there does not have to be a beat line at 0 seconds. The main logic is that the time position and beat type of each beat line of the target music are determined based on the time interval.
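  • Under assumed types, the extrapolation above can be sketched as follows; the two-beat heavy/tap alternation mirrors the example and is an illustrative assumption rather than a general rule.

```swift
enum BeatType { case heavy, tap }

struct BeatLine {
    let timeMs: Int
    let type: BeatType
}

/// Generates beat lines over [0, durationMs], anchored at the target drum beat
/// (`referenceMs`) and spaced by `intervalMs`, alternating heavy/tap around it.
func beatLines(referenceMs: Int,
               referenceType: BeatType,
               intervalMs: Int,
               durationMs: Int) -> [BeatLine] {
    guard intervalMs > 0 else { return [] }
    let firstIndex = -(referenceMs / intervalMs)               // earliest line with time >= 0
    let lastIndex = (durationMs - referenceMs) / intervalMs
    guard firstIndex <= lastIndex else { return [] }
    return (firstIndex...lastIndex).map { k in
        let time = referenceMs + k * intervalMs
        let flip = (abs(k) % 2 == 1)                           // every other beat line flips type
        let type: BeatType = flip
            ? (referenceType == .heavy ? .tap : .heavy)
            : referenceType
        return BeatLine(timeMs: time, type: type)
    }
}

// With referenceMs = 11330, referenceType = .tap, intervalMs = 590, this yields
// ..., 9560 (heavy), 10150 (tap), 10740 (heavy), 11330 (tap), ...
```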
  • the beat line position of the played beat can be identified on the audio track based on the current playing time and beat information of the target music.
  • There are many ways to identify the beat line position of a played beat. For example, you can compare the current playback moment of the target music with the time positions of the beat lines and, based on the comparison results, filter out the historical beat lines of played beats; then obtain the timeline information of the audio track and, based on the timeline information and the time positions of the historical beat lines, identify the beat line positions of the played beats on the audio track.
  • the current playback time can be understood as the time when the user triggers the music information display control on the music information display page, or it can also be the time when the audio track corresponding to at least one music element of the target music is displayed on the music information display page.
  • the historical beat line can be a beat line whose time position is before the current playback moment. There can be many ways to filter out the historical beat lines of played beats from the beat lines. For example, if the current playback time is 2000 ms, you can filter out the beat lines whose time positions are before 2000 ms, thereby obtaining the historical beat lines.
  • the timeline information can be used to indicate the time information corresponding to each audio track element in the audio track, or can also be understood as the time information corresponding to each position in the audio track. Based on the timeline information and the time position of the historical beat line, there can be multiple ways of identifying the beat line position of the played beat on the track. For example, based on the timeline information, the target position corresponding to the time position of the played beat is identified on the audio track, and the target position is used as the beat line position of the played beat.
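  • A small sketch of mapping a time position to a horizontal position on the drawn track, assuming a linear timeline in which the track width corresponds to the music duration (the names are illustrative):

```swift
import CoreGraphics

struct TrackTimeline {
    let durationMs: Int      // total duration represented by the track
    let trackWidth: CGFloat  // width of the drawn track area, in points
}

/// Returns the x-offset on the track for a beat line (or played beat) at `timeMs`.
func beatLinePosition(timeMs: Int, on timeline: TrackTimeline) -> CGFloat {
    guard timeline.durationMs > 0 else { return 0 }
    return CGFloat(timeMs) / CGFloat(timeline.durationMs) * timeline.trackWidth
}
```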
  • the played beat does not necessarily exist.
  • the drum track on the music information display page can also be replaced with a beat track.
  • the so-called beat track can be a beat line indicating the target music. audio track.
  • There are many ways to replace the drum track with a beat track. For example, convert the beat information into the target track data of the beat track, draw the beat track on the preset music information display page based on the target track data, and replace the drum track with the beat track on the preset music information display page to obtain the music information display page of the target music.
  • For example, the beat information can be converted into the data format of the track data to obtain the target track data; alternatively, the beat lines in the beat information can be merged to generate a beat line data group, and the beat line data group is sorted in time order to obtain the target track data of the beat track.
  • Step 103 Based on the playback progress of the target music, scroll the audio track and the beat line that follows the movement of the audio track on the music information display page.
  • the playback progress is used to indicate the progress information of the target music during playback, and the playback progress is continuously updated as the target music is played.
  • The details can be as follows: based on the playback progress of the target music, the audio track is scrolled on the music information display page and dynamic effects are played on the audio track; according to the beat identification line and the playback progress of the target music, beat lines are drawn on the audio track so that the beat lines follow the audio track in the scrolling display.
  • the details can be as follows:
  • the audio track is scrolled and displayed on the music information display page, and dynamic effects are played on the audio track.
  • For example, the audio track segments corresponding to each playback moment can be identified in the audio track and sorted; based on the sorting information and the playback progress of the target music, the audio track segments are scrolled and displayed, and dynamic effects are played on the audio track.
  • There can be multiple ways of identifying the audio track segment corresponding to each playback moment in the audio track. For example, on the music information display page, identify the track area where the audio track is displayed, and determine the display time interval corresponding to each playback moment based on the area information of the track area; then, based on the timeline information of the audio track, filter out the track segments corresponding to each display time interval and take them as the track segments corresponding to the playback moments.
  • the audio track segments can be scrolled and displayed based on the sorting information and the playback progress of the target music. There are many ways to scroll the display. For example, based on the playback progress of the target music, the audio track segments corresponding to each playback moment can be scrolled through the playback timeline.
  • the beat identification line is used to indicate the beat position of the target music during playback. It can also be the current playback moment of the target music in the playback timeline. On the music information display page, it can be the center white line at the current moment.
  • Based on the beat identification line, there are many ways to identify the currently playing target drum beat in the drum track. For example, the position information of the beat identification line in the drum track and the position interval of each drum beat in the drum track can be obtained; the position information is matched against the position intervals, and the successfully matched drum beats in the drum track are filtered out as the target drum beats currently being played.
  • the number of target drum beats to be screened out may be one or more.
  • There are many ways to perform dynamic playback. For example, you can obtain the historically played drum beat set and query it for the target drum beat; when the target drum beat does not exist in the historically played drum beat set, obtain the drum beat type of the target drum beat and determine the dynamic effect type of the target drum beat based on it; based on the dynamic effect type, play the target drum beat with dynamic effects on the drum track and add the target drum beat to the historically played drum beat set.
  • the historically played drum beat set may be a drum beat set composed of drum beats that have been played before the current moment in the drum beat track of the target music.
  • The drum beat type of the target drum beat can be obtained to determine the dynamic effect type of the target drum beat: when the target drum beat is a light drum, the dynamic effect type of the target drum beat can be the light drum dynamic effect type, and when the target drum beat is a heavy drum, the dynamic effect type can be the heavy drum dynamic effect type.
  • the target drum beat can be dynamically played on the drum track based on the dynamic effect type.
  • There can be many ways to play the dynamic effect. For example, you can simulate a drum hit and display an amplification and diffusion effect so that the user can understand the rhythm of the current drum beat more clearly, or the drum beat can be played with a vibration effect; the vibration amplitude of the light drum can be smaller than that of the heavy drum, or the vibration effect can simultaneously trigger the music information display device to vibrate during playback.
  • After querying the historically played drum beat set for the target drum beat, when the target drum beat exists in the set, it means that the target drum beat has already been played before the current moment. At this time, the current position information of the beat identification line in the drum track can be obtained: if the current position information no longer matches the position interval of the target drum beat, the target drum beat is deleted from the historically played drum beat set; if the current position information still matches the position interval of the target drum beat, playback of the target music simply continues, and there is no need to perform dynamic playback on the target drum beat again.
  • In step 501, it is detected that the user triggers the playback control.
  • In step 502, the target music is played.
  • In step 503, the timeline is controlled to scroll.
  • In step 504, all target drum beats that intersect the white line at the current moment are found.
  • In step 505, it is queried whether the target drum beat is in the played array; when the target drum beat is not in the played array, the flow goes to step 506.
  • In step 506, the drum beat animation is played, and in step 507, the target drum beat is put into the played array.
  • When the target drum beat is already in the played array, the flow goes to step 508 to query whether the current moment has passed the range of the target drum beat.
  • In step 508, when the current moment has passed the range of the target drum beat, the target drum beat is deleted from the played array, and then the flow returns to step 502 to continue playing the target music; a minimal sketch of this loop is shown below.
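  • A rough sketch of this played-array loop, reusing the hypothetical DrumBeat model and assuming a tolerance window for "intersects the white line", is:

```swift
final class DrumBeatAnimator {
    private var playedBeats = Set<Int>()   // the "played array", keyed by beat timestamp

    /// Called on every player callback with the current playback time.
    func onPlaybackTick(nowMs: Int, beats: [DrumBeat], hitWindowMs: Int = 40) {
        for beat in beats {
            if abs(beat.timestampMs - nowMs) <= hitWindowMs {
                // the beat intersects the current-moment line
                if !playedBeats.contains(beat.timestampMs) {
                    playAnimation(for: beat)                 // step 506: play the drum beat animation
                    playedBeats.insert(beat.timestampMs)     // step 507: remember it as played
                }
            } else if nowMs > beat.timestampMs + hitWindowMs {
                // the current moment has passed the beat's range (step 508)
                playedBeats.remove(beat.timestampMs)
            }
        }
    }

    private func playAnimation(for beat: DrumBeat) {
        // e.g. an amplification-and-diffusion (scale up, fade out) animation
    }
}
```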
  • For example, the frame rate of the player is 30, so the player calls back roughly every 33 ms; in the millisecond dimension the player's callbacks are therefore sparse, and a drum beat may not coincide exactly with a callback time. The player also cannot guarantee that callbacks strictly follow the frame rate, because whenever a frame is dropped the subsequent data is offset.
  • For example, based on the beat identification line and the playback progress of the target music, determine the offset distance of the current timeline of the drum track; calculate the ratio of the offset distance to the time interval of the beat lines, and use the integers obtained by rounding this ratio up and down as the beat line information of the beat to be played in the current track segment, that is, the beat line serial number; based on the beat line information, draw the beat line of the beat to be played on the current track segment.
  • The offset distance of the current timeline of the drum track can be the offset of the beat identification line relative to the time origin of the drum track, or it can also be understood as the distance between the playback time corresponding to the current playback progress and the playback start point.
  • Based on the beat identification line and the playback progress of the target music, there are many ways to determine the offset distance of the current timeline of the drum track. For example, you can filter out the target drum beat bucket corresponding to the current frame based on the playback progress of the target music; identify, in the target drum beat bucket, the target drum beat closest to the beat identification line and calculate the current distance between that drum beat and the beat identification line; and then determine the offset distance of the current timeline of the drum track based on the current distance and the position information of the target drum beat; a sketch of this calculation is shown below.
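  • A minimal sketch of this calculation, reusing the hypothetical DrumBeat model, with the offset derived from the nearest bucketed drum beat and its signed distance to the identification line, and the serial numbers taken as the floor and ceiling of the offset divided by the interval:

```swift
import Foundation

/// Offset of the current timeline, derived from the drum beat in the current
/// bucket that is closest to the beat identification line.
func currentTimelineOffsetMs(identificationLineMs: Int, bucket: [DrumBeat]) -> Int? {
    guard let nearest = bucket.min(by: {
        abs($0.timestampMs - identificationLineMs) < abs($1.timestampMs - identificationLineMs)
    }) else { return nil }
    let distance = identificationLineMs - nearest.timestampMs   // signed distance to the line
    return nearest.timestampMs + distance
}

/// Serial numbers of the beat lines surrounding the current moment:
/// rounding the ratio down gives the line just played, rounding it up
/// gives the beat line about to be played.
func beatLineSerials(offsetMs: Int, beatIntervalMs: Int) -> (previous: Int, next: Int)? {
    guard beatIntervalMs > 0 else { return nil }
    let ratio = Double(offsetMs) / Double(beatIntervalMs)
    return (Int(floor(ratio)), Int(ceil(ratio)))
}
```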
  • drum beat bucketing can be understood as data bucketing for storing drum beats in a drum track.
  • Before this, the drum beats in the drum beat track can be stored in preset buckets. For example, you can extract at least one drum beat from the drum track, identify the time information of each drum beat, classify the drum beats based on the time information to obtain the target drum beats corresponding to each time interval, filter out the target bucket corresponding to each time interval from the preset buckets, and add the target drum beats of each time interval to the target bucket of that interval, thereby obtaining the drum beat bucket corresponding to each time interval.
  • For example, according to the timestamps of the drum beats, the drum beats whose timestamps fall within the same second are grouped into one bucket.
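  • A small sketch of this bucketing step, again reusing the hypothetical DrumBeat model: group the beats by the second of their timestamp so that only one bucket has to be searched at playback time.

```swift
/// Buckets drum beats by the whole second of their timestamp.
func bucketBeatsBySecond(_ beats: [DrumBeat]) -> [Int: [DrumBeat]] {
    Dictionary(grouping: beats) { $0.timestampMs / 1000 }
}

// Usage: at a callback whose playback time is `nowMs`,
// let buckets = bucketBeatsBySecond(allBeats)
// let currentBucket = buckets[nowMs / 1000] ?? []
```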
  • the beat to be played can be understood as the beat that has not yet started playing in the current track segment.
  • The beat line information can be understood as the position information of the beat to be played in the current track segment; the position information can be the serial number of the beat line corresponding to the beat to be played, as shown in Figure 17.
  • the beat line of the beat to be played can be drawn on the current track segment based on the beat line information.
  • There can be many ways to draw the beat line. For example, round the beat line information to obtain the current rounded beat line information, and, based on the playback progress of the target music, obtain the historical rounded beat line information. If the current rounded beat line information differs from the historical rounded beat line information, the beat line of the beat to be played, for example the beat line whose serial number is No. 4, is drawn on the current track segment based on the current rounded beat line information.
  • the historical rounded beat line information may be the rounded beat line information obtained at the previous player callback relative to the current one.
  • the beat line of the beat to be played can be drawn on the current track segment based on the current rounded beat line information.
  • There are many ways to draw the beat line of the beat to be played. For example, you can filter out the target beat line of the beat to be played from the beat lines of the target music based on the current rounded beat line information; calculate the information difference between the current rounded beat line information and the beat line information of the beat to be played, and determine the delayed drawing time of the target beat line based on this information difference; then, based on the delayed drawing time, draw the target beat line of the beat to be played on the current track segment.
  • the delayed drawing time of the target beat line can be determined based on the information difference.
  • There are many ways to determine the delayed drawing time. For example, you can directly use the information difference as the time interval from now until the next beat to be played and use this interval as the delayed drawing time of the target beat line, or you can look up the time interval corresponding to the information difference in a preset time set to obtain the delayed drawing time of the target beat line.
  • the target beat line can be drawn in the current track segment based on the delay drawing time.
  • the current rounded beat line information is the same as the historical rounded beat line information, there is no need to execute any logic, and there is no need to perform delayed drawing.
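  • A minimal Swift sketch of this rounding and delayed-drawing rule follows. It assumes the offset distance and the beat line interval are both expressed in milliseconds, converts the fractional index difference back into a delay by multiplying it with the interval, and uses lastRoundedIndex to stand in for the historical rounded beat line information kept between player callbacks; none of these names come from this application.

```swift
import Foundation

func scheduleBeatLine(offsetMs: Double,
                      beatIntervalMs: Double,
                      lastRoundedIndex: inout Int,
                      draw: @escaping (_ beatLineIndex: Int) -> Void) {
    let index = offsetMs / beatIntervalMs          // e.g. 100 / 30 = 3.33
    let nextIndex = Int(ceil(index))               // current rounded beat line information, e.g. 4

    // Same rounded value as at the previous callback: nothing to draw yet.
    guard nextIndex != lastRoundedIndex else { return }
    lastRoundedIndex = nextIndex

    // Information difference converted into the delay until beat line No. nextIndex falls due.
    let delaySeconds = (Double(nextIndex) - index) * beatIntervalMs / 1000.0
    DispatchQueue.main.asyncAfter(deadline: .now() + delaySeconds) {
        draw(nextIndex)                            // draw the beat line of the beat to be played
    }
}
```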
  • The music information display page may also include a beat display area; after the music information display page is displayed, the beat display information corresponding to each beat type can be shown in this area.
  • When it is detected that a beat line is drawn on the track or track segment, or that the beat identification line coincides with a beat line, the beat display information corresponding to that beat line is dynamically displayed in the beat display area.
  • As shown in Figure 18, the beat display area 181 is the area in which the metronome displays beat information; there can be multiple ways of displaying the beat display information corresponding to each beat type in the beat display area 181.
  • For example, the beat display information displayed in the beat display area 181 can be a left indicator light corresponding to heavy beats and a right indicator light corresponding to taps.
  • The beat display area 181 may also include a beat playback control 182, which is used to control whether the beat sound corresponding to the beat line is played.
  • There can be multiple ways of dynamically displaying the beat display information corresponding to the beat line in the beat display area 181. For example, according to the beat type of the beat line, the target beat display information is selected from the beat display information and played as an animation.
  • In response to a triggering operation on the beat playback control, the beat sound corresponding to the beat type is played.
  • According to the position information of the beat line, the beat identifier corresponding to the beat line is determined and displayed in the beat display area 181.
  • When the beat type of the beat line is a heavy beat, the left indicator light can be made to flash; when the beat type is a tap, the right indicator light can be made to flash.
  • When the user triggers the beat playback control at any time, playback of the beat sound can be enabled.
  • When a beat line is drawn and its beat type is a heavy beat, then in addition to the left indicator light flashing, the standard sound 1 corresponding to heavy beats can be played; when the beat type is a tap, then in addition to the right indicator light flashing, the standard sound 2 corresponding to taps can be played.
  • Drawing the beat line, flashing the indicator light and playing the beat sound can be linked.
  • As shown in Figure 19, in the synchronization algorithm between the metronome and the drum track, the target music is played. In step 601, the offset distance of the drum track's current timeline is determined according to the playback progress of the target music, and the offset distance is divided by the time interval of the beat lines to obtain the beat line information. In step 602, the beat line information is rounded to obtain the current rounded beat line information, and the historical rounded beat line information is obtained. In step 603, it is determined whether the historical rounded beat line information is the same as the current rounded beat line information; if yes, return to step 601, and if not, go to step 604.
  • In step 604, the information difference between the current rounded beat line information and the beat line information is calculated, and the delayed drawing time of the target beat line is determined based on the information difference.
  • In step 605, the beat type of the target beat line corresponding to the current rounded beat line information is determined: when the beat type is a heavy beat, a thick line is drawn, the left indicator light flashes and the heavy-beat sound is played; when the beat type is a tap, a thin line is drawn, the right indicator light flashes and the tap sound is played.
  • In step 606, after waiting for the delayed drawing time, the beat line is drawn.
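  • A short sketch of steps 605 and 606 is shown below. The drawLine, flash and playSound closures stand in for whatever views and sound player the client actually uses; they are placeholders rather than a real API.

```swift
import Foundation

enum BeatType { case heavy, tap }

func renderBeat(_ type: BeatType,
                after delay: TimeInterval,
                drawLine: @escaping (_ thick: Bool) -> Void,
                flash: @escaping (_ leftLight: Bool) -> Void,
                playSound: @escaping (_ soundName: String) -> Void) {
    // Wait for the delayed drawing time, then render according to the beat type.
    DispatchQueue.main.asyncAfter(deadline: .now() + delay) {
        switch type {
        case .heavy:
            drawLine(true)            // thick line for a heavy beat
            flash(true)               // left indicator light flashes
            playSound("standard1")    // standard sound 1 corresponding to heavy beats
        case .tap:
            drawLine(false)           // thin line for a tap
            flash(false)              // right indicator light flashes
            playSound("standard2")    // standard sound 2 corresponding to taps
        }
    }
}
```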
  • The beat lines can also be drawn at the same time as the track is drawn, rather than being drawn in real time according to the playback progress of the target music.
  • When the beat lines and the track are drawn at the same time, it can instead be detected whether the beat identification line coincides with a beat line, and the beat display information corresponding to that beat line is then dynamically displayed in the beat display area.
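  • One way such a coincidence check could look, assuming beat line positions and the beat identification line are both expressed as timeline offsets in milliseconds, is the following pure-function sketch:

```swift
// Between two consecutive player callbacks, has the beat identification line
// crossed one of the pre-drawn beat lines? Returns the crossed position, if any.
func crossedBeatLine(previousOffsetMs: Double,
                     currentOffsetMs: Double,
                     beatLinePositionsMs: [Double]) -> Double? {
    return beatLinePositionsMs.first { $0 > previousOffsetMs && $0 <= currentOffsetMs }
}

// If a position is returned, the beat display information for the beat line at that
// position is animated in the beat display area.
```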
  • As shown in Figure 20, the beat identifier 201 corresponding to the beat line can also be displayed in the beat display area.
  • The beat identifier 201 is used to indicate which beat within the beat interval the beat line currently corresponds to.
  • The beat interval can be 4 beats, 8 beats, and so on.
  • The beat identifier corresponding to the beat line is determined according to the position information of the beat line. For example, the beat sequence number of the beat line is determined from its position information and fused with the beat interval to obtain the beat identifier corresponding to the beat line, which can then be displayed in the beat display area.
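  • A minimal sketch of this fusion is given below; the 1-based "N/M" form of the identifier is an assumption made for illustration, not a format specified by this application.

```swift
// Fuse the beat line's sequence number with the beat interval (4 beats, 8 beats, ...)
// to obtain the beat identifier shown in the beat display area.
func beatIdentifier(sequenceNumber: Int, beatInterval: Int = 4) -> String {
    let beatWithinInterval = sequenceNumber % beatInterval + 1   // position inside the interval
    return "\(beatWithinInterval)/\(beatInterval)"
}

// beatIdentifier(sequenceNumber: 6)                  -> "3/4"
// beatIdentifier(sequenceNumber: 6, beatInterval: 8) -> "7/8"
```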
  • The music information display page may also include an adjustment control corresponding to each track.
  • The adjustment control may be the music element identifier corresponding to the track, and the track and the target music can be adjusted through the adjustment control.
  • There are many ways to adjust. For example, in response to a triggering operation on the adjustment control, the current playback volume of the audio file of the target track corresponding to the adjustment control is obtained; based on the current playback volume, the target track is hidden on the music information display page; and based on the audio file of the target track, the adjusted music is played.
  • The adjusted music is the target music after the audio file of the target track has been muted.
  • For the tracks separated from the target music, the track data of each track is equivalent to a separate audio file in m4a format.
  • There can be many ways to obtain the current playback volume of the audio file of the target track corresponding to the adjustment control. For example, the playback information of the target music can be obtained directly, the playback volume information extracted from it, and the current playback volume of the target track's audio file identified within that playback volume information.
  • After the current playback volume is obtained, the target track can be hidden on the music information display page based on it. For example, if the current playback volume exceeds the mute volume, the target track corresponding to the adjustment control is identified on the music information display page and a mask layer is added to it, so that the target track is hidden on the page.
  • The mute volume is the playback volume in the muted state, which can also be understood as a playback volume of 0.
  • If the current playback volume exceeds the mute volume, the audio file of the target track is currently in a non-muted playback state.
  • In the non-muted playback state there are many ways to add the mask layer to the target track. As shown in Figure 21, taking the drum track as the target track as an example, the track area corresponding to the target track is identified on the music information display page and a mask layer is added over that area, so that the target track no longer shows its visualization effect on the music information display page.
  • There are many ways to play the adjusted music based on the audio file of the target track. For example, the current playback volume of the target track's audio file can be set to mute within the target music to obtain the adjusted music, the adjusted music can be played, and the historical playback volume of the audio file can then be updated with its current playback volume.
  • The target music can be regarded as a fused audio file obtained by mixing the audio files of multiple tracks; muting the audio file of the target track within that fused audio file yields the adjusted music.
  • The historical playback volume can be understood as the playback volume of the target track's audio file saved before the current moment; updating it can mean replacing the historical playback volume with the current playback volume.
  • When the adjustment control is triggered again, the audio file of the target track can be switched from the muted playback state back to the current playback volume for playback.
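  • The mute-and-restore behaviour of the adjustment control could be sketched as the toggle below; the TrackState type and its fields are assumptions standing in for whatever player object holds the separated m4a audio file and the on-screen track area of one track.

```swift
import Foundation

final class TrackState {
    var currentVolume: Float = 1.0     // current playback volume of the track's audio file
    var historicalVolume: Float = 1.0  // playback volume saved before muting
    var isMasked = false               // whether a mask layer hides the track's visualization
}

func toggleTrack(_ track: TrackState) {
    if track.currentVolume > 0 {                   // non-muted playback state: hide and mute
        track.historicalVolume = track.currentVolume
        track.currentVolume = 0                    // mute this track inside the fused mix
        track.isMasked = true                      // cover the track area with a mask layer
    } else {                                       // muted state: restore from the saved volume
        track.currentVolume = track.historicalVolume
        track.isMasked = false                     // show the track's visualization again
    }
}
```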
  • After the adjusted music is played, the user can also trigger the adjustment control to restore the adjusted music. For example, in response to the triggering operation on the adjustment control, the mask layer on the target track is removed so that the target track is displayed on the music information display page again, and, based on the updated historical playback volume, the playback volume of the audio file corresponding to the target track is adjusted back to the current playback volume so that the target music is played.
  • Adjusting the target track and the target music through the adjustment control can be understood as split-track listening of the target music.
  • After entering the music information display page, the user can also be prompted, via a prompt message, that single-track muted listening is available. As shown in Figure 22, the user clicks the adjustment control (icon) in front of each track; the adjustment control can also be the music element identifier, and each track can be muted individually in this way.
  • When one track is muted, its visualization is hidden while the remaining tracks are still displayed normally and the mixed sound of the remaining tracks (the adjusted music) is played.
  • The user can also mute multiple tracks at the same time, keeping only a single sound track, so as to better hear the sound of an individual track within a piece of music, which helps users understand the music layer by layer.
  • As shown in Figure 23, displaying the music information of the target music can mainly be understood as music visualization of the target music, that is, visually displaying the music elements and beat information contained in the target music.
  • In the process of visualizing the target music there are two visualization paths: in one, the user selects existing music from the player's music library for visualization; in the other, when the user has other music to visualize, the user uploads the content and the extracted music is visualized, for example by displaying it visually on the music information display page.
  • As can be seen from the above, after displaying the music playback page of the currently playing target music, the embodiment of the present application displays the music information display page of the target music in response to a triggering operation on the music information display control in the music playback page; the music information display page includes the track corresponding to at least one music element of the target music and the beat lines corresponding to the track. Then, based on the playback progress of the target music, the track and the beat lines that move with the track are scrolled on the music information display page. Because this solution can display, while the target music is playing, the track corresponding to at least one music element of the target music and the beat lines of that track, and can also scroll the track and beat lines according to the playback progress of the target music, the content of the target music can be displayed accurately, and the accuracy of music information display is therefore improved.
  • In the example below, the music information display device is specifically integrated into an electronic device, and the electronic device is a terminal.
  • The music information display page includes a vocal track, a drum track, an accompaniment track and a bass track, and the beat line information is the beat line sequence number; this case is used for the explanation.
  • Step 701: The terminal displays the music playback page of the currently playing target music.
  • Step 702: In response to a triggering operation on the music information display control, the terminal displays the music information display page of the target music.
  • Step 703: Based on the playback progress of the target music, the terminal scrolls the tracks on the music information display page and plays animation effects on the tracks.
  • Step 705: In response to a triggering operation on the adjustment control, the terminal adjusts the target track and the target music.
  • The details of steps 701 to 705 can be understood with reference to the description above and are not repeated here.
  • To better implement the above method, an embodiment of the present application also provides a music information display device.
  • The music information display device can be integrated in an electronic device, such as a server or a terminal; the terminal can include a tablet computer, a notebook computer and/or a personal computer, etc.
  • For example, as shown in Figure 25, the music information display device may include a first display unit 301, a second display unit 302 and a scrolling display unit 303, as follows:
  • The first display unit 301 is configured to display the music playback page of the currently playing target music, where the music playback page includes a music information display control.
  • The second display unit 302 is configured to display the music information display page of the target music in response to a triggering operation on the music information display control, where the music information display page includes a track corresponding to at least one music element of the target music and the beat line corresponding to the track, the beat line being used to indicate the beat of the target music.
  • The scrolling display unit 303 is configured to scroll and display, on the music information display page, the track and the beat line that moves with the track, based on the playback progress of the target music.
  • In some embodiments, the music information display device may also include a beat display unit 304, as shown in Figure 26, specifically as follows:
  • The beat display unit 304 is configured to display, in the beat display area, the beat display information corresponding to each beat type, and, when it is detected that a beat line is drawn on the track or track segment or that the beat identification line coincides with a beat line, to dynamically display in the beat display area the beat display information corresponding to that beat line.
  • In some embodiments, the music information display device may also include an adjustment unit 305, as shown in Figure 27, specifically as follows:
  • The adjustment unit 305 is configured to adjust the track and the target music in response to a triggering operation on the adjustment control of a track on the music information display page.
  • For example, the adjustment unit 305 may be specifically configured to, in response to a triggering operation on the adjustment control, obtain the current playback volume of the audio file of the target track corresponding to the adjustment control, hide the target track on the music information display page based on the current playback volume, and play the adjusted music based on the audio file of the target track, the adjusted music being the target music after the audio file of the target track has been muted.
  • In specific implementations, each of the above units can be implemented as an independent entity, or combined in any way and implemented as one or several entities.
  • For the specific implementation of each of the above units, please refer to the preceding method embodiments; details are not repeated here.
  • An embodiment of the present application also provides an electronic device. Figure 28 shows a schematic structural diagram of the electronic device involved in the embodiment of the present application. Specifically:
  • The electronic device may include components such as a processor 401 with one or more processing cores, a memory 402 with one or more computer-readable storage media, a power supply 403, and an input unit 404.
  • The processor 401 is the control center of the electronic device. It uses various interfaces and lines to connect the various parts of the entire electronic device, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 402 and calling the data stored in the memory 402.
  • In some embodiments, the processor 401 may include one or more processing cores. Preferably, the processor 401 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 401.
  • The memory 402 can be used to store software programs and modules; the processor 401 executes various functional applications and data processing by running the software programs and modules stored in the memory 402.
  • The memory 402 may mainly include a program storage area and a data storage area, where the program storage area may store the operating system, the application program required for at least one function (such as a sound playback function or an image playback function), and the like, and the data storage area may store data created according to the use of the electronic device, and the like.
  • In addition, the memory 402 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. Accordingly, the memory 402 may also include a memory controller to provide the processor 401 with access to the memory 402.
  • The electronic device also includes a power supply 403 that supplies power to the various components. The power supply 403 can be logically connected to the processor 401 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • The power supply 403 may also include one or more DC or AC power supplies, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other components.
  • The electronic device may also include an input unit 404, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
  • Although not shown, the electronic device may also include a display unit and the like, which are not described again here.
  • Specifically, in the embodiment of the present application, the processor 401 in the electronic device loads the executable files corresponding to the processes of one or more application programs into the memory 402 according to the corresponding instructions, and the processor 401 runs the application programs stored in the memory 402, thereby implementing the various functions of the music information display method described above in the embodiment of the present application.
  • Embodiments of the present application provide a computer-readable storage medium in which a plurality of instructions are stored; the instructions can be loaded by a processor to execute the steps of any of the music information display methods provided by the embodiments of the present application.
  • The computer-readable storage medium may include: read-only memory (ROM, Read Only Memory), random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk, and so on.
  • Embodiments of the present application also provide a computer program product or computer program. The computer program product or computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium.
  • The processor of a computer device reads the computer instructions from the computer-readable storage medium and executes them, so that the computer device performs the method provided in any of the various optional implementations of the above music information display aspect or of the track and beat information display aspect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Library & Information Science (AREA)
  • Acoustics & Sound (AREA)
  • Auxiliary Devices For Music (AREA)

Abstract

本申请实施例公开了一种音乐信息展示方法、装置、设备、程序产品和存储介质;本申请实施例在显示当前播放的目标音乐的音乐播放页面后,响应于针对音乐播放页面中音乐信息展示控件的触发操作,显示目标音乐的音乐信息展示页面,该音乐信息展示页面包括目标音乐中至少一个音乐元素对应的音轨、以及音轨对应的节拍线,然后,基于目标音乐的播放进度,在音乐信息展示页面对音轨和跟随音轨运动的节拍线进行滚动展示。

Description

音乐信息展示方法、装置、设备、程序产品和存储介质
相关申请的交叉引用
本申请基于申请号为202210349425.8、申请日为2022年4月1日的中国专利申请提出,并要求该中国专利申请的优先权,该中国专利申请的全部内容在此引入本申请作为参考。
技术领域
本申请涉及计算机技术领域,具体涉及一种音乐信息展示方法、装置、设备、程序产品和存储介质。
背景技术
随着互联网技术的飞速发展,网络上音乐创作的内容样式也越来越丰富。为了更加准确的了解音乐的内容,在听音乐的同时,还可以展示音乐信息。现有的音乐信息展示方法主要是通过识别音乐中人声音高并进行展示。而人声音高等并不能完全代表一首完整的音乐,也就不能准确的表达音乐的内容,因此,导致音乐信息展示的准确性较低。
发明内容
本申请实施例提供一种音乐信息展示方法、装置和计算机可读存储介质,可以提高音乐信息展示的准确性。
本申请实施例提供一种音乐信息展示方法,由电子设备执行,包括:
显示当前播放的目标音乐的音乐播放页面,所述音乐播放页面包括音乐信息展示控件;
响应于针对所述音乐信息展示控件的触发操作,其中,显示所述目标音乐的音乐信息展示页面,所述音乐信息展示页面包括所述目标音乐中至少一个音乐元素对应的音轨、以及所述音轨对应的节拍线,所述节拍线用于指示所述目标音乐的节拍;
基于所述目标音乐的播放进度,在所述音乐信息展示页面对所述音轨和跟随所述音轨运动的节拍线进行滚动展示。
本申请实施例提供一种音乐信息展示装置,包括:
第一显示单元,配置为显示当前播放的目标音乐的音乐播放页面,所述音乐播放页面包括音乐信息展示控件;
第二显示单元,配置为响应于针对所述音乐信息展示控件的触发操作,显示所述目标音乐的音乐信息展示页面,所述音乐信息展示页面包括所述目标音乐中至少一个音乐元素对应的音轨、以及所述音轨对应的节拍线,所述节拍线用于指示所述目标音乐的节拍;
滚动展示单元,配置为基于所述目标音乐的播放进度,在所述音乐信息展示页面对所述音轨和跟随所述音轨运动的节拍线进行滚动展示。
本申请实施例还提供一种电子设备,包括处理器和存储器,所述存储器存储有应用程序,所述处理器用于运行所述存储器内的应用程序实现本申请实施例提供的音乐信息 展示方法。
本申请实施例还提供一种计算机可读存储介质,所述计算机可读存储介质存储有多条指令,所述指令适于处理器进行加载,以执行本申请实施例所提供的任一种音乐信息展示方法中的步骤。
本申请实施例还提供一种计算机程序产品,包括计算机可执行指令或计算机程序,其特征在于,所述计算机可执行指令或计算机程序被处理器执行时本申请实施例所提供的任一种音乐信息展示方法中的步骤。
本申请实施例可以在播放目标音乐的同时显示目标音乐中至少一个音乐元素对应的音轨和该音轨对应的节拍线,而且,还可以根据目标音乐的播放进度,对音轨和节拍线进行滚动展示,从而可以准确的展示目标音乐所包含的内容,因此,可以提升音乐信息展示的准确性。
附图说明
图1是本申请实施例提供的音乐信息展示方法的场景示意图;
图2是本申请实施例提供的音乐信息展示方法的流程示意图;
图3是本申请实施例提供的音乐待播放页面的页面示意图;
图4是本申请实施例提供的音乐播放页面的第一页面示意图;
图5是本申请实施例提供的本地待上传内容页面的页面示意图;
图6是本申请实施例提供的音乐播放页面的第二页面示意图;
图7是本申请实施例提供的音乐信息展示页面的页面示意图;
图8是本申请实施例提供的不同乐器的波形的示意图;
图9是本申请实施例提供的声音的波形示意图;
图10是本申请实施例提供的将目标音乐的音频文件上传至服务器的流程示意图;
图11是本申请实施例提供的服务器对音频进行可视化处理的流程示意图;
图12是本申请实施例提供的音频服务器对目标音乐的音频文件进行解析的流程示意图;
图13是本申请实施例提供的对鼓点音轨进行动效播放的流程示意图;
图14是本申请实施例提供的播放器回调日志的示意图;
图15是本申请实施例提供的回调间隔数据的示意图;
图16是本申请实施例提供的对鼓点进行分桶的示意图;
图17是本申请实施例提供的计算待播放节拍对应的节拍线的序号的示意图;
图18是本申请实施例提供的节拍展示区域的示意图;
图19是本申请实施例提供的节拍器与鼓点音轨同步算法的流程示意图;
图20是本申请实施例提供的展示的节拍标识的示意图;
图21是本申请实施例提供的添加蒙层后的音乐信息展示页面的页面示意图;
图22是本申请实施例提供的音乐信息展示页面中提示信息的示意图;
图23是本申请实施例提供的通过两种可视化路径对音乐进行可视化的示意图;
图24是本申请实施例提供的音乐信息展示方法的另一流程示意图;
图25是本申请实施例提供的音乐信息展示装置的结构示意图;
图26是本申请实施例提供的音乐信息展示装置的另一结构示意图;
图27是本申请实施例提供的音乐信息展示装置的另一结构示意图;
图28是本申请实施例提供的电子设备的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请实施例提供一种音乐信息展示方法、装置和计算机可读存储介质。其中,该音乐信息展示装置可以集成在电子设备中,该电子设备可以是服务器,也可以是终端等设备。
其中,服务器可以是独立的物理服务器,也可以是多个物理服务器构成的服务器集群或者分布式系统,还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、网络加速服务(Content Delivery Network,CDN)、以及大数据和人工智能平台等基础云计算服务的云服务器。终端可以是智能手机、平板电脑、笔记本电脑、台式计算机、智能音箱、智能手表等,但并不局限于此。终端以及服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
例如,参见图1,以音乐信息展示装置集成在电子设备中为例。电子设备在显示当前播放的目标音乐的音乐播放页面后,响应于针对音乐播放页面中音乐信息展示控件的触发操作,显示目标音乐的音乐信息展示页面;该音乐信息展示页面包括目标音乐中至少一个音乐元素对应的音轨、以及音轨对应的节拍线;然后,基于目标音乐的播放进度,在音乐信息展示页面对音轨和跟随音轨运动的节拍线进行滚动展示,进而提升音乐信息展示的准确性。
其中,响应于,用于表示所执行的操作所依赖的条件或者状态,当满足所依赖的条件或状态时,所执行的一个或多个操作可以是实时的,也可以具有设定的延迟;在没有特别说明的情况下,所执行的多个操作不存在执行先后顺序的限制。
以下分别进行详细说明。需要说明的是,以下实施例的描述顺序不作为对实施例优选顺序的限定。
本申请实施例将从音乐信息展示装置的角度进行描述,该音乐信息展示装置具体可以集成在电子设备中,该电子设备可以是服务器,也可以是终端等设备;其中,该终端可以包括平板电脑、笔记本电脑、以及个人计算机(PC,Personal Computer)、可穿戴设备、虚拟现实设备或其他可以进行音乐信息展示的智能设备等设备。
如图2所示,本申请实施例的音乐信息展示方法的具体流程如下:
步骤101、显示当前播放的目标音乐的音乐播放页面。
其中,音乐播放页面为播放当前目标音乐的页面,音乐播放页面包括针对目标音乐的音乐信息展示控件。该音乐信息展示控件用于触发展示目标音乐的音乐信息,所谓音乐信息可以为目标音乐中的音乐元素。音乐元素可以为目标音乐中的声音旋律,音乐元素组成了目标音乐的内容,音乐元素的类型可以有多种,比如,可以包括人声音高、伴奏和乐器等多种声音旋律,乐器旋律有可以包括鼓点、贝斯、吉他或其他乐器的旋律。音乐播放页面还可以包括在播放目标音乐时,目标音乐的基础旋律、播放时间和歌词等信息。
其中,显示当前播放的目标音乐的音乐播放页面的方式可以有多种,具体可以如下:例如,可以显示音乐待播放页面,该音乐待播放页面包括待播放音乐列表,响应于针对待播放音乐列表的选择操作,播放选择操作对应的目标音乐,并显示当前播放的目标音乐的音乐播放页面。
图3所示,音乐待播放页面31可以为音乐库中的待播放音乐的展示页面,音乐待播放页面可以包括待播放音乐列表32,还可以包括音乐分类列表33和搜索控件34,待播放音乐列表可以为当前音乐库中的热门音乐列表。显示音乐待播放页面的方式可以有多种,比如,显示用户操作页面,用户操作页面包括创作控件,用户可以在用户操作页面触发创作控件,响应于针对创作控件的触发操作,显示音乐库对应的音乐待播放页面。
例如,当检测到用户在音乐分类列表上的选择操作时,还可以响应于针对音乐分类列表的选择操作,将待播放音乐列表替换为选择操作对应的目标音乐分类的待播放音乐列表。
如图4所示,在显示音乐待播放页面之后,便可以响应于针对待播放音乐列表的选择操作,播放选择操作对应的目标音乐,并显示当前播放的目标音乐的音乐播放页面。比如,获取选择操作对应的目标音乐的音频文件,播放音频文件,并显示当前播放的目标音乐的音乐播放页面;音乐待播放页面可以显示播放控件41,用于控制音乐暂停和继续播放;音乐播放界面可以显示播放进度条42,用于提示目标音乐当前的播放进度。
如图5所示,音乐待播放页面还可以包括上传控件51,该上传控件用于将终端的本地内容进行上传,并播放终端的本地内容中的目标音乐。因此,在显示音乐待播放页面之后,还可以响应于针对上传控件51的触发操作,显示本地待上传内容页面,该本地待上传内容页面包括至少一个待上传内容;响应于针对待上传内容的选择操作,播放选择操作对应的待上传内容的目标音乐,并显示当前播放的目标音乐的音乐播放页面。
其中,本地待上传内容页面用于展示终端本地的待上传内容,待上传内容的内容形式可以有多种,比如可以为音频、视频等包含音频或音乐的内容。以待上传内容为视频为例,响应于针对音乐待播放页面的上传控件的触发操作,显示本地待上传内容页面52。显示本地待上传内容页面52的方式可以有多种,比如,响应于针对上传控件51的触发操作,在得到同意拉起的授权时,拉起终端的本地相册或者本地内容数据库,基于本地相册或本地内容数据库,生成本地待上传内容页面52,并显示本地待上传内容页面52。
在显示本地待上传内容页面之后,便可以响应于针对待上传内容的选择操作,播放选择操作对应的待上传内容的目标音乐,并显示当前播放的目标音乐的音乐播放页面,比如,如图6所示,响应于针对待上传内容的选择操作,获取选择操作对应的待上传内容的目标音乐的音频文件,播放该音频文件,并显示目标音乐的音乐播放页面61,音频播放界面包括指示当前处于播放状态的控件62、播放进度条63以及音乐信息展示控件64。
其中,获取选择操作对应的待上传内容的目标音乐的音频文件的方式可以有多种。比如,在待上传内容中筛选出选择操作对应的目标待上传内容,当目标待上传内容为音频时,将目标待上传内容作为目标音乐的音频文件,当目标待上传内容为视频或者其他类型的内容时,便可以在目标待上传内容中提取出音频文件,并将该音频文件作为目标音乐的音频文件。从而无需用户手动进行格式的转换,兼容各种格式的内容的播放。
其中,从目标待上传内容中提取出音频文件的方式可以有多种。比如,可以在本地对目标待上传内容进行音频文件提取,得到音频文件,或者,还可以将目标待上传内容上传至服务器以提取音频文件,并接收服务器返回的音频文件,并将该音频文件作为目标音乐的音频文件。
步骤102、响应于针对音乐信息展示控件的触发操作,显示目标音乐的音乐信息展示页面。
如图7所示,音乐信息页面包括目标音乐中至少一个音乐元素对应的音轨(人声音轨71、鼓点音轨72、伴奏音轨73和贝斯音轨74)、以及音轨对应的节拍线79,还可以包括指示目标音乐的音乐元素的音乐元素标识,所谓音轨可以为展示目标音乐中音乐元 素的可视化轨道,节拍线用于指示目标音乐的节拍,以目标音乐的音乐元素为人声、鼓点、贝斯和伴奏为例,则音乐信息展示页面可以,各个音轨具体可以如下:
(1)人声音轨71,使用人声标识75表征,利用声音分轨算法分离出的人声音轨文件作为输入,通过算法将人声音轨文件进行音高分离(例如得到表现氛围的120个音高(pitch)),将人声音高进行归一化,在界面展示为24层音高的音阶图,实现当前歌曲中随着人声音高或音低的音调变化进行音阶绘制的效果;
(2)鼓点音轨72,使用鼓点标识76表征,利用声音分轨算法分离出的鼓点音轨文件作为输入,通过算法将鼓点音轨文件进行鼓点识别,将音乐基础节奏中的鼓点分为重鼓和轻鼓两类。其中重鼓使用一种标识(例如蓝色大圆绘制),轻鼓使用另一种标识(绿色小圆)绘制。根据音乐中鼓点出现的时间点将鼓点圆形图案绘制在鼓点轨道中。当音乐播放至当前鼓点时间节点时,此鼓点模拟鼓点击打动效进行放大扩散效果展示,能够让用户更清晰的了解到当前鼓点的节奏;
(3)伴奏音轨73,使用伴奏标识77表征,利用声音分轨算法分离出的伴奏音轨文件作为输入,通过波形图绘制算法将伴奏旋律的音高进行绘制,方便用户可视化的理解伴奏旋律的起伏;
(4)贝斯音轨74,使用贝斯标识78表征,利用声音分轨算法分离出的贝斯音轨文件作为输入,通过声音识别算法将贝斯声音的有无进行绘制。由于贝斯本身处于低频,对于用户而言很难听清,因此将贝斯声音轨道绘制出来能够更加方便用户理解音乐的全部构成。
在一些实施例选中,音乐信息展示页面还可以包括歌词显示区域,该歌词显示区域用于展示目标音乐当前播放的歌词信息等。
其中,响应于针对音乐信息展示控件的触发操作,显示目标音乐的音乐信息展示页面的方式可以有多种,具体可以如下:响应于针对音乐信息展示控件的触发操作,获取目标音乐中每一音乐元素的音轨数据;基于音轨数据,在预设音乐信息展示页面绘制音乐元素对应的音轨;根据目标音乐的当前播放时刻,在音轨绘制节拍线,得到目标音乐的音乐信息展示页面,也就是更新后的预设音乐信息展示页面,并显示音乐信息展示页面。
其中,音轨数据可以理解为音乐中一种音乐元素在轨道中所对应的数据,以音乐元素为鼓点为例,则音轨数据就可以理解为这首音乐中所有的鼓点信息。获取目标音乐中每一音乐元素的音轨数据的方式可以有多种,比如,可以获取历史音轨数据,并在历史音轨数据中查询目标音乐对应的目标历史音轨数据,当不存在目标历史音轨数据时,在目标音乐中分割出每一音乐元素对应的音轨数据,当存在目标历史音轨数据时,在目标历史音轨数据中筛选出每一音乐元素的音轨数据。
其中,历史音轨数据可以理解为在当前时刻之前对音乐进行音轨分割之后得到音轨数据,当历史音轨数据中不存在目标历史音轨数据时,就可以说明目标音乐在当前时刻之前并未进行分割,此时,就需要对该目标音乐进行音轨分割。在目标音乐中分割出每一音乐元素对应的音轨数据的方式可以有多种,比如,可以在终端本地对目标音乐进行分割,从而得到每一音乐元素对应的音轨数据,或者,还可以通过服务器对目标音乐进行分割,从而得到每一音乐元素对应的音轨数据。
其中,在对目标音乐分割之前,就必须要了解目标音乐的内容组成,由于声源的振动通常产生的并不是单一频率的声波,而是由基音和不同频率的泛音组成的复合声音,图8为不同乐器的波形的示意图,图9为声音的波形示意图,由此可以发现声音的最终波形是由不同波形组合而成。也就是当前播放的目标音乐也可以理解为是由多个不同波形组合而成,对目标音乐的分割,也就意味着将这些不同波形分割出来,以音轨的形式 进行展示。在终端本地对目标音乐进行分割的方式可以有多种,比如,可以对目标音乐进行频域转换,得到目标音乐的频域信息,在目标音乐中识别出至少一种音乐元素,在频域信息中分割出音乐元素对应的音轨数据。
其中,在频域信息中分割出音乐元素对应的音轨数据的方式可以有多种,比如,可以采用训练后的音轨分割网络将频域的矩阵进行分离,分割成多个音轨矩阵,并将每一音轨矩阵作为每一音乐元素的音轨数据。
其中,音轨分割网络的网络结构可以采用相关技术的各种的图像分割网络,例如神经网络模型,在对音轨分割网络进行训练的过程中,可以将分割结果与原先对应的单轨数据进行对比,从而得到分割损失信息。
其中,在服务器中对目标音乐进行音轨分割的方式可以有多种。比如,当目标音乐为上传的音频文件时,就需要在待上传内容中提取出音频文件,并将音频文件发送至服务器,服务器采用训练后的音轨分割网络对音频文件进行分割;当目标音乐为用户在待播放音乐列表中选择的音乐时,就无需上传该目标音乐的音频文件,服务器可以直接将目标音乐的音频文件进行音轨分割,从而得到每一音乐元素对应的音轨数据。
其中,将目标音乐的音频文件上传至服务器的过程可以包括:在预设权限信息(包括客户端被授予的权限)中筛选出客户端的目标权限信息,并将目标音乐的音频文件和目标权限信息发送至服务器,以便服务器基于所述目标权限信息对客户端进行校验;接收所述服务器在客户端检验通过后返回目标音乐的音频文件的文件标识和存储地址;基于文件标识和存储地址,播放目标音乐。
以待上传内容为视频为例,将目标音乐的音频文件上传至服务器的流程可以如图10所示,用户进入本地待上传内容页面,选择视频确认导入;音乐信息展示装置提取目标音乐的音频文件,并获取客户端的权限包,在步骤201中,通过客户端将目标音乐的音频文件和权限包发送至服务器的上传中台,在步骤202中,上传中台通过业务侧上传模块对权限包进行解包,得到客户端的权限信息,在步骤203中,上传中台通过业务侧上传模块将权限信息发送至服务器的登录中台,在步骤204中,登录中台基于权限信息对客户端的权限进行校验,也就是将客户端的权限与预设权限信息中针对客户端的授予的权限比对,如果一致说明校验结果为成功。在步骤205中,登录中台向上传中台返回校验结果,在步骤206中,如果校验结果为成功,则上传中台通过业务侧上传模块生成目标音乐的文件标识,然后,在步骤207中,向服务器的云数据库发送目标音乐的音频文件和文件标识,在步骤208中,云数据库在将目标音乐的音频文件存储之后,便可以返回目标音乐的存储地址例如在内容分发网络中的地址(Content Delivery Network,CDN)至上传中台,上传中台就可以在步骤209中向终端返回目标音乐的文件标识和存储地址至客户端,在步骤210中,当客户端接收到目标音乐的文本标识和存储地址时,终端播放目标音乐。
其中,将目标音乐的音频文件上传至服务器之后,便可以通过服务器对目标音乐进行可视化处理,将目标音乐解析成不同音轨的离散数据,从而得到目标音乐的音轨数据,比如,响应于针对音乐信息展示控件的触发操作,创建目标音乐的可视化任务,并将可视化任务的任务参数发送至服务器,以便服务器查询所述目标音乐的历史音轨数据;当服务器中不存在历史音轨数据时,接收服务器返回的对目标音乐进行音轨分割后的音轨数据;基于音轨数据,在预设音乐信息展示页面绘制音轨。
具体可以如图11所示,在将目标音乐解析成不同音轨的音轨数据时,在步骤301中,终端响应于针对音乐信息展示控件的触发操作,创建可视化任务,在步骤302中,服务器中的逻辑服务器对可视化任务进行参数校验,在步骤303中,逻辑服务器查询是否存在目标音乐的历史音轨数据,当不存在历史音轨数据时,在步骤304中,逻辑服务 器创建可视化记录,在步骤305中,逻辑服务器向音频服务器发送目标音乐的文件标识,在步骤306中,音频服务器基于文件标识创建音轨分割任务,在步骤307中,音频服务器向逻辑服务器返回音轨分割任务的任务标识,在步骤308中,逻辑服务器将任务标识存储在缓存中。在步骤309中,当音频服务器在创建音轨分割任务时,还可以创建音轨分割任务的每一步骤对应的步骤子流水。在步骤310中,音频服务器在对音轨分割完成后,向逻辑服务器发送目标音乐的音轨数据,在步骤311中,逻辑服务器将目标音乐的音轨数据发送至终端,在步骤312中,终端根据音轨数据,绘制目标音乐的各音乐元素对应的音轨。
在步骤313中,逻辑服务器还可以将步骤子流水发送虫洞(Wormhole),在步骤314中,流水服务器运行流水任务,在步骤315中,向虫洞发送流水获取请求,在步骤316中,虫洞就可以返回步骤子流水至流水服务器,最后,在步骤317中,虫洞将步骤子流水存储至流水数据库。
其中,虫洞(Wormhole)是一种面向实时大数据项目实施者的流式数据处理平台,可以替代其他流式处理平台,例如Spark、Storm等。
流水服务器和流水数据库是用来保证音轨数据的安全,一旦音轨数据出现异常或者丢失时,还可以通过流水数据库来恢复该音轨数据。
如图12所示,在步骤401中,音频服务器处理目标音乐的音频文件,其中,主要通过音频服务进行音轨分割,并在步骤402中向音轨数据库上传处理结果。音频服务器在对音频文件进行分割时,还可以在步骤403中回传处理结果至逻辑服务器,以便逻辑服务器在步骤404中更新可视化记录,从而在步骤406中进行下一个可视化任务。当目标音乐的可视化任务完成之后,音频服务器便可以在步骤407中向虫洞发送目标音乐的可视化流水。
在步骤408中,终端基于目标音乐,生成音轨分割进度轮询请求,在步骤409中,将音轨分割进度轮询请求发送至逻辑服务器。在步骤410中,逻辑服务器查询可视化记录,得到音轨分割进度,在步骤411中,将音轨分割进度组包(即封装为数据包),并在步骤412中,向终端返回针对目标音乐的音轨分割进度。
在步骤413中,当音轨分割进度指示目标音乐的音轨分割完成时,终端下载目标音乐的音轨数据。在步骤414中,终端基于音轨数据,在预设音乐信息展示页面绘制目标音乐的各音乐元素的音轨,得到音乐信息展示页面,并展示音乐信息展示页面。服务器对音轨分离需要一定的时间,根据音乐的不同,时间有所不同,音乐信息展示装置或者音乐信息展示装置中的客户端可以通过轮询接口,向逻辑服务器发送查询请求,以便查询目标音乐的音轨分割进度,逻辑服务器在音轨数据库中查询目标音乐的可视化记录,当目标音乐的可视化记录指示目标音乐的音轨分割完成时,将分割出的音轨数据进行组包,并返回进度和组包后的音轨数据,终端下载该音轨数据,并基于该音轨数据绘制音轨,以显示音乐信息展示页面。在音频服务器对目标音乐的音频文件进行分割的过程中,会将分割出的音轨数据上传至音轨数据进行存储,在获取目标音乐中至少一个音乐元素的音轨数据后,便可以基于音轨数据,在预设音乐信息展示页面绘制音乐元素对应的音轨,绘制音轨的方式可以有多种,比如,在音轨数据中提取出音轨中每一音轨元素的音频数据,得到音频数据集合,对音频数据集合中音频数据进行过滤,得到过滤后音频数据集合,基于过滤后音频数据集合,在预设音乐信息展示页面绘制音轨。
其中,音轨元素可以理解为组成音轨数据的每一个元素。比如,以鼓点对应的音轨数据为例,则鼓点音轨数据中的音乐元素就可以为鼓点。音乐元素还可以包括多种类型, 以音乐元素为鼓点为例,则该音乐元素还可以分为轻鼓和重鼓,等等。对音频数据集合中的音频数据进行过滤主要是为了过滤出异常数据,对音频数据进行过滤的方式可以有多种,比如,可以对音频数据集合中的音频数据进行排序,并基于排序信息,计算音频数据之间的时间间隔,在音频数据集合中删除时间间隔未超过预设时间间隔阈值的音频数据,得到过滤后音频数据集合。
其中,预设时间间隔阈值可以为0或者其他阈值,当预设时间间隔阈值为0时,就可以在音频数据集合中删除时间间隔为0的音频数据。
在对音频数据进行过滤之后,便可以基于过滤后音频数据集合,在预设音乐信息展示页面绘制音轨,绘制音轨的方式可以有多种,比如,以鼓点音轨数据为例,由于鼓点数据较多,绘制鼓点时可以采用轻量级的CALayer(一种UI绘制工具)代替重量级的UIView(一种UI绘制工具),以提高渲染性能。
其中,需要说明的是,针对不同类型音轨可以采用相同的处理方式,也可以根据音轨类型,采用该音轨类型对应的处理方式绘制该音轨。还是以鼓点音轨数据为例,获取到的鼓点音轨数据为json数据,该json数据可以如下所示:
在获取到鼓点音轨的json数据之后,将所有的轻鼓数据和重鼓数据进行合并,生成一个鼓点数据组,对鼓点数据组根据time字段进行升序排列,求出所有鼓点中最小的时间间隔,然后,删除时间间隔为0的鼓点,采用轻量级的CALayer在预设信息展示页面绘制鼓点音轨。
在一些实施例中,当存在目标历史音轨数据时,就说明目标音乐在当前时刻之前已 经被分割为多个音轨数据,此时,就不需要重新对目标音乐进行音轨分割,可以直接在目标历史音轨数据中筛选出音乐元素对应的音轨数据,并基于音轨数据绘制音轨,绘制音轨的方式可以参见上文,在此就不再一一赘述。
在绘制出音轨之后,便可以根据目标音乐的当前播放时刻,在音轨绘制节拍线,得到目标音乐信息展示页面。绘制节拍线的方式可以有多种,比如,可以在音轨中筛选出鼓点音轨,并基于鼓点音轨,确定目标音乐的节拍信息。基于目标音乐的当前播放时刻和节拍信息,在音轨上识别出已播放节拍的节拍线位置,基于节拍线位置,在音轨绘制出已播放节拍的节拍线,得到目标音乐信息展示页面。
其中,节拍信息可以理解为指示目标音乐中节拍的信息;该节拍信息可以包括目标音乐中包含的节拍数量,节拍类型、节拍间距和每一节拍对应的节拍线位置等信息。基于鼓点音轨,确定目标音乐的节拍信息的方式可以有多种,比如,在鼓点音轨中识别出每一鼓点的鼓点类型和时间信息;基于时间信息,计算不同鼓点类型的鼓点之间的时间间隔,得到时间间隔集合,根据鼓点类型和时间间隔集合,确定目标音乐的节拍信息。
其中,鼓点类型用于指示鼓点音轨中鼓点的类型,鼓点的类型可以有多种,比如,可以包括轻鼓和重鼓。时间信息可以理解为鼓点音轨中鼓点的触发时间或者播放时间。根据鼓点类型和时间间隔集合,确定目标音乐的节拍信息的方式可以有多种,比如,在时间间隔集合中筛选出数量最多的时间间隔,得到目标音乐的节拍线的时间间距,对鼓点音轨中的鼓点进行排序,以得到鼓点音轨的目标鼓点;根据时间间距和目标鼓点的时间信息,确定目标音乐的每一节拍线的节拍类型和时间位置,并将节拍线的节拍类型和时间位置作为节拍信息。
其中,对鼓点音轨进行排序,以得到鼓点音轨的目标鼓点的方式可以有多种。比如,对鼓点音轨进行排序,基于排序信息,在鼓点音轨中筛选出时间信息最早的鼓点,也就是在鼓点音轨中第一声响的鼓点,得到候选鼓点;当候选鼓点为一个时,将该候选鼓点作为目标鼓点,当候选鼓点存在多个时,将候选鼓点的下一个鼓点作为目标鼓点。
在得到鼓点音轨的目标鼓点之后,便可以基于时间间距和目标鼓点的时间信息,确定目标音乐的每一节拍线的节拍类型和时间位置,从而得到目标音乐的节拍信息。确定节拍线的节拍类型和位置的方式可以有多种,比如,根据目标鼓点的鼓点类型,确定目标鼓点对应的节拍线的节拍类型;将目标鼓点的时间信息与时间间距进行融合,就可以得到目标音乐的每一节拍线的时间位置;然后,根据目标鼓点对应的节拍线的节拍类型,确定目标音乐的每一节拍线的节拍类型。
其中,将目标鼓点的时间信息和时间间距进行融合,得到每一节拍线的时间位置的方式可以有多种。比如,将目标鼓点的时间信息往前移动一倍的时间间距,就可以得到目标鼓点对应的节拍线的前一个节拍线的时间位置。依次类推,往前或往后移动,直至下一个节拍线的时间位置为负数或者超过目标音乐的总时长时为止,从而就可以得到目标音乐的每一节拍线的时间位置。譬如,以目标音乐的鼓点响起时间顺序如下:
重鼓(SlowRhythm):10740、13110、15490;
轻鼓(PuckingDrum):10740、11330、11920、12520、13110、13710…
根据目标音乐的鼓点响起时间顺序,可以确定节拍线的时间间接为11330-10740=590毫秒(ms)。10740ms为第一个鼓点对应的节拍线的时间位置,且该节拍线的节拍类型可以为重拍,11330ms为第二个鼓点(目标鼓点)对应的节拍线的位置,该节拍线的节拍类型可以为轻拍;向前推一个时间间距,10150ms为第一个鼓点对应的节拍线的前一个节拍线的时间位置,节拍类型为轻拍;再向前推一个时间间距,9560ms为轻拍的前一个节拍线的时间位置,节拍类型为重拍;依次类推,到0秒时不一定整除,也没有关系,0秒也不是必须要存在一条节拍线,主要的逻辑还是根据时间间距,确定 目标音乐的每一节拍线的时间位置和节拍类型。
在确定出目标音乐的节拍信息之后,便可以基于目标音乐的当前播放时刻和节拍信息,在音轨上识别出已播放节拍的节拍线位置。识别出已播放节拍的节拍线位置的方式可以有多种,比如,可以将目标音乐的当前播放时刻与节拍线的时间位置进行对比,基于对比结果,在节拍线中筛选出已播放节拍的历史节拍线,获取音轨的时间轴信息;并基于时间轴信息和历史节拍线的时间位置,在音轨上识别出已播放节拍的节拍线位置。
其中,当前播放时刻可以理解为用户在音乐信息展示页面触发音乐信息展示控件的时间,或者,还可以为在音乐信息展示页面显示目标音乐的至少一个音乐元素对应的音轨时的时间。历史节拍线可以为在节拍线的时间位置在当前播放时刻之前的节拍线,在节拍线中筛选出已播放历史节拍的历史节拍线的方式可以有多种,比如,以当前播放时刻为2000ms为例,就可以筛选出时间位置在2000ms之前的节拍线,从而得到历史节拍线。
其中,时间轴信息可以用于指示音轨中每一音轨元素对应的时间信息,也可以理解为音轨中每一位置对应的时间信息。基于时间轴信息和历史节拍线的时间位置,在音轨上识别出已播放节拍的节拍线位置的方式可以有多种。比如,基于时间轴信息,在音轨上识别出已播放节拍的时间位置对应的目标位置,将该目标位置作为该已播放节拍的节拍线位置。
在识别出已播放节拍的节拍线位置之后,便可以在音轨上绘制已播放节拍的节拍线,绘制已播放节拍的节拍线的方式可以有多种。比如,可以在音轨的节拍线位置绘制节拍类型对应的节拍线。譬如,节拍类型为重拍时,就可以绘制粗线;节拍类型为轻拍时,绘制细线;或者,重拍对应长线,轻拍对应断线等等;或者,还可以,可以根据当前展示的音轨段,在历史节拍线中筛选出目标历史节拍线,在音轨的节拍线位置上绘制节拍类型对应的节拍线。
其中,已播放节拍并非一定存在,当前播放时刻之前不存在节拍线时,就可以确定不存在已播放历史节拍,此时,就不需要在音轨上绘制节拍线。
在一些实施例中,在基于鼓点音轨,确定目标音乐的节拍信息之后,还可以将音乐信息展示页面的鼓点音轨替换为节拍音轨,所谓节拍音轨就可以为指示目标音乐的节拍线的音轨。将鼓点音轨替换为节拍音轨的方式可以有多种,比如,将节拍信息转换为节拍音轨的目标音轨数据;基于目标音轨数据,在预设音乐信息展示页面绘制节拍音轨,在预设音乐信息展示页面将鼓点音轨替换为节拍音轨,得到目标音乐的音乐信息展示页面。
其中,将节拍信息转换为节拍音轨的目标音轨数据的方式可以有多种,比如,可以将节拍信息转换为音轨数据的数据格式,从而得到目标音轨数据,或者,在节拍信息中将节拍线进行合并,生成一个节拍线数据组,对节拍线数据组按照时间顺序进行排序,从而得到节拍音轨的目标音轨数据。
步骤103、基于目标音乐的播放进度,在音乐信息展示页面对音轨和跟随音轨运动的节拍线进行滚动展示。
其中,播放进度用于指示目标音乐在播放过程中的进度信息,该播放进度随着目标音乐的播放在不断更新。
其中,基于目标音乐的播放进度,在音乐信息展示页面对音轨和跟随音轨运动的节拍线进行滚动展示的方式可以有多种,具体可以如下:基于目标音乐的播放进度,在音乐信息展示页面对音轨进行滚动展示,并在音轨上进行动效播放,根据节拍标识线和目标音乐的播放进度,在音轨上绘制节拍线,以便节拍线跟随音轨进行滚动展示,具体可以如下:
S1、基于目标音乐的播放进度,在音乐信息展示页面对音轨进行滚动展示,并在音轨上进行动效播放。
例如,可以根据音轨的时间轴信息,在音轨中识别出每一播放时刻对应的音轨段,对音轨段进行排序,并基于排序信息和目标音乐的播放进度,对音轨段进行滚动展示,并在音轨上进行动效播放。
其中,根据音轨的时间轴信息,在音轨中识别出每一播放时刻对应的音轨段的方式可以有多种。比如,在音乐信息展示页面识别出展示音轨的音轨区域,并根据音轨区域的区域信息,确定每一播放时刻对应的展示时间区间;根据音轨的时间轴信息,在音轨中筛选出各展示时间区间对应的音轨段作为该播放时刻对应的音轨段。
在识别出每一播放时刻对应的音轨段之后,便可以基于排序信息和目标音乐的播放进度,对音轨段进行滚动展示。滚动展示的方式可以有多种,比如,可以基于目标音乐的播放进度,通过播放时间轴的形式,对每一播放时刻对应的音轨段进行滚动展示。
其中,在音轨上进行动效播放的方式可以有多种,比如,可以在音轨中筛选出鼓点音轨,并根据节拍标识线,在鼓点音轨中识别出当前播放的目标鼓点,根据目标鼓点的鼓点类型,在鼓点音轨上对目标鼓点进行动效播放。
其中,节拍标识线用于指示目标音乐在播放过程中所处的节拍位置,也可以为目标音乐在播放时间轴中的当前播放时刻,在音乐信息展示页面中可以为当前时刻的中心白线。根据节拍标识线,在鼓点音轨中识别出当前播放的目标鼓点的方式可以有多种,比如,可以获取节拍标识线在鼓点音轨的位置信息和鼓点音轨中每一鼓点的位置区间,将位置信息和位置区间进行匹配,并在鼓点音轨中筛选出匹配成功的鼓点作为当前播放的目标鼓点。
其中,需要说明的是,由于鼓点音轨中的鼓点数据较多,可以出现两个鼓点在鼓点音轨中有重合区域,因此,筛选出目标鼓点的数量可以为一个或者多个。
在识别出目标鼓点之后,便可以根据目标鼓点的鼓点类型,在鼓点音轨上对目标鼓点进行动效播放。进行动效播放的方式可以有多种,比如,可以获取历史播放鼓点集合,并在历史播放鼓点集合中查询目标鼓点;当历史播放鼓点集合中不存在目标鼓点时,获取目标鼓点的鼓点类型,并根据鼓点类型,确定目标鼓点的动效类型;基于动效类型,在鼓点音轨上对目标鼓点进行动效播放,并将目标鼓点添加至历史播放鼓点集合。
其中,历史播放鼓点集合可以为目标音乐的鼓点音轨中在当前时刻之前已经播放过的鼓点组成的鼓点集合。当历史播放鼓点集合中不存在目标鼓点时,也就意味该目标鼓点在之前并未被播放时,就可以获取目标鼓点的鼓点类型,从而确定目标鼓点的动效类型。比如,当目标鼓点为轻鼓时,该目标鼓点的动效类型就可以为轻鼓动效类型,当目标鼓点为重鼓时,该目标鼓点的动效类型就可以为重鼓动效类型。
在确定出目标鼓点的动效类型之后,便可以基于该动效类型,在鼓点音轨上对目标鼓点进行动效播放,进行动效播放的方式可以有多种,比如,可以模拟鼓点击打动效进行放大扩散效果展示,从而能够让用户更清晰的了解当前鼓点的节奏,或者,可以对鼓点进行振动动效播放,轻鼓的振动幅度可以小于重鼓的振动浮动,或者,还可以在播放振动动效的同时触发音乐信息展示装置进行振动。
在一些实施例中,在历史播放鼓点集合中查询目标鼓点之后,当历史播放鼓点集合中存在目标鼓点时,也就意味该目标鼓点在当前时刻之前已被播放,此时,就可以获取节拍标识线在鼓点音轨的当前位置信息,若当前位置信息与目标鼓点的位置区间不匹配,在历史播放鼓点集合中删除目标鼓点,若当前位置信息与目标鼓点的位置区间匹配,继续播放目标音乐,此时,对目标鼓点无需进行动效播放。
其中,在对鼓点音轨进行动效播放时,具体的流程可以如图13所示,在步骤501 中,检测到用户触发播放控件,在步骤502中,播放目标音乐,在步骤503中,控制时间轴滚动,在步骤504中,找到所有和当前时刻白线相交的目标鼓点,在步骤505中,查询目标鼓点是否在已播放数组中,当目标鼓点不在已播放数组中时,转入步骤506,在步骤506中播放鼓点动效,并在步骤507中将目标鼓点放入已播放数组。当目标鼓点在已播放数组中时,转入步骤508,以查询当前时刻是否已经过了目标鼓点的范围,当未超过目标鼓点的范围时,返回步骤502与继续播放目标音乐,当已超过目标鼓点的范围时转入步骤509,在步骤508中,从已播放数组中删除目标鼓点,然后,返回步骤502以继续播放目标音乐。
S2、根据节拍标识线和目标音乐的播放进度,在音轨上绘制节拍线,以便节拍线跟随音轨进行滚动展示。
例如,可以根据目标音乐的播放进度,在音轨段中筛选出当前展示的当前音轨段,基于节拍标识线,在当前音轨上绘制节拍线,并滚动展示节拍线。
其中,在目标音乐的播放过程中绘制节拍线时,就需要涉及到播放器的回调,因为,根据播放器回调的播放进度不一定能触发相应的鼓点,也就无法触发在音轨上绘制节拍线,比如,以播放器回调日志为图14所示,根据上面的数据和前面鼓点的数据,就可以发现鼓点的时间戳不在播放器回调时间戳上面,即两者不是完全重合,这就会导致鼓点动效/绘制节拍线等无法直接根据播放器回调来执行。在音轨的滚动展示过程中,在1秒内切换连环画超过24帧就被人眼识别为是流畅的动画/视频。视频/电影都是基于此原理。例如目前,播放器的帧率多少是30,当回调数据是1秒33帧时,导致播放器的回调在毫秒维度是非常稀疏的,鼓点不一定能吻合到回调的时间上。播放器需要确保按帧率回调,因为只要出现掉帧后续数据就有偏移。在对播放器回调的数据进行分析可以发现,在计算出的两个回调间隔之间并不是标准的,回调间隔的数据可以如图15所示,大部分数据是33(34也可以近似认为是33),但是有一个数据是11(后面的数据是67=33+34),这就导致后面的数据偏移了。由于播放器回调是不可能稳定的,也就是说不可能按每秒30帧标准回调(卡顿,等等一堆的因素),因此,在音轨绘制节拍线时就需要使得节拍线与鼓点音轨进行同步。在同步规则下,在某一个播放器回调中,绘制节拍线的方式可以有多种,比如,基于节拍标识线和目标音乐的播放进度,确定鼓点音轨的当前时间轴的偏移距离;计算偏移距离与节拍线的时间间距的比值,将比值的向上取整得到的整数,以及向下取整得到的整数,作为当前音轨段的待播放节拍的节拍线信息,也就是节拍线的序号;基于节拍线信息,在当前音轨段上绘制待播放节拍的节拍线。
其中,鼓点音轨的当前时间轴的偏移距离可以为节拍标识线相对于鼓点音轨的时间起点之间的偏移位移,也可以理解为当前播放进度对应的播放时刻(也就是播放进度)与播放起始点之间的当前距离。基于节拍标识线和目标音乐的播放进度,确定鼓点音轨的当前时间轴的偏移距离的方式可以有多种,比如,可以根据目标音乐的播放进度,在鼓点分桶中筛选出当前帧对应的目标鼓点分桶;在目标鼓点分桶中识别出节拍标识线最近的目标鼓点,并计算目标鼓点与节拍标识线的当前距离;基于当前距离和目标鼓点的位置信息,确定鼓点音轨的当前时间轴的偏移距离。
其中,鼓点分桶可以理解为存储鼓点音轨中鼓点的数据分桶,在鼓点分桶中筛选出当前帧对应的目标鼓点分桶之前,还可以将鼓点音轨中的鼓点存储至预设分桶。比如,可以在鼓点音轨中提取出至少一个鼓点,并识别出鼓点的时间信息,基于时间信息,对鼓点进行分类,得到每一时间区间对应的目标鼓点,在预设分桶中筛选出每一时间区间对应的目标分桶,并将每一时间区间对应的目标鼓点添加至该时间区间对应的目标分桶,得到各时间区间对应的鼓点分桶。
其中,基于时间信息,对鼓点进行分类的方式可以有多种,比如,根据鼓点的时间 戳,将时间戳在同一秒的鼓点都分为一类。在预设分桶中就可以筛选出每一秒对应的目标分桶,然后,将同一类的鼓点都存储至目标分桶,从而得到鼓点分桶。
其中,需要说明的是,对鼓点进行分桶的方式可以如图16所示,以一首3分钟左右的音乐为例,同类的拍子最多600个,平均分配到180秒(3分钟),每秒钟分配到的拍子不到4个。通过分桶的目的在于提高查找每帧中邻近鼓点的效率,从而计算出偏移距离。
在确定出鼓点音轨的当前时间轴的偏移距离之后,便可以计算偏移距离与节拍线的时间间距的比值,将比值的向上取整得到的整数,以及向下取整得到的整数,作为当前音轨段的待播放节拍的节拍线信息。待播放节拍可以理解为在当前音轨段中还未开始播放的节拍,当播放该待播放节拍时,就需要在当前音轨中绘制待播放节拍的节拍线,节拍线信息可以理解为当前播放进度对应的节拍线的位置信息,该位置信息可以为待播放节拍对应的节拍线的序号,具体可以如图17所示,比如,以偏移距离为100,节拍线的时间间距为30为例,则节拍线信息index=3.33,向上取整得到的整数为4,向下取整得到的整数3,也就意味着此时播放到了序号为3和4的节拍线之间。
在计算出待播放节拍的节拍线信息之后,便可以基于节拍线信息,在当前音轨段上绘制待播放节拍的节拍线。绘制节拍线的方式可以有多种,比如,对节拍线信息进行取整处理,得到当前取整后节拍线信息,基于目标音乐的播放进度,获取历史取整后节拍线信息,若当前取整后节拍线信息与历史取整后节拍线信息不同,基于当前取整后节拍线信息,在当前音轨段上绘制待播放节拍的节拍线。
其中,对节拍线信息进行取整处理的方式可以有多种,比如,以节拍线信息index=3.33为例,则当前取整后节拍线信息nextIndex=4,也就是说下一次要绘制的节拍线的序号为4号。
其中,历史取整后节拍线信息可以为在当前播放器回调的下一次播放回调时进行取整后得到的取整后节拍线信息。获取历史取整后节拍信息的方式可以有多种,比如,基于目标音乐的播放进度,确定当前播放器回调信息;在播放器回调日志中筛选出当前播放器回调信息之前的历史播放器回调信息;在预设取整后节拍信息集合中筛选出历史播放器回调信息对应的取整后节拍线信息,从而得到历史取整后节拍线信息。
若当前取整后节拍线信息与历史取整后节拍线信息不同,就可以基于当前取整后节拍线信息,在当前音轨段上绘制待播放节拍的节拍线。绘制待播放节拍的节拍线的方式可以有多种,比如,可以基于当前取整后节拍线信息,在目标音乐的节拍线中筛选出待播放节拍的目标节拍线;计算当前取整后节拍线信息和待播放节拍的节拍线信息的信息差值,并基于信息差值,确定目标节拍线的延时绘制时间;基于延时绘制时间,在当前音轨段上绘制待播放节拍的目标节拍线。
其中,计算当前取整后节拍线信息和节拍线信息的信息差值的方式可以有多种,比如,可以直接将当前取整后节拍线信息nextIndex-节拍线信息index,将这个差值offsetTime作为信息差值。
在计算出该信息差值之后,便可以基于信息差值,确定目标节拍线的延时绘制时间。确定延时绘制时间的方式可以有多种,比如,可以直接将这个信息差值作为从现在开始到下个待播放节拍的响应中间的时间间隔,并将该时间间隔作为目标节拍线的延时绘制时间,或者,还可以在预设时间集合中筛选出信息差值对应的时间间隔,从而得到目标节拍线的延时绘制时间。
在确定出延时绘制时间之后,便可以基于该延时绘制时间,在当前音轨段绘制目标节拍线,绘制目标节拍线的方式可以有多种,比如,基于该延时绘制时间,设置延时器,延迟offsetTime后,在当前音轨段绘制目标节拍线。
在一些实施例中,若当前取整后节拍线信息与历史取整后节拍线信息相同时,就不需要执行任何逻辑,也就不需要进行延时绘制。
在一些实施例中,音乐信息展示页面还可以包括节拍展示区域,在显示目标音乐的音乐信息展示页面之后,还可以在节拍展示区域展示节拍线对应的节拍展示信息,比如,在节拍展示区域展示每一节拍类型对应的节拍展示信息,当检测到音轨或音轨段上绘制节拍线,或,节拍标识线与节拍线重合时,在节拍展示区域对节拍线对应的节拍展示信息进行动态展示。
如图18所示,节拍展示区域181为节拍器用于展示节拍信息的区域,在节拍展示区域181展示每一节拍类型对应的节拍展示信息的方式可以有多种。比如,节拍展示区域181内展示的节拍展示信息就可以为重拍对应的左指示灯和轻拍对应的右指示灯。
其中,节拍展示区域181还可以包括节拍播放控件182,该节拍播放控件182用于控制是否播放节拍线对应的节拍声音。在节拍展示区域181对节拍线对应的节拍展示信息进行动态展示的方式可以有多种。比如,在节拍展示信息中筛选出目标节拍展示信息,并对目标节拍展示信息进行动效播放,响应于针对节拍播放控件的触发操作,播放节拍类型对应的节拍声音,根据节拍线的位置信息,确定节拍线对应的节拍标识,并在节拍展示区域181展示节拍标识。
其中,在节拍展示区域对节拍线对应的目标节拍展示信息进行动效播放的方式可以有多种,,当节拍线的节拍类型为重拍时,就可以使得左指示灯闪烁,当节拍线的节拍类型为轻拍时,就可以得到右指示灯闪烁。当用户在任意时候对节拍播放控件进行触发时,就可以设置播放节拍声音。当检测到绘制节拍线时,当节拍线的节拍类型为重拍时,此时,除了左指示灯闪烁,还可以播放重拍对应的标准声1,当节拍线的节拍类型为轻拍时,此时,除了右指示灯闪烁,还可以播放轻拍对应的标准声2。
需要说明的是,绘制节拍线、指示灯闪烁和播放节拍声音可以是联动的。如图19所示,在节拍器与鼓点音轨的同步算法中,播放目标音乐;在步骤601中,根据目标音乐的播放进度,确定出鼓点音轨的当前时间轴的偏移距离,将偏移距离除以节拍线的时间间距,得到节拍线信息;在步骤602中,对节拍线信息进行取整,获取历史取整后节拍线信息;在步骤603中,判断历史取整后节拍线信息与当前取整后节拍线信息是否相同,如果是则返回步骤601,如果否,则转入步骤604,在步骤604中,计算当前取整后节拍线信息与节拍线信息之间的信息差值,并基于信息差值确定目标节拍线的延时绘制时间,在步骤605中,确定当前取整后节拍线信息对应的目标节拍线的节拍类型,当节拍类型为重拍时,绘制粗线,左指示灯闪烁,播放重拍的节拍声音,当节拍类型为轻拍时,绘制细线,右指示灯闪烁,播放轻拍的节拍声音。在步骤606中,在等到延时绘制时间之后,绘制节拍线。例如,。
在一些实施例中,节拍线也可以在绘制音轨的时候同时绘制,而并非根据目标音乐的播放进度实时绘制。当节拍线跟音轨为同一时间绘制时,此时,就可以检测到节拍线标识与节拍线重合时,在节拍展示区域对节拍线对应的节拍展示信息进行动态展示。
在一些实施例中,如图20所示,还可以在节拍展示区域展示节拍线对应的节拍标识201,节拍标识201用于指示节拍线在节拍区间内当前所对应的节拍。节拍区间可以为4拍或者8拍等,根据节拍线的位置信息,确定节拍线对应的节拍标识的方式,比如,根据节拍线的位置信息,确定节拍线的节拍序号,将节拍序号和节拍区间进行融合,从而就可以得到节拍线对应的节拍标识,从而就可以在节拍展示区域展示节拍标识。
在一些实施例中,音乐信息展示页面还可以包括每一音轨对应的调整控件,调整控件可以为音轨对应的音乐元素标识,可以通过该调整控件对音轨和目标音乐进行调整。调整的方式可以有多种,比如,可以响应于针对调整控件的触发操作,获取调整控件对 应的目标音轨的音频文件的当前播放音量,根据当前播放音量,在音乐信息展示页面对目标音轨进行隐藏处理,基于目标音轨的音频文件,播放调整后音乐,该调整后音乐为在目标音乐中对目标音轨的音频文件进行静音播放后的音乐。
其中,对于目标音乐中分割出的各个音轨,每一个音轨的音轨数据都相当于一个单独的m4a格式的音频文件,获取调整控件对应的目标音轨的音频文件的当前播放音量的方式可以有多种,比如,可以直接获取目标音乐的播放信息,在播放信息中提取出播放音量信息,在播放音量信息中识别出调整控件对应的目标音轨的音频文件的当前播放音量。
在获取音频文件的当前播放音量之后,便可以基于当前播放音量,在音乐信息展示页面对目标音轨进行隐藏处理,隐藏处理的方式可以有多种,比如,若当前播放音量超过静音音量,则在音乐信息展示页面识别出调整控件对应的目标音轨,对目标音轨添加蒙层,以便在音乐信息展示页面隐藏目标音轨。
其中,静音音量为静音状态下的播放音量,也可以理解为播放音量为0。当前播放音量超过静音音量时,也就意味着目标音轨的音频文件的当前播放状态为非静音播放状态。在非静音播放状态下,对目标音轨添加蒙层的方式可以有多种,如图21所示,以目标音轨为鼓点音轨为例,在音乐信息展示页面中识别出目标音轨对应的音轨区域,在该音轨区域添加蒙层覆盖,从而使得目标音轨在音乐信息展示页面不显示可视化效果,。
其中,基于目标音轨的音频文件,播放调整后音乐的方式可以有多种,比如,可以在目标音乐中将目标音轨的音频文件的当前播放音量调整为静音,得到调整后音乐,播放调整后音乐,并基于音频文件的当前播放音量对音频文件的历史播放音量进行更新。
其中,将目标音轨的音频文件的当前播放音量调整为静音的方式可以有多种,比如,目标音乐可以看作是多个音轨的音频文件进行融合得到的融合后音频文件,就可以在融合后音频文件中对目标音轨的音频文件进行静音处理,从而就可以得到调整后音乐。
其中,历史播放音量可以理解为目标音轨的音频文件在当前时刻之前保存的播放音量,基于音频文件的当前播放音量对音频文件的历史播放音量进行更新的方式可以有多种,比如,将目标音轨的音频文件的历史播放音量替换为当前播放音量,当再次触发调整控件时,就可以将目标音轨的音频文件从静音播放状态调整为当前播放音量进行播放。
在一些实施例中,在播放调整后音乐之后,用户还可以触发调整控件,用于对调整后音乐进行恢复或者还原,比如,响应于针对调整控件的触发操作,去除目标音轨上的蒙层,以在音乐信息展示页面显示目标音轨,基于更新后历史播放音量,将目标音轨对应的音频文件的播放音量调整为当前播放音量,以播放目标音乐。
其中,需要说明的是,通过音轨的调整对目标音轨和目标音乐进行调整可以理解为对目标音乐进行分轨视听,在进入音乐信息展示页面之后,还可以通过提示信息提示用户可以进行单轨静音试听,具体可以如图22所示,用户单击每个音轨前的调整控件(icon),该调整控件也可以为音乐元素标识,可以实现对每个音轨实现单轨静音,当用户对其中的某一条音轨进行静音后,该音轨会隐藏可视化效果展示,其他剩余音轨仍正常展示,并播放剩余轨道混合的合成声音(调整后音乐),基于此功能,用户还可以同时静音多条音轨,仅保留单一声音轨道(音轨),以达到更好的了解一首音乐中单个音轨的声音的效果,从而可以帮助用户更好的进行分层理解音乐。
如图23所示,对目标音乐进行音乐信息展示的方式主要可以理解为对目标音乐进行音乐可视化,将该目标音乐中的音乐元素和节拍信息进行可视化展示,在对目标音乐进行可视化的过程中,可以有两种可视化路径,一种是用户从播放器的音乐库内选择已有的音乐进行可视化,另一种是用户有其他音乐进行可视化时,可以通过上传提取的音乐进行可视化;例如在音乐信息展示页面进行可视化展示。
由以上可知,本申请实施例在显示当前播放的目标音乐的音乐播放页面后,响应于针对音乐播放页面中音乐信息展示控件的触发操作,显示目标音乐的音乐信息展示页面,该音乐信息展示页面包括目标音乐中至少一个音乐元素对应的音轨、以及音轨对应的节拍线,然后,基于目标音乐的播放进度,在音乐信息展示页面对音轨和跟随音轨运动的节拍线进行滚动展示;由于该方案可以在播放目标音乐的同时显示目标音乐中至少一个音乐元素对应的音轨和该音轨对应的节拍线,而且,还可以根据目标音乐的播放进度,对音轨和节拍线进行滚动展示,从而可以准确的展示目标音乐所包含的内容,因此,可以提升音乐信息展示的准确性。
根据上面实施例所描述的方法,以下将举例作进一步详细说明。
在本申请实施例中,将以该音乐信息展示装置具体集成在电子设备,电子设备为终端,音乐信息展示页面包括人声音轨、鼓点音轨、伴奏音轨和贝斯音轨,节拍线信息为节拍线序号为例进行说明。
如图24所示,一种音乐信息展示方法,具体流程如下:
步骤701、终端显示当前播放的目标音乐的音乐播放页面。
702、终端响应于针对音乐信息展示控件的触发操作,显示目标音乐的音乐信息展示页面。
步骤703、终端基于目标音乐的播放进度,在音乐信息展示页面对音轨进行滚动展示,并在音轨上进行动效播放。
步骤705、终端响应于针对调整控件的触发操作,对目标音轨和目标音乐进行调整。
步骤701至步骤705的细节可以参照上文而理解,这里不再重复说明。
为了更好地实施以上方法,本申请实施例还提供一种音乐信息展示装置,该音乐信息展示装置可以集成在电子设备,比如服务器或终端等设备中,该终端可以包括平板电脑、笔记本电脑和/或个人计算机等。
例如,如图25所示,该音乐信息展示装置可以包括第一显示单元301、第二显示单元302和滚动展示单元303,如下:
(1)第一显示单元301;
第一显示单元301,配置为显示当前播放的目标音乐的音乐播放页面,该音乐播放页面包括音乐信息展示控件。
(2)第二显示单元302;
第二显示单元302,配置为响应于针对音乐信息展示控件的触发操作,显示目标音乐的音乐信息展示页面,该音乐信息展示页面包括目标音乐中至少一个音乐元素对应的音轨、以及音轨对应的节拍线,该节拍线用于指示目标音乐的节拍。
(3)滚动展示单元303;
滚动展示单元303,配置为基于目标音乐的播放进度,在音乐信息展示页面对音轨和跟随音轨运动的节拍线进行滚动展示。
在一些实施例中,音乐信息展示装置还可以包括节拍展示单元304,如图26所示,具体可以如下:
节拍展示单元304,配置为在节拍展示区域展示每一节拍类型对应的节拍展示信息,当检测到音轨或音轨段上绘制节拍线,或,节拍标识线与节拍线重合时,在节拍展示区域对节拍线对应的节拍展示信息进行动态展示。
在一些实施例中,音乐信息展示装置还可以包括调整单元305,如图27所示,具体可以如下:
调整单元305,响应于针对音乐信息展示页面中音轨的调整控件的触发操作,对音轨和目标音乐进行调整。
例如,调整单元305,具体可以配置为响应于针对调整控件的触发操作,获取调整控件对应的目标音轨的音频文件的当前播放音量,根据当前播放音量,在音乐信息展示页面对目标音轨进行隐藏处理,基于目标音轨的音频文件,播放调整后音乐,该调整后音乐为在目标音乐中对目标音轨的音频文件进行静音播放后的音乐。
具体实施时,以上各个单元可以作为独立的实体来实现,也可以进行任意组合,作为同一或若干个实体来实现,以上各个单元的具体实施可参见前面的方法实施例,在此不再赘述。
本申请实施例还提供一种电子设备,如图28所示,其示出了本申请实施例所涉及的电子设备的结构示意图,具体来讲:
该电子设备可以包括一个或者一个以上处理核心的处理器401、一个或一个以上计算机可读存储介质的存储器402、电源403和输入单元404等部件。本领域技术人员可以理解,图28中示出的电子设备结构并不构成对电子设备的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。其中:
处理器401是该电子设备的控制中心,利用各种接口和线路连接整个电子设备的各个部分,通过运行或执行存储在存储器402内的软件程序和/或模块,以及调用存储在存储器402内的数据,执行电子设备的各种功能和处理数据。在一些实施例中,处理器401可包括一个或多个处理核心;优选的,处理器401可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器401中。
存储器402可用于存储软件程序以及模块,处理器401通过运行存储在存储器402的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器402可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据电子设备的使用所创建的数据等。此外,存储器402可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器402还可以包括存储器控制器,以提供处理器401对存储器402的访问。
电子设备还包括给各个部件供电的电源403,电源403可以通过电源管理系统与处理器401逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源403还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。
该电子设备还可包括输入单元404,该输入单元404可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。
尽管未示出,电子设备还可以包括显示单元等,在此不再赘述。具体在本申请实施例中,电子设备中的处理器401会按照如下的指令,将一个或一个以上的应用程序的进程对应的可执行文件加载到存储器402中,并由处理器401来运行存储在存储器402中的应用程序,从而实现本申请实施例上述的音乐信息展示方法的各种功能。
本领域普通技术人员可以理解,上述实施例的各种方法中的全部或部分步骤可以通过指令来完成,或通过指令控制相关的硬件来完成,该指令可以存储于一计算机可读存储介质中,并由处理器进行加载和执行。
为此,本申请实施例提供一种计算机可读存储介质,其中存储有多条指令,该指令能够被处理器进行加载,以执行本申请实施例所提供的任一种音乐信息展示方法中的步骤。以上各个操作的具体实施可参见前面的实施例,在此不再赘述。其中,该计算机可读存储介质可以包括:只读存储器(ROM,Read Only Memory)、随机存取记忆体(RAM, Random Access Memory)、磁盘或光盘等。
由于该计算机可读存储介质中所存储的指令,可以执行本申请实施例所提供的任一种音乐信息展示方法中的步骤,因此,可以实现本申请实施例所提供的任一种音乐信息展示方法所能实现的有益效果,详见前面的实施例,在此不再赘述。
本申请实施例提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中。计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行上述音乐信息展示方面或者音轨和节拍信息展示方面的各种可选实现方式中提供的方法。
以上对本申请实施例所提供的一种音乐信息展示方法、装置和计算机可读存储介质进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请的方法及其核心思想;同时,对于本领域的技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (20)

  1. 一种音乐信息展示方法,由电子设备执行,包括:
    显示当前播放的目标音乐的音乐播放页面,其中,所述音乐播放页面包括音乐信息展示控件;
    响应于针对所述音乐信息展示控件的触发操作,显示所述目标音乐的音乐信息展示页面,其中,所述音乐信息展示页面包括所述目标音乐中至少一个音乐元素对应的音轨、以及所述音轨对应的节拍线,所述节拍线用于指示所述目标音乐的节拍;
    基于所述目标音乐的播放进度,在所述音乐信息展示页面对所述音轨和跟随所述音轨运动的节拍线进行滚动展示。
  2. 根据权利要求1所述的音乐信息展示方法,其中,所述显示所述目标音乐的音乐信息展示页面,包括:
    获取所述目标音乐中至少一个音乐元素的音轨数据;
    基于所述音轨数据,在预设音乐信息展示页面绘制所述音乐元素对应的音轨;
    根据所述目标音乐的当前播放时刻,在所述音轨绘制节拍线,得到所述目标音乐的音乐信息展示页面,并显示所述音乐信息展示页面。
  3. 根据权利要求2所述的音乐信息展示方法,其中,所述根据所述目标音乐的当前播放时刻,在所述音轨绘制节拍线,得到所述目标音乐的音乐信息展示页面,包括:
    在所述音轨中筛选出鼓点音轨,并基于所述鼓点音轨,确定所述目标音乐的节拍信息;
    基于所述目标音乐的当前播放时刻和节拍信息,在所述音轨上识别出已播放节拍的节拍线位置;
    基于所述节拍线位置,在所述音轨绘制出所述已播放节拍的节拍线,得到所述目标音乐的音乐信息展示页面。
  4. 根据权利要求3所述的音乐信息展示方法,其中,所述基于所述鼓点音轨,确定所述目标音乐的节拍信息,包括:
    在所述鼓点音轨中识别出每一鼓点的鼓点类型和时间信息;
    基于所述时间信息,计算不同鼓点类型的鼓点之间的时间间隔,得到时间间隔集合;
    根据所述鼓点类型和时间间隔集合,确定所述目标音乐的节拍信息。
  5. 根据权利要求4所述的音乐信息展示方法,其中,所述根据所述鼓点类型和时间间隔集合,确定所述目标音乐的节拍信息,包括:
    在所述时间间隔集合中筛选出数量最多的时间间隔,得到所述目标音乐的节拍线的时间间距;
    对所述鼓点音轨中的鼓点进行排序,得到所述鼓点音轨的目标鼓点;
    根据所述时间间距和所述目标鼓点的时间信息,确定所述目标音乐的每一节拍线的节拍类型和时间位置,并将所述节拍线的节拍类型和时间位置作为节拍信息。
  6. 根据权利要求5所述的音乐信息展示方法,其中,所述基于所述目标音乐的当前播放时刻和节拍信息,在所述音轨上识别出已播放节拍的节拍线位置,包括:
    将所述目标音乐的当前播放时刻与所述节拍线的时间位置进行对比;
    基于对比结果,在所述节拍线中筛选出已播放节拍的历史节拍线;
    获取所述音轨的时间轴信息,并基于所述时间轴信息和所述历史节拍线的时间位置,在所述音轨上识别出已播放节拍的节拍线位置。
  7. 根据权利要求3所述的音乐信息展示方法,其中,在所述基于所述节拍线位置,在所述音轨绘制出所述已播放节拍的节拍线时,还包括:
    将所述节拍信息转换为节拍音轨的目标音轨数据;
    基于所述目标音轨数据,在所述预设音乐信息展示页面绘制所述节拍音轨;
    将所述预设音乐信息展示页面中的所述鼓点音轨替换为所述节拍音轨。
  8. 根据权利1至7任一项所述的音乐信息展示方法,其中,所述音乐信息展示页面包括节拍标识线,所述基于所述目标音乐的播放进度,在所述音乐信息展示页面对所述音轨和跟随所述音轨运动的节拍线进行滚动展示,包括:
    基于所述目标音乐的播放进度,在所述音乐信息展示页面上对所述音轨进行滚动展示,并在所述音轨进行动效播放;
    根据所述节拍标识线和目标音乐的播放进度,在所述音轨上绘制节拍线,以便所述节拍线跟随所述音轨进行滚动展示。
  9. 根据权利要求8所述的音乐信息展示方法,其中,所述基于所述目标音乐的播放进度,在所述音乐信息展示页面上对所述音轨进行滚动展示,包括:
    根据所述音轨的时间轴信息,在所述音轨中识别出每一播放时刻对应的音轨段;
    对所述音轨段进行排序,并基于排序信息和所述目标音乐的播放进度,对所述音轨段进行滚动展示。
  10. 根据权利要求9所述的音乐信息展示方法,其中,所述根据所述节拍标识线和目标音乐的播放进度,在所述音轨上绘制节拍线,以便所述节拍线跟随所述音轨进行滚动展示,包括:
    根据所述目标音乐的播放进度,在所述音轨段中选出当前展示的当前音轨段;
    基于所述节拍标识线和目标音乐的播放进度,确定所述鼓点音轨的当前时间轴的偏移距离;
    计算所述偏移距离与所述节拍线的时间间距的比值,以得到所述当前音轨段的待播放节拍的节拍线信息;
    基于所述节拍线信息,在所述当前音轨段上绘制所述待播放节拍的节拍线,并滚动展示所述节拍线。
  11. 根据权利要求10所述的音乐信息展示方法,其中,所述基于所述节拍标识线和目标音乐的播放进度,确定所述鼓点音轨的当前时间轴的偏移距离,包括:
    根据所述目标音乐的播放进度,在鼓点分桶中筛选出当前帧对应的目标鼓点分桶;
    在所述目标鼓点分桶中识别出与所述节拍标识线最近的目标鼓点,并计算所述目标鼓点与所述节拍标识线的当前距离;
    基于所述当前距离和所述目标鼓点的位置信息,确定所述鼓点音轨的当前时间轴的偏移距离。
  12. 根据权利要求11所述的音乐信息展示方法,其中,所述根据所述目标音乐的播放进度,在鼓点分桶中筛选出当前帧对应的目标鼓点分桶之前,还包括:
    在所述鼓点音轨中提取出至少一个鼓点,并识别出所述鼓点的时间信息;
    基于所述时间信息,对所述鼓点进行分类,得到每一时间区间对应的目标鼓点;
    在预设分桶中筛选出所述每一时间区间对应的目标分桶,并将所述每一时间区间对应的目标鼓点添加至所述时间区间对应的目标分桶,得到各时间区间对应的鼓点分桶。
  13. 根据权利要求10所述的音乐信息展示方法,其中,所述基于所述节拍线信息,在所述当前音轨段上绘制待播放节拍的节拍线,包括:
    对所述节拍线信息进行取整处理,得到当前取整后节拍线信息;
    基于所述目标音乐的播放进度,获取历史取整后节拍线信息;
    当所述当前取整后节拍线信息与历史取整后节拍线信息不同时,基于所述当前取整后节拍线信息,在所述当前音轨段上绘制待播放节拍的节拍线。
  14. 根据权利要求13所述的音乐信息展示方法,其中,所述基于所述当前取整后节拍线信息,在所述当前音轨段上绘制待播放节拍的节拍线,包括:
    基于所述当前取整后节拍线信息,在所述目标音乐的节拍线中筛选出待播放节拍的目标节拍线;
    计算所述当前取整后节拍线信息和所述待播放节拍的节拍线信息的信息差值,并基于所述信息差值,确定所述目标节拍线的延时绘制时间;
    基于所述延时绘制时间,在所述当前音轨段上绘制所述待播放节拍的目标节拍线。
  15. 根据权利要求2所述的音乐信息展示方法,其中,所述音乐信息展示页面还包括节拍展示区域,所述显示所述音乐信息展示页面之后,还包括:
    在所述节拍展示区域展示每一节拍类型对应的节拍展示信息;
    当检测到在音轨或音轨段上绘制节拍线,或,所述节拍标识线与所述节拍线重合时,在所述节拍展示区域对所述节拍线对应的节拍展示信息进行动态展示。
  16. 根据权利要求15所述的音乐信息展示方法,其中,所述节拍展示区域包括节拍播放控件,所述在所述节拍展示区域对所述节拍线对应的节拍展示信息进行动态展示,包括:
    根据所述节拍线的节拍类型,在所述节拍展示信息中筛选出目标节拍展示信息,并对目标节拍展示信息进行动效播放;
    响应于针对节拍播放控件的触发操作,播放所述节拍类型对应的节拍声音;
    根据所述节拍线的位置信息,确定所述节拍线对应的节拍标识,并在所述节拍展示区域展示所述节拍标识。
  17. 根据权利要求1至7任一项所述的音乐信息展示方法,其中,所述音乐信息展示页面还包括每一音轨对应的调整控件,所述显示所述目标音乐的音乐信息展示页面之后,还包括:
    响应于针对所述调整控件的触发操作,获取所述调整控件对应的目标音轨的音频文件的当前播放音量;
    根据所述当前播放音量,在所述音乐信息展示页面对所述目标音轨进行隐藏处理;
    基于所述目标音轨的音频文件,播放调整后音乐,所述调整后音乐为在所述目标音乐中对所述目标音轨的音频文件进行静音播放后的音乐。
  18. 一种音乐信息展示装置,包括:
    第一显示单元,配置为显示当前播放的目标音乐的音乐播放页面,所述音乐播放页面包括音乐信息展示控件;
    第二显示单元,配置为响应于针对所述音乐信息展示控件的触发操作,显示所述目标音乐的音乐信息展示页面,所述音乐信息展示页面包括所述目标音乐中至少一个音乐元素对应的音轨、以及所述音轨对应的节拍线,所述节拍线用于指示所述目标音乐的节拍;
    滚动展示单元,配置为基于所述目标音乐的播放进度,在所述音乐信息展示页面对所述音轨和跟随所述音轨运动的节拍线进行滚动展示。
  19. 一种计算机程序产品,包括计算机程序/指令,所述计算机程序/指令被处理器执行时实现权利要求1至17任一项所述音乐信息展示方法中的步骤。
  20. 一种计算机可读存储介质,其中,所述计算机可读存储介质存储有多条指令,所述指令适于处理器进行加载,以执行权利要求1至17任一项所述的音乐信息展示方法中的步骤。
PCT/CN2023/079797 2022-04-01 2023-03-06 音乐信息展示方法、装置、设备、程序产品和存储介质 WO2023185387A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210349425.8A CN116932809A (zh) 2022-04-01 2022-04-01 一种音乐信息展示方法、装置和计算机可读存储介质
CN202210349425.8 2022-04-01

Publications (1)

Publication Number Publication Date
WO2023185387A1 true WO2023185387A1 (zh) 2023-10-05

Family

ID=88198962

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/079797 WO2023185387A1 (zh) 2022-04-01 2023-03-06 音乐信息展示方法、装置、设备、程序产品和存储介质

Country Status (2)

Country Link
CN (1) CN116932809A (zh)
WO (1) WO2023185387A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008024486A2 (en) * 2006-08-24 2008-02-28 Fliptrack, Inc. Beat and text based editing and composing systems and methods
US20110011243A1 (en) * 2009-07-20 2011-01-20 Apple Inc. Collectively adjusting tracks using a digital audio workstation
CN110070849A (zh) * 2019-03-22 2019-07-30 福建星网视易信息系统有限公司 音频演奏方法及计算机可读存储介质
US20190341010A1 (en) * 2018-04-24 2019-11-07 Dial House, LLC Music Compilation Systems And Related Methods
CN114143587A (zh) * 2020-09-03 2022-03-04 上海哔哩哔哩科技有限公司 一种用于在目标音乐视频中乐谱展示的方法与设备

Also Published As

Publication number Publication date
CN116932809A (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
AU2021201916B2 (en) Rhythmic Synchronization Of Cross Fading For Musical Audio Section Replacement For Multimedia Playback
US20220303644A1 (en) Method and device for obtaining video clip, server, and storage medium
JP7283496B2 (ja) 情報処理方法、情報処理装置およびプログラム
US10560657B2 (en) Systems and methods for intelligently synchronizing events in visual content with musical features in audio content
US20180137425A1 (en) Real-time analysis of a musical performance using analytics
US10799795B1 (en) Real-time audio generation for electronic games based on personalized music preferences
US10506268B2 (en) Identifying media content for simultaneous playback
US10915566B2 (en) System and method for automatic synchronization of video with music, and gaming applications related thereto
JP2017513049A (ja) カラオケ曲の演奏に関するフィードバックをユーザに提供する方法
US20210295578A1 (en) Method and apparatus for controlling avatars based on sound
JP7140221B2 (ja) 情報処理方法、情報処理装置およびプログラム
EP3945435A1 (en) Dynamic identification of unknown media
US20210034688A1 (en) Systems and methods for recommending collaborative content
CN114073854A (zh) 基于多媒体文件的游戏方法和系统
WO2023185387A1 (zh) 音乐信息展示方法、装置、设备、程序产品和存储介质
CN106448710B (zh) 一种音乐播放参数的校准方法及音乐播放设备
CN116932810A (zh) 一种音乐信息展示方法、装置和计算机可读存储介质
WO2023185425A1 (zh) 音乐匹配方法、装置、电子设备、存储介质及程序产品
KR102507870B1 (ko) 사용자 음원의 생성 및 경연을 위한 사용자 인터페이스 제공 장치 및 방법
US20220335974A1 (en) Multimedia music creation using visual input
JP6978028B2 (ja) 表示制御システム、表示制御方法、及び、プログラム
Doherty et al. A self-similarity approach to repairing large dropouts of streamed music
RODRÍGUEZ SALGADO Music recommendation system based on audio segmentation and feature evolution

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23777765

Country of ref document: EP

Kind code of ref document: A1