CN110532420B - Song processing method and device - Google Patents

Song processing method and device

Info

Publication number
CN110532420B
Authority
CN
China
Prior art keywords
song
segment
fragment
data
target
Prior art date
Legal status
Active
Application number
CN201910780086.7A
Other languages
Chinese (zh)
Other versions
CN110532420A (en)
Inventor
瞿靖坤
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201910780086.7A priority Critical patent/CN110532420B/en
Publication of CN110532420A publication Critical patent/CN110532420A/en
Application granted granted Critical
Publication of CN110532420B publication Critical patent/CN110532420B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63 Querying
    • G06F16/635 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63 Querying
    • G06F16/638 Presentation of query results
    • G06F16/639 Presentation of query results using playlists

Abstract

The present disclosure relates to a song processing method and apparatus. When applied to a server, the method includes the following steps: responding to a song request, sent by a client, for a target song in a first specified mode, and judging whether the target song has been song-fragment-marked by an account corresponding to the client, wherein the first specified mode is a mode allowing a user to request acquisition of a song fragment; and if so, acquiring first song fragment data marked by the account in the target song and returning the first song fragment data to the client. In this way, only the essential song fragments of the target song need to be returned to the client rather than the whole song, which saves network bandwidth resources and spares the user the time of playing the entire song.

Description

Song processing method and device
Technical Field
The present disclosure relates to the field of audio processing technologies, and in particular, to a song processing method and apparatus.
Background
With the development of science and communication technology, users increasingly use players to listen to songs online. At present, songs are basically played from the beginning, and when users hear songs they like, they add them to their own playlists. However, a user may not want to listen to the entire version of a song, but only to his or her favorite portion. Alternatively, when a user hears an unfamiliar song and dislikes its prelude, the user may skip the song directly and miss its best segment.
Meanwhile, songs are generally recommended according to their play counts or the types of songs the user has listened to, and such recommendation takes the whole song as its unit, so the recommendation accuracy is not high.
Disclosure of Invention
The present disclosure provides a song processing method and apparatus, which at least solve the problems in the related art of low playing efficiency and low recommendation accuracy caused by playing and recommending songs at the granularity of the whole song. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a song processing method, which is applied in a server, the song processing method including:
responding to a song request, sent by a client, for a target song in a first specified mode, and judging whether the target song has been song-fragment-marked by an account corresponding to the client, wherein the first specified mode is a mode allowing a user to request acquisition of a song fragment;
and if so, acquiring first song fragment data marked by the account in the target song, and returning the first song fragment data to the client.
According to a second aspect of the embodiments of the present disclosure, there is provided a method for song processing, the method being applied in a server, the method for song processing including:
receiving fragment marking information that is sent by a client and obtained after a song fragment of a target song is marked in a second designated mode, wherein the fragment marking information comprises a start marking time and an end marking time, and the second designated mode is a mode allowing a user to mark song fragments;
storing the fragment marking information in association with the identification of the target song;
acquiring song segment characteristic information of first song segment data corresponding to the segment marking information, and searching similar song segment data matched with the song segment characteristic information from other songs on the basis of the song segment characteristic information;
and generating a similar song fragment list from the similar song fragment data, and recommending the similar song fragment list to the client.
According to a third aspect of the embodiments of the present disclosure, there is provided a song processing method, which is applied in a client, the song processing method including:
detecting a target song identifier selected by a current account in a first designated mode, generating a song request based on the target song identifier and the account identifier of the account, and sending the song request to a server, wherein the first designated mode is a mode allowing a user to request for acquiring song fragments;
receiving song segment data of the target song returned by the server based on the song request;
and playing the song clip data.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a song processing method, which is applied in a client, the song processing method including:
when a second designated mode is triggered, detecting song segment marking on a target song to obtain segment marking information, wherein the segment marking information comprises starting marking time and ending marking time, and the second designated mode is a mode allowing a user to mark a song segment;
sending the identification of the target song, the current account identification and the fragment mark information to a server for storage;
receiving a similar song fragment data list sent by the server, wherein the similar song fragment data list is a list composed of similar song fragment data acquired from other songs on the basis of song fragment characteristic information, the song fragment characteristic information being acquired by the server after it acquires the song fragment data corresponding to the fragment marking information;
and displaying the similar song fragment data list.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a song processing apparatus, the apparatus being applied in a server, the song processing apparatus including:
a mark judging module, configured to respond to a song request, sent by a client, for a target song in a first specified mode, judge whether the target song has been song-fragment-marked by an account corresponding to the client, and if so, invoke the first song fragment data acquisition module, wherein the first specified mode is a mode allowing a user to request acquisition of a song fragment;
and the first song fragment data acquisition module is configured to acquire first song fragment data marked by the account in the target song and return the first song fragment data to the client.
According to a sixth aspect of the embodiments of the present disclosure, there is provided a song processing apparatus, which is applied in a server, the song processing apparatus including:
a fragment marking information receiving module, configured to receive fragment marking information that is sent by a client and obtained after a song fragment of a target song is marked in a second designated mode, wherein the fragment marking information comprises a start marking time and an end marking time, and the second designated mode is a mode allowing a user to mark song fragments;
an association storage module, configured to store the fragment marking information in association with the identification of the target song;
the similar song fragment data acquisition module is configured to acquire song fragment characteristic information of first song fragment data corresponding to the fragment marking information and search similar song fragment data matched with the song fragment characteristic information from other songs on the basis of the song fragment characteristic information;
and the similar song fragment list generation module is configured to generate a similar song fragment list from the similar song fragment data and recommend the similar song fragment list to the client.
According to a seventh aspect of the embodiments of the present disclosure, there is provided a song processing apparatus, which is applied to a client, the song processing apparatus including:
the song request generating module is configured to detect a target song identification selected by a current account in a first designated mode, generate a song request based on the target song identification and the account identification of the account, and send the song request to a server, wherein the first designated mode is a mode allowing a user to request for obtaining song fragments;
a song fragment data receiving module, configured to receive song fragment data of the target song returned by the server based on the song request;
a song clip data playing module configured to play the song clip data.
According to an eighth aspect of the embodiments of the present disclosure, there is provided a song processing apparatus, which is applied to a client, the song processing apparatus including:
the song segment marking information acquisition module is configured to detect song segment marking of a target song when a second specified mode is triggered, and acquire segment marking information, wherein the segment marking information comprises a starting marking time and an ending marking time, and the second specified mode is a mode allowing a user to mark song segments;
the fragment marking information sending module is configured to send the identification of the target song, the current account identification and the fragment marking information to a server for storage;
a similar song fragment data list receiving module, configured to receive a similar song fragment data list sent by the server, wherein the similar song fragment data list is a list composed of similar song fragment data acquired from other songs on the basis of song fragment characteristic information, the song fragment characteristic information being acquired by the server after it acquires the song fragment data corresponding to the fragment marking information;
and a similar song fragment data list display module, configured to display the similar song fragment data list.
According to a ninth aspect of embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method described above.
According to a tenth aspect of embodiments of the present disclosure, there is provided a storage medium having stored thereon instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the above-described method.
According to an eleventh aspect of embodiments of the present disclosure, there is provided a computer program product comprising executable program code, wherein the program code, when executed by the above-described apparatus, implements the above-described method.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
in this embodiment, when the server receives a song request for a target song in a first specified mode sent by the client, if it is determined that the target song has been subjected to a song clip marking by an account corresponding to the client, first song clip data marked by a current account in the target song may be obtained and sent to the client.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow diagram illustrating an embodiment of a server-based song processing method according to an exemplary embodiment.
Fig. 2 is a schematic diagram of a play interface in a first designated mode according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating an embodiment of a first song clip data acquisition manner according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating a second song clip data acquisition mode embodiment according to an exemplary embodiment.
Fig. 5 is a flow diagram illustrating another embodiment of a method for server-based song processing according to an exemplary embodiment.
Fig. 6 is a flowchart illustrating an embodiment of a similar song clip data acquisition manner according to an exemplary embodiment.
Fig. 7 is a flowchart illustrating another embodiment of a method for server-based song processing, according to an illustrative embodiment.
Fig. 8 is a flow diagram illustrating another embodiment of a method for client-based song processing in accordance with an exemplary embodiment.
Fig. 9 is a flow diagram illustrating another embodiment of a method for client-based song processing in accordance with an exemplary embodiment.
Fig. 10 is a block diagram illustrating a server-based song processing apparatus according to an exemplary embodiment.
Fig. 11 is a block diagram illustrating another server-based song processing apparatus according to an exemplary embodiment.
Fig. 12 is a block diagram illustrating another song processing apparatus based on a client according to an example embodiment.
Fig. 13 is a block diagram illustrating another client-based song processing apparatus according to an example embodiment.
Fig. 14 is a block diagram illustrating an apparatus for performing the above-described method embodiments, according to an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Fig. 1 is a flowchart illustrating an embodiment of a song processing method according to an exemplary embodiment, which may be applied to a server and may include the following steps.
In step S11, in response to a song request for a target song in a first specified mode sent by a client, it is determined whether the target song has been tagged with a song clip by an account corresponding to the client; if so, step S12 is executed; if not, step S13 is executed.
In the present embodiment, the first specified mode is a mode that allows the user to request acquisition of a song clip.
This embodiment provides a song processing manner with a first specified mode, in which processing takes the song segment, rather than the whole song, as its dimension. This reduces the unit at which songs are processed and improves the accuracy of song processing.
In an example, an option for the first designated mode may be provided in a song playing interface of the client. As shown in the playing interface schematic diagram of fig. 2, the first designated mode is the paragraph mode in fig. 2, and when a user selects the paragraph mode option, the paragraph-mode processing mode is entered. For example, in the paragraph mode, if the user wants to play a certain song (i.e., the target song), the client may generate a song request in the paragraph mode for the identification of the target song (e.g., the name of the target song), and send the song request to the server.
Illustratively, the song request may include an identification of the target song, a specified pattern identification indicating that the current pattern is a first specified pattern, an account identification corresponding to the client, and the like.
After receiving the song request, the server parses the song request to obtain a specified pattern identifier indicating a first specified pattern and an identifier of the target song, so that it can be determined that the song request is a request for the target song in the first specified pattern. Subsequently, the server may determine whether the target song has been tagged with a song clip by an account corresponding to the client.
In one possible implementation, the server may determine whether the target song has been tagged with a song clip by the account corresponding to the client in the following manner: acquiring a fragment mark list corresponding to the account identifier, wherein the fragment mark list comprises the identifiers of songs on which the account has marked song fragments; if the identification of the target song exists among the song identifications recorded in the fragment mark list, judging that the target song has been song-fragment-marked by the account corresponding to the client; and if it does not, judging that the target song has not been song-fragment-marked by the account corresponding to the client.
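A minimal sketch of this check, assuming the server keeps a mapping from account identifiers to the lists of song identifiers each account has marked (the storage layout and all names here are illustrative, not specified by the disclosure):

```python
def has_marked_segment(mark_lists, account_id, song_id):
    """Return True if the account's fragment mark list contains the song.

    mark_lists: hypothetical mapping {account_id: [song_id, ...]} of the
    songs each account has song-fragment-marked. An account with no
    marks simply yields an empty list.
    """
    return song_id in mark_lists.get(account_id, [])
```

If the lookup fails for the current account, the server can fall back to the alternative determination methods described below, such as checking whether other accounts have marked the song.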
In an example of this embodiment, the client may perform song segment tagging as follows: as shown in fig. 2, while a song is playing, the user may click the "mark" button in fig. 2 to mark his or her favorite segment of the current song. At this time, the client may determine from the user's click operation that the current page is in a second specified mode, where the second specified mode is a mode allowing the user to mark song segments.
For example, in an exemplary operation scenario, for a currently playing song, when the user clicks the "mark" button in fig. 2, the second designated mode is entered, and when a position in the progress bar for playing the song is clicked, a time (e.g., 2 minutes and 30 seconds) corresponding to the position may be recorded as a start mark time, and a mark (e.g., a diamond mark on the progress bar in fig. 2) is displayed at the position. When the user clicks other positions of the progress bar, the time (e.g., 3 minutes and 10 seconds) corresponding to the other positions may be recorded as the end mark time, and a mark is displayed at the other positions. The song segment between the two tags is the user-tagged song segment, i.e., the song segment between 2 minutes 30 seconds and 3 minutes 10 seconds is the user-tagged song segment.
In one example, the position of the progress bar corresponding to the song segment marked by the user can be highlighted with a designated color in the client.
In implementation, the user may also adjust the marked song clip by dragging the mark on the progress bar.
It should be noted that the user may also click the playing progress bar multiple times to mark favorite song segments, and the position of each click and the position of the previous click may constitute a song segment.
The client can determine the segment marking information according to the positions marked by the user twice and sends the segment marking information to the server for storage. Illustratively, the segment marking information may include the time of two adjacent marks (i.e., the start mark time and the end mark time), the identification of the song, the identification of the account, the identification of the designated pattern indicating the second designated pattern, and the like.
Of course, when the client detects that the user has adjusted a mark on the progress bar, it can obtain the latest segment marking information and send it to the server.
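As a sketch, the payload the client assembles might look as follows. The field names and JSON encoding are hypothetical; the disclosure only requires that the song identifier, account identifier, specified-mode identifier, and the two mark times be sent:

```python
import json

def build_segment_mark_payload(song_id, account_id, start_time_s, end_time_s):
    """Assemble the segment marking information sent to the server.

    Times are in seconds from the start of the song. If the user clicked
    the later position first, the two times are swapped so that the start
    marking time always precedes the end marking time.
    """
    if end_time_s < start_time_s:
        start_time_s, end_time_s = end_time_s, start_time_s
    return json.dumps({
        "song_id": song_id,
        "account_id": account_id,
        "mode": "second_specified",       # indicates the marking mode
        "start_mark_time": start_time_s,  # e.g. 150 for 2 min 30 s
        "end_mark_time": end_time_s,      # e.g. 190 for 3 min 10 s
    })
```

When a mark on the progress bar is dragged, the client would rebuild this payload with the updated times and resend it.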
After receiving the song request, the server may, according to the account identifier carried in the song request, filter out the identifiers of the songs on which that account has marked song segments to form a segment marking list, and then judge whether the identifier of the target song exists in the segment marking list. If so, the current account has marked a song segment of the target song; if not, it has not.
Of course, this embodiment is not limited to the above determination method; other determination methods may also be adopted. For example, a list of the accounts that have marked the target song may be obtained: if the account list includes the account identifier carried in the song request, the target song has already been marked by the current account; if it does not, the target song has not been marked by the current account.
In step S12, first song clip data tagged by the account in the target song is acquired, and the first song clip data is returned to the client.
In this embodiment, if the target song requested by the client has previously been song-segment-tagged by the current account, the server may obtain the first song segment data with which the account tagged the target song, and return the first song segment data to the client for playing.
In one possible implementation, referring to the flowchart of the first song clip data acquisition manner embodiment shown in fig. 3, step S12 may further include step S121 and step S122.
In step S121, first segment tagging information that the account tags to the target song is acquired.
The first segment marking information is obtained when the user marks the target song in the second specified mode of the client, and may include, for example, a start marking time and an end marking time.
It should be noted that if the user performs two or more song segment marking actions on the same song, two or more song segments are marked. In this case, the start marking time and the end marking time apply to each song segment: the start marking time is the time corresponding to the first-marked position within one song segment, not a start marking time for the whole song, and the end marking time is the time corresponding to the later-marked position within that song segment, not an end marking time for the whole song.
In one implementation, the server may search a storage medium storing segment marking information for the first segment marking information with which the account corresponding to the account identifier marked the target song.
In step S122, song data between the start marker time and the end marker time is intercepted from the song data of the target song as first song clip data.
In this step, after the server obtains the first segment flag information, song data between the start flag time and the end flag time may be extracted from the song data of the target song as first song segment data according to the start flag time and the end flag time recorded in the first segment flag information.
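Working on decoded PCM audio, the extraction step reduces to index arithmetic: the mark times are converted to sample offsets and the slice between them is returned. This is a sketch under that assumption; the disclosure does not specify the audio representation:

```python
def extract_segment(samples, sample_rate, start_mark_s, end_mark_s):
    """Extract the samples lying between the start and end marking times.

    samples: decoded audio samples of the whole song (a flat sequence);
    sample_rate: samples per second. Mark times, given in seconds, are
    converted to sample indices before slicing.
    """
    start_idx = int(start_mark_s * sample_rate)
    end_idx = int(end_mark_s * sample_rate)
    return samples[start_idx:end_idx]
```

For compressed formats the same idea applies after decoding, or by cutting at frame boundaries nearest to the mark times.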
In step S13, it is determined whether the target song is tagged with a song clip by another account, and if the target song is tagged with a song clip by another account, second song clip data of the target song tagged with another account is acquired, and the second song clip data is returned to the client.
In this embodiment, if the target song requested by the user has not been song-segment-marked by the current account, the server may determine whether the target song has been song-segment-marked by other accounts. If so, second song segment data of the target song marked by other accounts may be acquired and returned to the client for playing; if not, the whole song is returned directly to the client for playing.
In one possible implementation, referring to the flowchart of the second song clip data acquisition manner embodiment shown in fig. 4, step S13 may further include step S131, step S132, and step S133.
In step S131, a segment marker record of the target song is obtained, the segment marker record including one or more second segment marker information.
In this step, for a target song, there may be multiple accounts for tagging song segments for the target song, and the server may receive one or more second segment tagging information for the target song, where each second segment tagging information includes a start tagging time and an end tagging time.
In one example, the second segment tagging information for the target song may be aggregated to generate a segment tagged record for the target song.
In step S132, the song segments marked most by the target song are counted according to the segment marking records.
In this step, the second segment tagging information of the target song may be counted to determine the most tagged song segments.
In one implementation, the second segment marking information may be compared pairwise to obtain overlapping portions, and then the overlapping portion with the largest number of overlapping times is used as the most marked song segment.
For example, the second segment marking information of the target song marked by the user a is 2 minutes 10 seconds to 2 minutes 40 seconds; the second segment marking information of the target song marked by the user B is 1 minute, 50 seconds to 2 minutes and 30 seconds; marking information of a second segment marked by the user C for the target song is 2 min 00 s-3 min 00 s; the second segment tagging information for tagging the target song by the user D is 0 min 50 sec to 1 min 50 sec. The most overlapping part of the four second segment marking information (i.e. the most marked song segments) is: 2 minutes 10 seconds to 2 minutes 30 seconds.
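The "most marked" segment in the example above can be computed with a sweep over the sorted interval endpoints, which generalizes the pairwise overlap comparison to any number of marks. A sketch with mark times in seconds (the algorithm choice is illustrative; the disclosure only describes pairwise comparison):

```python
def most_marked_segment(intervals):
    """Find the sub-interval covered by the largest number of marks.

    intervals: list of (start, end) mark times in seconds, one pair per
    second-segment-marking record. Returns (start, end, count) of the
    first maximally overlapped stretch.
    """
    events = []
    for start, end in intervals:
        events.append((start, 1))    # a mark begins: coverage +1
        events.append((end, -1))     # a mark ends: coverage -1
    # Process ends before starts at the same instant so that merely
    # touching intervals are not counted as overlapping.
    events.sort(key=lambda e: (e[0], e[1]))
    best_count = count = 0
    best_start = best_end = None
    for i, (time, delta) in enumerate(events):
        count += delta
        if count > best_count:
            best_count = count
            best_start = time
            # coverage stays constant until the next event
            best_end = events[i + 1][0]
    return best_start, best_end, best_count
```

On the four marks from the example (2:10-2:40, 1:50-2:30, 2:00-3:00, 0:50-1:50, expressed in seconds), this returns the stretch from 130 s to 150 s, i.e. 2 minutes 10 seconds to 2 minutes 30 seconds, covered by three marks.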
In step S133, the song data corresponding to the most labeled song clip is acquired as second song clip data.
In this step, after determining the song segments most marked by other accounts in the target song, the song data between the start marking time and the end marking time may be extracted from the song data of the target song as the second song segment data according to the start marking time and the end marking time corresponding to the song segments most marked.
After obtaining the second song clip data, the server may send the second song clip data to the client, and the client plays the second song clip data.
In this embodiment, when the server receives a song request for a target song in a first specified mode sent by the client, if it is determined that the target song has been subjected to a song clip marking by an account corresponding to the client, first song clip data marked by a current account in the target song may be obtained and sent to the client.
Fig. 5 is a flowchart illustrating another embodiment of a song processing method according to an exemplary embodiment, which may be applied to a server and may include the following steps.
In sub-step S21, in response to a song request sent by a client for a target song in a first specified mode, it is determined whether the target song has been tagged with a song clip by an account corresponding to the client, where the first specified mode is a mode that allows a user to request to obtain a song clip.
In sub-step S22, if the target song has been marked with a song fragment by the account corresponding to the client, first fragment marking information marked with the target song by the user is obtained, where the first fragment marking information includes a start marking time and an end marking time.
In sub-step S23, the song data between the start marker time and the end marker time is intercepted from the song data of the target song as first song clip data, and the first song clip data is returned to the client.
In sub-step S24, song clip characteristic information corresponding to the first song clip data is acquired.
In this step, after obtaining the first song segment data, the server may perform audio analysis on the first song segment data, thereby obtaining corresponding song segment characteristic information.
Illustratively, the song segment characteristic information may include, but is not limited to, a song type, a language used by the song, melody information, rhythm information, dynamics information, speed information, harmony information, and the like of each sample point. The song segment characteristic information of all the sampling points can form a song segment characteristic curve, such as a tune characteristic curve, a rhythm characteristic curve, a dynamics characteristic curve and the like.
This embodiment does not limit the way in which the song segment characteristic information is extracted; the extraction may follow any music feature extraction approach in the related art. For example, the song segment characteristic information may be extracted using an MFCC (Mel-frequency cepstral coefficients) model.
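As one hedged illustration of building a per-sample characteristic curve, the sketch below computes a framewise RMS "dynamics curve" in plain Python; a real implementation would more likely use an MFCC extractor such as librosa's `librosa.feature.mfcc`, but the windowing idea is the same (the frame and hop sizes here are arbitrary assumptions):

```python
import math

def dynamics_curve(samples, frame_size=1024, hop=512):
    """Per-frame RMS energy: a minimal stand-in for the 'dynamics
    characteristic curve' described above. Each output value is the
    root-mean-square of one overlapping window of samples."""
    curve = []
    for start in range(0, len(samples) - frame_size + 1, hop):
        frame = samples[start:start + frame_size]
        rms = math.sqrt(sum(x * x for x in frame) / frame_size)
        curve.append(rms)
    return curve

# Hypothetical clip: quiet first half, loud second half
clip = [0.1] * 2048 + [0.9] * 2048
curve = dynamics_curve(clip)
```

The resulting sequence of per-frame values is exactly the kind of "song segment characteristic curve" that the later matching steps compare point by point.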
In sub-step S25, similar song segment data matching the song segment characteristic information is searched for from other songs based on the song segment characteristic information.
In this step, after the song segment characteristic information of the target song is obtained, similar song segment data matching the song segment characteristic information may be searched for from other songs according to the song segment characteristic information.
In a possible implementation manner, referring to the flowchart of an embodiment of the similar song segment data obtaining manner shown in fig. 6, and taking the song segment characteristic information as a tune characteristic curve as an example, step S25 may further include steps S251 to S255.
In step S251, the song type of the target song is determined.
Illustratively, song genres may include pop music, country music, rock music, classical music, jazz, and so on.
In one implementation, the server may pre-store the song type corresponding to each song, and may search for the song type of the target song according to the correspondence.
In step S252, other songs belonging to the same song type are selected as candidate songs.
In this step, after the song type of the target song is determined, other songs belonging to the same song type in the song library may be used as candidate songs.
In other embodiments, candidate songs may also be determined in conjunction with the language used for the target song, e.g., songs in the song library that are of the same song type and in the same language may be used as candidate songs.
In step S253, tune characteristic curves of the candidate songs are respectively obtained.
In one implementation, the melody characteristic curve of each candidate song may be extracted in the same manner as the feature extraction of the first song segment data described above.
In step S254, the tune characteristic curves of the first song segment data are matched among the tune characteristic curves of the candidate songs, and a tune characteristic curve having a similarity greater than a preset similarity threshold with the tune characteristic curve of the first song segment data is obtained as a similar tune characteristic curve.
In this step, after the tune characteristic curves of the respective candidate songs are obtained, for each candidate song, the tune characteristic curve of the candidate song may be matched with the tune characteristic curve of the first song segment data, so as to determine whether there is a similar tune characteristic curve similar to the tune characteristic curve of the first song segment data in the tune characteristic curve of the candidate song.
In one example, the absolute value of the difference between the feature values at corresponding sample points of the tune characteristic curve of the first song segment data and the similar tune characteristic curve does not exceed a preset difference threshold; for example, the absolute value of the difference between the feature value of the first sample point of the tune characteristic curve of the first song segment data and the feature value of the first sample point of the similar tune characteristic curve does not exceed 5.
In step S255, the song segment data corresponding to the similar tune characteristic curve is taken as similar song segment data.
In this step, after the similar tune characteristic curve is obtained, the song segment data corresponding to the similar tune characteristic curve may be extracted from the candidate song as similar song segment data according to the time interval (start time and end time) of the similar tune characteristic curve.
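Steps S251 to S255 can be sketched as follows, assuming each tune characteristic curve is a sequence of per-sample feature values and using the per-sample difference-threshold rule from the example above (the function names and the sliding-window search are illustrative assumptions):

```python
def curves_match(curve_a, curve_b, diff_threshold=5):
    """Per the matching rule above: two equal-length curves match when
    the absolute difference at every sample point stays within the
    preset difference threshold."""
    if len(curve_a) != len(curve_b):
        return False
    return all(abs(a - b) <= diff_threshold for a, b in zip(curve_a, curve_b))

def find_similar_windows(clip_curve, candidate_curve, diff_threshold=5):
    """Slide the clip's curve over a candidate song's curve and return
    the (start, end) sample intervals of every matching window; those
    intervals identify the similar song segment data to extract."""
    n = len(clip_curve)
    return [(i, i + n)
            for i in range(len(candidate_curve) - n + 1)
            if curves_match(clip_curve, candidate_curve[i:i + n], diff_threshold)]

# Hypothetical curves: the clip matches the candidate at samples 1..3
clip = [60, 62, 64]
candidate = [10, 61, 60, 63, 90, 20]
matches = find_similar_windows(clip, candidate)
```

A production system would restrict the candidate set by song type (step S252) before running this comparison, and would likely use a tolerant similarity measure such as dynamic time warping rather than a strict per-sample threshold.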
In sub-step S26, a similar song clip list is generated from all the similar song clip data, and the similar song clip list is recommended to the client.
In an example, a similar song segment list may be generated according to the obtained song identifications corresponding to all similar song segment data and the corresponding segment time intervals, and the similar song segment list may be recommended to the client, so that the client presents the similar song segment list to the user.
In this embodiment, when the server receives a client's song request for a target song in the first specified mode, if it determines that the target song has been marked with a song segment by the account corresponding to the client, the server may obtain the first song segment data marked in the target song by the current account and send it to the client. Meanwhile, the server may determine a similar song segment list from the first song segment data and recommend that list to the client, thereby achieving recommendation of similar song segments that better conform to the user's preferences.
Fig. 7 is a flowchart illustrating another embodiment of a song processing method according to an exemplary embodiment, which may be applied to a server and may include the following steps.
In step S31, segment marking information, which is sent by the client and obtained by marking a song segment of the target song in the second specified mode, is received, where the segment marking information includes a start marking time and an end marking time.
Wherein the second specified mode is a mode that allows a user to mark a song clip.
In step S32, the segment flag information and the identifier of the target song are stored in association with each other.
In step S33, song segment feature information of the first song segment data corresponding to the segment flag information is obtained, and similar song segment data matching the song segment feature information is searched for from other songs based on the song segment feature information.
Illustratively, the song segment characteristic information may include, but is not limited to, the song type, the language of the song, and melody, rhythm, dynamics, tempo, and harmony information at each sampling point. The song segment characteristic information of all the sampling points can form a song segment characteristic curve, such as a tune characteristic curve, a rhythm characteristic curve, or a dynamics characteristic curve.
In a possible implementation manner of this embodiment, taking the song segment characteristic information as a tune characteristic curve as an example, similar song segment data matching the song segment characteristic information may be searched for from other songs in the following manner:
determining a song type of the target song; selecting other songs belonging to the song type as candidate songs; respectively acquiring the melody characteristic curves of the candidate songs; matching the melody characteristic curves of the first song fragment data in the melody characteristic curves of the candidate songs to obtain a melody characteristic curve with similarity greater than a preset similarity threshold value with the melody characteristic curves of the first song fragment data as a similar melody characteristic curve; and taking the song fragment data corresponding to the similar tune characteristic curve as similar song fragment data.
In step S34, a similar song clip list is generated from the similar song clip data, and the similar song clip list is recommended to the client.
In this embodiment, after receiving the first segment marking information obtained by marking a song segment of the target song in the second specified mode sent by the client, the server may, when storing the first segment marking information, further determine the first song segment data according to the first segment marking information, then determine a similar song segment list according to the first song segment data, and recommend the similar song segment list to the client, thereby achieving recommendation of similar song segments that better conform to the user's preferences.
Fig. 8 is a flowchart illustrating another embodiment of a song processing method according to an exemplary embodiment, which may be applied to a client and may include the following steps.
In step S41, a target song identifier selected by the current account in the first designated mode is detected, a song request is generated based on the target song identifier and the account identifier of the account, and the song request is sent to the server.
In the present embodiment, the first specified mode is a mode that allows the user to request a song segment.
This embodiment can provide a paragraph-mode song processing manner: in the first specified mode, song processing takes the song segment, rather than the whole song, as its dimension, which reduces the unit of song processing and improves the accuracy of the song processing.
In an example, an option for the first specified mode may be provided in the song playing interface of the client, as shown in the playing interface schematic diagram of fig. 2; there the first specified mode is the paragraph mode, and when the user checks the paragraph mode option, the paragraph-mode processing manner is entered. For example, in the paragraph mode, if the user wants to play a certain song (i.e., the target song), the client may generate a paragraph-mode song request from the identification of the target song (e.g., the name of the target song), and send the song request to the server.
Illustratively, the song request may include an identification of the target song, a specified pattern identification indicating that the current pattern is a first specified pattern, an account identification corresponding to the client, and the like.
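A minimal sketch of assembling such a song request follows; the field names and the JSON encoding are assumptions for illustration only, not a format defined by this disclosure:

```python
import json

def build_song_request(song_id, account_id, mode="paragraph"):
    """Assemble the fields the text says a paragraph-mode song request
    carries: the target song's identification, a specified-mode
    identification, and the account identification of the client.
    (Field names here are illustrative, not from the patent.)"""
    return json.dumps({
        "song_id": song_id,
        "mode": mode,
        "account_id": account_id,
    })

request = json.loads(build_song_request("song-123", "user-42"))
```

The server would read `mode` to decide whether to return whole-song data or marked segment data.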
In step S42, song clip data of the target song returned by the server based on the song request is received.
In this step, after transmitting the song request, the client may wait for song clip data of a target song corresponding to the song request transmitted by the server.
In one embodiment, the song segment data is song data corresponding to segment marking information that is marked on the target song by the current account in advance, wherein the segment marking information includes a start marking time and an end marking time.
For example, as shown in fig. 2, while a song is playing, the user may click the "tag" button in fig. 2 to enter the second specified mode, which allows the user to mark song segments; in this mode the user may mark segments of the current song to record his or her favorite parts. When the user clicks a position on the song's playing progress bar, the time corresponding to that position (e.g., 2 minutes 30 seconds) may be recorded as the start marking time, and a mark (e.g., the diamond mark on the progress bar in fig. 2) is displayed at that position. When the user clicks another position on the progress bar, the time corresponding to that position (e.g., 3 minutes 10 seconds) may be recorded as the end marking time, and a mark is displayed there. The song segment between the two marks, i.e., the segment between 2 minutes 30 seconds and 3 minutes 10 seconds, is the user-marked song segment, and the two pieces of time information constitute the segment marking information.
In one example, the position of the progress bar corresponding to the song segment marked by the user can be highlighted with a designated color in the client.
In implementation, the user may also adjust the marked song clip by dragging the mark on the progress bar.
It should be noted that the user may also click the playing progress bar multiple times to mark favorite song segments, and the position of each click and the position of the previous click may constitute a song segment.
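One possible reading of the click-pairing behavior above can be sketched as follows, assuming successive progress-bar clicks are paired into (start, end) segments and an unpaired trailing click is ignored (this pairing rule is an assumption for illustration):

```python
def clicks_to_segments(click_times):
    """Pair successive progress-bar click times (in seconds) into
    (start_mark, end_mark) song segments, as described above.
    An unpaired trailing click yields no segment."""
    segments = []
    for i in range(0, len(click_times) - 1, 2):
        segments.append((click_times[i], click_times[i + 1]))
    return segments

# Four clicks mark two segments: 2:30-3:10 and 4:00-4:20
segments = clicks_to_segments([150, 190, 240, 260])
```

Each resulting (start, end) pair is exactly the segment marking information the client sends to the server.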
In a possible implementation manner, after it is detected that the user marks the target song to obtain the segment marking information, the current account identifier, the target song identifier and the segment marking information may also be sent to the server and stored by the server.
In other embodiments, the segment marking information need not be marked manually by the user; the client may actively monitor the user's dragging behavior on the song progress bar, and record the time point the user dragged to and the time point at which the user chose to play the next song as the segment marking information.
In another embodiment, the song segment data may also be song data corresponding to the song segment that is marked most by the server according to segment marking information that is marked by other users on the target song in advance.
In this embodiment, if the user has not previously performed segment tagging on the requested target song, the song segment data is segment data obtained after the other users perform segment tagging on the target song.
In step S43, the song clip data is played.
After the client receives the song segment data of the requested target song, the song segment data can be directly played without playing the whole target song from the beginning, so that the song playing time is saved.
In a possible implementation manner, this embodiment may further include the following steps:
and receiving a similar song fragment data list recommended by the server, and displaying the similar song fragment data list.
Illustratively, the similar song segment data list is a list composed of similar song segment data acquired from other songs based on the song segment characteristic information, and after the server acquires the song segment data corresponding to the segment marking information, the song segment characteristic information corresponding to the song segment data is acquired.
When the user clicks one item in the similar song segment data list, the corresponding similar song segment data can be directly played without playing the whole song, so that the time for the client to play the similar song is saved, and the accuracy of hitting the preference of the user is high.
Fig. 9 is a flowchart illustrating another embodiment of a song processing method according to an exemplary embodiment, which may be applied to a client and may include the following steps.
In step S51, when the second specified mode is triggered, song segment marking for the target song is detected, and segment marking information is obtained, where the segment marking information includes a start marking time and an end marking time.
In step S52, the identifier of the target song, the current account identifier, and the clip tag information are sent to a server for storage.
In step S53, a similar song clip data list recommended by the server is received and presented.
The similar song fragment data list is a list formed by similar song fragment data acquired from other songs on the basis of the song fragment characteristic information, and the song fragment characteristic information corresponding to the song fragment data is acquired after the song fragment data corresponding to the fragment marking information is acquired by the server.
It should be noted that the similar song segment data list may be a similar song segment that is automatically recommended to the client by the server; a similar song clip recommending entry may also be set in the play interface of the client, and when the user triggers the entry, the client may send a similar song clip recommending request to the server, and the server obtains a similar song clip data list according to the request and returns the same to the client.
In this embodiment, the user may mark a song segment of the target song by triggering the second specifying mode, and the client obtains segment marking information of the user mark, and sends the identifier of the target song, the current account identifier, and the segment marking information to the server for storage. Subsequently, the client can receive the similar song fragment data list returned by the server and display the similar song fragment data list, so that similar songs obtained by the user are similar song fragments instead of complete songs, the transmission bandwidth of the server is saved, the time for playing the similar songs by the client is saved, and the accuracy of hitting the preference of the user is high.
Fig. 10 is a block diagram illustrating a song processing apparatus according to an example embodiment. Referring to fig. 10, the apparatus is applied to a server, and the song processing apparatus includes: a mark judging module 1001, a first song segment data acquiring module 1002 and a second song segment data acquiring module 1003.
The marking judgment module 1001 is configured to respond to a song request, sent by a client, for a target song in a first specified mode, judge whether the target song is subjected to song segment marking by an account corresponding to the client, and if so, invoke a first song segment data acquisition module; the first specified mode is a mode allowing a user to request to acquire a song fragment;
A first song segment data obtaining module 1002, configured to obtain first song segment data marked by the account in the target song, and return the first song segment data to the client.
In a possible implementation manner of this embodiment, the song processing apparatus further includes:
the second song segment data obtaining module 1003 is configured to, if the target song is not subjected to song segment marking by the account corresponding to the client, determine whether the target song is subjected to song segment marking by another account, and if the target song is subjected to song segment marking by another account, obtain second song segment data of the target song marked by another account, and return the second song segment data to the client.
In a possible implementation manner of this embodiment, the song request includes an account identifier and an identifier of a target song; the flag determination module 1001 is specifically configured to:
acquiring a fragment mark list corresponding to the account identifier, wherein the fragment mark list comprises song identifiers of which song fragments are marked on the account;
if the identification of the target song exists in the song identifications recorded in the fragment marking list, judging that the target song is subjected to song fragment marking by an account corresponding to the client;
And if the song identification recorded in the segment marking list does not have the identification of the target song, judging that the target song is not subjected to song segment marking by the account corresponding to the client.
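The look-up logic above can be sketched as follows, assuming the segment mark lists are kept in a simple in-memory mapping from account identifier to marked song identifiers (the data layout is hypothetical; a server would use a database):

```python
def has_marked_segment(mark_lists, account_id, song_id):
    """Decide whether the account has marked segments of the target
    song: fetch the segment mark list for the account identifier and
    check whether the target song's identifier appears in it."""
    return song_id in mark_lists.get(account_id, [])

# Hypothetical stored mark lists keyed by account identifier
mark_lists = {"user-42": ["song-1", "song-9"]}
```

If the check succeeds, the server proceeds to return the account's own first song segment data; otherwise it falls back to segments marked by other accounts.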
In a possible implementation manner of this embodiment, the first song clip data acquiring module 1002 includes:
a first segment marking information obtaining module, configured to obtain first segment marking information that the account marks the target song, where the first segment marking information includes a start marking time and an end marking time, the first segment marking information is information obtained by marking the target song by the account in a second specified mode of the client, and the second specified mode is a mode that allows a user to mark song segments;
and the song data intercepting module is configured to intercept song data between the starting mark time and the ending mark time from the song data of the target song as first song fragment data.
In a possible implementation manner of this embodiment, the song processing apparatus further includes:
the song fragment characteristic information acquisition module is configured to respond to the first fragment mark information sent by the client to acquire song fragment characteristic information corresponding to the first song fragment data;
The similar song fragment data searching module is configured to search similar song fragment data matched with the song fragment characteristic information from other songs on the basis of the song fragment characteristic information;
and the similar song fragment list generation module is configured to generate a similar song fragment list from all similar song fragment data and recommend the similar song fragment list to the client.
In a possible implementation manner of this embodiment, the song segment characteristic information includes a tune characteristic curve;
the similar song segment data lookup module is specifically configured to:
determining a song type of the target song;
selecting other songs belonging to the song type as candidate songs;
respectively acquiring the melody characteristic curves of the candidate songs;
matching the melody characteristic curves of the first song fragment data in the melody characteristic curves of the candidate songs to obtain a melody characteristic curve with similarity greater than a preset similarity threshold value with the melody characteristic curves of the first song fragment data as a similar melody characteristic curve;
and taking the song fragment data corresponding to the similar tune characteristic curve as similar song fragment data.
In a possible implementation manner of this embodiment, the second song clip data obtaining module 1003 includes:
a segment marker record acquisition module configured to acquire a segment marker record of the target song, the segment marker record including one or more second segment marker information;
and the second song segment data determining module is configured to count the song segments of the target song marked most according to the segment marking records, and acquire the song data corresponding to the song segments marked most as second song segment data.
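One way to read "the song segment marked most" is a per-second tally over all accounts' second segment marking information; the sketch below implements that reading and assumes the peak-coverage region is contiguous (both the tallying granularity and the contiguity assumption are illustrative, not from this disclosure):

```python
from collections import Counter

def most_marked_segment(marks):
    """Tally, second by second, how many marks cover each moment of
    the song, then return the contiguous run covered by the highest
    number of marks. `marks` is a list of (start_s, end_s) pairs from
    different accounts; the result is an end-exclusive interval."""
    coverage = Counter()
    for start, end in marks:
        for second in range(start, end):
            coverage[second] += 1
    peak = max(coverage.values())
    peak_seconds = sorted(s for s, c in coverage.items() if c == peak)
    return peak_seconds[0], peak_seconds[-1] + 1

# Marks from three accounts; 150 s to 170 s is covered by all three
segment = most_marked_segment([(140, 170), (150, 180), (150, 175)])
```

The song data for the returned interval would then be intercepted and served as the second song segment data.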
Fig. 11 is a block diagram illustrating another song processing apparatus according to an example embodiment. Referring to fig. 11, the apparatus is applied to a server, and the song processing apparatus includes: the system comprises a segment marking information receiving module 1101, an association storage module 1102, a similar song segment data acquisition module 1103 and a similar song segment list generating module 1104.
The segment marking information receiving module 1101 is configured to receive segment marking information which is sent by a client and obtained after song segment marking is performed on a target song in a second specified mode, wherein the segment marking information includes a start marking time and an end marking time, and the second specified mode is a mode which allows a user to mark song segments;
An association storage module 1102 configured to store the segment flag information and the identifier of the target song in an association manner;
a similar song segment data obtaining module 1103 configured to obtain song segment feature information of the first song segment data corresponding to the segment marking information, and search similar song segment data matching the song segment feature information from other songs based on the song segment feature information;
a similar song segment list generating module 1104 configured to generate a similar song segment list from the similar song segment data and recommend the similar song segment list to the client.
In a possible implementation manner of this embodiment, the song segment characteristic information includes a tune characteristic curve;
the similar-song clip data acquisition module 1103 is specifically configured to:
determining a song type of the target song;
selecting other songs belonging to the song type as candidate songs;
respectively acquiring the melody characteristic curves of the candidate songs;
matching the melody characteristic curves of the first song fragment data in the melody characteristic curves of the candidate songs to obtain a melody characteristic curve with similarity greater than a preset similarity threshold value with the melody characteristic curves of the first song fragment data as a similar melody characteristic curve;
And taking the song fragment data corresponding to the similar tune characteristic curve as similar song fragment data.
Fig. 12 is a block diagram illustrating another song processing apparatus according to an example embodiment. Referring to fig. 12, the apparatus is applied to a client, and the song processing apparatus includes: a song request generation module 1201, a song clip data reception module 1202, and a song clip data play module 1203.
A song request generating module 1201, configured to detect a target song identifier selected by a current account in a first specified mode, generate a song request based on the target song identifier and the account identifier of the account, and send the song request to a server, where the first specified mode is a mode that allows a user to request for obtaining a song segment;
a song clip data receiving module 1202 configured to receive song clip data of the target song returned by the server based on the song request;
a song clip data playing module 1203 configured to play the song clip data.
In a possible implementation manner of this embodiment, the song segment data is song data corresponding to segment marking information that is marked on the target song by the user in advance, where the segment marking information includes a start marking time and an end marking time;
The song processing apparatus further includes:
a segment tagging information sending module configured to send the account identification, the target song identification, and the segment tagging information to a server.
In a possible implementation manner of this embodiment, the song processing apparatus further includes:
a similar song fragment data list receiving module configured to receive a similar song fragment data list recommended by the server, where the similar song fragment data list is a list composed of similar song fragment data acquired from other songs based on the song fragment feature information, and after the server acquires the song fragment data corresponding to the fragment marking information, the song fragment feature information corresponding to the song fragment data is acquired;
and the similar song fragment data list display module is configured to display the similar song fragment data list.
In a possible implementation manner of this embodiment, the song segment data is song data corresponding to the song segment that is marked most by the server according to segment marking information that is previously marked by other accounts for the target song.
Fig. 13 is a block diagram illustrating another song processing apparatus according to an example embodiment. Referring to fig. 13, the apparatus is applied to a client, and the song processing apparatus includes: a clip tag information obtaining module 1301, a clip tag information sending module 1302, a similar song clip data list receiving module 1303 and a similar song clip data list displaying module 1304.
A segment marking information obtaining module 1301 configured to detect song segment marking on a target song when a second specified mode is triggered, and obtain segment marking information, where the segment marking information includes a start marking time and an end marking time, and the second specified mode is a mode allowing a user to mark a song segment;
a fragment tag information sending module 1302, configured to send the identifier of the target song, the current account identifier, and the fragment tag information to a server for storage;
a similar song fragment data list receiving module 1303, configured to receive a similar song fragment data list sent by the server, where the similar song fragment data list is a list formed by similar song fragment data acquired from other songs based on the song fragment characteristic information, and after the server acquires the song fragment data corresponding to the fragment marking information, the server acquires the song fragment characteristic information corresponding to the song fragment data;
A similar song segment data list presentation module 1304 configured to present the similar song segment data list.
With regard to the apparatus and system in the above embodiments, the specific manner in which each module performs operations has been described in detail in the embodiments related to the method, and will not be described in detail here.
Fig. 14 is a block diagram illustrating an apparatus for performing the above-described method embodiments, according to an example embodiment.
In an exemplary embodiment, there is also provided a storage medium comprising instructions, such as a memory comprising instructions, executable by a processor of an apparatus to perform the method embodiments of fig. 1-9 described above. Alternatively, the storage medium may be a non-transitory computer readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the methods of the embodiments of fig. 1-9 described above.
The embodiments of the present disclosure also provide a storage medium, and when executed by a processor of the device, the instructions in the storage medium enable the device to perform the method in the embodiments of fig. 1 to 9.
The disclosed embodiments also provide a computer program product comprising executable program code, wherein the program code, when executed by the above-described apparatus, implements the method according to the embodiments of fig. 1-9.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (30)

1. A song processing method is applied to a server, and comprises the following steps:
responding to a song request aiming at a target song in a first specified mode sent by a client, and judging whether the target song has been subjected to song fragment marking by an account corresponding to the client, wherein the first specified mode is a mode allowing a user to request for acquiring the song fragment;
if yes, first song fragment data marked by the account in the target song are obtained, and the first song fragment data are returned to the client;
and recommending similar song segments to the client according to the first song segment data.
2. The song processing method of claim 1, further comprising:
if the target song is not subjected to song segment marking by the account corresponding to the client, judging whether the target song is subjected to song segment marking by other accounts;
and if the target song is subjected to song fragment marking by other accounts, acquiring second song fragment data of the target song marked by other accounts, and returning the second song fragment data to the client.
3. The song processing method of claim 1 or 2, wherein the song request includes an account identification and an identification of the target song; and the step of judging whether the target song has been song-fragment marked by the account corresponding to the client comprises:
acquiring a fragment mark list corresponding to the account identification, wherein the fragment mark list comprises song identifications of songs in which the account has marked song fragments;
if the identification of the target song exists in the song identifications recorded in the fragment marking list, judging that the target song is subjected to song fragment marking by an account corresponding to the client;
and if the song identification recorded in the segment marking list does not have the identification of the target song, judging that the target song is not subjected to song segment marking by the account corresponding to the client.
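The lookup described in claim 3 can be sketched as a per-account list of marked song identifications. This is an illustrative sketch only; the function and variable names (`has_segment_mark`, `mark_lists`) and the in-memory data shape are assumptions, not part of the patent:

```python
# Hypothetical sketch of the claim-3 check: the server keeps, per account
# identification, the list of song identifications in which that account
# has marked song fragments. Names and data shapes are assumptions.
def has_segment_mark(mark_lists: dict, account_id: str, song_id: str) -> bool:
    """Return True if the account has marked a song fragment of `song_id`."""
    fragment_mark_list = mark_lists.get(account_id, [])
    return song_id in fragment_mark_list

mark_lists = {"account-1": ["song-9", "song-42"]}
```

In a real deployment the list would live in a persistent store keyed by the account identification; a dictionary stands in for it here.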
4. The song processing method according to claim 1 or 2, wherein the step of acquiring first song clip data marked by the account in the target song includes:
acquiring first segment marking information of the account for marking the target song, wherein the first segment marking information comprises starting marking time and ending marking time, the first segment marking information is information obtained by marking the target song by the account in a second designated mode of the client, and the second designated mode is a mode allowing a user to mark song segments;
and intercepting song data between the starting mark time and the ending mark time from the song data of the target song as first song fragment data.
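The interception step of claim 4 amounts to slicing the song's audio between the two marking times. A minimal sketch, assuming uncompressed PCM samples and a known sample rate (both assumptions; the claim does not fix a representation, and compressed formats would need decoding first):

```python
# Hypothetical interception per claim 4: slice raw PCM samples between the
# starting mark time and the ending mark time. The sample-list
# representation and parameter names are assumptions.
def clip_segment(samples: list, sample_rate: int,
                 start_s: float, end_s: float) -> list:
    start_idx = int(start_s * sample_rate)  # starting mark time -> sample index
    end_idx = int(end_s * sample_rate)      # ending mark time -> sample index
    return samples[start_idx:end_idx]
```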
5. The song processing method of claim 4, further comprising:
responding to the first segment marking information sent by the client, and acquiring song segment characteristic information corresponding to the first song segment data;
based on the song segment characteristic information, similar song segment data matched with the song segment characteristic information are searched from other songs;
and generating a similar song fragment list from all similar song fragment data, and recommending the similar song fragment list to the client.
6. The song processing method of claim 5, wherein the song segment characteristic information includes a melody characteristic curve;
the step of searching similar song segment data matched with the song segment characteristic information from other songs on the basis of the song segment characteristic information comprises the following steps:
determining a song type of the target song;
selecting other songs belonging to the song type as candidate songs;
respectively acquiring the melody characteristic curves of the candidate songs;
matching the melody characteristic curve of the first song fragment data against the melody characteristic curves of the candidate songs, and taking a melody characteristic curve whose similarity with the melody characteristic curve of the first song fragment data is greater than a preset similarity threshold as a similar melody characteristic curve;
and taking the song fragment data corresponding to the similar melody characteristic curve as the similar song fragment data.
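The matching step of claim 6 compares the marked segment's melody curve against each candidate song's curve and keeps those above a similarity threshold. The patent does not specify the similarity measure, so Pearson correlation is used here purely as a stand-in, and all names are assumptions:

```python
# Hypothetical sketch of claim-6 matching. Pearson correlation stands in
# for the unspecified similarity measure; curves are assumed to be
# equal-length numeric sequences.
from statistics import mean

def curve_similarity(a, b):
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) *
           sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def find_similar(target_curve, candidates, threshold=0.9):
    """Return candidate song ids whose curve similarity exceeds the threshold."""
    return [song_id for song_id, curve in candidates.items()
            if curve_similarity(target_curve, curve) > threshold]
```

Restricting `candidates` to songs of the same type, as the claim requires, would happen before this call.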
7. The song processing method of claim 2, wherein the step of acquiring the second song fragment data in which the target song is marked by other accounts comprises:
acquiring segment marking records of the target song, wherein the segment marking records comprise one or more pieces of second segment marking information;
determining, according to the segment marking records, the most frequently marked song segment of the target song;
and acquiring song data corresponding to the most frequently marked song segment as the second song fragment data.
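The statistics step of claim 7 can be illustrated by bucketing each (start, end) mark into fixed time windows and picking the window covered by the most marks. The window size and all names are assumptions; the patent does not state how overlapping marks are aggregated:

```python
# Hypothetical counting of the most-marked song segment per claim 7.
# Each mark is a (start_s, end_s) pair; the fixed-window aggregation is
# one possible interpretation, not the patent's stated method.
from collections import Counter

def most_marked_window(marks, window_s=5):
    """Return the (start, end) of the window covered by the most marks."""
    counts = Counter()
    for start, end in marks:
        first = int(start // window_s)
        last = int(end // window_s)
        for w in range(first, last + 1):
            counts[w] += 1
    w, _ = counts.most_common(1)[0]
    return (w * window_s, (w + 1) * window_s)
```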
8. A song processing method is applied to a server, and comprises the following steps:
receiving segment marking information which is sent by a client and obtained after song segment marking is carried out on a target song in a second designated mode, wherein the segment marking information comprises starting marking time and ending marking time, and the second designated mode is a mode which allows a user to mark song segments;
storing the fragment marking information in association with the identification of the target song;
acquiring song segment characteristic information of first song segment data corresponding to the segment marking information, and searching similar song segment data matched with the song segment characteristic information from other songs on the basis of the song segment characteristic information;
and generating a similar song fragment list from the similar song fragment data, and recommending the similar song fragment list to the client.
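The associated-storage step of claim 8 can be sketched by keying mark records on the song identification so later lookups by song are direct. A plain dictionary stands in for whatever persistent store the server actually uses; all names are assumptions:

```python
# Hypothetical associated storage per claim 8: records are grouped under
# the target song's identification. A dict stands in for the server's
# persistent store.
def store_mark(store: dict, song_id: str, account_id: str,
               start_s: float, end_s: float) -> None:
    store.setdefault(song_id, []).append(
        {"account": account_id, "start": start_s, "end": end_s})
```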
9. The method of claim 8, wherein the song segment characteristic information includes a melody characteristic curve;
the step of searching similar song segment data matched with the song segment characteristic information from other songs on the basis of the song segment characteristic information comprises the following steps:
determining a song type of the target song;
selecting other songs belonging to the song type as candidate songs;
respectively acquiring the melody characteristic curves of the candidate songs;
matching the melody characteristic curve of the first song fragment data against the melody characteristic curves of the candidate songs, and taking a melody characteristic curve whose similarity with the melody characteristic curve of the first song fragment data is greater than a preset similarity threshold as a similar melody characteristic curve;
and taking the song fragment data corresponding to the similar melody characteristic curve as the similar song fragment data.
10. A song processing method is applied to a client side, and comprises the following steps:
detecting a target song identifier selected by a current account in a first designated mode, generating a song request based on the target song identifier and the account identifier of the account, and sending the song request to a server, wherein the first designated mode is a mode allowing a user to request for acquiring song fragments;
receiving song fragment data of the target song returned by the server based on the song request;
playing the song clip data;
and receiving similar song fragments recommended by the server according to the song fragment data.
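The client step of claim 10 builds a song request from the selected song identification and the current account identification. A minimal sketch, assuming a JSON payload; the field names are assumptions, since the patent does not fix a wire format:

```python
# Hypothetical claim-10 request construction. The JSON field names and the
# "first" mode value are assumptions, not the patent's wire format.
import json

def build_song_request(account_id: str, song_id: str,
                       mode: str = "first") -> str:
    """Serialize a song request for the server."""
    return json.dumps({"account_id": account_id,
                       "song_id": song_id,
                       "mode": mode})
```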
11. The song processing method of claim 10, wherein the song segment data is song data corresponding to segment marking information previously marked by the account on the target song, and the segment marking information comprises a starting marking time and an ending marking time;
before the detecting a target song identifier selected by the current account in the first designated mode, generating a song request based on the target song identifier and the account identifier of the account, and sending the song request to the server, the song processing method further includes:
And sending the account identification, the target song identification and the fragment mark information to a server.
12. The song processing method of claim 11, wherein after the sending the account identification, the target song identification, and the segment tagging information to a server, the song processing method further comprises:
receiving a similar song fragment data list recommended by the server, wherein the similar song fragment data list is formed from similar song fragment data acquired from other songs on the basis of song fragment characteristic information, the song fragment characteristic information corresponding to the song fragment data being acquired by the server after the server acquires the song fragment data corresponding to the fragment marking information;
and displaying the similar song fragment data list.
13. The song processing method of claim 10, wherein the song segment data is song data corresponding to the most frequently marked song segment of the target song, as counted by the server according to segment marking information previously applied to the target song by other accounts.
14. A song processing method is applied to a client side, and comprises the following steps:
when a second designated mode is triggered, detecting song segment marking of a target song to obtain segment marking information, wherein the segment marking information comprises a starting marking time and an ending marking time, and the second designated mode is a mode allowing a user to mark song segments;
sending the identification of the target song, the current account identification and the fragment mark information to a server for storage;
receiving a similar song fragment data list sent by the server, wherein the similar song fragment data list is formed from similar song fragment data acquired from other songs on the basis of song fragment characteristic information, the song fragment characteristic information corresponding to the song fragment data being acquired by the server after the server acquires the song fragment data corresponding to the fragment marking information;
and displaying the similar song fragment data list.
15. A song processing apparatus, wherein the apparatus is applied to a server, the song processing apparatus comprising:
a mark judging module configured to respond to a song request, sent by a client, for a target song in a first specified mode, judge whether the target song has been song-fragment marked by an account corresponding to the client, and if so, invoke a first song fragment data acquisition module, wherein the first specified mode is a mode allowing a user to request acquisition of a song fragment;
and the first song fragment data acquisition module, configured to acquire the first song fragment data marked by the account in the target song, return the first song fragment data to the client, and recommend similar song fragments to the client according to the first song fragment data.
16. The song processing apparatus of claim 15, further comprising:
and the second song segment data acquisition module is configured to judge whether the target song is subjected to song segment marking by other accounts if the target song is not subjected to song segment marking by the account corresponding to the client, acquire second song segment data of the target song marked by other accounts if the target song is subjected to song segment marking by other accounts, and return the second song segment data to the client.
17. The song processing apparatus of claim 15 or 16, wherein the song request comprises an account identification and an identification of a target song; the mark determination module is specifically configured to:
acquiring a fragment mark list corresponding to the account identifier, wherein the fragment mark list comprises song identifiers of which song fragments are marked on the account;
if the identification of the target song exists in the song identifications recorded in the fragment marking list, judging that the target song is subjected to song fragment marking by an account corresponding to the client;
and if the song identification recorded in the segment marking list does not have the identification of the target song, judging that the target song is not subjected to song segment marking by the account corresponding to the client.
18. The song processing apparatus according to claim 15 or 16, wherein the first song clip data acquisition module includes:
a first segment marking information obtaining module, configured to obtain first segment marking information that the account marks the target song, where the first segment marking information includes a start marking time and an end marking time, the first segment marking information is information obtained by marking the target song by the account in a second specified mode of the client, and the second specified mode is a mode that allows a user to mark song segments;
and the song data intercepting module is configured to intercept song data between the starting mark time and the ending mark time from the song data of the target song as first song fragment data.
19. The song processing apparatus of claim 18, further comprising:
the song fragment characteristic information acquisition module is configured to respond to the first fragment mark information sent by the client to acquire song fragment characteristic information corresponding to the first song fragment data;
the similar song fragment data searching module is configured to search similar song fragment data matched with the song fragment characteristic information from other songs on the basis of the song fragment characteristic information;
and the similar song fragment list generation module is configured to generate a similar song fragment list from all similar song fragment data and recommend the similar song fragment list to the client.
20. The song processing apparatus of claim 19, wherein the song segment characteristic information includes a melody characteristic curve;
the similar song segment data lookup module is specifically configured to:
determining a song type of the target song;
selecting other songs belonging to the song type as candidate songs;
respectively acquiring the melody characteristic curves of the candidate songs;
matching the melody characteristic curve of the first song fragment data against the melody characteristic curves of the candidate songs, and taking a melody characteristic curve whose similarity with the melody characteristic curve of the first song fragment data is greater than a preset similarity threshold as a similar melody characteristic curve;
and taking the song fragment data corresponding to the similar melody characteristic curve as the similar song fragment data.
21. The song processing apparatus according to claim 16, wherein the second song clip data acquisition module includes:
a segment marker record obtaining module configured to obtain a segment marker record of the target song, the segment marker record including one or more second segment marker information;
and the second song segment data determining module is configured to count the song segments of the target song marked most according to the segment marking records, and acquire the song data corresponding to the song segments marked most as second song segment data.
22. An apparatus for song processing, wherein the apparatus is applied in a server, the apparatus for song processing comprises:
a fragment marking information receiving module configured to receive fragment marking information which is sent by a client and obtained after song fragment marking is performed on a target song in a second designated mode, wherein the fragment marking information comprises a starting marking time and an ending marking time, and the second designated mode is a mode allowing a user to mark song fragments;
an association storage module configured to store the fragment marking information in association with the identification of the target song;
a similar song segment data acquisition module configured to acquire song segment feature information of first song segment data corresponding to the segment marking information, and search similar song segment data matched with the song segment feature information from other songs based on the song segment feature information;
and the similar song fragment list generation module is configured to generate a similar song fragment list from the similar song fragment data and recommend the similar song fragment list to the client.
23. The apparatus of claim 22, wherein the song segment characteristic information comprises a melody characteristic curve;
the similar song segment data acquisition module is specifically configured to:
determining a song type of the target song;
selecting other songs belonging to the song type as candidate songs;
respectively acquiring the melody characteristic curves of the candidate songs;
matching the melody characteristic curve of the first song fragment data against the melody characteristic curves of the candidate songs, and taking a melody characteristic curve whose similarity with the melody characteristic curve of the first song fragment data is greater than a preset similarity threshold as a similar melody characteristic curve;
and taking the song fragment data corresponding to the similar melody characteristic curve as the similar song fragment data.
24. A song processing apparatus, wherein the apparatus is applied to a client, the song processing apparatus comprising:
the song request generating module is configured to detect a target song identification selected by a current account in a first designated mode, generate a song request based on the target song identification and the account identification of the account, and send the song request to a server, wherein the first designated mode is a mode allowing a user to request for obtaining song fragments;
a song fragment data receiving module configured to receive song fragment data of the target song returned by the server based on the song request; receiving similar song fragments recommended by the server according to the song fragment data;
a song clip data playing module configured to play the song clip data.
25. The song processing apparatus of claim 24, wherein the song segment data is song data corresponding to segment marking information that is previously marked on the target song by the account, and the segment marking information comprises a start marking time and an end marking time;
The song processing apparatus further includes:
a segment tagging information sending module configured to send the account identification, the target song identification, and the segment tagging information to a server.
26. The song processing apparatus of claim 25, further comprising:
a similar song fragment data list receiving module configured to receive a similar song fragment data list recommended by the server, wherein the similar song fragment data list is formed from similar song fragment data acquired from other songs on the basis of song fragment characteristic information, the song fragment characteristic information corresponding to the song fragment data being acquired by the server after the server acquires the song fragment data corresponding to the fragment marking information;
and the similar song fragment data list display module is configured to display the similar song fragment data list.
27. The song processing apparatus of claim 24, wherein the song fragment data is song data corresponding to the most frequently marked song segment of the target song, as counted by the server according to fragment marking information previously applied to the target song by other accounts.
28. A song processing apparatus, wherein the apparatus is applied to a client, the song processing apparatus comprising:
the song segment marking information acquisition module is configured to detect song segment marking of a target song when a second specified mode is triggered, and acquire segment marking information, wherein the segment marking information comprises a starting marking time and an ending marking time, and the second specified mode is a mode allowing a user to mark song segments;
the fragment marking information sending module is configured to send the identification of the target song, the current account identification and the fragment marking information to a server for storage;
a similar song fragment data list receiving module configured to receive a similar song fragment data list sent by the server, wherein the similar song fragment data list is formed from similar song fragment data acquired from other songs on the basis of song fragment characteristic information, the song fragment characteristic information corresponding to the song fragment data being acquired by the server after the server acquires the song fragment data corresponding to the fragment marking information;
and the similar song fragment data list display module is configured to display the similar song fragment data list.
29. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the method of any one of claims 1 to 7, claims 8 to 9, claims 10 to 13, or claim 14.
30. A storage medium, wherein instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method of any one of claims 1 to 7, claims 8 to 9, claims 10 to 13, or claim 14.
CN201910780086.7A 2019-08-22 2019-08-22 Song processing method and device Active CN110532420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910780086.7A CN110532420B (en) 2019-08-22 2019-08-22 Song processing method and device


Publications (2)

Publication Number Publication Date
CN110532420A CN110532420A (en) 2019-12-03
CN110532420B true CN110532420B (en) 2022-08-12

Family

ID=68662638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910780086.7A Active CN110532420B (en) 2019-08-22 2019-08-22 Song processing method and device

Country Status (1)

Country Link
CN (1) CN110532420B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117390217A (en) * 2023-12-13 2024-01-12 杭州网易云音乐科技有限公司 Method, device, equipment and medium for determining song segments

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1746969A (en) * 2004-09-09 2006-03-15 英保达股份有限公司 Method for selecting song and audio-frequency player
CN1987846A (en) * 2005-12-19 2007-06-27 英保达股份有限公司 Method and its device for personalized audition digital music data
CN101131693A (en) * 2006-08-25 2008-02-27 佛山市顺德区顺达电脑厂有限公司 Music playing system and method thereof
CN101022468A (en) * 2007-03-05 2007-08-22 华为技术有限公司 Mobile terminal cue sound playing method and device
CN104750839B (en) * 2015-04-03 2019-02-15 魅族科技(中国)有限公司 A kind of data recommendation method, terminal and server
CN105426085B (en) * 2015-12-10 2018-01-23 广东欧珀移动通信有限公司 A kind of music file intercept method and user terminal
CN108228882B (en) * 2018-01-26 2019-12-17 维沃移动通信有限公司 recommendation method and terminal device for song audition fragments

Also Published As

Publication number Publication date
CN110532420A (en) 2019-12-03


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant