CN110830595A - Personalized music pushing method and system - Google Patents

Personalized music pushing method and system

Info

Publication number
CN110830595A
Authority
CN
China
Prior art keywords
information
music
user
audio
matched
Prior art date
Legal status
Granted
Application number
CN201911300407.5A
Other languages
Chinese (zh)
Other versions
CN110830595B (en)
Inventor
詹华洋
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN201911300407.5A
Publication of CN110830595A
Application granted
Publication of CN110830595B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/50: Network services
    • H04L67/55: Push-based network services
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90: Details of database functions independent of the retrieved data types
    • G06F16/95: Retrieval from the web
    • G06F16/953: Querying, e.g. by the use of web search engines
    • G06F16/9535: Search customisation based on user profiles and personalisation

Abstract

The invention discloses a personalized music pushing method and system. By collecting a user's behaviors while listening to music, the method and system analyze how much the user likes a given type of music and then push music that better matches the user's preferences.

Description

Personalized music pushing method and system
Technical Field
The invention relates to the technical field of internet data processing, in particular to a personalized music pushing method and system.
Background
With the continued popularization and development of mobile devices, users can listen to music anytime and anywhere, and recommending the most suitable music to each user has become a difficult problem. Existing recommendation methods mainly include content-based music recommendation, music-correlation recommendation, knowledge-based recommendation, and collaborative filtering recommendation.
These existing methods are not very accurate, and many of them, such as music-correlation recommendation, cannot produce personalized recommendations or take multiple factors into account when recommending.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a personalized music pushing method and system that analyze how much a user likes a given type of music by collecting the user's behaviors while listening to music, so as to push music that better matches the user's preferences.
To achieve this purpose, the invention adopts the following technical solution:
in a first aspect, the present invention provides a personalized music pushing method, including:
acquiring personalized tags of a user, wherein the personalized tags comprise an active tag and an inactive tag;
acquiring matched music according to the active tags in the personalized tags, and pushing the matched music to the user for playing;
when a song cutting instruction of a user is acquired;
acquiring the total time length of the currently played music and the played time length;
when the ratio of the played time length to the total time length is smaller than a preset value;
acquiring a personalized tag matched with the currently played music from the personalized tags of the user, and recording the personalized tag as an inactive tag;
and acquiring matched music according to the active tag, and pushing the matched music to the user for playing.
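The disclosure does not prescribe any concrete data structure for these tags. The following is a minimal illustrative sketch in Python; the class name PersonalityTags and its demote/promote methods are assumptions introduced only for illustration.

```python
# Illustrative sketch only: the invention does not prescribe a concrete data structure,
# so the class name and the demote/promote methods below are assumptions.
from dataclasses import dataclass, field

@dataclass
class PersonalityTags:
    active: set = field(default_factory=set)     # tags currently used to match music
    inactive: set = field(default_factory=set)   # tags demoted because of the user's behavior

    def demote(self, tag):
        """Record a tag as inactive, e.g. after a song matched by it is cut early."""
        self.active.discard(tag)
        self.inactive.add(tag)

    def promote(self, tag):
        """Record a tag as active, e.g. after the user sings along with a matched song."""
        self.inactive.discard(tag)
        self.active.add(tag)

# Example: a profile whose "rock" tag is demoted after an early song cut.
tags = PersonalityTags(active={"rock", "pop"})
tags.demote("rock")
print(tags.active, tags.inactive)                 # {'pop'} {'rock'}
```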
In an embodiment of the present invention, the personalized music pushing method further includes:
when the ratio of the played time length to the total time length is not less than a preset value;
acquiring lyric information of currently played music, wherein the lyric information comprises lyric words and sentences and time nodes matched with the lyric words and sentences;
acquiring matched time nodes from the lyric information according to the played time length, and recording the matched time nodes as matched nodes;
obtaining lyric word and sentence information of a time node behind the matched node;
when the acquired lyric word information is not empty;
acquiring a personalized tag matched with the currently played music from the personalized tags of the user, and recording the personalized tag as an inactive tag;
and acquiring matched music according to the active tag, and pushing the matched music to the user for playing.
In an embodiment of the present invention, the personalized music pushing method further includes:
when a random play instruction of a user is acquired;
randomly acquiring music from a preset music library, and pushing the music to the user for playing;
acquiring sound input information of user equipment in real time;
in a preset period, generating audio comparison information according to the acquired sound input information;
acquiring preset template information according to the currently played music and the current playing progress;
comparing the audio comparison information with the template information;
when the audio comparison information is consistent with the template information;
acquiring a preset tag of the currently played music;
and adding the preset tag as an active tag of the user.
In an embodiment of the present invention, the template information is lyric information of music;
comparing the audio comparison information with the template information, specifically including:
performing semantic recognition on the audio comparison information, and generating audio comparison text information of the user;
comparing the audio comparison text information with the lyric information;
when the audio comparison text information is consistent with the lyric information;
and judging that the audio comparison information is consistent with the template information.
In an embodiment of the present invention, the template information is template spectrum information of music;
comparing the audio comparison information with the template information, specifically including:
carrying out a Fourier transform on the audio comparison information, and generating audio comparison spectrum information of the user;
comparing the audio comparison spectrum information with the template spectrum information;
when the audio comparison spectrum information is consistent with the template spectrum information;
and judging that the audio comparison information is consistent with the template information.
In a second aspect, the invention further provides a personalized music pushing system, which comprises a personalized tag management module, a music playing module, an instruction acquisition module and a judgment module;
the personalized tag management module is used for acquiring a personalized tag of a user, wherein the personalized tag comprises an active tag and an inactive tag;
the music playing module is used for acquiring matched music according to the active tags in the personalized tags and pushing the matched music to the user for playing;
the instruction acquisition module is used for acquiring a control instruction input by a user;
when the instruction acquisition module acquires a song cutting instruction of a user, the music playing module is further used for acquiring matched music according to the active tag and pushing the matched music to the user for playing;
the judging module is used for acquiring the total time length of the currently played music and the played time length;
the judging module is further configured to calculate a ratio of the played duration to the total duration;
when the judging module judges that the ratio of the played time length to the total time length is less than a preset value;
the personalized tag management module is further configured to obtain a personalized tag matched with the currently played music from the personalized tags of the user, and record the personalized tag as an inactive tag.
In an embodiment of the present invention, when the judging module judges that the ratio of the played duration to the total duration is not less than a preset value, the judging module is further configured to obtain lyric information of the currently played music, where the lyric information includes lyric words and sentences and time nodes matched with the lyric words and sentences;
the judging module is also used for acquiring matched time nodes from the lyric information according to the played time length and recording the matched time nodes as matched nodes;
the judging module is also used for acquiring the lyric word and sentence information of the time node behind the matched node;
when the lyric word information acquired by the judging module is not empty;
the personalized tag management module is further configured to obtain a personalized tag matched with the currently played music from the personalized tags of the user, and record the personalized tag as an inactive tag.
In an embodiment of the present invention, the personalized music pushing system further includes an audio input module;
when the instruction acquisition module acquires a random play instruction of a user;
the music playing module is also used for randomly acquiring music from a preset music library and pushing the music to the user for playing;
the audio input module is used for acquiring sound input information of the user equipment in real time;
the audio input module is further used for generating audio comparison information according to the acquired sound input information in a preset period;
the judging module is also used for acquiring preset template information according to the currently played music and the current playing progress;
the judging module is further configured to compare the audio comparison information with the template information;
when the judgment module judges that the audio comparison information is consistent with the template information;
the personalized tag management module is also used for acquiring a preset tag of the currently played music;
the personalized tag management module is also used for adding the preset tag as an active tag of the user.
In an embodiment of the present invention, the template information is lyric information of music;
the judgment module is also used for carrying out semantic recognition on the audio comparison information and generating audio comparison text information of the user;
the judging module is also used for comparing the audio comparison text information with the lyric information;
and when the audio comparison text information is consistent with the lyric information, the judgment module judges that the audio comparison information is consistent with the template information.
In an embodiment of the present invention, the template information is template spectrum information of music;
the judgment module is further used for carrying out a Fourier transform on the audio comparison information and generating audio comparison spectrum information of the user;
the judging module is further configured to compare the audio comparison spectrum information with the template spectrum information;
and when the audio comparison spectrum information is consistent with the template spectrum information, the judgment module judges that the audio comparison information is consistent with the template information.
Compared with the prior art, the invention has the beneficial effects that:
according to the personalized music pushing method and system, how much a user likes a given type of music can be analyzed by collecting the user's behaviors while listening to music, so that music that better matches the user's preferences is pushed.
Drawings
Fig. 1 is a schematic flow chart illustrating a personalized music push method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a personalized music push system according to an embodiment of the present invention.
Detailed Description
The present invention will be further described with reference to the accompanying drawings and specific embodiments, wherein the exemplary embodiments and descriptions are only used for explaining the present invention, but not for limiting the present invention.
In a first aspect, as shown in fig. 1, the present invention provides a personalized music pushing method, including:
S100, obtaining personalized tags of a user, wherein the personalized tags comprise an active tag and an inactive tag;
S200, acquiring matched music according to the active tags in the personalized tags, and pushing the matched music to the user for playing;
S300, when a song-cutting instruction of the user is acquired;
S400, acquiring the total time length and the played time length of the currently played music;
S500, when the ratio of the played time length to the total time length is less than a preset value;
S600, acquiring the personalized tag matched with the currently played music from the personalized tags of the user, and recording the personalized tag as an inactive tag;
and S700, acquiring matched music according to the active tags, and pushing the matched music to the user for playing.
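As a rough sketch of steps S300 to S700, the following Python fragment shows how a player might handle a song-cutting instruction; the helper callbacks and the 0.8 threshold are assumptions, since the method only requires "a preset value".

```python
# Sketch of the song-cut handling in S300-S700; every helper below is hypothetical.
PRESET_RATIO = 0.8   # the "preset value"; the method does not fix a concrete number

def on_song_cut(profile, now_playing, played_seconds, total_seconds,
                get_matched_tag, fetch_music_by_tags):
    """Handle a song-cutting (skip) instruction.

    profile: object exposing an active tag set and a demote(tag) method
    get_matched_tag(music): returns the user's tag that matched this music
    fetch_music_by_tags(tags): returns the next matched music to push
    """
    if total_seconds > 0 and played_seconds / total_seconds < PRESET_RATIO:
        # Cut early: interest in this kind of music may have dropped (S500-S600).
        tag = get_matched_tag(now_playing)
        if tag is not None:
            profile.demote(tag)                   # record it as an inactive tag
    # In either case, push the next music matched by the active tags (S700).
    return fetch_music_by_tags(profile.active)
```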
In an embodiment of the present invention, the personalized music pushing method further includes:
when the ratio of the played time length to the total time length is not less than a preset value;
acquiring lyric information of currently played music, wherein the lyric information comprises lyric words and sentences and time nodes matched with the lyric words and sentences;
acquiring matched time nodes from the lyric information according to the played time length, and recording the matched time nodes as matched nodes;
obtaining lyric word and sentence information of a time node behind the matched node;
when the acquired lyric word information is not empty;
acquiring a personalized tag matched with the currently played music from the personalized tags of the user, and recording the personalized tag as an inactive tag;
and acquiring matched music according to the active tag, and pushing the matched music to the user for playing.
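A minimal sketch of the lyric-node check described above, assuming the lyric information is available as (time node in seconds, sentence) pairs; the function name and data layout are illustrative, not part of the disclosure.

```python
# Sketch of the lyric-node check; the lyric format (time node in seconds, sentence) is assumed.
def has_unplayed_lyrics(lyrics, played_seconds):
    """lyrics: list of (time_node_seconds, sentence) sorted by time node.

    Returns True when lyric sentences remain after the node matching the
    played duration, i.e. the main content has not finished playing.
    """
    # Find the matched node: the last lyric node at or before the played duration.
    matched_index = -1
    for i, (node, _sentence) in enumerate(lyrics):
        if node <= played_seconds:
            matched_index = i
        else:
            break
    # Any lyric sentence after the matched node means the song is not effectively over.
    remaining = [sentence for _node, sentence in lyrics[matched_index + 1:]]
    return len(remaining) > 0

# Example: cut at 4:00 of a 4:30 song whose last chorus starts at 4:05.
lyrics = [(10, "verse 1"), (70, "chorus"), (150, "verse 2"), (245, "last chorus")]
print(has_unplayed_lyrics(lyrics, played_seconds=240))  # True -> demote the matched tag
```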
In an embodiment of the present invention, the personalized music pushing method further includes:
when a random play instruction of a user is acquired;
randomly acquiring music from a preset music library, and pushing the music to the user for playing;
acquiring sound input information of user equipment in real time;
in a preset period, generating audio comparison information according to the acquired sound input information;
acquiring preset template information according to the currently played music and the current playing progress;
comparing the audio comparison information with the template information;
when the audio comparison information is consistent with the template information;
acquiring a preset tag of the currently played music;
and adding the preset tag as an active tag of the user.
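The random-play branch above could be sketched as follows; record_audio, get_template, compare and the music attributes are hypothetical placeholders for whatever microphone API and music library an implementation uses.

```python
# Sketch of the random-play branch; record_audio, get_template and compare are
# hypothetical helpers standing in for a microphone API and a matching routine.
import random

def random_play_loop(profile, music_library, record_audio, get_template, compare,
                     period_seconds=30):
    """Push a random music item and promote its preset tags when the user sings or hums along."""
    music = random.choice(music_library)          # randomly acquire music and push it for playing
    elapsed = 0
    while elapsed < music.total_seconds:          # music.total_seconds is an assumed attribute
        clip = record_audio(period_seconds)       # sound input collected over the preset period
        elapsed += period_seconds
        template = get_template(music, elapsed)   # template for the current playing progress
        if compare(clip, template):               # the comparison audio matches the template
            for tag in music.preset_tags:         # music.preset_tags is an assumed attribute
                profile.promote(tag)              # add the music's preset tag as an active tag
```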
In an embodiment of the present invention, the template information is lyric information of music;
comparing the audio comparison information with the template information, specifically including:
performing semantic recognition on the audio comparison information, and generating audio comparison text information of the user;
comparing the audio comparison text information with the lyric information;
when the audio comparison text information is consistent with the lyric information;
and judging that the audio comparison information is consistent with the template information.
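A sketch of this lyric-based comparison, with the speech recognizer abstracted behind a recognize callback; the word-overlap measure and the 0.8 default threshold are illustrative assumptions (an 80% figure appears later in the description as an example).

```python
# Sketch of the lyric-based template comparison; `recognize` stands for any
# speech-recognition engine and the 0.8 threshold mirrors the 80% example below.
def matches_lyrics(audio_clip, lyric_text, recognize, threshold=0.8):
    """Return True when the recognized words largely repeat the expected lyric words."""
    recognized_words = recognize(audio_clip).split()      # audio comparison text information
    lyric_words = lyric_text.split()
    if not lyric_words:
        return False
    overlap = sum(1 for w in recognized_words if w in lyric_words)
    return overlap / max(len(recognized_words), 1) >= threshold
```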
In an embodiment of the present invention, the template information is template spectrum information of music;
comparing the audio comparison information with the template information, specifically including:
carrying out a Fourier transform on the audio comparison information, and generating audio comparison spectrum information of the user;
comparing the audio comparison spectrum information with the template spectrum information;
when the audio comparison spectrum information is consistent with the template spectrum information;
and judging that the audio comparison information is consistent with the template information.
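A sketch of the spectrum-based comparison; cosine similarity between FFT magnitude spectra is only one possible way to decide that the two spectra are "consistent", and the 0.8 threshold is an assumption.

```python
# Sketch of the spectrum-based comparison; cosine similarity over FFT magnitudes
# is one of many possible "consistency" measures, chosen here for illustration.
import numpy as np

def matches_spectrum(audio_samples, template_samples, threshold=0.8):
    """Compare the magnitude spectra of the captured audio and the template segment."""
    n = min(len(audio_samples), len(template_samples))
    a = np.abs(np.fft.rfft(audio_samples[:n]))      # audio comparison spectrum information
    b = np.abs(np.fft.rfft(template_samples[:n]))   # template spectrum information
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:
        return False
    similarity = float(np.dot(a, b) / denom)
    return similarity >= threshold
```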
In a specific application scenario of the present invention, the method provided by the first aspect of the present invention is implemented by an intelligent terminal of a user;
specifically, after a user logs in to a personal account on an intelligent terminal, such as the user's smart phone, the intelligent terminal acquires the user's personalized tags according to the personal account information input by the user. The personalized tags can be stored in a memory of the intelligent terminal or on a cloud server; in the latter case, the intelligent terminal sends an acquisition request to the cloud server to obtain the user's personalized tags.
After the intelligent terminal acquires the user's personalized tags, such as ancient-style, pop and rock, the tags are divided into active tags and inactive tags, because a user's music preferences often change over time. The intelligent terminal acquires matched music from a local music library or an online music library according to the active tags and pushes it to the user for playing. When the user sends a song-cutting instruction to the intelligent terminal, for example by tapping a 'next' button, the user wants another song, so the intelligent terminal acquires matched music according to the active tags and pushes it to the user for playing; at the same time, the intelligent terminal acquires the total time length and the played time length of the music that was being played. If the total time is 4 minutes 30 seconds and the played time is 1 minute 20 seconds, the played time accounts for about 30 percent of the total time, which is less than the preset 80 percent. The intelligent terminal judges that the music was clearly not played to the end and that the user's interest in this kind of music may have decreased, so it marks the personalized tag matched with the currently played music as an inactive tag and, at the same time, acquires matched music again from the local or online music library according to the active tags.
Further, following the above example, the intelligent terminal acquires the total time length and the played time length of the currently played music. If the total time is 4 minutes 30 seconds and the played time is 4 minutes 00 seconds, the played time accounts for about 89 percent of the total time, which is more than the preset 80 percent. The intelligent terminal then acquires the lyric information of the currently played music and, according to the played time length, obtains the lyric words and sentences falling within the last 30 seconds. When that lyric information is not empty, the intelligent terminal judges that although most of the music has been played, its main content has not finished, so the user's interest in the music may nevertheless have decreased. The intelligent terminal therefore marks the personalized tag matched with the currently played music as an inactive tag and, at the same time, acquires matched music from the local or online music library according to the active tags.
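For reference, the two ratios used in this example can be checked directly (assuming the preset value is 80%):

```python
# The two scenarios above, expressed as played/total ratios against the 0.8 preset value.
total_seconds = 4 * 60 + 30                      # 4 min 30 s = 270 s
early_cut = (1 * 60 + 20) / total_seconds        # 80 / 270  -> below 0.8: demote the tag
late_cut = (4 * 60) / total_seconds              # 240 / 270 -> above 0.8: check remaining lyrics
print(round(early_cut, 3), round(late_cut, 3))   # 0.296 0.889
```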
In another specific application scenario of the invention, following the above example, when the intelligent terminal acquires a random play instruction sent by the user, it randomly acquires music from a local music library or an online music library and pushes it to the user for playing. At the same time, the intelligent terminal acquires the user's sound input in real time, for example through the microphone of a mobile phone. Within a preset period, for example every 30 seconds, the intelligent terminal generates a segment of comparison audio from the acquired sound information and compares it with the preset template information of the currently played music. When the comparison audio is consistent with the template information, the intelligent terminal judges that the user likes the music, so it acquires the preset tag of the currently played music and sets that tag as an active tag of the user.
The template information can be stored in a local memory of the intelligent terminal or on the cloud server; in the latter case, the intelligent terminal retrieves the corresponding template from the cloud server.
Specifically, the template information may include lyric information. When a user hears music that he or she likes, the user may sing along. The intelligent terminal then generates a segment of comparison audio from the acquired sound information, performs semantic recognition on it, and converts the recognized semantic information into text. The intelligent terminal compares this text with the lyrics of the currently played music, and when the degree of overlap reaches a preset value, for example when 80% of the recognized words are the same as the corresponding lyrics, the intelligent terminal judges that the audio comparison information is consistent with the template information.
Specifically, the template information may further include spectrum information of the music. When a user hears music that he or she likes but is unfamiliar with the lyrics, the user may not sing complete sentences and may instead hum along with the music. The intelligent terminal then generates a segment of comparison audio from the acquired sound information and performs a Fourier transform on it to obtain its spectrum information. The intelligent terminal compares this spectrum information with the spectrum information of the currently played music, and when the degree of overlap exceeds a preset value, for example 80%, the intelligent terminal judges that the audio comparison information is consistent with the template information.
In a second aspect, the present invention further provides a personalized music pushing system, which includes a personalized tag management module 100, a music playing module 200, an instruction obtaining module 300, and a judging module 400;
the personalized tag management module 100 is configured to obtain personalized tags of a user, where the personalized tags include an active tag and an inactive tag;
the music playing module 200 is configured to obtain matching music according to an active tag in the personalized tags, and push the matching music to the user for playing;
the instruction obtaining module 300 is configured to obtain a control instruction input by a user;
when the instruction obtaining module 300 obtains a song cutting instruction of the user, the music playing module 200 is further configured to obtain matched music according to the active tag, and push the matched music to the user for playing;
the judging module 400 is configured to obtain a total duration of currently played music and a played duration;
the judging module 400 is further configured to calculate a ratio of the played duration to the total duration;
when the judging module 400 judges that the ratio of the played time length to the total time length is less than a preset value;
the personalized tag management module 100 is further configured to obtain a personalized tag matched with currently played music from the personalized tags of the user, and record the personalized tag as an inactive tag.
In an embodiment of the present invention, when the judging module 400 judges that the ratio of the played duration to the total duration is not less than a preset value, the judging module 400 is further configured to obtain lyric information of the currently played music, where the lyric information includes lyric words and sentences and time nodes matched with the lyric words and sentences;
the judging module 400 is further configured to obtain a matched time node from the lyric information according to the played duration, and record the matched time node as a matched node;
the judging module 400 is further configured to obtain lyric word and sentence information of a time node after the matching node;
when the lyric word information acquired by the judging module 400 is not empty;
the personalized tag management module 100 is further configured to obtain a personalized tag matched with currently played music from the personalized tags of the user, and record the personalized tag as an inactive tag.
In an embodiment of the present invention, the personalized music pushing system further includes an audio input module;
when the instruction obtaining module 300 obtains a random play instruction of a user;
the music playing module 200 is further configured to randomly acquire music from a preset music library and push the music to the user for playing;
the audio input module is used for acquiring sound input information of the user equipment in real time;
the audio input module is further used for generating audio comparison information according to the acquired sound input information in a preset period;
the judging module 400 is further configured to obtain preset template information according to the currently played music and the current playing progress;
the judging module 400 is further configured to compare the audio comparison information with the template information;
when the judgment module 400 judges that the audio comparison information is consistent with the template information;
the personalized tag management module 100 is further configured to obtain a preset tag of currently played music;
the personalized tag management module 100 is further configured to add the preset tag as an active tag of the user.
In an embodiment of the present invention, the template information is lyric information of music;
the judgment module 400 is further configured to perform semantic recognition on the audio comparison information and generate audio comparison text information of the user;
the judging module 400 is further configured to compare the audio comparison text information with the lyric information;
when the audio comparison text information is consistent with the lyric information, the judging module 400 judges that the audio comparison information is consistent with the template information.
In an embodiment of the present invention, the template information is template spectrum information of music;
the judgment module 400 is further configured to perform a Fourier transform on the audio comparison information, and generate audio comparison spectrum information of the user;
the judging module 400 is further configured to compare the audio comparison spectrum information with the template spectrum information;
when the audio comparison spectrum information is consistent with the template spectrum information, the judging module 400 judges that the audio comparison information is consistent with the template information.
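A structural sketch of how the four modules (100 to 400) might be wired together; all class and method names below are assumptions made for illustration, not the claimed system.

```python
# Illustrative wiring of the four modules; every class and method name is an assumption.
class PersonalityTagManager:                      # module 100
    def __init__(self):
        self.active, self.inactive = set(), set()
    def mark_inactive(self, tag):
        self.active.discard(tag); self.inactive.add(tag)
    def add_active(self, tag):
        self.inactive.discard(tag); self.active.add(tag)

class MusicPlayer:                                # module 200
    def __init__(self, library):
        self.library = library                    # list of dicts with a "tags" entry
    def push_by_tags(self, tags):
        return next((m for m in self.library if tags & set(m["tags"])), None)

class InstructionAcquirer:                        # module 300
    def poll(self):
        return None                               # would return "cut_song", "random_play", ...

class Judge:                                      # module 400
    def __init__(self, preset_ratio=0.8):
        self.preset_ratio = preset_ratio
    def cut_too_early(self, played_seconds, total_seconds):
        return total_seconds > 0 and played_seconds / total_seconds < self.preset_ratio

# Example: a cut at 80 s of a 270 s track marks the matched "rock" tag inactive,
# and the next push is driven by the remaining active tags.
tags, player, judge = PersonalityTagManager(), MusicPlayer([{"title": "song A", "tags": ["pop"]}]), Judge()
tags.add_active("rock"); tags.add_active("pop")
if judge.cut_too_early(80, 270):
    tags.mark_inactive("rock")
print(tags.inactive, player.push_by_tags(tags.active))   # {'rock'} {'title': 'song A', 'tags': ['pop']}
```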
In a specific application scenario of the present invention, the system provided by the second aspect of the present invention is integrated in an intelligent terminal of a user;
specifically, after a user logs in to a personal account on an intelligent terminal, such as the user's smart phone, the intelligent terminal acquires the user's personalized tags according to the personal account information input by the user. The personalized tags can be stored in a memory of the intelligent terminal or on a cloud server; in the latter case, the intelligent terminal sends an acquisition request to the cloud server to obtain the user's personalized tags.
After the intelligent terminal acquires the user's personalized tags, such as ancient-style, pop and rock, the tags are divided into active tags and inactive tags, because a user's music preferences often change over time. The intelligent terminal acquires matched music from a local music library or an online music library according to the active tags and pushes it to the user for playing. When the user sends a song-cutting instruction to the intelligent terminal, for example by tapping a 'next' button, the user wants another song, so the intelligent terminal acquires matched music according to the active tags and pushes it to the user for playing; at the same time, the intelligent terminal acquires the total time length and the played time length of the music that was being played. If the total time is 4 minutes 30 seconds and the played time is 1 minute 20 seconds, the played time accounts for about 30 percent of the total time, which is less than the preset 80 percent. The intelligent terminal judges that the music was clearly not played to the end and that the user's interest in this kind of music may have decreased, so it marks the personalized tag matched with the currently played music as an inactive tag and, at the same time, acquires matched music again from the local or online music library according to the active tags.
Further, following the above example, the intelligent terminal acquires the total time length and the played time length of the currently played music. If the total time is 4 minutes 30 seconds and the played time is 4 minutes 00 seconds, the played time accounts for about 89 percent of the total time, which is more than the preset 80 percent. The intelligent terminal then acquires the lyric information of the currently played music and, according to the played time length, obtains the lyric words and sentences falling within the last 30 seconds. When that lyric information is not empty, the intelligent terminal judges that although most of the music has been played, its main content has not finished, so the user's interest in the music may nevertheless have decreased. The intelligent terminal therefore marks the personalized tag matched with the currently played music as an inactive tag and, at the same time, acquires matched music from the local or online music library according to the active tags.
In another specific application scenario of the invention, following the above example, when the intelligent terminal acquires a random play instruction sent by the user, it randomly acquires music from a local music library or an online music library and pushes it to the user for playing. At the same time, the intelligent terminal acquires the user's sound input in real time, for example through the microphone of a mobile phone. Within a preset period, for example every 30 seconds, the intelligent terminal generates a segment of comparison audio from the acquired sound information and compares it with the preset template information of the currently played music. When the comparison audio is consistent with the template information, the intelligent terminal judges that the user likes the music, so it acquires the preset tag of the currently played music and sets that tag as an active tag of the user.
The template information can be stored in a local memory of the intelligent terminal or on the cloud server; in the latter case, the intelligent terminal retrieves the corresponding template from the cloud server.
Specifically, the template information may include lyric information. When a user hears music that he or she likes, the user may sing along. The intelligent terminal then generates a segment of comparison audio from the acquired sound information, performs semantic recognition on it, and converts the recognized semantic information into text. The intelligent terminal compares this text with the lyrics of the currently played music, and when the degree of overlap reaches a preset value, for example when 80% of the recognized words are the same as the corresponding lyrics, the intelligent terminal judges that the audio comparison information is consistent with the template information.
Specifically, the template information may further include spectrum information of the music. When a user hears music that he or she likes but is unfamiliar with the lyrics, the user may not sing complete sentences and may instead hum along with the music. The intelligent terminal then generates a segment of comparison audio from the acquired sound information and performs a Fourier transform on it to obtain its spectrum information. The intelligent terminal compares this spectrum information with the spectrum information of the currently played music, and when the degree of overlap exceeds a preset value, for example 80%, the intelligent terminal judges that the audio comparison information is consistent with the template information.
It should be understood that the above examples are only for clearly showing the technical solutions of the present invention, and are not intended to limit the embodiments of the present invention. It will be apparent to those skilled in the art from this disclosure that various changes and modifications can be made herein without departing from the spirit and scope of the invention. Therefore, the protection scope of the present patent should be subject to the appended claims.

Claims (10)

1. A method for personalized music push, comprising:
acquiring personalized tags of a user, wherein the personalized tags comprise an active tag and an inactive tag;
acquiring matched music according to the active tags in the personalized tags, and pushing the matched music to the user for playing;
when a song cutting instruction of a user is acquired;
acquiring the total time length of the currently played music and the played time length;
when the ratio of the played time length to the total time length is smaller than a preset value;
acquiring a personalized tag matched with the currently played music from the personalized tags of the user, and recording the personalized tag as an inactive tag;
and acquiring matched music according to the active tag, and pushing the matched music to the user for playing.
2. The method for personalized music push according to claim 1, further comprising:
when the ratio of the played time length to the total time length is not less than a preset value;
acquiring lyric information of currently played music, wherein the lyric information comprises lyric words and sentences and time nodes matched with the lyric words and sentences;
acquiring matched time nodes from the lyric information according to the played time length, and recording the matched time nodes as matched nodes;
obtaining lyric word and sentence information of a time node behind the matched node;
when the acquired lyric word information is not empty;
acquiring a personalized tag matched with the currently played music from the personalized tags of the user, and recording the personalized tag as an inactive tag;
and acquiring matched music according to the active tag, and pushing the matched music to the user for playing.
3. The method for personalized music push according to claim 1, further comprising:
when a random play instruction of a user is acquired;
randomly acquiring music from a preset music library, and pushing the music to the user for playing;
acquiring sound input information of user equipment in real time;
in a preset period, generating audio comparison information according to the acquired sound input information;
acquiring preset template information according to the currently played music and the current playing progress;
comparing the audio comparison information with the template information;
when the audio comparison information is consistent with the template information;
acquiring a preset tag of the currently played music;
and adding the preset tag as an active tag of the user.
4. The method as claimed in claim 3, wherein the template information is lyric information of the music;
comparing the audio comparison information with the template information, specifically including:
performing semantic recognition on the audio comparison information, and generating audio comparison text information of the user;
comparing the audio comparison text information with the lyric information;
when the audio comparison text information is consistent with the lyric information;
and judging that the audio comparison information is consistent with the template information.
5. The method as claimed in claim 3, wherein the template information is template spectrum information of music;
comparing the audio comparison information with the template information, specifically including:
carrying out a Fourier transform on the audio comparison information, and generating audio comparison spectrum information of the user;
comparing the audio comparison spectrum information with the template spectrum information;
when the audio comparison spectrum information is consistent with the template spectrum information;
and judging that the audio comparison information is consistent with the template information.
6. A personalized music pushing system is characterized by comprising a personalized tag management module, a music playing module, an instruction acquisition module and a judgment module;
the personalized tag management module is used for acquiring a personalized tag of a user, wherein the personalized tag comprises an active tag and an inactive tag;
the music playing module is used for acquiring matched music according to the active tags in the personalized tags and pushing the matched music to the user for playing;
the instruction acquisition module is used for acquiring a control instruction input by a user;
when the instruction acquisition module acquires a song cutting instruction of a user, the music playing module is further used for acquiring matched music according to the active tag and pushing the matched music to the user for playing;
the judging module is used for acquiring the total time length of the currently played music and the played time length;
the judging module is further configured to calculate a ratio of the played duration to the total duration;
when the judging module judges that the ratio of the played time length to the total time length is less than a preset value;
the personalized tag management module is further configured to obtain a personalized tag matched with the currently played music from the personalized tags of the user, and record the personalized tag as an inactive tag.
7. The system as claimed in claim 6, wherein when the judging module judges that the ratio of the played duration to the total duration is not less than the preset value, the judging module is further configured to obtain lyric information of the currently played music, wherein the lyric information includes lyric words and sentences and time nodes matched with the lyric words and sentences;
the judging module is also used for acquiring matched time nodes from the lyric information according to the played time length and recording the matched time nodes as matched nodes;
the judging module is also used for acquiring the lyric word and sentence information of the time node behind the matched node;
when the lyric word information acquired by the judging module is not empty;
the personalized tag management module is further configured to obtain a personalized tag matched with the currently played music from the personalized tags of the user, and record the personalized tag as an inactive tag.
8. The personalized music push system of claim 6, further comprising an audio input module;
when the instruction acquisition module acquires a random play instruction of a user;
the music playing module is also used for randomly acquiring music from a preset music library and pushing the music to the user for playing;
the audio input module is used for acquiring sound input information of the user equipment in real time;
the audio input module is further used for generating audio comparison information according to the acquired sound input information in a preset period;
the judging module is also used for acquiring preset template information according to the currently played music and the current playing progress;
the judging module is further configured to compare the audio comparison information with the template information;
when the judgment module judges that the audio comparison information is consistent with the template information;
the personalized tag management module is also used for acquiring a preset tag of the currently played music;
the personalized tag management module is also used for adding the preset tag as an active tag of the user.
9. The personalized music push system of claim 8, wherein the template information comprises lyric information of the music;
the judgment module is also used for carrying out semantic recognition on the audio comparison information and generating audio comparison text information of the user;
the judging module is also used for comparing the audio comparison text information with the lyric information;
and when the audio comparison text information is consistent with the lyric information, the judgment module judges that the audio comparison information is consistent with the template information.
10. The personalized music push system of claim 8, wherein the template information comprises template spectrum information of music;
the judgment module is further used for carrying out a Fourier transform on the audio comparison information and generating audio comparison spectrum information of the user;
the judging module is further configured to compare the audio comparison spectrum information with the template spectrum information;
and when the audio comparison spectrum information is consistent with the template spectrum information, the judgment module judges that the audio comparison information is consistent with the template information.
CN201911300407.5A 2019-12-17 2019-12-17 Personalized music pushing method and system Active CN110830595B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911300407.5A CN110830595B (en) 2019-12-17 2019-12-17 Personalized music pushing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911300407.5A CN110830595B (en) 2019-12-17 2019-12-17 Personalized music pushing method and system

Publications (2)

Publication Number Publication Date
CN110830595A true CN110830595A (en) 2020-02-21
CN110830595B CN110830595B (en) 2022-08-02

Family

ID=69546015

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911300407.5A Active CN110830595B (en) 2019-12-17 2019-12-17 Personalized music pushing method and system

Country Status (1)

Country Link
CN (1) CN110830595B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080133441A1 (en) * 2006-12-01 2008-06-05 Sun Microsystems, Inc. Method and system for recommending music
US20100185671A1 (en) * 2009-01-19 2010-07-22 Microsoft Corporation Personalized media recommendation
CN102622445A (en) * 2012-03-15 2012-08-01 华南理工大学 User interest perception based webpage push system and webpage push method
CN103023971A (en) * 2012-11-15 2013-04-03 广州酷狗计算机科技有限公司 Information pushing method and system of music sharing radio stations
CN105828117A (en) * 2016-03-02 2016-08-03 乐视云计算有限公司 Video automatic push method based on user behavior analysis and video automatic push device thereof
CN108197327A (en) * 2018-02-07 2018-06-22 腾讯音乐娱乐(深圳)有限公司 Song recommendations method, apparatus and storage medium
CN110493654A (en) * 2019-08-20 2019-11-22 安徽抖范视频科技有限公司 The recommendation of video and playback method and device in a kind of list of videos
CN110515816A (en) * 2019-08-20 2019-11-29 安徽抖范视频科技有限公司 A kind of analysis method and analysis system of user behavior

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yao Yi et al.: "Music Recommendation Method Based on Multi-dimensional Denoising of Social Tags", Industrial Control Computer *

Also Published As

Publication number Publication date
CN110830595B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
US10832686B2 (en) Method and apparatus for pushing information
CN107918653B (en) Intelligent playing method and device based on preference feedback
CN107766482B (en) Information pushing and sending method, device, electronic equipment and storage medium
US9190052B2 (en) Systems and methods for providing information discovery and retrieval
US9928834B2 (en) Information processing method and electronic device
US20200075024A1 (en) Response method and apparatus thereof
US8798995B1 (en) Key word determinations from voice data
US20130006627A1 (en) Method and System for Communicating Between a Sender and a Recipient Via a Personalized Message Including an Audio Clip Extracted from a Pre-Existing Recording
KR20190024711A (en) Information verification method and device
US11127399B2 (en) Method and apparatus for pushing information
JP2019061662A (en) Method and apparatus for extracting information
CN107247769A (en) Method for ordering song by voice, device, terminal and storage medium
WO2017191696A1 (en) Information processing system and information processing method
CN106888154B (en) Music sharing method and system
KR20160106075A (en) Method and device for identifying a piece of music in an audio stream
CN104091596A (en) Music identifying method, system and device
CN110968673B (en) Voice comment playing method and device, voice equipment and storage medium
CN114125506B (en) Voice auditing method and device
CN108777804B (en) Media playing method and device
CN110830595B (en) Personalized music pushing method and system
CN111061845A (en) Method, apparatus and computer storage medium for managing chat topics of chat room
CN113032616A (en) Audio recommendation method and device, computer equipment and storage medium
CN110176227B (en) Voice recognition method and related device
KR100888341B1 (en) System and Method for Searching a Sound Source, Server for Searching a Sound Source Therefor
CN112233648A (en) Data processing method, device, equipment and storage medium combining RPA and AI

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant