CN114357236A - Music recommendation method and device, electronic equipment and computer readable storage medium - Google Patents


Info

Publication number: CN114357236A
Application number: CN202111672072.7A
Authority: CN (China)
Prior art keywords: music, recommended, user, dynamic, static
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventor: 李涵
Current Assignee: Zhuo Erzhi Lian Wuhan Research Institute Co Ltd (the listed assignees may be inaccurate)
Original Assignee: Zhuo Erzhi Lian Wuhan Research Institute Co Ltd
Application filed by: Zhuo Erzhi Lian Wuhan Research Institute Co Ltd

Abstract

The present application provides a music recommendation method, a music recommendation apparatus, an electronic device and a computer-readable storage medium. The method includes the following steps: obtaining static feature data and dynamic feature data of a user; performing feature extraction on the static feature data through a first feature extraction layer of a music recommendation model to obtain corresponding static features; performing feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features; performing feature fusion on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features; performing, through an output layer of the music recommendation model and based on the fusion features, prediction of music to be recommended for the user to obtain a corresponding music set to be recommended; and determining recommended music for the user based on the music set to be recommended, and recommending the recommended music to the user. Through the method and the apparatus, the recommended music better matches the user's preferences.

Description

Music recommendation method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to computer technologies, and in particular, to a music recommendation method and apparatus, an electronic device, and a computer-readable storage medium.
Background
In existing music recommendation approaches, music is usually recommended to a user directly according to the user's static attributes, combined with the preferences of a large number of users having similar attributes. Music recommended in this way cannot be differentiated according to the individual differences between users, so the recommended music does not match each user's personal preferences.
Disclosure of Invention
The embodiments of the present application provide a music recommendation method and apparatus, an electronic device and a computer-readable storage medium, which enable the recommended music to better match user preferences.
The technical solutions of the embodiments of the present application are implemented as follows:
the embodiment of the application provides a music recommendation method, which comprises the following steps:
obtaining static characteristic data and dynamic characteristic data of a user;
performing feature extraction on the static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding static features;
performing feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features;
performing feature fusion on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features;
through an output layer of the music recommendation model, based on the fusion features, performing prediction of music to be recommended for the user to obtain a corresponding music set to be recommended;
and determining recommended music aiming at the user based on the music set to be recommended, and recommending the recommended music to the user.
In the foregoing solution, the determining, based on the to-be-recommended music set, recommended music for the user includes:
acquiring time information and position information of a user;
determining user behavior of the user based on the time information and the location information;
determining a recommended music category set for the user based on the user behavior and the correspondence between user behaviors and recommended music categories;
selecting the music to be recommended with the music category belonging to the recommended music category set from the music to be recommended set;
and taking the selected music to be recommended as the recommended music for the user.
In the above scheme, the method further comprises:
receiving a user operation for the recommended music;
re-determining recommended music for the user based on the user operation;
recommending the re-determined recommended music to the user.
In the foregoing solution, the re-determining the recommended music for the user based on the user operation includes:
adding operation data corresponding to the user operation to the dynamic characteristic data to obtain new dynamic characteristic data;
based on the static characteristic data and the new dynamic characteristic data, performing prediction processing on the music to be recommended to the user through the music recommendation model to obtain a new music set to be recommended;
and re-determining the recommended music for the user based on the new music set to be recommended.
In the above solution, before obtaining the static feature data and the dynamic feature data of the user, the method further includes:
obtaining sample static characteristic data, sample dynamic characteristic data and a sample recommended music set;
performing feature extraction on the sample static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding sample static features;
performing feature extraction on the sample dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding sample dynamic features;
performing feature fusion on the sample static features and the sample dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding sample fusion features;
through the output layer of the music recommendation model, based on the sample fusion characteristics, performing prediction processing on music to be recommended to obtain a corresponding prediction recommendation music set;
updating model parameters of the music recommendation model based on an error between the predicted recommended music set and the sample recommended music set.
An embodiment of the present application provides a music recommendation device, including:
The acquisition module is used for acquiring the static characteristic data and the dynamic characteristic data of the user;
the first feature extraction module is used for extracting features of the static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding static features;
the second feature extraction module is used for performing feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features;
the characteristic fusion module is used for performing characteristic fusion on the static characteristic and the dynamic characteristic through a characteristic fusion layer of the music recommendation model to obtain corresponding fusion characteristics;
the prediction module is used for performing prediction processing on the music to be recommended to the user through an output layer of the music recommendation model based on the fusion characteristics to obtain a corresponding music set to be recommended;
and the recommending module is used for determining recommended music aiming at the user based on the music set to be recommended and recommending the recommended music to the user.
An embodiment of the present application provides an electronic device, including:
a memory for storing executable instructions;
and the processor is used for realizing the music recommendation method provided by the embodiment of the application when the processor executes the executable instructions stored in the memory.
The embodiment of the application provides a computer-readable storage medium, which stores executable instructions for causing a processor to execute the method for recommending music provided by the embodiment of the application.
In the embodiments of the present application, static feature data and dynamic feature data of a user are obtained; feature extraction is performed on the static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding static features; feature extraction is performed on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features; feature fusion is performed on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features; through an output layer of the music recommendation model and based on the fusion features, music to be recommended is predicted for the user to obtain a corresponding music set to be recommended; and recommended music for the user is determined based on the music set to be recommended and recommended to the user, so that the recommended music better matches the user's preferences.
Drawings
FIG. 1 is a schematic diagram of an alternative structure of a music recommendation system provided in an embodiment of the present application;
fig. 2 is an alternative structural schematic diagram of an electronic device provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart of an alternative music recommendation method provided by an embodiment of the present application;
FIG. 4 is a schematic diagram illustrating an alternative refinement of step 306 provided by an embodiment of the present application;
FIG. 5 is an alternative diagram of a correspondence between user behavior and recommended music categories provided by embodiments of the present application;
FIG. 6 is an alternative flow chart illustrating steps following step 306 provided by embodiments of the present application;
FIG. 7 is a schematic diagram illustrating an alternative flowchart of step 602 provided by an embodiment of the present application;
FIG. 8 is a schematic flow chart of an alternative process of steps prior to step 301 provided by embodiments of the present application;
fig. 9 is an alternative flowchart of a music recommendation method according to an embodiment of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application is described in further detail below with reference to the accompanying drawings. The described embodiments should not be regarded as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
In the following description, the terms "first", "second" and "third" are used only to distinguish similar objects and do not denote a particular order of objects. It should be understood that, where permitted, the specific order or sequence may be interchanged so that the embodiments of the application described herein can be implemented in an order other than that illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
Before further detailed description of the embodiments of the present application, terms and expressions referred to in the embodiments of the present application will be described, and the terms and expressions referred to in the embodiments of the present application will be used for the following explanation.
1) Artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use the knowledge to obtain the best results.
In other words, artificial intelligence is a comprehensive branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the capabilities of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. The basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, machine learning/deep learning, and the like.
Machine Learning (ML) is a multi-domain interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specializes in studying how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve its performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent, and it is applied in every field of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from instruction.
The embodiment of the application provides a music recommendation method and device, electronic equipment and a computer readable storage medium, which can enable recommended music to better accord with user preferences.
First, a music recommendation system provided in an embodiment of the present application is described, referring to fig. 1, where fig. 1 is an optional architecture diagram of a music recommendation system 100 provided in an embodiment of the present application, and a terminal 103 is connected to a server 101 through a network 102. In some embodiments, the terminal 103 may be, but is not limited to, a laptop, a tablet, a desktop computer, a smart phone, a dedicated messaging device, a portable gaming device, a smart speaker, a smart watch, and the like. The server 101 may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud service, a cloud database, cloud computing, a cloud function, cloud storage, a Network service, cloud communication, a middleware service, a domain name service, a security service, a Content Delivery Network (CDN) service, and a big data and artificial intelligence platform. The network 102 may be a wide area network or a local area network, or a combination of both. The terminal 103 and the server 101 may be directly or indirectly connected through wired or wireless communication, and the embodiment of the present application is not limited thereto.
The server 101 is configured to train the music recommendation model and send the music recommendation model to the terminal 103.
The terminal 103 is used for receiving the music recommendation model sent by the server 101; obtaining static characteristic data and dynamic characteristic data of a user; performing feature extraction on the static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding static features; performing feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features; performing feature fusion on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features; through an output layer of the music recommendation model, based on the fusion characteristics, performing prediction processing on music to be recommended on the user to obtain a corresponding music set to be recommended; and determining recommended music aiming at the user based on the music set to be recommended, and recommending the recommended music to the user.
Next, an electronic device for implementing the music recommendation method according to an embodiment of the present application is described, referring to fig. 2, fig. 2 is an optional schematic structural diagram of an electronic device 200 according to an embodiment of the present application, and in practical applications, the electronic device 200 may be implemented as the terminal 103 or the server 101 in fig. 1, and the electronic device is taken as the terminal 103 shown in fig. 1 as an example, so as to describe the electronic device for implementing the music recommendation method according to the embodiment of the present application. The electronic device 200 shown in fig. 2 includes: at least one processor 201, memory 205, at least one network interface 202, and a user interface 203. The various components in the electronic device 200 are coupled together by a bus system 204. It is understood that the bus system 204 is used to enable communications among the components. The bus system 204 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 204 in fig. 2.
The processor 201 may be an integrated circuit chip having signal processing capabilities, such as a general-purpose processor, a Digital Signal Processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 203 includes one or more output devices 2031, including one or more speakers and/or one or more visual display screens, that enable the presentation of media content. The user interface 203 also includes one or more input devices 2032 including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 205 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. Memory 205 may optionally include one or more storage devices physically located remote from processor 201.
The memory 205 includes either volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory. The nonvolatile Memory may be a Read Only Memory (ROM), and the volatile Memory may be a Random Access Memory (RAM). The memory 205 described in embodiments herein is intended to comprise any suitable type of memory.
In some embodiments, the memory 205 can store data to support various operations; examples of the data include programs, modules and data structures, or a subset or superset thereof. In the embodiment of the present application, the memory 205 stores an operating system 2051, a network communication module 2052, a presentation module 2053, an input processing module 2054 and a music recommendation device 2055; specifically:
an operating system 2051, which includes system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and for handling hardware-based tasks;
a network communication module 2052, configured to reach other computing devices via one or more (wired or wireless) network interfaces 202; exemplary network interfaces 202 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), and the like;
a presentation module 2053 for enabling presentation of information (e.g., user interfaces for operating peripherals and displaying content and information) via one or more output devices 2031 (e.g., display screens, speakers, etc.) associated with the user interface 203;
an input processing module 2054 for detecting one or more user inputs or interactions from one of the one or more input devices 2032 and for translating the detected inputs or interactions.
In some embodiments, the music recommendation device provided by the embodiments of the present application may be implemented in software. Fig. 2 shows a music recommendation device 2055 stored in the memory 205, which may be software in the form of programs, plug-ins and the like, and includes the following software modules: an obtaining module 20551, a first feature extraction module 20552, a second feature extraction module 20553, a feature fusion module 20554, a prediction module 20555 and a recommendation module 20556. These modules are logical, and thus may be arbitrarily combined or further split depending on the functions implemented. The functions of the respective modules are explained below.
In other embodiments, the music recommendation device provided in the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to execute the music recommendation method provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may employ one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), or other electronic components.
The music recommendation method provided by the embodiment of the present application will be described in conjunction with an exemplary application and implementation of the server provided by the embodiment of the present application.
Referring to fig. 3, fig. 3 is an alternative flowchart of a music recommendation method provided in an embodiment of the present application, which will be described with reference to the steps shown in fig. 3.
step 301, obtaining static feature data and dynamic feature data of a user;
step 302, performing feature extraction on the static feature data through a first feature extraction layer of a music recommendation model to obtain corresponding static features;
step 303, performing feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features;
step 304, performing feature fusion on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features;
step 305, performing, through an output layer of the music recommendation model and based on the fusion features, prediction of music to be recommended for the user to obtain a corresponding music set to be recommended;
step 306, determining recommended music for the user based on the music set to be recommended, and recommending the recommended music to the user.
The music recommendation model may be implemented by a neural network model, such as a U-shaped network (U-Net), a Fully Convolutional Network (FCN), a Feature Pyramid Network (FPN), a Long Short-Term Memory network (LSTM), and so on.
It should be noted that the static feature data and the dynamic feature data provided in the embodiment of the present application are both plural. The static feature data of the user may be, but is not limited to, the user's gender, age, occupation, educational background, location, and the like. The dynamic feature data may be, but is not limited to, the user's downloads, favorites, loop plays, skips and other historical operation information generated in the process of listening to songs.
In the embodiment of the present application, after the plurality of static feature data are obtained, the plurality of static feature data are preprocessed. Specifically, the server encodes each piece of static feature data to obtain a corresponding coded representation, and then concatenates the coded representations of all the static feature data into a coding vector. Illustratively, for the gender static feature data, 0 denotes male and 1 denotes female; the age static feature data is represented by segments such as 0-10, 11-20, 21-30, 31-40, 41-50, 51-60, 61-70, 71-80, 81-90 and over 90, where a 1 indicates that the user falls within the corresponding age segment and a 0 indicates that the user does not; static feature data such as occupation, educational background and location can be expressed by one-hot encoding. Combining the coded representations of gender, age, occupation, educational background and location yields a coding vector consisting of 0s and 1s, whose dimension is denoted as M. Combining the coding vectors obtained by preprocessing the static feature data of N users yields an N×M static feature matrix. Here, the static feature data and the dynamic feature data of the N users are processed at the same time, so music can be recommended to the N users simultaneously, where N is a positive integer greater than or equal to 2. In addition, after obtaining the plurality of dynamic feature data, the server also preprocesses them. Specifically, the server preprocesses dynamic feature data such as downloads, favorites, loop plays, skips and other historical operation information. Illustratively, the server constructs a user-music-operation dynamic feature matrix of dimensions U×P×K, where U is the number of users, P is the number of pieces of music, and K is the number of operation types (for example, with the four operations download, favorite, loop play and skip, K is 4). When the user behavior is a positive behavior such as downloading, adding to favorites or loop playing, a 1 is recorded at the corresponding element position of the matrix; when a negative behavior such as skipping a song occurs, a -1 is recorded at the corresponding element position of the dynamic feature matrix. For example, if user 1 downloads music 1, the element at position [0, 0, 0] of the dynamic feature matrix is set to 1. Because this matrix is sparse, the server also applies Principal Component Analysis (PCA) dimensionality reduction to it to obtain a reduced-dimension dynamic feature matrix.
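For illustration only, the following Python sketch shows one way the preprocessing described above could be implemented; the patent provides no code, and the age bins, the assumed occupation/education vocabularies, the flattening of the tensor per user before PCA, and the use of scikit-learn's PCA are assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# --- Static features: encode each user as a 0/1 vector and stack into an N x M matrix ---
AGE_BINS = [(0, 10), (11, 20), (21, 30), (31, 40), (41, 50),
            (51, 60), (61, 70), (71, 80), (81, 90), (91, 200)]
OCCUPATIONS = ["student", "engineer", "teacher", "other"]          # assumed vocabulary
EDUCATIONS = ["primary", "secondary", "bachelor", "postgraduate"]  # assumed vocabulary

def encode_static(user):
    gender = [0.0] if user["gender"] == "male" else [1.0]          # 0 = male, 1 = female
    age = [1.0 if lo <= user["age"] <= hi else 0.0 for lo, hi in AGE_BINS]
    occupation = [1.0 if user["occupation"] == o else 0.0 for o in OCCUPATIONS]  # one-hot
    education = [1.0 if user["education"] == e else 0.0 for e in EDUCATIONS]     # one-hot
    return np.array(gender + age + occupation + education)

users = [
    {"gender": "male", "age": 25, "occupation": "student", "education": "bachelor"},
    {"gender": "female", "age": 34, "occupation": "engineer", "education": "postgraduate"},
]
static_matrix = np.stack([encode_static(u) for u in users])        # shape (N, M)

# --- Dynamic features: user x music x operation tensor with +1 / -1 entries ---
OPS = {"download": 0, "favorite": 1, "loop_play": 2, "skip": 3}    # K = 4 operation types
U, P, K = len(users), 100, len(OPS)                                # P = size of the music catalogue
dynamic = np.zeros((U, P, K))
dynamic[0, 0, OPS["download"]] = 1.0   # user 0 downloaded music 0 -> positive behavior
dynamic[1, 5, OPS["skip"]] = -1.0      # user 1 skipped music 5    -> negative behavior

# The tensor is sparse, so reduce its dimensionality per user with PCA
flat = dynamic.reshape(U, P * K)
pca = PCA(n_components=min(U, 2))          # toy component count for the example
dynamic_reduced = pca.fit_transform(flat)  # shape (N, reduced_dim)
print(static_matrix.shape, dynamic_reduced.shape)
```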
In actual implementation, the server inputs the obtained static feature data into the first feature extraction layer of the music recommendation model, and performs feature extraction on the static feature data through the first feature extraction layer to obtain the corresponding static feature F1. It should be understood that the input to the first feature extraction layer is the static feature matrix obtained after the above preprocessing. Meanwhile, the server inputs the obtained dynamic feature data into the second feature extraction layer of the music recommendation model, and performs feature extraction on the dynamic feature data through the second feature extraction layer to obtain the corresponding dynamic feature F2. Likewise, the input to the second feature extraction layer is the dynamic feature matrix obtained after preprocessing. The static feature output by the first feature extraction layer and the dynamic feature output by the second feature extraction layer are then input into the feature fusion layer of the music recommendation model, and the static feature F1 and the dynamic feature F2 are fused through the feature fusion layer to obtain the corresponding fusion feature. Here, the feature fusion may be feature concatenation, that is, the static feature and the dynamic feature are concatenated into one feature, yielding the fusion feature F3. Finally, through the output layer of the music recommendation model and based on the fusion feature, music to be recommended is predicted for the user to obtain the corresponding music set to be recommended. Here, the music set to be recommended includes a plurality of pieces of music to be recommended.
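The two-branch structure described above can be sketched as follows. This is a minimal illustration assuming PyTorch and plain fully connected layers for the two feature extraction branches; it is not the patent's actual implementation, and the layer sizes, the sigmoid scoring and the 0.5 candidate threshold are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class MusicRecModel(nn.Module):
    """Two-branch recommendation model: static branch + dynamic branch -> fusion -> output."""

    def __init__(self, static_dim, dynamic_dim, hidden_dim, num_music):
        super().__init__()
        # First feature extraction layer: static feature data -> static feature F1
        self.static_branch = nn.Sequential(nn.Linear(static_dim, hidden_dim), nn.ReLU())
        # Second feature extraction layer: dynamic feature data -> dynamic feature F2
        self.dynamic_branch = nn.Sequential(nn.Linear(dynamic_dim, hidden_dim), nn.ReLU())
        # Output layer: fusion feature F3 -> one score per piece of music in the catalogue
        self.output = nn.Linear(2 * hidden_dim, num_music)

    def forward(self, static_x, dynamic_x):
        f1 = self.static_branch(static_x)
        f2 = self.dynamic_branch(dynamic_x)
        f3 = torch.cat([f1, f2], dim=-1)        # feature fusion layer: concatenation of F1 and F2
        return torch.sigmoid(self.output(f3))   # per-music recommendation scores

# Usage: the pieces whose score exceeds a threshold form the music set to be recommended
model = MusicRecModel(static_dim=20, dynamic_dim=8, hidden_dim=32, num_music=100)
scores = model(torch.rand(4, 20), torch.rand(4, 8))       # batch of 4 users
candidate_sets = [torch.nonzero(row > 0.5).flatten().tolist() for row in scores]
```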
Referring to fig. 4, fig. 4 is a schematic view of an optional detailed flow of step 306 provided in this embodiment of the application, and in some embodiments, step 306 may also be implemented as follows:
step 401, obtaining time information and position information of a user;
step 402, determining the user behavior of the user based on the time information and the position information;
step 403, determining a recommended music category set for the user based on the user behavior and the correspondence between user behaviors and recommended music categories;
step 404, selecting, from the music set to be recommended, the music to be recommended whose music category belongs to the recommended music category set;
step 405, the selected music to be recommended is used as the recommended music for the user, and the recommended music is recommended to the user.
In actual implementation, the server obtains the current time information and the current location information of the user. Specifically, the server obtains the current time information by reading the network time, and obtains the current location information of the user by receiving the positioning information sent by the user terminal. The server then analyzes the user behavior of the user based on the time information and the location information. Specifically, the server may also determine the moving speed of the user based on the change of the user's location information over time, and analyze the user behavior by combining the moving speed with the location information. For example, when the user's location is in a residential area, the time is morning or evening, and the moving speed of the user is 0, it may be determined that the user is currently getting up or falling asleep. The server then determines a recommended music category set for the user based on the determined user behavior and the correspondence between user behaviors and recommended music categories. Here, the correspondence between user behaviors and recommended music categories may be stored in advance, either in the form of a table or in the form of a database. Illustratively, referring to fig. 5, fig. 5 is an optional schematic diagram of the correspondence between user behaviors and recommended music categories provided by an embodiment of the present application. When the user behavior is getting up or falling asleep, the corresponding recommended music categories may be light music, classical (or ancient style), pop, and the like. When the user behavior is working or studying, the corresponding recommended music categories may be pop, light music, and the like. When the user behavior is riding the subway or a bus, the corresponding recommended music categories may be pop, light music, ballads, and the like. When the user behavior is driving, the corresponding recommended music categories may be rock, pop, rap, and the like. When the user behavior is strenuous exercise, the corresponding recommended music categories may be rock, pop, jazz, electronic, and the like. When the user behavior is low-intensity exercise, the corresponding recommended music categories may be pop, light music, dance music, classical (or ancient style), and the like. When the user behavior is dining, the corresponding recommended music categories may be pop, light music, classical (or ancient style), jazz, and the like.
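A minimal rule-based sketch of the behavior inference and of the FIG. 5 correspondence is given below; the location types, speed threshold and exact category lists are illustrative assumptions rather than values taken from the patent.

```python
from datetime import datetime

# Correspondence between user behavior and recommended music categories (after FIG. 5);
# the lists here are illustrative, not an exhaustive copy of the patent's table.
BEHAVIOR_TO_CATEGORIES = {
    "getting_up_or_sleeping": {"light", "classical", "pop"},
    "working_or_studying":    {"pop", "light"},
    "commuting":              {"pop", "light", "ballad"},
    "driving":                {"rock", "pop", "rap"},
    "intense_exercise":       {"rock", "pop", "jazz", "electronic"},
    "light_exercise":         {"pop", "light", "dance", "classical"},
    "dining":                 {"pop", "light", "classical", "jazz"},
}

def infer_behavior(now: datetime, location_type: str, speed_kmh: float) -> str:
    """Infer the user behavior from current time, location type and moving speed (toy rules)."""
    if location_type == "residential" and speed_kmh == 0 and (now.hour < 9 or now.hour >= 21):
        return "getting_up_or_sleeping"
    if location_type in ("office", "school") and speed_kmh == 0:
        return "working_or_studying"
    if location_type == "road" and speed_kmh > 30:
        return "driving"
    if location_type in ("subway", "bus"):
        return "commuting"
    return "dining"   # fallback for the example

behavior = infer_behavior(datetime(2022, 1, 1, 22, 30), "residential", 0.0)
recommended_categories = BEHAVIOR_TO_CATEGORIES[behavior]
print(behavior, recommended_categories)
```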
The server then selects, from the music set to be recommended, the music to be recommended whose music category belongs to the recommended music category set, takes the selected music as the recommended music for the user, and recommends it to the user. Illustratively, suppose the music set to be recommended includes music A, music B and music C, where the music category of music A is light music, the music category of music B is rap, and the music category of music C is classical; if the recommended music category set determined based on the user behavior includes light music, classical (or ancient style) and pop, then music A and music C are determined to be the recommended music belonging to the recommended music category set and are recommended to the user.
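The category filtering in the music A / music B / music C example above can be expressed as a short sketch; the category labels attached to the candidate music are assumed metadata.

```python
# Candidate set produced by the model, each entry tagged with its music category (assumed metadata)
candidate_music = {"music_A": "light", "music_B": "rap", "music_C": "classical"}
recommended_categories = {"light", "classical", "pop"}   # e.g. behavior = getting up / falling asleep

recommended_music = [m for m, category in candidate_music.items()
                     if category in recommended_categories]
print(recommended_music)   # ['music_A', 'music_C'] -- music_B (rap) is filtered out
```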
Referring to fig. 6, fig. 6 is an optional flowchart of steps performed after step 306 according to an embodiment of the present application. In some embodiments, the method may further perform:
step 601, receiving user operation aiming at the recommended music;
step 602, based on the user operation, re-determining recommended music for the user;
step 603, recommending the re-determined recommended music to the user.
In practical implementation, the user operations include operations such as downloading, adding to favorites, loop playing and skipping performed while listening to songs. The server receives, through the user terminal, the user's operation on the recommended music, and after receiving the user operation, re-determines the recommended music for the user. For example, when the user operation is skipping, the server re-determines recommended music and recommends it to the corresponding user.
In an actual scenario, the music library is huge. The music set to be recommended is determined from the music library, so the number of pieces of music in the music set to be recommended is very large, and the number of pieces of music to be recommended obtained after filtering by the recommended music category set determined from the user behavior is also very large. Therefore, in this embodiment of the application, the server selects a recommendation number of pieces of recommended music from the plurality of pieces of recommended music as a recommendation list to recommend to the user. Here, the server may select them randomly. The user operation may be directed at the entire recommendation list or at one or more pieces of music in the recommendation list. When the user operation is directed at the entire recommendation list, if the user issues a skip operation on the recommendation list, the server returns to step 301, re-determines the music set to be recommended through the music recommendation model, and re-determines a plurality of pieces of recommended music from the re-determined music set to be recommended. In this embodiment of the application, the server compares the plurality of re-determined pieces of recommended music (the re-determined recommendation list) with the plurality of pieces of recommended music before the user operation (the previous recommendation list), and recommends the re-determined recommended music to the user when the coincidence degree of the two lists is smaller than a coincidence degree threshold. Here, the coincidence degree may be the ratio of the recommended music shared by the two recommendation lists to the recommendation number (i.e., the number of pieces of recommended music in a recommendation list). When the coincidence degree is greater than or equal to the coincidence degree threshold, the server continues to determine a new recommendation list. Here, the server may remove the recommended music that coincides with the previous recommendation list and select new music to be recommended from the newly determined music set to be recommended as a supplement.
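The coincidence-degree check described above can be sketched as follows; the list contents and the 0.5 threshold are assumptions, since the patent does not fix a particular threshold value.

```python
def coincidence_degree(new_list, previous_list):
    """Ratio of recommended music shared by the two lists to the recommendation number."""
    shared = set(new_list) & set(previous_list)
    return len(shared) / len(new_list)

def should_recommend(new_list, previous_list, threshold=0.5):
    """Recommend the new list only when its overlap with the previous list is below the threshold."""
    return coincidence_degree(new_list, previous_list) < threshold

previous = ["m1", "m2", "m3", "m4"]
new = ["m1", "m5", "m6", "m7"]
print(coincidence_degree(new, previous))   # 0.25
print(should_recommend(new, previous))     # True -> recommend the re-determined list
```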
In some embodiments, referring to fig. 7, fig. 7 is an optional detailed flowchart of step 602 provided in this embodiment, and in some embodiments, step 602 may also be implemented as follows:
step 701, adding operation data corresponding to the user operation to the dynamic characteristic data to obtain new dynamic characteristic data;
step 702, performing prediction processing on the music to be recommended to the user through the music recommendation model based on the static characteristic data and the new dynamic characteristic data to obtain a new music set to be recommended;
and 703, re-determining the recommended music for the user based on the new music set to be recommended.
In practical implementation, after the server receives the user operation of the user, the set of music to be recommended may be re-determined by using the user operation in combination with the music recommendation model. Specifically, the server obtains operation data corresponding to user operation, and adds the operation data as new dynamic characteristic data to an existing dynamic characteristic data set to obtain a new dynamic characteristic data set. And then inputting the existing static characteristic data and the new dynamic characteristic data set into a music recommendation model to obtain a new music set to be recommended, and re-determining the recommended music for the user based on the new music set to be recommended. Here, the process of re-determining the recommended music for the user refers to the foregoing steps, which are not described herein again.
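A small sketch of folding a new user operation into the user-music-operation tensor is given below; the tensor sizes and operation names follow the earlier preprocessing example and are assumptions, not values from the patent.

```python
import numpy as np

OPS = {"download": 0, "favorite": 1, "loop_play": 2, "skip": 3}

def add_operation(dynamic_tensor, user_idx, music_idx, op_name):
    """Record a new user operation in the user x music x operation tensor (+1 positive, -1 skip)."""
    value = -1.0 if op_name == "skip" else 1.0
    updated = dynamic_tensor.copy()
    updated[user_idx, music_idx, OPS[op_name]] = value
    return updated

dynamic = np.zeros((2, 100, len(OPS)))   # 2 users, 100 pieces of music, 4 operation types
dynamic = add_operation(dynamic, user_idx=0, music_idx=7, op_name="skip")
# The updated tensor is then preprocessed again (PCA, etc.) and fed back through the
# music recommendation model together with the unchanged static features to obtain a
# new music set to be recommended.
```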
Referring to fig. 8, fig. 8 is an optional flowchart of steps before step 301 provided in this embodiment of the present application, and in some embodiments, before step 301, the method may further perform:
step 801, obtaining sample static characteristic data, sample dynamic characteristic data and a sample recommended music set;
step 802, performing feature extraction on the sample static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding sample static features;
step 803, performing feature extraction on the sample dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding sample dynamic features;
step 804, performing feature fusion on the sample static features and the sample dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding sample fusion features;
step 805, performing prediction processing on music to be recommended through an output layer of the music recommendation model based on the sample fusion characteristics to obtain a corresponding prediction recommendation music set;
step 806, updating model parameters of the music recommendation model based on an error between the predicted recommended music set and the sample recommended music set.
In practical implementation, before the music recommendation model is used to predict the music set to be recommended, the music recommendation model is trained. Specifically, the server obtains the sample static feature data and the sample dynamic feature data, inputs them into the music recommendation model, predicts the music to be recommended through the music recommendation model to obtain a corresponding predicted recommended music set, determines the error between the predicted recommended music set and the sample recommended music set, and updates the model parameters of the music recommendation model based on the error, thereby completing the training of the music recommendation model.
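A minimal training sketch is shown below, assuming PyTorch, a multi-label 0/1 target over the music catalogue standing in for the sample recommended music set, and binary cross-entropy as the error; the patent only speaks of "an error" between the predicted and sample sets, so the loss and optimizer choices are assumptions, and the model is re-declared here only to keep the sketch self-contained.

```python
import torch
import torch.nn as nn

class MusicRecModel(nn.Module):
    """Same two-branch structure as the earlier sketch, producing one logit per piece of music."""

    def __init__(self, static_dim, dynamic_dim, hidden_dim, num_music):
        super().__init__()
        self.static_branch = nn.Sequential(nn.Linear(static_dim, hidden_dim), nn.ReLU())
        self.dynamic_branch = nn.Sequential(nn.Linear(dynamic_dim, hidden_dim), nn.ReLU())
        self.output = nn.Linear(2 * hidden_dim, num_music)

    def forward(self, static_x, dynamic_x):
        fused = torch.cat([self.static_branch(static_x), self.dynamic_branch(dynamic_x)], dim=-1)
        return self.output(fused)          # raw logits, one per piece of music

model = MusicRecModel(static_dim=20, dynamic_dim=8, hidden_dim=32, num_music=100)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()         # assumed error measure between predicted and sample sets

# Toy sample: static data, dynamic data, and a 0/1 target marking the sample recommended music set
sample_static = torch.rand(16, 20)
sample_dynamic = torch.rand(16, 8)
sample_target = (torch.rand(16, 100) > 0.9).float()

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(sample_static, sample_dynamic)
    loss = criterion(logits, sample_target)   # error between prediction and sample recommended set
    loss.backward()                           # update model parameters based on the error
    optimizer.step()
```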
In the embodiment of the present application, static feature data and dynamic feature data of a user are obtained; feature extraction is performed on the static feature data through the first feature extraction layer of the music recommendation model to obtain corresponding static features; feature extraction is performed on the dynamic feature data through the second feature extraction layer of the music recommendation model to obtain corresponding dynamic features; feature fusion is performed on the static features and the dynamic features through the feature fusion layer of the music recommendation model to obtain corresponding fusion features; through the output layer of the music recommendation model and based on the fusion features, music to be recommended is predicted for the user to obtain a corresponding music set to be recommended; and recommended music for the user is determined based on the music set to be recommended and recommended to the user, so that the music recommended to the user is closer to the user's preferences.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Referring to fig. 9, fig. 9 is an alternative flowchart of a music recommendation method provided in an embodiment of the present application. The following is a description with reference to the steps.
S1, establish user features according to the user's registration information and behavior data. The user features include static feature data such as the user's gender, age and occupation, and dynamic feature data such as downloads, favorites, loop plays, skips and other historical operation information generated while listening to songs.
S2, because the static feature data and the dynamic feature data are in text format, convert them into word vectors using the word2vec method.
S3, input the static features into a convolutional neural network (such as VGG, ResNet and the like) for feature extraction to obtain a feature vector F1.
S4, input the dynamic features into a Long Short-Term Memory network (LSTM) for feature extraction to obtain a feature F2.
S5, concatenate the feature F1 and the feature F2 to obtain the final feature F3 (see the sketch after step S9 for an illustrative implementation of steps S2-S5).
S6, train the network and infer the works preferred by the user, recorded as the set S to be recommended.
S7, analyze the user's location information and the related environment.
(1) The user's location information is collected using wireless communication network devices. A mobile phone can use its built-in positioning system, while a PC needs to use a positioning API such as those of Baidu or Gaode (Amap).
(2) The recommendation system takes the location information and the time information into consideration, infers the user behavior, and selects the recommended music categories according to that behavior. When the user behavior is uncertain, the union of the recommended categories of the possible behaviors can be used. For example, when the location is at home or in a dormitory, the user may be falling asleep or studying, and the recommended categories are the union of the two. This step can be flexibly replaced with an existing music-type recommendation method.
S8, if the user does not click the refresh button, recommend to the user the music in the set S to be recommended that matches the selected categories; if the user clicks the refresh button, adjust according to S9 by decreasing songs similar to the skipped ones and increasing songs similar to the loop-played ones. The similarity can be determined according to existing technical solutions.
S9, because various types of music are recommended to the user, fine adjustment can be performed according to the user's operations while listening to songs. A refresh button is provided on the recommendation interface; when the user clicks refresh, the flow returns to S1, songs similar to those the user skipped are reduced, and songs similar to those played in a loop are increased.
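For steps S2-S5, the following sketch (referenced from S5 above) assumes that word2vec vectors for the static and dynamic text have already been computed, and uses a single 1-D convolution in place of a full VGG/ResNet for the static branch; all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TextFeatureFusion(nn.Module):
    """CNN over static word vectors + LSTM over dynamic word vectors, concatenated (steps S3-S5)."""

    def __init__(self, embed_dim=100, conv_channels=32, lstm_hidden=32):
        super().__init__()
        # A single 1-D convolution stands in for the VGG/ResNet-style CNN of step S3.
        self.cnn = nn.Sequential(
            nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        # LSTM over the dynamic (listening-history) word-vector sequence, as in step S4.
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True)

    def forward(self, static_vecs, dynamic_vecs):
        # static_vecs:  (batch, seq_len, embed_dim) word2vec vectors of the static text fields
        # dynamic_vecs: (batch, seq_len, embed_dim) word2vec vectors of the dynamic behavior text
        f1 = self.cnn(static_vecs.transpose(1, 2)).squeeze(-1)   # (batch, conv_channels)
        _, (h_n, _) = self.lstm(dynamic_vecs)
        f2 = h_n[-1]                                             # (batch, lstm_hidden)
        return torch.cat([f1, f2], dim=-1)                       # F3 = [F1 ; F2]   (step S5)

fusion = TextFeatureFusion()
f3 = fusion(torch.rand(4, 6, 100), torch.rand(4, 20, 100))
print(f3.shape)   # torch.Size([4, 64])
```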
In the embodiments of the present application, the static and dynamic features of the user are first extracted to select the music preferred by the user; the user's behavior scenario is then judged according to the acquired geographic location and time of the user, and the music types are filtered according to that behavior scenario. In addition, a refresh operation is provided at recommendation time, so that the recommendation result can be adjusted according to the user's real-time operations.
Continuing with the exemplary structure of the music recommendation device 2055 implemented as software modules provided by the embodiments of the present application, in some embodiments, as shown in fig. 2, the software modules stored in the music recommendation device 2055 of the memory 205 may include:
an obtaining module 20551, configured to obtain static feature data and dynamic feature data of a user;
the first feature extraction module 20552 is configured to perform feature extraction on the static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding static features;
a second feature extraction module 20553, configured to perform feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features;
the feature fusion module 20554 is configured to perform feature fusion on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features;
the prediction module 20555 is configured to perform, through an output layer of the music recommendation model and based on the fusion features, prediction of music to be recommended for the user to obtain a corresponding music set to be recommended;
a recommending module 20556, configured to determine, based on the set of music to be recommended, recommended music for the user, and recommend the recommended music to the user.
In some embodiments, the recommendation module is further configured to obtain time information and location information of the user; determining user behavior of the user based on the time information and the location information; determining a recommended music category set for the user based on the user behavior, the corresponding relationship between the user behavior and the recommended music category; selecting the music to be recommended with the music category belonging to the recommended music category set from the music to be recommended set; and taking the selected music to be recommended as the recommended music for the user.
In some embodiments, the recommendation module is further configured to receive a user action for the recommended music; re-determining recommended music for the user based on the user operation; recommending the re-determined recommended music to the user.
In some embodiments, the recommending module is further configured to add operation data corresponding to the user operation to the dynamic feature data to obtain new dynamic feature data; based on the static characteristic data and the new dynamic characteristic data, performing prediction processing on the music to be recommended to the user through the music recommendation model to obtain a new music set to be recommended; and re-determining the recommended music for the user based on the new music set to be recommended.
In some embodiments, the apparatus further comprises: a model training module, configured to obtain sample static feature data, sample dynamic feature data and a sample recommended music set; perform feature extraction on the sample static feature data through the first feature extraction layer of the music recommendation model to obtain corresponding sample static features; perform feature extraction on the sample dynamic feature data through the second feature extraction layer of the music recommendation model to obtain corresponding sample dynamic features; perform feature fusion on the sample static features and the sample dynamic features through the feature fusion layer of the music recommendation model to obtain corresponding sample fusion features; perform, through the output layer of the music recommendation model and based on the sample fusion features, prediction of the music to be recommended to obtain a corresponding predicted recommended music set; and update model parameters of the music recommendation model based on an error between the predicted recommended music set and the sample recommended music set.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the music recommendation method according to the embodiment of the present application.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions, which when executed by a processor, cause the processor to execute the music recommendation method provided by the embodiments of the present application.
In some embodiments, the computer-readable storage medium may be memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash, magnetic surface memory, optical disk, or CD-ROM; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code).
By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.
In conclusion, through the embodiments of the present application, the recommended music better matches the user's preferences.
The above description is only an example of the present application, and is not intended to limit the scope of the present application. Any modification, equivalent replacement, and improvement made within the spirit and scope of the present application are included in the protection scope of the present application.

Claims (10)

1. A music recommendation method, comprising:
obtaining static characteristic data and dynamic characteristic data of a user;
performing feature extraction on the static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding static features;
performing feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features;
performing feature fusion on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features;
through an output layer of the music recommendation model, based on the fusion features, performing prediction of music to be recommended for the user to obtain a corresponding music set to be recommended;
and determining recommended music aiming at the user based on the music set to be recommended, and recommending the recommended music to the user.
2. The music recommendation method according to claim 1, wherein the determining the recommended music for the user based on the set of music to be recommended comprises:
acquiring time information and position information of a user;
determining user behavior of the user based on the time information and the location information;
determining a recommended music category set for the user based on the user behavior and the correspondence between user behaviors and recommended music categories;
selecting the music to be recommended with the music category belonging to the recommended music category set from the music to be recommended set;
and taking the selected music to be recommended as the recommended music for the user.
3. The music recommendation method of claim 1, further comprising:
receiving a user operation for the recommended music;
re-determining recommended music for the user based on the user operation;
recommending the re-determined recommended music to the user.
4. The music recommendation method according to claim 3, wherein said re-determining the recommended music for the user based on the user operation comprises:
adding operation data corresponding to the user operation to the dynamic characteristic data to obtain new dynamic characteristic data;
based on the static characteristic data and the new dynamic characteristic data, performing prediction processing on the music to be recommended to the user through the music recommendation model to obtain a new music set to be recommended;
and re-determining the recommended music for the user based on the new music set to be recommended.
5. The music recommendation method according to claim 1, wherein before obtaining the static feature data and the dynamic feature data of the user, the method further comprises:
obtaining sample static characteristic data, sample dynamic characteristic data and a sample recommended music set;
performing feature extraction on the sample static feature data through a first feature extraction layer of the music recommendation model to obtain corresponding sample static features;
performing feature extraction on the sample dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding sample dynamic features;
performing feature fusion on the sample static features and the sample dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding sample fusion features;
through the output layer of the music recommendation model, based on the sample fusion features, performing prediction processing on music to be recommended to obtain a corresponding predicted recommended music set;
and updating model parameters of the music recommendation model based on an error between the predicted recommended music set and the sample recommended music set.
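A minimal training sketch for claim 5 follows, assuming the sample recommended music set is encoded as a multi-hot target over the candidate catalogue and that binary cross-entropy stands in for the unspecified error; the optimizer and batching are likewise assumptions.

```python
# Training sketch for claim 5: forward pass on sample data, error against the
# sample recommended music set, then a parameter update.
import torch
import torch.nn as nn

def train_step(model: nn.Module, optimizer: torch.optim.Optimizer,
               sample_static: torch.Tensor, sample_dynamic: torch.Tensor,
               sample_targets: torch.Tensor) -> float:
    # sample_targets: multi-hot vector, 1.0 where a track is in the sample recommended set.
    optimizer.zero_grad()
    predicted = model(sample_static, sample_dynamic)  # predicted recommendation scores in [0, 1]
    loss = nn.functional.binary_cross_entropy(predicted, sample_targets)
    loss.backward()   # error between predicted and sample recommended music sets
    optimizer.step()  # update the model parameters of the music recommendation model
    return loss.item()
```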
6. A music recommendation device, comprising:
an acquisition module, configured to obtain static feature data and dynamic feature data of a user;
a first feature extraction module, configured to perform feature extraction on the static feature data through a first feature extraction layer of a music recommendation model to obtain corresponding static features;
a second feature extraction module, configured to perform feature extraction on the dynamic feature data through a second feature extraction layer of the music recommendation model to obtain corresponding dynamic features;
a feature fusion module, configured to perform feature fusion on the static features and the dynamic features through a feature fusion layer of the music recommendation model to obtain corresponding fusion features;
a prediction module, configured to perform, through an output layer of the music recommendation model and based on the fusion features, prediction processing on music to be recommended for the user to obtain a corresponding music set to be recommended;
and a recommendation module, configured to determine recommended music for the user based on the music set to be recommended and recommend the recommended music to the user.
7. The music recommendation device of claim 6, wherein the recommendation module is further configured to:
acquiring time information and position information of the user;
determining a user behavior of the user based on the time information and the position information;
determining a recommended music category set for the user based on the user behavior and a correspondence between user behaviors and recommended music categories;
selecting, from the music set to be recommended, music to be recommended whose music category belongs to the recommended music category set;
and taking the selected music to be recommended as the recommended music for the user.
8. The music recommendation device of claim 6, wherein the recommendation module is further configured to:
receiving a user operation for the recommended music;
re-determining recommended music for the user based on the user operation;
recommending the re-determined recommended music to the user.
9. An electronic device, comprising:
a memory for storing executable instructions;
a processor for implementing the music recommendation method of any one of claims 1 to 5 when executing executable instructions stored in the memory.
10. A computer-readable storage medium storing executable instructions for implementing the music recommendation method of any one of claims 1 to 5 when executed by a processor.
CN202111672072.7A 2021-12-31 2021-12-31 Music recommendation method and device, electronic equipment and computer readable storage medium Pending CN114357236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111672072.7A CN114357236A (en) 2021-12-31 2021-12-31 Music recommendation method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114357236A (en) 2022-04-15

Family

ID=81104685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111672072.7A Pending CN114357236A (en) 2021-12-31 2021-12-31 Music recommendation method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114357236A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117033693A (en) * 2023-10-08 2023-11-10 联通沃音乐文化有限公司 Method and system for cloud processing in mixed mode
CN117033693B (en) * 2023-10-08 2024-03-08 联通沃音乐文化有限公司 Method and system for cloud processing in mixed mode

Similar Documents

Publication Publication Date Title
CN112203122B (en) Similar video processing method and device based on artificial intelligence and electronic equipment
US8909653B1 (en) Apparatus, systems and methods for interactive dissemination of knowledge
EP3493032A1 (en) Robot control method and companion robot
CN111274473B (en) Training method and device for recommendation model based on artificial intelligence and storage medium
CN112749262B (en) Question-answering processing method and device based on artificial intelligence, electronic equipment and storage medium
CN111818370B (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN112257661A (en) Identification method, device and equipment of vulgar image and computer readable storage medium
CN111046158B (en) Question-answer matching method, model training method, device, equipment and storage medium
CN112541120B (en) Recommendation comment generation method, device, equipment and medium
CN109299375A (en) Information personalized push method, device, electronic equipment and storage medium
CN113821654A (en) Multimedia data recommendation method and device, electronic equipment and storage medium
CN112040273A (en) Video synthesis method and device
CN114911915A (en) Knowledge graph-based question and answer searching method, system, equipment and medium
CN114357236A (en) Music recommendation method and device, electronic equipment and computer readable storage medium
CN111192170A (en) Topic pushing method, device, equipment and computer readable storage medium
CN110781377A (en) Article recommendation method and device
KR102119518B1 (en) Method and system for recommending product based style space created using artificial intelligence
CN116700839B (en) Task processing method, device, equipment, storage medium and program product
US20230351473A1 (en) Apparatus and method for providing user's interior style analysis model on basis of sns text
CN113704620A (en) User label updating method, device, equipment and medium based on artificial intelligence
CN116662527A (en) Method for generating learning resources and related products
CN115827978A (en) Information recommendation method, device, equipment and computer readable storage medium
CN111143693B (en) Training method and device for feature processing model based on artificial intelligence
CN111274480B (en) Feature combination method and device for content recommendation
CN111222011B (en) Video vector determining method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination