CN106202073A - Music recommendation method and system - Google Patents
Music recommendation method and system
- Publication number: CN106202073A
- Application number: CN201510213529.6A
- Authority
- CN
- China
- Prior art keywords
- emotion
- vector
- music
- user
- signature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses a music recommendation method and system. The method includes: analyzing the music in a music library to determine music emotion characteristics; identifying the emotional state of the user to determine user emotion characteristics; and matching the user emotion characteristics against the music emotion characteristics to generate a music recommendation list that matches the user's emotional state. The music recommendation method and system of the present invention analyze the user's emotion and, according to the user's emotional state and an emotion-demand matching method, recommend songs of the corresponding category, realizing accurate, automatic, and efficient music recommendation. The recommended content is more varied, no advance setup by the user is required, the degree of intelligence is higher, and the user experience is improved.
Description
Technical Field
The invention relates to the technical field of data mining, and in particular to a music recommendation method and system.
Background
With the continuous development and popularization of computer networks, people can conveniently and quickly acquire increasingly abundant music resources, so new technology is urgently needed to manage these resources and enable effective retrieval of and access to massive music collections. Traditional music retrieval is limited to reference information such as song titles, singer names, lyricists, and composers, which falls far short of retrieving and managing music by its content.
Currently, music recommendation on music portal sites, internet radio stations, music playing software, and the like mostly analyzes and classifies user preferences from playing history or interactive user input, then pushes songs of similar or consistent style and content from the song library; this requires accumulated playing records or active user participation. Musical emotion is essential information for characterizing a musical work, and accurately identifying the emotion a piece expresses helps people find and access suitable music more quickly. At present, however, music is not recommended to the user according to the user's emotional state, which leaves shortcomings in recommendation diversity and related aspects; in particular, when the recommended music differs greatly from the user's emotional state, the user's perception of the service suffers.
Disclosure of Invention
In view of the above, an object of the present invention is to provide a music recommendation method that can recommend music matching the emotional state of the user.
A music recommendation method, comprising: analyzing music in a music library to determine music emotion characteristics; identifying the emotional state of the user and determining the emotional characteristics of the user; and matching the emotional characteristics of the user with the music emotional characteristics to generate a music recommendation list matched with the emotional state of the user.
According to an embodiment of the present invention, further, the analyzing music in the music library and determining the emotional characteristics of the music include: extracting key words of waveforms and lyrics of music in a music library; acquiring a waveform emotion vector corresponding to the waveform in a waveform emotion dictionary; acquiring a lyric emotion vector corresponding to the lyric keyword in a lyric emotion dictionary; carrying out vector weighted superposition calculation on the waveform emotion vector and the lyric emotion vector to obtain a music emotion feature vector; and the waveform emotion vector and the lyric emotion vector are two-dimensional vectors.
According to an embodiment of the present invention, further, the identifying the emotional state of the user and determining the emotional characteristic of the user includes: acquiring an individual signature of a social application of a user through a third-party interface; extracting signature keywords and symbols from the personalized signature; acquiring a signature keyword emotion vector corresponding to the signature keyword in a signature keyword emotion dictionary; obtaining a corresponding symbol emotion vector of the symbol in a symbol emotion dictionary; and carrying out vector weighting superposition calculation on the signature keyword emotion vector and the symbol emotion vector to obtain a user emotion feature vector, wherein the signature keyword emotion vector and the symbol emotion vector are two-dimensional vectors.
According to an embodiment of the present invention, further, the identifying the emotional state of the user and determining the emotional characteristic of the user includes: calculating operation statistics of the user's operations on the touch screen, the operation statistics including: the force average value, the force change rate, the key interval time, the contact motion speed, and the key deletion frequency; acquiring the operation emotion vector corresponding to the operation statistics in an operation behavior emotion dictionary, wherein the operation emotion vector is a two-dimensional vector; and performing vector weighted superposition calculation on the operation emotion vector to obtain the user emotion feature vector.
According to an embodiment of the present invention, further, the calculating the operation statistics of the user operation on the touch screen, and acquiring the corresponding operation emotion vector of the operation statistics in the operation behavior emotion dictionary includes: detecting whether a user operates on the touch screen, if so, recording an operation time point, and recording operation signals of the user, including signal intensity, contact coordinates, a delete key and a return key position; and combining the operation signals into an operation flow according to continuity: setting a time threshold, and when the interval between two operation signals is smaller than the threshold, classifying the two operation signals into the same operation flow; and acquiring the corresponding emotion vector of the operation in each operation flow in the operation behavior emotion dictionary.
According to an embodiment of the present invention, further, the performing vector weighted superposition calculation on the operation emotion vector and acquiring the user emotion feature vector includes: performing vector weighted superposition calculation on the emotion vectors of the operation streams within a set time to obtain the user emotion feature vector.
According to an embodiment of the present invention, further, the matching the emotional features of the user with the emotional features of the music and generating a music recommendation list matching the emotional state of the user includes: and matching calculation is carried out on the user emotion characteristic vector and the music emotion characteristic vector based on preset compliance, force and correction degree, and music with high matching degree is selected to generate the music recommendation list.
According to an embodiment of the present invention, further, the matching calculation of the user emotion feature vector and the music emotion feature vector based on the preset compliance degree, force degree, and correction degree is specifically:

F = k_a·A + k_b·B + k_c·C + k_d·D;

where the user emotion feature vector is E(v_e, a_e), with v_e the pleasure degree and a_e the activity degree; the music emotion feature vector is M(v_m, a_m), with v_m the pleasure degree and a_m the activity degree; F is the matching degree; A is the compliance degree; B is the force degree; C and D are the correction degrees; and k_a, k_b, k_c, k_d are preset parameters; wherein

C = v_m − v_e; D = a_m − a_e.
Another object of the present invention is to provide a music recommendation system that can recommend music matching the emotional state of the user.
A music recommendation system comprising: the music emotion analysis unit is used for analyzing the music in the music database and determining music emotion characteristics; the user emotion analysis unit is used for identifying the emotion state of the user and determining the emotion characteristics of the user; and the music recommendation generating unit is used for matching the emotional characteristics of the user with the music emotional characteristics and generating a music recommendation list matched with the emotional state of the user.
According to an embodiment of the present invention, further, the music emotion analyzing unit includes: the waveform extraction module is used for extracting the waveform of the music in the music library; the lyric extraction module is used for extracting lyric keywords of music in a music library; the waveform emotion vector calculation module is used for acquiring a waveform emotion vector corresponding to the waveform in a waveform emotion dictionary; the lyric emotion vector calculation module is used for acquiring a lyric emotion vector corresponding to the lyric keyword in a lyric emotion dictionary; the music emotion vector calculation module is used for carrying out vector weighted superposition calculation on the waveform emotion vector and the lyric emotion vector to obtain a music emotion feature vector; and the waveform emotion vector and the lyric emotion vector are two-dimensional vectors.
According to an embodiment of the present invention, further, the user emotion analysis unit includes: a social application emotion subunit comprising: the personalized signature acquisition module, used for acquiring a personalized signature of the social application of the user; the vocabulary symbol extraction module, used for extracting signature keywords and symbols from the personalized signature; the vocabulary emotion calculation module, used for acquiring a signature keyword emotion vector corresponding to the signature keyword in a signature keyword emotion dictionary; the symbol emotion calculation module, used for acquiring a symbol emotion vector corresponding to the symbol in a symbol emotion dictionary; and the personalized signature emotion calculation module, used for performing vector weighted superposition calculation on the signature keyword emotion vector and the symbol emotion vector to obtain a user emotion feature vector, wherein the signature keyword emotion vector and the symbol emotion vector are two-dimensional vectors.
According to an embodiment of the present invention, further, the user emotion analysis unit includes: the user operation emotion subunit comprises: the operation statistic module is used for calculating an operation statistic value operated by a user on the touch screen; the operational statistics include: the force average value, the force change rate, the key interval time, the contact motion speed and the key deletion frequency; the operation emotion vector calculation module is used for acquiring an operation emotion vector corresponding to the operation statistic in the operation behavior emotion dictionary; wherein the operation emotion vector is a two-dimensional vector; and the user emotion vector calculation module is used for carrying out vector weighted superposition calculation on the operation emotion vector to obtain a user emotion feature vector.
According to an embodiment of the present invention, the operation statistic module is further configured to detect whether a user operates on the touch screen, and if so, record an operation time point and record an operation signal of the user, including signal strength, a contact coordinate, a delete key, and a return key position; and combining the operation signals into an operation flow according to continuity: setting a time threshold, and when the interval between two operation signals is smaller than the threshold, classifying the two operation signals into the same operation flow; the operation emotion vector calculation module is further configured to obtain an emotion vector corresponding to the operation in each operation stream in the operation behavior emotion dictionary.
According to an embodiment of the present invention, the user emotion vector calculation module is further configured to perform vector weighted superposition calculation on emotion vectors of an operation flow in a set time to obtain a user emotion feature vector.
According to an embodiment of the present invention, further, the music recommendation generating unit includes: the emotion vector matching module, used for performing matching calculation on the user emotion feature vector and the music emotion feature vector based on the preset compliance degree, force degree, and correction degree; and the music recommendation generation module, used for selecting music with a high matching degree to generate the music recommendation list.
According to an embodiment of the present invention, further, the matching calculation performed by the emotion vector matching module on the user emotion feature vector and the music emotion feature vector based on the preset compliance degree, force degree, and correction degree is specifically: F = k_a·A + k_b·B + k_c·C + k_d·D; where the user emotion feature vector is E(v_e, a_e), with v_e the pleasure degree and a_e the activity degree; the music emotion feature vector is M(v_m, a_m), with v_m the pleasure degree and a_m the activity degree; F is the matching degree; A is the compliance degree; B is the force degree; C and D are the correction degrees; and k_a, k_b, k_c, k_d are preset parameters; wherein

C = v_m − v_e; D = a_m − a_e.
According to the music recommendation method and system of the present invention, the emotional state of the user is analyzed and compared with the emotion vectors of the music in the music library, and music recommendation is completed according to the set emotion-demand matching rules.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow diagram of one embodiment of a music recommendation method in accordance with the present invention;
FIG. 2 is a flow diagram of another embodiment of a music recommendation method in accordance with the present invention;
FIG. 3 is a flow diagram of yet another embodiment of a music recommendation method in accordance with the present invention;
FIG. 4 is a model diagram of emotion vectors in an emotion dictionary.
FIG. 5 is a block diagram of an embodiment of a music recommendation system according to the present invention;
FIG. 6 is a block diagram of a music emotion analysis unit according to an embodiment of the music recommendation system of the present invention;
FIG. 7 is a block diagram of a user emotion analysis unit according to an embodiment of the music recommendation system of the present invention;
FIG. 8 is a block diagram of a social application sentiment subunit according to an embodiment of the music recommendation system of the present invention;
FIG. 9 is a block diagram of a user-operated emotion subunit according to an embodiment of the music recommendation system of the present invention;
FIG. 10 is a schematic block diagram of a music recommendation generating unit according to an embodiment of the music recommendation system of the present invention.
Detailed Description
The present invention now will be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments of the invention are shown. The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention. The technical solution of the present invention is described in various aspects below with reference to various figures and embodiments.
Fig. 1 is a flowchart of an embodiment of a music recommendation method according to the present invention, as shown in fig. 1:
Step 101, analyzing the music in a music library to determine music emotion characteristics.
Step 102, identifying the emotional state of the user and determining the user emotion characteristics.
Step 103, matching the user emotion characteristics with the music emotion characteristics to generate a music recommendation list matching the emotional state of the user.
According to the music recommendation method in the embodiment, the emotion state of the user is analyzed, the emotion state is compared with the emotion vector of the music in the music library, and the music recommendation is completed according to the set emotion requirement matching rule.
Analyzing music in a music library and determining music emotion characteristics comprise: and extracting the waveform and lyric keywords of the music in the music library. And acquiring a waveform emotion vector corresponding to the waveform in the waveform emotion dictionary. And acquiring a corresponding lyric emotion vector of the lyric keyword in a lyric emotion dictionary. And carrying out vector weighted superposition calculation on the waveform emotion vector and the lyric emotion vector to obtain a music emotion characteristic vector. The waveform emotion vector and the lyric emotion vector are both two-dimensional vectors.
The emotional state of the user can be analyzed by various methods, such as analyzing the personal signature in the social network site, the operation of the user on a touch screen, and the like. For example, through a third party interface, a personalized signature of a user's social application is obtained. And extracting signature keywords and symbols from the personalized signature.
And acquiring a signature keyword emotion vector corresponding to the signature keyword in a signature keyword emotion dictionary. And acquiring a corresponding symbol emotion vector of the symbol in the symbol emotion dictionary. And carrying out vector weighting superposition calculation on the signature keyword emotion vector and the symbol emotion vector to obtain a user emotion feature vector, wherein the signature keyword emotion vector and the symbol emotion vector are two-dimensional vectors.
The user's personalized-signature list is extracted from third-party applications, such as WeChat, Fetion, and QQ, through a data interface. Entry emotion vectors are compiled for the text words of personalized signatures, music waveforms, and emoticons; emotion-vector feedback is received, analyzed, and stored to personalize and optimize the dictionaries; and the signature keyword emotion dictionary, waveform emotion dictionary, and the like are maintained.
In one embodiment, the keywords and symbols of the signature are extracted, the waveform and lyric keywords of the music are extracted, emotion statistics are computed against the dictionaries to obtain the respective emotion vectors, and the characteristic emotion vectors of the signature and of the song are determined by superposing the entry emotion vectors. The compliance degree, force degree, and correction degree between the personalized signature's emotion vector and the music's emotion vector are calculated, weights are set, the matching degree between signature and music is obtained comprehensively, and the music with a high matching degree is extracted.
Fig. 2 is a flowchart of another embodiment of a music recommendation method according to the present invention, as shown in fig. 2:
Step 201, establishing a lyric emotion dictionary, a waveform emotion dictionary, a symbol emotion dictionary, and the like through experiments, and building two-dimensional emotion vectors for common lyric vocabulary, music waveforms, and the emoticons of personalized signatures.
The waveform and lyrics of the music in the music library are extracted. Waveform emotion vector calculation is performed on each piece with the waveform emotion dictionary, and keyword emotion vector calculation is performed on the lyrics with the lyric emotion dictionary. After summarizing and analysis, the emotion feature vector of the music is determined.
Step 202, when a user logs in a music playing application, obtaining social application personalized signature reading permission.
Step 203, obtaining the personalized signature of the social application through a third-party interface.
Step 204, extracting keywords and symbols from the signature, calculating the emotion vectors of the personalized signature with the signature keyword emotion dictionary and the symbol emotion dictionary, and determining the signature's emotion feature vector after summarizing and analysis.
Step 205, matching the emotion feature vector of the personalized signature with the emotion feature vectors of the music in the music library, performing emotion-demand matching analysis based on the compliance degree, force degree, and correction degree, selecting the Top-100 by matching degree, and generating a recommendation list.
Step 206, randomly selecting 20 songs from the list and passing them to the recommendation column.
According to the music recommendation method in the embodiment, the personalized signature and the music are matched with each other through the emotion classification of the music and the emotion classification of the personalized signature, so that more accurate music requirement guidance is provided for the user.
In one embodiment, operation statistics of the user's operations on the touch screen are calculated. The operation statistics include: the force average value, the force change rate, the key interval time, the contact motion speed, the key deletion frequency, and the like. The operation emotion vector corresponding to the operation statistics is acquired from the operation behavior emotion dictionary; the operation emotion vector is a two-dimensional vector, and vector weighted superposition calculation is performed on it to obtain the user emotion feature vector.
Operation behavior reflects the user's recent emotional condition, and analyzing behavior characteristics avoids the narrow applicability of speech-based mood recognition and the cost of image-based mood recognition. Matching operation behavior to mood is fast and efficient, reduces precision-marketing costs, and improves the accuracy of music delivery, advertisement conversion rates, service satisfaction, and the like.
By accessing the data of the mobile phone's input sensors, the force, frequency, and track of operations such as key presses, clicks, and swipes are recorded in real time, and characteristic-parameter matching and emotion classification are performed against the behavior emotion dictionary. The user's real-time mood is thus output and mood changes can be detected promptly, without additional costs such as interactive input or dedicated hardware sensing.
And accessing sensor data at the input end of the mobile phone to detect user operation. And recording the force change and the track coordinate change of the operation, and the positions of the deletion key and the return key. And performing emotional correspondence on the operation strength, strength change rate, frequency change rate, track, speed and the like, and establishing entries and corresponding emotional vectors for different behavior characteristics. And adjustable parameters are introduced to realize individuation.
And extracting operation records, and merging the records according to the continuity degree to form a plurality of operation flows. And calculating the characteristic parameters of each operation flow, matching the characteristic parameters with the entries of the emotion dictionary, and giving corresponding emotion vectors. Carrying out weighted superposition on emotion vectors on the operation flow in a certain time; and classifying and outputting the moods according to the superposed total emotional vectors.
For example, it is detected whether the user operates on the touch screen, and if so, the operating time point is recorded, and the operating signal of the user, including the signal intensity, the touch point coordinates, the delete key, the return key position, and the like, is recorded. And merging the operation signals into an operation stream according to continuity, setting a time threshold, and classifying the two operation signals into the same operation stream when the interval between the two operation signals is smaller than the threshold. And acquiring the corresponding emotion vector of the operation in each operation flow in the operation behavior emotion dictionary. And carrying out vector weighted superposition calculation on the emotion vectors of the operation flows in the set time to obtain the emotion feature vectors of the users.
Fig. 3 is a flowchart of another embodiment of a music recommendation method according to the present invention, as shown in fig. 3:
Step 301, the system is initialized, and the operation behavior emotion dictionary is updated with historical data.
The operation behavior emotion dictionary calibrates the emotion-vector values according to the characteristic parameters of the operations: for example, if the force is large, clicks are rapid, and the delete key is pressed frequently, the a_e dimension of the emotion vector takes a large value and the v_e dimension a small value. Personalized parameter adjustment can also be performed according to the user's operation averages and manual supervision feedback.
Step 302, detecting whether the user performs mobile phone operation, and recording an operation time point.
Step 303, accessing the sensor signals of the mobile phone input end, including the signal intensity, contact coordinates, and delete key and return key positions.
Step 304, merging the signal records into operation streams according to continuity: a time threshold is set, and when the interval between two operation signals is smaller than the threshold, they are merged into the same operation stream.
Step 305, analyzing the operation stream signals and calculating parameters such as the force average value, force change rate, key interval time, contact motion speed, and key frequency of the operations, and feeding the average values back to the operation behavior emotion dictionary for storage and optimization.
Step 306, matching the operation stream parameters against the operation behavior emotion dictionary and assigning the corresponding emotion vectors.
Step 307, setting a time length and performing superposition analysis on the emotion vectors of the operation streams within that duration to obtain the total emotion vector for the time period.
Step 308, determining the mood type from the total emotion vector and outputting a time-series mood result.
Step 309, feeding the accuracy of the result back to the operation behavior emotion dictionary to optimize it.
For example, the emotion dictionaries adopt Russell's V-A model or the Thayer model. Each emotion vector in a V-A-model emotion dictionary is a two-dimensional vector with two real-valued dimensions (v, a): pleasure and activity. As shown in fig. 4, the waveform emotion dictionary and the lyric emotion vectors are established according to the V-A model; each entry is assigned a two-dimensional emotion vector whose dimensions take real values, where the magnitude of a value reflects the emotion intensity and its sign reflects the emotion direction.
Waveform A and the lyric keyword "positive energy" of a piece of music in the music library are extracted; the waveform emotion vector (0.5, 0.2) corresponding to waveform A is obtained from the waveform emotion dictionary, and the lyric emotion vector (0.3, 0.1) corresponding to "positive energy" from the lyric emotion dictionary. Vector weighted superposition of the waveform emotion vector (0.5, 0.2) and the lyric emotion vector (0.3, 0.1), with both weights 0.5 for example, yields the music emotion feature vector (0.4, 0.15). The other emotion feature vectors are calculated in the same way, so no further examples are given.
In one embodiment, the user downloads and installs a client APP on the terminal, which asks for permission to acquire the terminal's input sensor signals. The background detects and records the time, force, and track of the sensor signals, and divides the signals into different operation streams according to continuity. The force average value, force change rate, key interval time, contact motion speed, key deletion frequency, and the like of the user's operation behavior are calculated. The real-time characteristic parameters are calibrated against the historical operation averages and matched to the emotion dictionary to determine the emotion vectors. The emotion vectors of the operation streams within 15 minutes are superposed, and the user's mood state is determined and recorded.
for example, a user downloads and installs a music APP at a terminal, and inquires about the permission of acquiring a sensor signal input by the terminal. And (3) detecting and recording the time, the force and the track of the sensor signal by the background, and dividing the signal into different operation flows according to the continuity. And calculating the force average value, the force change rate, the key interval time, the contact motion speed, the key deletion frequency and the like of the user operation behavior. And calibrating the real-time characteristic parameters according to the operation history average data, matching with an emotion dictionary, determining emotion vectors, performing emotion vector superposition on the operation flow within 30 minutes, and determining and recording the mood state of the user. When the user opens the music APP, music recommendation is carried out according to the mood of the user, and recommendation of advertisements, videos and the like can also be carried out.
The recommendation rules may be defined as: if the user's mood is in the happy interval, recommend music with a strong sense of rhythm; when the user's mood is in the anxiety interval, recommend calm and relaxing music; and in the tired mood interval, push lyrical music. The accuracy of mood-based pushing is judged from the type of music the user listens to and the switching frequency, the result is fed back to the music APP, and the APP performs parameter optimization according to the feedback.
In one embodiment, emotion analysis and classification are performed on the music pieces of a music library. For pure music, the emotion is determined by waveform analysis; for songs, by combining waveform analysis with lyric emotion analysis. A characteristic emotion vector is obtained for each piece, completing the emotion classification of the library. When the user logs in to the playing interface, the system requests permission to read the personalized signature.
The user's signature history is acquired, and the text statuses within a certain period are selected. The keywords and emoticons of the text statuses are matched and counted, and the user is assigned a current characteristic emotion vector. The matching degree between the user's emotion vector and the emotion vectors of the music in the library is calculated to obtain a music matching list, and the songs with the top 10 matching degrees are selected for recommendation.
The user emotion feature vector can be obtained from user operations, analysis of the personalized signature, and the like, and the results can be combined by vector weighted superposition. For example, vector weighted superposition of the signature keyword emotion vector and the symbol emotion vector gives a user emotion feature vector of (0.6, 0.2), and the operation emotion vector obtained from the operation behavior emotion dictionary for the operation statistics is (0.7, 0.3); if both weights are set to 0.5, the superposed user emotion feature vector is (0.65, 0.25).
Matching calculation of the user emotion feature vector and the music emotion feature vector based on the preset compliance degree, force degree, and correction degree is specifically:

F = k_a·A + k_b·B + k_c·C + k_d·D;

where the user emotion feature vector is E(v_e, a_e), with v_e the pleasure degree and a_e the activity degree; the music emotion feature vector is M(v_m, a_m), with v_m the pleasure degree and a_m the activity degree; and F is the matching degree. A is the compliance degree, B is the force degree, C and D are the correction degrees, and k_a, k_b, k_c, k_d are preset parameters; each term is illustrated below.
in one embodiment, the plurality of emotion dictionaries can be based on a V-A emotion space model of Russell, emotion vectors determined in the plurality of emotion dictionaries contain two real number dimensions (V, a) of joy and liveness, and the greater the positive value, the more joyful and comfortable the mood is. The recommendation system aims to adjust the mood of the user to a forward space and reduce the inverse and over-excitation of the user.
For example, an emotion dictionary is established, and emotion vectors are compiled for words, waveforms, and symbols. The words, symbols, and waveforms in the signature and the music are extracted, and emotion-vector superposition yields the characteristic emotion vectors. E(v_e, a_e) is the user's characteristic emotion vector, with v_e the pleasure degree and a_e the activity degree.
With normalized values, if the user is in a relaxed weekend state, the mood is pleasant but not very excited, and a possible value of E is (0.8, 0.2); under work pressure, with low pleasure and low activity, a possible value of E is (−0.9, −0.5). M(v_m, a_m) is the music's characteristic emotion vector, with v_m the pleasure degree and a_m the activity degree: for the song "Love", a possible value of M is (0.9, 0.7); for the song "Listen to the Sea", M is (−0.5, −0.6).
Similar vector directions reduce mood inversion; the compliance degree A is calculated from the directions of E and M. For example, with user E = (0.8, 0.2): for the song "Love", M = (0.9, 0.7) and A = 0.438; for the song "Listen to the Sea", M = (−0.5, −0.6) and A = −0.312. "Love" is more consistent with the current mood.
Similar vector magnitudes reduce emotional over-stimulation; the force degree B is obtained through vector calculation from the vector magnitudes. For example, with user E = (0.8, 0.2): for the song "Love", B = 0.315; for the song "Listen to the Sea", B = 0.044. "Listen to the Sea" is the better match in force.
The vector difference should point toward the positive space; the correction degrees C and D are calculated as follows:

C = v_m − v_e; D = a_m − a_e;

for the song "Love", C = 0.1 and D = 0.5; for the song "Listen to the Sea", C = −1.3 and D = −0.8. "Love" provides forward correction, while "Listen to the Sea" provides reverse correction.
The matching degree is then calculated comprehensively, with the k coefficients serving as user personalization coefficients:

F = k_a·A + k_b·B + k_c·C + k_d·D.

The k coefficients are adjusted according to experimental results. For example, with k_a = 0.7, k_b = −0.8, k_c = 0.9, and k_d = 0.9, the song "Love" gives F = 0.595 and the song "Listen to the Sea" gives F = −2.144, so "Love" matches the user's emotional state to a higher degree.
According to the music recommendation method provided by the embodiment, the emotion analysis of the user is performed, the songs of corresponding categories are recommended according to the emotion state of the user and the emotion requirement matching method, and accurate, automatic and efficient music recommendation is realized.
As shown in fig. 5, the present invention provides a music recommendation system 4. The music emotion analysis unit 41 analyzes the music in the music library and determines the music emotion characteristics. The user emotion analysis unit 42 identifies the emotional state of the user and determines the user emotion characteristics. The music recommendation generation unit 43 matches the user emotion characteristics with the music emotion characteristics and generates a music recommendation list matching the emotional state of the user.
As shown in fig. 6, the waveform extraction module 411 extracts a waveform of music in a music library. The lyric extraction module 412 extracts lyric keywords of music in the song library. The waveform emotion vector calculation module 413 acquires a waveform emotion vector corresponding to the waveform in the waveform emotion dictionary. The lyric emotion vector calculation module 414 obtains a lyric emotion vector corresponding to the lyric keyword in the lyric emotion dictionary. The music emotion vector calculation module 415 performs vector weighted superposition calculation on the waveform emotion vector and the lyric emotion vector to obtain a music emotion feature vector. The waveform emotion vector and the lyric emotion vector are both two-dimensional vectors.
As shown in fig. 7, the user emotion analysis unit 42 includes: a social application emotion subunit 51 and a user operation emotion subunit 52. As shown in fig. 8, the personalized signature acquisition module 511 acquires the personalized signature of the user's social application. The vocabulary symbol extraction module 512 extracts signature keywords and symbols from the personalized signature. The vocabulary emotion calculation module 513 obtains the signature keyword emotion vector corresponding to the signature keyword in the signature keyword emotion dictionary. The symbol emotion calculation module 514 obtains the symbol emotion vector corresponding to the symbol in the symbol emotion dictionary. The personalized signature emotion calculation module 515 performs vector weighted superposition calculation on the signature keyword emotion vector and the symbol emotion vector to obtain the user emotion feature vector, where the signature keyword emotion vector and the symbol emotion vector are both two-dimensional vectors.
As shown in fig. 9, the operation statistic module 521 calculates operation statistics of the user's operation on the touch screen, where the operation statistics include: the average value of the force, the change rate of the force, the interval time of the keys, the movement speed of the contact, the frequency of deleting the keys and the like. The operation emotion vector calculation module 522 obtains an operation emotion vector corresponding to the operation statistic in the operation behavior emotion dictionary, and the operation emotion vector is a two-dimensional vector. The user emotion vector calculation module 523 performs vector weighted superposition calculation on the operation emotion vectors to obtain user emotion feature vectors.
The operation counting module 521 detects whether the user operates on the touch screen, and if so, records an operation time point and records operation signals of the user, including signal intensity, contact coordinates, a delete key and a return key position. And combining the operation signals into an operation flow according to the continuity: and setting a time threshold, and classifying the two operation signals into the same operation flow when the interval between the two operation signals is smaller than the threshold. The operation emotion vector calculation module 522 obtains an emotion vector corresponding to the operation in each operation flow in the operation behavior emotion dictionary. The user emotion vector calculation module 523 performs vector weighted superposition calculation on the emotion vectors of the operation flows in the set time to obtain the user emotion feature vector.
As shown in fig. 10, the emotion vector matching module 431 performs matching calculation on the user emotion feature vector and the music emotion feature vector based on the preset compliance degree, force degree, and correction degree, and the music recommendation generation module 432 selects the music with a high matching degree to generate the music recommendation list. The matching calculation performed by the emotion vector matching module 431 is specifically: F = k_a·A + k_b·B + k_c·C + k_d·D; where the user emotion feature vector is E(v_e, a_e), with v_e the pleasure degree and a_e the activity degree; the music emotion feature vector is M(v_m, a_m), with v_m the pleasure degree and a_m the activity degree; F is the matching degree; A is the compliance degree; B is the force degree; C and D are the correction degrees; and k_a, k_b, k_c, k_d are preset parameters; wherein

C = v_m − v_e; D = a_m − a_e.
According to the music recommendation method and system provided by the embodiments, user emotion analysis is performed, and songs of the corresponding category are recommended according to the user's emotional state and the emotion-demand matching method, realizing accurate, automatic, and efficient music recommendation; the recommended content is more varied, no advance setup by the user is needed, and the system's degree of intelligence is higher.
The method and system of the present invention may be implemented in a number of ways. For example, the methods and systems of the present invention may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustrative purposes only, and the steps of the method of the present invention are not limited to the order specifically described above unless specifically indicated otherwise. Furthermore, in some embodiments, the present invention may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention.
The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
Claims (16)
1. A music recommendation method, comprising:
analyzing music in a music library to determine music emotion characteristics;
identifying the emotional state of the user and determining the emotional characteristics of the user;
and matching the emotional characteristics of the user with the music emotional characteristics to generate a music recommendation list matched with the emotional state of the user.
2. The method of claim 1, wherein the analyzing music in the music library to determine music emotion characteristics comprises:
extracting key words of waveforms and lyrics of music in a music library;
acquiring a waveform emotion vector corresponding to the waveform in a waveform emotion dictionary;
acquiring a lyric emotion vector corresponding to the lyric keyword in a lyric emotion dictionary;
carrying out vector weighted superposition calculation on the waveform emotion vector and the lyric emotion vector to obtain a music emotion feature vector; and the waveform emotion vector and the lyric emotion vector are two-dimensional vectors.
3. The method of claim 2, wherein identifying the emotional state of the user and determining the emotional characteristic of the user comprises:
acquiring an individual signature of a social application of a user through a third-party interface;
extracting signature keywords and symbols from the personalized signature;
acquiring a signature keyword emotion vector corresponding to the signature keyword in a signature keyword emotion dictionary;
obtaining a corresponding symbol emotion vector of the symbol in a symbol emotion dictionary;
and carrying out vector weighting superposition calculation on the signature keyword emotion vector and the symbol emotion vector to obtain a user emotion feature vector, wherein the signature keyword emotion vector and the symbol emotion vector are two-dimensional vectors.
4. The method of claim 2 or 3, wherein the identifying the emotional state of the user and determining the emotional characteristic of the user comprises:
calculating an operation statistic value operated by a user on the touch screen; the operational statistics include: the force average value, the force change rate, the key interval time, the contact motion speed and the key deletion frequency;
acquiring an operation emotion vector corresponding to the operation statistic in an operation behavior emotion dictionary; wherein the operation emotion vector is a two-dimensional vector;
and performing vector weighted superposition calculation on the operation emotion vector to obtain the user emotion feature vector.
5. The method of claim 4, wherein the calculating the operation statistics of the user's operation on the touch screen, and the obtaining the corresponding operation emotion vectors of the operation statistics in the operation behavior emotion dictionary comprises:
detecting whether a user operates on the touch screen, if so, recording an operation time point, and recording operation signals of the user, including signal intensity, contact coordinates, a delete key and a return key position;
and combining the operation signals into an operation flow according to continuity: setting a time threshold, and when the interval between two operation signals is smaller than the threshold, classifying the two operation signals into the same operation flow;
and acquiring the corresponding emotion vector of the operation in each operation flow in the operation behavior emotion dictionary.
6. The method of claim 5, wherein the performing vector weighted superposition calculation on the operation emotion vector and obtaining the user emotion feature vector comprises:
and carrying out vector weighted superposition calculation on the emotion vectors of the operation flows in the set time to obtain the emotion feature vectors of the users.
7. The method of claim 4, wherein matching the user emotional characteristics to the music emotional characteristics, generating a music recommendation list that matches the emotional state of the user comprises:
and matching calculation is carried out on the user emotion characteristic vector and the music emotion characteristic vector based on preset compliance, force and correction degree, and music with high matching degree is selected to generate the music recommendation list.
8. The method of claim 7, wherein the matching calculation of the user emotion feature vector and the music emotion feature vector based on the preset compliance degree, force degree, and correction degree is specifically:

F = k_a·A + k_b·B + k_c·C + k_d·D;

where the user emotion feature vector is E(v_e, a_e), with v_e the pleasure degree and a_e the activity degree; the music emotion feature vector is M(v_m, a_m), with v_m the pleasure degree and a_m the activity degree; F is the matching degree; A is the compliance degree; B is the force degree; C and D are the correction degrees; and k_a, k_b, k_c, k_d are preset parameters;

wherein,

C = v_m − v_e; D = a_m − a_e.
9. a music recommendation system, comprising:
the music emotion analysis unit is used for analyzing the music in the music database and determining music emotion characteristics;
the user emotion analysis unit is used for identifying the emotion state of the user and determining the emotion characteristics of the user;
and the music recommendation generating unit is used for matching the emotional characteristics of the user with the music emotional characteristics and generating a music recommendation list matched with the emotional state of the user.
10. The system of claim 9, wherein the music emotion analysis unit includes:
the waveform extraction module is used for extracting the waveform of the music in the music library;
the lyric extraction module is used for extracting lyric keywords of music in a music library;
the waveform emotion vector calculation module is used for acquiring a waveform emotion vector corresponding to the waveform in a waveform emotion dictionary;
the lyric emotion vector calculation module is used for acquiring a lyric emotion vector corresponding to the lyric keyword in a lyric emotion dictionary;
the music emotion vector calculation module is used for carrying out vector weighted superposition calculation on the waveform emotion vector and the lyric emotion vector to obtain a music emotion feature vector; and the waveform emotion vector and the lyric emotion vector are two-dimensional vectors.
11. The system of claim 10, wherein the user emotion analysis unit includes:
a social application sentiment subunit comprising:
the personalized signature acquisition module is used for acquiring a personalized signature of the social application of the user;
the vocabulary symbol extraction module is used for extracting signature keywords and symbols from the personalized signature;
the vocabulary emotion calculation module is used for acquiring a signature keyword emotion vector corresponding to the signature keyword in a signature keyword emotion dictionary;
the symbol emotion calculation module is used for acquiring a symbol emotion vector corresponding to the symbol in a symbol emotion dictionary;
and the personalized signature emotion calculation module is used for performing vector weighted superposition calculation on the signature keyword emotion vector and the symbol emotion vector to obtain a user emotion feature vector, wherein the signature keyword emotion vector and the symbol emotion vector are two-dimensional vectors.
12. The system of claim 10 or 11, wherein the user emotion analyzing unit includes:
the user operation emotion subunit comprises:
the operation statistic module is used for calculating an operation statistic value operated by a user on the touch screen; the operational statistics include: the force average value, the force change rate, the key interval time, the contact motion speed and the key deletion frequency;
the operation emotion vector calculation module is used for acquiring an operation emotion vector corresponding to the operation statistic in the operation behavior emotion dictionary; wherein the operation emotion vector is a two-dimensional vector;
and the user emotion vector calculation module is used for carrying out vector weighted superposition calculation on the operation emotion vector to obtain a user emotion feature vector.
13. The system of claim 12, wherein:
the operation counting module is also used for detecting whether a user operates on the touch screen, if so, recording an operation time point, and recording operation signals of the user, including signal intensity, contact coordinates, a delete key and a return key position; and combining the operation signals into an operation flow according to continuity: setting a time threshold, and when the interval between two operation signals is smaller than the threshold, classifying the two operation signals into the same operation flow;
the operation emotion vector calculation module is further configured to obtain an emotion vector corresponding to the operation in each operation stream in the operation behavior emotion dictionary.
14. The system of claim 13, wherein:
the user emotion vector calculation module is further used for carrying out vector weighted superposition calculation on the emotion vectors of the operation streams within the set time to obtain the user emotion feature vectors.
15. The system of claim 12, wherein:
the music recommendation generation unit includes:
the emotion vector matching module is used for performing a matching calculation between the user emotion feature vector and the music emotion feature vector based on preset compliance, force, and correction degrees;
and the recommendation list generation module is used for selecting music with a high matching degree to generate the music recommendation list.
16. The system of claim 15, wherein:
the emotion vector matching module is used for performing the matching calculation between the user emotion feature vector and the music emotion feature vector based on the preset compliance, force, and correction degrees, specifically as follows:
F = k_a·A + k_b·B + k_c·C + k_d·D
wherein the user emotion feature vector is E(v_e, a_e), with v_e denoting pleasure and a_e denoting activity; the music emotion feature vector is M(v_m, a_m), with v_m denoting pleasure and a_m denoting activity; F is the matching degree; A is the compliance; B is the force; C and D are the correction degrees; and k_a, k_b, k_c, k_d are preset parameters;
wherein C = v_m - v_e and D = a_m - a_e.
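A minimal Python sketch of claims 15 and 16 together: compute F as defined above and rank candidate music by it. The compliance A and force B are defined in earlier claims and are simply taken as inputs here; the k weights (negative on the correction terms, so that larger deviations from the user's vector lower the score) are assumptions, not values from the patent.

```python
# Hedged sketch of the claim 16 formula F = k_a*A + k_b*B + k_c*C + k_d*D,
# with C = v_m - v_e and D = a_m - a_e as stated in the claim.
from typing import List, Tuple

Vec2 = Tuple[float, float]  # (pleasure v, activity a)

K = (1.0, 1.0, -0.5, -0.5)  # hypothetical k_a, k_b, k_c, k_d

def matching_degree(user: Vec2, music: Vec2, a: float, b: float) -> float:
    c = music[0] - user[0]  # C = v_m - v_e
    d = music[1] - user[1]  # D = a_m - a_e
    return K[0] * a + K[1] * b + K[2] * c + K[3] * d

def recommend(user: Vec2, catalog: List[Tuple[str, Vec2, float, float]],
              top_n: int = 2) -> List[str]:
    """Sort the catalog by matching degree, highest first (claim 15)."""
    ranked = sorted(catalog,
                    key=lambda m: matching_degree(user, m[1], m[2], m[3]),
                    reverse=True)
    return [title for title, _, _, _ in ranked[:top_n]]

# Each entry: (title, music emotion vector M, compliance A, force B).
catalog = [("calm song", (0.1, -0.4), 0.9, 0.5),
           ("party song", (0.8, 0.9), 0.4, 0.7)]
print(recommend((0.2, -0.2), catalog))  # ['calm song', 'party song']
```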
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510213529.6A CN106202073B (en) | 2015-04-30 | 2015-04-30 | Music recommendation method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106202073A true CN106202073A (en) | 2016-12-07 |
CN106202073B CN106202073B (en) | 2020-02-14 |
Family
ID=57458242
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510213529.6A Active CN106202073B (en) | 2015-04-30 | 2015-04-30 | Music recommendation method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106202073B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090281906A1 (en) * | 2008-05-07 | 2009-11-12 | Microsoft Corporation | Music Recommendation using Emotional Allocation Modeling |
CN103559233A (en) * | 2012-10-29 | 2014-02-05 | 中国人民解放军国防科学技术大学 | Extraction method for network new words in microblogs and microblog emotion analysis method and system |
CN103970806A (en) * | 2013-02-05 | 2014-08-06 | 百度在线网络技术(北京)有限公司 | Method and device for establishing lyric-feelings classification models |
CN103412646A (en) * | 2013-08-07 | 2013-11-27 | 南京师范大学 | Emotional music recommendation method based on brain-computer interaction |
CN103970873A (en) * | 2014-05-14 | 2014-08-06 | 中国联合网络通信集团有限公司 | Music recommending method and system |
CN104123355A (en) * | 2014-07-17 | 2014-10-29 | 深圳市明康迈软件有限公司 | Music recommendation method and system |
Non-Patent Citations (1)
Title |
---|
JEREMY N. BAILENSON et al.: "Virtual interpersonal touch: expressing and recognizing emotions through haptic devices", Human-Computer Interaction * |
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106802943A (en) * | 2017-01-03 | 2017-06-06 | 海信集团有限公司 | Music recommendation method and device based on movie and television information |
CN106802943B (en) * | 2017-01-03 | 2020-06-09 | 海信集团有限公司 | Music recommendation method and device based on movie and television information |
CN108228831B (en) * | 2018-01-03 | 2020-01-07 | 江苏易邦信息技术有限公司 | Intelligent music recommendation system |
CN108228831A (en) * | 2018-01-03 | 2018-06-29 | 韦德永 | Intelligent music recommendation system |
WO2019218462A1 (en) * | 2018-05-14 | 2019-11-21 | 平安科技(深圳)有限公司 | Song list generation method and apparatus, and terminal device and medium |
CN108845744A (en) * | 2018-06-29 | 2018-11-20 | 河南工业大学 | Method and system for judging a user's emotional state from the behavior of operating an electronic device |
CN108958605A (en) * | 2018-06-29 | 2018-12-07 | 河南工业大学 | Method and system for adaptively judging the use state of an intelligent mobile terminal |
CN109241312A (en) * | 2018-08-09 | 2019-01-18 | 广东数相智能科技有限公司 | Melody word filling method, apparatus and terminal device |
CN109241312B (en) * | 2018-08-09 | 2021-08-31 | 广东数相智能科技有限公司 | Melody word filling method and device and terminal equipment |
CN109273025A (en) * | 2018-11-02 | 2019-01-25 | 中国地质大学(武汉) | Chinese pentatonic scale emotion recognition method and system |
CN110807681A (en) * | 2019-09-10 | 2020-02-18 | 咪咕文化科技有限公司 | Product customization method, electronic device and storage medium |
CN110597960A (en) * | 2019-09-17 | 2019-12-20 | 香港教育大学 | Personalized online course and occupation bidirectional recommendation method and system |
CN110597960B (en) * | 2019-09-17 | 2022-11-15 | 香港教育大学 | Personalized online course and occupation bidirectional recommendation method and system |
JP7417889B2 (en) | 2019-10-21 | 2024-01-19 | パナソニックIpマネジメント株式会社 | Content recommendation system |
CN111428487A (en) * | 2020-02-27 | 2020-07-17 | 支付宝(杭州)信息技术有限公司 | Model training method, lyric generation method, device, electronic equipment and medium |
CN111428487B (en) * | 2020-02-27 | 2023-04-07 | 支付宝(杭州)信息技术有限公司 | Model training method, lyric generation method, device, electronic equipment and medium |
WO2021216016A1 (en) * | 2020-11-25 | 2021-10-28 | Ayna Pinar | System and method that enables music making from content with emotional analysis |
Also Published As
Publication number | Publication date |
---|---|
CN106202073B (en) | 2020-02-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106202073B (en) | Music recommendation method and system | |
CN107818781B (en) | Intelligent interaction method, equipment and storage medium | |
CN107609101B (en) | Intelligent interaction method, equipment and storage medium | |
US20200401612A1 (en) | Computer speech recognition and semantic understanding from activity patterns | |
CN107832286B (en) | Intelligent interaction method, equipment and storage medium | |
Schedl et al. | Music recommender systems | |
CN110888990B (en) | Text recommendation method, device, equipment and medium | |
CN104836720B (en) | Method and device for information recommendation in interactive communication | |
Cebrián et al. | Music recommendations with temporal context awareness | |
KR20130055748A (en) | System and method for recommending of contents | |
CN106919575A (en) | application program searching method and device | |
CN109275047A (en) | Video information processing method and device, electronic equipment, storage medium | |
CN109582869A (en) | A kind of data processing method, device and the device for data processing | |
CN107564526A (en) | Processing method, device and machine readable media | |
CN109101505A (en) | A kind of recommended method, recommendation apparatus and the device for recommendation | |
US11200264B2 (en) | Systems and methods for identifying dynamic types in voice queries | |
CN114817582A (en) | Resource information pushing method and electronic device | |
Dubey et al. | Digital Content Recommendation System through Facial Emotion Recognition | |
CN111460215B (en) | Audio data processing method and device, computer equipment and storage medium | |
Walha et al. | A Lexicon approach to multidimensional analysis of tweets opinion | |
KR102642358B1 (en) | Apparatus and method for recommending music based on text sentiment analysis | |
CN107807949A (en) | Intelligent interactive method, equipment and storage medium | |
KR101525400B1 (en) | Computer-executable sensibility keyword classification method and computer-executable device performing the same | |
CN114756646A (en) | Conversation method, conversation device and intelligent equipment | |
CN105515938B (en) | Method and device for generating communication information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||