CN112487236A - Method, device, equipment and storage medium for determining associated song list - Google Patents

Method, device, equipment and storage medium for determining associated song list

Info

Publication number
CN112487236A
CN112487236A (application number CN202011388415.2A)
Authority
CN
China
Prior art keywords
song
song list
singing
target
vector
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011388415.2A
Other languages
Chinese (zh)
Inventor
萧永乐
顾旻玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority to CN202011388415.2A
Publication of CN112487236A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/63Querying
    • G06F16/638Presentation of query results
    • G06F16/639Presentation of query results using playlists
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/355Class or cluster creation or modification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a method, a device, equipment and a storage medium for determining an associated song list, and belongs to the technical field of the internet. The method comprises the following steps: determining a description information feature vector corresponding to the song list description information of a target song list, and determining a song feature vector corresponding to each song included in the target song list, wherein the song feature vector corresponding to a song is used for representing one or more song lists to which the song belongs in a song list library; determining a song list feature vector of the target song list based on the song feature vectors and the description information feature vector; determining the similarity between the song list feature vector of the target song list and the song list feature vectors of the other song lists in the song list library except the target song list, and determining the other song lists whose similarity meets a preset condition as associated song lists of the target song list. By adopting the method and the device, the accuracy of determining associated song lists can be improved.

Description

Method, device, equipment and storage medium for determining associated song list
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method, an apparatus, a device, and a storage medium for determining an associated song list.
Background
With the development of internet technology, it is common for users to play music through music applications in mobile terminals.
A music application displays song lists whose songs share a common characteristic, for example song lists in which all songs are from the 1990s or all songs are in English. These song lists can be created by users: a user can put favorite songs or songs with a common characteristic into the same song list, set the name of the song list, add classification labels describing the characteristics of its songs, and then share the song list in the music application. A user can also select different song lists to listen to in the music application according to his or her musical preferences.
In order to push song lists that match a user's musical preferences, some music applications push an associated song list after the user selects a song list to listen to, where the songs in the associated song list may share characteristics with the songs in the selected song list, for example both being popular Chinese songs or songs from the 1990s.
In the course of implementing the present application, the inventors found that the related art has at least the following problems:
in the related art, whether two song lists are associated is generally determined according to whether their names and classification labels are related. However, because the names and classification labels of most song lists are set by users, they may describe the characteristics of the songs in the song list inaccurately, so determining whether two song lists are associated only from their names or classification labels has low accuracy.
Disclosure of Invention
The embodiment of the application provides a method, a device, equipment and a storage medium for determining an associated song list, which can improve the accuracy of determining the associated song list. The technical scheme is as follows:
in a first aspect, a method for determining an associated song list is provided, the method comprising:
determining a description information characteristic vector corresponding to the song list description information of a target song list, and determining a song characteristic vector corresponding to each song included in the target song list, wherein the song characteristic vector corresponding to the song is used for representing one or more song lists to which the song belongs in a song list library;
determining a song list feature vector of the target song list based on the song feature vectors and the description information feature vector;
determining the similarity between the song list feature vector of the target song list and the song list feature vectors of the other song lists in the song list library except the target song list, and determining the other song lists whose similarity meets a preset condition as associated song lists of the target song list.
Optionally, the song list description information includes a song list name and a classification label of the song list;
the determining of the description information feature vector corresponding to the song list description information of the target song list comprises the following steps:
determining phrase vectors corresponding to phrases in the song list name of the target song list based on the corresponding relation between the preset phrases and the phrase vectors;
determining a first position corresponding to at least one classification label of the target song list based on a corresponding relation between a preset classification label and each position in a preset label vector, setting a numerical value of the first position in the preset label vector as a first numerical value, and setting numerical values of other positions except the first position in the preset label vector as second numerical values to obtain a label vector corresponding to at least one label of the target song list;
the determining of the song feature vector corresponding to each song included in the target song list includes:
and determining the song characteristic vectors corresponding to the songs in the target song list based on the preset corresponding relation between the songs and the song characteristic vectors.
Optionally, the determining the song list feature vector of the target song list based on the song feature vectors and the description information feature vector includes:
composing a synthetic vector from the song feature vectors and the description information feature vector, and inputting the synthetic vector into a trained song list feature extraction model to obtain the song list feature vector of the target song list.
Optionally, before the inputting the synthetic vector into the trained song list feature extraction model to obtain the song list feature vector of the target song list, the method further includes:
acquiring a first synthetic vector of a target sample song list, a second synthetic vector of a positive sample song list corresponding to the target sample song list and a third synthetic vector of a negative sample song list corresponding to the target sample song list, wherein the positive sample song list is an associated song list of the target sample song list, and the negative sample song list is a non-associated song list of the target sample song list;
inputting the first synthetic vector, the second synthetic vector and the third synthetic vector into the song list feature extraction model to be trained respectively, to obtain a first song list feature vector corresponding to the target sample song list, a second song list feature vector corresponding to the positive sample song list and a third song list feature vector corresponding to the negative sample song list;
determining residual values corresponding to the first song list feature vector, the second song list feature vector and the third song list feature vector based on a preset loss function, and training the song list feature extraction model to be trained based on the residual values to obtain the trained song list feature extraction model.
Optionally, before obtaining the first synthetic vector of the target sample song list, the second synthetic vector of the positive sample song list corresponding to the target sample song list, and the third synthetic vector of the negative sample song list corresponding to the target sample song list, the method further includes:
acquiring a first song list set listened to by each user in a preset time period;
for the first song list set listened to by each user, deleting from the first song list set the first song lists whose listening duration is not within a preset listening duration range, to obtain a second song list set;
for each second song list set, determining the second song lists whose listening count exceeds a first preset count, determining the deletion probability corresponding to each such second song list based on its listening count and a preset correspondence between listening counts and deletion probabilities, and performing song list deletion processing on the second song list set based on the deletion probability corresponding to each second song list, to obtain each third song list set;
determining any song list as a target sample song list, determining a song list that appears in the same third song list set as the target sample song list more than a second preset count of times as a positive sample song list corresponding to the target sample song list, and determining the other song lists in the third song list sets, except the positive sample song lists, as negative sample song lists corresponding to the target sample song list.
In a second aspect, there is provided an apparatus for determining an associated song list, the apparatus comprising:
the system comprises a first determination module, a second determination module and a third determination module, wherein the first determination module is used for determining a description information characteristic vector corresponding to the song list description information of a target song list and determining a song characteristic vector corresponding to each song included in the target song list, and the song characteristic vector corresponding to the song is used for representing one or more song lists to which the song belongs in a song list library;
a second determining module, configured to determine a menu feature vector of the target menu based on the song feature vector and the description information feature vector;
and the third determining module is used for determining the similarity between the singing sheet feature vector of the target singing sheet and the singing sheet feature vectors of other singing sheets except the target singing sheet in the singing sheet library, and determining other singing sheets corresponding to the target similarity meeting preset conditions in the similarity as the associated singing sheets of the target singing sheet.
Optionally, the song list description information includes a song list name and a classification label of the song list;
the first determining module is configured to:
determining phrase vectors corresponding to phrases in the song list name of the target song list based on the corresponding relation between the preset phrases and the phrase vectors; determining a first position corresponding to at least one classification label of the target song list based on a corresponding relation between a preset classification label and each position in a preset label vector, setting a numerical value of the first position in the preset label vector as a first numerical value, and setting numerical values of other positions except the first position in the preset label vector as second numerical values to obtain a label vector corresponding to at least one label of the target song list;
and determining the song characteristic vectors corresponding to the songs in the target song list based on the preset corresponding relation between the songs and the song characteristic vectors.
Optionally, the second determining module is configured to:
composing a synthetic vector from the song feature vectors and the description information feature vector, and inputting the synthetic vector into a trained song list feature extraction model to obtain the song list feature vector of the target song list.
Optionally, the second determining module is further configured to:
acquiring a first synthetic vector of a target sample song list, a second synthetic vector of a positive sample song list corresponding to the target sample song list and a third synthetic vector of a negative sample song list corresponding to the target sample song list, wherein the positive sample song list is an associated song list of the target sample song list, and the negative sample song list is a non-associated song list of the target sample song list;
inputting the first synthetic vector, the second synthetic vector and the third synthetic vector into the song list feature extraction model to be trained respectively, to obtain a first song list feature vector corresponding to the target sample song list, a second song list feature vector corresponding to the positive sample song list and a third song list feature vector corresponding to the negative sample song list;
determining residual values corresponding to the first song list feature vector, the second song list feature vector and the third song list feature vector based on a preset loss function, and training the song list feature extraction model to be trained based on the residual values to obtain the trained song list feature extraction model.
Optionally, the second determining module is further configured to:
acquiring a first song list set listened to by each user in a preset time period;
for the first song list set listened to by each user, deleting from the first song list set the first song lists whose listening duration is not within a preset listening duration range, to obtain a second song list set;
for each second song list set, determining the second song lists whose listening count exceeds a first preset count, determining the deletion probability corresponding to each such second song list based on its listening count and a preset correspondence between listening counts and deletion probabilities, and performing song list deletion processing on the second song list set based on the deletion probability corresponding to each second song list, to obtain each third song list set;
determining any song list as a target sample song list, determining a song list that appears in the same third song list set as the target sample song list more than a second preset count of times as a positive sample song list corresponding to the target sample song list, and determining the other song lists in the third song list sets, except the positive sample song lists, as negative sample song lists corresponding to the target sample song list.
In a third aspect, a computer device is provided, which includes a processor and a memory, wherein at least one instruction is stored in the memory, and the at least one instruction is loaded and executed by the processor to implement the operations performed by the method for determining an associated song list as described above.
In a fourth aspect, a computer-readable storage medium is provided, wherein at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to implement the operations performed by the method for determining an associated song list as described above.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
the song list feature vector of the target song list is determined from the description information feature vector of the song list and the song feature vector corresponding to each song in the song list, and the song lists whose song list feature vectors have the highest similarity with that of the target song list are determined as associated song lists of the target song list. It can be seen that, in the embodiments of the application, when determining associated song lists, the song feature vectors of the songs in a song list are used in addition to the song list description information, which increases the reference information for determining associated song lists and therefore improves the accuracy of determining associated song lists.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart of a method for determining an associated song list provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a method for determining an associated song list provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of a method for determining an associated song list provided by an embodiment of the present application;
FIG. 4 is a flowchart of a method for obtaining sample song list pairs in an embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus for determining an associated song list according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The method for determining the associated song list can be realized by a terminal or a server, wherein the terminal can run a music playing application program for playing music, the terminal can be provided with components such as a camera, a microphone and an earphone, the terminal has a communication function and can be accessed to the internet, and the terminal can be a mobile phone, a tablet personal computer, an intelligent wearable device, a desktop computer, a notebook computer and the like. The server may be a background server of the music playing application, and the server may establish communication with the terminal. The server may be a single server or a server group, and if the server is a single server, the server may be responsible for all processing in the following scheme, and if the server is a server group, different servers in the server group may be respectively responsible for different processing in the following scheme, and the specific processing allocation condition may be arbitrarily set by a technician according to actual needs, and is not described herein again.
A music playing application generally provides a song recommendation interface in which different songs and different song lists can be displayed; a song list can include a plurality of songs with the same characteristic, for example songs from the 1990s or English songs. When a song list is displayed in the song recommendation interface, the cover of the song list, the song list name, the classification labels of the song list and the like can be displayed. A user can select a song list in the song recommendation interface according to his or her preferences and listen to the songs in the selected song list. The user can also create a song list in the music playing application, for example by putting favorite songs into one song list and setting its cover, name and classification labels; for instance, the song list may be named "favorite songs from my middle school years" and be given classification labels such as "campus", "youth", "after 90" and "love song". With the method for determining an associated song list provided by the application, a corresponding associated song list can be recommended to the user according to the song list the user listens to, that is, the songs in the recommended song list have the same characteristics as the songs in the song list the user listened to.
Fig. 1 is a flowchart of a method for determining an associated song list according to an embodiment of the present application. Referring to fig. 1, this embodiment includes steps 101 to 103.
Step 101, determining a description information feature vector corresponding to the song list description information of the target song list, and determining a song feature vector corresponding to each song included in the target song list.
The song list library may include all song lists in the corresponding music playing application program, the target song list may be any one of the song lists in the song list library, and the song list description information includes the song list name, the song list detail description, the classification label of the song list, and the like. The song feature vector corresponding to the song in the target song list is used for representing one or more song lists to which the song belongs in the song list library, and can be obtained by performing feature extraction on identification information of the one or more song lists to which the song belongs.
In this embodiment, the process of determining the description information feature vector is described by taking the song list name and the classification label as examples.
In implementation, the server may obtain the song list name and the classification labels of the target song list, extract the description keywords from the song list name and the classification labels, input the description keywords into a pre-trained semantic recognition model to extract the feature vector corresponding to the description keywords, and determine that feature vector as the description information feature vector corresponding to the song list description information. For each song in the target song list, the identification information of the one or more song lists containing the song can be obtained, the identification information of these song lists can be composed into a vector, and feature extraction can be performed through a trained neural network model to obtain the song feature vector of the song. The identification information of a song list may be an identifier of the song list, such as a song list ID.
Optionally, the server may pre-store a corresponding relationship between the description information and the description information feature vector, and a corresponding relationship between each song and the song feature vector, and then determine the description information feature vector corresponding to the song list description information of the target song list and the song feature vector corresponding to each song in the target song list according to the pre-stored corresponding relationship, where the corresponding processing is as follows:
and step 1011, determining phrase vectors corresponding to the phrases in the song list name of the target song list based on the preset corresponding relationship between the phrases and the phrase vectors.
A phrase may be a description keyword that can appear in song list names. A technician may pre-establish a phrase library, perform unsupervised vector training on each phrase using a neural network model, for example a word2vec model, to obtain a phrase vector corresponding to each phrase, and then store the correspondence between each phrase and its phrase vector. After the song list name of the target song list is obtained, word segmentation can be performed on the song list name according to a preset corpus and stop words, that is, the description keywords appearing in the song list name are extracted. For example, for a song list named "love songs that people born in the 1990s must listen to", the corresponding description keywords may be "1990s" and "love songs". After the description keywords appearing in the song list name are determined, the phrase vectors corresponding to the description keywords can be determined according to the correspondence between phrases and phrase vectors.
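As an illustration of this step only, the following minimal Python sketch segments a song list name into description keywords and looks up their phrase vectors; the jieba tokenizer, the stop-word set and the phrase_to_vector mapping are assumptions made for the example and are not part of the application.

```python
import jieba  # assumed Chinese word segmenter; any tokenizer built from the preset corpus would do

STOP_WORDS = {"的", "必", "听"}  # illustrative stop words

def description_keywords(song_list_name):
    """Segment the song list name and drop stop words, keeping the description keywords."""
    return [w for w in jieba.lcut(song_list_name) if w not in STOP_WORDS]

def phrase_vectors_for(song_list_name, phrase_to_vector):
    """Look up the phrase vector of each description keyword in the stored correspondence."""
    keywords = description_keywords(song_list_name)
    return [phrase_to_vector[w] for w in keywords if w in phrase_to_vector]
```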
The number of description keywords corresponding to different song list names may differ, so the number of obtained phrase vectors may also differ. Therefore, the obtained phrase vectors can be processed: for example, minimum pooling, maximum pooling and average pooling can be applied to all the phrase vectors corresponding to the song list name, to obtain the minimum pooling vector, maximum pooling vector and average pooling vector of all the phrase vectors corresponding to the song list name. The minimum pooling vector, maximum pooling vector and average pooling vector then form the phrase vector corresponding to the target song list, so that although different song list names correspond to different numbers of description keywords, the varying numbers of description keywords can be represented by a vector of fixed dimension through the pooling processing.
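A minimal sketch of this pooling, assuming the phrase vectors are NumPy arrays; concatenating the three pooled vectors as the fixed-dimension result is one reasonable reading of the scheme.

```python
import numpy as np

def pool_vectors(vectors):
    """Collapse a variable number of vectors of shape (num_vectors, dim) into one
    fixed-dimension vector by concatenating their minimum, maximum and average pooling vectors."""
    vectors = np.asarray(vectors, dtype=float)
    min_pool = vectors.min(axis=0)   # minimum pooling vector
    max_pool = vectors.max(axis=0)   # maximum pooling vector
    avg_pool = vectors.mean(axis=0)  # average pooling vector
    return np.concatenate([min_pool, max_pool, avg_pool])  # length 3 * dim, independent of num_vectors
```

The same pooling can be reused later for the song feature vectors of a song list, which also vary in number from one song list to another.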
Step 1012, determining a first position corresponding to at least one classification label of the target song list based on the correspondence between preset classification labels and the positions in a preset label vector, setting the value of the first position in the preset label vector to a first value, and setting the values of the positions other than the first position in the preset label vector to a second value, to obtain a label vector corresponding to at least one label of the target song list.
The preset label vector may be a vector of fixed dimension. A technician can count the classification labels appearing in the song lists in advance to establish a classification label library, or can establish a classification label library in advance from which users select classification labels to add to their song lists. The technician can set the correspondence between all classification labels in the classification label library and the positions of the preset label vector; after the classification labels of the target song list are obtained, the label vector corresponding to at least one classification label of the target song list can be generated as a one-hot style encoding. For example, if the classification label library contains 100 classification labels, the preset label vector may include 100 positions; if the target song list includes 4 classification labels corresponding to the first, second, tenth and twenty-second positions of the preset label vector, these positions may collectively be referred to as the first positions, the values of the first positions in the preset label vector may be set to a first value, for example 1, and the values of the other positions may be set to a second value, for example 0, so that the label vector corresponding to at least one label of the target song list is obtained.
In addition, the preset label vector is generally a one-dimensional vector, so the label vector corresponding to the at least one label can be subjected to dimension-raising processing, for example TF-IDF processing, to obtain a high-dimensional label vector corresponding to the at least one label of the target song list.
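The label vector construction can be sketched as follows, assuming a classification label library of 100 labels and a stored mapping from each label to its position; the function name and the choice of 1/0 for the first and second values are illustrative.

```python
import numpy as np

def build_label_vector(song_list_labels, label_to_position, num_labels=100,
                       first_value=1.0, second_value=0.0):
    """One-hot style label vector: the positions of the song list's classification labels
    are set to first_value, all other positions to second_value."""
    label_vector = np.full(num_labels, second_value)
    for label in song_list_labels:
        label_vector[label_to_position[label]] = first_value
    return label_vector

# Example: four labels mapped to the 1st, 2nd, 10th and 22nd positions (0-based indices).
label_to_position = {"campus": 0, "youth": 1, "after 90": 9, "love song": 21}
vec = build_label_vector(["campus", "youth", "after 90", "love song"], label_to_position)
```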
Step 1013, determining the song feature vectors corresponding to the songs in the target song list based on the preset correspondence between songs and song feature vectors.
Each song may correspond to a unique song identifier, such as a song ID, and each song list may correspond to unique song list identification information, such as a song list ID. A technician can select a part of the songs in the song library as sample songs in advance; for each sample song, the identification information of all the song lists that include the sample song is obtained from a sample song list library, and the vector composed of the identification information of all the song lists including the sample song is used to perform unsupervised vector training on a neural network model, to obtain a trained neural network model. The neural network model may be a word2vec model, and the sample song list library may be composed of a part of the song lists selected in advance from the song list library, for example by setting a date so that all song lists established before that date compose the sample song list library.
After the trained neural network model is obtained, for each song in the song library, the identification information of all the song lists to which the song belongs in the song list library is input into the trained neural network model for feature extraction, to obtain the song feature vector corresponding to each song, and the correspondence between song IDs and song feature vectors can then be stored. For each song in the target song list, the song feature vector corresponding to the song can be determined according to its song ID and the stored correspondence between song IDs and song feature vectors.
In addition, based on the pre-trained neural network model, the song feature vector of each song in the target song list can also be obtained as follows. For a song in the target song list, all the song lists to which the song belongs are determined and the identification information of each such song list is obtained, and the identification information of all these song lists is then input into the trained neural network model for feature extraction, to obtain the song feature vector corresponding to the song. Feature extraction is performed on each song in the target song list according to this method, to obtain the song feature vector of each song.
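A sketch of this song feature vector construction, under the assumption that gensim's word2vec implementation is used and that a song's feature vector is obtained by averaging the embeddings of the song lists that contain it; the application only says the identification information is fed into the trained model, so the averaging step is an illustrative choice.

```python
from gensim.models import Word2Vec
import numpy as np

# One training "sentence" per sample song: the IDs of all song lists in the
# sample song list library that contain that song, treated as opaque tokens.
playlist_id_sequences = [
    ["playlist_12", "playlist_87", "playlist_301"],  # song A
    ["playlist_12", "playlist_45"],                  # song B
]

model = Word2Vec(sentences=playlist_id_sequences, vector_size=128,
                 window=5, min_count=1, epochs=10)

def song_feature_vector(playlist_ids, model):
    """Average the embeddings of the song lists a song belongs to; songs that appear
    in similar sets of song lists end up with similar feature vectors."""
    vectors = [model.wv[pid] for pid in playlist_ids if pid in model.wv]
    return np.mean(vectors, axis=0)
```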
It should be noted that, for any two songs, if the sets of song lists containing the two songs are approximately the same, the song feature vectors corresponding to the two songs will have a high similarity.
Since the number of songs included in different song lists may differ, the number of obtained song feature vectors may also differ. Therefore, the obtained song feature vectors may be processed: for example, minimum pooling, maximum pooling and average pooling may be performed on all the song feature vectors corresponding to the song list, to obtain the minimum pooling vector, maximum pooling vector and average pooling vector of all the song feature vectors corresponding to the song list. The minimum pooling vector, maximum pooling vector and average pooling vector are then composed into the song feature vector corresponding to the target song list, so that although different song lists contain different numbers of songs, the varying numbers of songs can be represented by a vector of fixed dimension through the pooling processing.
It should be noted that the processing in steps 1011, 1012 and 1013 has no fixed order in time. As shown in fig. 2, after the phrase vector, the label vector and the song feature vector corresponding to the target song list are obtained, they may be combined into a synthetic vector corresponding to the target song list, that is, a song list feature vector.
Step 102, determining the song list feature vector of the target song list based on the song feature vector and the description information feature vector.
In implementation, after obtaining the song feature vector and the description information feature vector of the target song list, the song feature vector and the description information feature vector may be spliced to form the song list feature vector of the target song list.
Optionally, the song feature vector and the description information feature vector may be composed into a synthetic vector, and the synthetic vector may be input into a trained song list feature extraction model to obtain the song list feature vector of the target song list.
In implementation, a synthetic vector can be composed of the song feature vector and the description information feature vector, and the synthetic vector is then input into the trained song list feature extraction model to obtain the song list feature vector of the target song list.
The training process of the song list feature extraction model can be as follows:
step 1021, a first synthetic vector of the target sample song list, a second synthetic vector of the positive sample song list corresponding to the target sample song list and a third synthetic vector of the negative sample song list corresponding to the target sample song list are obtained.
The target sample song list can be any sample song list, the positive sample song list is an associated song list of the target sample song list, and the negative sample song list is a non-associated song list of the target sample song list. For ease of understanding, the target sample song list and the positive sample song list may be referred to as a positive sample song list pair, and the target sample song list and the negative sample song list may be referred to as a negative sample song list pair.
Step 1022, inputting the first synthetic vector, the second synthetic vector and the third synthetic vector into the song list feature extraction model to be trained respectively, to obtain a first song list feature vector corresponding to the target sample song list, a second song list feature vector corresponding to the positive sample song list and a third song list feature vector corresponding to the negative sample song list.
In implementation, three identical song list feature extraction models to be trained can be set, for example song list feature extraction model A, song list feature extraction model B and song list feature extraction model C. As shown in fig. 3, the first synthetic vector is input into song list feature extraction model A, the second synthetic vector into song list feature extraction model B, and the third synthetic vector into song list feature extraction model C. The three song list feature extraction models form a training framework consisting of an input layer, a presentation layer and a matching layer. At the input layer, the 780-dimensional first, second and third synthetic vectors are input into the training framework respectively. At the presentation layer, the features of the first, second and third synthetic vectors are extracted respectively, yielding the first song list feature vector corresponding to the target sample song list, the second song list feature vector corresponding to the positive sample song list and the third song list feature vector corresponding to the negative sample song list. At the matching layer, a first cosine similarity is calculated between the first song list feature vector and the second song list feature vector, and a second cosine similarity is calculated between the first song list feature vector and the third song list feature vector; that is, the matching layer calculates the cosine similarity between the two song lists of the positive sample song list pair and the cosine similarity between the two song lists of the negative sample song list pair. For example, in the figure, R(S, Q+) represents the cosine similarity calculation between the first song list feature vector S and the second song list feature vector Q+, and P(D+|S) is the calculated first cosine similarity; R(S, Q-) represents the cosine similarity calculation between the first song list feature vector S and the third song list feature vector Q-, and P(D-|S) is the calculated second cosine similarity. The presentation layer uses a two-layer fully connected network, where Wi denotes the i-th layer weight matrix and bi denotes the i-th layer bias term, so the first hidden layer vector l1 (a 256-dimensional vector) and the output vector y (a 128-dimensional vector) can be represented as:
l1 = f(W1 x + b1)
y = f(W2 l1 + b2)
where f represents the activation function of the hidden layer and the output; the embodiments of the present application use the tanh activation function. The final output y is a 128-dimensional low-dimensional semantic vector of the song list. x is the input vector of the input layer; the first synthetic vector, the second synthetic vector and the third synthetic vector can each be substituted into the above formulas as the input vector to obtain the corresponding hidden layer vector l1 and output vector y. The first synthetic vector, the second synthetic vector and the third synthetic vector can be obtained by the processing in step 101, which is not described herein again.
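A NumPy sketch of the presentation layer formulas above; the weights are randomly initialised here purely for illustration, and the 780/256/128 dimensions follow the description.

```python
import numpy as np

def presentation_layer(x, W1, b1, W2, b2):
    """Two fully connected layers with tanh activations:
    l1 = f(W1 x + b1) maps the 780-dim synthetic vector to a 256-dim hidden vector,
    y  = f(W2 l1 + b2) maps l1 to the 128-dim song list feature vector."""
    l1 = np.tanh(W1 @ x + b1)
    y = np.tanh(W2 @ l1 + b2)
    return y

rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.standard_normal((256, 780)), np.zeros(256)
W2, b2 = 0.01 * rng.standard_normal((128, 256)), np.zeros(128)
y = presentation_layer(rng.standard_normal(780), W1, b1, W2, b2)  # 128-dim output
```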
Step 1023, determining residual values corresponding to the first song list feature vector, the second song list feature vector and the third song list feature vector based on a preset loss function, and training the song list feature extraction model to be trained based on the residual values to obtain the trained song list feature extraction model.
In implementation, the corresponding residual values can be determined according to the first cosine similarity, the second cosine similarity and a preset loss function, and song list feature extraction model A, song list feature extraction model B and song list feature extraction model C can then be trained respectively according to the determined residual values, to obtain the trained song list feature extraction model.
The embodiment of the application can establish a DSSM (Deep Structured Semantic Model) to train the song list feature extraction model, that is, the song list feature extraction model can be trained by setting multiple groups of song list sample spaces, where each song list sample space can include a preset number of song lists, among which are a positive sample song list pair and negative sample song list pairs. The corresponding loss function is:
P(D+|S) = exp(γ R(S, D+)) / Σ_{D'∈D} exp(γ R(S, D'))
L(Λ) = -log Π P(D+|S)
where γ is the softmax smoothing factor, D+ is the positive sample song list corresponding to the target sample song list S, and D' ranges over the entire song list sample space D. R(S, D+) is the cosine similarity between the first song list feature vector of the target sample song list and the second song list feature vector of the corresponding positive sample song list, and R(S, D') is the cosine similarity between the first song list feature vector of the target sample song list and the song list feature vector of any sample in the corresponding song list sample space. L(Λ) is the residual value. During training, the residual value can be back-propagated through the fully connected representation layers, and the model converges by stochastic gradient descent, yielding the parameters of each network layer in the song list feature extraction model.
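The residual contributed by one target sample song list can be computed as in the following sketch, which assumes one sample space containing the positive sample song list and several negative sample song lists, and uses γ = 10 only as an example value.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dssm_residual(target_vec, positive_vec, sample_space_vecs, gamma=10.0):
    """Negative log probability of the positive sample song list under the
    gamma-smoothed softmax over the song list sample space."""
    sims = np.array([cosine(target_vec, d) for d in sample_space_vecs])
    log_denominator = np.log(np.sum(np.exp(gamma * sims)))
    return -(gamma * cosine(target_vec, positive_vec) - log_denominator)
```

In a full training loop this residual would be accumulated over all target sample song lists and back-propagated through the two fully connected layers of the presentation layer.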
In addition, the trained song list feature extraction model A, song list feature extraction model B and song list feature extraction model C are the same song list feature extraction model, and one of them is used in application.
Step 103, determining the similarity between the song list feature vector of the target song list and the song list feature vectors of the other song lists in the song list library except the target song list, and determining the other song lists whose similarity meets a preset condition as associated song lists of the target song list.
In implementation, when the server receives a notification to push the associated song lists of a target song list to the terminal, the server may obtain the song list feature vector of the target song list and the song list feature vectors of the other song lists in the song list library, and then calculate the similarity between the song list feature vector of the target song list and the song list feature vectors of the other song lists, for example by calculating the spatial distance between the song list feature vector of the target song list and the song list feature vectors of the other song lists. The target similarities are then determined from the calculated similarities according to a preset condition, and the song lists corresponding to the target similarities are determined as the associated song lists of the target song list. The preset condition may be set by a technician in advance; for example, the similarities exceeding a similarity threshold may be determined as the target similarities, or a preset number of highest similarities may be determined as the target similarities. For example, the 3 song lists with the highest similarity are determined as the associated song lists of the target song list. The server may then send a push notification including a play option for the associated song lists to the terminal.
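A sketch of this step, assuming cosine similarity and the "preset number of highest similarities" variant of the preset condition:

```python
import numpy as np

def associated_song_lists(target_vector, library_vectors, top_k=3):
    """Rank every other song list in the library by cosine similarity to the target
    song list's feature vector and keep the top_k most similar as associated song lists.
    library_vectors: {song_list_id: song list feature vector}, excluding the target."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {pid: cosine(target_vector, vec) for pid, vec in library_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```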
According to the embodiment of the application, the song list feature vector of the target song list is determined from the description information feature vector of the song list and the song feature vector corresponding to each song in the song list, and the song lists whose song list feature vectors have the highest similarity with that of the target song list are determined as associated song lists of the target song list. In this way, the corresponding associated song lists can be determined from both the songs in the song list and the song list description information, which enriches the reference information for determining associated song lists and can improve the accuracy of determining associated song lists.
Fig. 4 is a flowchart of a method for obtaining sample song list pairs in an embodiment of the present application. The method may be used, before training the song list feature extraction model (that is, before performing steps 1021 to 1023), to obtain behavior data of users listening to song lists, and to determine from this behavior data the positive sample song lists and negative sample song lists corresponding to each target sample song list used in training. The method includes:
step 401, obtaining a first song list set listened to by each user in a preset time period.
The preset time period may be set by a technician, and may be a week or a month, and the like, and the specific time period length is not limited herein. The first song list set listened to by the user is the set of all the song lists listened to by the user within the preset time period.
Step 402, for the first song list set listened to by each user, deleting from the first song list set the first song lists whose listening duration is not within the preset listening duration range, to obtain a second song list set.
In practice, a user who selects a song list may find it is not a style he or she likes and switch to other songs after listening for a short time, while some users pick song lists at random and play them on loop simply to have music in certain occasions, such as background music in a shopping mall. Therefore, a technician can set a corresponding listening duration range by calculating the average listening duration of users' song lists, and then delete from the first song list set listened to by each user the first song lists whose listening duration is not within the preset listening duration range, obtaining the song list set of each user, that is, a second song list set. In addition, meaninglessly named song lists and song lists lacking classification labels can be removed from the second song list set; the names of meaninglessly named song lists can be stored in the server in advance, and when a song list name matches a pre-stored meaningless name, the corresponding song list can be deleted.
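A sketch of steps 401 and 402 under the assumption that listening records are available as simple dictionaries; the data layout is illustrative.

```python
def second_song_list_sets(first_sets, listening_duration, min_seconds, max_seconds):
    """Drop from each user's first song list set the song lists whose listening duration
    is outside the preset range, yielding the second song list set per user.

    first_sets: {user_id: set of song list IDs listened to in the preset time period}
    listening_duration: {(user_id, song_list_id): listening duration in seconds}"""
    return {
        user: {pid for pid in playlists
               if min_seconds <= listening_duration.get((user, pid), 0) <= max_seconds}
        for user, playlists in first_sets.items()
    }
```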
Step 403, for each second song list set, determining the second song lists whose listening count exceeds the first preset count, determining the deletion probability corresponding to each such second song list based on its listening count and the correspondence between listening counts and deletion probabilities, and performing song list deletion processing on the second song list set based on the deletion probability corresponding to each second song list, to obtain each third song list set.
In practice, some song lists are popular, and their listening counts exceed the first preset count; for example, a song list listened to more than one million times is a popular song list. A popular song list may appear in the second song list sets of many users simply because of its large listening count. That is, even two popular song lists whose songs differ greatly in style may appear in the same second song list sets, which would affect the accuracy of the subsequently determined positive sample song list pairs. Therefore, a technician can set a deletion probability according to the listening count of a song list. For the second song list set corresponding to each user, the deletion probability of each popular song list in the set can be determined according to the preset correspondence between listening counts and deletion probabilities, the popular song list is then deleted with that probability so as to reduce the probability that the set includes the popular song list, and the second song list set after this song list deletion processing is determined as a third song list set. For example, if the popular song list A appears in 500 of the 1000 acquired second song list sets and its deletion probability is determined to be 50%, then after the song list deletion processing the number of third song list sets including the popular song list A will be approximately 250. In this way, deleting popular song lists from the song list sets with a set deletion probability reduces the probability that popular song lists remain in the song list sets, so as to avoid popular song lists of different styles being determined as a positive sample song list pair in the subsequent step 404.
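Step 403 can be sketched as follows; the bucketed mapping from listening counts to deletion probabilities is an assumed representation of the "correspondence" mentioned above.

```python
import random

def third_song_list_sets(second_sets, listen_counts, first_preset_count, deletion_probability):
    """Delete popular song lists (listening count above first_preset_count) from each
    user's second song list set with the probability looked up for their listening count.

    deletion_probability: {count threshold: probability}, e.g. {1_000_000: 0.5}."""
    def prob_for(count):
        thresholds = [t for t in deletion_probability if count >= t]
        return deletion_probability[max(thresholds)] if thresholds else 0.0

    third_sets = {}
    for user, playlists in second_sets.items():
        kept = set()
        for pid in playlists:
            count = listen_counts.get(pid, 0)
            if count > first_preset_count and random.random() < prob_for(count):
                continue  # delete this popular song list from the user's set
            kept.add(pid)
        third_sets[user] = kept
    return third_sets
```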
Step 404, determining any song list as a target sample song list, determining a song list that appears in the same third song list set as the target sample song list more than a second preset count of times as a positive sample song list corresponding to the target sample song list, and determining the other song lists in the third song list sets, except the positive sample song lists, as negative sample song lists corresponding to the target sample song list.
In practice, for any song list, if another song list appears together with it in the same third song list set more than the second preset count of times, that is, many users listen to both song lists, the two song lists may be determined as a positive sample song list pair; among all the third song list sets, the song lists other than the positive sample song lists may be determined as negative sample song lists for that song list.
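Step 404 can be sketched by counting co-occurrences of song lists across the third song list sets; the helper names are illustrative.

```python
from collections import Counter

def sample_pairs(third_sets, target, second_preset_count):
    """Song lists that appear in the same third song list set as the target more than
    second_preset_count times become positive sample song lists; the remaining song
    lists seen in the third song list sets become negative sample song lists."""
    co_occurrence = Counter()
    all_song_lists = set()
    for playlists in third_sets.values():
        all_song_lists |= playlists
        if target in playlists:
            for other in playlists - {target}:
                co_occurrence[other] += 1
    positives = {p for p, n in co_occurrence.items() if n > second_preset_count}
    negatives = all_song_lists - positives - {target}
    return positives, negatives
```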
According to the embodiment of the application, the listening behavior data of each user for song lists is obtained, the song list sets listened to by the users are screened according to this behavior data, and associated positive sample song list pairs and non-associated negative sample song list pairs are then determined from the users' listening data for the screened song lists, thereby providing training samples for the song list feature extraction model.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
Fig. 5 is a schematic structural diagram of an apparatus for determining an associated song list according to an embodiment of the present application, where the apparatus may be a server in the above embodiment, and the apparatus includes:
a first determining module 510, configured to determine a descriptive information feature vector corresponding to the song list descriptive information of a target song list, and determine a song feature vector corresponding to each song included in the target song list, where the song feature vector corresponding to the song is used to represent one or more song lists to which the song belongs in a song list library;
a second determining module 520, configured to determine a song list feature vector of the target song list based on the song feature vectors and the description information feature vector;
a third determining module 530, configured to determine the similarity between the song list feature vector of the target song list and the song list feature vectors of the other song lists in the song list library except the target song list, and determine the other song lists whose similarity meets a preset condition as associated song lists of the target song list.
Optionally, the song list description information includes a song list name and a classification label of the song list;
the first determining module 510 is configured to:
determining phrase vectors corresponding to phrases in the song list name of the target song list based on the corresponding relation between the preset phrases and the phrase vectors; determining a first position corresponding to at least one classification label of the target song list based on a corresponding relation between a preset classification label and each position in a preset label vector, setting a numerical value of the first position in the preset label vector as a first numerical value, and setting numerical values of other positions except the first position in the preset label vector as second numerical values to obtain a label vector corresponding to at least one label of the target song list;
and determining the song feature vectors corresponding to the songs included in the target song list based on the preset corresponding relation between songs and song feature vectors.
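A small sketch of how the first determining module might assemble these vectors, assuming the phrase-to-vector, label-to-position and song-to-vector correspondences are available as Python dictionaries (taking 1 as the first numerical value and 0 as the second is an assumption; the application only requires two distinct values):

```python
import numpy as np

def description_and_song_vectors(name_phrases, labels, song_ids,
                                 phrase_vectors, label_index, song_vectors):
    """name_phrases: phrases segmented from the song list name;
    labels: classification labels of the song list;
    phrase_vectors / song_vectors: preset phrase->vector and song->vector tables;
    label_index: preset classification label -> position mapping."""
    # phrase vectors from the preset phrase-to-vector correspondence
    name_vecs = [phrase_vectors[p] for p in name_phrases if p in phrase_vectors]

    # label vector: first value at each label's position, second value elsewhere
    label_vec = np.zeros(len(label_index))
    for lab in labels:
        label_vec[label_index[lab]] = 1.0

    # song feature vectors from the preset song-to-vector correspondence
    song_vecs = [song_vectors[s] for s in song_ids if s in song_vectors]
    return name_vecs, label_vec, song_vecs
```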
Optionally, the second determining module 520 is configured to:
combining the song feature vector and the description information feature vector into a synthetic vector, and inputting the synthetic vector into a trained song list feature extraction model to obtain the song list feature vector of the target song list.
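One way this could be realised, assuming PyTorch, mean-pooling of the per-song vectors and an arbitrary two-layer network (the framework, the pooling strategy and the layer sizes are all assumptions, since the application does not specify the model structure):

```python
import torch
import torch.nn as nn

class SongListFeatureExtractor(nn.Module):
    """Maps a synthetic (concatenated) vector to a song list feature vector."""
    def __init__(self, in_dim, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim))

    def forward(self, x):
        return self.net(x)

def song_list_feature_vector(model, song_vecs, desc_vec):
    # pool the per-song vectors, then concatenate with the description vector
    pooled_songs = torch.stack(song_vecs).mean(dim=0)
    synthetic = torch.cat([pooled_songs, desc_vec], dim=-1)
    return model(synthetic)
```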
Optionally, the second determining module 520 is further configured to:
acquiring a first synthetic vector of a target sample song list, a second synthetic vector of a positive sample song list corresponding to the target sample song list and a third synthetic vector of a negative sample song list corresponding to the target sample song list, wherein the positive sample song list is an associated song list of the target sample song list, and the negative sample song list is a non-associated song list of the target sample song list;
inputting the first synthetic vector, the second synthetic vector and the third synthetic vector respectively into the song list feature extraction model to be trained, to obtain a first song list feature vector corresponding to the target sample song list, a second song list feature vector corresponding to the positive sample song list, and a third song list feature vector corresponding to the negative sample song list;
and determining residual values corresponding to the first song list feature vector, the second song list feature vector and the third song list feature vector based on a preset loss function, and training the song list feature extraction model to be trained based on the residual values to obtain the trained song list feature extraction model.
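A hedged training sketch for this step, taking the "preset loss function" to be a standard triplet margin loss over the first, second and third song list feature vectors (the margin, optimizer and batching are illustrative assumptions):

```python
import torch
import torch.nn as nn

def train_step(model, optimizer, anchor_syn, pos_syn, neg_syn, margin=0.5):
    """anchor_syn / pos_syn / neg_syn: batches of first, second and third
    synthetic vectors, each of shape (batch_size, in_dim)."""
    model.train()
    anchor = model(anchor_syn)    # first song list feature vector
    positive = model(pos_syn)     # second song list feature vector
    negative = model(neg_syn)     # third song list feature vector

    # residual value from the triplet margin loss, used to update the model
    loss = nn.TripletMarginLoss(margin=margin)(anchor, positive, negative)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeating such steps over the sample triplets produced earlier yields the trained song list feature extraction model used at inference time.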
Optionally, the second determining module 520 is further configured to:
acquiring a first song list set listened to by each user within a preset time period;
for the first song list set listened to by each user, deleting from the first song list set any first song list whose listening duration is not within a preset listening duration range, to obtain a second song list set;
for each second song list set, determining second song lists whose number of listens exceeds a first preset number of times, determining the deletion probability corresponding to each such second song list based on its number of listens and the corresponding relation between the number of listens and the deletion probability, and deleting song lists from the second song list set based on the deletion probabilities corresponding to the second song lists, to obtain each third song list set (a downsampling sketch follows this module description);
determining any song list as a target sample song list, determining song lists that appear in the same third song list set as the target sample song list more than a second preset number of times as positive sample song lists corresponding to the target sample song list, and determining the other song lists in the third song list sets, apart from the positive sample song lists, as negative sample song lists corresponding to the target sample song list.
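To illustrate the deletion-probability step above (the mapping from listen count to deletion probability is not given in the application; a simple monotone mapping, similar in spirit to word-frequency subsampling, is assumed here):

```python
import random

def downsample_second_set(listen_counts, first_preset_count=50, max_drop_prob=0.9):
    """listen_counts: mapping from song list id to its number of listens
    within one user's second song list set. Returns the third song list set."""
    third_set = set()
    for song_list, n in listen_counts.items():
        if n > first_preset_count:
            # deletion probability grows with the number of listens (illustrative)
            drop_prob = min(max_drop_prob, (n - first_preset_count) / n)
            if random.random() < drop_prob:
                continue
        third_set.add(song_list)
    return third_set
```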
It should be noted that the apparatus for determining an associated song list provided in the above embodiment is illustrated only by the division into the above functional modules when determining an associated song list; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for determining an associated song list and the method for determining an associated song list provided by the above embodiments belong to the same concept, and the specific implementation process is detailed in the method embodiments and is not repeated here.
Fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application. The server 600 may vary considerably in configuration and performance, and may include one or more processors (CPUs) 601 and one or more memories 602, where the memory 602 stores at least one instruction that is loaded and executed by the processor 601 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may also include other components for implementing the functions of the device, which are not described here again.
In an exemplary embodiment, there is also provided a computer readable storage medium, such as a memory, comprising instructions executable by a processor in a terminal to perform the method of determining an associated song list of the above embodiments. The computer readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (12)

1. A method of determining an associated song list, the method comprising:
determining a description information characteristic vector corresponding to the song list description information of a target song list, and determining a song characteristic vector corresponding to each song included in the target song list, wherein the song characteristic vector corresponding to the song is used for representing one or more song lists to which the song belongs in a song list library;
determining a song list feature vector of the target song list based on the song feature vector and the description information feature vector;
determining similarity between the song list feature vector of the target song list and the song list feature vectors of song lists in the song list library other than the target song list, and determining, as associated song lists of the target song list, the other song lists corresponding to target similarities that meet a preset condition.
2. The method of claim 1, wherein the song list description information includes a song list name and a classification label of the song list;
the determining of the description information feature vector corresponding to the song list description information of the target song list comprises the following steps:
determining phrase vectors corresponding to phrases in the song list name of the target song list based on the corresponding relation between the preset phrases and the phrase vectors;
determining a first position corresponding to at least one classification label of the target song list based on a corresponding relation between a preset classification label and each position in a preset label vector, setting a numerical value of the first position in the preset label vector as a first numerical value, and setting numerical values of other positions except the first position in the preset label vector as second numerical values to obtain a label vector corresponding to at least one label of the target song list;
the determining of the song feature vector corresponding to each song included in the target song list includes:
and determining the song feature vectors corresponding to the songs included in the target song list based on the preset corresponding relation between songs and song feature vectors.
3. The method of claim 1, wherein determining a song list feature vector of the target song list based on the song feature vector and the description information feature vector comprises:
combining the song feature vector and the description information feature vector into a synthetic vector, and inputting the synthetic vector into a trained song list feature extraction model to obtain the song list feature vector of the target song list.
4. The method of claim 3, wherein before inputting the synthetic vector into the trained song list feature extraction model to obtain the song list feature vector of the target song list, the method further comprises:
acquiring a first synthetic vector of a target sample song list, a second synthetic vector of a positive sample song list corresponding to the target sample song list and a third synthetic vector of a negative sample song list corresponding to the target sample song list, wherein the positive sample song list is an associated song list of the target sample song list, and the negative sample song list is a non-associated song list of the target sample song list;
inputting the first synthetic vector, the second synthetic vector and the third synthetic vector respectively into the song list feature extraction model to be trained, to obtain a first song list feature vector corresponding to the target sample song list, a second song list feature vector corresponding to the positive sample song list, and a third song list feature vector corresponding to the negative sample song list;
and determining residual values corresponding to the first song list feature vector, the second song list feature vector and the third song list feature vector based on a preset loss function, and training the song list feature extraction model to be trained based on the residual values to obtain the trained song list feature extraction model.
5. The method of claim 4, wherein the obtaining the first synthetic vector of the target sample song list, the second synthetic vector of the positive sample song list corresponding to the target sample song list, and the third synthetic vector of the negative sample song list corresponding to the target sample song list is preceded by:
acquiring a first song list set listened to by each user within a preset time period;
for the first song list set listened to by each user, deleting from the first song list set any first song list whose listening duration is not within a preset listening duration range, to obtain a second song list set;
for each second song list set, determining second song lists whose number of listens exceeds a first preset number of times, determining the deletion probability corresponding to each such second song list based on its number of listens and the corresponding relation between the number of listens and the deletion probability, and deleting song lists from the second song list set based on the deletion probabilities corresponding to the second song lists, to obtain each third song list set;
determining any song list as a target sample song list, determining song lists that appear in the same third song list set as the target sample song list more than a second preset number of times as positive sample song lists corresponding to the target sample song list, and determining the other song lists in the third song list sets, apart from the positive sample song lists, as negative sample song lists corresponding to the target sample song list.
6. An apparatus for determining an associated song list, the apparatus comprising:
a first determining module, configured to determine a description information feature vector corresponding to the song list description information of a target song list, and determine a song feature vector corresponding to each song included in the target song list, where the song feature vector corresponding to a song is used to represent one or more song lists to which the song belongs in a song list library;
a second determining module, configured to determine a song list feature vector of the target song list based on the song feature vector and the description information feature vector;
and a third determining module, configured to determine similarity between the song list feature vector of the target song list and the song list feature vectors of song lists in the song list library other than the target song list, and determine, as associated song lists of the target song list, the other song lists corresponding to target similarities that meet a preset condition.
7. The apparatus of claim 6, wherein the song list description information includes a song list name and a classification label of the song list;
the first determining module is configured to:
determining phrase vectors corresponding to phrases in the song list name of the target song list based on the corresponding relation between the preset phrases and the phrase vectors; determining a first position corresponding to at least one classification label of the target song list based on a corresponding relation between a preset classification label and each position in a preset label vector, setting a numerical value of the first position in the preset label vector as a first numerical value, and setting numerical values of other positions except the first position in the preset label vector as second numerical values to obtain a label vector corresponding to at least one label of the target song list;
and determining the song feature vectors corresponding to the songs included in the target song list based on the preset corresponding relation between songs and song feature vectors.
8. The apparatus of claim 6, wherein the second determining module is configured to:
combining the song feature vector and the description information feature vector into a synthetic vector, and inputting the synthetic vector into a trained song list feature extraction model to obtain the song list feature vector of the target song list.
9. The apparatus of claim 8, wherein the second determining module is further configured to:
acquiring a first synthetic vector of a target sample song list, a second synthetic vector of a positive sample song list corresponding to the target sample song list and a third synthetic vector of a negative sample song list corresponding to the target sample song list, wherein the positive sample song list is an associated song list of the target sample song list, and the negative sample song list is a non-associated song list of the target sample song list;
inputting the first synthetic vector, the second synthetic vector and the third synthetic vector respectively into the song list feature extraction model to be trained, to obtain a first song list feature vector corresponding to the target sample song list, a second song list feature vector corresponding to the positive sample song list, and a third song list feature vector corresponding to the negative sample song list;
and determining residual values corresponding to the first song list feature vector, the second song list feature vector and the third song list feature vector based on a preset loss function, and training the song list feature extraction model to be trained based on the residual values to obtain the trained song list feature extraction model.
10. The apparatus of claim 9, wherein the second determining module is further configured to:
acquiring a first song list set listened to by each user within a preset time period;
for the first song list set listened to by each user, deleting from the first song list set any first song list whose listening duration is not within a preset listening duration range, to obtain a second song list set;
for each second song list set, determining second song lists whose number of listens exceeds a first preset number of times, determining the deletion probability corresponding to each such second song list based on its number of listens and the corresponding relation between the number of listens and the deletion probability, and deleting song lists from the second song list set based on the deletion probabilities corresponding to the second song lists, to obtain each third song list set;
determining any song list as a target sample song list, determining song lists that appear in the same third song list set as the target sample song list more than a second preset number of times as positive sample song lists corresponding to the target sample song list, and determining the other song lists in the third song list sets, apart from the positive sample song lists, as negative sample song lists corresponding to the target sample song list.
11. A computer device comprising a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to perform operations performed by the method of determining an associated song list according to any one of claims 1 to 5.
12. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by the method of determining an associated song list according to any one of claims 1 to 5.
CN202011388415.2A 2020-12-01 2020-12-01 Method, device, equipment and storage medium for determining associated song list Pending CN112487236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011388415.2A CN112487236A (en) 2020-12-01 2020-12-01 Method, device, equipment and storage medium for determining associated song list

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011388415.2A CN112487236A (en) 2020-12-01 2020-12-01 Method, device, equipment and storage medium for determining associated song list

Publications (1)

Publication Number Publication Date
CN112487236A true CN112487236A (en) 2021-03-12

Family

ID=74938765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011388415.2A Pending CN112487236A (en) 2020-12-01 2020-12-01 Method, device, equipment and storage medium for determining associated song list

Country Status (1)

Country Link
CN (1) CN112487236A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023158384A3 (en) * 2022-02-21 2023-11-09 脸萌有限公司 Information processing method and apparatus, and device, storage medium and program


Similar Documents

Publication Publication Date Title
CN109165302B (en) Multimedia file recommendation method and device
CN108305643B (en) Method and device for determining emotion information
CN103377258B (en) Method and apparatus for carrying out classification display to micro-blog information
CN106940726B (en) Creative automatic generation method and terminal based on knowledge network
CN111258995B (en) Data processing method, device, storage medium and equipment
US11158349B2 (en) Methods and systems of automatically generating video content from scripts/text
CN112328849A (en) User portrait construction method, user portrait-based dialogue method and device
CN110297880B (en) Corpus product recommendation method, apparatus, device and storage medium
CN108304424B (en) Text keyword extraction method and text keyword extraction device
US20230237255A1 (en) Form generation method, apparatus, and device, and medium
CN111737414A (en) Song recommendation method and device, server and storage medium
CN111400513B (en) Data processing method, device, computer equipment and storage medium
CN112434533B (en) Entity disambiguation method, entity disambiguation device, electronic device, and computer-readable storage medium
CN108345612A (en) A kind of question processing method and device, a kind of device for issue handling
CN114706945A (en) Intention recognition method and device, electronic equipment and storage medium
CN111090771A (en) Song searching method and device and computer storage medium
CN114328838A (en) Event extraction method and device, electronic equipment and readable storage medium
CN116738250A (en) Prompt text expansion method, device, electronic equipment and storage medium
CN116882372A (en) Text generation method, device, electronic equipment and storage medium
CN109190116B (en) Semantic analysis method, system, electronic device and storage medium
CN113573128B (en) Audio processing method, device, terminal and storage medium
CN112487236A (en) Method, device, equipment and storage medium for determining associated song list
CN109727091A (en) Products Show method, apparatus, medium and server based on dialogue robot
CN115525740A (en) Method and device for generating dialogue response sentence, electronic equipment and storage medium
US20220318318A1 (en) Systems and methods for automated information retrieval

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination