CN114999611A - Model training and information recommendation method and device - Google Patents

Model training and information recommendation method and device

Info

Publication number
CN114999611A
Authority
CN
China
Prior art keywords
audio
target
emotion information
user
user emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210908680.1A
Other languages
Chinese (zh)
Other versions
CN114999611B (en)
Inventor
张长浩
许小龙
傅欣艺
王维强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202210908680.1A priority Critical patent/CN114999611B/en
Publication of CN114999611A publication Critical patent/CN114999611A/en
Application granted granted Critical
Publication of CN114999611B publication Critical patent/CN114999611B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/70: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance, relating to mental therapies, e.g. psychological therapy or autogenous training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60: Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/63: Querying
    • G06F 16/635: Filtering based on additional data, e.g. user or group profiles
    • G06F 16/636: Filtering based on additional data, e.g. user or group profiles, by using biological or physiological data
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70: ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics, for mining of medical data, e.g. analysing previous cases of other patients

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physiology (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Psychiatry (AREA)
  • Pathology (AREA)
  • Social Psychology (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Hospice & Palliative Care (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Psychology (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The specification discloses a method and a device for model training and information recommendation. User emotion information and standard audio data corresponding to the user emotion information may be acquired, and the user emotion information may be input into a generative model to be trained, so that the generative model determines map features according to a target node matched with the user emotion information in a pre-constructed knowledge map and generates a target audio according to the map features, where the knowledge map is used to represent the association relations between various kinds of audio-related information and various kinds of user emotion information. The generative model may then be trained with minimizing the difference between the target audio and the standard audio data as the optimization target, and the trained generative model is used to generate audio for a target user according to the user emotion information of that target user, thereby generating, to a certain extent, audio suited to the user and improving the rationality of the audio generated for the user.

Description

Method and device for model training and information recommendation
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for model training and information recommendation.
Background
In the field of music therapy, doctors can play appropriate music for patients according to their moods.
In the prior art, music may be selected manually, based on experience, from an existing music library and played for the user. In this way, however, the most suitable music cannot always be selected for the user, and the manually selected music tends to be relatively limited in variety.
Therefore, how to provide more suitable music for a user while protecting the user's private data is an urgent problem to be solved.
Disclosure of Invention
The specification provides a method and a device for model training and information recommendation, so as to generate music that better suits a user's psychological state.
The technical scheme adopted by the specification is as follows:
the present specification provides a method of model training comprising:
acquiring user emotion information and standard audio data corresponding to the user emotion information;
inputting the user emotion information into a generation model to be trained, so that the generation model determines a map feature corresponding to the user emotion information according to a target node matched with the user emotion information in a pre-constructed knowledge map, and generates a target audio according to the map feature, wherein the knowledge map is used for representing the association relation between various audio related information and various user emotion information;
and training the generated model by taking the minimized difference between the target audio and the standard audio data as an optimization target, wherein the trained generated model is used for generating audio for the target user according to the user emotion information of the target user.
Optionally, inputting the user emotion information into a generative model to be trained, so that the generative model determines a graph feature corresponding to the user emotion information according to a target node matched with the user emotion information in a pre-constructed knowledge graph, and generates a target audio according to the graph feature, including:
inputting the user emotion information and the standard audio data into the generation model so that the generation model determines audio features corresponding to the standard audio data, determines map features corresponding to the user emotion information according to target nodes matched with the user emotion information in the knowledge map, and generates the target audio according to the audio features and the map features.
Optionally, training the generative model with an optimization goal of minimizing a difference between the target audio and the standard audio data comprises:
training the generative model with the optimization goals of minimizing the difference between the map features and the audio features, and minimizing the difference between the target audio and the standard audio data.
Optionally, the inputting the user emotion information into a generative model to be trained, so that the generative model determines a graph feature according to a target node matched with the user emotion information in a pre-constructed knowledge graph, including:
inputting the user emotion information into a generation model to be trained, so that the generation model queries other nodes located in a preset adjacent range of a target node according to the target node matched with the user emotion information in a pre-constructed knowledge graph, and a sub-graph formed by the target node and the other nodes is used as a target sub-graph;
and determining the map features according to the target subgraph.
Optionally, the inputting the user emotion information into a generative model to be trained, so that the generative model determines a graph feature according to a target node matched with the user emotion information in a pre-constructed knowledge graph, including:
inputting the user emotion information and supplementary information into a generation model to be trained, so that the generation model determines map features according to target nodes matched with the user emotion information and the supplementary information in the knowledge map, wherein the supplementary information comprises audio related information matched with the user emotion information, and the audio related information comprises at least one of audio rhythm information, audio style information and instrument information.
Optionally, the generative model comprises: an audio encoding sub-model, an audio decoding sub-model and a map sub-model;
inputting the user emotion information and the standard audio data into the generation model, so that the generation model determines audio features corresponding to the standard audio data, determines map features according to target nodes matched with the user emotion information in the knowledge map, and generates the target audio according to the audio features and the map features, which specifically includes:
inputting the standard audio data into the audio coding submodel to obtain the audio characteristics, and inputting user emotion information into the map submodel to enable the map submodel to obtain the map characteristics based on the target node;
and inputting the map features and the audio features into the audio decoding submodel to generate the target audio.
The present specification provides a method of information recommendation, comprising:
acquiring user emotion information of a target user;
inputting the user emotion information into a trained generation model, so that the generation model determines map features according to target nodes matched with the user emotion information in a pre-constructed knowledge map, and generates audio according to the map features, wherein the generation model is obtained by training through a model training method;
recommending the generated audio to the target user.
The present specification provides an apparatus for model training, comprising:
the acquisition module is used for acquiring user emotion information and standard audio data corresponding to the user emotion information;
the input module is used for inputting the user emotion information into a generated model to be trained so that the generated model determines map features according to target nodes matched with the user emotion information in a pre-constructed knowledge map, and generates target audio according to the map features, wherein the knowledge map is used for representing the incidence relation between various audio related information and various user emotion information;
and the training module is used for training the generated model by taking the minimized difference between the target audio and the standard audio data as an optimization target, and the trained generated model is used for generating audio for the target user according to the user emotion information of the target user.
This specification provides an apparatus for information recommendation, including:
the acquisition module is used for acquiring user emotion information of a target user;
the input module is used for inputting the user emotion information into a trained generative model so as to enable the generative model to determine map characteristics according to target nodes matched with the user emotion information in a pre-constructed knowledge map and generate audio according to the map characteristics, and the generative model is obtained by training through a model training method;
and the recommending module is used for recommending the generated audio to the target user.
The present specification provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above-described method of model training or information recommendation.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the above method of model training or information recommendation when executing the program.
The technical scheme adopted by the specification can achieve the following beneficial effects:
In the model training and information recommendation method provided in this specification, user emotion information and standard audio data corresponding to the user emotion information may be acquired, and the user emotion information may be input into a generative model to be trained, so that the generative model determines map features according to a target node matched with the user emotion information in a pre-constructed knowledge map and generates a target audio according to the map features. The knowledge map mentioned here is used to represent the association relations between various kinds of audio-related information and various kinds of user emotion information. The generative model may then be trained with minimizing the difference between the target audio and the standard audio data as the optimization target, and the trained generative model is used to generate audio for a target user according to the user emotion information of that target user.
It can be seen from the above that the method generates music through a generative model that encodes both the standard audio data suited to a specific user's emotion and the subgraph, determined from a pre-constructed knowledge map containing a large number of associations between audio-related information and user emotion information, that matches the user emotion information. In the process of generating music, the large number of associations between music-related information and user emotions can therefore be taken into account, so that audio suited to the user can be generated for the user as far as possible.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification and are incorporated in and constitute a part of this specification, illustrate embodiments of the specification and together with the description serve to explain the specification; they do not constitute an undue limitation of the specification. In the drawings:
FIG. 1 is a schematic flow chart of a method of model training in the present specification;
FIG. 2 is a schematic diagram of a generative model provided in this specification;
FIG. 3 is a schematic representation of the form of a knowledge-graph as provided herein;
FIG. 4 is a flow chart illustrating a method for information recommendation in the present specification;
FIG. 5 is a schematic diagram of an apparatus for model training provided herein;
FIG. 6 is a schematic diagram of an information recommendation apparatus provided herein;
fig. 7 is a schematic diagram of an electronic device corresponding to fig. 1 or 4 provided in the present specification.
Detailed Description
In order to make the objects, technical solutions and advantages of the present disclosure clearer, the technical solutions of the present disclosure will be described clearly and completely below with reference to specific embodiments of the present disclosure and the accompanying drawings. It is to be understood that the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present specification without any creative effort belong to the protection scope of the present specification.
The technical solutions provided by the embodiments of the present description are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a model training method in this specification, which specifically includes the following steps:
s100: obtaining user emotion information and standard audio data corresponding to the user emotion information.
In practical applications there is often a demand for generating music. On this basis, a service platform can construct a knowledge graph representing the association relations between various kinds of audio-related information and various kinds of user emotion information, and the knowledge graph can subsequently be used to train a model capable of generating audio (including music, video and the like) for a user according to the user's emotion. The audio-related information may include audio tempo information, audio style information, music genre, artist and instrument information, and so on; that is, the audio-related information may be music-related information of the audio in multiple dimensions.
Based on this, the service platform may obtain a training sample. The training sample may include user emotion information and standard audio data corresponding to the user emotion information, where the user emotion information indicates the emotional state of the user, for example, whether the user is happy, sad or melancholy. Of course, the training sample may include other supplementary information in addition to the user emotion information. For example, it may include basic user information such as age, gender and constellation. It may also include audio-related information matched with the user emotion information; when audio is generated with the generative model, this may refer to a music style suited to the user, either input by the user or judged, according to the user's psychological state, by a doctor performing music therapy for the user, and during training it may be given according to expert experience. The standard audio data may likewise refer to audio data which a relevant expert considers suitable for the user given that user emotion information.
The scheme can be applied to various scenarios, such as a music therapy scenario or a red-packet sending scenario. In a music therapy scenario, a user (which may be understood as a patient who needs music therapy) can speak or type a sentence, and the service platform can determine the user's emotion from the sentence and provide suitable music for the user. In a red-packet sending scenario, the sentence attached to a red packet when the user sends it can be used to determine the user's emotion and generate corresponding music, and the music can be played for the receiving user when the red packet is received.
Therefore, the user emotion information can be determined from a piece of text or speech input by the user. When the generative model is used to execute the service, this text or speech is input by the user; in the model training stage, a number of texts can be obtained in batches and screened manually. A minimal sketch of how such a text might be mapped to an emotion label is given below.
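For ease of understanding, the following Python sketch maps a sentence entered by a user to a coarse emotion label. The keyword table and the label names are purely illustrative assumptions; an actual system would rely on a trained text or speech emotion-recognition model rather than keyword matching.

```python
# Illustrative sketch only: keyword-based emotion detection as a stand-in for a
# trained text/speech emotion-recognition model. All keywords/labels are assumptions.
EMOTION_KEYWORDS = {
    "happy": ["great", "wonderful", "excited", "glad"],
    "sad": ["tired", "lonely", "down", "cry"],
    "melancholy": ["gloomy", "rainy", "nostalgic"],
}

def detect_emotion(sentence: str) -> str:
    text = sentence.lower()
    for emotion, words in EMOTION_KEYWORDS.items():
        if any(word in text for word in words):
            return emotion
    return "neutral"  # fallback when no keyword matches

print(detect_emotion("I feel lonely and tired today"))  # -> "sad"
```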
S102: inputting the user emotion information into a generation model to be trained, so that the generation model determines graph characteristics according to target nodes matched with the user emotion information in a pre-constructed knowledge graph, and generates target audio according to the graph characteristics, wherein the knowledge graph is used for representing the association relation between various audio related information and various user emotion information.
S104: and training the generated model by taking the minimized difference between the target audio and the standard audio data as an optimization target, wherein the trained generated model is used for generating audio for the target user according to the user emotion information of the target user.
After the training sample is obtained, the user emotion information in the training sample can be input into the generative model to be trained, so that the generative model determines the map features corresponding to the user emotion information according to a target node matched with the user emotion information in a pre-constructed knowledge map and generates a target audio according to the map features. The generative model is then trained with minimizing the difference between the target audio and the standard audio data as the optimization target, and the trained generative model is used to generate audio for a target user according to the user emotion information of that target user. The knowledge map here can be used to represent the association relations between various kinds of audio-related information and various kinds of user emotion information.
The user emotion information and the standard audio data can be input into the generative model, so that the generative model determines the audio features corresponding to the standard audio data, determines the map features corresponding to the user emotion information according to the target node matched with the user emotion information in the knowledge map, and generates the target audio according to the audio features and the map features. The generative model can then be trained by taking, as the training targets, minimizing the difference between the map features obtained by the generative model and the audio features, and minimizing the difference between the target audio generated by the generative model and the standard audio data in the training sample.
It should be noted that when audio is actually generated for a user, it is generated directly from the user emotion information and there is no standard audio data. In the above training process it is therefore necessary to minimize the difference between the map features and the audio features so that the map features learn the audio features as far as possible. In this way, when the generative model is used to generate audio for a user, even though no audio is available as input, the service platform can still generate audio for the user through the audio characteristics that the map-feature branch of the generative model learned during the training stage.
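As a rough illustration of the two optimization targets described above, the following Python (PyTorch) sketch combines a feature-matching term between the map features and the audio features with a reconstruction term between the target audio and the standard audio data. The use of mean squared error and the weighting coefficient alpha are assumptions; the specification does not fix the concrete distance measure.

```python
import torch
import torch.nn.functional as F

def training_loss(map_feat: torch.Tensor,       # (batch, d) features from the map sub-model
                  audio_feat: torch.Tensor,     # (batch, d) features from the audio encoding sub-model
                  target_audio: torch.Tensor,   # (batch, T) audio generated by the model
                  standard_audio: torch.Tensor, # (batch, T) standard audio in the training sample
                  alpha: float = 1.0) -> torch.Tensor:
    # Pull the map features toward the audio features so that, at inference time,
    # the map branch alone can stand in for the missing standard audio.
    feature_loss = F.mse_loss(map_feat, audio_feat)
    # Pull the generated target audio toward the standard audio data.
    reconstruction_loss = F.mse_loss(target_audio, standard_audio)
    return reconstruction_loss + alpha * feature_loss
```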
The knowledge graph contains nodes (which may be called emotion nodes) corresponding to different kinds of user emotion information. The target node mentioned above may therefore specifically refer to the emotion node in the knowledge graph corresponding to the user emotion information in the training sample, and may of course further include an emotion node having a direct or indirect connection relationship with that emotion node. The map features are not necessarily determined by the features of the target node alone; they may also involve the features of nodes directly or indirectly connected to the target node. Specifically, when the map features are determined using the target node, the user emotion information can be input into the generative model to be trained, so that the generative model queries, according to the target node matched with the user emotion information in the pre-constructed knowledge graph, other nodes located within a preset adjacent range of the target node, takes the subgraph formed by the target node and these other nodes as a target subgraph, and determines the map features according to the target subgraph.
The graph features determined according to the target subgraph may be obtained from the node features of all the nodes contained in the target subgraph, for example by average pooling over those node features.
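The following sketch, using networkx and numpy, illustrates one way the target subgraph and the pooled graph features could be obtained. Treating the "preset adjacent range" as a fixed number of hops and storing node features under a "feat" attribute are assumptions made here for illustration.

```python
import networkx as nx
import numpy as np

def target_subgraph(kg: nx.Graph, target_node, hops: int = 2) -> nx.Graph:
    # All nodes within `hops` edges of the target node form the preset adjacent range.
    nodes = nx.single_source_shortest_path_length(kg, target_node, cutoff=hops).keys()
    return kg.subgraph(nodes)

def graph_feature(subgraph: nx.Graph) -> np.ndarray:
    # Average-pool the node feature vectors of every node contained in the target subgraph.
    node_feats = np.stack([subgraph.nodes[n]["feat"] for n in subgraph.nodes])
    return node_feats.mean(axis=0)
```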
In this specification, the service platform may also refer to information other than the user emotion information when generating audio. Such other information may include audio-related information input by the user, or audio-related information given by a doctor in view of the user's emotion, and may further include some basic information of the user. Therefore, when the model is trained, the user emotion information and this supplementary information may be input together into the generative model to be trained, so that the generative model determines the map features according to target nodes matched with the user emotion information and the supplementary information in the pre-constructed knowledge map.
The supplementary information may include audio-related information matched with the user emotion information, which may refer to audio-related information, determined from the user emotion information on the basis of the experience of relevant experts, of audio suitable for the user under that emotion. Certainly, the supplementary information may further include basic user information and the emotion-related input provided by the user. During training, the supplementary information may be determined manually. After the generative model is deployed online to generate music for users, it can continue to be trained through user behavior: new training data can be obtained from user behavior information, and the supplementary information in the new training data may include the audio-related information of the audio the user listened to under the emotion determined by a doctor, or the audio-related information input by the user, together with the user's actual basic information.
It should be noted that two structures may exist in the generative model: one structure is used to obtain the above-mentioned map features, and the other is used to obtain the audio features corresponding to the above-mentioned standard audio data. That is, the generative model may include an audio encoding sub-model, an audio decoding sub-model and a map sub-model, where the audio encoding sub-model refers to the structure in the generative model used to generate the audio features, and the map sub-model refers to the part of the generative model used to generate the map features of the partial structure, in the knowledge map, that matches the user emotion information.
The standard audio data can be input into the audio encoding sub-model to obtain the audio features, and the user emotion information can be input into the map sub-model so that the map sub-model obtains the map features based on the target node. The map features and the audio features are then input into the audio decoding sub-model to generate the target audio. The audio encoding sub-model, the audio decoding sub-model and the map sub-model can then be jointly trained by taking, as the training targets, minimizing the deviation between the map features and the audio features, and minimizing the deviation between the target audio and the standard audio data in the training sample.
The model structure of a particular generative model may be as shown in FIG. 2.
Fig. 2 is a schematic structural diagram of a generative model provided in this specification.
The audio encoding sub-model and the audio decoding sub-model shown in fig. 2 may specifically be built as an autoencoder, that is, a self-encoding generator structure, and the map sub-model may specifically be built with a GNN structure. Of course, a discriminative model may also be attached after the audio decoding sub-model, so that the audio decoding sub-model and the discriminative model form a GAN structure, which helps the audio decoding sub-model generate more realistic audio.
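To make the division of labour among the three sub-models more concrete, the following PyTorch sketch outlines one possible structure. The layer sizes, the use of simple linear layers (including a linear stand-in for the GNN), the additive fusion of map and audio features, and the optional discriminator head are all assumptions; they only mirror the roles described above, not the actual network in FIG. 2.

```python
import torch
import torch.nn as nn

class GenerativeModel(nn.Module):
    """Rough sketch of an encoder/decoder/graph structure (all dimensions are assumptions)."""
    def __init__(self, audio_dim: int = 128, feat_dim: int = 64, node_dim: int = 32):
        super().__init__()
        # Audio encoding sub-model: standard audio -> audio features.
        self.audio_encoder = nn.Sequential(nn.Linear(audio_dim, feat_dim), nn.ReLU(),
                                           nn.Linear(feat_dim, feat_dim))
        # Audio decoding sub-model: fused features -> target audio.
        self.audio_decoder = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                                           nn.Linear(feat_dim, audio_dim))
        # Map sub-model: pooled node features of the target subgraph -> map features
        # (a single linear layer stands in for a GNN here).
        self.graph_submodel = nn.Linear(node_dim, feat_dim)
        # Optional discriminator so that decoder + discriminator form a GAN.
        self.discriminator = nn.Sequential(nn.Linear(audio_dim, 1), nn.Sigmoid())

    def forward(self, standard_audio: torch.Tensor, pooled_node_feat: torch.Tensor):
        audio_feat = self.audio_encoder(standard_audio)
        map_feat = self.graph_submodel(pooled_node_feat)
        target_audio = self.audio_decoder(map_feat + audio_feat)  # simple additive fusion
        return audio_feat, map_feat, target_audio
```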
It should be noted that the constructed knowledge map mentioned above may represent the associations between various kinds of audio-related information and various kinds of user emotion information. Of course, the knowledge map may also represent the associations among the various kinds of audio-related information themselves, as well as the associations between certain information of the user (i.e. the basic user information) and the various kinds of audio-related information.
Specifically, when the knowledge graph is constructed, information nodes corresponding to each type of audio related information and emotion nodes corresponding to each type of user emotion information may be constructed, edges between the information nodes may be constructed according to the association relationship between each type of audio related information, and edges between the information nodes and emotion nodes may be constructed according to the association relationship between each type of user emotion information and each type of audio related information, so as to obtain the knowledge graph.
For example, the audio-related information may include music genre, audio tempo and the like, and expert experience in the field of music therapy can indicate what genre, tempo or style of music a given emotion requires. The association relations between each kind of user emotion information and each kind of audio-related information can thus be obtained from this information, and edges between information nodes and emotion nodes constructed accordingly. The basic user information may include age, gender and the like, and similar experience can indicate what kind of music is suitable for users of a given age and gender, so that edges between the nodes corresponding to the basic user information and the information nodes can also be constructed.
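A toy construction of such a knowledge graph is sketched below with networkx. The node names, the "kind" attribute and the particular edges are illustrative assumptions standing in for expert knowledge from the music therapy field.

```python
import networkx as nx

kg = nx.Graph()

# Information nodes for audio-related information, emotion nodes for user emotion
# information, and nodes for basic user information (all names are illustrative).
kg.add_node("genre:classical", kind="audio_info")
kg.add_node("tempo:slow", kind="audio_info")
kg.add_node("emotion:melancholy", kind="emotion")
kg.add_node("user:age_20s", kind="user_info")

# Edges between information nodes reflect associations among audio-related information.
kg.add_edge("genre:classical", "tempo:slow")
# Edges between emotion nodes and information nodes encode expert experience
# about which music suits which emotion.
kg.add_edge("emotion:melancholy", "genre:classical")
# Edges between basic-user-information nodes and information nodes encode which
# music suits users of a given age or gender.
kg.add_edge("user:age_20s", "genre:classical")
```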
The form of the knowledge-graph in the present specification can be specifically shown in fig. 3.
FIG. 3 is a formal schematic of a knowledge-graph as provided in the present specification.
As can be seen from fig. 3, the audio-related information may include song titles, artists, instrument information, genre information and the like. The knowledge graph can show the relationships among these pieces of audio-related information, that is, which song belongs to which genre, singer or instrument, and can also show which emotions are suited to which songs. The generative model can therefore query the knowledge graph, according to the user emotion information and the supplementary information, for a subgraph matching that information to use as a target subgraph. When audio is generated with the generative model, several target subgraphs may be selected, and a segment of audio can be generated from each target subgraph. The user may then choose the audio he or she wants to play; the audio chosen by the user reflects the music the user prefers, so a new training sample can be obtained from the chosen audio and the generative model can continue to be trained on it.
Fig. 4 is a schematic flow chart of a method for information recommendation in this specification, which specifically includes the following steps:
s400: and acquiring user emotion information of the target user.
S402: inputting the user emotion information into a trained generation model, so that the generation model determines map features corresponding to the user emotion information according to target nodes matched with the user emotion information in a pre-constructed knowledge map, and generates audio according to the map features, wherein the generation model is obtained by training through a model training method.
S403: recommending the generated audio to the target user.
When the generative model is actually used to generate audio, the service platform can acquire the user emotion information of a target user and input the user emotion information into the trained generative model, so that the generative model determines the map features corresponding to the user emotion information according to the target node matched with the user emotion information in the pre-constructed knowledge map, generates audio according to the map features, and recommends the generated audio to the target user, where the generative model is obtained by training with the model training method described above.
The target user may be a user who sends an audio generation request to the service platform through a terminal or a client. In a music therapy scenario, the target user may be a patient, or a doctor who needs to treat the patient; in a red-packet sending scenario, the target user may be a user who sends or receives a red packet. After receiving the audio generation request of the target user, the service platform can input the user emotion information in the audio generation request into the generative model.
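Combining the earlier sketches, the online recommendation flow could look roughly as follows. Feeding only the map features into the decoder follows from the fact that no standard audio exists at inference time, but every concrete call here builds on the illustrative code given earlier and is an assumption, not the platform's actual interface.

```python
import torch

def recommend_audio(model: "GenerativeModel", kg, emotion_text: str) -> torch.Tensor:
    """Sketch of the online flow: emotion text -> emotion node -> target subgraph -> audio."""
    emotion = detect_emotion(emotion_text)              # keyword sketch from earlier (assumption)
    target_node = f"emotion:{emotion}"                  # node naming scheme is an assumption
    sg = target_subgraph(kg, target_node, hops=2)       # subgraph sketch from earlier
    pooled = torch.tensor(graph_feature(sg), dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        map_feat = model.graph_submodel(pooled)         # map features from the map sub-model
        audio = model.audio_decoder(map_feat)           # decode audio without standard audio input
    return audio
```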
It can be seen from the above method that music is generated through a generative model that encodes both the audio data suited to the emotion information of a specific user and the subgraph, determined from a pre-constructed knowledge map containing a large number of association relations between audio-related information and various kinds of user emotion information, that conforms to the user emotion information. In the process of generating music, the large number of associations between audio-related information and the user's emotional state can therefore be combined, so that audio suited to the user can be generated for the user as far as possible.
Based on the same idea, the present specification further provides a device for model training and information recommendation, as shown in fig. 5 and fig. 6.
Fig. 5 is a schematic diagram of a model training apparatus provided in this specification, which specifically includes:
the acquiring module 501 is configured to acquire user emotion information and standard audio data corresponding to the user emotion information;
an input module 502, configured to input the user emotion information into a generative model to be trained, so that the generative model determines a map feature according to a target node that is matched with the user emotion information in a pre-constructed knowledge map, and generates a target audio according to the map feature, where the knowledge map is used to represent an association relationship between various audio-related information and various user emotion information;
a training module 503, configured to train the generated model with a minimum difference between the target audio and the standard audio data as an optimization target, where the trained generated model is used to generate an audio for a target user according to user emotion information of the target user.
Optionally, the input module 502 is specifically configured to input the user emotion information and the standard audio data into the generation model, so that the generation model determines an audio feature corresponding to the standard audio data, determines a map feature corresponding to the user emotion information according to a target node in the knowledge map, which is matched with the user emotion information, and generates the target audio according to the audio feature and the map feature.
Optionally, the training module 503 is specifically configured to train the generative model with the optimization goals of minimizing the difference between the map features and the audio features, and minimizing the difference between the target audio and the standard audio data.
Optionally, the input module 502 is specifically configured to input the user emotion information into a generative model to be trained, so that the generative model queries, according to a target node that is matched with the user emotion information in a pre-constructed knowledge graph, other nodes located within a preset adjacent range of the target node, and uses a sub-graph formed by the target node and the other nodes as a target sub-graph; and determining the map features according to the target subgraph.
Optionally, the input module 502 is specifically configured to input the user emotion information and supplementary information into a generative model to be trained, so that the generative model determines a graph feature according to a target node in the knowledge graph, where the target node matches the user emotion information and the supplementary information, where the supplementary information includes audio-related information matching the user emotion information, and the audio-related information includes at least one of audio rhythm information, audio style information, and instrument information.
Optionally, the generative model comprises: an audio encoding sub-model, an audio decoding sub-model and a map sub-model;
the input module 502 is specifically configured to input the standard audio data into the audio coding sub-model to obtain the audio features, and input user emotion information into the map sub-model, so that the map sub-model obtains the map features based on the target nodes; and inputting the map features and the audio features into the audio decoding submodel to generate the target audio.
Fig. 6 is a schematic diagram of an information recommendation apparatus provided in this specification, which specifically includes:
an obtaining module 601, configured to obtain user emotion information of a target user;
an input module 602, configured to input the user emotion information into a trained generative model, so that the generative model determines, according to a target node that is matched with the user emotion information in a pre-constructed knowledge graph, a graph feature corresponding to the user emotion information, and generates an audio according to the graph feature, where the generative model is obtained by training through a model training method;
a recommending module 603, configured to recommend the generated audio to the target user.
The present specification also provides a computer-readable storage medium storing a computer program which is operable to execute the above-described method of model training or information recommendation.
This specification also provides a schematic structural diagram of the electronic device shown in fig. 7. As shown in fig. 7, at the hardware level the electronic device includes a processor, an internal bus, a network interface, a memory and a non-volatile memory, and may of course also include hardware required for other services. The processor reads the corresponding computer program from the non-volatile memory into the memory and then runs it to implement the above method of model training and information recommendation. Of course, besides a software implementation, this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
In the 1990s, an improvement in a technology could clearly be distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement in a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. It therefore cannot be said that an improvement of a method flow cannot be realized with hardware entity modules. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic functions are determined by a user programming the device. Designers program to "integrate" a digital system onto a single PLD by themselves, without asking a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, this programming is nowadays mostly implemented with "logic compiler" software rather than by manually making integrated circuit chips. Such software is similar to the software compilers used in program development, and the original code to be compiled must also be written in a specific programming language, called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language); at present, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used. It should also be clear to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (for example, software or firmware) executable by the (micro)processor, logic gates, switches, an Application Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for realizing various functions may also be regarded as structures within the hardware component. Indeed, the means for realizing various functions may even be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the various elements may be implemented in the same one or more software and/or hardware implementations of the present description.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
This description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures and the like that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the system embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and reference may be made to the partial description of the method embodiment for relevant points.
The above description is only an example of the present specification, and is not intended to limit the present specification. Various modifications and alterations to this description will become apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of the present specification should be included in the scope of the claims of the present specification.

Claims (11)

1. A method of model training, comprising:
acquiring user emotion information and standard audio data corresponding to the user emotion information;
inputting the user emotion information into a generation model to be trained, so that the generation model determines a map feature corresponding to the user emotion information according to a target node matched with the user emotion information in a pre-constructed knowledge map, and generates a target audio according to the map feature, wherein the knowledge map is used for representing the association relation between various audio related information and various user emotion information;
and training the generated model by taking the minimized difference between the target audio and the standard audio data as an optimization target, wherein the trained generated model is used for generating audio for the target user according to the user emotion information of the target user.
2. The method of claim 1, wherein the step of inputting the user emotion information into a generative model to be trained so that the generative model determines a graph feature corresponding to the user emotion information according to a target node matched with the user emotion information in a pre-constructed knowledge graph, and generates a target audio according to the graph feature comprises the steps of:
inputting the user emotion information and the standard audio data into the generation model, so that the generation model determines audio features corresponding to the standard audio data, determines map features corresponding to the user emotion information according to target nodes matched with the user emotion information in the knowledge map, and generates the target audio according to the audio features and the map features.
3. The method of claim 2, training the generative model with the goal of minimizing the difference between the target audio and the standard audio data as an optimization goal, comprising:
training the generative model with the optimization goals of minimizing the difference between the map features and the audio features, and minimizing the difference between the target audio and the standard audio data.
4. The method of claim 1, inputting the user emotion information into a generative model to be trained, so that the generative model determines graph characteristics according to target nodes matched with the user emotion information in a pre-constructed knowledge graph, comprising:
inputting the user emotion information into a generation model to be trained, so that the generation model queries other nodes located in a preset adjacent range of a target node according to the target node matched with the user emotion information in a pre-constructed knowledge graph, and a sub-graph formed by the target node and the other nodes is used as a target sub-graph;
and determining the map features according to the target subgraph.
5. The method of claim 1, inputting the user emotion information into a generative model to be trained, so that the generative model determines graph characteristics according to target nodes matched with the user emotion information in a pre-constructed knowledge graph, comprising:
inputting the user emotion information and supplementary information into a generation model to be trained, so that the generation model determines map features according to target nodes matched with the user emotion information and the supplementary information in the knowledge map, wherein the supplementary information comprises audio related information matched with the user emotion information, and the audio related information comprises at least one of audio rhythm information, audio style information and instrument information.
6. The method of claim 2, the generative model comprising: an audio encoding sub-model, an audio decoding sub-model and a map sub-model;
inputting the user emotion information and the standard audio data into the generation model, so that the generation model determines audio features corresponding to the standard audio data, determines graph features according to target nodes matched with the user emotion information in the knowledge graph, and generates the target audio according to the audio features and the graph features, which specifically comprises:
inputting the standard audio data into the audio coding sub-model to obtain the audio characteristics, and inputting user emotion information into the map sub-model to enable the map sub-model to obtain the map characteristics based on the target node;
and inputting the map features and the audio features into the audio decoding submodel to generate the target audio.
7. A method of information recommendation, comprising:
acquiring user emotion information of a target user;
inputting the user emotion information into a trained generative model, so that the generative model determines graph features corresponding to the user emotion information according to a target node matched with the user emotion information in a pre-constructed knowledge graph, and generates audio according to the graph features, wherein the generative model is obtained by training according to the method of any one of claims 1 to 6;
recommending the generated audio to the target user.
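For readability only (not part of the claims), the end-to-end flow of claim 7 could be wired up roughly as below; every helper name here (detect_user_emotion, generate, deliver_to_user) is hypothetical, and match_target_nodes refers to the claim-5 sketch above.

    def recommend_audio(user_id, trained_model, knowledge_graph, node_embeddings):
        # Acquire the target user's emotion information (hypothetical helper)
        emotion = detect_user_emotion(user_id)
        # Find the knowledge-graph node matched to that emotion (see the claim-5 sketch)
        target_node = match_target_nodes(knowledge_graph, emotion)[0]
        # Generate audio from the graph features of the matched node (hypothetical inference entry point)
        audio = trained_model.generate(node_embeddings[target_node])
        # Recommend the generated audio to the target user (hypothetical delivery step)
        deliver_to_user(user_id, audio)
        return audio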
8. An apparatus for model training, comprising:
an acquisition module, configured to acquire user emotion information and standard audio data corresponding to the user emotion information;
an input module, configured to input the user emotion information into a generative model to be trained, so that the generative model determines graph features according to a target node matched with the user emotion information in a pre-constructed knowledge graph, and generates target audio according to the graph features, wherein the knowledge graph is used for representing the association relationship between various audio-related information and various user emotion information;
and a training module, configured to train the generative model with minimizing the difference between the target audio and the standard audio data as an optimization goal, wherein the trained generative model is used for generating audio for a target user according to the user emotion information of the target user.
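Claim 8 describes the knowledge graph as representing associations between audio-related information and user emotion information. As a loose illustration (not part of the claims), such a graph might be assembled as below; the node and edge attribute names are chosen purely for the example.

    import networkx as nx

    def build_knowledge_graph(emotion_audio_pairs):
        # Nodes are emotion labels and audio-related attributes (rhythm, style, instrument);
        # edges record which audio-related information is associated with which emotion.
        kg = nx.Graph()
        for emotion, audio_info in emotion_audio_pairs:
            kg.add_node(emotion, kind="emotion", emotion=emotion)
            for key, value in audio_info.items():
                attr_node = f"{key}:{value}"
                kg.add_node(attr_node, kind="audio_info", **{key: value})
                kg.add_edge(emotion, attr_node, relation="associated_with")
        return kg

    # Example: associate a calm emotion with slow-tempo piano music
    kg = build_knowledge_graph([("calm", {"rhythm": "slow", "instrument": "piano"})])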
9. An apparatus for information recommendation, comprising:
an acquisition module, configured to acquire user emotion information of a target user;
an input module, configured to input the user emotion information into a trained generative model, so that the generative model determines, according to a target node matched with the user emotion information in a pre-constructed knowledge graph, graph features corresponding to the user emotion information, and generates audio according to the graph features, wherein the generative model is obtained by training according to the method of any one of claims 1 to 6;
and a recommendation module, configured to recommend the generated audio to the target user.
10. A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the method of any of claims 1 to 7.
11. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of any of the preceding claims 1 to 7 when executing the program.
CN202210908680.1A 2022-07-29 2022-07-29 Model training and information recommendation method and device Active CN114999611B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210908680.1A CN114999611B (en) 2022-07-29 2022-07-29 Model training and information recommendation method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210908680.1A CN114999611B (en) 2022-07-29 2022-07-29 Model training and information recommendation method and device

Publications (2)

Publication Number Publication Date
CN114999611A true CN114999611A (en) 2022-09-02
CN114999611B CN114999611B (en) 2022-12-20

Family

ID=83022265

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210908680.1A Active CN114999611B (en) 2022-07-29 2022-07-29 Model training and information recommendation method and device

Country Status (1)

Country Link
CN (1) CN114999611B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140052731A1 (en) * 2010-08-09 2014-02-20 Rahul Kashinathrao DAHULE Music track exploration and playlist creation
US20150268800A1 (en) * 2014-03-18 2015-09-24 Timothy Chester O'Konski Method and System for Dynamic Playlist Generation
CN104318931A (en) * 2014-09-30 2015-01-28 百度在线网络技术(北京)有限公司 Emotional activity obtaining method and apparatus of audio file, and classification method and apparatus of audio file
CN105426382A (en) * 2015-08-27 2016-03-23 浙江大学 Music recommendation method based on emotional context awareness of Personal Rank
US20170125059A1 (en) * 2015-11-02 2017-05-04 Facebook, Inc. Systems and methods for generating videos based on selecting media content items and moods
CN108153810A (en) * 2017-11-24 2018-06-12 广东小天才科技有限公司 A kind of music recommends method, apparatus, equipment and storage medium
CN108874957A (en) * 2018-06-06 2018-11-23 华东师范大学 The dialog mode music recommended method indicated based on Meta-graph knowledge mapping
CN111859008A (en) * 2019-04-29 2020-10-30 深圳市冠旭电子股份有限公司 Music recommending method and terminal
CN110473567A (en) * 2019-09-06 2019-11-19 上海又为智能科技有限公司 Audio-frequency processing method, device and storage medium based on deep neural network
CN112883209A (en) * 2019-11-29 2021-06-01 阿里巴巴集团控股有限公司 Recommendation method and processing method, device, equipment and readable medium for multimedia data
WO2021168563A1 (en) * 2020-02-24 2021-09-02 Labbe Aaron Method, system, and medium for affective music recommendation and composition
CN112417172A (en) * 2020-11-23 2021-02-26 东北大学 Construction and display method of multi-modal emotion knowledge graph
CN112765398A (en) * 2021-01-04 2021-05-07 珠海格力电器股份有限公司 Information recommendation method and device and storage medium
CN112754502A (en) * 2021-01-12 2021-05-07 曲阜师范大学 Automatic music switching method based on electroencephalogram signals
CN113139080A (en) * 2021-04-15 2021-07-20 合肥综合性国家科学中心人工智能研究院(安徽省人工智能实验室) Music emotional recommendation method and system
CN113157965A (en) * 2021-05-07 2021-07-23 杭州网易云音乐科技有限公司 Audio visual model training and audio visual method, device and equipment
CN113938755A (en) * 2021-09-18 2022-01-14 海信视像科技股份有限公司 Server, terminal device and resource recommendation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘恒 et al., "Recognizing Animal Vocal Emotions Using Gaussian Mixture Models", 《国外电子测量技术》 (Foreign Electronic Measurement Technology) *
王晰巍 et al., "Research on the Development Dynamics and Trends of Social Network Public Opinion Knowledge Graphs", 《情报学报》 (Journal of the China Society for Scientific and Technical Information) *

Also Published As

Publication number Publication date
CN114999611B (en) 2022-12-20

Similar Documents

Publication Publication Date Title
US10055493B2 (en) Generating a playlist
JP4581476B2 (en) Information processing apparatus and method, and program
CN110599998B (en) Voice data generation method and device
CN115952272B (en) Method, device and equipment for generating dialogue information and readable storage medium
JP6954981B2 (en) Speech generation methods, devices, equipment and storage media
CN111292733A (en) Voice interaction method and device
CN109828748A (en) Code naming method, system, computer installation and computer readable storage medium
CN116628198A (en) Training method and device of text generation model, medium and electronic equipment
CN115203394A (en) Model training method, service execution method and device
EP3779814A1 (en) Method and device for training adaptation level evaluation model, and method and device for evaluating adaptation level
CN111966334A (en) Service processing method, device and equipment
JP2019091416A5 (en)
KR20170136200A (en) Method and system for generating playlist using sound source content and meta information
CN111507726B (en) Message generation method, device and equipment
CN115129878A (en) Conversation service execution method, device, storage medium and electronic equipment
CN115982416A (en) Data processing method and device, readable storage medium and electronic equipment
CN117332282B (en) Knowledge graph-based event matching method and device
CN106775567B (en) Sound effect matching method and system
CN114999611B (en) Model training and information recommendation method and device
CN107810474B (en) Automatic import and dependency in large-scale source code repository
CN116127003A (en) Text processing method, device, electronic equipment and storage medium
CN110704742B (en) Feature extraction method and device
CN117992600B (en) Service execution method and device, storage medium and electronic equipment
KR20190009821A (en) Method and system for generating playlist using sound source content and meta information
CN115910002B (en) Audio generation method, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant