CN111026908B - Song label determining method, device, computer equipment and storage medium - Google Patents


Info

Publication number
CN111026908B
Authority
CN
China
Prior art keywords
type
tag
song
model
label
Prior art date
Legal status
Active
Application number
CN201911261720.2A
Other languages
Chinese (zh)
Other versions
CN111026908A (en)
Inventor
缪畅宇
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201911261720.2A priority Critical patent/CN111026908B/en
Publication of CN111026908A publication Critical patent/CN111026908A/en
Application granted granted Critical
Publication of CN111026908B publication Critical patent/CN111026908B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/65Clustering; Classification
    • G06F16/686Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The application discloses a song label determining method and apparatus, a computer device, and a storage medium, belonging to the field of data processing. The method includes: acquiring frequency domain information of a target song; inputting the frequency domain information into a first type tag determination model, which determines a first type tag of the target song based on the frequency domain information, where the first type tag represents the tag type of a second type tag and the second type tag represents audio features of the target song; determining, based on the first type tag of the target song, the second type tag determination model corresponding to that first type tag; and inputting the frequency domain information into the second type tag determination model, which determines at least one second type tag of the target song based on the frequency domain information. Because the first type tags and second type tags are determined for target songs automatically by the models, no manual labeling is needed, and the efficiency of determining song labels is greatly improved.

Description

Song label determining method, device, computer equipment and storage medium
Technical Field
The present application relates to the field of data processing, and in particular, to a method and apparatus for determining a song label, a computer device, and a storage medium.
Background
With the popularity of intelligent terminals, more and more people listen to songs through song applications installed on those terminals. When recommending songs to a user, a song application relies on the user's listening history, the songs of singers the user follows, and the songs the user has collected; throughout this process, recommendations are made on the basis of the tags attached to each song. Tagging songs is therefore essential to song recommendation.
In the related art, tags are usually added to songs manually, but manual tagging is inefficient, so many songs remain untagged. An untagged song may well be one a user would love, yet it cannot be recommended, which degrades the recommendation effect. There is therefore a need for a method of automatically determining tags for songs.
Disclosure of Invention
The embodiment of the application provides a song label determining method, a song label determining device, computer equipment and a storage medium, which can solve the problem of low efficiency of adding song labels in related technologies. The technical scheme is as follows:
In one aspect, a song label determining method is provided, the method including:
acquiring frequency domain information of a target song;
inputting the frequency domain information into a first type tag determination model, and determining a first type tag of the target song based on the frequency domain information by the first type tag determination model, wherein the first type tag is used for representing a tag type of a second type tag, and the second type tag is used for representing an audio feature of the target song;
determining a second type tag determination model corresponding to the first type tag based on the first type tag of the target song;
the frequency domain information is input into the second type tag determination model, and at least one second type tag of the target song is determined by the second type tag determination model based on the frequency domain information.
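As an illustration only, the four steps of the method above can be sketched in Python; the model objects and tag names below are hypothetical stand-ins, not the patent's actual models:

```python
# Hypothetical two-stage tagging flow; the model objects and tag
# names are illustrative stand-ins, not part of the claimed method.

def determine_song_tags(freq_info, first_model, second_models):
    """Return (first type tag, second type tags) for one song."""
    first_tag = first_model(freq_info)          # steps 1-2: broad category
    second_model = second_models[first_tag]     # step 3: pick matching model
    return first_tag, second_model(freq_info)   # step 4: fine-grained tags

# Toy stand-ins for trained models:
first_model = lambda f: "style" if sum(f) > 0 else "emotion"
second_models = {
    "style": lambda f: ["country music"],
    "emotion": lambda f: ["happy"],
}

tag, subtags = determine_song_tags([0.2, 0.5], first_model, second_models)
```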
In one aspect, there is provided a song label determining apparatus, the apparatus comprising:
the acquisition module is used for acquiring the frequency domain information of the target song;
the first type tag determining module is used for inputting the frequency domain information into a first type tag determination model, and determining a first type tag of the target song based on the frequency domain information by the first type tag determination model, wherein the first type tag is used for representing the tag type of a second type tag, and the second type tag is used for representing the audio features of the target song;
The first determining module is used for determining a second type tag determining model corresponding to the first type tag based on the first type tag of the target song;
and the second type tag determining module is used for inputting the frequency domain information into the second type tag determining model, and determining at least one second type tag of the target song based on the frequency domain information by the second type tag determining model.
In one possible embodiment, the apparatus further comprises:
the training module is used for carrying out model training based on a sample data set to obtain the second type tag determination model, wherein the sample data set comprises audio signals of a plurality of sample songs and the second type tags of each sample song, and each sample song is a song that has been commented on by users.
In one possible embodiment, the apparatus further comprises:
the label dictionary obtaining module is used for obtaining a label dictionary corresponding to each first type label, and the label dictionary is generated based on text information marked with each first type label;
the matching module is used for matching each tag word contained in the tag dictionary with the comment text of the sample song to obtain at least one tag word;
A second determining module, configured to determine a relevance between the at least one tag word and comment text of the sample song;
and the third determining module is used for determining the tag words with the relevance meeting the target condition as the second type tags of the sample songs.
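A minimal sketch of the dictionary-matching idea described by these modules, with a simple occurrence count standing in for the relevance score; the tag words, the comment text, and the "appears at least twice" target condition are all illustrative assumptions:

```python
# Illustrative sketch: match tag-dictionary words against comment text
# and keep the words whose relevance meets a target condition.

def match_tag_words(tag_dictionary, comment_text):
    """Return tag words found in the comment, with occurrence counts
    as a simple stand-in for the relevance score."""
    hits = {}
    for word in tag_dictionary:
        count = comment_text.count(word)
        if count:
            hits[word] = count
    return hits

def second_type_tags(hits, min_count=2):
    # target condition here: the word appears at least min_count times
    return [word for word, count in hits.items() if count >= min_count]

hits = match_tag_words(["happy", "sad"], "so happy, this song makes me happy")
tags = second_type_tags(hits, min_count=2)
```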
In one possible embodiment, the training module includes:
the first input unit is used for inputting frequency domain information of any sample song into an initial model corresponding to a first type tag to which the sample song belongs;
the prediction unit is used for predicting probability information of any second type tag of the sample song based on the frequency domain information of any sample song by the initial model;
a first determining unit, configured to determine a predicted second type tag of the sample song based on the probability information;
and the adjusting unit is used for adjusting the model parameters of the initial model based on the difference information of the second type label of the sample song and the predicted second type label of the sample song until the model parameters of the initial model meet the target cut-off condition, stopping training the initial model, and taking the trained initial model as the second type label determination model.
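For a logistic-regression-style initial model, the prediction and parameter-adjustment units above amount to repeating a step like the following sketch, where the gap between the predicted probability and the true tag plays the role of the "difference information"; the learning rate and all input values are illustrative assumptions:

```python
import math

# One illustrative parameter-adjustment step for a logistic-regression
# style initial model; (p - label) stands in for the "difference
# information" between the true and predicted second type tag.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(weights, bias, freq_info, label, lr=0.1):
    """Single gradient step on the binary cross-entropy loss."""
    z = sum(w * x for w, x in zip(weights, freq_info)) + bias
    p = sigmoid(z)
    error = p - label  # difference between prediction and true tag
    new_weights = [w - lr * error * x for w, x in zip(weights, freq_info)]
    new_bias = bias - lr * error
    return new_weights, new_bias

w, b = train_step([0.0, 0.0], 0.0, [1.0, 2.0], label=1)
```

Repeating such steps until a target cut-off condition is met, as the adjusting unit describes, yields the trained model.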
In one possible implementation manner, the second type tag determining module includes:
a second input unit for inputting the frequency domain information into the second type tag determination model, and outputting probability information of a plurality of second type tags by the second type tag determination model;
and the second determining unit is used for determining at least one second type tag conforming to the target probability information as at least one second type tag of the target song.
In one possible implementation, the first type tag determination model is a classification model.
In one possible implementation, the second type of tag determination model is a sequence annotation model.
In one aspect, a computer device is provided that includes one or more processors and one or more memories having stored therein at least one program code loaded and executed by the one or more processors to implement the operations performed by the song label determination method.
In one aspect, a storage medium having stored therein at least one program code loaded and executed by a processor to perform operations performed by the song label determination method is provided.
According to the song label determining method provided by the application, the computer device can automatically add the first type tag to the target song based on the frequency domain information of the song and the first type tag determination model, without manually adding tags to the target song; that is, the first type tags classify songs into different major categories. After that, the computer device can select the second type tag determination model corresponding to the first type tag, input the frequency domain information of the song into it, and have the second type tag determination model automatically add second type tags to the target song, thereby tagging the target song automatically and improving the efficiency of adding song tags.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of an implementation environment of a song label determining method according to an embodiment of the present application;
Fig. 2 is a flowchart of a song label determining method according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for determining a song label according to an embodiment of the present application;
FIG. 4 is a flowchart of a training method for a first type of tag determination model according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a first type of tag determination model according to an embodiment of the present application;
FIG. 6 is a flowchart of a training method for a second type of tag determination model according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of a second type of tag determination model according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a song label determining apparatus according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Artificial intelligence (AI) is the theory, method, technique and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge, and use that knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence studies the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning, and decision-making.
Artificial intelligence technology is a comprehensive discipline covering a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, electromechanical integration, and the like. Artificial intelligence software technologies mainly include computer vision, speech processing, natural language processing, and machine learning/deep learning.
Machine learning (ML) is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other disciplines. It studies how a computer can simulate or implement human learning behavior to acquire new knowledge or skills and reorganize existing knowledge structures to continuously improve its own performance. Machine learning is the core of artificial intelligence and the fundamental way to make computers intelligent; it is applied throughout all areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and learning from demonstrations.
The scheme provided by the embodiment of the application relates to artificial intelligence technologies such as machine learning, and is specifically described by the following embodiments:
fig. 1 is a schematic diagram of an implementation environment of a song label determining method according to an embodiment of the present application, and referring to fig. 1, the implementation environment includes a terminal 110 and a server 140.
The terminal 110 may be a smart phone, a tablet, a portable computer, or the like. The terminal 110 installs and runs an application program supporting the song label determination technique, such as a song playback application. The terminal 110 is an example of a terminal used by a user, and a user account is logged into the application running in the terminal 110.
Terminal 110 is connected to server 140 via a wireless network or a wired network.
Optionally, the server 140 includes an access server, a song label determination server, and a database. The access server provides access services for the terminal 110. The song label determination server provides background services related to song label determination. The database may include a song information database, a user information database, and so on; the different services provided by the server may correspond to different databases. There may be one or more song label determination servers. When there are a plurality of song label determination servers, at least two of them may provide different services, and/or at least two of them may provide the same service, for example in a load-balanced manner; the embodiment of the present application is not limited thereto.
Terminal 110 may refer broadly to one of a plurality of terminals, with the present embodiment being illustrated only by terminal 110.
Those skilled in the art will recognize that the number of terminals may be greater or smaller. For example, there may be only one terminal, or there may be tens or hundreds of terminals or more, in which case the implementation environment also includes those other terminals. The embodiment of the application does not limit the number of terminals or the device types.
Fig. 2 is a schematic flow chart of a song label determining method according to an embodiment of the present application, and fig. 3 is a flowchart of the same method. Referring to fig. 2 and fig. 3, the method includes:
301. the computer device obtains frequency domain information for the target song.
In one possible implementation, the computer device may obtain, from the song database, songs that have neither tags nor user comments, and take these songs as target songs.
In one possible implementation, the computer device may sample the target song at a target sampling frequency to obtain audio information of the target song. The computer device can then combine N sampling points into one audio frame, which speeds up subsequent processing of the audio signal; this process is called framing. N is the number of sampling points per frame and is a positive integer whose size can be set according to actual needs, for example 256 or 512; the size of N is not limited in the embodiment of the present application. In addition, when framing, an overlapping portion may be kept between two adjacent audio frames. This overlap is called the frame shift, and its size is related to N, for example 1/2 or 1/3 of N, which is likewise not limited in the embodiment of the present application. Framing in this way avoids excessive change between two adjacent audio frames, so that the computer device obtains more accurate results in subsequent processing of the audio information.
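The framing step can be illustrated with a short Python sketch; N = 256 and a frame shift of N/2 follow the example values above, and the function name is hypothetical:

```python
# Sketch of framing with a frame shift; n=256 samples per frame and
# shift=n/2 follow the example values in the text.

def frame_signal(samples, n=256, shift=128):
    """Split the sampled audio into overlapping frames of length n."""
    frames = []
    start = 0
    while start + n <= len(samples):
        frames.append(samples[start:start + n])
        start += shift  # adjacent frames overlap by n - shift samples
    return frames

frames = frame_signal(list(range(1024)))
```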
After framing, the computer device may window the audio frames; specifically, the computer device may multiply each audio frame by a window function to obtain a windowed audio frame. For example, after windowing, the computer device may express the target song as a discrete time sequence [T_1, T_2, ..., T_n], where each T_i represents the pitch of the target song at the i-th moment. The window function may be a Hamming window, a Hanning window, a triangular window, a Blackman window, or the like, which is not limited in the embodiment of the present application.
After windowing, the computer device may perform time-frequency conversion on the windowed audio frames, converting the audio information from the time domain to the frequency domain and obtaining a frequency spectrum for each audio frame. Through such time-frequency conversion, the computer device can acquire the characteristics of the audio signal more conveniently, which helps it further analyze the frequency domain information of the target song. For example, the computer device can use time-frequency conversion to express the discrete time sequence [T_1, T_2, ..., T_n] as a discrete frequency sequence [F_1, F_2, ..., F_n], where F_j is the amplitude of the target song at the j-th frequency. The time-frequency conversion may be a Fourier transform or a wavelet transform, which is not limited in the embodiment of the present application. It should be noted that the computer device may directly use the discrete frequency sequence [F_1, F_2, ..., F_n] as the frequency domain information of the target song, or may further process the sequence through the following two steps to obtain more accurate frequency domain information.
In the first step, the computer device may use a filter bank to filter the spectrum of each audio frame and reduce its dimensionality; that is, each audio frame is multiplied, frequency by frequency, with each filter in the filter bank and the products are accumulated, yielding the energy value of the audio frame in the frequency band corresponding to that filter. The number of filters in the filter bank may be set according to actual needs, which is not limited in the embodiment of the present application.
In the second step, the computer device may perform a discrete cosine transform on the reduced spectrum of each audio frame to obtain the frequency domain information of the target song. It should be noted that the above manner of obtaining the frequency domain information of the target song is merely an exemplary description provided in the embodiment of the present application; the computer device may also obtain the frequency domain information of the target song in other manners, which is not limited in the embodiment of the present application.
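The windowing and time-frequency conversion steps can be sketched in pure Python as follows; this uses a naive DFT for illustration only, and a real implementation would use an FFT library and would add the filter-bank and discrete-cosine-transform steps described above:

```python
import math

# Naive illustration of windowing and time-frequency conversion.
# Illustrative only: a real system would use an FFT plus the filter
# bank and discrete cosine transform described in the text.

def hamming(n):
    """Hamming window coefficients."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1))
            for i in range(n)]

def dft_magnitudes(frame):
    """Naive DFT; returns the amplitude |F_j| at each frequency bin j."""
    n = len(frame)
    mags = []
    for j in range(n // 2 + 1):  # non-redundant half of the spectrum
        re = sum(frame[i] * math.cos(2 * math.pi * j * i / n)
                 for i in range(n))
        im = -sum(frame[i] * math.sin(2 * math.pi * j * i / n)
                  for i in range(n))
        mags.append(math.hypot(re, im))
    return mags

n = 64
tone = [math.sin(2 * math.pi * 8 * i / n) for i in range(n)]  # 8 cycles/frame
windowed = [s * w for s, w in zip(tone, hamming(n))]
spectrum = dft_magnitudes(windowed)
# the spectral peak falls at bin j = 8, the tone's frequency
```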
302. The computer device inputs the frequency domain information into a first type tag determination model, and the first type tag determination model determines a first type tag of the target song based on the frequency domain information, wherein the first type tag is used for representing a tag type of a second type tag, and the second type tag is used for representing an audio feature of the target song.
The first type tag may be a preset tag used to represent the type of a second type tag; for example, a first type tag may be the emotion tag of a song or the style tag of a song. A second type tag is a tag belonging to some first type tag; for example, {happy, joyful} under the emotion tag of a song, {country music, blues} under the style tag, and {bass} under the pitch tag all belong to the second type tags. It should be noted that these specific tag names are set only for ease of understanding and do not unduly limit the present application.
In one possible implementation, the first type tag determination model is trained based on the frequency domain information of sample songs and the first type tags of those sample songs, so that the model acquires the capability of determining, from the frequency domain information of the target song, the first type tag to which the target song belongs. The first type tag determination model may be a classification model that decides, according to the frequency domain information of the target song, whether the target song belongs to a given first type tag. Each first type tag has its own corresponding first type tag determination model; that is, the number of first type tag determination models equals the number of first type tags. The model structures of these models may be the same while their model parameters differ, and each model is used to determine whether the target song belongs to one particular first type tag.
In the embodiment of the present application, the training process of the first type tag determination model may refer to 401-403, which is not described herein.
The classification model may be a shallow model or a deep model, wherein the shallow model may be a support vector machine (Support Vector Machine, SVM), a logistic regression (Logistic Regression, LR), a Decision Tree (DT), etc., and the deep model may be a convolutional neural network (Convolutional Neural Networks, CNN) and a recurrent neural network (Recurrent Neural Network, RNN), etc., which are not limited in the classification of the classification model according to the embodiments of the present application.
In one possible implementation, the computer device may input the frequency domain information into a first type of tag determination model, which predicts a first probability that the target song belongs to the first type of tag based on the frequency domain information, determines that the target song belongs to the first type of tag when the first probability is greater than the target probability, and determines that the target song does not belong to the first type of tag when the first probability is less than the target probability.
For ease of understanding, the embodiment of the present application will be described by taking the first type of tag determination model as the LR model as an example, but the present application is not limited thereto.
After the computer device inputs the frequency domain information into the first type tag determination model, the model computes on the frequency domain information using its trained weight and bias parameters, and then maps the result to the interval (0, 1) through a sigmoid function, obtaining the first probability that the target song belongs to the first type tag. The computer device may compare this first probability with the target probability: when the first probability is greater than the target probability, the target song is determined to belong to the first type tag; otherwise, it is determined not to belong to it. The target probability may be set according to the actual situation. Simply put, if higher precision of the model's predictions is desired, the target probability can be set higher; if greater coverage of the first type tag is desired, the target probability can be lowered accordingly.
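A minimal sketch of this sigmoid-and-threshold decision; the weights, bias, frequency-domain values, and target probabilities are all illustrative assumptions:

```python
import math

# Sketch of the sigmoid-and-threshold decision; weights, bias,
# input values, and target probabilities are assumptions.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_first_tag(freq_info, weights, bias, target_prob=0.5):
    """True if the first probability exceeds the target probability."""
    z = sum(w * x for w, x in zip(weights, freq_info)) + bias
    return sigmoid(z) > target_prob

belongs = predict_first_tag([0.3, 0.7], weights=[1.2, -0.4], bias=0.5)
# raising the target probability trades coverage for precision
strict = predict_first_tag([0.3, 0.7], weights=[1.2, -0.4], bias=0.5,
                           target_prob=0.9)
```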
303. The computer device determines a second type tag determination model corresponding to the first type tag based on the first type tag of the target song.
The first type of tag is used to represent the category of the second type of tag, that is, there are one or more second type of tags under each first type of tag. In one possible implementation, the computer device may determine a plurality of second type tags corresponding to the first type tags based on the first type tags, thereby determining a second type tag determination model corresponding to the first type tags, that is, each first type tag corresponds to a different second type tag determination model.
In a possible implementation, the second type of tag determination model is trained based on the frequency domain information of the sample song and the second type of tag to which the sample song belongs, so that the second type of tag determination model has the capability of determining the second type of the target song based on the frequency domain information of the target song.
304. The computer device inputs the frequency domain information into a second type of tag determination model, which determines at least one second type of tag of the target song based on the frequency domain information.
Specifically, the computer device may input the frequency domain information of the target song into the second type tag determination model, which outputs probability information for a plurality of second type tags. At least one second type tag whose probability meets the target probability information is determined as a second type tag of the target song. In addition, the second type tags are in fact generated based on comment text of the sample songs; the specific generation steps can be seen in 601-603.
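Selecting the second type tags that meet the target probability information could look like the following sketch; the tag names and the 0.5 threshold are illustrative assumptions:

```python
# Sketch of choosing second type tags from the model's probability
# output; tag names and the 0.5 threshold are illustrative.

def select_second_tags(probabilities, target_prob=0.5):
    """Keep every tag whose probability meets the target condition."""
    return [tag for tag, p in probabilities.items() if p >= target_prob]

probs = {"happy": 0.8, "pleasant": 0.6, "sad": 0.1}
tags = select_second_tags(probs)
```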
In the embodiment of the present application, the training process of the second type tag determination model may refer to 601-606, which is not described herein.
In one possible implementation, the second type of tag determination model may be a sequence annotation model. The sequence annotation model may employ CNN, RNN or conditional random field (Conditional Random Field, CRF).
For ease of understanding, the second type tag determination model is described below taking a CRF model as an example. Of course, any of the above models, or any model capable of sequence labeling, may serve as the second type tag determination model, which is not limited in the embodiment of the present application.
In one possible implementation, the computer device inputs the frequency domain information of the target song into the trained second type tag determination model, which substitutes the frequency domain information into the feature function of each second type tag and scores the result output by each feature function. When a scoring result meets the target scoring result, the target song is determined to carry the second type tag corresponding to that feature function; when the scoring result does not meet the target scoring result, the target song is determined not to carry that tag. In essence, the second type tag determination model outputs a sequence, for example (1, 1, 0): if the first position indicates happy, the second indicates pleasant, and the third indicates sad, with 1 meaning the tag applies and 0 meaning it does not, then the second type tags of the target song are determined to be happy and pleasant.
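Decoding the output sequence from the (1, 1, 0) example can be sketched as follows; the tag order is an assumed, fixed ordering of the second type tags:

```python
# Sketch of decoding the model's output sequence back into tag names;
# the tag ordering is an illustrative assumption.

def decode_tag_sequence(bits, tag_names):
    """Map positions holding 1 back to their tag names."""
    return [name for bit, name in zip(bits, tag_names) if bit == 1]

tags = decode_tag_sequence((1, 1, 0), ["happy", "pleasant", "sad"])
```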
According to the method provided in the embodiments of the present application, the computer device can automatically add tags to a target song based on its frequency domain information, which greatly improves tagging efficiency and lays a good foundation for the subsequent song recommendation function. Meanwhile, because tags are added to the target song based on the trained first type tag determination model and second type tag determination model, the strong generalization capability of deep learning models can be exploited to tag more types of target songs.
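The overall two-stage flow can be sketched as follows; the model objects and their predict() interfaces are hypothetical stand-ins, not the actual trained models:

```python
# Sketch of the two-stage tagging pipeline: the first-type model picks a
# tag category, which selects the second-type model that produces the tags.
class _Stub:
    """Hypothetical stand-in for a trained model."""
    def __init__(self, out):
        self.out = out
    def predict(self, freq_info):
        return self.out

def determine_song_tags(freq_info, first_type_model, second_type_models):
    """Return (first_type_tag, second_type_tags) for frequency-domain info."""
    first_tag = first_type_model.predict(freq_info)   # e.g. "emotion"
    second_model = second_type_models[first_tag]      # model keyed by first-type tag
    return first_tag, second_model.predict(freq_info)

result = determine_song_tags([0.1, 0.2],
                             _Stub("emotion"),
                             {"emotion": _Stub(["happy", "pleasant"])})
print(result)  # ('emotion', ['happy', 'pleasant'])
```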
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
Fig. 4 is a flowchart of a training method of a first type of tag determination model according to an embodiment of the present application, referring to fig. 4, the method includes:
401. the computer device obtains frequency domain information for a plurality of sample songs.
The sample songs may be songs that have been commented on by users and have been tagged with a first type tag and a second type tag.
In one possible implementation, the frequency domain information of a sample song may be obtained in a manner similar to step 301, which is not repeated here.
402. The computer device inputs frequency domain information of any one sample song into a first initial model, predicts first probability information of any one first type label of the sample song based on the frequency domain information of any one sample song by the first initial model, and determines a predicted first type label of the sample song based on the first probability information.
In one possible implementation, the computer device may generate a plurality of first initial models based on the number of first type tags, each first initial model being stored in binding with its corresponding first type tag. When training the first initial model corresponding to a certain first type tag, the computer device may directly call that first initial model and input the frequency domain information of a sample song into it. The first initial model performs an operation based on its initialized model parameters and the frequency domain information of the sample song to obtain first probability information representing that the sample song belongs to the first type tag, and compares the first probability information with first preset probability information. When the first probability information is greater than the first preset probability information, the first initial model outputs information that the sample song belongs to the first type tag, and that first type tag is the predicted first type tag; otherwise, the first initial model outputs information that the sample song does not belong to the first type tag. In other words, the first initial model is a two-class model: each output indicates either that the sample song belongs to the first type tag or that it does not. For example, if the first initial model, operating on its initialized model parameters and the frequency domain information of the sample song, obtains first probability information of 40 that the sample song belongs to a certain first type tag, and the first preset probability information is 35, the first initial model may output information (such as 1) that the sample song belongs to that first type tag, and that first type tag is the predicted first type tag of the sample song.
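The threshold decision described above can be sketched as follows; the score of 40 and threshold of 35 follow the example in the text:

```python
# Minimal sketch of the two-class decision: the model's score for one
# first-type tag is compared with a preset threshold.
def predict_first_type(score, threshold):
    """Return 1 if the song is judged to carry the tag, else 0."""
    return 1 if score > threshold else 0

print(predict_first_type(40, 35))  # 1: the song is assigned the tag
print(predict_first_type(30, 35))  # 0: the song is not assigned the tag
```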
403. The computer device adjusts the model parameters of the first initial model based on first difference information between the first type tag of the sample song and the predicted first type tag of the sample song, stops training the first initial model when its model parameters meet the target cut-off condition, and uses the trained first initial model as the first type tag determination model.
In one possible implementation, the computer device may compare the actual first type tag of the sample song with the predicted first type tag output by the first initial model to obtain the first difference information, and adjust the model parameters based on it. The computer device then predicts based on the frequency domain information of the next sample song to obtain a predicted first type tag for that song, and adjusts the model parameters again based on the difference between the predicted and actual first type tags of the next sample song, until the number of iterations of the model reaches a preset number or the number of times the model successfully predicts the first type tag reaches a target value, at which point training stops and the model is taken as the first type tag determination model. The structure of a specific first type tag determination model can be seen in fig. 5.
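The train-until-cutoff loop can be sketched as follows, with hypothetical predict/update callables standing in for the real model:

```python
# Hedged sketch of the stop condition: training ends when either the
# iteration count or the count of successful predictions reaches its target.
def train_until_cutoff(samples, predict, update, max_iters, success_target):
    successes = 0
    for i, (features, true_tag) in enumerate(samples, start=1):
        if predict(features) == true_tag:
            successes += 1
        else:
            update(features, true_tag)  # adjust parameters on a miss
        if i >= max_iters or successes >= success_target:
            break
    return successes

# Toy run: a constant predictor that always answers 1.
hits = train_until_cutoff([(0.1, 1), (0.2, 0), (0.3, 1)],
                          predict=lambda f: 1,
                          update=lambda f, t: None,
                          max_iters=10, success_target=2)
print(hits)  # 2
```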
Fig. 6 is a flowchart of a training method for a second type of tag determination model according to an embodiment of the present application, with reference to fig. 6, the method includes:
601. The computer device obtains a tag dictionary corresponding to each first type tag, each tag dictionary being generated based on text information tagged with the corresponding first type tag.
The first type tags may be preset. For example, a first type tag may be an emotion tag or a style tag of a song; the first type tags are used to classify the second type tags.
In one possible implementation, the computer device may obtain target text information, that is, text information tagged with a first type tag. For example, if the first type tag is "mood", the computer device may obtain text related to "mood" from the network as the target text information. The computer device may obtain the target text information based on a web crawler, or by other means; the method for obtaining the target text information is not limited in the embodiments of the present application.
After obtaining the target text information, the computer device may obtain a set of tag words based on it, the tag words being obtained from combinations of consecutive characters in the target text information. Specifically, the computer device may combine consecutive characters of the target text into runs of different lengths to obtain multiple character sets; taking the first type tag "mood" as an example, the character sets may include partial and full character combinations of emotion-related words such as "happy" and "pleasant".
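The combination of consecutive characters into candidate sets can be sketched as follows; the input text and maximum run length are illustrative:

```python
# Collect candidate tag words as combinations of consecutive characters of
# different lengths (character n-grams over the target text).
def char_ngrams(text, max_len):
    """All contiguous character runs of length 2..max_len."""
    return [text[i:i + n]
            for n in range(2, max_len + 1)
            for i in range(len(text) - n + 1)]

print(char_ngrams("glad", 3))  # ['gl', 'la', 'ad', 'gla', 'lad']
```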
After obtaining the plurality of character sets, the computer device may obtain the pointwise mutual information (PMI) of each set of consecutive characters and determine, based on the PMI value, whether the collocation between the characters is reasonable. For example, of two candidate collocations, the computer device may determine the one with the higher PMI value to be the more accurate, more commonly used character set. The PMI value may be calculated by formula (1):

PMI(x, y) = log( P(x, y) / ( P(x) · P(y) ) ) = log( ( F(x, y) / K ) / ( ( F(x) / K ) · ( F(y) / K ) ) )      (1)

Wherein x and y represent two consecutive characters; P(x) and P(y) respectively represent the occurrence probability of character x and character y in all text information tagged with the first type tag; F(x) and F(y) represent the frequency of occurrence of character x and character y in that text information; F(x, y) represents the frequency with which characters x and y occur together in all text information tagged with the first type tag; and K represents the total number of characters of all text information tagged with the first type tag.
In one possible implementation, the computer device may determine the character sets whose PMI value is higher than a target PMI value as target tag words, and form the tag dictionary from the plurality of target tag words.
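Assuming the standard pointwise mutual information definition consistent with the term descriptions above (probabilities estimated as frequencies over K characters), the PMI screen can be sketched as:

```python
import math

# PMI(x, y) = log(P(x, y) / (P(x) * P(y))), with probabilities estimated
# from frequencies over a corpus of K characters.
def pmi(f_x, f_y, f_xy, k):
    return math.log((f_xy / k) / ((f_x / k) * (f_y / k)))

def build_tag_dictionary(candidates, target_pmi):
    """Keep character sets whose PMI exceeds the target value.

    candidates: {word: (f_x, f_y, f_xy, k)} — hypothetical frequency stats.
    """
    return [word for word, stats in candidates.items()
            if pmi(*stats) > target_pmi]

stats = {"happy": (10, 10, 10, 100),   # characters almost always co-occur
         "hapsad": (50, 50, 1, 100)}   # characters rarely co-occur
print(build_tag_dictionary(stats, 1.0))  # ['happy']
```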
602. And the computer equipment matches each tag word contained in the tag dictionary with comment text of the sample song to obtain at least one tag word.
In one possible implementation, the computer device may obtain the comment text of the sample song, match the tag words in the tag dictionary against the comment text, and collect the matched tag words into a tag word set. For example, when the tag dictionary is an emotion tag dictionary, the computer device may match the emotion tag words in the emotion tag dictionary against the comment text of the sample song; if tag words such as "happy", "pleasant", and "wounded" are matched, the computer device may take these tag words as the tag word set.
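The dictionary-to-comment matching can be sketched as follows; the dictionary entries and comments are illustrative:

```python
from collections import Counter

# Match tag-dictionary entries against comment texts and count occurrences.
def match_tag_words(tag_dictionary, comments):
    counts = Counter()
    for comment in comments:
        for word in tag_dictionary:
            counts[word] += comment.count(word)
    return counts

counts = match_tag_words(["happy", "sad"],
                         ["so happy today", "happy and bright"])
print(counts["happy"])  # 2
```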
603. The computer device determines the relevance of the at least one tag word to the comment text of the sample song, and determines the tag words whose relevance meets the target condition as second type tags of the sample song.
In one possible implementation, the computer device may determine the relevance between a tag word and the comment text of the sample song based on the number of times the matched tag word appears in the comment text, and determine the tag words whose relevance is greater than a target relevance as second type tags of the sample song; the target relevance may be set according to actual needs, which is not limited in the embodiments of the present application. For example, suppose that after matching the comment text of the sample song against the tag dictionary, the computer device obtains a tag word set of five tag words: "happy" appearing 20 times, "pleasant" appearing 23 times, "hi" appearing 13 times, "hard" appearing 3 times, and "wounded" appearing 5 times. If the target relevance is 15, the computer device may take "happy" and "pleasant" as the second type tags of the sample song.
In one possible implementation, when a sample song has relatively few comment texts and few matched tag words, the computer device may directly use the tag word with the largest number of occurrences as the second type tag of the target song. Alternatively, before determining the sample songs, the computer device may filter songs based on the number of comment texts, taking songs that meet a target number of comment texts as sample songs.
In one possible implementation, the computer device may derive the relevance of a tag word to the comment text of the sample song from the relationship between the Term Frequency (TF) of the tag word and its inverse document frequency (Inverse Document Frequency, IDF). Specifically, the computer device may divide the number of occurrences of the tag word in the comment text of the sample song by the total number of words in that comment text to obtain the term frequency of the tag word. The computer device may then determine a first number, the number of target texts contained in the target text information, and a second number, the number of target texts in which the tag word appears, and divide the first number by the second number to obtain the inverse document frequency of the tag word. The computer device sums the term frequency and the inverse document frequency of the tag word, and takes the result as the relevance of the tag word to the comment text of the sample song.
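A literal sketch of the relevance computation as worded above; note that conventional TF-IDF takes the logarithm of the document-count quotient and multiplies rather than sums, but the sketch follows the text's wording:

```python
# Relevance as described in the text: term frequency plus the quotient of
# document counts. (Conventional TF-IDF would use tf * log(N / n) instead.)
def relevance(tag_word_count, total_words, total_docs, docs_with_word):
    tf = tag_word_count / total_words      # occurrences / total vocabulary
    idf = total_docs / docs_with_word      # first number / second number
    return tf + idf

print(relevance(20, 200, 50, 10))  # 0.1 + 5.0 = 5.1
```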
It should be noted that, the relevance between the tag word and the text of the comment of the sample song may be determined by any of the above methods, which is not limited in the embodiment of the present application.
604. The computer device obtains frequency domain information for a plurality of sample songs.
In a possible implementation manner, the sample song frequency domain information may be obtained in a similar manner to step 301, which is not described herein.
605. The computer equipment inputs the frequency domain information of any sample song into a second initial model corresponding to the first type label to which the sample song belongs, the second initial model predicts second probability information of any second type label to which the sample song belongs based on the frequency domain information of any sample song, and the predicted second type label of the sample song is determined based on the second probability information.
Since the first type tag is used to represent the category of the second type tags, each first type tag may correspond to a plurality of second type tags; that is, the second initial model corresponding to a first type tag is in effect a model for determining the second probability information of the second type tags under that first type tag.
In one possible implementation, the computer device may generate a plurality of second initial models based on the number of first type tags, each second initial model being stored in binding with its corresponding first type tag. When training the second initial model corresponding to a certain first type tag, the computer device may directly call that second initial model and input the frequency domain information of a sample song into it. The second initial model performs an operation based on its initialized model parameters and the frequency domain information of the sample song to obtain second probability information representing that the sample song belongs to at least one second type tag under the first type tag, and compares the second probability information with second preset probability information. When the second probability information is greater than the second preset probability information, the second initial model outputs information that the sample song belongs to the corresponding second type tag, namely the predicted second type tag of the sample song; otherwise it outputs information that the sample song does not belong to that second type tag. For example, if the second initial model, operating on its initialized model parameters and the frequency domain information of the sample song, obtains second probability information of 40 that the sample song belongs to a certain second type tag, and the second preset probability information is 35, the second initial model may output information (such as 1) that the sample song belongs to that second type tag.
It should be noted that the second initial model may output the second probability information corresponding to a plurality of second type tags at the same time. For example, the second initial model may be a conditional random field (Conditional Random Field, CRF) model and output a set of sequences such as (1, 1, 0), where 1 indicates that the song belongs to a certain second type tag, 0 indicates that it does not, and the positions in the sequence indicate the different second type tags; the computer device may then determine the second type tags of the sample song based on the sequence output by the second initial model.
606. The computer device adjusts the model parameters of the second initial model based on second difference information between the second type tag of the sample song and the predicted second type tag of the sample song, stops training the second initial model when its model parameters meet the target cut-off condition, and uses the trained second initial model as the second type tag determination model.
In one possible implementation, the computer device may compare the actual second type tag of the sample song with the predicted second type tag output by the second initial model to obtain the second difference information, and adjust the model parameters based on it. The computer device then predicts based on the frequency domain information of the next sample song to obtain a predicted second type tag for that song, and adjusts the model parameters again based on the second difference information between the predicted and actual second type tags of the next sample song, until the number of iterations of the model reaches a preset number or the loss function of the model reaches a target value, at which point training stops and the model is taken as the second type tag determination model. The structure of a specific second type tag determination model can be seen in fig. 7.
In the embodiments of the present application, the computer device obtains the corresponding tag dictionary based on the first type tag and automatically extracts the corresponding second type tags from the comment text of the sample song, which further simplifies the step of adding tags to the target song and improves tagging efficiency.
Fig. 8 is a block diagram of a song label determining apparatus according to an embodiment of the present application, referring to fig. 8, the apparatus includes: an acquisition module 801, a first type tag determination module 802, a first determination module 803, and a second type tag determination module 804.
An acquisition module 801, configured to acquire frequency domain information of a target song;
a first type tag determining module 802, configured to input frequency domain information into a first type tag determining model, and determine, by the first type tag determining model, a first type tag of the target song based on the frequency domain information, where the first type tag is used to represent a tag type of a second type tag, and the second type tag is used to represent a characteristic of the target song;
a first determining module 803, configured to determine, based on a first type tag of the target song, a second type tag determination model corresponding to the first type tag;
a second type tag determination module 804 is configured to input the frequency domain information into a second type tag determination model, and determine at least one second type tag of the target song based on the frequency domain information by the second type tag determination model.
In one possible embodiment, the apparatus further comprises:
the training module is used for performing model training based on a sample data set to obtain the second type tag determination model, wherein the sample data set comprises audio signals of a plurality of sample songs and the second type tags of each sample song, and each sample song is a song that has been commented on by users.
In one possible embodiment, the apparatus further comprises:
the tag dictionary acquisition module is used for acquiring a tag dictionary corresponding to each first type tag, and the tag dictionary is generated based on text information marked with each first type tag;
the matching module is used for matching each tag word contained in the tag dictionary with comment text of the sample song to obtain at least one tag word;
the second determining module is used for determining the relativity between at least one tag word and comment text of the sample song;
and the third determining module is used for determining the tag words with the relevance meeting the target condition as the second type tags of the sample songs.
In one possible implementation, the training module includes:
the first input unit is used for inputting frequency domain information of any sample song into an initial model corresponding to a first type tag to which the sample song belongs;
The prediction unit is used for predicting probability information of any second type label of the sample song based on frequency domain information of any sample song by the initial model;
a first determining unit, configured to determine a predicted second type tag of the sample song based on the probability information;
the adjusting unit is used for adjusting the model parameters of the initial model based on the second type label of the sample song and the difference information of the predicted second type label of the sample song until the model parameters of the initial model meet the target cut-off condition, stopping training the initial model, and taking the trained initial model as the second type label determination model.
In one possible implementation, the second type of tag determination module includes:
a second input unit for inputting the frequency domain information into a second type tag determination model, and outputting probability information of a plurality of second type tags by the second type tag determination model;
and the second determining unit is used for determining at least one second type tag conforming to the target probability information as at least one second type tag of the target song.
In one possible implementation, the first type of tag determination model is a classification model.
In one possible implementation, the second type of tag determination model is a sequence annotation model.
With the apparatus provided in the embodiments of the present application, the computer device can automatically add tags to a target song based on its frequency domain information, which greatly improves tagging efficiency and lays a good foundation for the subsequent song recommendation function. Meanwhile, because tags are added to the target song based on the trained first type tag determination model and second type tag determination model, the strong generalization capability of deep learning models can be exploited to tag more types of target songs.
It should be noted that when the song label determining apparatus provided in the above embodiment determines a song label, the division into the above functional modules is used only as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the computer device may be divided into different functional modules to perform all or part of the functions described above. In addition, the song label determining apparatus provided in the above embodiment and the song label determining method embodiment belong to the same concept; the specific implementation process is detailed in the method embodiment and is not repeated here.
Fig. 9 is a schematic structural diagram of a computer device according to an embodiment of the present application, where the computer device 900 may have a relatively large difference due to configuration or performance, and may include one or more processors (Central Processing Units, CPU) 901 and one or more memories 902, where at least one program code is stored in the one or more memories 902, and the at least one program code is loaded and executed by the one or more processors 901 to implement the methods provided in the foregoing method embodiments. Of course, the computer device 900 may also have a wired or wireless network interface, a keyboard, an input/output interface, and other components for implementing the functions of the device, which are not described herein.
In an exemplary embodiment, a storage medium, such as a memory, including program code executable by a processor to perform the song label determination method of the above-described embodiment is also provided. For example, the storage medium may be Read-Only Memory (ROM), random-access Memory (Random Access Memory, RAM), compact disc Read-Only (Compact Disc Read-Only Memory, CD-ROM), magnetic tape, floppy disk, optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by program code related hardware, and the program may be stored in a storage medium, where the storage medium may be a read only memory, a magnetic disk or optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements within the spirit and principles of the present application.

Claims (12)

1. A method of determining a song label, the method comprising:
acquiring frequency domain information of a target song;
Inputting the frequency domain information into a first type tag determination model, and determining a first type tag of the target song based on the frequency domain information by the first type tag determination model, wherein the first type tag is used for representing a tag type of a second type tag, and the second type tag is used for representing an audio feature of the target song;
determining a second type tag determination model corresponding to the first type tag based on the first type tag of the target song;
inputting the frequency domain information into the second type tag determination model, and determining at least one second type tag of the target song based on the frequency domain information by the second type tag determination model;
the second type label determining model is obtained by model training based on a sample data set, wherein the sample data set comprises frequency domain information of a plurality of sample songs and second type labels of each sample song, and each sample song is a song reviewed by a user; the second type of labels are label words, wherein the relevance between the label words and comment texts of the sample songs meets target conditions, the label words are obtained by matching each label word contained in a label dictionary with the comment texts of the sample songs, the label dictionary is generated based on text information marked with the first type of labels, and the label dictionary corresponds to the first type of labels.
2. The method according to claim 1, wherein the method further comprises:
inputting the frequency domain information of any sample song into an initial model corresponding to a first type tag to which the sample song belongs;
predicting probability information of any second type tag of the sample song based on the frequency domain information of any sample song by the initial model;
determining a predicted second type tag for the sample song based on the probability information;
and adjusting model parameters of the initial model based on the difference information of the second type label of the sample song and the predicted second type label of the sample song until the model parameters of the initial model meet the target cut-off condition, stopping training the initial model, and taking the trained initial model as the second type label determination model.
3. The method of claim 1, wherein the inputting the frequency domain information into the second type of tag determination model, determining, by the second type of tag determination model, at least one second type of tag of the target song based on the frequency domain information comprises:
inputting the frequency domain information into the second type tag determination model, and outputting probability information of a plurality of second type tags by the second type tag determination model;
At least one second type tag that meets the target probability information is determined as at least one second type tag for the target song.
4. The method of claim 1, wherein the first type of tag determination model is a classification model.
5. The method of claim 1, wherein the second type of tag determination model is a sequence annotation model.
6. A song label determining apparatus, the apparatus comprising:
the acquisition module is used for acquiring the frequency domain information of the target song;
the first type tag determining module is used for inputting the frequency domain information into a first type tag determining model, and determining a first type tag of the target song based on the frequency domain information by the first type tag determining model, wherein the first type tag is used for representing the tag type of a second type tag, and the second type tag is used for representing the audio characteristics of the target song;
the first determining module is used for determining a second type tag determining model corresponding to the first type tag based on the first type tag of the target song;
a second type tag determination module for inputting the frequency domain information into the second type tag determination model, determining at least one second type tag of the target song based on the frequency domain information by the second type tag determination model;
The training module is used for carrying out model training based on a sample data set to obtain the second type tag determination model, wherein the sample data set comprises frequency domain information of a plurality of sample songs and second type tags of each sample song, and each sample song is a song which is reviewed by a user;
the label dictionary obtaining module is used for obtaining a label dictionary corresponding to each first type label, and the label dictionary is generated based on text information marked with each first type label;
the matching module is used for matching each tag word contained in the tag dictionary with the comment text of the sample song to obtain at least one tag word;
a second determining module, configured to determine a relevance between the at least one tag word and comment text of the sample song;
and the third determining module is used for determining the tag words with the relevance meeting the target condition as the second type tags of the sample songs.
7. The apparatus of claim 6, wherein the training module comprises:
the first input unit is used for inputting frequency domain information of any sample song into an initial model corresponding to a first type tag to which the sample song belongs;
The prediction unit is used for predicting probability information of any second type tag of the sample song based on the frequency domain information of any sample song by the initial model;
a first determining unit, configured to determine a predicted second type tag of the sample song based on the probability information;
and the adjusting unit is used for adjusting the model parameters of the initial model based on the difference information of the second type label of the sample song and the predicted second type label of the sample song until the model parameters of the initial model meet the target cut-off condition, stopping training the initial model, and taking the trained initial model as the second type label determination model.
8. The apparatus of claim 6, wherein the second type of tag determination module comprises:
a second input unit for inputting the frequency domain information into the second type tag determination model, and outputting probability information of a plurality of second type tags by the second type tag determination model;
and the second determining unit is used for determining at least one second type tag conforming to the target probability information as at least one second type tag of the target song.
9. The apparatus of claim 6, wherein the first type of tag determination model is a classification model.
10. The apparatus of claim 6, wherein the second type of tag determination model is a sequence annotation model.
11. A computer device comprising one or more processors and one or more memories, the one or more memories having stored therein at least one program code that is loaded and executed by the one or more processors to implement the operations performed by the song label determination method of any of claims 1-5.
12. A storage medium having stored therein at least one program code loaded and executed by a processor to perform the operations performed by the song label determination method of any one of claims 1 to 5.
CN201911261720.2A 2019-12-10 2019-12-10 Song label determining method, device, computer equipment and storage medium Active CN111026908B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911261720.2A CN111026908B (en) 2019-12-10 2019-12-10 Song label determining method, device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111026908A CN111026908A (en) 2020-04-17
CN111026908B true CN111026908B (en) 2023-09-08

Family

ID=70208753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911261720.2A Active CN111026908B (en) 2019-12-10 2019-12-10 Song label determining method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111026908B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112100432B (en) * 2020-09-17 2024-04-09 咪咕文化科技有限公司 Sample data acquisition method, feature extraction method, processing device and storage medium
CN112906369A (en) * 2021-02-19 2021-06-04 脸萌有限公司 Lyric file generation method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20170091888A (en) * 2016-02-02 2017-08-10 네이버 주식회사 Method and system for automatically tagging themes suited for songs
CN107943865A (en) * 2017-11-10 2018-04-20 阿基米德(上海)传媒有限公司 It is a kind of to be suitable for more scenes, the audio classification labels method and system of polymorphic type
CN107967280A (en) * 2016-10-19 2018-04-27 北京酷我科技有限公司 A kind of method and system of label recommendations song
WO2018196561A1 (en) * 2017-04-25 2018-11-01 腾讯科技(深圳)有限公司 Label information generating method and device for application and storage medium
CN109063069A (en) * 2018-07-23 2018-12-21 天翼爱音乐文化科技有限公司 Song label determines method, apparatus, computer equipment and readable storage medium storing program for executing
CN109271521A (en) * 2018-11-16 2019-01-25 北京九狐时代智能科技有限公司 A kind of file classification method and device
CN109918662A (en) * 2019-03-04 2019-06-21 腾讯科技(深圳)有限公司 A kind of label of e-sourcing determines method, apparatus and readable medium
CN109977255A (en) * 2019-02-22 2019-07-05 北京奇艺世纪科技有限公司 Model generating method, audio-frequency processing method, device, terminal and storage medium
CN110163245A (en) * 2019-04-08 2019-08-23 阿里巴巴集团控股有限公司 Class of service prediction technique and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7672916B2 (en) * 2005-08-16 2010-03-02 The Trustees Of Columbia University In The City Of New York Methods, systems, and media for music classification
US20190028766A1 (en) * 2017-07-18 2019-01-24 Audible Magic Corporation Media classification for media identification and licensing
CN107832305A (en) * 2017-11-28 2018-03-23 百度在线网络技术(北京)有限公司 Method and apparatus for generating information

Also Published As

Publication number Publication date
CN111026908A (en) 2020-04-17

Similar Documents

Publication Publication Date Title
US8112418B2 (en) Generating audio annotations for search and retrieval
CN111708869B (en) Processing method and device for man-machine conversation
Tran et al. Ensemble application of ELM and GPU for real-time multimodal sentiment analysis
CN109325040B (en) FAQ question-answer library generalization method, device and equipment
CN110851650B (en) Comment output method and device and computer storage medium
Ashraf et al. A globally regularized joint neural architecture for music classification
CN111694940A (en) User report generation method and terminal equipment
CN111666376B (en) Answer generation method and device based on paragraph boundary scan prediction and word shift distance cluster matching
CN113157885B (en) Efficient intelligent question-answering system oriented to knowledge in artificial intelligence field
CN110334186A (en) Data query method, apparatus, computer equipment and computer readable storage medium
CN113505204A (en) Recall model training method, search recall device and computer equipment
CN111026908B (en) Song label determining method, device, computer equipment and storage medium
CN111079418A (en) Named body recognition method and device, electronic equipment and storage medium
CN112364125A (en) Text information extraction system and method combining reading course learning mechanism
CN115270797A (en) Text entity extraction method and system based on self-training semi-supervised learning
CN116150306A (en) Training method of question-answering robot, question-answering method and device
Arronte Alvarez et al. Distributed vector representations of folksong motifs
CN115359785A (en) Audio recognition method and device, computer equipment and computer-readable storage medium
Ren Pop music trend and image analysis based on big data technology
Kipyatkova et al. Experimenting with attention mechanisms in joint CTC-attention models for Russian speech recognition
Chen et al. Design of automatic extraction algorithm of knowledge points for MOOCs
Yang Personalized Song recommendation system based on vocal characteristics
CN115114910B (en) Text processing method, device, equipment, storage medium and product
CN117235237B (en) Text generation method and related device
CN117540004B (en) Industrial domain intelligent question-answering method and system based on knowledge graph and user behavior

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40022485

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant