CN113688309A - Training method for generating model and generation method and device for recommendation reason

Info

Publication number
CN113688309A
Authority
CN
China
Prior art keywords
network model
word
recommendation
result
comment
Prior art date
Legal status
Granted
Application number
CN202110838589.2A
Other languages
Chinese (zh)
Other versions
CN113688309B (en)
Inventor
王姿雯
王思睿
易根良
张富峥
武威
Current Assignee
Beijing Sankuai Online Technology Co Ltd
Original Assignee
Beijing Sankuai Online Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sankuai Online Technology Co Ltd filed Critical Beijing Sankuai Online Technology Co Ltd
Priority to CN202110838589.2A priority Critical patent/CN113688309B/en
Publication of CN113688309A publication Critical patent/CN113688309A/en
Application granted granted Critical
Publication of CN113688309B publication Critical patent/CN113688309B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a training method for a recommendation reason generation model, and a recommendation reason generation method and device. The training method comprises: training a generator network model, a first discriminator network model and a second discriminator network model according to training sample data until a convergence condition is met. The first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; the second discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the label click features. The embodiment of the invention takes user features as training sample data, that is, label click features are introduced into the training process of the generation model. The second discriminator network model judges whether the recommendation reason output by the generator network model belongs to the label click features, and the label click features guide the training of the generation model, so that the generation model can generate recommendation reasons with a high click-through rate.

Description

Training method for generating model and generation method and device for recommendation reason
Technical Field
The invention relates to the field of internet technology, and in particular to a training method and apparatus for a recommendation reason generation model, and a recommendation reason generation method and apparatus.
Background
Recommendation reasons play a great role in helping users quickly understand the characteristics of a merchant, assisting users in making visit decisions, and promoting content consumption. At present, recommendation reasons are applied in many sections such as search and recommendation, and play a positive role in improving click-through rate and conversion rate.
In the related art, the merchant recommendation reason is mainly obtained through the following schemes:
(1) The method of manual writing: recommendation reasons are written by professional operators as professionally generated content (PGC for short), which ensures high quality and rich expression.
(2) The comment extraction method: recommendation reasons are extracted from the merchant's high-quality user reviews. This scheme can make full use of the massive user generated content (UGC for short) of the comment service to obtain recommendation reasons that are closer to the user's perspective and more credible.
(3) The template filling method: recommendation reasons are obtained by filling user and merchant information into templates designed by professional operators, for example, "users from [city name] all love this time-honored shop with X years of history". The quality of this scheme is controllable, and it can display personalized user information and bring a sense of surprise.
(4) The text generation scheme: merchant information, user comments and the like are used as input, existing high-quality recommendation reasons are used as samples, and recommendation reasons are generated by a model trained under a sequence-to-sequence framework.
However, the above solutions all have technical drawbacks:
(1) the method of manual writing: this solution requires a lot of time and labor costs and cannot be individually customized for users with different preferences.
(2) The comment extraction method: this scheme depends on the amount of high-quality UGC a merchant has; for merchants in third-tier or lower cities, or for new stores, it is difficult to extract a sufficient amount of high-quality UGC.
(3) The template filling method: the language form of this scheme is relatively monotonous.
(4) The text generation scheme: existing text generation schemes rarely take user characteristics into consideration, and the generation target usually considers only language model metrics. However, the quality of the language model and the performance of online metrics are not fully equivalent; when this scheme is used alone, the quality of online generation is uncontrollable and bad cases are easily produced.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a method and an apparatus for training a recommendation reason generation model, and a recommendation reason generation method and an apparatus, which overcome the above problems or at least partially solve the above problems.
In order to solve the above problem, according to a first aspect of an embodiment of the present invention, a training method for a generative model of a recommendation reason is disclosed, including: acquiring training sample data, wherein the training sample data comprises user characteristics and comment labeling texts of POI, and the user characteristics comprise a label clicking characteristic; training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions; the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text or not; the second discriminator is used for judging whether the recommendation reason output by the generator network model belongs to the label clicking characteristics.
Optionally, the training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data includes: inputting the training sample data to the generator network model; coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason; generating word embedding vectors of recommended words of the recommendation reason according to the probability distribution result; and inputting the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
Optionally, the obtaining, by performing encoding processing and decoding processing on the training sample data based on the generator network model, a probability distribution result of each recommended word of the recommendation reason includes: respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data; and decoding the coding result based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason.
Optionally, the obtaining an encoding result of the training sample data by respectively encoding the word embedding vector of the user feature and the word embedding vector of the comment annotation text based on the generator network model includes: coding the word embedding vector of the user characteristic based on the generator network model to obtain a coding result of the user characteristic; coding the word embedded vector of the comment annotation text based on the generator network model to obtain a coding result of the comment annotation text; and splicing the coding result of the user characteristic and the coding result of the comment labeling text into the coding result of the training sample data.
Optionally, the decoding the encoding result based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason includes: decoding the coding result according to a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason; extracting each comment word from the comment annotation text according to the attention distribution result of each recommendation word so as to reduce the number of each recommendation word of the recommendation reason to be equal to the number of each comment word of the comment annotation text; and taking the attention distribution result of each recommended word as the probability distribution result of each corresponding recommended word.
Optionally, the generating a word embedding vector of each recommended word of the reason for recommendation according to the probability distribution result includes: and carrying out weighted summation on the word embedding vectors of the comment words according to the probability distribution result of each recommended word to obtain the word embedding vectors of each recommended word.
Optionally, the training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data includes: training the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model meet the convergence condition; keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the first discriminator network model, keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, and adjusting the parameters of the generator network model until the generator network model and the first discriminator network model meet the convergence condition.
According to the second aspect of the embodiment of the present invention, there is also disclosed a method for generating a reason for recommendation, including: acquiring user characteristics, wherein the user characteristics comprise a label clicking characteristic; inputting the user characteristics into a generated model obtained by training according to the method of the first aspect, and outputting POI recommendation reasons for the user characteristics.
Optionally, the inputting the user feature into the generated model trained according to the method of the first aspect, and outputting a POI recommendation reason for the user feature includes: generating a probability distribution result of each recommended word of the POI recommendation reason according to a generator network model of the generation model; and decoding the probability distribution result to obtain the POI recommendation reason.
Optionally, the decoding the probability distribution result to obtain the POI recommendation reason includes: decoding the probability distribution result in a beam search decoding manner to obtain a locally optimal solution; and taking the locally optimal solution as the POI recommendation reason.
Optionally, the method further comprises: inputting the POI recommendation reason into a trained text classification model and a perplexity language model, and outputting a linguistic judgment result of the POI recommendation reason.
Optionally, the method further comprises: and performing category offset judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
According to a third aspect of the embodiments of the present invention, there is also disclosed a training apparatus for a recommendation-reason generation model, including: the acquisition module is used for acquiring training sample data, wherein the training sample data comprises user characteristics and comment marking texts of POI, and the user characteristics comprise a label clicking characteristic; the training module is used for training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions; the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text or not; the second discriminator is used for judging whether the recommendation reason output by the generator network model belongs to the label clicking characteristics.
Optionally, the training module comprises: a sample input module for inputting the training sample data to the generator network model; the coding and decoding module is used for coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason; the word embedding module is used for generating word embedding vectors of the recommended words of the recommendation reasons according to the probability distribution result; a word embedding input module, configured to input the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
Optionally, the encoding and decoding module includes: the coding module is used for respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data; and the decoding module is used for decoding the coding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason.
Optionally, the encoding module includes: the user coding module is used for coding the word embedding vector of the user characteristic based on the generator network model to obtain a coding result of the user characteristic; the comment encoding module is used for encoding the word embedding vector of the comment labeling text based on the generator network model to obtain an encoding result of the comment labeling text; and the result splicing module is used for splicing the coding result of the user characteristic and the coding result of the comment labeling text into the coding result of the training sample data.
Optionally, the decoding module includes: the attention decoding module is used for decoding the coding result according to a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason; the word extraction module is used for extracting each comment word from the comment annotation text according to the attention distribution result of each recommendation word so as to reduce the number of each recommendation word of the recommendation reason to be equal to the number of each comment word of the comment annotation text; and the probability distribution determining module is used for taking the attention distribution result of each recommended word as the probability distribution result of each corresponding recommended word.
Optionally, the word embedding module is configured to perform weighted summation on the word embedding vectors of the comment words according to the probability distribution result of each recommended word, so as to obtain the word embedding vector of each recommended word.
Optionally, the training module is configured to train the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model meet the convergence condition; keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the first discriminator network model, keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, and adjusting the parameters of the generator network model until the generator network model and the first discriminator network model meet the convergence condition.
According to a fourth aspect of the embodiments of the present invention, there is also disclosed an apparatus for generating a reason for recommendation, including: the system comprises a characteristic acquisition module, a characteristic acquisition module and a characteristic acquisition module, wherein the characteristic acquisition module is used for acquiring user characteristics which comprise a label click characteristic; an input/output module, configured to input the user feature into the generated model trained according to the method of the first aspect, and output a POI recommendation reason for the user feature.
Optionally, the input/output module includes: a probability distribution result generation module for generating a probability distribution result of each recommended word of the POI recommendation reason according to the generator network model of the generation model; and the probability distribution result decoding module is used for decoding the probability distribution result to obtain the POI recommendation reason.
Optionally, the probability distribution result decoding module is configured to decode the probability distribution result in a beam search decoding manner to obtain a locally optimal solution; and take the locally optimal solution as the POI recommendation reason.
Optionally, the apparatus further comprises: a linguistic processing module, configured to input the POI recommendation reason into the trained text classification model and the perplexity language model, and output a linguistic judgment result of the POI recommendation reason.
Optionally, the apparatus further comprises: and the correlation processing module is used for carrying out category offset judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
Compared with the prior art, the technical scheme provided by the embodiment of the invention has the following advantages:
the training scheme of the generation model of the recommendation reason provided by the embodiment of the invention obtains training sample data of a comment marking text containing user characteristics and a Point of Interest (POI for short), wherein the user characteristics contain a label clicking characteristic. And training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions. The first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; and the second discriminator is used for judging whether the recommendation reason output by the generator network model belongs to the label click characteristics. The embodiment of the invention takes the user characteristics as training sample data, namely, the label click characteristics are introduced in the training process of generating the model. And judging whether the recommendation reason output by the generator network model belongs to the label click feature through the second discriminator network model, and guiding the training of the generation model through the label click feature so that the generation model can generate the recommendation reason with high click rate.
Drawings
FIG. 1 is a flowchart illustrating the steps of a method for training a recommendation reason generation model according to an embodiment of the present invention;
FIG. 2 is a flowchart of the steps for training a generator network model, a first discriminator network model and a second discriminator network model in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a network structure for generating a model according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating steps of a method for generating a reason for recommendation according to an embodiment of the present invention;
FIG. 5 is a block diagram of a training apparatus for generating a model of a reason for recommendation according to an embodiment of the present invention;
fig. 6 is a block diagram showing a configuration of a recommendation reason generation device according to an embodiment of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1, a flowchart illustrating steps of a training method for a reason for recommendation generative model according to an embodiment of the present invention is shown. The method for training the recommendation reason generation model specifically includes the following steps:
step 101, obtaining training sample data.
In an embodiment of the invention, the training sample data may contain user characteristics and comment annotation text of the POI. The user characteristics may comprise a label click characteristic. In practical applications, the comment annotation text may be a sentence, for example, "I stay here on every business trip; it is a good choice for business travel". The label click characteristic may be a historical high-frequency click characteristic, such as "business", "comfort", "meeting", "high-end", "luxury", "swimming pool" or "parking lot". The POI may be a merchant, such as a restaurant, hotel, or casino.
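A minimal sketch of how one such training sample could be organized in code; the field names below are illustrative assumptions, not terms from the patent, and the example values echo the examples given above.
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrainingSample:
    """One training sample: user characteristics plus a POI comment annotation text."""
    # Identity characteristics such as gender, occupation, consumption level
    identity_features: List[str] = field(default_factory=list)
    # Label click characteristics: POI labels the user has clicked with high frequency
    tag_click_features: List[str] = field(default_factory=list)
    # Comment annotation text of the POI (a high-quality review sentence)
    comment_text: str = ""

sample = TrainingSample(
    identity_features=["female", "white-collar", "high consumption level"],
    tag_click_features=["business", "comfort", "meeting", "swimming pool"],
    comment_text="I stay here on every business trip; it is a good choice for business travel.",
)
```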
And 102, training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions.
In an embodiment of the invention, the generative model may comprise a generator network model, a first discriminator network model and a second discriminator network model. In practical applications, the generator network model may adopt the network structure of a Pointer Network within a sequence-to-sequence framework. The first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; the second discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the label click characteristics. In practical applications, the first discriminator network model and the second discriminator network model may both adopt a text classification (TextCNN) network structure.
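As an illustration of the discriminator side, the following is a minimal TextCNN binary classifier sketched in PyTorch; the patent only names the TextCNN structure, so the layer sizes, kernel sizes and sigmoid output here are assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNNDiscriminator(nn.Module):
    """Generic TextCNN binary classifier; hyperparameters are illustrative assumptions."""
    def __init__(self, embed_dim=128, num_filters=64, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes]
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), 1)

    def forward(self, word_embeddings):
        # word_embeddings: (batch, seq_len, embed_dim) - embeddings of the recommended
        # words spliced with the embeddings of the user characteristics
        x = word_embeddings.transpose(1, 2)                 # (batch, embed_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        logits = self.fc(torch.cat(pooled, dim=1))          # (batch, 1)
        return torch.sigmoid(logits)                        # probability for the binary task
```
Both the real/fake discriminator and the click-prediction discriminator could be separate instances of a class like this, each trained on its own binary label.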
According to the training scheme of the generation model of the recommendation reason, provided by the embodiment of the invention, training sample data of a comment marking text containing user characteristics and POI (point of interest) are obtained, wherein the user characteristics contain a label clicking characteristic. And training the generator network model, the first discriminator network model and the second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions. The first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text; and the second discriminator is used for judging whether the recommendation reason output by the generator network model belongs to the label click characteristics. The embodiment of the invention takes the user characteristics as training sample data, namely, the label click characteristics are introduced in the training process of generating the model. And judging whether the recommendation reason output by the generator network model belongs to the label click feature through the second discriminator network model, and guiding the training of the generation model through the label click feature so that the generation model can generate the recommendation reason with high click rate.
In a preferred embodiment of the present invention, referring to fig. 2, a flowchart of the steps of training the generator network model, the first discriminator network model and the second discriminator network model according to an embodiment of the present invention is shown. One embodiment of training the generator network model, the first discriminator network model and the second discriminator network model according to training sample data includes the following steps.
Step 201, inputting training sample data to a generator network model.
In an embodiment of the present invention, the training sample data may contain a plurality of comment annotation texts and user features of the POI. The user features include an identity feature and a tag click feature. The identity characteristics may include gender, occupation, consumption level, etc., among others. The user characteristic may indicate that the user conforming to the identity characteristic clicks the tag of the POI frequently. The comment annotation text represents comment text generated for the user according with the user characteristics.
And 202, carrying out encoding processing and decoding processing on training sample data based on a generator network model to obtain a probability distribution result of each recommended word of a recommendation reason.
In the embodiment of the invention, the word embedding vector of the user characteristic and the word embedding vector of the comment labeling text can be respectively encoded based on the generator network model to obtain the encoding result of the training sample data. And then decoding the coding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason. When the word embedding vector of the user characteristic and the word embedding vector of the comment labeling text are generated, a set of word embedding vector parameters can be shared.
One implementation way of respectively encoding the word embedded vector of the user characteristic and the word embedded vector of the comment tagged text based on the generator network model to obtain the encoding result of the training sample data is that the word embedded vector of the user characteristic is encoded based on the generator network model to obtain the encoding result of the user characteristic, the word embedded vector of the comment tagged text is encoded based on the generator network model to obtain the encoding result of the comment tagged text, and then the encoding result of the user characteristic and the encoding result of the comment tagged text are spliced into the encoding result of the training sample data.
In practical applications, the encoding process may adopt the encoding structure of a long short-term memory network (LSTM), or may instead adopt a convolutional structure or a Transformer structure.
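For illustration, a minimal sketch of the two-branch encoding with a shared embedding table and bidirectional LSTM encoders (one of the structures named above); the vocabulary size and dimensions are assumptions.
```python
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    """Encode user characteristics and comment annotation text separately, then splice."""
    def __init__(self, vocab_size=30000, embed_dim=128, hidden_dim=256):
        super().__init__()
        # A single embedding table shared by user characteristics and comment text
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.user_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.comment_encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, user_token_ids, comment_token_ids):
        user_states, _ = self.user_encoder(self.embedding(user_token_ids))
        comment_states, _ = self.comment_encoder(self.embedding(comment_token_ids))
        # Splice the two encoding results along the sequence dimension
        return torch.cat([user_states, comment_states], dim=1)
```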
One embodiment of obtaining the probability distribution result of each recommended word of the recommendation reason by decoding the encoding result based on the generator network model is: decoding the encoding result in a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason; extracting comment words from the comment annotation text according to the attention distribution result of each recommended word, so that the number of candidate recommended words of the recommendation reason is reduced to the number of comment words of the comment annotation text; and taking the attention distribution result of each recommended word as the probability distribution result of the corresponding recommended word. By multiplexing the parameters of the Pointer Network structure, the embodiment of the invention uses the attention distribution result of the encoding process as the probability distribution result of the recommended word in the decoding process, thereby reducing the complexity of the generator network model.
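A minimal sketch of one copy-mode decoding step, assuming simple dot-product attention; the point is that the attention distribution over the source positions is returned directly as the probability distribution of the next recommended word, as described above.
```python
import torch
import torch.nn.functional as F

def copy_decode_step(decoder_state, encoder_states):
    """One copy-mode decoding step.

    decoder_state:  (batch, hidden)          current decoder hidden state
    encoder_states: (batch, src_len, hidden) encoder outputs for the source tokens

    Returns the attention distribution over source positions, which a pointer-style
    generator reuses as the probability distribution of the next recommended word
    (each source comment or user-characteristic token is a copy candidate).
    """
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    attention = F.softmax(scores, dim=1)
    return attention
```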
Step 203, generating word embedding vectors of each recommended word of the recommendation reason according to the probability distribution result.
In the embodiment of the invention, the word embedding vectors of the comment words are subjected to weighted summation according to the probability distribution result of each recommended word, so that the word embedding vectors of the recommended words are obtained.
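A minimal sketch of this weighted summation, assuming the per-word probability distributions and the comment word embeddings are already available as tensors.
```python
import torch

def soft_word_embedding(probabilities, comment_embeddings):
    """probabilities:      (batch, out_len, src_len) distribution of each recommended word
    comment_embeddings: (batch, src_len, embed_dim) embeddings of the comment words
    Returns (batch, out_len, embed_dim): one differentiable soft embedding per recommended word,
    which can be passed on to the discriminator network models."""
    return torch.bmm(probabilities, comment_embeddings)
```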
Step 204, inputting the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
In the embodiment of the invention, the word embedding vector of each recommended word and the word embedding vector of the user characteristic can be spliced and then input into the first discriminator network model and the second discriminator network model.
In a preferred embodiment of the present invention, one implementation of training the generator network model, the first discriminator network model and the second discriminator network model according to training sample data is that the generator network model and the second discriminator network model are trained according to the training sample data until the generator network model and the second discriminator network model satisfy a convergence condition; keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the first discriminator network model, keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, and adjusting the parameters of the generator network model until the generator network model and the first discriminator network model meet the convergence condition.
In a preferred embodiment of the present invention, referring to fig. 3, a schematic diagram of the network structure of the generative model according to an embodiment of the present invention is shown. In fig. 3, a Pointer Network structure is used as the generator network model G of the generative model, and a TextCNN network structure is used as the discriminator network model D of the generative model. The input items of the generator network model G contain a number of high-quality comments (comment annotation texts) of the POI and user characteristics. The user characteristics adopt profile characteristics such as gender, occupation and consumption level, as well as real-time characteristics such as the high-frequency display labels of POIs historically clicked by the user; a single set of word embedding vector parameters is shared to generate the corresponding word embedding vectors. The word embedding vectors are then encoded separately and the encoding results are spliced. The encoding may be performed with a bidirectional LSTM encoding structure, a convolutional structure, or a Transformer structure. The decoding process uses the copy mode of an attention-based decoder, taking words from the comments and the user characteristics according to the attention distribution results. In each decoding step, the attention distribution result calculated by the generator network model G is used directly as the probability distribution result output by the Pointer Network, and this parameter multiplexing greatly reduces the complexity of the generator network model G. According to the probability distribution result output by the generator network model G for each word in the comments, the word embedding vectors of all words in the input items are weighted and summed to obtain the word embedding vector of each recommended word in the recommendation reason. The loss function of the attention-based decoder is denoted Loss_s. The word embedding vectors of the recommended words and the word embedding vectors of the user characteristics are spliced and then input into the discriminator network model D. The discriminator network model D performs two classification tasks. Task 1 is to judge whether the generated result is a real sample (real/fake); the corresponding network structure is denoted discriminator network model D1, and its loss function is Loss_c1. Task 2 is to predict whether the generated result will be clicked by the current user (CTR prediction); the corresponding network structure is denoted discriminator network model D2, and its loss function is Loss_c2. The discriminator network models D1 and D2 may adopt a general text classification network structure. The loss function of the generative model is Loss = Loss_s + Loss_c1 + Loss_c2.
In the training stage of the generative model, the generator network model G and the discriminator network model D2 are first pre-trained on the input items until they converge. Then, in each round of training, the generator network model G and the discriminator network model D2 are fixed and the discriminator network model D1 is optimized; next, the discriminator network models D1 and D2 are fixed and the generator network model G is optimized, until the generator network model G and the discriminator network model D1 converge.
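A schematic sketch of the staged adversarial training described above, assuming PyTorch, sigmoid-output discriminators with binary cross-entropy losses, and hypothetical helpers (the batch keys and G.supervised_loss_and_output) standing in for details the patent does not spell out.
```python
import torch
import torch.nn.functional as F

def train_generative_model(G, D1, D2, data_loader, rounds=10):
    """G: generator; D1: real/fake discriminator; D2: click (CTR) discriminator."""
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d1 = torch.optim.Adam(D1.parameters(), lr=1e-4)

    # Stage 1: pre-train G and D2 on the input items until convergence (details omitted)
    ...

    # Stage 2: alternate optimization in each round
    for _ in range(rounds):
        # Fix G and D2, optimize D1 on real comment texts vs. generated reasons
        for batch in data_loader:
            real_pred = D1(batch["real_embeddings"])
            with torch.no_grad():
                fake_embeddings = G(batch["user_features"], batch["comment_tokens"])
            fake_pred = D1(fake_embeddings)
            loss_d1 = F.binary_cross_entropy(real_pred, torch.ones_like(real_pred)) + \
                      F.binary_cross_entropy(fake_pred, torch.zeros_like(fake_pred))
            opt_d1.zero_grad(); loss_d1.backward(); opt_d1.step()

        # Fix D1 and D2 (only opt_g is stepped), optimize G with Loss = Loss_s + Loss_c1 + Loss_c2
        for batch in data_loader:
            # Hypothetical helper: attention-based decoder loss Loss_s plus soft output embeddings
            loss_s, fake_embeddings = G.supervised_loss_and_output(batch)
            pred_c1 = D1(fake_embeddings)
            pred_c2 = D2(fake_embeddings)
            loss_c1 = F.binary_cross_entropy(pred_c1, torch.ones_like(pred_c1))
            loss_c2 = F.binary_cross_entropy(pred_c2, torch.ones_like(pred_c2))
            loss = loss_s + loss_c1 + loss_c2
            opt_g.zero_grad(); loss.backward(); opt_g.step()
```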
Referring to fig. 4, a flowchart illustrating steps of an embodiment of a method for generating a reason for recommendation according to an embodiment of the present invention is shown. The method for generating the recommendation reason may specifically include the following steps:
step 401, obtaining user characteristics.
In embodiments of the present invention, the user features may include a tag click feature and an identity feature.
Step 402, inputting the user characteristics into the generated model obtained by training according to the training method of the generated model of the recommendation reason, and outputting the POI recommendation reason aiming at the user characteristics.
In an embodiment of the present invention, the generative model may be generated according to the steps shown in FIG. 1. The output POI recommendation reason may be POI premium reviews.
In a preferred embodiment of the present invention, one implementation of inputting the user characteristics into the generative model obtained by training according to the above training method and outputting the POI recommendation reason for the user characteristics is: generating a probability distribution result of each recommended word of the POI recommendation reason according to the generator network model of the generative model; and decoding the probability distribution result to obtain the POI recommendation reason. In practical applications, when decoding the probability distribution result, the probability distribution result may be decoded in a beam search decoding manner to obtain a locally optimal solution, and the locally optimal solution is then used as the POI recommendation reason. Compared with decoding for a globally optimal solution, decoding for a locally optimal solution reduces the search space of the recommended words and takes less time, which meets the requirement of generating POI recommendation reasons online in real time.
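A minimal sketch of beam search over per-step word distributions; the step_fn interface, the special tokens and the beam width are illustrative assumptions.
```python
import torch

def beam_search(step_fn, start_token, end_token, beam_width=4, max_len=20):
    """step_fn(sequence) -> (vocab_size,) tensor of log-probabilities for the next word."""
    beams = [([start_token], 0.0)]  # (partial sequence, accumulated log-probability)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_token:
                candidates.append((seq, score))
                continue
            log_probs = step_fn(seq)
            top_log_probs, top_ids = torch.topk(log_probs, beam_width)
            for lp, token_id in zip(top_log_probs.tolist(), top_ids.tolist()):
                candidates.append((seq + [token_id], score + lp))
        # Keep only the best beam_width partial sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(seq[-1] == end_token for seq, _ in beams):
            break
    return beams[0][0]  # locally optimal sequence, used as the POI recommendation reason
```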
In a preferred embodiment of the present invention, after the POI recommendation reason is generated, quality control may be performed on it, which mainly addresses the following two problems:
1) Linguistic problems: the POI recommendation reason is input into the trained text classification model and a perplexity language model, and a linguistic judgment result of the POI recommendation reason is output. The judgment result indicates whether the POI recommendation reason is linguistically non-fluent or incomplete.
The text classification model can be obtained by training on negative samples constructed by word loss, word filling and order exchange.
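A minimal sketch of constructing such negative samples by word loss, word filling and order exchange; the token lists and filler-word list are illustrative simplifications (Chinese text would use a word segmenter).
```python
import random

def make_negative_samples(sentence_tokens, filler_words=("the", "very", "of")):
    """Corrupt a well-formed token list in three ways to create negative samples."""
    negatives = []

    # 1) Word loss: drop a random token
    if len(sentence_tokens) > 1:
        i = random.randrange(len(sentence_tokens))
        negatives.append(sentence_tokens[:i] + sentence_tokens[i + 1:])

    # 2) Word filling: insert a random filler word at a random position
    i = random.randrange(len(sentence_tokens) + 1)
    negatives.append(sentence_tokens[:i] + [random.choice(filler_words)] + sentence_tokens[i:])

    # 3) Order exchange: swap two random tokens
    if len(sentence_tokens) > 1:
        swapped = list(sentence_tokens)
        i, j = random.sample(range(len(swapped)), 2)
        swapped[i], swapped[j] = swapped[j], swapped[i]
        negatives.append(swapped)

    return negatives
```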
2) The relevance problem: category offset judgment and entity existence judgment are performed on the POI recommendation reason to ensure the relevance between the POI recommendation reason and the user characteristics.
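A minimal sketch of these two checks, under the assumption that category offset judgment means the reason must be consistent with the POI's category keywords and entity existence judgment means every entity the reason mentions must actually belong to the POI; all inputs are illustrative.
```python
def passes_relevance_checks(reason_text, reason_entities, poi_category_keywords, poi_known_entities):
    """reason_entities: entities mentioned in the generated reason (extracted upstream);
    poi_category_keywords / poi_known_entities: information of the POI itself."""
    # Category offset judgment: the reason should mention at least one keyword of the POI's category
    category_ok = any(keyword in reason_text for keyword in poi_category_keywords)
    # Entity existence judgment: every entity mentioned in the reason must exist for this POI
    entities_ok = all(entity in poi_known_entities for entity in reason_entities)
    return category_ok and entities_ok
```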
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 5, a block diagram of a training apparatus for generating a model of a reason for recommendation according to an embodiment of the present invention is shown, where the training apparatus for generating a model of a reason for recommendation specifically includes the following modules:
the obtaining module 51 is configured to obtain training sample data, where the training sample data includes user features and comment labeling texts of POIs, and the user features include a tag click feature;
a training module 52, configured to train a generator network model, a first discriminator network model, and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model, and the second discriminator network model meet a preset convergence condition;
the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text or not; the second discriminator is used for judging whether the recommendation reason output by the generator network model belongs to the label clicking characteristics.
In a preferred embodiment of the present invention, the training module 52 includes:
a sample input module for inputting the training sample data to the generator network model;
the coding and decoding module is used for coding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason;
the word embedding module is used for generating word embedding vectors of the recommended words of the recommendation reasons according to the probability distribution result;
a word embedding input module, configured to input the word embedding vector of each recommended word and the word embedding vector of the user characteristic into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vector of the recommended word and the comment labeling text, and train the second discriminator network model according to the word embedding vector of the recommended word and the word embedding vector of the user characteristic.
In a preferred embodiment of the present invention, the encoding/decoding module includes:
the coding module is used for respectively coding the word embedded vector of the user characteristic and the word embedded vector of the comment labeling text based on the generator network model to obtain a coding result of the training sample data;
and the decoding module is used for decoding the coding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason.
In a preferred embodiment of the present invention, the encoding module includes:
the user coding module is used for coding the word embedding vector of the user characteristic based on the generator network model to obtain a coding result of the user characteristic;
the comment encoding module is used for encoding the word embedding vector of the comment labeling text based on the generator network model to obtain an encoding result of the comment labeling text;
and the result splicing module is used for splicing the coding result of the user characteristic and the coding result of the comment labeling text into the coding result of the training sample data.
In a preferred embodiment of the present invention, the decoding module includes:
the attention decoding module is used for decoding the coding result according to a copy mode based on the generator network model to obtain the attention distribution result of each recommended word of the recommendation reason;
the word extraction module is used for extracting each comment word from the comment annotation text according to the attention distribution result of each recommendation word so as to reduce the number of each recommendation word of the recommendation reason to be equal to the number of each comment word of the comment annotation text;
and the probability distribution determining module is used for taking the attention distribution result of each recommended word as the probability distribution result of each corresponding recommended word.
In a preferred embodiment of the present invention, the word embedding module is configured to perform weighted summation on the word embedding vectors of the comment words according to a result of probability distribution of each of the recommended words, so as to obtain the word embedding vector of each of the recommended words.
In a preferred embodiment of the present invention, the training module is configured to train the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model satisfy the convergence condition; keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged, adjusting the parameters of the first discriminator network model, keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged, and adjusting the parameters of the generator network model until the generator network model and the first discriminator network model meet the convergence condition.
Referring to fig. 6, a block diagram of a device for generating a reason for recommendation according to an embodiment of the present invention is shown, where the device for generating a reason for recommendation specifically includes the following modules:
the feature obtaining module 61 is configured to obtain a user feature, where the user feature includes a tag click feature;
and an input/output module 62, configured to input the user characteristics into the generated model trained according to the training method of the generated model of the recommendation reason described above, and output POI recommendation reasons for the user characteristics.
In a preferred embodiment of the present invention, the input/output module 62 includes:
a probability distribution result generation module for generating a probability distribution result of each recommended word of the POI recommendation reason according to the generator network model of the generation model;
and the probability distribution result decoding module is used for decoding the probability distribution result to obtain the POI recommendation reason.
In a preferred embodiment of the present invention, the probability distribution result decoding module is configured to decode the probability distribution result in a beam search decoding manner to obtain a locally optimal solution; and to take the locally optimal solution as the POI recommendation reason.
In a preferred embodiment of the present invention, the apparatus further comprises:
and the linguistic processing module is used for inputting the POI recommendation reason to the trained text classification model and the confusion language model and outputting a linguistic judgment result of the POI recommendation reason.
In a preferred embodiment of the present invention, the apparatus further comprises:
and the correlation processing module is used for carrying out category offset judgment and entity existence judgment on the POI recommendation reason so as to ensure the correlation between the POI recommendation result and the user characteristics.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment. The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The method and the device for training the model for generating the reason for recommendation provided by the invention and the method and the device for generating the reason for recommendation are described in detail above, and specific examples are applied in the text to explain the principle and the implementation of the invention, and the description of the above examples is only used to help understanding the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (24)

1. A method for training a generative model for a recommendation reason, comprising:
acquiring training sample data, wherein the training sample data comprises user characteristics and comment labeling texts of POI, and the user characteristics comprise a label clicking characteristic;
training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions;
the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text or not; the second discriminator is used for judging whether the recommendation reason output by the generator network model belongs to the label clicking characteristics.
2. The method of claim 1, wherein training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data comprises:
inputting the training sample data into the generator network model;
encoding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason;
generating a word embedding vector of each recommended word of the recommendation reason according to the probability distribution result;
and inputting the word embedding vector of each recommended word and the word embedding vector of the user characteristics into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vectors of the recommended words and the comment annotation text, and to train the second discriminator network model according to the word embedding vectors of the recommended words and the word embedding vector of the user characteristics.
3. The method according to claim 2, wherein encoding and decoding the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason comprises:
encoding, based on the generator network model, the word embedding vector of the user characteristics and the word embedding vector of the comment annotation text respectively, to obtain an encoding result of the training sample data;
and decoding the encoding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason.
4. The method of claim 3, wherein encoding, based on the generator network model, the word embedding vector of the user characteristics and the word embedding vector of the comment annotation text respectively to obtain the encoding result of the training sample data comprises:
encoding the word embedding vector of the user characteristics based on the generator network model to obtain an encoding result of the user characteristics;
encoding the word embedding vector of the comment annotation text based on the generator network model to obtain an encoding result of the comment annotation text;
and splicing the encoding result of the user characteristics and the encoding result of the comment annotation text into the encoding result of the training sample data.
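As a rough illustration of claim 4, the sketch below encodes the user-characteristic embeddings and the comment-annotation-text embeddings with two separate encoders and splices the two encoding results into one encoding result for the training sample. The encoder type, tensor shapes and variable names are assumptions made for the example.

# Hypothetical illustration of claim 4 in PyTorch: encode the user-characteristic
# embeddings and the comment-annotation-text embeddings separately, then splice
# the two encoding results into one encoding result for the training sample.
import torch
import torch.nn as nn

emb_dim, hid_dim = 128, 256
user_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
comment_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)

user_emb = torch.randn(4, 6, emb_dim)          # (batch, user-feature tokens, emb_dim)
comment_emb = torch.randn(4, 50, emb_dim)      # (batch, comment tokens, emb_dim)

user_enc, _ = user_encoder(user_emb)           # encoding result of the user characteristics
comment_enc, _ = comment_encoder(comment_emb)  # encoding result of the comment annotation text

sample_encoding = torch.cat([user_enc, comment_enc], dim=1)   # (4, 56, hid_dim)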
5. The method according to claim 3, wherein decoding the encoding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason comprises:
decoding the encoding result according to a copy mechanism based on the generator network model to obtain an attention distribution result of each recommended word of the recommendation reason;
extracting comment words from the comment annotation text according to the attention distribution result of each recommended word, so that the number of recommended words of the recommendation reason is reduced to be equal to the number of comment words in the comment annotation text;
and taking the attention distribution result of each recommended word as the probability distribution result of the corresponding recommended word.
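A minimal sketch of the copy-style decoding step in claim 5, under the assumption that one decoder state attends over the encoded comment words with a simple dot-product attention: the resulting attention distribution over the comment words is used directly as the probability distribution of the recommended word, so every recommended word is drawn from the comment annotation text.

# Minimal, hypothetical copy-mechanism decoding step (claim 5).
import torch
import torch.nn.functional as F

def copy_step(decoder_state, comment_encodings):
    """decoder_state: (batch, hid); comment_encodings: (batch, n_comment_words, hid)."""
    scores = torch.bmm(comment_encodings, decoder_state.unsqueeze(-1)).squeeze(-1)
    attention = F.softmax(scores, dim=-1)   # attention distribution over the comment words
    return attention                        # reused directly as the word probability distribution

attn = copy_step(torch.randn(2, 256), torch.randn(2, 30, 256))
print(attn.shape)   # torch.Size([2, 30]) -- one probability per comment word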
6. The method according to claim 5, wherein generating a word embedding vector of each recommended word of the recommendation reason according to the probability distribution result comprises:
carrying out weighted summation on the word embedding vectors of the comment words according to the probability distribution result of each recommended word, to obtain the word embedding vector of each recommended word.
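Claim 6 can then be read as a soft lookup: instead of picking one comment word, the recommended word's embedding is the attention-weighted sum of the comment-word embeddings, which keeps the generator output differentiable when it is passed to the discriminators. The tensor shapes below are arbitrary example values.

# Hypothetical illustration of claim 6: attention-weighted sum of comment-word embeddings.
import torch

batch, n_words, emb_dim = 2, 30, 128
attention = torch.softmax(torch.randn(batch, n_words), dim=-1)   # from the copy step
comment_word_embeddings = torch.randn(batch, n_words, emb_dim)

# (batch, 1, n_words) x (batch, n_words, emb_dim) -> (batch, emb_dim)
recommended_word_embedding = torch.bmm(
    attention.unsqueeze(1), comment_word_embeddings).squeeze(1)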
7. The method of claim 1, wherein training a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data comprises:
training the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model meet the convergence condition;
and keeping the parameters of the generator network model and the parameters of the second discriminator network model unchanged while adjusting the parameters of the first discriminator network model, and then keeping the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged while adjusting the parameters of the generator network model, until the generator network model and the first discriminator network model meet the convergence condition.
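The staged schedule of claim 7 might look roughly like the following, where the linear "networks", the losses and the fixed iteration counts standing in for "until convergence" are all toy placeholders; only the freeze-and-alternate pattern is the point of the sketch.

# Toy sketch of the staged training schedule in claim 7.
import torch
import torch.nn as nn

generator = nn.Linear(8, 8)      # stand-in for the generator network model
first_disc = nn.Linear(8, 1)     # stand-in for the first discriminator network model
second_disc = nn.Linear(8, 1)    # stand-in for the second discriminator network model
g_opt = torch.optim.Adam(generator.parameters())
d1_opt = torch.optim.Adam(first_disc.parameters())
d2_opt = torch.optim.Adam(second_disc.parameters())
bce = nn.BCEWithLogitsLoss()
x = torch.randn(16, 8)
real, fake = torch.ones(16, 1), torch.zeros(16, 1)

def freeze(module, frozen):
    for p in module.parameters():
        p.requires_grad = not frozen

# Stage 1: train the generator together with the second discriminator.
for _ in range(100):
    d2_opt.zero_grad()
    d2_loss = bce(second_disc(generator(x).detach()), fake) + bce(second_disc(x), real)
    d2_loss.backward(); d2_opt.step()
    g_opt.zero_grad()
    bce(second_disc(generator(x)), real).backward(); g_opt.step()

# Stage 2: freeze the generator and the second discriminator while adjusting the
# first discriminator, then freeze both discriminators while adjusting the generator.
for _ in range(100):
    freeze(generator, True); freeze(second_disc, True); freeze(first_disc, False)
    d1_opt.zero_grad()
    d1_loss = bce(first_disc(generator(x)), fake) + bce(first_disc(x), real)
    d1_loss.backward(); d1_opt.step()

    freeze(first_disc, True); freeze(generator, False)
    g_opt.zero_grad()
    bce(first_disc(generator(x)), real).backward(); g_opt.step()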
8. A method for generating a reason for recommendation, comprising:
acquiring user characteristics, wherein the user characteristics comprise a label click characteristic;
and inputting the user characteristics into a generative model trained according to the method of any one of claims 1 to 7, and outputting a POI recommendation reason for the user characteristics.
9. The method according to claim 8, wherein inputting the user characteristics into a generative model trained according to the method of any one of claims 1 to 7 and outputting a POI recommendation reason for the user characteristics comprises:
generating a probability distribution result of each recommended word of the POI recommendation reason according to a generator network model of the generative model;
and decoding the probability distribution result to obtain the POI recommendation reason.
10. The method of claim 9, wherein decoding the probability distribution result to obtain the POI recommendation reason comprises:
decoding the probability distribution result by means of beam search decoding to obtain a locally optimal solution;
and taking the locally optimal solution as the POI recommendation reason.
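For claim 10, beam search keeps only the best few partial word sequences at each decoding step, so the sequence it returns is a locally (not necessarily globally) optimal solution. The sketch below assumes the generator has already produced a per-step probability list and uses a width-3 beam; it is an illustrative example, not the patented decoder.

# Hypothetical beam-search decoding over per-step word probability distributions (claim 10).
import math

def beam_search(step_probs, beam_width=3):
    beams = [([], 0.0)]                      # (word ids so far, accumulated log-probability)
    for probs in step_probs:
        candidates = []
        for words, score in beams:
            for word_id, p in enumerate(probs):
                if p > 0:
                    candidates.append((words + [word_id], score + math.log(p)))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
    return beams[0]                          # locally optimal word sequence and its score

best_words, best_score = beam_search([[0.1, 0.7, 0.2], [0.5, 0.3, 0.2]])
print(best_words)   # [1, 0]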
11. The method according to any one of claims 8 to 10, further comprising:
and inputting the POI recommendation reason into a trained text classification model and a perplexity language model, and outputting a linguistic judgment result of the POI recommendation reason.
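One common way to obtain such a linguistic judgment from a language model is to score the generated reason by perplexity: the lower the perplexity under a trained language model, the more fluent the text. The function below assumes the language model has already assigned a probability to each token of the recommendation reason; it is a generic illustration, not the patent's specific model.

# Minimal perplexity check over per-token probabilities from some trained language model.
import math

def perplexity(token_probs):
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

print(perplexity([0.2, 0.5, 0.1]))   # ~4.64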
12. The method according to any one of claims 8 to 10, further comprising:
and performing category deviation judgment and entity existence judgment on the POI recommendation reason, so as to ensure the correlation between the POI recommendation reason and the user characteristics.
13. A training device for a generative model for a recommendation reason, comprising:
an acquisition module, configured to acquire training sample data, wherein the training sample data comprises user characteristics and a comment annotation text of a POI, and the user characteristics comprise a label click characteristic;
a training module, configured to train a generator network model, a first discriminator network model and a second discriminator network model according to the training sample data until the generator network model, the first discriminator network model and the second discriminator network model meet preset convergence conditions;
wherein the first discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the comment annotation text, and the second discriminator network model is used for judging whether the recommendation reason output by the generator network model belongs to the label click characteristic.
14. The apparatus of claim 13, wherein the training module comprises:
a sample input module, configured to input the training sample data into the generator network model;
an encoding and decoding module, configured to encode and decode the training sample data based on the generator network model to obtain a probability distribution result of each recommended word of the recommendation reason;
a word embedding module, configured to generate a word embedding vector of each recommended word of the recommendation reason according to the probability distribution result;
a word embedding input module, configured to input the word embedding vector of each recommended word and the word embedding vector of the user characteristics into the first discriminator network model and the second discriminator network model, so as to train the first discriminator network model according to the word embedding vectors of the recommended words and the comment annotation text, and to train the second discriminator network model according to the word embedding vectors of the recommended words and the word embedding vector of the user characteristics.
15. The apparatus of claim 14, wherein the encoding and decoding module comprises:
an encoding module, configured to encode, based on the generator network model, the word embedding vector of the user characteristics and the word embedding vector of the comment annotation text respectively, to obtain an encoding result of the training sample data;
and a decoding module, configured to decode the encoding result based on the generator network model to obtain the probability distribution result of each recommended word of the recommendation reason.
16. The apparatus of claim 15, wherein the encoding module comprises:
a user encoding module, configured to encode the word embedding vector of the user characteristics based on the generator network model to obtain an encoding result of the user characteristics;
a comment encoding module, configured to encode the word embedding vector of the comment annotation text based on the generator network model to obtain an encoding result of the comment annotation text;
and a result splicing module, configured to splice the encoding result of the user characteristics and the encoding result of the comment annotation text into the encoding result of the training sample data.
17. The apparatus of claim 15, wherein the decoding module comprises:
an attention decoding module, configured to decode the encoding result according to a copy mechanism based on the generator network model to obtain an attention distribution result of each recommended word of the recommendation reason;
a word extraction module, configured to extract comment words from the comment annotation text according to the attention distribution result of each recommended word, so that the number of recommended words of the recommendation reason is reduced to be equal to the number of comment words in the comment annotation text;
and a probability distribution determining module, configured to take the attention distribution result of each recommended word as the probability distribution result of the corresponding recommended word.
18. The apparatus of claim 17, wherein the word embedding module is configured to perform weighted summation on the word embedding vector of each of the comment words according to the probability distribution result of each of the recommended words to obtain the word embedding vector of each of the recommended words.
19. The apparatus of claim 13, wherein the training module is configured to train the generator network model and the second discriminator network model according to the training sample data until the generator network model and the second discriminator network model meet the convergence condition; and to keep the parameters of the generator network model and the parameters of the second discriminator network model unchanged while adjusting the parameters of the first discriminator network model, and then keep the parameters of the first discriminator network model and the parameters of the second discriminator network model unchanged while adjusting the parameters of the generator network model, until the generator network model and the first discriminator network model meet the convergence condition.
20. An apparatus for generating a reason for recommendation, comprising:
a characteristic acquisition module, configured to acquire user characteristics, wherein the user characteristics comprise a label click characteristic;
an input and output module, configured to input the user characteristics into a generative model trained according to the method of any one of claims 1 to 7, and output a POI recommendation reason for the user characteristics.
21. The apparatus of claim 20, wherein the input and output module comprises:
a probability distribution result generation module, configured to generate a probability distribution result of each recommended word of the POI recommendation reason according to a generator network model of the generative model;
and a probability distribution result decoding module, configured to decode the probability distribution result to obtain the POI recommendation reason.
22. The apparatus of claim 21, wherein the probability distribution result decoding module is configured to decode the probability distribution result by means of beam search decoding to obtain a locally optimal solution, and to take the locally optimal solution as the POI recommendation reason.
23. The apparatus of any one of claims 20 to 22, further comprising:
and a linguistic processing module, configured to input the POI recommendation reason into a trained text classification model and a perplexity language model, and to output a linguistic judgment result of the POI recommendation reason.
24. The apparatus of any one of claims 20 to 22, further comprising:
and a correlation processing module, configured to perform category deviation judgment and entity existence judgment on the POI recommendation reason, so as to ensure the correlation between the POI recommendation reason and the user characteristics.
CN202110838589.2A 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason Active CN113688309B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110838589.2A CN113688309B (en) 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110838589.2A CN113688309B (en) 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason

Publications (2)

Publication Number Publication Date
CN113688309A true CN113688309A (en) 2021-11-23
CN113688309B CN113688309B (en) 2022-11-29

Family

ID=78577793

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110838589.2A Active CN113688309B (en) 2021-07-23 2021-07-23 Training method for generating model and generation method and device for recommendation reason

Country Status (1)

Country Link
CN (1) CN113688309B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150371262A1 (en) * 2014-06-23 2015-12-24 Pure Auto Llc Dba Purecars Internet Search Engine Advertisement Optimization
CN110457452A (en) * 2019-07-08 2019-11-15 汉海信息技术(上海)有限公司 Rationale for the recommendation generation method, device, electronic equipment and readable storage medium storing program for executing
CN110532463A (en) * 2019-08-06 2019-12-03 北京三快在线科技有限公司 Rationale for the recommendation generating means and method, storage medium and electronic equipment
WO2021023249A1 (en) * 2019-08-06 2021-02-11 北京三快在线科技有限公司 Generation of recommendation reason
US20210073630A1 (en) * 2019-09-10 2021-03-11 Robert Bosch Gmbh Training a class-conditional generative adversarial network
CN110727844A (en) * 2019-10-21 2020-01-24 东北林业大学 Online commented commodity feature viewpoint extraction method based on generation countermeasure network
CN111046138A (en) * 2019-11-15 2020-04-21 北京三快在线科技有限公司 Recommendation reason generation method and device, electronic device and storage medium
CN112308650A (en) * 2020-07-01 2021-02-02 北京沃东天骏信息技术有限公司 Recommendation reason generation method, device, equipment and storage medium
CN112667813A (en) * 2020-12-30 2021-04-16 北京华宇元典信息服务有限公司 Method for identifying sensitive identity information of referee document
CN112905776A (en) * 2021-03-17 2021-06-04 西北大学 Emotional dialogue model construction method, emotional dialogue system and method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN Dan et al., "Automatic Generation of Music Review Text Considering Rating Information", Journal of Frontiers of Computer Science and Technology *

Also Published As

Publication number Publication date
CN113688309B (en) 2022-11-29

Similar Documents

Publication Publication Date Title
JP7150842B2 (en) Multilingual Document Retrieval Based on Document Structure Extraction
CN110688832B (en) Comment generation method, comment generation device, comment generation equipment and storage medium
US10915756B2 (en) Method and apparatus for determining (raw) video materials for news
US11533495B2 (en) Hierarchical video encoders
CN110188158B (en) Keyword and topic label generation method, device, medium and electronic equipment
CN111414561B (en) Method and device for presenting information
CN111553159B (en) Question generation method and system
CN113408287B (en) Entity identification method and device, electronic equipment and storage medium
CN112016320A (en) English punctuation adding method, system and equipment based on data enhancement
CN115238710B (en) Intelligent document generation and management method and device
CN110738059A (en) text similarity calculation method and system
Xu et al. Audio caption in a car setting with a sentence-level loss
CN111241310A (en) Deep cross-modal Hash retrieval method, equipment and medium
CN114880444A (en) Dialog recommendation system based on prompt learning
CN115630145A (en) Multi-granularity emotion-based conversation recommendation method and system
CN116467417A (en) Method, device, equipment and storage medium for generating answers to questions
CN116431803A (en) Automatic generation method, system, equipment and client of Chinese media comment text
Zhang et al. Distinctive image captioning via clip guided group optimization
CN112287687B (en) Case tendency extraction type summarization method based on case attribute perception
CN113688309B (en) Training method for generating model and generation method and device for recommendation reason
CN116977701A (en) Video classification model training method, video classification method and device
CN110852103A (en) Named entity identification method and device
CN114118068B (en) Method and device for amplifying training text data and electronic equipment
CN115905585A (en) Keyword and text matching method and device, electronic equipment and storage medium
Wang et al. Distill-AER: Fine-Grained Address Entity Recognition from Spoken Dialogue via Knowledge Distillation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant