WO2021023249A1 - Generation of recommendation reason - Google Patents

Generation of recommendation reason Download PDF

Info

Publication number
WO2021023249A1
WO2021023249A1 (PCT/CN2020/107285)
Authority
WO
WIPO (PCT)
Prior art keywords
vector
selective
comment
hidden state
query
Prior art date
Application number
PCT/CN2020/107285
Other languages
French (fr)
Chinese (zh)
Inventor
王金刚
王倩舒
兰田
陆源源
张富峥
王仲远
Original Assignee
北京三快在线科技有限公司
Priority date
Filing date
Publication date
Application filed by 北京三快在线科技有限公司 filed Critical 北京三快在线科技有限公司
Publication of WO2021023249A1 publication Critical patent/WO2021023249A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations

Definitions

  • the present disclosure relates to the field of recommendation reason generation, in particular to an artificial intelligence-based recommendation reason generation device and method, computer-readable storage media, and electronic equipment.
  • The recommendation reason is a natural-language text, usually a single short sentence, displayed to the user on a search results page or a discovery page (e.g., a scene-decision page or a must-eat list) to highlight a merchant and assist the user's decision-making. For example, entering "steak" on the search page of a review website app lists multiple merchants related to "steak" on the results page, and the brief introduction of each merchant usually carries a short recommendation reason, such as "The selected filet steak is a very generous portion" or "The selected sirloin wagyu steak is fresh, delicious, and very tender".
  • As another example, the "must-eat list" of a review website app recommends multiple must-eat restaurants, and the brief introduction of each recommended restaurant carries a recommendation reason called a "must-eat reason", for example, "The braised pork is great, and the service attitude is excellent" or "The roast duck lives up to its reputation: not only is the skin crispy, the meat is soft and glutinous".
  • A recommendation reason in natural-language form can be seen as a highly condensed distillation of real user reviews: it explains the recalled results to users, surfaces merchant characteristics, attracts user clicks, and provides scenario-oriented guidance, enhancing the user experience and trust in the platform.
  • According to one aspect of the present disclosure, a recommendation reason generation device is provided, including: a query encoder for encoding the user's query information into a query vector; a selective encoder for encoding user comments into a selective hidden state of the comment vector based on the query vector; and a decoder that forms an encoder-decoder structure with the selective encoder and decodes the selective hidden state of the comment vector into a recommendation reason. The selective encoder includes a comment encoding unit and a selective calculation network: the comment encoding unit encodes the user comment into an intermediate hidden state of the comment vector, and the selective calculation network operates on the query vector and the intermediate hidden state to generate the selective hidden state of the comment vector.
  • According to another aspect of the present disclosure, a method for generating a recommendation reason is provided, including: encoding the user's query information into a query vector; encoding user comments into an intermediate hidden state of the comment vector; operating on the query vector and the intermediate hidden state of the comment vector to generate a selective hidden state of the comment vector; and decoding the selective hidden state of the comment vector into a recommendation reason.
  • A computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the above method is implemented.
  • An electronic device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the above method is implemented.
  • In the embodiments of the present disclosure, the user's query information is taken into account when user comments are encoded, so a query-personalized recommendation reason can be generated, thereby improving the user experience.
  • Fig. 1 shows a schematic block diagram of a recommendation reason generating device according to an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of an operation example of a recommendation reason generating device according to an embodiment of the present disclosure
  • FIG. 3 shows a schematic flowchart of a method for generating a recommendation reason according to an embodiment of the present disclosure
  • Fig. 4 shows a schematic structural diagram of an electronic device for generating a recommendation reason according to an embodiment of the present disclosure.
  • One method for generating a recommendation reason is to generate a one-sentence recommendation reason from high-quality user reviews of a POI (Point of Interest). The wording of the generated recommendation reason need not be extracted verbatim from the original text; instead, artificial intelligence technology encodes the original text with an encoder (Encoder) and then generates the reason with a decoder (Decoder).
  • However, this generation method does not take the user's real-time intent into account and may generate recommendation reasons irrelevant to the user's query, producing uneven results and reducing the user experience. For example, if a user searches for "take a baby to eat" in a gourmet app, the app recommends a number of restaurants, but the recommendation reasons for these restaurants may be unrelated to "take a baby to eat" and instead be only generic recommendations about the restaurant, such as "the meat tastes good", because the method does not consider the information queried by the user when generating the recommendation reason.
  • the present disclosure provides an artificial intelligence-based recommendation reason generating device and method, which can generate a personalized recommendation reason for query.
  • the recommendation reason generation method proposed in the present disclosure is an improvement of the above-mentioned generative method, which can generate personalized recommendation reasons for recalled POIs according to different user query information, which is more intelligent and improves user experience.
  • Here, "query" includes any behavior of the user searching for POIs, so "query information" includes any information input by the user to search for POIs, such as text or voice search input (for example, "steak" or "take a baby to eat"), as well as user-selected topics (such as "ramen"), etc.
  • A POI (Point of Interest) includes any type of searchable business, such as restaurants, swimming pools, movie theaters, etc.
  • The sequence-to-sequence (Seq2Seq) model based on the encoder-decoder structure has achieved good results in the field of text generation.
  • Recommendation reason generation can be regarded as a generative text summary task, which is a kind of text generation task.
  • the recommendation reason generation device and method of the embodiments of the present disclosure are implemented based on the sequence-to-sequence model of the encoder-decoder structure of artificial intelligence.
  • Fig. 1 shows a schematic block diagram of a recommendation reason generating device 100 according to an embodiment of the present disclosure.
  • the recommendation reason generating device 100 includes a query encoder 101, a selective encoder 102 and a decoder 103.
  • the query encoder 101 is used to encode the user's query information into a query vector.
  • the selective encoder 102 is configured to encode user comments into a comment vector selective hidden state based on the query vector.
  • the decoder 103 and the selective encoder 102 form an encoder-decoder structure for decoding the selective hidden state of the comment vector into a recommendation reason.
  • the selective encoder 102 includes a comment encoding unit 1021 and a selective calculation network 1022.
  • the comment encoding unit 1021 is configured to encode the user comment into an intermediate hidden state of the comment vector.
  • the selective calculation network 1022 is configured to perform operations on the query vector and the intermediate hidden state of the review vector to generate the selective hidden state of the review vector, so that the recommendation reason includes content related to the query information.
  • the decoder 103 may also perform decoding based on the query vector generated by the query encoder 101.
  • the recommendation reason generating device 100 of the embodiment of the present disclosure is implemented based on an encoder-decoder structure of artificial intelligence technology.
  • the encoder encodes the source text into a vector
  • the decoder decodes the vector generated by the encoder into the target text.
  • a recommendation reason needs to be generated based on user comments. Therefore, the user comment is the source text in the encoder-decoder structure, and the recommendation reason is the target text.
  • the selective encoder 102 is used when encoding user comments.
  • the decoder 103 decodes based on the query vector, so that the decoded recommendation reason better reflects the user's query information.
  • the decoder 103 adopts an attention mechanism, and uses the query vector as an input parameter of the context vector generation function when generating a context vector based on the attention mechanism.
  • The query encoder 101 may use any existing or future encoder technology in the field of artificial intelligence to encode the user's query information into a query vector, for example, a one-way or two-way Long Short-Term Memory (LSTM) network, or word-vector tools such as Word2Vec.
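As a hedged stand-in for the query encoder 101 (the disclosure names BiLSTM or Word2Vec-style tools; neither is reproduced here), the toy sketch below mean-pools made-up word vectors into a single query vector q*. The vocabulary and the 3-dimensional embeddings are illustrative assumptions only.

```python
# Toy query encoder: mean-pool hypothetical word vectors into q*.
EMBEDDINGS = {
    "steak": [0.9, 0.1, 0.0],
    "take": [0.1, 0.8, 0.2],
    "baby": [0.0, 0.9, 0.4],
}

def encode_query(tokens, dim=3):
    # Unknown words map to the zero vector; q* is the component-wise mean.
    vecs = [EMBEDDINGS.get(t, [0.0] * dim) for t in tokens]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

q = encode_query(["take", "baby"])
print([round(x, 6) for x in q])  # [0.05, 0.85, 0.3]
```

A real implementation would replace the mean-pool with the final hidden states of a bidirectional LSTM, but the interface (tokens in, one fixed-size vector out) is the same.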
  • The "query information" here means any information entered by the user to search for POIs; for non-text input such as voice, vector encoding is likewise performed.
  • The selective encoder 102-decoder 103 structure may be based on any existing or future neural network encoder-decoder model in the field of artificial intelligence, for example, a Pointer-Generator network, a Recurrent Neural Network-Long Short-Term Memory (RNN-LSTM) network, ConvS2S (a sequence-to-sequence framework based on Convolutional Neural Networks), a Transformer network, etc.
  • The selective encoder 102 may include a comment encoding unit 1021 and a selective calculation network 1022 to encode user comments into a selective hidden state of the comment vector filtered by the query vector, so that the recommendation reason decoded by the decoder 103 contains content related to the query information.
  • The comment encoding unit 1021 may adopt any existing or future general-purpose encoder technology in the field of artificial intelligence, for example, a one-way or two-way LSTM network, to encode user comments into the intermediate hidden state of the comment vector; this intermediate hidden state is independent of the query vector.
  • The selective calculation network 1022 operates on the query vector and the intermediate hidden state of the comment vector, so that the generated selective hidden state of the comment vector contains query-vector information and the recommendation reason therefore includes content related to the query information.
  • the selective computing network 1022 may be a Multilayer Perceptron (MLP) network, for example.
  • The "selective hidden state" means that the hidden state contains query-vector information, and the query-vector information selectively filters the content of the user comments.
  • FIG. 2 shows a schematic diagram of an operation example of the recommendation reason generation device 100 according to an embodiment of the present disclosure.
  • the query encoder 101 encodes the user's query information into a query vector q* through, for example, a two-way LSTM, to be input to the selective encoder 102 and the decoder 103.
  • Alternatively, the query vector q* may be used only in the selective encoder 102 and not in the decoder 103.
  • The selective encoder 102 uses the query vector q* as a filtering parameter to encode the constituent words x1, x2, x3, x4, x5, x6 of the user review (in the example of FIG. 2, 6 constituent words are taken as an example; an actual user review may have more or fewer constituent words) into a comment vector.
  • The selective encoder 102 may generate the selective hidden state hi' based on the query vector q*.
  • The comment encoding unit 1021 of the selective encoder 102 first generates the intermediate hidden states h1, h2, h3, h4, h5, h6 (collectively referred to as hi) of the comment vector; then, the selective calculation network 1022 of the selective encoder 102 generates, based on the intermediate hidden states hi and the query vector q*, the selective hidden states hi' of the comment vector for output to the decoder 103.
  • For example, the comment encoding unit 1021 may encode the input user comment through a conventional two-way LSTM to generate the intermediate hidden states corresponding to the constituent words x1, x2, x3, x4, x5, x6 of the user comment sequence; the selective calculation network 1022 then operates on these intermediate hidden states and the query vector q* to generate the selective hidden states h1', h2', h3', h4', h5', h6' that take the query vector q* into account.
  • the selective computing network 1022 may adopt a multilayer perceptron (MLP), as shown in FIG. 2.
  • The selective calculation network 1022 may compute a selectivity vector gi based on the intermediate hidden state hi and the query vector q*, and multiply the intermediate hidden state hi element-wise by the selectivity vector gi to obtain the selective hidden state hi', for example as shown in the following equations (1) and (2):

    g_i = σ(W_h · h_i + W_q · q* + b)    (1)
    h_i' = h_i ⊙ g_i                      (2)

    where W_h and W_q are the weights of h_i and q* respectively, b is a bias value, W_h, W_q, and b are learnable parameters, σ denotes the sigmoid activation function, and ⊙ denotes element-wise multiplication.
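As a hedged illustration of equations (1) and (2), the following pure-Python sketch implements the selective gate; the dimensions, weight matrices, and input values are toy assumptions, not parameters from the disclosure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(W, v):
    # W: list of row vectors, v: vector -> W @ v
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def selective_hidden_state(h_i, q, W_h, W_q, b):
    # Eq. (1): g_i = sigmoid(W_h h_i + W_q q* + b)
    z = [a + c + d for a, c, d in zip(matvec(W_h, h_i), matvec(W_q, q), b)]
    g_i = [sigmoid(x) for x in z]
    # Eq. (2): h_i' = h_i (element-wise *) g_i
    return [h * g for h, g in zip(h_i, g_i)]

# Toy 2-dimensional example with all-zero weights: the gate is sigmoid(0) = 0.5,
# so every component of h_i is simply halved.
W_h = [[0.0, 0.0], [0.0, 0.0]]
W_q = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
h_prime = selective_hidden_state([1.0, -1.0], [0.3, 0.7], W_h, W_q, b)
print(h_prime)  # [0.5, -0.5]
```

With trained (non-zero) weights, the gate would open for comment words that agree with the query vector and close for those that do not.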
  • In some embodiments, when the comment encoding unit 1021 encodes the user comments by LSTM or similar methods, it also forms an entire-sequence vector h*, which may be, for example, h1 or h6 in FIG. 2.
  • When generating the selective hidden states, the selective calculation network 1022 may also operate on the entire-sequence vector h* of the user comments together with the intermediate hidden states hi, for example by a weighted sum of h* with hi and q*, or by concatenating h* with hi.
  • In this case, the above equation (1) can be transformed into equation (3) or equation (4):

    g_i = σ(W_h · h_i + W_q · q* + W_c · h* + b)    (3)
    g_i = σ(W_h · [h_i ; h*] + W_q · q* + b)         (4)

    In equation (3), gi is computed from a weighted sum of h*, h_i, and q*, where W_c is the learnable weight of the entire-sequence vector h*; in equation (4), h* is concatenated with h_i.
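A minimal sketch of the equation (3) variant, under the same toy assumptions as before: the entire-sequence vector h* enters the gate as an additional weighted term W_c · h*. All weights and values here are illustrative, not from the disclosure.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gate_with_sequence_vector(h_i, q, h_star, W_h, W_q, W_c, b):
    # Eq. (3): g_i = sigmoid(W_h h_i + W_q q* + W_c h* + b)
    def matvec(W, v):
        return [sum(w * x for w, x in zip(row, v)) for row in W]
    z = [a + c + d + e for a, c, d, e in zip(
        matvec(W_h, h_i), matvec(W_q, q), matvec(W_c, h_star), b)]
    return [sigmoid(x) for x in z]

# With W_h and W_q zero, only the h* term drives the gate; a strongly positive
# contribution from h* (via W_c) pushes both gate components toward 1.
g = gate_with_sequence_vector(
    h_i=[1.0, 0.0], q=[0.0, 0.0], h_star=[10.0, 10.0],
    W_h=[[0.0, 0.0], [0.0, 0.0]], W_q=[[0.0, 0.0], [0.0, 0.0]],
    W_c=[[1.0, 1.0], [1.0, 1.0]], b=[0.0, 0.0])
print(g)  # both components very close to 1.0
```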
  • the selective hidden states h 1 ′, h 2 ′, h 3 ′, h 4 ′, h 5 ′, and h 6 ′ output by the selective computing network 1022 in the selective encoder 102 are used by the decoder 103 for decoding.
  • the decoder 103 can adopt any conventional decoder with or without an attention mechanism.
  • In some embodiments, the decoder 103 uses an attention mechanism. When generating the context vectors C1, C2, C3, C4 (collectively Ct; in FIG. 2, 4 context vectors are taken as an example) based on the attention mechanism, the decoder 103 uses the query vector q* as an input parameter of the context-vector generation function. This function may use a multilayer perceptron (MLP), as shown in FIG. 2.
  • When calculating the context vector Ct, the decoder 103 may perform a weighted-sum operation on the query vector q* and the selective hidden states hi' of the comment vector, where each weight is a learnable parameter.
  • the input parameters of the context vector generation function based on the attention mechanism may also include the entire sequence vector h* of user comments.
  • When calculating the context vector Ct based on the attention mechanism, a weighted-sum operation may be performed on the entire-sequence vector h*, the query vector q*, and the selective hidden states hi' of the comment vector; alternatively, the entire-sequence vector h* may be concatenated with the selective hidden state ht'.
  • Unless otherwise stated, Ct may be calculated by conventional methods.
  • After obtaining the context vector Ct, the decoder 103 can generate the target-sequence hidden states S1, S2, S3, S4 and output the constituent words y1, y2, y3, y4 of the target sequence by conventional methods.
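The attention-based context-vector computation described above can be sketched as follows. This pure-Python toy uses a simple additive dot-product score in place of the MLP-based generation function of FIG. 2, and all dimensions and values are illustrative assumptions; what it preserves is the structure: the query vector q* enters the scoring function alongside the decoder state.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def context_vector(s_t, q, h_prime):
    # Score each selective hidden state h_i' against the decoder state s_t AND
    # the query vector q* (a stand-in for an MLP taking all three as inputs).
    scores = [sum(si * hi for si, hi in zip(s_t, h)) +
              sum(qi * hi for qi, hi in zip(q, h)) for h in h_prime]
    alphas = softmax(scores)
    # C_t = sum_i alpha_i * h_i'  (convex combination of the selective states)
    dim = len(h_prime[0])
    c_t = [sum(a * h[d] for a, h in zip(alphas, h_prime)) for d in range(dim)]
    return c_t, alphas

h_prime = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
C_t, alphas = context_vector(s_t=[1.0, 0.0], q=[0.0, 0.0], h_prime=h_prime)
print(round(sum(alphas), 6))  # attention weights sum to 1.0
```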
  • Because the decoder 103 of the embodiments of the present disclosure considers the query vector q* in the attention-based context vector, information related to the user query is incorporated into the generated target text (that is, the recommendation reason) during decoding, so a better query-personalized recommendation reason can be obtained.
  • The comment encoding unit 1021 of the selective encoder 102 may also be based on other encoder structures, such as a one-way LSTM or word-vector tools such as Word2Vec.
  • The selective encoder 102 may output the selective hidden states of all constituent words of the user comment to the decoder 103 for decoding, or it may output only one selective hidden state representing the entire sequence of the user comment, such as h1' or h6' in FIG. 2, to the decoder 103 for decoding.
  • The recommendation reason generation device of the embodiments of the present disclosure can be trained with POI user comments, user queries, and user-clicked recommendation reasons. For example, after the recommendation reason feature went online on the review website in multiple scenarios, a large number of user search and click logs were accumulated. From these logs, a certain number of four-tuples <POI, high-quality user reviews, query information, user-clicked recommendation reason> can be selected to train the above recommendation reason generation device; for example, about one million four-tuples are selected to train the model. Table 1 below lists a four-tuple used for training.
  • Table 2 shows a test example of the recommendation reasons generated by three different solutions.
  • The first solution is a conventional Pointer-Generator model that does not consider query information; the generated recommendation reason is "The shop is large, with billiards". The second solution considers query information only in the encoder (also based on the Pointer-Generator model); the generated recommendation reason is "The shop is large, there are billiards, and the sound effects are not bad". The third solution considers the query information in both the encoder and the decoder; the generated recommendation reason is "It is comfortable to lie down and watch a movie". From these three results: the recommendation reason generated by the first solution is poorly related to the user's query information "Private Cinema" and cannot meet the personalized requirements of the query; the recommendation reason generated by the second solution contains "the sound effects are not bad", which is somewhat related to the query information "Private Cinema" and can meet the personalized requirements of the query; and the recommendation reason generated by the third solution, "It is comfortable to lie down and watch a movie", is highly related to the query information "Private Cinema", best meeting the personalized requirements of the query.
  • the recommendation reason generation device of the embodiment of the present disclosure can generate a personalized query recommendation reason by considering the user's query information in the encoding process and optionally in the decoding process.
  • The recommendation reason generation apparatus of the embodiments of the present disclosure may be any device or system that can generate recommendation reasons, for example, a server, workstation, personal computer, tablet computer, smart phone, or personal digital assistant, or a cloud computing device, or a component of the above devices.
  • Although the operation of the recommendation reason generation device of the embodiments of the present disclosure involves the user's query information and user comments, this does not mean that the user directly enters them on the recommendation reason generation device itself; it only means that the device uses them. The query information or user comments may be received as user input by other devices and then sent to the recommendation reason generation device, and the transmitted query information or user comments need not be in exactly the same form as the user's original input, as long as the recommendation reason generation device can obtain the content of the corresponding information.
  • FIG. 3 shows a schematic flowchart of a method 300 for generating a recommendation reason according to an embodiment of the present disclosure.
  • the recommendation reason generation method 300 includes steps S301 to S304.
  • In step S301, the user's query information is encoded into a query vector; in step S302, the user comment is encoded into an intermediate hidden state of the comment vector; in step S303, the query vector and the intermediate hidden state of the comment vector are operated on to generate a selective hidden state of the comment vector; and in step S304, the selective hidden state of the comment vector is decoded into a recommendation reason.
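Steps S301–S304 can be walked through with a compact, hedged toy: the 2-dimensional vectors are made up, the gate is collapsed to a scalar sigmoid(h_i · q*) for brevity instead of the full matrix form, and the "decoding" step simply selects the comment word whose gated state has the largest norm, standing in for the real decoder.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# S301: query vector q* (toy, assumed given rather than encoded)
q = [1.0, 0.0]

# S302: intermediate hidden states h_i for three comment words (toy values)
comments = {"billiards": [0.2, 0.9], "sound": [0.9, 0.1], "large": [0.4, 0.4]}

# S303: gate each h_i by its agreement with the query vector
gated = {w: [hv * sigmoid(sum(a * b for a, b in zip(h, q))) for hv in h]
         for w, h in comments.items()}

# S304: stand-in decoding -> keep the word with the largest gated norm
best = max(gated, key=lambda w: sum(v * v for v in gated[w]))
print(best)  # 'sound' aligns best with the query direction
```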
  • Step S304 may adopt an attention mechanism in decoding the selective hidden state of the comment vector into a recommendation reason; when generating the context vector based on the attention mechanism, the query vector is used as an input parameter of the context-vector generation function (which may, for example, adopt an MLP).
  • In step S303, operating on the query vector and the intermediate hidden state of the comment vector to generate the selective hidden state of the comment vector includes: generating a selectivity vector based on a weighted sum of the intermediate hidden state of the comment vector and the query vector; and multiplying the intermediate hidden state of the comment vector element-wise by the selectivity vector to generate the selective hidden state of the comment vector.
  • In some embodiments, the entire-sequence vector of the user comment is also operated on, together with the intermediate hidden state of the comment vector and the query vector, for example by a weighted-sum operation, to generate the selective hidden state of the comment vector.
  • The details described above for the recommendation reason generation device also apply to the recommendation reason generation method.
  • the method for generating a recommendation reason according to an embodiment of the present disclosure can generate a personalized query recommendation reason by considering the user's query information in the encoding process and optionally in the decoding process.
  • The recommendation reason generation method can be executed by any electronic device or system, including but not limited to servers, workstations, personal computers, tablet computers, smart phones, personal digital assistants, cloud computing devices, and so on.
  • an electronic device for performing the above method may include a memory, a processor, and a computer program stored in the memory and running on the processor.
  • The processor implements the steps of the method according to any embodiment of the present disclosure when executing the program.
  • FIG. 4 shows a schematic structural diagram of an electronic device 400 according to an exemplary embodiment of the present disclosure.
  • the electronic device 400 includes a processor 401 and a memory 402.
  • the processor 401 may execute corresponding processing according to a program stored or loaded in the memory 402, that is, it may execute the steps of the method according to any embodiment of the present disclosure.
  • the processor 401 and the memory 402 may be connected to each other through a bus 403.
  • the electronic device 400 may further include an input/output (I/O) interface 404, and the interface 404 may also be connected to the bus 403.
  • an embodiment of the present disclosure also provides a computer-readable storage medium on which a computer program is stored, and when the program is executed by a processor, the steps in the method according to any embodiment of the present disclosure are implemented.
  • the computer-readable storage medium according to an embodiment of the present disclosure may be one computer-readable storage medium or a combination of multiple computer-readable storage media.
  • the embodiments of the present disclosure can be implemented in hardware or dedicated circuits, software, firmware or any combination thereof.
  • The processor in the embodiments of the present disclosure may be an integrated circuit chip or discrete hardware component with signal processing capabilities, for example, a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic devices, etc.
  • The methods, steps, and logical block diagrams disclosed in the embodiments of the present disclosure can be implemented or executed by such a processor.
  • the computer-readable storage medium or memory in the embodiment of the present disclosure may be a volatile memory or a non-volatile memory, or include both volatile and non-volatile memory.
  • the non-volatile memory may be read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), or flash memory.
  • Volatile memory may be random access memory (RAM), such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synch-link DRAM (SLDRAM), and direct Rambus RAM (DR RAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A recommendation reason generation apparatus (100) capable of generating a recommendation reason for a personalized query. The recommendation reason generation apparatus (100) comprises: a query encoder (101) for encoding the query information of a user into a query vector; a selective encoder (102) for encoding the comment of the user into a selective hidden state of the comment vector according to the query vector; and a decoder (103) constituting an encoder-decoder structure together with the selective encoder (102) and used for decoding the selective hidden state of the comment vector into a recommendation reason. The selective encoder (102) comprises a comment encoding unit (1021) and a selective computation network (1022); the comment encoding unit (1021) encodes the comment of the user into an intermediate hidden state of the comment vector; the selective computation network (1022) operates on the query vector and the intermediate hidden state of the comment vector to generate the selective hidden state of the comment vector.

Description

推荐理由的生成Recommendation reason generation
相关申请的交叉引用Cross references to related applications
本专利申请要求于2019年8月6日提交的、申请号为2019107204036的中国专利申请的优先权,该申请的全文以引用的方式并入本文中。This patent application claims the priority of the Chinese patent application with application number 2019107204036 filed on August 6, 2019, and the full text of the application is incorporated herein by reference.
技术领域Technical field
本公开涉及推荐理由生成领域,具体涉及基于人工智能的推荐理由生成装置及方法、计算机可读存储介质以及电子设备。The present disclosure relates to the field of recommendation reason generation, in particular to an artificial intelligence-based recommendation reason generation device and method, computer-readable storage media, and electronic equipment.
背景技术Background technique
推荐理由是在搜索结果页或发现页(例如场景决策、必吃榜单等)上展示给用户进行亮点推荐从而辅助用户决策的自然语言文本,其通常为一句简短语句。例如,在点评网站APP的搜索页面中输入“牛排”,搜索结果页上会给出多个与“牛排”有关的商家,在各个商家的简介绍中通常会有一句简短的推荐理由,例如,“精选菲力牛排分量很足”、“精选西冷和牛牛排新鲜好吃,很嫩”。再例如,在点评网站APP的“必吃榜”上会推荐多个必吃餐厅,每个推荐餐厅的简介绍中具有一句称为“必吃理由”的推荐理由,例如,“红烧肉很棒,服务态度很好,特别棒”、“烤鸭名不虚传,不仅鸭皮脆,鸭肉很糯”。The reason for recommendation is a natural language text displayed to the user on a search result page or a discovery page (such as a scene decision, a must-eat list, etc.) for highlight recommendation to assist the user in decision-making, which is usually a short sentence. For example, if you enter "steak" on the search page of a review website APP, multiple businesses related to "steak" will be listed on the search results page, and there is usually a short recommendation reason in the brief introduction of each business, for example, "Selected filet steak is very generous", "Selected sirloin wagyu steak is fresh and delicious, very tender". For another example, there are many must-eat restaurants recommended on the "must-eat list" of the review website APP, and the brief introduction of each recommended restaurant has a recommendation reason called "must-eat reason", for example, "Broiled pork is great , The service attitude is very good, very good", "Roast duck is well-deserved, not only the duck skin is crispy, but the duck meat is very sticky."
A recommendation reason in natural-language form can be regarded as a highly condensed digest of real user reviews. It explains the recalled results to the user, surfaces merchant highlights, attracts user clicks, and provides scene-based guidance, thereby enhancing the user experience and the user's trust in the platform.
Summary
According to one aspect of the present disclosure, a recommendation reason generation apparatus is provided, including: a query encoder configured to encode a user's query information into a query vector; a selective encoder configured to encode a user review into a review-vector selective hidden state based on the query vector; and a decoder forming an encoder-decoder structure with the selective encoder and configured to decode the review-vector selective hidden state into a recommendation reason. The selective encoder includes a review encoding unit and a selective computation network: the review encoding unit is configured to encode the user review into a review-vector intermediate hidden state, and the selective computation network is configured to operate on the query vector and the review-vector intermediate hidden state to generate the review-vector selective hidden state.
According to another aspect of the present disclosure, a recommendation reason generation method is provided, including: encoding a user's query information into a query vector; encoding a user review into a review-vector intermediate hidden state; operating on the query vector and the review-vector intermediate hidden state to generate a review-vector selective hidden state; and decoding the review-vector selective hidden state into a recommendation reason.
According to another aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, the above method is implemented.
According to another aspect of the present disclosure, an electronic device is provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the above method is implemented.
According to the above aspects of the present disclosure, the user's query information is taken into account when the user review is encoded, so a query-personalized recommendation reason can be generated, thereby improving the user experience.
Brief Description of the Drawings
These and other aspects, features, and advantages of the present disclosure will become clearer and easier to understand from the following description of embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 shows a schematic block diagram of a recommendation reason generation apparatus according to an embodiment of the present disclosure;
Fig. 2 shows a schematic diagram of an operation example of the recommendation reason generation apparatus according to an embodiment of the present disclosure;
Fig. 3 shows a schematic flowchart of a recommendation reason generation method according to an embodiment of the present disclosure;
Fig. 4 shows a schematic structural diagram of an electronic device for generating a recommendation reason according to an embodiment of the present disclosure.
Detailed Description
The present disclosure will be described in detail below with reference to its exemplary embodiments. However, the present disclosure is not limited to the embodiments described herein and can be implemented in many different forms. The described embodiments are provided only to make the present disclosure thorough and complete and to fully convey its concept to those skilled in the art. Features of the described embodiments can be combined with or substituted for one another unless explicitly excluded or excluded by context.
A recommendation reason in natural-language form can enhance the user experience and the user's trust in the platform. One current method for generating recommendation reasons is to generate a one-sentence recommendation reason from high-quality user reviews of a point of interest (POI). The wording of the generated reason need not be strictly extracted from the original text; instead, based on artificial intelligence technology, the original text is encoded by an encoder and the reason is generated by a decoder.
However, this generation method does not take the user's real-time intent into account and may generate recommendation reasons that are irrelevant to the user's query, resulting in uneven quality and a degraded user experience. For example, when a user searches for "dining with kids" in a food app, the app recommends several restaurants, but their recommendation reasons may be unrelated to "dining with kids" and instead be generic recommendations about the restaurant, such as "The braised pork tastes great", because the method ignores the user's query information when generating the reason.
The present disclosure provides an artificial-intelligence-based recommendation reason generation apparatus and method capable of generating query-personalized recommendation reasons. The recommendation reason generation method proposed in the present disclosure improves on the above generative method: it can generate a personalized recommendation reason for each recalled POI according to the user's query information, which is more intelligent and improves the user experience. In the present disclosure, a "query" includes any user behavior of searching for a POI, so "query information" includes any information input by the user to search for a POI, such as text or voice search input (for example, "steak" or "dining with kids") as well as a topic selected by the user (for example, "ramen"), and so on. In the present disclosure, a point of interest (POI) includes any type of merchant that can be searched for, such as a restaurant, a swimming pool, or a movie theater.
In recent years, with the large-scale application of deep learning methods to natural language processing, sequence-to-sequence (Seq2Seq) models based on the encoder-decoder structure have achieved good results in text generation. Recommendation reason generation can be regarded as an abstractive text summarization task, which is a kind of text generation task. The recommendation reason generation apparatus and method of the embodiments of the present disclosure are implemented based on a sequence-to-sequence model with an encoder-decoder structure.
Fig. 1 shows a schematic block diagram of a recommendation reason generation apparatus 100 according to an embodiment of the present disclosure. As shown in Fig. 1, the apparatus 100 includes a query encoder 101, a selective encoder 102, and a decoder 103. The query encoder 101 is configured to encode the user's query information into a query vector. The selective encoder 102 is configured to encode a user review into a review-vector selective hidden state based on the query vector. The decoder 103 forms an encoder-decoder structure with the selective encoder 102 and is configured to decode the review-vector selective hidden state into a recommendation reason. The selective encoder 102 includes a review encoding unit 1021 and a selective computation network 1022. The review encoding unit 1021 is configured to encode the user review into a review-vector intermediate hidden state. The selective computation network 1022 is configured to operate on the query vector and the review-vector intermediate hidden state to generate the review-vector selective hidden state, so that the recommendation reason contains content related to the query information. In some embodiments, the decoder 103 may also decode based on the query vector produced by the query encoder 101.
The recommendation reason generation apparatus 100 of the embodiments of the present disclosure is implemented with an encoder-decoder structure based on artificial intelligence technology. In an encoder-decoder structure, the encoder encodes source text into vectors, and the decoder decodes the vectors produced by the encoder into target text. In the embodiments of the present disclosure, the recommendation reason is generated from user reviews; therefore, the user review is the source text in the encoder-decoder structure and the recommendation reason is the target text. In the embodiments of the present disclosure, the selective encoder 102 is used to encode the user review: when encoding the review into review vectors, it relies not only on the input review itself but also on the query vector produced by the query encoder 101, where the query vector corresponds to the user's query information. When the selective encoder 102 encodes the user review, the information can be filtered based on the query vector, so that the encoded review vector contains information related to the user's query. As a result, the recommendation reason obtained when the decoder 103 decodes this review vector can reflect the user's query information; that is, the generated recommendation reason is query-personalized. In some embodiments, the decoder 103 also decodes based on the query vector, so that the decoded recommendation reason better reflects the user's query information. For example, the decoder 103 may adopt an attention mechanism and use the query vector as an input parameter of the context-vector generation function when generating the attention-based context vector.
In the embodiments of the present disclosure, the query encoder 101 can encode the user's query information into a query vector using any existing or future encoder technology in the field of artificial intelligence, for example a unidirectional or bidirectional long short-term memory (LSTM) network or a word-embedding tool such as Word2Vec. As noted above, "query information" here means any information input by the user to search for a POI. When the user provides non-text input such as voice, the non-text information can, for example, first be recognized as text and then encoded into a vector.
The selective encoder 102-decoder 103 structure according to the embodiments of the present disclosure can be based on any existing or future neural-network encoder-decoder model in the field of artificial intelligence, for example a Pointer-Generator network, a recurrent neural network with long short-term memory (RNN-LSTM), ConvS2S (a sequence-to-sequence framework based on convolutional neural networks), or a Transformer network. The selective encoder 102 may include a review encoding unit 1021 and a selective computation network 1022 so as to encode the user review into a review-vector selective hidden state filtered by the query vector, so that the recommendation reason decoded by the decoder 103 contains content related to the query information. Specifically, the review encoding unit 1021 may use any existing or future general-purpose encoder technology in the field of artificial intelligence, for example a unidirectional or bidirectional LSTM network, to encode the user review into a review-vector intermediate hidden state; this intermediate hidden state is independent of the query vector. The selective computation network 1022 operates on the query vector and the review-vector intermediate hidden state, so that the generated review-vector selective hidden state carries query-vector information, which in turn causes the recommendation reason to contain content related to the query information. The selective computation network 1022 may be, for example, a multilayer perceptron (MLP) network. In the present disclosure, "selective hidden state" means that the hidden state contains query-vector information, and that this query-vector information selectively filters the content of the user review.
The recommendation reason generation apparatus 100 of the present disclosure is described in detail below using the Pointer-Generator network as an example. Fig. 2 shows a schematic diagram of an operation example of the apparatus 100 according to an embodiment of the present disclosure. As shown in Fig. 2, the query encoder 101 encodes the user's query information into a query vector q* through, for example, a bidirectional LSTM, for input to the selective encoder 102 and the decoder 103. Note that in some embodiments the query vector q* is used only by the selective encoder 102 and not by the decoder 103.
Using the query vector q* as a filtering parameter, the selective encoder 102 encodes the constituent words x_1, x_2, x_3, x_4, x_5, x_6 of the user review (the example of Fig. 2 uses six constituent words; an actual user review may have more or fewer) into review-vector selective hidden states h_1', h_2', h_3', h_4', h_5', h_6' (collectively h_i') for output to the decoder 103 for decoding. In other words, the selective encoder 102 generates the selective hidden states h_i' based on the query vector q*. Specifically, as shown in Fig. 2, the review encoding unit 1021 of the selective encoder 102 first generates, for each constituent word, a review-vector intermediate hidden state h_1, h_2, h_3, h_4, h_5, h_6 (collectively h_i) that is independent of the query vector; then the selective computation network 1022 of the selective encoder 102 generates the review-vector selective hidden states h_i' for output to the decoder 103 from the intermediate hidden states h_i and the query vector q*. For example, the review encoding unit 1021 can encode the input user review with a conventional bidirectional LSTM to produce the intermediate hidden states h_1 through h_6 corresponding to the constituent words x_1 through x_6 of the review sequence; the selective computation network 1022 can then operate on h_1 through h_6 and the query vector q* to generate the selective hidden states h_1' through h_6' that take q* into account. In the embodiments of the present disclosure, the selective computation network 1022 may adopt a multilayer perceptron (MLP), as shown in Fig. 2.
As an example, the selective computation network 1022 can generate a selective vector g_i based on a weighted sum of the intermediate hidden state h_i and the query vector q*, and multiply the intermediate hidden state h_i and the selective vector g_i element-wise to generate the selective hidden state h_i', for example as shown in equations (1) and (2):
g_i = σ(W_h h_i + W_q q* + b)     (1)
h_i' = g_i ⊙ h_i     (2)
where W_h and W_q are the weights of h_i and q* respectively, b is a bias value, W_h, W_q, and b are all learnable parameters, σ denotes the sigmoid activation function, and ⊙ denotes element-wise multiplication.
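Purely for illustration, equations (1) and (2) can be sketched in NumPy as follows. The dimensions, random initialization, and function names are assumptions made for this sketch and are not part of the disclosure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def selective_gate(h, q_star, W_h, W_q, b):
    """Equations (1)-(2): g_i = sigmoid(W_h h_i + W_q q* + b), h_i' = g_i * h_i.

    h:      (n, d)   intermediate hidden states h_1..h_n, one row per word
    q_star: (d_q,)   query vector q*
    W_h:    (d, d)   learnable weight of h_i (assumed shape)
    W_q:    (d, d_q) learnable weight of q* (assumed shape)
    b:      (d,)     learnable bias
    Returns the selective hidden states h', shape (n, d).
    """
    g = sigmoid(h @ W_h.T + q_star @ W_q.T + b)  # selective vector g_i, in (0, 1)
    return g * h                                  # element-wise product g_i * h_i

# Toy example: 6 review words, hidden size 4, query-vector size 3
rng = np.random.default_rng(0)
h = rng.standard_normal((6, 4))
q_star = rng.standard_normal(3)
W_h = rng.standard_normal((4, 4))
W_q = rng.standard_normal((4, 3))
b = np.zeros(4)

h_prime = selective_gate(h, q_star, W_h, W_q, b)
```

Because each gate value lies in (0, 1), the gate can only attenuate each component of h_i, which is the "selective filtering" effect described above.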
In some embodiments, the review encoding unit 1021 encodes the user review into a whole-sequence vector h* by a method such as an LSTM (for example, h* may be h_1 or h_6 in Fig. 2), and the selective computation network 1022 also operates on the whole-sequence vector h* of the user review together with the intermediate hidden state h_i when generating the selective hidden state, for example by concatenating h* with h_i or by including h* in the weighted sum. As an example, when the whole-sequence vector h* of the user review is used, equation (1) above can be transformed into equation (3) or equation (4):
g_i = σ(W_h h_i + W_q q* + W_c h* + b)     (3)
or: g_i = σ(W_h [h_i; h*] + W_q q* + b)     (4)
where g_i in equation (3) is a weighted sum of h*, h_i, and q*, with W_c the learnable weight of the whole-sequence vector h*; and g_i in equation (4) concatenates h* with h_i.
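The two variants (3) and (4) can likewise be sketched in NumPy; as before, the shapes and random weights are illustrative assumptions, and h* is taken here as the last intermediate state only for the sake of the toy example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Equation (3): g_i = sigmoid(W_h h_i + W_q q* + W_c h* + b), weighted-sum variant
def gate_weighted(h, q_star, h_star, W_h, W_q, W_c, b):
    return sigmoid(h @ W_h.T + q_star @ W_q.T + h_star @ W_c.T + b)

# Equation (4): g_i = sigmoid(W_h [h_i; h*] + W_q q* + b), concatenation variant
def gate_concat(h, q_star, h_star, W_h, W_q, b):
    n = h.shape[0]
    h_cat = np.concatenate([h, np.tile(h_star, (n, 1))], axis=1)  # [h_i; h*]
    return sigmoid(h_cat @ W_h.T + q_star @ W_q.T + b)

rng = np.random.default_rng(1)
n, d, d_q = 6, 4, 3
h = rng.standard_normal((n, d))
h_star = h[-1]                       # e.g. h* taken as the last state h_6
q_star = rng.standard_normal(d_q)

g3 = gate_weighted(h, q_star, h_star,
                   rng.standard_normal((d, d)), rng.standard_normal((d, d_q)),
                   rng.standard_normal((d, d)), np.zeros(d))
g4 = gate_concat(h, q_star, h_star,
                 rng.standard_normal((d, 2 * d)), rng.standard_normal((d, d_q)),
                 np.zeros(d))
```

Note that in the concatenation variant the weight W_h doubles in width (d × 2d), since it now acts on [h_i; h*].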
The selective hidden states h_1', h_2', h_3', h_4', h_5', h_6' output by the selective computation network 1022 of the selective encoder 102 are used by the decoder 103 for decoding. The decoder 103 can be any conventional decoder with or without an attention mechanism. In the example shown in Fig. 2, the decoder 103 adopts an attention mechanism, and when generating the attention-based context vectors C_1, C_2, C_3, C_4 (collectively C_t; Fig. 2 uses four context vectors as an example) it uses the query vector q* as an input parameter of the context-vector generation function. This function can adopt a multilayer perceptron (MLP), in which case the context-vector generation function generates the context vector using the MLP, as shown in Fig. 2. For example, when computing C_t, the decoder 103 can perform a weighted-sum operation on the query vector q* and the review-vector selective hidden states h_i', with the respective weights being learnable parameters. In some embodiments, the input parameters of the attention-based context-vector generation function can further include the whole-sequence vector h* of the user review; for example, when computing the attention-based context vector C_t, a weighted sum of the whole-sequence vector h*, the query vector q*, and the review-vector selective hidden states h_i' can be performed, or the whole-sequence vector h* can be concatenated with the selective hidden state h_t'. According to the embodiments of the present disclosure, C_t can otherwise be computed in the conventional way unless specified otherwise.
After obtaining the context vector C_t, the decoder 103 can generate the target-sequence hidden states S_1, S_2, S_3, S_4 and output the constituent words y_1, y_2, y_3, y_4 of the target sequence by conventional methods.
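One hedged sketch of the attention step described above is given below: the one-hidden-layer MLP score form, the use of the previous decoder state, and all dimensions are assumptions made for illustration rather than the disclosure's definitive formulation. It computes C_t as an attention-weighted sum of the selective hidden states h_i', with the query vector q* fed into the score function as an extra input.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def context_vector(h_prime, s_prev, q_star, W_a, v):
    """Attention-based C_t over the selective hidden states h_i',
    with the query vector q* as an extra input to the MLP score function.

    h_prime: (n, d)    selective hidden states h_1'..h_n'
    s_prev:  (d_s,)    previous decoder hidden state S_{t-1}
    q_star:  (d_q,)    query vector q*
    W_a:     (d_a, d + d_s + d_q), v: (d_a,)  learnable parameters (assumed shapes)
    Returns C_t (shape (d,)) and the attention weights (shape (n,)).
    """
    n = h_prime.shape[0]
    feats = np.concatenate(
        [h_prime, np.tile(s_prev, (n, 1)), np.tile(q_star, (n, 1))], axis=1)
    scores = np.tanh(feats @ W_a.T) @ v      # one-hidden-layer MLP score per word
    alpha = softmax(scores)                  # attention weights, sum to 1
    return alpha @ h_prime, alpha            # C_t = sum_i alpha_i * h_i'

rng = np.random.default_rng(2)
n, d, d_s, d_q, d_a = 6, 4, 5, 3, 8
h_prime = rng.standard_normal((n, d))
C_t, alpha = context_vector(h_prime, rng.standard_normal(d_s),
                            rng.standard_normal(d_q),
                            rng.standard_normal((d_a, d + d_s + d_q)),
                            rng.standard_normal(d_a))
```

Because q* enters the score function, the attention weights (and hence C_t) shift toward the review words most relevant to the query, which is what lets the decoded reason reflect the query.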
By taking the query vector q* into account in the attention-based context vector, the decoder 103 of the embodiments of the present disclosure produces target text (i.e., a recommendation reason) that involves the user's query information, so that a query-personalized recommendation reason can be better obtained.
The operation of the recommendation reason generation apparatus of the embodiments of the present disclosure has been exemplified above in conjunction with Fig. 2, but the implementation of the present disclosure is not limited to this example. For example, the review encoding unit 1021 of the selective encoder 102 can be based on other encoder structures, such as a unidirectional LSTM or a word-embedding tool such as Word2Vec. The selective encoder 102 can output the selective hidden states of all the constituent words of the user review to the decoder 103 for decoding, or can output only a single selective hidden state representing the whole-sequence information of the user review, for example h_1' or h_6' in Fig. 2.
The recommendation reason generation apparatus of the embodiments of the present disclosure can be trained on POI user reviews, user queries, and the recommendation reasons that users clicked. For example, after the recommendation reason feature went online in multiple scenarios of a review site, a large number of user search and click logs were accumulated. In one example, a certain number of four-tuples (POI, high-quality user review, query information, clicked recommendation reason) were extracted from the users' click logs as training data to train the above recommendation reason generation apparatus; about one million such four-tuples were selected from these logs to train the model. Table 1 below lists one four-tuple used for training.
Table 1
(The content of Table 1 is provided as an image in the original publication.)
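A minimal sketch of how such training four-tuples might be represented in code is shown below; the field names and sample values are illustrative assumptions, not data from the disclosure.

```python
from collections import namedtuple

# (POI, high-quality user review, query information, clicked recommendation reason)
TrainingExample = namedtuple(
    "TrainingExample", ["poi", "review", "query", "clicked_reason"])

example = TrainingExample(
    poi="example-restaurant-001",   # hypothetical POI identifier
    review="The steak was tender and the portion was generous ...",
    query="steak",
    clicked_reason="The selected filet steak is a generous portion")

# A training corpus would then be a list of such tuples extracted from click logs.
corpus = [example]
```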
The trained model was tested. Table 2 below shows test examples of the recommendation reasons generated under three different schemes.
Table 2
(The content of Table 2 is provided as an image in the original publication.)
In Table 2, the first scheme is a conventional Pointer-Generator model that does not consider the query information, and the generated recommendation reason is "The venue is large and has billiards". The second scheme considers the query information only in the encoder (also based on the Pointer-Generator model), and the generated reason is "The venue is large, has billiards, and the sound is quite good". The third scheme considers the query information in both the encoder and the decoder, and the generated reason is "It is very comfortable to watch a movie lying down". From the results of these three schemes it can be seen that the reason generated by the first scheme correlates poorly with the user's query "private cinema" and cannot satisfy query personalization; the reason generated by the second scheme contains "the sound is quite good", which has a general correlation with the query "private cinema" and can satisfy query personalization; and the reason generated by the third scheme, "It is very comfortable to watch a movie lying down", correlates strongly with the query "private cinema" and better satisfies the requirement of query personalization.
In summary, the recommendation reason generation apparatus of the embodiments of the present disclosure can generate query-personalized recommendation reasons by considering the user's query information in the encoding process and, optionally, in the decoding process.
The recommendation reason generation apparatus of the embodiments of the present disclosure can be any device or system capable of generating recommendation reasons, for example a server, a workstation, a personal computer, a tablet computer, a smartphone, a personal digital assistant, a cloud computing device, or a component of any of the above. Although the operation of the apparatus involves the user's query information and user reviews, this does not mean that the user inputs the query information or reviews directly on the apparatus; it only means that the apparatus uses the query information or user reviews, which may be received from the user by other devices and then transmitted to the apparatus. The transmitted query information or user reviews also need not take exactly the same form as the information originally input by the user; it is sufficient that the apparatus can obtain the content of the corresponding information.
Fig. 3 shows a schematic flowchart of a recommendation reason generation method 300 according to an embodiment of the present disclosure. The method 300 includes steps S301 to S304. As shown in Fig. 3, in step S301, the user's query information is encoded into a query vector; in step S302, a user review is encoded into a review-vector intermediate hidden state; in step S303, the query vector and the review-vector intermediate hidden state are operated on to generate a review-vector selective hidden state; and in step S304, the review-vector selective hidden state is decoded into a recommendation reason.
In an exemplary embodiment of the present disclosure, an attention mechanism can be adopted in the process of decoding the review-vector selective hidden state into a recommendation reason in step S304, and the query vector is used as an input parameter of the context-vector generation function (which may, for example, adopt an MLP) when generating the attention-based context vector.
In an exemplary embodiment of the present disclosure, operating on the query vector and the review-vector intermediate hidden state in step S303 to generate the review-vector selective hidden state includes: generating a selective vector based on a weighted sum of the review-vector intermediate hidden state and the query vector; and multiplying the review-vector intermediate hidden state and the selective vector element-wise to generate the review-vector selective hidden state.
In an exemplary embodiment of the present disclosure, the whole-sequence vector of the user review is also operated on together with the review-vector intermediate hidden state when generating the review-vector selective hidden state.
In an exemplary embodiment of the present disclosure, when the context vector based on the attention mechanism is generated, a weighted sum operation is performed on the query vector and the comment vector selective hidden states.
The above description of the recommendation reason generation device applies equally to the recommendation reason generation method. Likewise, the recommendation reason generation method according to the embodiments of the present disclosure can generate query-personalized recommendation reasons by taking the user's query information into account during encoding and, optionally, during decoding.
The recommendation reason generation method according to the embodiments of the present disclosure can be executed by any electronic device or system, including but not limited to a server, a workstation, a personal computer, a tablet computer, a smartphone, a personal digital assistant, a cloud computing device, and so on.
As an example, an electronic device for performing the above method may include a memory, a processor, and a computer program stored in the memory and executable on the processor. When executing the program, the processor implements the steps of the method according to any embodiment of the present disclosure. FIG. 4 shows a schematic structural diagram of an electronic device 400 according to an exemplary embodiment of the present disclosure. The electronic device 400 includes a processor 401 and a memory 402. The processor 401 can perform corresponding processing according to a program stored in or loaded into the memory 402, that is, it can execute the steps of the method according to any embodiment of the present disclosure. The processor 401 and the memory 402 may be connected to each other through a bus 403. According to an embodiment of the present disclosure, the electronic device 400 may further include an input/output (I/O) interface 404, which may also be connected to the bus 403.
In addition, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the steps of the method according to any embodiment of the present disclosure are implemented. The computer-readable storage medium according to an embodiment of the present disclosure may be a single computer-readable storage medium or a combination of multiple computer-readable storage media.
It should be noted that the embodiments of the present disclosure can be implemented in hardware or dedicated circuits, software, firmware, or any combination thereof. The processor in the embodiments of the present disclosure may be an integrated circuit chip or a discrete hardware component with signal processing capability; for example, it may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device such as a field-programmable gate array (FPGA), or a discrete gate or transistor logic device. It can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The computer-readable storage medium or memory in the embodiments of the present disclosure may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), such as a static RAM (SRAM), a dynamic RAM (DRAM), a synchronous DRAM (SDRAM), a double data rate synchronous DRAM (DDR SDRAM), an enhanced synchronous DRAM (ESDRAM), a synchronous link DRAM (SLDRAM), or a direct memory bus RAM (DR RAM).
Those skilled in the art should understand that the specific embodiments described above are merely examples and not limitations. Various modifications, combinations, sub-combinations, and substitutions can be made to the embodiments of the present disclosure according to design requirements and other factors; as long as they fall within the scope of the appended claims or their equivalents, they belong to the scope of protection claimed by the present disclosure.

Claims (13)

  1. A recommendation reason generation device, comprising:
    a query encoder configured to encode a user's query information into a query vector;
    a selective encoder configured to encode a user comment into comment vector selective hidden states based on the query vector; and
    a decoder, forming an encoder-decoder structure with the selective encoder, configured to decode the comment vector selective hidden states into a recommendation reason,
    wherein the selective encoder includes a comment encoding unit and a selective computation network,
    the comment encoding unit is configured to encode the user comment into comment vector intermediate hidden states, and
    the selective computation network is configured to operate on the query vector and the comment vector intermediate hidden states to generate the comment vector selective hidden states.
  2. The device according to claim 1, wherein
    the decoder employs an attention mechanism and, when generating a context vector based on the attention mechanism, uses the query vector as an input parameter of a context vector generation function.
  3. The device according to claim 1, wherein
    the selective computation network generates a selectivity vector based on a weighted sum of the comment vector intermediate hidden states and the query vector, and multiplies the comment vector intermediate hidden states element-wise by the selectivity vector to generate the comment vector selective hidden states.
  4. The device according to claim 1, wherein
    when generating the comment vector selective hidden states, the selective computation network further operates on a whole-sequence vector of the user comment together with the comment vector intermediate hidden states.
  5. The device according to claim 2, wherein
    when generating the context vector based on the attention mechanism, the decoder performs a weighted sum operation on the query vector and the comment vector selective hidden states.
  6. The device according to claim 1, wherein
    the recommendation reason generation device is trained on user comments of merchants of interest, user queries, and recommendation reasons clicked by users.
  7. A recommendation reason generation method based on an encoder-decoder structure, comprising:
    encoding a user's query information into a query vector;
    encoding a user comment into comment vector intermediate hidden states;
    operating on the query vector and the comment vector intermediate hidden states to generate comment vector selective hidden states; and
    decoding the comment vector selective hidden states into a recommendation reason.
  8. The method according to claim 7, wherein
    an attention mechanism is employed to decode the comment vector selective hidden states into the recommendation reason, and when a context vector based on the attention mechanism is generated, the query vector is used as an input parameter of a context vector generation function.
  9. The method according to claim 7, wherein operating on the query vector and the comment vector intermediate hidden states to generate the comment vector selective hidden states comprises:
    generating a selectivity vector based on a weighted sum of the comment vector intermediate hidden states and the query vector; and
    multiplying the comment vector intermediate hidden states element-wise by the selectivity vector to generate the comment vector selective hidden states.
  10. The method according to claim 7, wherein
    when the comment vector selective hidden states are generated, a whole-sequence vector of the user comment is also operated on together with the comment vector intermediate hidden states.
  11. The method according to claim 8, wherein
    when the context vector based on the attention mechanism is generated, a weighted sum operation is performed on the query vector and the comment vector selective hidden states.
  12. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 7-11.
  13. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the method according to any one of claims 7-11.
PCT/CN2020/107285 2019-08-06 2020-08-06 Generation of recommendation reason WO2021023249A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910720403.6A CN110532463A (en) 2019-08-06 2019-08-06 Rationale for the recommendation generating means and method, storage medium and electronic equipment
CN201910720403.6 2019-08-06

Publications (1)

Publication Number Publication Date
WO2021023249A1 true WO2021023249A1 (en) 2021-02-11

Family

ID=68661556

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/107285 WO2021023249A1 (en) 2019-08-06 2020-08-06 Generation of recommendation reason

Country Status (2)

Country Link
CN (1) CN110532463A (en)
WO (1) WO2021023249A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532463A (en) * 2019-08-06 2019-12-03 北京三快在线科技有限公司 Rationale for the recommendation generating means and method, storage medium and electronic equipment
CN111325571B (en) * 2019-12-30 2023-08-18 北京航空航天大学 Automatic generation method, device and system for commodity comment labels for multitask learning
CN113495942B (en) * 2020-04-01 2022-07-05 百度在线网络技术(北京)有限公司 Method and device for pushing information
CN112308650B (en) * 2020-07-01 2022-09-30 北京沃东天骏信息技术有限公司 Recommendation reason generation method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108280112A (en) * 2017-06-22 2018-07-13 腾讯科技(深圳)有限公司 Abstraction generating method, device and computer equipment
CN109189933A (en) * 2018-09-14 2019-01-11 腾讯科技(深圳)有限公司 A kind of method and server of text information classification
CN109800390A (en) * 2018-12-21 2019-05-24 北京石油化工学院 A kind of calculation method and device of individualized emotion abstract
US20190205750A1 (en) * 2017-12-29 2019-07-04 Alibaba Group Holding Limited Content generation method and apparatus
CN110532463A (en) * 2019-08-06 2019-12-03 北京三快在线科技有限公司 Rationale for the recommendation generating means and method, storage medium and electronic equipment


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022235404A1 (en) * 2021-05-03 2022-11-10 Oracle International Corporation Composing human-readable explanations for user navigational recommendations
US11625446B2 (en) 2021-05-03 2023-04-11 Oracle International Corporation Composing human-readable explanations for user navigational recommendations
CN113688309A (en) * 2021-07-23 2021-11-23 北京三快在线科技有限公司 Training method for generating model and generation method and device for recommendation reason
CN113688309B (en) * 2021-07-23 2022-11-29 北京三快在线科技有限公司 Training method for generating model and generation method and device for recommendation reason
CN113672804A (en) * 2021-08-02 2021-11-19 上海浦东发展银行股份有限公司 Recommendation information generation method, system, computer device and storage medium
CN113672804B (en) * 2021-08-02 2024-06-11 上海浦东发展银行股份有限公司 Recommendation information generation method, system, computer device and storage medium
CN113836392A (en) * 2021-08-06 2021-12-24 浙江大学 Deep learning interpretable recommendation method based on BERT and user comments
CN113836392B (en) * 2021-08-06 2024-03-26 浙江大学 Deep learning interpretable recommendation method based on BERT and user comments

Also Published As

Publication number Publication date
CN110532463A (en) 2019-12-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20850764

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20850764

Country of ref document: EP

Kind code of ref document: A1