CN116911304B - Text recommendation method and device - Google Patents


Info

Publication number
CN116911304B
CN116911304B (application CN202311169038.7A)
Authority
CN
China
Prior art keywords
text information
historical browsing
candidate text
candidate
entity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311169038.7A
Other languages
Chinese (zh)
Other versions
CN116911304A (en)
Inventor
齐盛
董辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Xumi Yuntu Space Technology Co Ltd
Original Assignee
Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Xumi Yuntu Space Technology Co Ltd
Priority to CN202311169038.7A
Publication of CN116911304A
Application granted
Publication of CN116911304B
Legal status: Active (current)
Anticipated expiration


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 — Handling natural language data
    • G06F40/20 — Natural language analysis
    • G06F40/279 — Recognition of textual entities
    • G06F40/289 — Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295 — Named entity recognition
    • G06F18/00 — Pattern recognition
    • G06F18/20 — Analysing
    • G06F18/22 — Matching criteria, e.g. proximity measures
    • G06F18/24 — Classification techniques
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/04 — Architecture, e.g. interconnection topology
    • G06N3/048 — Activation functions
    • G06N3/08 — Learning methods

Abstract

The disclosure relates to the technical field of artificial intelligence and provides a text recommendation method, a text recommendation apparatus, a computer device, and a computer-readable storage medium. When determining the recommendation probability corresponding to a piece of candidate text information, the method considers not only the similarity between the encoding vectors of the target user's historical browsing text information and of the candidate text information, but also the relevance between their respective entity features and category features. This improves the accuracy of the determined recommendation probability and thereby the accuracy of CTR prediction (i.e., the estimated recommendation probability of the candidate text information) in text recommendation scenarios, so that the texts recommended to the user are ones the user actually wants, which in turn improves user experience and the efficiency of the overall recommendation system.

Description

Text recommendation method and device
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to a text recommendation method and device.
Background
Recommendation systems play an indispensable role in daily life, permeating online shopping, news reading, video watching, and more. The recommended items vary with the scenario: housing listings, commodities, texts, and so on. By modeling and representing both the user and the texts, a recommendation system preferentially pushes to the user the news texts they are most likely to click, improving user satisfaction and the efficiency of the overall recommendation system.
A conventional text recommendation model typically models the text and the user separately, encodes the text information into dense vectors, performs feature crossing through a deep neural network, and finally outputs the probability that the user will click. When predicting the click-through rate (CTR) of a text, current text recommendation models consider only the crossing between different feature domains. As a result, the accuracy of CTR estimation is limited, the accuracy of the CTR estimation results is greatly reduced, the texts recommended to the user are not what the user actually wants, user experience is poor, and the efficiency of the overall recommendation system suffers.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide a text recommendation method, apparatus, computer device, and computer-readable storage medium, to solve the problem in the prior art that current text recommendation models consider only the crossing between different feature domains when predicting the click-through rate (CTR) of a text, which limits the accuracy of CTR estimation, greatly reduces the accuracy of the CTR estimation results, and leads to recommended texts that the user does not actually want, poor user experience, and reduced efficiency of the overall recommendation system.
In a first aspect of an embodiment of the present disclosure, there is provided a text recommendation method, including:
acquiring historical browsing text information and candidate text information of a target user;
according to the historical browsing text information, determining a coding vector, entity characteristics and category characteristics corresponding to the historical browsing text information;
determining coding vectors, entity characteristics and category characteristics corresponding to the candidate text information according to the candidate text information;
determining the recommendation probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information;
and if the recommendation probability corresponding to the candidate text information is greater than a preset threshold value, taking the candidate text as the recommendation text of the target user.
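The five steps of the first aspect can be sketched as a minimal pipeline. Everything below is illustrative: `featurize` and `score` are placeholder stand-ins for the encoder, entity recognition, classification, and fusion components described later in this disclosure, not the patent's actual implementation.

```python
def featurize(text: str):
    # Placeholder featurizer: (encoding vector, entity set, category label).
    toks = text.lower().split()
    return ([float(len(toks))], set(toks[:1]), toks[-1] if toks else "")

def score(hist_feats, cand_feat) -> float:
    # Placeholder scorer: fraction of history items sharing the candidate's category.
    if not hist_feats:
        return 0.0
    return sum(1.0 for f in hist_feats if f[2] == cand_feat[2]) / len(hist_feats)

def recommend(history_texts, candidate_texts, threshold=0.5):
    """Skeleton of the claimed method: featurize history and candidates,
    score each candidate against the history, keep those above the threshold."""
    hist_feats = [featurize(t) for t in history_texts]
    recommended = []
    for cand in candidate_texts:
        prob = score(hist_feats, featurize(cand))
        if prob > threshold:
            recommended.append(cand)
    return recommended
```

The real method replaces `featurize` with the encoder, NER, and classification models, and `score` with the learned similarity fusion described in S204.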
In a second aspect of the embodiments of the present disclosure, there is provided a text recommendation apparatus, including:
the information acquisition unit is used for acquiring historical browsing text information and candidate text information of the target user;
the first determining unit is used for determining the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information according to the historical browsing text information;
the second determining unit is used for determining the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information according to the candidate text information;
the third determining unit is used for determining the recommendation probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information;
and the text recommending unit is used for taking the candidate text as the recommended text of the target user if the recommending probability corresponding to the candidate text information is larger than a preset threshold value.
In a third aspect of the disclosed embodiments, a computer device is provided, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when the computer program is executed.
In a fourth aspect of the disclosed embodiments, a computer-readable storage medium is provided, which stores a computer program which, when executed by a processor, implements the steps of the above-described method.
Compared with the prior art, the embodiment of the disclosure has the beneficial effects that: the embodiment of the disclosure can firstly acquire the historical browsing text information and the candidate text information of the target user; then, according to the historical browsing text information, determining the coding vector, entity characteristic and category characteristic corresponding to the historical browsing text information; then, according to the candidate text information, determining the coding vector, entity characteristic and category characteristic corresponding to the candidate text information; next, determining a recommendation probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information; and if the recommendation probability corresponding to the candidate text information is greater than a preset threshold value, taking the candidate text as the recommendation text of the target user. It can be seen that, in this embodiment, the encoding vector, the entity feature and the category feature corresponding to the historical browsing text information of the target user may be used to determine the recommendation probability corresponding to the candidate text information, and determine, according to the recommendation probability corresponding to the candidate text information, whether the candidate text may be used as the recommendation text of the target user. 
Therefore, in determining the recommendation probability corresponding to the candidate text information, not only the similarity between the encoding vectors of the historical browsing text information and of the candidate text information is considered, but also the relevance between their respective entity features and category features. This improves the accuracy of the determined recommendation probability, and thereby the accuracy of CTR prediction (i.e., the estimated recommendation probability of the candidate text information) in text recommendation scenarios, so that the recommended texts are ones the user actually wants, which in turn improves user experience and the efficiency of the overall recommendation system.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings that are required for the embodiments or the description of the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
Fig. 1 is a scene schematic diagram of an application scene of an embodiment of the present disclosure;
FIG. 2 is a flow chart of a text recommendation method provided by an embodiment of the present disclosure;
FIG. 3 is a block diagram of a text recommendation device provided by an embodiment of the present disclosure;
fig. 4 is a schematic diagram of a computer device provided by an embodiment of the present disclosure.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system configurations, techniques, etc. in order to provide a thorough understanding of the disclosed embodiments. However, it will be apparent to one skilled in the art that the present disclosure may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present disclosure with unnecessary detail.
A text recommendation method and apparatus according to embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
In the prior art, current text recommendation models consider only the crossing between different feature domains when predicting the click-through rate (CTR) of a text. As a result, the accuracy of CTR estimation is limited, the accuracy of the CTR estimation results is greatly reduced, the texts recommended to the user are not what the user actually wants, user experience is poor, and the efficiency of the overall recommendation system suffers.
To solve the above problems, the present invention provides a text recommendation method in which the encoding vector, entity features, and category features corresponding to the target user's historical browsing text information are used to determine the recommendation probability corresponding to the candidate text information, and that probability decides whether the candidate text can serve as a recommended text for the target user. Therefore, in determining the recommendation probability, not only the similarity between the encoding vectors of the historical browsing text information and of the candidate text information is considered, but also the relevance between their respective entity features and category features. This improves the accuracy of the determined recommendation probability, and thereby the accuracy of CTR prediction (i.e., the estimated recommendation probability of the candidate text information) in text recommendation scenarios, so that the recommended texts are ones the user actually wants, which in turn improves user experience and the efficiency of the overall recommendation system.
For example, the embodiment of the present invention may be applied to an application scenario as shown in fig. 1. In this scenario, a terminal device 1 and a server 2 may be included.
The terminal device 1 may be hardware or software. When the terminal device 1 is hardware, it may be various electronic devices having a display screen and supporting communication with the server 2, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like; when the terminal device 1 is software, it may be installed in the electronic device as described above. The terminal device 1 may be implemented as a plurality of software or software modules, or as a single software or software module, to which the embodiments of the present disclosure are not limited. Further, various applications, such as a data processing application, an instant messaging tool, social platform software, a search class application, a shopping class application, and the like, may be installed on the terminal device 1.
The server 2 may be a server that provides various services, for example, a background server that receives a request transmitted from a terminal device with which communication connection is established, and the background server may perform processing such as receiving and analyzing the request transmitted from the terminal device and generate a processing result. The server 2 may be a server, a server cluster formed by a plurality of servers, or a cloud computing service center, which is not limited in the embodiment of the present disclosure.
The server 2 may be hardware or software. When the server 2 is hardware, it may be various electronic devices that provide various services to the terminal device 1. When the server 2 is software, it may be a plurality of software or software modules providing various services to the terminal device 1, or may be a single software or software module providing various services to the terminal device 1, which is not limited by the embodiments of the present disclosure.
The terminal device 1 and the server 2 may be communicatively connected via a network. The network may be a wired network using coaxial cable, twisted pair wire, and optical fiber connection, or may be a wireless network that can implement interconnection of various communication devices without wiring, for example, bluetooth (Bluetooth), near field communication (Near Field Communication, NFC), infrared (Infrared), etc., which are not limited by the embodiments of the present disclosure.
Specifically, the user can input the history browsing text information of the target user and the candidate text information through the terminal device 1; the terminal device 1 transmits the history browsing text information of the target user and the candidate text information to the server 2. The server 2 can determine the coding vector, entity characteristic and category characteristic corresponding to the historical browsing text information according to the historical browsing text information; then, the server 2 can determine the coding vector, entity feature and category feature corresponding to the candidate text information according to the candidate text information; then, the server 2 may determine the recommendation probability corresponding to the candidate text information according to the coding vector, the entity feature and the category feature corresponding to the historical browsing text information and the coding vector, the entity feature and the category feature corresponding to the candidate text information; finally, if the recommendation probability corresponding to the candidate text information is greater than the preset threshold, the server 2 may return the candidate text as the recommendation text of the target user to the terminal device 1, so that the terminal device 1 may display the recommendation text of the target user to the user. In this way, the embodiment can determine the recommendation probability corresponding to the candidate text information by using the coding vector, the entity feature and the category feature corresponding to the historical browsing text information of the target user, and judge whether the candidate text can be used as the recommendation text of the target user according to the recommendation probability corresponding to the candidate text information. 
Therefore, in determining the recommendation probability corresponding to the candidate text information, not only the similarity between the encoding vectors of the historical browsing text information and of the candidate text information is considered, but also the relevance between their respective entity features and category features. This improves the accuracy of the determined recommendation probability, and thereby the accuracy of CTR prediction (i.e., the estimated recommendation probability of the candidate text information) in text recommendation scenarios, so that the recommended texts are ones the user actually wants, which in turn improves user experience and the efficiency of the overall recommendation system.
It should be noted that the specific types, numbers and combinations of the terminal device 1 and the server 2 and the network may be adjusted according to the actual requirements of the application scenario, which is not limited in the embodiment of the present disclosure.
It should be noted that the above application scenario is only shown for the convenience of understanding the present disclosure, and embodiments of the present disclosure are not limited in any way in this respect. Rather, embodiments of the present disclosure may be applied to any scenario where applicable.
Fig. 2 is a flowchart of a text recommendation method provided in an embodiment of the present disclosure. The text recommendation method of fig. 2 may be performed by the terminal device or the server of fig. 1. As shown in fig. 2, the text recommendation method includes:
s201: and acquiring historical browsing text information and candidate text information of the target user.
In this embodiment, the target user is the user for whom text recommendation is to be made. The target user's historical browsing text information is text information the user has already browsed, for example news, papers, or posts read in the past. Historical browsing text information may include a title and content; for example, a news article comprises a title and a body. Candidate text information is text information that is a candidate for recommendation to the target user, and may likewise include a title and content. It should be noted that there may be one or more pieces of historical browsing text information and of candidate text information.
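For illustration only, the two kinds of text information described above can be represented as a simple title-plus-content record. The `TextInfo` name and the sample texts are hypothetical, not from the patent:

```python
from dataclasses import dataclass

@dataclass
class TextInfo:
    """One browsed or candidate text item: a title plus body content."""
    title: str
    content: str

# A target user is paired with the texts they browsed and the candidates to score.
history = [TextInfo("Chip maker posts record earnings",
                    "The company said quarterly profit doubled.")]
candidates = [TextInfo("New phone launch announced",
                       "The product goes on sale next month.")]
```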
S202: and determining the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information according to the historical browsing text information.
After the historical browsing text information of the target user is obtained, the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information can be determined according to the historical browsing text information. For example, the coding vector corresponding to the historical browsing text information can be determined according to the title and the content of the historical browsing text information; the category characteristics corresponding to the historical browsing text information can be determined according to the title of the historical browsing text information; the entity characteristics corresponding to the historical browsing text information can be determined according to the content of the historical browsing text information.
The encoding vector corresponding to the historical browsing text information can be understood as a feature vector reflecting the overall context of that text. The entity features are feature vectors reflecting the entities in the text; for example, the entities of the historical browsing text information may include company names, company locations, organization names, product names, and the like. The category features are feature vectors reflecting the text's category, for example entertainment, science and technology, or people's livelihood. It should be noted that if there are multiple pieces of historical browsing text information, the encoding vector, entity features, and category features of each are determined separately.
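As a toy illustration of S202 only (the patent itself uses trained models for these steps, described later): here the encoding vector is mimicked by a hashed bag-of-words vector, the entity features by a lexicon lookup over the content, and the category by title keywords. The lexicons, keywords, and dimension are all made up for this example:

```python
import hashlib

DIM = 16  # toy embedding dimension; a real encoder would output hundreds of dims

def encode(text: str) -> list[float]:
    """Toy stand-in for the encoder: L2-normalized hashed bag-of-words vector."""
    vec = [0.0] * DIM
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % DIM] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

# Hypothetical lexicon mapping surface forms to entity types.
ENTITY_LEXICON = {"acme": "company", "widgetco": "company", "paris": "location"}

def entity_features(content: str) -> set[str]:
    """Toy stand-in for the entity recognition model: lexicon lookup on content."""
    return {ENTITY_LEXICON[t] + ":" + t
            for t in content.lower().split() if t in ENTITY_LEXICON}

# Hypothetical keyword-to-category table applied to the title.
CATEGORY_KEYWORDS = {"earnings": "finance", "phone": "technology", "match": "sports"}

def category_feature(title: str) -> str:
    """Toy stand-in for the classification model: first matching title keyword."""
    for t in title.lower().split():
        if t in CATEGORY_KEYWORDS:
            return CATEGORY_KEYWORDS[t]
    return "other"
```

The same three functions would be applied unchanged to candidate text information in S203.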
S203: and determining the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information according to the candidate text information.
After the candidate text information is obtained, the encoding vector, entity features, and category features corresponding to the candidate text information can be determined from it. For example, the encoding vector corresponding to the candidate text information may be determined from the title and content of the candidate text information; the category features may be determined from the title of the candidate text information; and the entity features may be determined from the content of the candidate text information.
The encoding vector corresponding to the candidate text information can be understood as a feature vector reflecting the overall context of the candidate text. The entity features are feature vectors reflecting the entities in the candidate text; for example, its entities may include company names, company locations, organization names, product names, and the like. The category features are feature vectors reflecting the candidate text's category, for example entertainment, science and technology, or people's livelihood. If there are multiple pieces of candidate text information, the encoding vector, entity features, and category features of each may be determined separately.
S204: and determining the recommendation probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information.
After the encoding vector, entity features, and category features of the historical browsing text information and of the candidate text information have been obtained, the similarity between the two encoding vectors can be determined from the encoding vectors themselves. Likewise, the similarity between the entity features of the historical browsing text information and those of the candidate text information can be determined from the two sets of entity features, and the similarity between their category features can be determined from the two category features.
The recommendation probability corresponding to the candidate text information can then be determined from these three similarities: the similarity between the encoding vectors, the similarity between the entity features, and the similarity between the category features. The recommendation probability can be understood as the probability that the target user will click on the candidate text information: the higher the recommendation probability, the more likely the target user is to click on it, and vice versa. In one implementation, the recommendation probability corresponding to the candidate text information may be a predicted click-through rate (CTR).
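A minimal sketch of one way the three similarities could be fused into a probability, assuming cosine similarity for encoding vectors, Jaccard overlap for entity features, an exact-match indicator for categories, and a fixed sigmoid combination. In the patent the fusion is performed by a trained network; the weights and bias below are purely illustrative:

```python
import math

def cosine(u, v) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two entity-feature sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommendation_probability(hist, cand, w=(2.0, 1.0, 1.0), bias=-1.0):
    """Toy fusion of the three similarities; real weights would be learned.
    `hist` and `cand` are dicts with keys 'enc', 'entities', 'category'."""
    enc_sim = cosine(hist["enc"], cand["enc"])
    ent_sim = jaccard(hist["entities"], cand["entities"])
    cat_sim = 1.0 if hist["category"] == cand["category"] else 0.0
    z = w[0] * enc_sim + w[1] * ent_sim + w[2] * cat_sim + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the score into (0, 1)
```

By construction, a candidate that agrees with the history on all three signals receives a higher probability than one that agrees on none.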
S205: and if the recommendation probability corresponding to the candidate text information is greater than a preset threshold value, taking the candidate text as the recommendation text of the target user.
If the recommendation probability corresponding to the candidate text information is greater than the preset threshold, the candidate text information is text the user is interested in and likely to click to view, so the candidate text can be used as a recommended text for the target user and pushed to the user.
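The thresholding step S205 amounts to a simple filter. The 0.5 threshold here is an arbitrary example value; the patent only requires some preset threshold:

```python
def select_recommendations(candidates, probs, threshold=0.5):
    """Keep the candidates whose predicted probability exceeds the threshold."""
    return [c for c, p in zip(candidates, probs) if p > threshold]

texts = ["text_a", "text_b", "text_c"]
print(select_recommendations(texts, [0.9, 0.3, 0.6]))  # -> ['text_a', 'text_c']
```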
Compared with the prior art, the embodiment of the disclosure has the beneficial effects that: the embodiment of the disclosure can firstly acquire the historical browsing text information and the candidate text information of the target user; then, according to the historical browsing text information, determining the coding vector, entity characteristic and category characteristic corresponding to the historical browsing text information; then, according to the candidate text information, determining the coding vector, entity characteristic and category characteristic corresponding to the candidate text information; next, determining a recommendation probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information; and if the recommendation probability corresponding to the candidate text information is greater than a preset threshold value, taking the candidate text as the recommendation text of the target user. It can be seen that, in this embodiment, the encoding vector, the entity feature and the category feature corresponding to the historical browsing text information of the target user may be used to determine the recommendation probability corresponding to the candidate text information, and determine, according to the recommendation probability corresponding to the candidate text information, whether the candidate text may be used as the recommendation text of the target user. 
Therefore, in the process of determining the recommendation probability corresponding to the candidate text information, not only the similarity between the encoding vectors corresponding to the historical browsing text information and to the candidate text information is considered, but also the relevance between the entity features and category features corresponding to the historical browsing text information of the target user and those corresponding to the candidate text information is considered, thereby improving the accuracy of the determined recommendation probability. This effectively improves the accuracy of CTR prediction (i.e., the estimate of the recommendation probability corresponding to the candidate text information) in text recommendation scenarios, helps ensure that the text recommended to the user is text the user actually wants, and thus improves both the user experience and the efficiency of the overall recommendation system.
In some embodiments, the method corresponding to fig. 2 may be applied to a trained text recommendation model, where the text recommendation model may include an encoder, a main task recommendation model, an entity recognition model, and a classification model. It should be noted that, in some embodiments, the main task recommendation model may be a BERT neural network model, and the training loss function used in its training process may be an NCE (Noise Contrastive Estimation) loss function. The entity recognition model may be a NER (Named Entity Recognition) model, and its training loss function may be a cross-entropy loss function. The classification model may be a BERT neural network model, where the [CLS] embedding of the input text may be used for the classification task, and the training loss function used by the classification model may be a cross-entropy loss function. It should be noted that the training of the text recommendation model is multi-task collaborative training, which can optimize the main task recommendation model and the two auxiliary models (i.e., the entity recognition model and the classification model) at the same time; the overall loss value may be the sum of the loss value of the main task recommendation model, the loss value of the entity recognition model, and the loss value of the classification model. Because the text recommendation model is a multi-task model, gradient conflicts may occur during gradient updates in the training process. Therefore, the gradients of the auxiliary task models (i.e., the entity recognition model and the classification model) may be combined and scaled to a certain extent, so as to avoid interfering with the gradient update of the main task (i.e., the main task recommendation model); for example, the sum of the gradients of the entity recognition model and the classification model may be multiplied by a preset scaling factor to obtain the updated gradient. In this way, for the gradient conflicts that may arise between the main task and the auxiliary tasks during model training, this embodiment uses a gradient adjustment method that effectively alleviates the conflict between the main task and the auxiliary tasks and improves the effect of multi-task learning.
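The auxiliary-gradient scaling described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function name, the representation of gradients as plain arrays, and the default scale value 0.1 are all hypothetical; the patent only specifies that the summed gradients of the two auxiliary tasks are multiplied by a preset scaling factor before being used in the update.

```python
import numpy as np

def scaled_total_gradient(main_grad, ner_grad, cls_grad, scale=0.1):
    """Combine per-task gradients for one shared parameter.

    The auxiliary gradients (entity recognition + classification) are
    summed and multiplied by a preset scaling factor before being added
    to the main-task gradient, so they cannot overwhelm the gradient
    direction of the main recommendation task.
    """
    aux = scale * (np.asarray(ner_grad) + np.asarray(cls_grad))
    return np.asarray(main_grad) + aux

# Example: opposing auxiliary gradients partially cancel, and the
# remainder is damped before it perturbs the main-task gradient.
g = scaled_total_gradient([1.0, 0.0], [2.0, 2.0], [2.0, -2.0], scale=0.5)
```

In a full training loop this combined gradient would be applied by the optimizer in place of the unscaled sum of the three task gradients.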
In some embodiments, S202 "determining the encoding vector, the entity feature, and the category feature corresponding to the historical browsing text information" according to the historical browsing text information may include the following steps:
s202a: inputting the historical browsing text information into the encoder to obtain a coding vector corresponding to the historical browsing text information;
s202b: inputting the coding vector corresponding to the historical browsing text information into the entity recognition model to obtain entity characteristics corresponding to the historical browsing text information;
s202c: and inputting the coding vector corresponding to the historical browsing text information into the classification model to obtain the category characteristics corresponding to the historical browsing text information.
In this embodiment, the historical browsing text information may be input to the encoder (for example, the Item Encoder) to obtain the encoding vector corresponding to the historical browsing text information; for example, the title and the content of the historical browsing text information may each be input to the encoder to obtain the encoding vectors corresponding to the title and the content, respectively. Then, a classification identifier (e.g., [CLS]) may be added before the encoding vector corresponding to the historical browsing text information to obtain an adjusted encoding vector, and the adjusted encoding vector may be input into the classification model to obtain the category feature corresponding to the historical browsing text information; for example, the encoding vector corresponding to the content of the historical browsing text information may be input into the classification model to obtain the category feature corresponding to the historical browsing text information.
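The three-step feature extraction of S202a-S202c can be sketched as follows. Every function here is a toy stand-in: `encode` replaces the trained Item Encoder with a deterministic character-code hash, and `entity_features` / `category_features` replace the NER and classification model heads with hypothetical learned projections; only the data flow (text → encoding vector → entity feature and category feature) follows the patent.

```python
import numpy as np

def encode(text, dim=8):
    """Stand-in for the Item Encoder (S202a): maps text to a fixed-size
    encoding vector. A real system would use a trained encoder here."""
    v = np.zeros(dim)
    for i, ch in enumerate(text):
        v[i % dim] += ord(ch)          # fold character codes into dim buckets
    return v / (np.linalg.norm(v) + 1e-9)

def entity_features(enc_vec, W_ner=None):
    """Stand-in for the entity recognition model head (S202b), applied
    to the encoding vector. W_ner is a hypothetical learned projection."""
    if W_ner is None:
        W_ner = np.eye(len(enc_vec))
    return np.tanh(W_ner @ enc_vec)

def category_features(enc_vec, W_cls=None):
    """Stand-in for the classification model head (S202c); in the patent
    a [CLS] identifier is prepended before classification."""
    if W_cls is None:
        W_cls = np.eye(len(enc_vec))
    return np.tanh(W_cls @ enc_vec)

# Extract the three feature sets for one piece of historical browsing text.
hist_vec = encode("historical browsing text")
hist_entity = entity_features(hist_vec)
hist_category = category_features(hist_vec)
```

The same three functions would be reused verbatim for the candidate text information in S203a-S203c, since both branches share the encoder and the two auxiliary heads.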
In some embodiments, S203 "determining the encoding vector, the entity feature, and the category feature corresponding to the candidate text information" according to the candidate text information may include the steps of:
s203a: inputting the candidate text information into the encoder to obtain a coding vector corresponding to the candidate text information;
s203b: inputting the coding vector corresponding to the candidate text information into the entity recognition model to obtain the entity characteristics corresponding to the candidate text information;
s203c: and inputting the coding vector corresponding to the candidate text information into the classification model to obtain the category characteristic corresponding to the candidate text information.
In this embodiment, the candidate text information may be input to the encoder (for example, the Item Encoder) to obtain the encoding vector corresponding to the candidate text information; for example, the title and the content of the candidate text information may each be input to the encoder to obtain the encoding vectors corresponding to the title and the content, respectively. Then, a classification identifier (e.g., [CLS]) may be added before the encoding vector corresponding to the candidate text information to obtain an adjusted encoding vector, and the adjusted encoding vector may be input into the classification model to obtain the category feature corresponding to the candidate text information; for example, the encoding vector corresponding to the content of the candidate text information may be input into the classification model to obtain the category feature corresponding to the candidate text information.
In some embodiments, S204 "determining the recommendation probability corresponding to the candidate text information according to the encoding vector, the entity feature, and the category feature corresponding to the historical browsing text information, and the encoding vector, the entity feature, and the category feature corresponding to the candidate text information" may include the steps of:
s204a: and inputting the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information into the main task recommendation model to obtain the clicking probability corresponding to the candidate text information.
As an example, the main task recommendation model may determine a first probability value according to the encoding vector corresponding to the historical browsing text information and the encoding vector corresponding to the candidate text information. For example, a dot product may be computed between the encoding vector corresponding to the historical browsing text information and the encoding vector corresponding to the candidate text information to obtain the first probability value.
The main task recommendation model may determine a second probability value according to the entity feature corresponding to the historical browsing text information and the entity feature corresponding to the candidate text information. For example, a dot product may be computed between the entity feature corresponding to the historical browsing text information and the entity feature corresponding to the candidate text information to obtain the second probability value.
The main task recommendation model may determine a third probability value according to the category feature corresponding to the historical browsing text information and the category feature corresponding to the candidate text information. For example, a dot product may be computed between the category feature corresponding to the historical browsing text information and the category feature corresponding to the candidate text information to obtain the third probability value.
Finally, the main task recommendation model may obtain the click probability corresponding to the candidate text information according to the first probability value, the second probability value, and the third probability value. For example, the first probability value, the second probability value, and the third probability value may be weighted and combined to obtain the click probability corresponding to the candidate text information.
S204b: and taking the click probability corresponding to the candidate text information as the recommendation probability corresponding to the candidate text information.
After obtaining the click probability corresponding to the candidate text information, the click probability corresponding to the candidate text information can be used as the recommendation probability corresponding to the candidate text information.
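The S204a computation described above can be sketched as three dot products followed by a weighted combination. The specific weights and the final sigmoid squash are assumptions added for illustration; the patent only states that the three probability values are weighted to obtain the click probability.

```python
import numpy as np

def click_probability(hist, cand, weights=(0.5, 0.25, 0.25)):
    """Sketch of S204a: combine three similarity scores into one
    click probability.

    `hist` and `cand` are dicts holding the 'vec' (encoding vector),
    'entity' and 'category' features of the historical browsing text
    and the candidate text. The weight values and the sigmoid at the
    end are hypothetical choices, not specified by the patent.
    """
    p1 = float(np.dot(hist["vec"], cand["vec"]))            # encoding similarity
    p2 = float(np.dot(hist["entity"], cand["entity"]))      # entity similarity
    p3 = float(np.dot(hist["category"], cand["category"]))  # category similarity
    score = weights[0] * p1 + weights[1] * p2 + weights[2] * p3
    return 1.0 / (1.0 + np.exp(-score))  # squash to (0, 1)
```

Per S204b, this click probability is then used directly as the recommendation probability and compared against the preset threshold.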
It should be noted that in some embodiments, the text recommendation model further includes an aggregation layer. If the historical browsing text information of the target user is a plurality of historical browsing text information, before the step of inputting the coding vector, the entity feature and the category feature corresponding to the historical browsing text information, and the coding vector, the entity feature and the category feature corresponding to the candidate text information into the main task recommendation model to obtain the click probability corresponding to the candidate text information, the method further includes the following steps:
Step a: inputting the coding vectors corresponding to the historical browsing text information respectively into the aggregation layer to obtain a compressed coding vector;
step b: and taking the compressed code vector as the code vector corresponding to the historical browsing text information.
It can be appreciated that the aggregation layer may compress the encoding vectors corresponding to the plurality of historical browsing text information into a single compressed encoding vector, i.e., the compressed encoding vector incorporates the information of the encoding vectors corresponding to the plurality of historical browsing text information.
As an example, the aggregation layer may be an attention mechanism neural network, and the aggregation layer may compress the information of the encoding vectors corresponding to each of the plurality of historical browsing text information by using an attention (i.e. attention weighting) manner, and specifically may compress the encoding vectors by adopting the following formula:
r_u = Σ_{i=1}^{I} α_i · r_h^i
wherein r_u is the compressed encoding vector; α_i is the attention weight corresponding to the i-th historical browsing text information; r_h^i is the encoding vector of the i-th historical browsing text information (r_h denoting the historical browsing text information); I is the total number of historical browsing text information; and q_u and W_u are parameters to be learned by the aggregation layer.
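The aggregation layer can be sketched as follows. The weighted sum over history vectors follows the listed symbols (α_i, r_h^i, I); the specific additive-attention form used to produce α_i from the learnable q_u and W_u is an assumption, since the patent's formula image is not reproduced in the text.

```python
import numpy as np

def aggregate(history_vecs, q_u, W_u):
    """Sketch of the aggregation layer: attention-weight the per-item
    encoding vectors into one compressed vector r_u.

    history_vecs: array of shape (I, d), one encoding vector per
    historical browsing text. q_u (shape (d,)) and W_u (shape (d, d))
    are the aggregation layer's learnable parameters; the tanh-based
    scoring below is a hypothetical additive-attention choice.
    """
    H = np.asarray(history_vecs)          # (I, d)
    scores = np.tanh(H @ W_u.T) @ q_u     # one attention score per item
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()           # softmax -> weights alpha_i
    return alpha @ H                      # r_u = sum_i alpha_i * r_h^i
```

With q_u at zero, all scores are equal and the compressed vector reduces to the plain mean of the history vectors, which is a useful sanity check on the softmax weighting.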
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
The following are device embodiments of the present disclosure that may be used to perform method embodiments of the present disclosure. For details not disclosed in the embodiments of the apparatus of the present disclosure, please refer to the embodiments of the method of the present disclosure.
Fig. 3 is a schematic diagram of a text recommendation device provided in an embodiment of the present disclosure. As shown in fig. 3, the text recommendation apparatus includes:
an information acquisition unit 301 for acquiring historical browsing text information and candidate text information of a target user;
a first determining unit 302, configured to determine, according to the historical browsing text information, a coding vector, an entity feature, and a category feature corresponding to the historical browsing text information;
a second determining unit 303, configured to determine, according to the candidate text information, a coding vector, an entity feature, and a category feature corresponding to the candidate text information;
a third determining unit 304, configured to determine a recommendation probability corresponding to the candidate text information according to the coding vector, the entity feature, and the category feature corresponding to the historical browsing text information, and the coding vector, the entity feature, and the category feature corresponding to the candidate text information;
and the text recommending unit 305 is configured to take the candidate text as the recommended text of the target user if the recommendation probability corresponding to the candidate text information is greater than a preset threshold.
In some embodiments, the apparatus is applied to a text recommendation model, wherein the text recommendation model includes an encoder, a main task recommendation model, an entity recognition model, a classification model.
In some embodiments, the first determining unit 302 is configured to:
inputting the historical browsing text information into the encoder to obtain a coding vector corresponding to the historical browsing text information;
inputting the coding vector corresponding to the historical browsing text information into the entity recognition model to obtain entity characteristics corresponding to the historical browsing text information;
and inputting the coding vector corresponding to the historical browsing text information into the classification model to obtain the category characteristics corresponding to the historical browsing text information.
In some embodiments, the second determining unit 303 is configured to:
inputting the candidate text information into the encoder to obtain a coding vector corresponding to the candidate text information;
inputting the coding vector corresponding to the candidate text information into the entity recognition model to obtain the entity characteristics corresponding to the candidate text information;
and inputting the coding vector corresponding to the candidate text information into the classification model to obtain the category characteristic corresponding to the candidate text information.
In some embodiments, the third determining unit 304 is configured to:
Inputting the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information into the main task recommendation model to obtain the clicking probability corresponding to the candidate text information;
and taking the click probability corresponding to the candidate text information as the recommendation probability corresponding to the candidate text information.
In some embodiments, the third determining unit 304 is configured to:
the main task recommendation model determines a first probability value according to the coding vector corresponding to the historical browsing text information and the coding vector corresponding to the candidate text information;
the main task recommendation model determines a second probability value according to the entity characteristics corresponding to the historical browsing text information and the entity characteristics corresponding to the candidate text information;
the main task recommendation model determines a third probability value according to the category characteristics corresponding to the historical browsing text information and the category characteristics corresponding to the candidate text information;
and the main task recommendation model obtains clicking probability corresponding to the candidate text information according to the first probability value, the second probability value and the third probability value.
In some embodiments, the text recommendation model further includes an aggregation layer; if the historical browsing text information of the target user is a plurality of historical browsing text information; the apparatus further comprises a compression unit for:
Before the step of inputting the coding vector, entity feature and category feature corresponding to the historical browsing text information and the coding vector, entity feature and category feature corresponding to the candidate text information into the main task recommendation model to obtain the click probability corresponding to the candidate text information,
inputting the coding vectors corresponding to the historical browsing text information respectively into the aggregation layer to obtain a compressed coding vector;
and taking the compressed code vector as the code vector corresponding to the historical browsing text information.
Compared with the prior art, the embodiment of the disclosure has the beneficial effects that: the embodiment of the disclosure provides a text recommendation device, which comprises: the information acquisition unit is used for acquiring historical browsing text information and candidate text information of the target user; the first determining unit is used for determining the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information according to the historical browsing text information; the second determining unit is used for determining the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information according to the candidate text information; the third determining unit is used for determining the recommendation probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information; and the text recommending unit is used for taking the candidate text as the recommended text of the target user if the recommending probability corresponding to the candidate text information is larger than a preset threshold value. It can be seen that, in this embodiment, the encoding vector, the entity feature and the category feature corresponding to the historical browsing text information of the target user may be used to determine the recommendation probability corresponding to the candidate text information, and determine, according to the recommendation probability corresponding to the candidate text information, whether the candidate text may be used as the recommendation text of the target user. 
Therefore, in the process of determining the recommendation probability corresponding to the candidate text information, not only the similarity between the encoding vectors corresponding to the historical browsing text information and to the candidate text information is considered, but also the relevance between the entity features and category features corresponding to the historical browsing text information of the target user and those corresponding to the candidate text information is considered, thereby improving the accuracy of the determined recommendation probability. This effectively improves the accuracy of CTR prediction (i.e., the estimate of the recommendation probability corresponding to the candidate text information) in text recommendation scenarios, helps ensure that the text recommended to the user is text the user actually wants, and thus improves both the user experience and the efficiency of the overall recommendation system.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present disclosure.
Fig. 4 is a schematic diagram of a computer device 4 provided by an embodiment of the present disclosure. As shown in fig. 4, the computer device 4 of this embodiment includes: a processor 401, a memory 402, and a computer program 403 stored in the memory 402 and executable on the processor 401. The steps of the various method embodiments described above are implemented when the processor 401 executes the computer program 403. Alternatively, the processor 401 may implement the functions of the modules/units in the above-described device embodiments when executing the computer program 403.
Illustratively, the computer program 403 may be partitioned into one or more modules/units, which are stored in the memory 402 and executed by the processor 401 to complete the present disclosure. The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, the instruction segments describing the execution of the computer program 403 in the computer device 4.
The computer device 4 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The computer device 4 may include, but is not limited to, a processor 401 and a memory 402. It will be appreciated by those skilled in the art that fig. 4 is merely an example of computer device 4 and is not intended to limit computer device 4, and may include more or fewer components than shown, or may combine certain components, or different components, e.g., a computer device may also include an input-output device, a network access device, a bus, etc.
The processor 401 may be a central processing unit (Central Processing Unit, CPU), or another general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 402 may be an internal storage module of the computer device 4, for example, a hard disk or a memory of the computer device 4. The memory 402 may also be an external storage device of the computer device 4, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card) or the like, which are provided on the computer device 4. Further, the memory 402 may also include both internal memory modules of the computer device 4 and external memory devices. The memory 402 is used to store computer programs and other programs and data required by the computer device. The memory 402 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of each functional module and module is illustrated, and in practical application, the above-described functional allocation may be performed by different functional modules and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional modules or modules to perform all or part of the above-described functions. The functional modules and the modules in the embodiment can be integrated in one processing module, or each module can exist alone physically, or two or more modules can be integrated in one module, and the integrated modules can be realized in a form of hardware or a form of a software functional module. In addition, the specific names of the functional modules and the modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present disclosure. The modules in the above system, and the specific working process of the modules may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the description of each embodiment has its own emphasis; for parts that are not described or detailed in a particular embodiment, reference may be made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative modules and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus/computer device and method may be implemented in other manners. For example, the apparatus/computer device embodiments described above are merely illustrative, e.g., a module or division of modules is merely a logical function division, and there may be additional divisions of actual implementation, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or modules, which may be in electrical, mechanical or other forms.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, i.e., may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present disclosure may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
The integrated modules/units may be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as stand-alone products. Based on such understanding, the present disclosure may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program may implement the steps of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, the computer readable medium does not include electrical carrier signals and telecommunication signals in accordance with legislation and patent practice.
The above embodiments are merely for illustrating the technical solution of the present disclosure, and are not limiting thereof; although the present disclosure has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the disclosure, and are intended to be included in the scope of the present disclosure.

Claims (6)

1. The text recommendation method is characterized in that the method is applied to a text recommendation model, wherein the text recommendation model comprises an encoder, a main task recommendation model, an entity recognition model and a classification model, training of the text recommendation model is multi-task collaborative training so as to optimize the main task recommendation model, the entity recognition model and the classification model at the same time, in the training process of the text recommendation model, the entity recognition model and the classification model are used as auxiliary models, and gradients of the auxiliary models are combined and multiplied by a preset scaling factor when the gradients are updated to obtain updated gradients; the method comprises the following steps:
Acquiring historical browsing text information and candidate text information of a target user;
determining coding vectors, entity characteristics and category characteristics corresponding to the historical browsing text information according to the historical browsing text information;
determining coding vectors, entity characteristics and category characteristics corresponding to the candidate text information according to the candidate text information;
determining the click probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information, and taking the click probability corresponding to the candidate text information as the recommendation probability corresponding to the candidate text information;
if the recommendation probability corresponding to the candidate text information is greater than a preset threshold, taking the candidate text information as a recommended text for the target user;
the determining the click probability corresponding to the candidate text information according to the coding vector, the entity feature and the category feature corresponding to the historical browsing text information and the coding vector, the entity feature and the category feature corresponding to the candidate text information comprises the following steps:
The main task recommendation model determines a first probability value according to the similarity between the coding vector corresponding to the historical browsing text information and the coding vector corresponding to the candidate text information;
the main task recommendation model determines a second probability value according to the similarity between the entity characteristics corresponding to the historical browsing text information and the entity characteristics corresponding to the candidate text information;
the main task recommendation model determines a third probability value according to the similarity between the category characteristics corresponding to the historical browsing text information and the category characteristics corresponding to the candidate text information;
the main task recommendation model obtains the click probability corresponding to the candidate text information according to the first probability value, the second probability value and the third probability value;
the text recommendation model further comprises an aggregation layer; in a case where the historical browsing text information of the target user comprises a plurality of pieces of historical browsing text information, before the determining of the click probability corresponding to the candidate text information according to the coding vector, the entity feature and the category feature corresponding to the historical browsing text information and the coding vector, the entity feature and the category feature corresponding to the candidate text information, the method further comprises:
inputting the coding vectors respectively corresponding to the plurality of pieces of historical browsing text information into the aggregation layer to obtain a compressed coding vector; the aggregation layer performs compression using the following formula:
r_u = Σ_{i=1}^{I} α_i · r_h^i
wherein r_u is the compressed coding vector; α_i is the weight corresponding to the i-th piece of historical browsing text information; r_h^i is the i-th piece of historical browsing text information, r_h denoting historical browsing text information; I is the total number of pieces of historical browsing text information; and q_u and W_u are parameters of the aggregation layer to be learned;
and taking the compressed coding vector as the coding vector corresponding to the historical browsing text information.
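For orientation only, the scoring flow of claim 1 can be sketched in Python with NumPy. The additive-attention form of the aggregation weights, the sigmoid mapping of similarities to probability values, and the equal weighting of the three probability values are illustrative assumptions; the claim itself fixes only that q_u and W_u are learned parameters of the aggregation layer and that three similarity-derived probability values are combined into a click probability.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def aggregate_history(history_vecs, q_u, W_u):
    """Aggregation layer: compress the coding vectors of several historical
    browsing texts into one vector r_u = sum_i alpha_i * r_h^i.
    The additive-attention score q_u @ tanh(W_u @ r) is an assumption."""
    scores = np.array([q_u @ np.tanh(W_u @ r) for r in history_vecs])
    alpha = softmax(scores)                         # weight per historical text
    return (alpha[:, None] * np.stack(history_vecs)).sum(axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def click_probability(hist, cand, w=(1 / 3, 1 / 3, 1 / 3)):
    """Combine the first/second/third probability values from the coding-vector,
    entity-feature and category-feature similarities; the sigmoid mapping and
    equal weights `w` are illustrative choices, not taken from the claim."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    p1 = sigmoid(cosine(hist["vec"], cand["vec"]))            # first probability value
    p2 = sigmoid(cosine(hist["entity"], cand["entity"]))      # second probability value
    p3 = sigmoid(cosine(hist["category"], cand["category"]))  # third probability value
    return w[0] * p1 + w[1] * p2 + w[2] * p3
```

In this sketch, a candidate would be recommended when `click_probability(...)` exceeds the preset threshold, matching the final step of claim 1.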
2. The method of claim 1, wherein determining the encoding vector, the entity feature, and the category feature corresponding to the historical browsing text information according to the historical browsing text information comprises:
inputting the historical browsing text information into the encoder to obtain a coding vector corresponding to the historical browsing text information;
inputting the coding vector corresponding to the historical browsing text information into the entity recognition model to obtain entity characteristics corresponding to the historical browsing text information;
and inputting the coding vector corresponding to the historical browsing text information into the classification model to obtain the category characteristic corresponding to the historical browsing text information.
3. The method according to claim 1, wherein determining the coding vector, the entity feature, and the category feature corresponding to the candidate text information according to the candidate text information comprises:
inputting the candidate text information into the encoder to obtain a coding vector corresponding to the candidate text information;
inputting the coding vector corresponding to the candidate text information into the entity recognition model to obtain entity characteristics corresponding to the candidate text information;
and inputting the coding vector corresponding to the candidate text information into the classification model to obtain the category characteristic corresponding to the candidate text information.
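As a minimal illustration of the shared-encoder pipeline in claims 2 and 3 (one coding vector feeding both the entity recognition model and the classification model), the following sketch uses toy linear layers; all dimensions, the bag-of-tokens encoder, and the tanh activations are assumptions for demonstration, not the patented models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the three sub-models; the sizes (vocab 16, hidden 8,
# 4 entity dims, 3 categories) are arbitrary illustrative choices.
W_enc = rng.normal(size=(8, 16))   # "encoder": bag-of-tokens -> coding vector
W_ner = rng.normal(size=(4, 8))    # entity recognition model head
W_cls = rng.normal(size=(3, 8))    # classification model head

def extract_features(token_ids):
    """Text -> coding vector -> (entity features, category features).
    Both auxiliary heads consume the same coding vector, as in claims 2/3."""
    bag = np.bincount(token_ids, minlength=16).astype(float)
    coding_vec = np.tanh(W_enc @ bag)             # encoder output
    entity_feat = np.tanh(W_ner @ coding_vec)     # entity recognition model
    category_feat = np.tanh(W_cls @ coding_vec)   # classification model
    return coding_vec, entity_feat, category_feat
```

The same function would be applied to historical browsing text information and to candidate text information, yielding the two feature triples compared by the main task recommendation model.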
4. A text recommendation device, characterized in that the device is applied to a text recommendation model, wherein the text recommendation model comprises an encoder, a main task recommendation model, an entity recognition model, and a classification model; the training of the text recommendation model is multi-task collaborative training, so that the main task recommendation model, the entity recognition model, and the classification model are optimized simultaneously; during the training of the text recommendation model, the entity recognition model and the classification model serve as auxiliary models, and when gradients are updated, the gradients of the auxiliary models are merged and multiplied by a preset scaling factor to obtain updated gradients; the device comprises:
The information acquisition unit is used for acquiring historical browsing text information and candidate text information of the target user;
the first determining unit is used for determining coding vectors, entity characteristics and category characteristics corresponding to the historical browsing text information according to the historical browsing text information;
the second determining unit is used for determining the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information according to the candidate text information;
the third determining unit is used for determining the click probability corresponding to the candidate text information according to the coding vector, the entity characteristic and the category characteristic corresponding to the historical browsing text information and the coding vector, the entity characteristic and the category characteristic corresponding to the candidate text information, and taking the click probability corresponding to the candidate text information as the recommendation probability corresponding to the candidate text information;
a text recommending unit, configured to take the candidate text as a recommended text of the target user if a recommendation probability corresponding to the candidate text information is greater than a preset threshold;
the third determining unit is specifically configured to: the main task recommendation model determines a first probability value according to the similarity between the coding vector corresponding to the historical browsing text information and the coding vector corresponding to the candidate text information; the main task recommendation model determines a second probability value according to the similarity between the entity characteristics corresponding to the historical browsing text information and the entity characteristics corresponding to the candidate text information; the main task recommendation model determines a third probability value according to the similarity between the category characteristics corresponding to the historical browsing text information and the category characteristics corresponding to the candidate text information; the main task recommendation model obtains the click probability corresponding to the candidate text information according to the first probability value, the second probability value and the third probability value;
the text recommendation model further comprises an aggregation layer; in a case where the historical browsing text information of the target user comprises a plurality of pieces of historical browsing text information, the device further comprises a compression unit configured to: input the coding vectors respectively corresponding to the plurality of pieces of historical browsing text information into the aggregation layer to obtain a compressed coding vector, the aggregation layer performing compression using the following formula:
r_u = Σ_{i=1}^{I} α_i · r_h^i
wherein r_u is the compressed coding vector; α_i is the weight corresponding to the i-th piece of historical browsing text information; r_h^i is the i-th piece of historical browsing text information, r_h denoting historical browsing text information; I is the total number of pieces of historical browsing text information; and q_u and W_u are parameters of the aggregation layer to be learned; and take the compressed coding vector as the coding vector corresponding to the historical browsing text information.
5. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 3.
6. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 3.
CN202311169038.7A 2023-09-12 2023-09-12 Text recommendation method and device Active CN116911304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311169038.7A CN116911304B (en) 2023-09-12 2023-09-12 Text recommendation method and device


Publications (2)

Publication Number Publication Date
CN116911304A CN116911304A (en) 2023-10-20
CN116911304B true CN116911304B (en) 2024-02-20

Family

ID=88367185

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311169038.7A Active CN116911304B (en) 2023-09-12 2023-09-12 Text recommendation method and device

Country Status (1)

Country Link
CN (1) CN116911304B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112163149A (en) * 2020-09-16 2021-01-01 北京明略昭辉科技有限公司 Method and device for recommending messages
CN113961823A (en) * 2021-12-17 2022-01-21 江西中业智能科技有限公司 News recommendation method, system, storage medium and equipment
CN115098786A (en) * 2022-07-22 2022-09-23 齐鲁工业大学 News recommendation method and system based on gating multi-head self-attention
CN116010696A (en) * 2023-01-04 2023-04-25 华南农业大学 News recommendation method, system and medium integrating knowledge graph and long-term interest of user
CN116431919A (en) * 2023-04-13 2023-07-14 齐鲁工业大学(山东省科学院) Intelligent news recommendation method and system based on user intention characteristics
CN116467523A (en) * 2023-04-17 2023-07-21 平安科技(深圳)有限公司 News recommendation method, device, electronic equipment and computer readable storage medium


Also Published As

Publication number Publication date
CN116911304A (en) 2023-10-20

Similar Documents

Publication Publication Date Title
CN109492772B (en) Method and device for generating information
CN108280200B (en) Method and device for pushing information
CN106354856B (en) Artificial intelligence-based deep neural network enhanced search method and device
CN116541610B (en) Training method and device for recommendation model
CN112650841A (en) Information processing method and device and electronic equipment
CN112241327A (en) Shared information processing method and device, storage medium and electronic equipment
CN112149699A (en) Method and device for generating model and method and device for recognizing image
CN115935185A (en) Training method and device for recommendation model
CN112995414B (en) Behavior quality inspection method, device, equipment and storage medium based on voice call
CN114119123A (en) Information pushing method and device
CN116186541A (en) Training method and device for recommendation model
CN116911304B (en) Text recommendation method and device
CN112000872A (en) Recommendation method based on user vector, training method of model and training device
CN113780318B (en) Method, device, server and medium for generating prompt information
CN114925275A (en) Product recommendation method and device, computer equipment and storage medium
CN114445179A (en) Service recommendation method and device, electronic equipment and computer readable medium
CN115329183A (en) Data processing method, device, storage medium and equipment
CN113742593A (en) Method and device for pushing information
CN113592315A (en) Method and device for processing dispute order
CN116911913B (en) Method and device for predicting interaction result
CN116911912B (en) Method and device for predicting interaction objects and interaction results
CN111784377A (en) Method and apparatus for generating information
CN117454015B (en) Information recommendation method and device
CN111626805B (en) Information display method and device
CN116340638A (en) Method and device for determining interaction result

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant