CN113641916B - Content recommendation method and device, electronic equipment and storage medium


Info

Publication number
CN113641916B
CN113641916B (application CN202111191793.6A)
Authority
CN
China
Prior art keywords
content
sample
recommended
recommendation
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111191793.6A
Other languages
Chinese (zh)
Other versions
CN113641916A (en)
Inventor
梁瀚明
马骊
赵忠
傅妍玫
赵光耀
何新昇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111191793.6A priority Critical patent/CN113641916B/en
Publication of CN113641916A publication Critical patent/CN113641916A/en
Application granted granted Critical
Publication of CN113641916B publication Critical patent/CN113641916B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/906 Clustering; Classification
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G06F16/9537 Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • G06F16/9538 Presentation of query results

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application relates to the technical field of computers, and in particular to a content recommendation method, a content recommendation device, an electronic device and a storage medium, which can be applied to scenarios such as cloud technology, artificial intelligence, intelligent traffic and assisted driving, and are used for improving the timeliness of a recommendation system. The method comprises: in response to a content recommendation request, selecting contents to be recommended from a candidate content set based on the existing durations of the candidate contents; acquiring object features of a target object, context features related to the content recommendation request, and content features of each content to be recommended, the content features including a time-of-existence feature that characterizes how long the content has existed and is determined based on at least one of the content publishing time and the occurrence time of a content-related event; based on these features, performing low-order feature crossing and high-order feature crossing on each content to be recommended and determining a target recommendation value; and determining a recommendation order based on the target recommendation values. By performing recall and ranking in combination with the existing duration of the content, timeliness can be effectively improved.

Description

Content recommendation method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a content recommendation method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of computer technology, more and more content needs to be processed by computers and transmitted over a network once processing is finished, and content recommendation for users is required in more and more scenarios, such as recommendation of news, videos or advertisements.
In the related art, content recommendation mainly relies on a content recommendation model. To ensure that high-timeliness content is ranked near the top, two weighting modes have been proposed in the related art.
The first mode: weighting the estimated scores that the content recommendation model produces for high-timeliness articles.
When the first mode is adopted, the weighting value becomes coupled with the content recommendation model: changing the model structure or the score distribution also changes how the weighting behaves, which can impair the intended improvement in timeliness.
The second mode: weighting the high-timeliness samples during training of the content recommendation model.
When the second mode is adopted, there is no causal relationship between weighting a high-timeliness sample and raising its estimated score, so weighting the samples cannot effectively raise the estimated scores of high-timeliness samples, and the ranking result suffers.
In summary, neither weighting mode can effectively improve the timeliness of the recommendation system.
Disclosure of Invention
The embodiment of the application provides a content recommendation method and device, electronic equipment and a storage medium, which are used for effectively improving timeliness of a recommendation system.
The content recommendation method provided by the embodiment of the application comprises the following steps:
responding to a content recommendation request triggered by a target object, and selecting at least one content to be recommended from a candidate content set based on the existing duration corresponding to the candidate content;
obtaining the object characteristics of the target object, the context characteristics related to the content recommendation request and the content characteristics corresponding to each content to be recommended, wherein the content characteristics at least comprise: a time-of-existence characteristic determined based on at least one of the content release time and the occurrence time of the content-related event and used for representing the time length for which the content has existed;
respectively performing low-order feature crossing and high-order feature crossing on each content to be recommended based on the object features, the context features and the content features including existence time features, and determining a target recommendation value corresponding to each content to be recommended based on a feature crossing result;
and determining the recommendation sequence of each content to be recommended aiming at the target object based on the target recommendation value corresponding to each content to be recommended.
An embodiment of the present application provides a content recommendation device, including:
the recall unit is used for responding to a content recommendation request triggered by the target object and selecting at least one content to be recommended from the candidate content set based on the existing duration corresponding to the candidate content;
a feature obtaining unit, configured to obtain an object feature of the target object, a context feature related to the content recommendation request, and a content feature corresponding to each content to be recommended, where the content feature at least includes: a time-of-existence characteristic determined based on at least one of the content release time and the occurrence time of the content-related event and used for representing the time length for which the content has existed;
the analysis unit is used for performing low-order feature intersection and high-order feature intersection on each content to be recommended respectively based on the object features, the context features and the content features including existence time features, and determining a target recommendation value corresponding to each content to be recommended respectively based on a feature intersection result;
and the recommending unit is used for determining the recommending sequence of each content to be recommended aiming at the target object based on the target recommending value corresponding to each content to be recommended.
Optionally, the analysis unit is specifically configured to:
respectively inputting the content characteristics, the object characteristics and the context characteristics corresponding to the contents to be recommended into a trained content recommendation model;
recommending and sequencing the contents to be recommended based on the content recommendation model, and acquiring a target recommendation value corresponding to each content to be recommended;
wherein the content recommendation model is obtained by training based on a training sample data set containing related sample contents of different sample objects, the sample contents in the training sample data set at least include sample content whose existing duration falls within a specified time period, and each sample content is marked with a label value used for representing the existing duration corresponding to the sample content and whether the sample content has been clicked.
Optionally, the analysis unit is specifically configured to:
based on the content recommendation model, performing feature intersection on the content features of the contents to be recommended and the object features to obtain intersection features;
respectively performing low-order feature crossing on the content features, the object features, the context features and the crossing features of each content to be recommended to obtain a first recommended value corresponding to each content to be recommended, and performing high-order feature crossing to obtain a second recommended value corresponding to each content to be recommended;
carrying out weighted summation on the first recommendation value corresponding to each content to be recommended and the corresponding second recommendation value to obtain a target recommendation value corresponding to each content to be recommended;
optionally, the apparatus further comprises:
a labeling unit, configured to determine the tag value of each sample content by:
classifying each sample content according to the existing duration corresponding to each sample content and whether each sample content is clicked by a sample object;
and determining the label value corresponding to each sample content according to the classified category of each sample content.
Optionally, the labeling unit is specifically configured to:
taking sample content whose corresponding existing duration falls within a specified time period and which has been clicked as first-type sample content;
taking sample content whose corresponding existing duration does not fall within the specified time period and which has been clicked as second-type sample content;
taking sample content that has not been clicked as third-type sample content;
wherein the label value of the first type sample content is greater than the label value of the second type sample content, and the label value of the second type sample content is greater than the label value of the third type sample content.
Optionally, the apparatus further comprises:
the model training unit is used for obtaining the content recommendation model through the following training modes:
acquiring the training sample data set, executing cyclic iterative training on the initial content recommendation model according to the sample content pair in the training sample data set, and outputting the trained content recommendation model when the training is finished; wherein the following operations are executed in a loop iteration training process:
selecting a sample content pair from the training sample data set, inputting the selected sample content pair into the trained content recommendation model, and obtaining a first pre-estimated recommendation value corresponding to a first sample content in the sample content pair and a second pre-estimated recommendation value corresponding to a second sample content in the sample content pair, which are obtained based on the content recommendation model, wherein the first sample content and the second sample content are related to the same sample object, and the label value of the first sample content is greater than that of the second sample content;
and constructing a target loss function based on the first pre-estimated recommended value, the second pre-estimated recommended value, the label value of the first sample content and the label value of the second sample content, and adjusting the network parameters of the content recommendation model based on the target loss function.
Optionally, the model training unit is specifically configured to:
determining a corresponding sample pair weight based on a difference value of the tag values of the first sample content and the second sample content, wherein an absolute value of the difference value is positively correlated with the sample pair weight;
determining a corresponding estimated loss value based on the difference value between the first estimated recommended value and the second estimated recommended value;
determining the target loss function based on a product of the sample pair weight and the estimated loss value, wherein the target loss function is positively correlated with the product.
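As a reading aid for the training operations above, the following Python sketch assembles the target loss of one sample pair from the quantities named here; the logistic form of the estimated loss, the use of the absolute label difference as the pair weight, and the concrete numbers are assumptions for illustration, not the patent's prescribed formula.

import math

def pairwise_loss(score_first, score_second, label_first, label_second):
    """Target loss for one sample pair (label_first > label_second).

    The pair weight grows with the absolute difference of the two label
    values, and the estimated loss grows when the model fails to score the
    first sample content above the second one.
    """
    # Sample-pair weight: positively correlated with |label difference|.
    pair_weight = abs(label_first - label_second)

    # Estimated loss from the difference between the first and second
    # estimated recommendation values (assumed logistic form).
    estimated_loss = math.log(1.0 + math.exp(-(score_first - score_second)))

    # Target loss: positively correlated with the product of the two terms.
    return pair_weight * estimated_loss

# Example pair for one sample object: a high-timeliness clicked sample
# (assumed label 2) against an un-clicked sample (assumed label 0).
print(pairwise_loss(score_first=1.3, score_second=0.4,
                    label_first=2, label_second=0))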
An electronic device provided in an embodiment of the present application includes a processor and a memory, where the memory stores program codes, and when the program codes are executed by the processor, the processor is caused to execute any one of the steps of the content recommendation method.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer readable storage medium, and the processor executes the computer instructions to cause the computer device to execute the steps of any one of the content recommendation methods described above.
An embodiment of the present application provides a computer-readable storage medium including program code, and when the program code runs on an electronic device, the program code is configured to cause the electronic device to execute the steps of any one of the content recommendation methods described above.
The beneficial effect of this application is as follows:
the embodiment of the application provides a content recommendation method and device, electronic equipment and a storage medium. The candidate content is recalled by combining with the existence time characteristic used for representing the existing time length of the content, the target recommendation value of each content to be recommended is estimated, the high-timeliness content obtained by analyzing the existing time length can be guaranteed, the high-timeliness content has a high target recommendation value, and on the basis, when the recommendation sequence is sequenced on the basis of the target recommendation values corresponding to the contents to be recommended, the timeliness corresponding to the high-timeliness content can be effectively guaranteed. Compared with the related technology, the estimation method in the application does not need to weight the estimation score of the high-timeliness article obtained by model estimation, so that the condition that the weighted numerical value can be coupled with the content recommendation model can not be generated, the high-timeliness sample does not need to be weighted, and the timeliness of the recommendation system can be effectively improved under the condition that the model structure and the calculation complexity are not changed.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application;
fig. 2 is a schematic flowchart of a content recommendation method in an embodiment of the present application;
fig. 3 is a schematic flowchart of a method for obtaining a target recommendation value of a content to be recommended in an embodiment of the present application;
FIG. 4 is a diagram of a content presentation interface in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a content recommendation model in an embodiment of the present application;
FIG. 6 is a schematic flow chart illustrating a model training method according to an embodiment of the present disclosure;
FIG. 7 is a graph showing the results of a test performed in the example of the present application;
fig. 8 is a timing flowchart of a news recommendation method in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a content recommendation device in an embodiment of the present application;
fig. 10 is a schematic diagram of a hardware component of an electronic device to which an embodiment of the present application is applied;
fig. 11 is a schematic diagram of a hardware component structure of another electronic device to which the embodiment of the present application is applied.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments, but not all embodiments, of the technical solutions of the present application. All other embodiments obtained by a person skilled in the art without any inventive step based on the embodiments described in the present application are within the scope of the protection of the present application.
Some concepts related to the embodiments of the present application are described below.
Target object and object characteristics: the target object refers to an object that wants to view the contents to be recommended; in the embodiment of the application, the target object may be a certain user or an account that the user has logged in to. The object characteristics include the user's basic attributes, accumulated user behaviors, and the like.
Content and content characteristics: content generally covers various media forms such as text, sound and images; the media used include characters, pictures, photos, sound, animation and movies, as well as interactive functions provided by programs. In the embodiment of the present application, content specifically refers to information and goods that can be recommended to a user, for example news, pictures and texts, videos, articles, and the like. The content features include the content classification, content labels and the like; in addition, the content features in the embodiment of the present application further include a time-of-existence feature that represents how long the content has existed.
Recommendation system: personalized recommendation recommends information and goods of interest to a user according to the user's interest characteristics and purchasing behaviors. With the continuous expansion of e-commerce, the number and variety of goods grow rapidly, and customers need to spend a great deal of time finding the goods they want to buy. Browsing through large amounts of unrelated information and products overwhelms consumers with information overload and causes them to drift away. Personalized recommendation systems were developed to address these issues: a personalized recommendation system is an advanced business-intelligence platform built on mass data mining that helps an e-commerce website provide fully personalized decision support and information services for its customers' shopping.
Timeliness and high-timeliness sample content: timeliness refers to the effect that content can produce within a certain period of time. High-timeliness content refers to content whose existing duration falls within a specified time period. The recommendation system in the embodiment of the application has high timeliness: it can recommend fresh news, articles and the like, guaranteeing the timeliness of content recommendation.
The embodiments of the present application relate to Artificial Intelligence (AI) and Machine Learning technologies, and are designed based on a computer vision technology and Machine Learning (ML) in the AI.
Artificial intelligence is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence.
Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making. The artificial intelligence technology mainly comprises a computer vision technology, a natural language processing technology, machine learning/deep learning, automatic driving, intelligent traffic and other directions. With the research and progress of artificial intelligence technology, artificial intelligence is researched and applied in a plurality of fields, such as common smart homes, smart customer service, virtual assistants, smart speakers, smart marketing, unmanned driving, automatic driving, robots, smart medical treatment and the like.
Machine learning is a multi-field interdisciplinary subject involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory and other disciplines. It specifically studies how a computer can simulate or realize human learning behavior in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve its own performance. Compared with data mining, which looks for mutual characteristics among big data, machine learning focuses on the design of algorithms that allow a computer to learn rules from data automatically and use those rules to predict unknown data.
Machine learning is the core of artificial intelligence, is the fundamental approach for computers to have intelligence, and is applied to all fields of artificial intelligence. Machine learning and deep learning generally include techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, inductive learning, and the like. The content recommendation model in the embodiment of the application is obtained by training through a machine learning or deep learning technology. The content recommendation method based on the content recommendation model in the embodiment of the application can be used for recommending contents such as videos, pictures and texts and news.
The method for training the content recommendation model provided in the embodiment of the application can be divided into two parts, including a training part and an application part; the training part relates to the technical field of machine learning, and in the training part, a content recommendation model is trained through the technology of machine learning. Specifically, a content recommendation model is trained by using sample content pairs in a training sample data set given in the embodiment of the application, after the sample content pairs pass through the content recommendation model, an output result of the content recommendation model is obtained, model parameters are continuously adjusted by combining the output result, and the trained content recommendation model is output; the application part is used for recommending the content by using the content recommendation model trained in the training part.
The following briefly introduces the design concept of the embodiments of the present application:
a recommendation system is an information filtering system that predicts the user's rating or preference for content. Generally, recommendation systems typically include two phases, recall and sort. The recalling stage is to quickly select candidate contents related to the user interest from a full content library, for example, to screen out thousands of contents from a million-level content pool, and the sorting stage is to score the recalled contents, and to intercept top n contents as recommendation results according to scores, for example, to screen out a plurality of contents from thousands of contents and display the contents to the user.
In the related art, two modes are mainly proposed to ensure that high-timeliness content is ranked near the top.
Mode 1: weighting the estimated scores of high-timeliness articles estimated by the ranking model:
\hat{s}_i = a_i \cdot s_i

where s_i is the estimated score of the i-th sample output by the fine-ranking model, \hat{s}_i is the weighted estimated score, and a_i is the weight applied to high-timeliness samples, a number greater than 1. The value of a_i needs to be selected according to the specific situation of the recommendation system.
When mode 1 is adopted, the weighting value becomes coupled with the ranking model: changing the model structure or the score distribution also changes how the weighting behaves, which can impair the intended improvement in timeliness.
Mode 2: weighting the high-timeliness samples during training of the ranking model. A commonly used loss function is the cross entropy based on the Pointwise loss (meaning that the loss function is constructed from a single sample):
L = -\sum_i \left[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \right]

where y_i is the label of the i-th sample, s_i is the estimated score, and p_i is the estimated click probability derived from s_i.
After weighting the high timeliness content, the loss function becomes:
L = -\sum_i b_i \left[ y_i \log p_i + (1 - y_i) \log(1 - p_i) \right]

where b_i is the sample weight of a high-timeliness sample, a number greater than 1. The value of b_i is selected according to the specific situation of the recommendation system.
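For comparison with the Pairwise scheme introduced later, a minimal Python sketch of the related-art weighted Pointwise cross entropy described above, assuming the click probability p_i is obtained from the score s_i with a sigmoid and that b_i = 1 for ordinary samples:

import math

def weighted_pointwise_cross_entropy(scores, labels, weights):
    """Related-art loss: cross entropy with per-sample weights b_i.

    scores  -- estimated scores s_i from the ranking model
    labels  -- click labels y_i in {0, 1}
    weights -- b_i, greater than 1 for high-timeliness samples, 1 otherwise
    """
    total = 0.0
    for s, y, b in zip(scores, labels, weights):
        p = 1.0 / (1.0 + math.exp(-s))  # estimated click probability (assumed sigmoid)
        total += -b * (y * math.log(p) + (1 - y) * math.log(1 - p))
    return total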
When mode 2 is adopted, there is no causal relationship between weighting a high-timeliness sample and raising its estimated score, so weighting the samples cannot effectively raise the estimated scores of high-timeliness samples, and the ranking result suffers.
In view of this, embodiments of the present application provide a content recommendation method and apparatus, an electronic device, and a storage medium. The candidate content is recalled in combination with the time-of-existence feature that characterizes how long the content has existed, and a target recommendation value is estimated for each content to be recommended, which ensures that high-timeliness content, identified by analyzing the existing duration, receives a higher target recommendation value. On this basis, when the recommendation order is determined from the target recommendation values corresponding to the contents to be recommended, the timeliness of high-timeliness content can be effectively guaranteed. Compared with the related art, the estimation method in the application does not need to weight the estimated scores of high-timeliness articles produced by the model, so the weighting value cannot become coupled with the content recommendation model, and high-timeliness samples do not need to be weighted; the timeliness of the recommendation system can therefore be effectively improved without changing the model structure or the computational complexity.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it should be understood that the preferred embodiments described herein are merely for illustrating and explaining the present application, and are not intended to limit the present application, and that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Fig. 1 is a schematic view of an application scenario in the embodiment of the present application. The application scenario diagram includes two terminal devices 110 and a server 120. The terminal device 110 in the embodiment of the present application may be installed with a content recommendation client, where the content recommendation client is used to perform content recommendation, and specifically may be social software, such as instant messaging software and short video software, and may also be an applet, a web page, and the like, which is not limited specifically herein.
It should be noted that the content recommendation client in the embodiment of the present application may also refer to various content recommendation applications that can be applied to the vehicle, such as education, messages, tourism, listening to books, advertisements, and the like, and accordingly, the content to be recommended may refer to news, books, strategies, and the like related to the education and the tourism, or advertisements, information flow messages, and the like, which is not limited herein.
The server 120 may include a content recommendation server. The content recommendation server is configured to provide content materials for the content recommendation client, for example, the candidate content set in the embodiment of the present application may be located on the content recommendation server side, and a plurality of candidate contents are stored in the content recommendation server. Alternatively, the candidate content set may be local to the content recommendation client. In addition, the content recommendation server in the embodiment of the present application may also be used for content recommendation, and is not specifically limited herein.
It should be noted that the content recommendation method in the embodiment of the present application may be executed by the server or the terminal device alone, or by the server and the terminal device together. For example, a user logs in to a content recommendation client installed on a terminal device and triggers a content recommendation request; the terminal device, in response to the content recommendation request triggered by the target object, sends the request and related information of the target object to the server; the server obtains the object features of the target object, the context features related to the content recommendation request and the content features corresponding to each content to be recommended, determines, based on these features, the recommendation order of the contents to be recommended for the target object, and returns the recommendation order to the terminal device; the terminal device then displays the recommendation result to the target object according to the determined recommendation order.
It should be noted that the implementation process of the content recommendation method listed above is only an example, and the steps are not specifically limited here.
In the embodiment of the present application, the content recommendation model may be deployed on the terminal device 110 for training, or may be deployed on the server 120 for training. The server 120 may store a plurality of training samples, including at least one sample content pair, for training the content recommendation model. Optionally, after the content recommendation model is obtained based on the training method in the embodiment of the present application through training, the trained content recommendation model may be directly deployed on the server 120 or the terminal device 110. Generally, the content recommendation model is directly deployed on the server 120, and in the embodiment of the present application, the content recommendation model is often used in a recommendation system for content recommendation.
It should be noted that the content recommendation method for training the content recommendation model provided in the embodiment of the present application may be applied to various application scenarios including content recommendation tasks, including but not limited to cloud technology, artificial intelligence, smart transportation, assisted driving, and the like, and training samples used in different scenarios are different and are not listed here.
In an alternative embodiment, terminal device 110 and server 120 may communicate via a communication network.
In an alternative embodiment, the communication network is a wired network or a wireless network.
In the embodiment of the present application, the terminal device 110 is a computer device used by a user, and the computer device includes, but is not limited to, a personal computer, a mobile phone, a tablet computer, a notebook, an electronic book reader, an intelligent voice interaction device, an intelligent appliance, a vehicle-mounted terminal, and the like. Each terminal device 110 is connected to a server 120 through a wireless network, and the server 120 is a server or a server cluster or a cloud computing center formed by a plurality of servers, or is a virtualization platform.
It should be noted that fig. 1 is only an example, and the number of the terminal devices and the servers is not limited in practice, and is not specifically limited in the embodiment of the present application.
The content recommendation method provided by the exemplary embodiment of the present application is described below with reference to the accompanying drawings in conjunction with the application scenarios described above; it should be noted that the application scenarios described above are shown only for the convenience of understanding the spirit and principles of the present application, and the embodiments of the present application are not limited in this respect.
Referring to fig. 2, an implementation flow chart of a content recommendation method provided in the embodiment of the present application is shown, taking a terminal device as an execution subject, and a specific implementation flow of the method is as follows:
s201: the terminal equipment responds to a content recommendation request triggered by a target object, and selects at least one content to be recommended from a candidate content set based on the existing duration corresponding to the candidate content;
s202: the method comprises the steps that terminal equipment obtains object characteristics of a target object, context characteristics related to a content recommendation request and content characteristics corresponding to various contents to be recommended, wherein the content characteristics at least comprise: a time-of-existence characteristic determined based on at least one of the content release time and the occurrence time of the content-related event and used for representing the time length for which the content has existed;
the target object refers to a user or a login account of a content recommendation platform logged in by the user, the object characteristics include basic attributes of the user and accumulated user behaviors, and the common basic attributes of the user include age, gender, occupation, city and the like. The context characteristics are determined according to the content recommendation request triggered by the target object, and may include information such as time and date of the content recommendation request.
The content to be recommended in the embodiment of the application can be news, pictures and texts, videos, articles and the like. The content features include classification and labeling of the content, etc. Taking an article as an example, the content feature is an article feature, which includes a classification and a label of the article, for example, the classification of the article includes food, clothing, and the like; taking news as an example, the content features include news categories and tags, such as entertainment, education, economy, and the like.
Taking item recommendation as an example, in a recommendation system, the value of an item can decay over time, and it can be concluded that a highly time-efficient item is very important for the recommendation system. Taking news recommendation as an example, the requirement on timeliness of the articles related to the hot events is higher, because the hot events can be continuously developed, and the recommendation brings bad user experience to the outdated content of the user.
Based on this, the content features in the embodiments of the present application further include a time-to-live feature that characterizes how long the content has been present.
In an alternative embodiment, the existing duration of a certain content to be recommended or of a candidate content (referred to below simply as the content) may be obtained in either of the following manners:
Manner 1: determining the existing duration of the content according to its release time.
In this way, the time length from the release time of the content to the request time is taken as the existing time length of the content to be recommended.
In the embodiment of the present application, the request time indicates a time when the target object triggers the content recommendation request. For example, when a user logs in a chat application and clicks a place of a viewpoint in the chat application, a content recommendation request may be triggered, the request time may refer to a time when the user clicks the place of the viewpoint in the chat application, and the like, and the actual application time is determined according to specific situations.
For example, the content A to be recommended is a piece of news whose release time is t1, and the request time is t2; the existing duration of the content A to be recommended is then Ta = t2 - t1.
Manner 2: determining the existing duration of the content according to the occurrence time of the content-related event.
The event related to the content to be recommended may be, for example, the event reported by a piece of news. In this manner, the time length from the occurrence time of the event to the request time is taken as the existing duration of the content to be recommended.
For example, the content B to be recommended is also a piece of news whose release time is t1, the occurrence time of the related event reported by the news is t3, and the request time is t2; the existing duration of the content B to be recommended is then Tb = t2 - t3.
In the embodiment of the present application, the time unit of the existing duration may be days, hours, and the like. Taking days as an example, when determining the existing duration of a certain piece of news, if the news was published at 0:00 on January 1, 2021 and the user request arrives about three and a half days later, the existing duration can be determined to be about 3.5 days; taking hours as an example, if a piece of news was published at 0:00 on January 1, 2021 and the user request time is 12:30 on January 1, 2021, the existing duration can be determined to be 12.5 hours.
In addition, the existing duration of the sample content may also be calculated based on any one of the first and second manners, and is not limited herein.
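A small Python sketch of the two manners above, computing the existing duration from either the release time or the event occurrence time; the function name and the choice of hours as the unit are illustrative assumptions.

from datetime import datetime

def existing_duration_hours(request_time, release_time=None, event_time=None):
    """Existing duration of a piece of content at request time, in hours.

    Manner 1 uses the release time of the content; manner 2 uses the
    occurrence time of the content-related event when it is available.
    """
    reference = event_time if event_time is not None else release_time
    return (request_time - reference).total_seconds() / 3600.0

# News published at 0:00 on January 1, 2021 and requested at 12:30 the same day.
t_release = datetime(2021, 1, 1, 0, 0)
t_request = datetime(2021, 1, 1, 12, 30)
print(existing_duration_hours(t_request, release_time=t_release))  # 12.5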
In embodiments of the present application, the timeliness of the content may be determined based on the length of time the content has existed: if the existing duration of the content falls within the specified time period, the content can be treated as high-timeliness content. Based on the above embodiments, the timeliness of the recommendation system in the embodiment of the application can be effectively guaranteed, high-timeliness content can be effectively recommended to the user, the user experience is improved, and the click-through rate of the recommended content is further increased.
S203: the terminal equipment respectively carries out low-order characteristic crossing and high-order characteristic crossing on each content to be recommended based on object characteristics, context characteristics and content characteristics including existence time characteristics, and determines a target recommendation value corresponding to each content to be recommended based on a characteristic crossing result;
the target recommendation value may represent a probability that the user clicks the content to be recommended.
In the embodiment of the present application, the step may be implemented based on artificial intelligence, for example, a target recommendation value corresponding to each content to be recommended may be obtained through a content recommendation model (i.e., a ranking model) in the present application.
It should be noted that the content recommendation model is obtained by training based on a training sample data set containing related sample contents of different sample objects. Accordingly, the sample contents in the training sample data set at least include sample content whose existing duration falls within a specified time period (i.e., high-timeliness sample content), and each sample content is labeled with a label value that characterizes the existing duration corresponding to the sample content and whether the sample content was clicked.
Optionally, the tag value of each sample content may be specifically determined in the following manner:
firstly, classifying each sample content according to the existing duration corresponding to each sample content and whether each sample content is clicked by a sample object; and determining the label value corresponding to each sample content according to the classification of each sample content obtained by the division.
In the embodiment of the present application, the contents of each sample are specifically classified into the following three categories:
First, sample content whose corresponding existing duration falls within the specified time period and which has been clicked is taken as the first type of sample content, which may also be called high-timeliness click sample content.
For example, taking the specified time period as 24 hours, sample content whose publication time is within 24 hours and whose related event occurred within 24 hours is classified as high-timeliness click sample content if it was clicked by a certain user.
Second, sample content whose corresponding existing duration does not fall within the specified time period but which has been clicked is taken as the second type of sample content, which may also be called ordinary click sample content.
For example, sample content that was published more than 24 hours ago and whose related event occurred more than 24 hours ago is classified as ordinary click sample content if it was clicked by a certain user.
Third, sample content that has not been clicked is taken as the third type of sample content, which may also be called un-clicked sample content.
That is, (exposed) samples other than the above two types are taken as un-clicked sample content.
The label value of the first type sample content is larger than that of the second type sample content, and the label value of the second type sample content is larger than that of the third type sample content.
It should be noted that the present application is applicable to different definition modes of the high aging, and may be other specified time periods, other definition modes, and the like besides the above listed definition modes of the high aging, and is not limited specifically herein.
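A minimal Python sketch of the three-way labeling rule described above, assuming a 24-hour specified time period and the hypothetical label values 2, 1 and 0; the embodiment only requires that the first-type label value exceeds the second, which in turn exceeds the third.

SPECIFIED_PERIOD_HOURS = 24  # assumed definition of the specified time period

def label_value(existing_duration_hours, clicked):
    """Label a sample content by its timeliness and click behaviour."""
    if clicked and existing_duration_hours <= SPECIFIED_PERIOD_HOURS:
        return 2  # first type: high-timeliness click sample content
    if clicked:
        return 1  # second type: ordinary click sample content
    return 0      # third type: (exposed but) un-clicked sample content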
An alternative embodiment is that S203 can be implemented according to the flowchart shown in fig. 3, and includes the following steps:
s301: the terminal equipment respectively inputs the content characteristics, the object characteristics and the context characteristics corresponding to each content to be recommended into a trained content recommendation model;
s302: the terminal equipment carries out recommendation sequencing on each content to be recommended based on a content recommendation model, and obtains a target recommendation value corresponding to each content to be recommended;
specifically, where the content recommendation model is a Deep Factorization Machine (DeepFM) model based on the Pairwise loss (meaning that the loss function is constructed from two samples), the model includes two parts: a Factorization Machine (FM) part for performing low-order feature crossing and a Deep Neural Network (DNN) part for performing high-order feature crossing. Step S302 can be further divided into the following sub-steps:
s3021: the terminal equipment performs feature intersection on the content features and the object features of each content to be recommended based on a content recommendation model to obtain intersection features;
the cross features in the embodiment of the application can also be called artificial cross features, and original object features and content features are used as input, Cartesian products are made according to manually specified feature combinations, and high-dimensional sparse features can be obtained through the Cartesian products.
S3022: the terminal equipment respectively carries out low-order feature crossing on the content features, the object features, the context features and the crossing features of the contents to be recommended to obtain a first recommended value corresponding to each content to be recommended, and carries out high-order feature crossing to obtain a second recommended value corresponding to each content to be recommended;
the first recommendation value is mainly a score obtained based on the FM portion, and the second recommendation value is a score obtained based on the DNN portion.
S3023: and the terminal equipment performs weighted summation on the first recommendation value corresponding to each content to be recommended and the corresponding second recommendation value to obtain a target recommendation value corresponding to each content to be recommended.
S204: and the terminal equipment determines the recommendation sequence of each content to be recommended aiming at the target object based on the target recommendation value corresponding to each content to be recommended.
In general, the higher the target recommendation value corresponding to the content to be recommended is, the higher the recommendation order of the content to be recommended for the target object is, that is, the content to be recommended is preferentially recommended to the target object.
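Purely as an illustration of sub-steps S3022 and S3023 and the ordering in S204, the following sketch combines a low-order (FM-style) score and a high-order (DNN-style) score by a weighted sum and ranks the contents; the two scoring callables and the 0.5/0.5 weights are placeholders, not the trained model.

def target_recommendation_value(fm_score, dnn_score, w_fm=0.5, w_dnn=0.5):
    """Weighted sum of the first (low-order) and second (high-order)
    recommendation values; the 0.5/0.5 weights are assumptions."""
    return w_fm * fm_score + w_dnn * dnn_score

def rank_contents(contents, fm_part, dnn_part):
    """Order the contents to be recommended by target recommendation value.

    fm_part and dnn_part are callables standing in for the trained FM and
    DNN parts of the content recommendation model.
    """
    scored = [(c, target_recommendation_value(fm_part(c), dnn_part(c)))
              for c in contents]
    # A higher target recommendation value means an earlier recommendation order.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)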
In the embodiment of the application, the candidate content is recalled in combination with the time-of-existence feature that characterizes how long the content has existed, and a target recommendation value is estimated for each content to be recommended, which ensures that high-timeliness content, identified by analyzing the existing duration, receives a higher target recommendation value. On this basis, when the recommendation order is determined from the target recommendation values corresponding to the contents to be recommended, the timeliness of high-timeliness content can be effectively guaranteed. Compared with the related art, the estimation method in the application does not need to weight the estimated scores of high-timeliness articles produced by the model, so the weighting value cannot become coupled with the content recommendation model, and high-timeliness samples do not need to be weighted; the timeliness of the recommendation system can therefore be effectively improved without changing the model structure or the computational complexity.
Fig. 4 is a schematic diagram of a content display interface according to an embodiment of the present application. When a user logs in a certain chat application and clicks a viewpoint block in the chat application, a content recommendation request can be triggered, an account of the user logging in the chat application can be used as a target object, object characteristics of the target object can be generated according to basic attributes of the account, related user historical behaviors and the like, and context characteristics can be generated based on information such as time and date when the user triggers the content recommendation request.
In the embodiment of the application, after the chat application responds to the content recommendation request triggered by the target object, the content to be recommended, which is higher in relevance and timeliness and more interesting for the target object, can be screened out for the target object based on the relevant recommendation system, and is displayed to the user through the interface shown in fig. 4.
Specifically, the recommendation system comprises two stages: recall and ranking. The first stage is the recall stage, which screens thousands of contents to be recommended from a million-level content pool. The recall process needs to guarantee the variety and a certain relevance of the contents to be recommended; the recall process in the related art does not separately consider timeliness, so there is relatively little high-timeliness content to be recommended at the ranking stage. To solve this problem, a timeliness queue is additionally added in the recall stage of the embodiment of the application. The recall stage typically includes portrait (user-profile) recall, collaborative recall, and the like; the embodiment of the application additionally adds a portrait recall that considers timeliness and recalls some high-timeliness content.
An optional implementation manner is that the content to be recommended with high timeliness is obtained by the following method:
selecting candidate contents clicked by at least one candidate object from the candidate content set based on the historical behaviors of the candidate objects; sequencing each candidate content according to the existing duration corresponding to each selected candidate content; and selecting candidate contents of the sequencing result at the specified sequencing position as the contents to be recommended which are recalled in the recalling stage and have high timeliness.
The candidate objects refer to accounts of historical users of the target application, obtained through big-data statistics, and the candidate content set refers to the content pool. The first screening according to the historical behaviors of the candidate objects specifically recalls candidate contents according to the tags accumulated from the user's historical behaviors: the user portrait records how many times the user has clicked contents carrying a given tag, and the larger that number, the stronger the user's interest in the tag, so a portion of the candidate contents carrying tags in the portrait can be selected at recall time. The selected candidate contents are then sorted by their existing durations in ascending order, and a certain number of the top-ranked candidate contents are selected as the contents to be recommended. Further, the contents to be recommended can be ranked based on the content recommendation model in the embodiment of the application.
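The portrait-based timeliness recall described above can be sketched as follows; the dictionary field names and the cut-off of 100 recalled items are assumptions for illustration.

def timeliness_recall(candidates, clicked_tags, top_k=100):
    """Additional recall queue that favours high-timeliness candidates.

    candidates   -- iterable of dicts with 'tags' and 'existing_duration' keys
    clicked_tags -- tags accumulated from the object's historical click behaviour
    """
    # First screening: keep candidates whose tags appear in the user portrait.
    matched = [c for c in candidates if set(c["tags"]) & set(clicked_tags)]
    # Sort by existing duration, smallest (most timely) first.
    matched.sort(key=lambda c: c["existing_duration"])
    # Candidates at the leading ranking positions are recalled.
    return matched[:top_k]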
It should be noted that the recall stage in the embodiment of the present application does not recall only this portion of high-timeliness content to be recommended: in addition to the content to be recommended recalled in the manner of the related art, a portion of high-timeliness content to be recommended is additionally recalled based on the manner described above.
In the above embodiment, the portrait recall that considers timeliness effectively guarantees that the input of the content recommendation model contains high-timeliness content to be recommended; in addition, the content recommendation model is trained on high-timeliness sample content, which avoids the situation where the content recommendation model cannot recognize high-timeliness articles and the recall therefore fails to take effect.
It should be noted that, in the embodiment of the present application, the acquired account-related data is acquired under the authorization of the user, and the privacy of the user is not violated.
The second stage is a sequencing stage, and sequencing can be further divided into coarse ranking and fine ranking, wherein the coarse ranking refers to scoring thousands of levels of contents to be recommended and screening hundreds of levels of contents to be recommended with highest relevance and timeliness; and the fine ranking means that tens of contents to be recommended with highest correlation and timeliness are screened out from hundreds of contents to be recommended and finally displayed to the user.
The recommendation system in the related art does not consider timeliness in the ranking stage. To solve this problem, the present application mainly targets the fine-ranking part and uses a timeliness-aware Pairwise loss as the loss of the content recommendation model.
The content recommendation model in the present application is described in detail below with reference to fig. 5:
fig. 5 is a schematic structural diagram of a content recommendation model in an embodiment of the present application, where the model is a DeepFM model based on Pairwise loss, and includes two parts, namely FM and DNN, where the FM part is divided into a first-order part and a second-order part.
It should be noted that the present application is not limited to a specific model structure, and other model structures such as eXtreme Gradient Boosting (XGBoost) may be used in addition to the DeepFM model. The input features of the model and the structure of the model are described in detail below:
first, the input features of the model in the embodiment of the present application are briefly introduced:
specifically, the input features of the content recommendation model shown in fig. 5 mainly include: object features, content features, cross features, and context features.
The cross features take the original object features and content features as input, compute Cartesian products according to manually specified feature combinations, and thereby yield high-dimensional sparse features.
As shown in fig. 5, the cross feature may be obtained based on the first-order part of FM, and for the second-order part of FM and the DNN part, the object feature, the content feature, the context feature, and the cross feature may be input, or only the object feature, the content feature, and the context feature may be input, and the following is exemplified by inputting four types of features.
In addition, in order to enable the content recommendation model to identify the articles with high timeliness, the time-to-live characteristics for representing the existing time length of the content are added, that is, the content characteristics in the embodiment of the application further include the time-to-live characteristics for representing the existing time length of the content.
In the embodiments of the present application, these features can be classified into sparse type features and continuous type features.
Sparse features are mainly processed by embedding, for example using the embedding layer shown in fig. 5. Specifically, a high-dimensional categorical variable is converted into a low-dimensional dense representation: a lookup table is randomly initialized, in which each row corresponds to one value of the categorical variable, the number of rows equals the number of distinct values of the variable, and the number of columns equals the dimension of the low-dimensional dense vector. When computing the embedding of a feature, the embeddings of all the categories that occur are looked up and then weighted and summed. Continuous features, in contrast, need to be processed according to their value range and distribution so that the processed values stay within [0, 1] as much as possible; common processing methods include normalization and taking logarithms.
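To make the lookup-table description concrete, the following is a small sketch of the embedding lookup and weighted sum; the table size, vector dimension, and weighting scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lookup table for one categorical variable: rows = number of distinct values,
# columns = dimension of the dense vector; randomly initialized, learned during training.
num_values, dim = 1000, 8
embedding_table = rng.normal(scale=0.01, size=(num_values, dim))

def embed(value_ids, weights=None) -> np.ndarray:
    """Look up the embeddings of the categories that occur and weight-sum them."""
    vectors = embedding_table[np.asarray(value_ids)]
    w = np.ones(len(value_ids)) if weights is None else np.asarray(weights)
    return (w[:, None] * vectors).sum(axis=0)

# A multi-valued field (e.g. several content tags), equally weighted:
print(embed([3, 17, 256]).shape)   # -> (8,)
```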
Taking the existence time feature as an example, the existence time feature in the embodiment of the present application may be used as a sparse feature or a continuous feature.
When used as a sparse feature, thresholds can be determined according to the distribution of the existing durations of contents, the existing duration is then discretized based on these thresholds, and the resulting sparse feature is processed by embedding.
Taking news recommendation as an example, the existing duration is typically measured in hours and follows a long-tail distribution; one feasible set of thresholds is 1 hour, 3 hours, 6 hours, 12 hours, 24 hours, 3 days, and 7 days. Based on these thresholds, the existing duration of a content can be discretized into an id representing the time-of-existence feature: for example, id 1 when the existing duration is less than 1 hour, id 2 when it is between 1 hour and 3 hours, id 3 when it is between 3 hours and 6 hours, and so on.
When used as a continuous feature, the logarithm of the existing duration may be taken so that the distribution of the time-of-existence feature is close to a normal distribution.
The above-mentioned characteristic processing method is only an example, and is not particularly limited herein.
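As a concrete illustration of the two treatments of the time-of-existence feature, the sketch below discretizes the existing duration with the hour thresholds from the news example and applies a logarithm for the continuous variant; the function names are assumptions.

```python
import math

# Hour thresholds taken from the news example: 1h, 3h, 6h, 12h, 24h, 3 days, 7 days.
THRESHOLDS_HOURS = [1, 3, 6, 12, 24, 72, 168]

def existing_duration_to_id(hours: float) -> int:
    """Discretize the existing duration into a sparse bucket id (1-based)."""
    for i, t in enumerate(THRESHOLDS_HOURS):
        if hours < t:
            return i + 1
    return len(THRESHOLDS_HOURS) + 1  # older than the last threshold

def existing_duration_continuous(hours: float) -> float:
    """Continuous variant: take a logarithm so the distribution is closer to normal."""
    return math.log1p(hours)

print(existing_duration_to_id(2.0))        # -> 2 (between 1 hour and 3 hours)
print(existing_duration_continuous(2.0))   # -> ~1.10
```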
The first-order part of the FM is a logistic regression model containing cross features; it provides the memorization capability of the model.
As shown in fig. 5, the input of the FM first-order part includes sparse features: sparse field 1, sparse field 2, …, sparse field m. Specifically, the sparse fields may refer to fields such as user age, user gender, content tags, and the like.
The FM second-order part performs second-order crossing over all features, which may be object features, content features, context features, and cross features. For example, the age and gender features are first converted into dense representations by embedding, and pairwise inner products are then taken over the dense representations.
Based on the first- and second-order parts of the FM, a 1-dimensional output is ultimately obtained, namely the FM score shown in fig. 5, which is also referred to herein as score1, i.e., the first recommendation value.
Compared with the first-order part, the embedding method reduces the parameter dimensionality and thus the risk of overfitting. Meanwhile, the second-order part can learn feature crosses that do not appear in the training samples, giving it a certain generalization ability.
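The FM computation described above can be sketched as follows; it uses the standard FM identity in which the sum of pairwise inner products of feature embeddings equals half of the difference between the square of the sum and the sum of the squares, and all variable names are illustrative.

```python
import numpy as np

def fm_score(x: np.ndarray, w0: float, w: np.ndarray, V: np.ndarray) -> float:
    """x: feature vector of length n (cross features included),
    w0/w: first-order bias and weights, V: n x k embedding table."""
    first_order = w0 + float(w @ x)                      # memorization term (logistic-regression-style)
    xv = x[:, None] * V                                  # embeddings of the active features
    sum_sq = np.square(xv.sum(axis=0))                   # (sum of embeddings)^2
    sq_sum = np.square(xv).sum(axis=0)                   # sum of squared embeddings
    second_order = 0.5 * float((sum_sq - sq_sum).sum())  # all pairwise inner products
    return first_order + second_order                    # 1-dimensional FM score (score1)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 1.0, 1.0])                       # 4 features, 3 active
print(fm_score(x, 0.1, rng.normal(size=4), rng.normal(size=(4, 3))))
```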
The DNN part is a deep neural network whose inputs contain both sparse and continuous features. All input features are concatenated (concat) into one vector, such as the dense features shown in fig. 5, and then passed through several fully connected layers, resulting in a 1-dimensional output, i.e., the DNN score shown in fig. 5, which is also referred to herein as score2. The DNN part performs high-order implicit feature crossing: since the DNN uses embedding, it converts categorical variables into points in a vector space and discovers their potential associations through its nonlinear learning capability.
Finally, the FM and DNN outputs are concatenated, and a fully connected layer performs a weighted summation to obtain the final estimated score

$\hat{s} = W_{FM} \cdot score_1 + W_{DNN} \cdot score_2$

namely the target recommendation value corresponding to the content to be recommended in the embodiment of the application, where the weight corresponding to score1 is $W_{FM}$ and the weight corresponding to score2 is $W_{DNN}$, as shown in fig. 5.
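The final combination of the two branch outputs can be sketched as follows; the weights, the bias, and the sigmoid applied at the end are illustrative assumptions (the text only specifies a weighted sum of score1 and score2).

```python
import math

def target_recommendation_value(score1: float, score2: float,
                                w_fm: float, w_dnn: float,
                                bias: float = 0.0) -> float:
    """Weighted sum of the FM score (score1) and the DNN score (score2),
    i.e. the final fully connected layer over the concatenated branch outputs."""
    s = w_fm * score1 + w_dnn * score2 + bias
    return 1.0 / (1.0 + math.exp(-s))   # assumed sigmoid to map the score into [0, 1]

print(target_recommendation_value(0.8, 1.2, w_fm=0.6, w_dnn=0.4))
```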
The following describes in detail a training process of the content recommendation model in the embodiment of the present application:
optionally, the content recommendation model is obtained by training in the following manner:
firstly, a training sample data set is obtained, then, according to a sample content pair in the training sample data set, the cyclic iterative training is executed on an initial content recommendation model, and when the training is finished, the trained content recommendation model is output.
Taking the server as an execution subject, as shown in fig. 6, the following operations are performed in one loop iteration training process:
S601: the server selects a sample content pair from the training sample data set, inputs the selected sample content pair into the content recommendation model, and obtains, based on the content recommendation model, a first estimated recommendation value corresponding to the first sample content in the sample content pair and a second estimated recommendation value corresponding to the second sample content in the sample content pair;
and the first sample content and the second sample content are related to the same sample object, and the label value of the first sample content is larger than that of the second sample content. The first estimated recommendation value and the second estimated recommendation value have the same meaning as the target recommendation value corresponding to the content to be recommended listed above, and the first estimated recommendation value and the second estimated recommendation value are referred to as the estimated recommendation values mainly for distinguishing a model training stage and an on-line prediction stage. In the model training phase, the estimated recommendation value is obtained by performing weighted summation based on the results of the FM part and the DNN part, and the "first estimated recommendation value" and the "second estimated recommendation value" are for different samples in the sample content pair.
Specifically, the sample content pair in the embodiment of the present application is two training samples for the same object, and the label values corresponding to the sample content in the two training samples are different.
S602: the server constructs a target loss function based on the first pre-estimated recommended value, the second pre-estimated recommended value, the label value of the first sample content and the label value of the second sample content, and adjusts the network parameters of the content recommendation model based on the target loss function.
In the embodiment of the application, the training data of a content recommendation model based on Pairwise loss is organized slightly differently from that of a model based on Pointwise loss. In this method, the samples generated by one refresh of the same user are placed in one batch; each sample is propagated forward once to compute its estimated score (namely the estimated recommendation value), and the samples of the same user are then assembled into sample content pairs (pairs) according to their labels so that the loss can be computed.
It should be noted that although the same sample content may appear in multiple pairs, the calculation amount of model training is not increased because the estimated recommendation values are already calculated through one forward propagation.
Optionally, for the timeliness-aware Pairwise loss defined in this embodiment of the application, the label of an exposed but non-clicked sample content is 0, the label of an ordinary clicked sample content is 1, and the label of a high-timeliness clicked sample content is 2. Two samples i and j of the same user form a pair.
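A minimal sketch of the in-batch pair construction is given below; the samples of one user refresh are assumed to be indexed 0..n−1, and only their labels are needed to form pairs because the estimated scores from the single forward pass are simply reused when the loss is evaluated.

```python
from itertools import combinations
from typing import List, Tuple

def build_pairs(labels: List[int]) -> List[Tuple[int, int]]:
    """labels belong to the samples of one user (one refresh = one batch).
    Returns index pairs (i, j) with label_i > label_j; pairing adds no extra
    model computation because each sample's score was already computed once."""
    pairs = []
    for i, j in combinations(range(len(labels)), 2):
        if labels[i] > labels[j]:
            pairs.append((i, j))
        elif labels[j] > labels[i]:
            pairs.append((j, i))
        # samples with equal labels form no pair
    return pairs

# One user's refresh: high-timeliness click (2), exposure only (0), ordinary click (1).
print(build_pairs([2, 0, 1]))   # -> [(0, 1), (0, 2), (2, 1)]
```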
Optionally, step S602 may be further divided into the following sub-steps:
S6021: the server determines a corresponding sample pair weight based on the difference between the label values of the first sample content and the second sample content;
It should be noted that, in the embodiment of the present application, the sample pair weight is derived from the difference between the label values of the sample contents; it is neither a preset model hyper-parameter nor coupled with the ranking model, so replacing the model structure or changing the distribution of the estimated scores has no great influence on it.
In the embodiment of the present application, the value of the weight only needs to be calculated by the formula given below; it does not need to be determined through experiments and therefore requires no long experiment cycle. Moreover, the weight is not used to weight the estimated score, so the relevance of head contents is not affected and the user's willingness to click is not reduced.
S6022: the server determines a corresponding estimated loss value based on a difference value between the first estimated recommended value and the second estimated recommended value;
S6023: the server determines a target loss function based on a product of the sample pair weight and the estimated loss value, wherein the target loss function is positively correlated with the product.
Suppose that sample i is the first sample content and sample j is the second sample content. The target loss function for the pair is

$L_{ij} = w_{ij} \cdot \log\left(1 + e^{-(s_i - s_j)}\right)$

where y_i and y_j are the labels of the i-th and j-th samples respectively, s_i and s_j are their estimated scores, and w_ij is the pair weight, i.e., the sample pair weight.

In the embodiments of the present application, the absolute value of the difference between y_i and y_j is positively correlated with the sample pair weight, i.e., the larger the difference, the larger the sample pair weight. For example, when y_i = 2 and y_j = 0, w_ij = 3; when y_i = 2 and y_j = 1, w_ij = 2; when y_i = 1 and y_j = 0, w_ij = 1; and so on.
In the embodiment of the application, for a sample pair with y_i > y_j, the loss is relatively small when s_i > s_j and relatively large otherwise. When the model is optimized (i.e., its network parameters are adjusted) with a gradient descent algorithm, as many pairs with y_i > y_j as possible are driven to satisfy s_i > s_j.
Therefore, with the timeliness-aware Pairwise loss in the embodiment of the application, the estimated score of a high-timeliness clicked sample content is higher than that of an ordinary clicked sample content, and both are higher than that of a non-clicked sample content. In this way, the timeliness-aware Pairwise loss enables the model to learn the timeliness of samples.
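The pair weight and loss computation can be sketched as follows; the logistic loss form and the 2^label gain used for the weight are reconstructions consistent with the formula and the example weight values given above, not a verbatim copy of the original equations.

```python
import math

def pair_weight(y_i: int, y_j: int) -> float:
    """Sample pair weight; the assumed form 2**y_i - 2**y_j reproduces the example
    values (2,0)->3, (2,1)->2, (1,0)->1 given in the text."""
    return float(abs(2 ** y_i - 2 ** y_j))

def pairwise_loss(s_i: float, s_j: float, y_i: int, y_j: int) -> float:
    """Timeliness-aware Pairwise loss for one pair with y_i > y_j:
    small when s_i > s_j, large otherwise, scaled by the pair weight."""
    return pair_weight(y_i, y_j) * math.log1p(math.exp(-(s_i - s_j)))

# High-timeliness click (label 2) vs. exposed non-click (label 0):
print(pairwise_loss(0.9, 0.2, 2, 0))   # correctly ordered pair -> smaller loss
print(pairwise_loss(0.2, 0.9, 2, 0))   # wrongly ordered pair  -> larger loss
```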
It should be noted that, in the embodiment of the present application, the label definition in the Pairwise loss is not limited to 0 for exposed but non-clicked sample contents, 1 for ordinary clicked sample contents, and 2 for high-timeliness clicked sample contents. Any assignment satisfying the following relative relationship may be used, and no specific limitation is made herein:
label of a high-timeliness clicked sample content > label of an ordinary clicked sample content > label of an exposed but non-clicked sample content.
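A small sketch of the default 0/1/2 label assignment is given below; the 24-hour window is only an assumed example of the "specified time period", and the field names are illustrative.

```python
def label_sample(clicked: bool, existing_hours: float,
                 high_timeliness_window_hours: float = 24.0) -> int:
    """0: exposed but not clicked, 1: ordinary click,
    2: click on content whose existing duration is within the specified window."""
    if not clicked:
        return 0
    if existing_hours <= high_timeliness_window_hours:
        return 2
    return 1

print(label_sample(True, 3.0))     # -> 2 (high-timeliness click)
print(label_sample(True, 100.0))   # -> 1 (ordinary click)
print(label_sample(False, 3.0))    # -> 0 (exposure only)
```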
Fig. 7 is a schematic diagram of test results in the embodiment of the present application: the left side shows the test result corresponding to the idle-run period, and the right side shows the test result corresponding to the experiment period.
In a "viewpoint" image-text recommendation scenario of a certain chat application, using the content recommendation model provided by the application for content recommendation increases the click share of articles that have existed for less than 24 hours by 6.89% while page views (PV) remain flat.
In summary, the application (1) introduces a high-timeliness recall queue to ensure that the input of the ranking model contains high-timeliness articles; (2) adds the time-of-existence feature representing how long an article has existed, ensuring that the model can identify high-timeliness articles; and (3) adopts a Pairwise loss in which the label of a high-timeliness clicked article is set to 2, so that the model gives higher estimated scores to high-timeliness articles. Based on the content recommendation method described above, the timeliness of the recommendation system can be effectively improved without changing the model structure or the computational complexity.
Fig. 8 is a timing flowchart of a news recommendation method in the embodiment of the present application, which is illustrated by taking a recommendation system deployed in a terminal device as an example. The specific implementation flow of the method is as follows:
step S801: in the recalling stage, the terminal equipment selects candidate news clicked by at least one candidate object from the candidate news set based on the historical behaviors of the candidate objects;
step S802: the terminal equipment sorts the candidate news according to the existing duration corresponding to the selected candidate news;
step S803: the terminal equipment selects candidate news with the sequencing result at the specified sequencing position as news to be recommended;
step S804: the terminal device performs coarse ranking on the recalled news to be recommended and screens out hundreds of news to be recommended;
step S805: in response to a news recommendation request triggered by a target object, the terminal device obtains the object features of the target object, the context features related to the news recommendation request, and the news features corresponding to each news to be recommended screened out by coarse ranking;
step S806: the terminal equipment respectively inputs news characteristics, object characteristics and context characteristics corresponding to each news to be recommended into a trained news recommendation model;
step S807: the terminal equipment performs feature intersection on news features and object features of news to be recommended based on a news recommendation model to obtain intersection features;
step S808: the terminal equipment respectively carries out low-order feature crossing on news features, object features, context features and crossing features of each news to be recommended to obtain a first recommended value corresponding to each news to be recommended, and carries out high-order feature crossing to obtain a second recommended value corresponding to each news to be recommended;
step S809: the terminal equipment performs weighted summation on the first recommendation value corresponding to each news to be recommended and the corresponding second recommendation value to obtain a target recommendation value corresponding to each news to be recommended;
step S810: and the terminal equipment carries out fine ranking on the basis of the target recommendation values corresponding to the news to be recommended respectively.
It should be noted that the embodiment of the present application is applicable to various recommendation systems, and may be applied to product recommendation, video recommendation, and the like in addition to the above listed news recommendation, and is not limited in detail herein.
Based on the same inventive concept, the embodiment of the application also provides a content recommendation device. As shown in fig. 9, which is a schematic structural diagram of a content recommendation apparatus 900 in an embodiment of the present application, the content recommendation apparatus may include:
a recall unit 901, configured to select, in response to a content recommendation request triggered by a target object, at least one content to be recommended from a candidate content set based on the existing duration corresponding to the candidate contents;
a feature obtaining unit 902, configured to obtain an object feature of a target object, a context feature related to a content recommendation request, and a content feature corresponding to each content to be recommended, where the content feature at least includes: a time-of-existence characteristic determined based on at least one of the content release time and the occurrence time of the content-related event and used for representing the time length for which the content has existed;
an analyzing unit 903, configured to perform low-order feature intersection and high-order feature intersection on each content to be recommended respectively based on the object features, the context features, and the content features including the existence time features, and determine a target recommendation value corresponding to each content to be recommended based on a feature intersection result;
and the recommending unit 904 is configured to determine a recommendation order of each content to be recommended for the target object based on the target recommendation value corresponding to each content to be recommended.
Optionally, the apparatus further comprises:
a duration determining unit 905, configured to obtain an existing duration of any one content, where the any one content is a content to be recommended or a candidate content, as follows:
taking the time length between the release time and the request time of any one content as the existing time length of any one content, wherein the request time represents the time for triggering the content recommendation request by the target object; alternatively,
and taking the time length between the occurrence time of any content-related event and the request time as the existing time length of any content, wherein the request time represents the time when the target object triggers the content recommendation request.
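As an illustration of the two alternatives above, a minimal sketch of the existing-duration computation follows; the datetime-based interface is an assumption made for illustration.

```python
from datetime import datetime
from typing import Optional

def existing_duration_hours(request_time: datetime,
                            release_time: Optional[datetime] = None,
                            event_time: Optional[datetime] = None) -> float:
    """Existing duration = request time minus release time, or minus the
    occurrence time of the content-related event when that is used instead."""
    anchor = event_time if event_time is not None else release_time
    if anchor is None:
        raise ValueError("either release_time or event_time must be given")
    return (request_time - anchor).total_seconds() / 3600.0

print(existing_duration_hours(datetime(2021, 10, 13, 12, 0),
                              release_time=datetime(2021, 10, 13, 9, 0)))  # -> 3.0
```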
Optionally, the recall unit 901 is specifically configured to:
selecting candidate contents clicked by at least one candidate object from the candidate content set based on the historical behaviors of the candidate objects;
sequencing each candidate content according to the existing duration corresponding to each selected candidate content;
and selecting candidate contents of the sequencing result at the specified sequencing position as the contents to be recommended.
Optionally, the analysis unit 903 is specifically configured to:
respectively inputting content characteristics, object characteristics and context characteristics corresponding to each content to be recommended into a trained content recommendation model;
recommending and sequencing each content to be recommended based on a content recommendation model, and acquiring a target recommendation value corresponding to each content to be recommended;
the content recommendation model is obtained by training based on a training sample data set containing related sample contents of different sample objects, and the sample contents in the training sample data set at least comprise: high-timeliness sample contents; and each sample content is marked with a label value used for representing the size of the existing time length corresponding to the sample content and whether the sample content is clicked or not.
Optionally, the analysis unit 903 is specifically configured to:
based on a content recommendation model, performing feature intersection on the content features and the object features of each content to be recommended to obtain intersection features;
respectively performing low-order feature crossing on the content features, the object features, the context features and the crossing features of each content to be recommended to obtain a first recommended value corresponding to each content to be recommended, and performing high-order feature crossing to obtain a second recommended value corresponding to each content to be recommended;
carrying out weighted summation on the first recommendation value corresponding to each content to be recommended and the corresponding second recommendation value to obtain a target recommendation value corresponding to each content to be recommended.
optionally, the apparatus further comprises:
a labeling unit 906 for determining a label value of each sample content by:
classifying the sample contents according to the existing duration corresponding to each sample content and whether each sample content is clicked by the sample object;
and determining the label value corresponding to each sample content according to the category of each sample content obtained by division.
Optionally, the labeling unit 906 is specifically configured to:
taking sample contents whose corresponding existing duration is within a specified time period and which are clicked as the first type of sample contents;
taking sample contents which are clicked but whose corresponding existing duration is not within the specified time period as the second type of sample contents;
taking sample contents which are not clicked as the third type of sample contents;
the label value of the first type sample content is larger than that of the second type sample content, and the label value of the second type sample content is larger than that of the third type sample content.
Optionally, the apparatus further comprises:
a model training unit 907, configured to obtain a content recommendation model through training in the following manner:
acquiring a training sample data set, executing loop iterative training on the initial content recommendation model according to a sample content pair in the training sample data set, and outputting the trained content recommendation model when the training is finished; wherein the following operations are executed in a loop iteration training process:
selecting a sample content pair from a training sample data set, inputting the selected sample content pair into a trained content recommendation model, and acquiring a first pre-estimated recommendation value corresponding to a first sample content in the sample content pair and a second pre-estimated recommendation value corresponding to a second sample content in the sample content pair, which are acquired based on the content recommendation model, wherein the first sample content and the second sample content are related to the same sample object, and the label value of the first sample content is greater than that of the second sample content;
and constructing a target loss function based on the first estimated recommended value, the second estimated recommended value, the label value of the first sample content and the label value of the second sample content, and adjusting the network parameters of the content recommendation model based on the target loss function.
Optionally, the model training unit 907 is specifically configured to:
determining a corresponding sample pair weight based on a difference value of the label values of the first sample content and the second sample content, wherein the absolute value of the difference value is in positive correlation with the sample pair weight;
determining a corresponding estimated loss value based on a difference value between the first estimated recommended value and the second estimated recommended value;
and determining a target loss function based on the product of the sample pair weight and the estimated loss value, wherein the target loss function is positively correlated with the product.
The embodiment of the application provides a content recommendation method and device, an electronic device, and a storage medium. Candidate contents are recalled in combination with the time-of-existence feature representing how long a content has existed, and the target recommendation value of each content to be recommended is estimated, which ensures that high-timeliness contents identified through their existing durations obtain high target recommendation values; on this basis, when the recommendation order is determined from the target recommendation values of the contents to be recommended, the timeliness of high-timeliness contents is effectively guaranteed. Compared with the related art, the estimation method in the application neither weights the estimated score of a high-timeliness article produced by the model, thereby avoiding coupling a weighting value to the content recommendation model, nor weights high-timeliness samples, so the timeliness of the recommendation system can be effectively improved without changing the model structure or the computational complexity.
For convenience of description, the above parts are separately described as modules (or units) according to functional division. Of course, the functionality of the various modules (or units) may be implemented in the same one or more pieces of software or hardware when implementing the present application.
Having described the content recommendation method and apparatus according to the exemplary embodiments of the present application, next, a content recommendation apparatus according to another exemplary embodiment of the present application is described.
As will be appreciated by one skilled in the art, aspects of the present application may be embodied as a system, method or program product. Accordingly, various aspects of the present application may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.) or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
In some possible embodiments, a content recommendation device according to the present application may include at least a processor and a memory. Wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps of the content recommendation method according to various exemplary embodiments of the present application described in the specification. For example, the processor may perform the steps as shown in fig. 2.
Based on the same inventive concept as the method embodiments, the embodiment of the application further provides an electronic device. In one embodiment, the electronic device may be a server, such as the server 120 shown in fig. 1. In this embodiment, the electronic device may be configured as shown in fig. 10, and include a memory 1001, a communication module 1003, and one or more processors 1002.
A memory 1001 for storing computer programs executed by the processor 1002. The memory 1001 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, a program required for running an instant messaging function, and the like; the storage data area can store various instant messaging information, operation instruction sets and the like.
Memory 1001 may be a volatile memory (volatile memory), such as a random-access memory (RAM); the memory 1001 may also be a non-volatile memory (non-volatile memory), such as a read-only memory (rom), a flash memory (flash memory), a hard disk (HDD) or a solid-state drive (SSD); or the memory 1001 is any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1001 may be a combination of the above memories.
The processor 1002 may include one or more Central Processing Units (CPUs), a digital processing unit, and the like. The processor 1002 is configured to implement the content recommendation method when the computer program stored in the memory 1001 is called.
The communication module 1003 is used for communicating with the terminal device and other servers.
In the embodiment of the present application, the specific connection medium among the memory 1001, the communication module 1003, and the processor 1002 is not limited. In the embodiment of the present application, the memory 1001 and the processor 1002 are connected through the bus 1004 in fig. 10, the bus 1004 is depicted by a thick line in fig. 10, and the connection manner between other components is merely illustrative and is not limited. The bus 1004 may be divided into an address bus, a data bus, a control bus, and the like. For ease of description, only one thick line is depicted in fig. 10, but only one bus or one type of bus is not depicted.
The memory 1001 stores therein a computer storage medium, and the computer storage medium stores therein computer-executable instructions for implementing the content recommendation method according to the embodiment of the present application. The processor 1002 is configured to execute the content recommendation method described above, as shown in fig. 2.
In another embodiment, the electronic device may also be other electronic devices, such as the terminal device 110 shown in fig. 1. In this embodiment, the structure of the electronic device may be as shown in fig. 11, including: communications component 1110, memory 1120, display unit 1130, camera 1140, sensor 1150, audio circuit 1160, bluetooth module 1170, processor 1180, and the like.
The communication component 1110 is configured to communicate with a server. In some embodiments, a Wireless Fidelity (WiFi) module may be included; WiFi is a short-range wireless transmission technology through which the electronic device can help the user send and receive information.
The memory 1120 may be used to store software programs and data. The processor 1180 performs various functions of the terminal device 110 and data processing by executing software programs or data stored in the memory 1120. The memory 1120 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device. The memory 1120 stores an operating system that enables the terminal device 110 to operate. The memory 1120 may store an operating system and various application programs, and may also store codes for executing the content recommendation method according to the embodiment of the present application.
The display unit 1130 may also be used to display information input by the user or information provided to the user and a Graphical User Interface (GUI) of various menus of the terminal apparatus 110. Specifically, the display unit 1130 may include a display screen 1132 disposed on the front surface of the terminal device 110. The display screen 1132 may be configured in the form of a liquid crystal display, a light emitting diode, or the like. The display unit 1130 may be used to display a content recommendation interface and the like in the embodiment of the present application.
The display unit 1130 may also be used to receive input numeric or character information and generate signal input related to user settings and function control of the terminal apparatus 110, and specifically, the display unit 1130 may include a touch screen 1131 disposed on the front surface of the terminal apparatus 110 and may collect touch operations of a user thereon or nearby, such as clicking a button, dragging a scroll box, and the like.
The touch screen 1131 may be covered on the display screen 1132, or the touch screen 1131 and the display screen 1132 may be integrated to implement the input and output functions of the terminal device 110, and after the integration, the touch screen may be referred to as a touch display screen for short. The display unit 1130 in the present application may display the application programs and the corresponding operation steps.
Camera 1140 may be used to capture still images and a user may post comments on the images captured by camera 1140 through an application. The number of the cameras 1140 may be one or more. The object generates an optical image through the lens and projects the optical image to the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing elements convert the light signals into electrical signals, which are then passed to the processor 1180 for conversion into digital image signals.
The terminal device may further comprise at least one sensor 1150, such as an acceleration sensor 1151, a distance sensor 1152, a fingerprint sensor 1153, a temperature sensor 1154. The terminal device may also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, infrared sensor, light sensor, motion sensor, and the like.
Audio circuitry 1160, speakers 1161, and microphone 1162 may provide an audio interface between a user and terminal device 110. The audio circuit 1160 may transmit the electrical signal converted from the received audio data to the speaker 1161, and convert the electrical signal into a sound signal for output by the speaker 1161. Terminal device 110 may also be configured with a volume button for adjusting the volume of the sound signal. On the other hand, the microphone 1162 converts the collected sound signals into electrical signals, which are received by the audio circuit 1160 and converted into audio data, which is then output to the communication assembly 1110 for transmission to, for example, another terminal device 110, or to the memory 1120 for further processing.
The bluetooth module 1170 is used for performing information interaction with other bluetooth devices having bluetooth modules through a bluetooth protocol. For example, the terminal device may establish a bluetooth connection with a wearable electronic device (e.g., a smart watch) that is also equipped with a bluetooth module via the bluetooth module 1170, so as to perform data interaction.
The processor 1180 is a control center of the terminal device, connects various parts of the entire terminal device using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs stored in the memory 1120 and calling data stored in the memory 1120. In some embodiments, processor 1180 may include one or more processing units; the processor 1180 may also integrate an application processor, which primarily handles operating systems, user interfaces, application programs, and the like, and a baseband processor, which primarily handles wireless communications. It will be appreciated that the baseband processor described above may not be integrated into the processor 1180. In the present application, the processor 1180 may run an operating system, an application program, a user interface display, a touch response, and a content recommendation method according to the embodiment of the present application. Additionally, the processor 1180 is coupled to the display unit 1130.
In some possible embodiments, the various aspects of the content recommendation method provided herein may also be implemented in the form of a program product comprising program code for causing a computer device to perform the steps of the content recommendation method according to various exemplary embodiments of the present application described above in this specification when the program product is run on the computer device, for example, the computer device may perform the steps as shown in fig. 2.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present application may employ a portable compact disc read only memory (CD-ROM) and include program code, and may be run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with a command execution system, apparatus, or device.
A readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with a command execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, C++, or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user computing device, partly on the user equipment, as a stand-alone software package, partly on the user computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Various modifications and alterations of this application may be made by those skilled in the art without departing from the spirit and scope of this application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (13)

1. A method for recommending content, the method comprising:
responding to a content recommendation request triggered by a target object, and selecting at least one content to be recommended from a candidate content set based on the existing duration corresponding to the candidate content;
obtaining the object characteristics of the target object, the context characteristics related to the content recommendation request and the content characteristics corresponding to each content to be recommended, wherein the content characteristics at least comprise: a time-of-existence characteristic determined based on at least one of the content release time and the occurrence time of the content-related event and used for representing the time length for which the content has existed;
respectively inputting the content characteristics, the object characteristics and the context characteristics corresponding to the contents to be recommended into a trained content recommendation model; respectively performing low-order characteristic crossing and high-order characteristic crossing on each content to be recommended based on the content recommendation model, and determining a target recommendation value corresponding to each content to be recommended based on a characteristic crossing result;
determining a recommendation sequence of each content to be recommended for the target object based on a target recommendation value corresponding to each content to be recommended;
wherein, the content recommendation model is obtained by training based on a training sample data set containing related sample contents of different sample objects, and the sample contents in the training sample data set at least include: high-timeliness sample contents; and each sample content is marked with a label value used for representing the size of the existing time length corresponding to the sample content and whether the sample content is clicked or not.
2. The method according to claim 1, wherein the existing time length of any one content is obtained by the following method, and the any one content is a content to be recommended or a candidate content:
taking the time length between the release time and the request time of any one content as the existing time length of any one content, wherein the request time represents the time when the target object triggers the content recommendation request; alternatively,
and taking the time length between the occurrence time of any content-related event and the request time as the existing time length of any content, wherein the request time represents the time when the target object triggers the content recommendation request.
3. The method of claim 1, wherein selecting at least one content to be recommended from the candidate content set based on the existing duration corresponding to the candidate content comprises:
selecting candidate contents clicked by at least one candidate object from the candidate content set based on the historical behaviors of the candidate objects;
sequencing each selected candidate content according to the existing duration corresponding to each selected candidate content;
and selecting candidate contents of the sequencing result at the specified sequencing position as the contents to be recommended.
4. The method of claim 1, wherein the performing low-order feature intersection and high-order feature intersection on each content to be recommended respectively based on the content recommendation model, and determining a target recommendation value corresponding to each content to be recommended based on a feature intersection result comprises:
based on the content recommendation model, performing feature intersection on the content features of the contents to be recommended and the object features to obtain intersection features;
respectively performing low-order feature crossing on the content features, the object features, the context features and the crossing features of each content to be recommended to obtain a first recommended value corresponding to each content to be recommended, and performing high-order feature crossing to obtain a second recommended value corresponding to each content to be recommended;
and performing weighted summation on the first recommendation value corresponding to each content to be recommended and the corresponding second recommendation value to obtain a target recommendation value corresponding to each content to be recommended.
5. The method of claim 1, wherein the tag value for the respective sample content is determined by:
classifying each sample content according to the existing duration corresponding to each sample content and whether each sample content is clicked by a sample object;
and determining the label value corresponding to each sample content according to the classified category of each sample content.
6. The method of claim 5, wherein classifying each sample content according to the existing duration corresponding to each sample content and whether each sample content is clicked on by a sample object comprises:
taking sample contents whose corresponding existing duration is within a specified time period and which are clicked as the first type of sample contents;
taking sample contents which are clicked but whose corresponding existing duration is not within the specified time period as the second type of sample contents;
taking sample contents which are not clicked as the third type of sample contents;
wherein the label value of the first type sample content is greater than the label value of the second type sample content, and the label value of the second type sample content is greater than the label value of the third type sample content.
7. The method of claim 4, wherein the content recommendation model is trained by:
acquiring the training sample data set, executing cyclic iterative training on the initial content recommendation model according to the sample content pair in the training sample data set, and outputting the trained content recommendation model when the training is finished; wherein the following operations are executed in a loop iteration training process:
selecting a sample content pair from the training sample data set, inputting the selected sample content pair into the content recommendation model, and acquiring a first pre-estimated recommendation value corresponding to a first sample content in the sample content pair and a second pre-estimated recommendation value corresponding to a second sample content in the sample content pair, which are acquired based on the content recommendation model, wherein the first sample content and the second sample content are related to the same sample object, and the label value of the first sample content is greater than that of the second sample content;
and constructing a target loss function based on the first pre-estimated recommended value, the second pre-estimated recommended value, the label value of the first sample content and the label value of the second sample content, and adjusting the network parameters of the content recommendation model based on the target loss function.
8. The method of claim 7, wherein constructing an objective loss function based on the first pre-estimated recommendation, the second pre-estimated recommendation, the tag value of the first sample content and the tag value of the second sample content comprises:
determining a corresponding sample pair weight based on a difference value of the tag values of the first sample content and the second sample content, wherein an absolute value of the difference value is positively correlated with the sample pair weight;
determining a corresponding estimated loss value based on the difference value between the first estimated recommended value and the second estimated recommended value;
determining the target loss function based on a product of the sample pair weight and the estimated loss value, wherein the target loss function is positively correlated with the product.
9. A content recommendation apparatus characterized by comprising:
the recall unit is used for responding to a content recommendation request triggered by the target object and selecting at least one content to be recommended from the candidate content set based on the existing duration corresponding to the candidate content;
a feature obtaining unit, configured to obtain an object feature of the target object, a context feature related to the content recommendation request, and a content feature corresponding to each content to be recommended, where the content feature at least includes: a time-of-existence characteristic determined based on at least one of the content release time and the occurrence time of the content-related event and used for representing the time length for which the content has existed;
the analysis unit is used for inputting the content characteristics, the object characteristics and the context characteristics corresponding to the contents to be recommended into a trained content recommendation model respectively; respectively performing low-order characteristic crossing and high-order characteristic crossing on each content to be recommended based on the content recommendation model, and determining a target recommendation value corresponding to each content to be recommended based on a characteristic crossing result;
the recommending unit is used for determining the recommending sequence of each content to be recommended aiming at the target object based on the target recommending value corresponding to each content to be recommended;
wherein, the content recommendation model is obtained by training based on a training sample data set containing related sample contents of different sample objects, and the sample contents in the training sample data set at least include: high-timeliness sample contents; and each sample content is marked with a label value used for representing the size of the existing time length corresponding to the sample content and whether the sample content is clicked or not.
10. The apparatus of claim 9, wherein the apparatus further comprises:
the device comprises a duration determining unit, a time duration determining unit and a processing unit, wherein the duration determining unit is used for obtaining the existing duration of any one content in the following modes, and the any one content is a content to be recommended or a candidate content:
taking the time length between the release time and the request time of any one content as the existing time length of any one content, wherein the request time represents the time when the target object triggers the content recommendation request; alternatively,
and taking the time length between the occurrence time of any content-related event and the request time as the existing time length of any content, wherein the request time represents the time when the target object triggers the content recommendation request.
11. The apparatus of claim 9, wherein the recall unit is specifically configured to:
selecting candidate contents clicked by at least one candidate object from the candidate content set based on the historical behaviors of the candidate objects;
sequencing each selected candidate content according to the existing duration corresponding to each selected candidate content;
and selecting candidate contents of the sequencing result at the specified sequencing position as the contents to be recommended.
12. An electronic device, comprising a processor and a memory, wherein the memory stores program code which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 8.
13. A computer-readable storage medium, characterized in that it comprises program code for causing an electronic device to carry out the steps of the method according to any one of claims 1 to 8, when said storage medium is run on said electronic device.
CN202111191793.6A 2021-10-13 2021-10-13 Content recommendation method and device, electronic equipment and storage medium Active CN113641916B (en)
