CN117009556A - Content recommendation method and related device based on evaluation model

Info

Publication number: CN117009556A
Application number: CN202211523991.2A
Authority: CN (China)
Prior art keywords: full, scene, media content, candidate media, connection layer
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 陈煜钊, 罗达, 黄春振, 林康熠
Current Assignee: Tencent Technology Shenzhen Co Ltd
Original Assignee: Tencent Technology Shenzhen Co Ltd
Application filed by Tencent Technology Shenzhen Co Ltd
Priority to CN202211523991.2A
Publication of CN117009556A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40 Information retrieval of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43 Querying
    • G06F 16/435 Filtering based on additional data, e.g. user or group profiles
    • G06F 16/438 Presentation of query results
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Abstract

The application relates to the technical field of artificial intelligence, and provides a content recommendation method and a related device based on an evaluation model, which are used to improve the prediction accuracy of the evaluation value and thereby improve recommendation accuracy. The evaluation model comprises a meta-network module and a prediction module, and the method comprises the following steps: after the attribute features of a candidate media content are obtained, the scene-specific features among the attribute features are respectively input into each first full-connection layer of the meta-network module for parameter prediction to obtain corresponding model parameter sets; then, based on the obtained model parameter sets, each second full-connection layer is configured with parameters, and the attribute features are respectively input into each second full-connection layer for prediction to obtain a corresponding evaluation value; finally, content recommendation is performed based on the evaluation value. In this way, key information related to the data source can be effectively extracted, so that the model parameters can be dynamically adjusted according to the data source, and the prediction accuracy of the evaluation value is improved.

Description

Content recommendation method and related device based on evaluation model
Technical Field
The application relates to the technical field of artificial intelligence, and provides a content recommendation method and a related device based on an evaluation model.
Background
With the development of science and technology, recommendation systems have been widely applied to fields such as e-commerce, search, and video. In a recommendation system, an evaluation model is used to predict the click rate of a target object for each candidate media content, and the target media content to be recommended to the target object is then determined from the candidate media contents based on the predicted click rates.
In the related art, model training is generally performed on an evaluation model to be trained to obtain a trained evaluation model, and then the trained evaluation model is adopted to predict the click rate of a target object for each candidate media content.
However, the model parameters of the trained evaluation model are usually fixed once training ends. Candidate media contents from different data sources differ in content form, and fixed model parameters can hardly measure the differences between recommended contents from different data sources accurately. As a result, the evaluation model has difficulty predicting the click rate accurately, the recall of the recommendation system is not accurate enough, and the accuracy of the recommended media content is low.
Disclosure of Invention
The embodiment of the application provides a content recommendation method and a related device based on an evaluation model, which are used to improve the prediction accuracy of the evaluation value and thereby improve recommendation accuracy.
In a first aspect, an embodiment of the present application provides a content recommendation method based on an evaluation model, where the evaluation model includes a meta-network module and a prediction module, the meta-network module includes a plurality of first full-connection layers, and the prediction module includes second full-connection layers whose numbers of nodes are respectively the same as those of the corresponding first full-connection layers, and the method includes:
acquiring attribute characteristics of candidate media contents, wherein the attribute characteristics comprise scene specific characteristics;
inputting the scene specific features to the first full-connection layers respectively for parameter prediction to obtain model parameter sets output by the first full-connection layers respectively; each first full-connection layer is obtained through model training;
based on the obtained parameter sets of the models, respectively carrying out parameter configuration on the second full-connection layers, respectively inputting the attribute features into the second full-connection layers for prediction, and obtaining the evaluation value of the candidate media content;
and when the evaluation value meets a preset recommendation condition, taking the candidate media content as a target recommendation content.
In a second aspect, an embodiment of the present application provides a content recommendation device based on an evaluation model, where the evaluation model includes a meta-network module and a prediction module, the meta-network module includes a plurality of first full-connection layers, and the prediction module includes second full-connection layers whose numbers of nodes are respectively the same as those of the corresponding first full-connection layers, and the device includes:
The acquisition unit is used for acquiring attribute characteristics of the candidate media content, wherein the attribute characteristics comprise scene specific characteristics;
the meta-network unit is used for inputting the scene specific characteristics to each first full-connection layer respectively for parameter prediction to obtain model parameter sets output by each first full-connection layer respectively; each first full-connection layer is obtained through model training;
the estimating unit is used for respectively carrying out parameter configuration on each second full-connection layer based on each obtained model parameter set, and respectively inputting the attribute characteristics into each second full-connection layer for prediction to obtain an evaluation value of the candidate media content;
and the recommending unit is used for taking the candidate media content as a target recommended content when the evaluation value meets a preset recommending condition.
As a possible implementation manner, when the parameter configuration is performed on each second full-connection layer based on each obtained model parameter set, the estimating unit is specifically configured to:
based on the set connection sequence between the second full connection layers, the following operations are performed for each of the second full connection layers in turn:
Determining, for one second full-connection layer, one first full-connection layer corresponding to the one second full-connection layer from the first full-connection layers based on a correspondence between the first full-connection layers and the second full-connection layers;
and carrying out parameter configuration on the second full-connection layer based on the model parameter set output by the first full-connection layer.
As a possible implementation manner, the second full-connection layers are connected according to a set connection order; when the attribute features are respectively input into the second full-connection layers for prediction to obtain the evaluation value of the candidate media content, the estimating unit is specifically configured to:
based on the set connection order, for each of the second full connection layers, the following operations are sequentially performed:
for one second full-connection layer, carrying out linear transformation on the mapping feature output by the previous second full-connection layer to obtain a linear mapping feature; wherein the input of the first one of the second full-connection layers is the attribute feature;
and carrying out nonlinear mapping on the linear mapping feature to obtain the mapping feature output by the one second full-connection layer, wherein the evaluation value of the candidate media content is obtained based on the mapping feature output by the last second full-connection layer.
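Written compactly, one plausible reading of this per-layer computation is the following, where x is the attribute feature fed to the first of the second full-connection layers, W_l is the weight matrix configured for the l-th second full-connection layer, and the nonlinearity σ is not fixed by the application:

$$h_0 = x,\qquad h_l = \sigma\left(h_{l-1} W_l\right),\quad l = 1,\dots,L,\qquad \hat{y} = h_L,$$

with ŷ the evaluation value of the candidate media content.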
As a possible implementation manner, when the attribute features of the candidate media content are acquired, the acquiring unit is specifically configured to:
for a target object and the candidate media content, obtaining object characteristics of the target object based on set object attributes, and obtaining content characteristics of the candidate media content based on set content attributes;
acquiring scene specific characteristics of the candidate media content based on historical recommendation information between the candidate media content and the target object;
and obtaining attribute characteristics of the candidate media content based on the scene specific characteristics, the object characteristics and the content characteristics.
As a possible implementation manner, when the obtaining the scene specific feature of the candidate media content based on the historical recommendation information between the candidate media content and the target object, the obtaining unit 1301 is specifically configured to:
based on target scene attributes set for target data sources to which the candidate media contents belong, extracting information from historical recommendation information between the candidate media contents and the target objects to obtain target scene characteristics of the candidate media contents;
Based on other scene attributes set for other data sources, adopting a set attribute value as other scene characteristics of the candidate media content;
and obtaining the scene specific feature based on the target scene feature and the other scene features.
As a possible implementation manner, the information extraction is performed on the history recommendation information between the candidate media content and the target object based on the target scene attribute set for the target data source to which the candidate media content belongs, and when obtaining the target scene feature of the candidate media content, the obtaining unit is specifically configured to:
if the candidate media content is graphic-text information and the target data source to which the graphic-text information belongs is a first data source, extracting information from historical recommendation information between the graphic-text information and the target object based on target scene attributes set for the first data source to obtain the target scene characteristics;
and if the candidate media content is a video and the target data source to which the video belongs is a second data source, extracting information from historical recommendation information between the video and the target object based on target scene attributes set for the second data source to obtain the target scene characteristics.
As one possible implementation, the evaluation value is used to characterize a probability that the target object clicks on the candidate media content;
when the evaluation value meets the preset recommendation condition and the candidate media content is to be taken as the target recommended content, the recommendation unit is specifically configured to:
acquire evaluation values of other media contents, and sort the other media contents and the candidate media content based on the evaluation values of the other media contents and the candidate media content;
and when the candidate media content is determined to be within the set order range based on the sorting result, the candidate media content is taken as target recommended content.
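A minimal sketch of this sorting step, assuming the evaluation values are already available and reading the "set order range" as the top-k positions (function and content names are illustrative):

```python
def select_recommendations(scored_contents, top_k):
    """scored_contents: list of (content_id, evaluation_value) pairs covering the
    candidate media content and the other media contents; contents whose rank falls
    within the set order range (read here as the top-k positions) become targets."""
    ranked = sorted(scored_contents, key=lambda item: item[1], reverse=True)
    return [content_id for content_id, _ in ranked[:top_k]]

# The candidate is recommended only if its evaluation value ranks it in the top 2.
targets = select_recommendations(
    [("candidate_video", 0.83), ("other_article", 0.41), ("other_video", 0.77)], top_k=2)
```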
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores a computer program, and when the computer program is executed by the processor, causes the processor to execute the steps of the content recommendation method based on an evaluation model.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium comprising a computer program for causing an electronic device to perform the steps of the above-described content recommendation method based on an evaluation model when the computer program is run on the electronic device.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a computer program stored in a computer readable storage medium, from which a processor of an electronic device reads and executes the computer program, causing the electronic device to perform the steps of the above-described content recommendation method based on an evaluation model.
In the embodiment of the application, a content recommendation method based on an evaluation model is provided, where the evaluation model consists of a meta-network module and a prediction module. Specifically, the attribute features of a candidate media content are obtained; then the scene-specific features among the attribute features are respectively input into each first full-connection layer of the meta-network module for parameter prediction, and the model parameter sets respectively output by the first full-connection layers are obtained; next, based on the obtained model parameter sets, each second full-connection layer is respectively configured with parameters, and the attribute features are respectively input into each second full-connection layer for prediction to obtain the corresponding evaluation value; finally, when the evaluation value meets the preset recommendation condition, the candidate media content is taken as target recommended content.
In this way, the model parameters of the prediction module are obtained from the scene-specific features of the candidate media content through the meta-network module, so the key information related to the data source can be effectively extracted from the complex features of the candidate media content, and the model parameters of the prediction module can be dynamically adjusted according to the data source, thereby improving the evaluation accuracy of the evaluation model and further improving the recommendation accuracy. In addition, because the model parameters are obtained from the scene-specific features, even if the proportions of content from the multiple information sources differ greatly, the model parameters can still reflect the differences between the information sources well, which further improves the evaluation accuracy of the evaluation model.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of an evaluation model according to an embodiment of the present application;
fig. 3A is a schematic structural diagram of a meta-network module according to an embodiment of the present application;
FIG. 3B is a schematic diagram of a prediction module according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an attribute feature provided in an embodiment of the present application;
FIG. 5 is a schematic flow chart of an evaluation model training method according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a parameter prediction process of a meta-network module according to an embodiment of the present application;
FIG. 7 is a schematic diagram of a feature matrix obtaining process according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a parameter configuration according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a parameter configuration for a first full-connection layer according to an embodiment of the present application;
FIG. 10 is a flowchart of a content recommendation method based on an evaluation model according to an embodiment of the present application;
fig. 11 is a logic schematic diagram of a recommendation process in an application scenario provided in an embodiment of the present application;
FIG. 12 is a schematic logic flow diagram of a recommendation process in another application scenario provided in an embodiment of the present application;
FIG. 13 is a schematic view of an apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the technical solutions of the present application, but not all embodiments. All other embodiments, based on the embodiments described in the present document, which can be obtained by a person skilled in the art without any creative effort, are within the scope of protection of the technical solutions of the present application.
Embodiments of the present application relate to artificial intelligence and machine learning techniques, designed primarily based on machine learning in artificial intelligence.
Artificial intelligence (Artificial Intelligence, AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain optimal results. In other words, artificial intelligence is a comprehensive technology of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that can react in a way similar to human intelligence. Artificial intelligence is thus the study of the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields, including both hardware-level and software-level technologies. Basic artificial intelligence technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes directions such as computer vision, speech processing, natural language processing, machine learning/deep learning, autonomous driving, and intelligent transportation.
Machine learning is a multi-domain interdisciplinary, involving multiple disciplines such as probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and the like. It is specially studied how a computer simulates or implements learning behavior of a human to acquire new knowledge or skills, and reorganizes existing knowledge structures to continuously improve own performance. Machine learning is the core of artificial intelligence, a fundamental approach to letting computers have intelligence, which is applied throughout various areas of artificial intelligence. Machine learning and deep learning typically include techniques such as artificial neural networks, confidence networks, reinforcement learning, transfer learning, induction learning, teaching learning, and the like.
An artificial neural network (Artificial Neural Network, ANN) abstracts the human brain's neural network from the perspective of information processing, builds a simple model, and forms different networks according to different connection modes. A neural network is a computational model formed by interconnecting a large number of nodes (or neurons). Each node represents a specific output function, called an activation function; each connection between two nodes represents a weighting value for the signal passing through the connection, called a weight, which is equivalent to the memory of the artificial neural network. The output of the network differs according to the connection mode, the weight values and the activation functions, and the network itself is usually an approximation of a certain algorithm or function in nature, or may be an expression of a logic strategy.
With the research and progress of artificial intelligence technology, artificial intelligence has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, robots, smart medical care, smart customer service, the Internet of Vehicles, and intelligent transportation. It is believed that with the development of technology, artificial intelligence technology will be applied in more fields and play an increasingly important role.
The scheme provided by the embodiment of the application relates to an artificial intelligence machine learning technology. In the embodiment of the application, the sample media content is utilized to obtain the evaluation model by adopting a machine learning technology, and then the candidate media content is evaluated by utilizing the evaluation model obtained by learning in the actual content recommendation process, so as to obtain the corresponding target evaluation value.
Specifically, embodiments of the present application involve a training part and an application part. In the training part, the evaluation model is trained through machine learning technology: the model is trained based on the training samples and the training method provided in the embodiments of the present application, and the model parameters are continuously adjusted by an optimization algorithm until the model converges. In the application part, the evaluation model obtained in the training part is used to predict evaluation values, and content recommendation is then performed. In addition, it should be noted that in the embodiments of the present application the training may be performed online or offline, which is not specifically limited here; offline training is taken as an example for illustration.
In the related art, for a recommendation system, model training is generally performed on an evaluation model to be trained to obtain a trained evaluation model, then the trained evaluation model is adopted to predict the click rate of a target object for each candidate content, and then the target content recommended to the target object is determined from each candidate content based on the predicted click rate.
However, the model parameters of the trained evaluation model are usually fixed once training ends, and the content forms of the candidate media contents differ across data sources, so fixed model parameters can hardly measure the differences between recommended contents from different data sources accurately. As a result, the evaluation model has difficulty predicting the click rate accurately, the recall of the recommendation system is inaccurate, and the accuracy of the recommended media content is low.
In view of this, in the embodiment of the present application, a content recommendation method based on an evaluation model is provided, where the evaluation model is formed by a meta-network module and a prediction module, specifically, attribute features of candidate media content are obtained, then scene specific features in the attribute features are respectively input to each first full-connection layer of the meta-network module to perform parameter prediction, and model parameter sets output by each first full-connection layer are obtained; and then, based on the obtained parameter sets of the models, respectively carrying out parameter configuration on each second full-connection layer, respectively inputting attribute characteristics into each second full-connection layer for prediction to obtain corresponding evaluation values, and finally, taking the candidate media content as target recommended content when the evaluation values meet preset recommended conditions.
In this way, the model parameters of the prediction module are obtained from the scene-specific features of the candidate media content through the meta-network module, so the key information related to the data source can be effectively extracted from the complex features of the candidate media content, and the model parameters of the prediction module can be dynamically adjusted according to the data source, thereby improving the evaluation accuracy of the evaluation model and further improving the recommendation accuracy. In addition, because the model parameters are obtained from the scene-specific features, even if the proportions of content from the multiple information sources differ greatly, the model parameters can still reflect the differences between the information sources well, which further improves the evaluation accuracy of the evaluation model.
The following briefly describes application scenarios to which the technical solutions of the embodiments of the present application are applicable. It should be noted that the application scenarios described below are only used to illustrate the embodiments of the present application and are not limiting. In specific implementation, the technical solutions provided by the embodiments of the present application can be flexibly applied according to actual needs.
Referring to fig. 1, which is a schematic diagram of an application scenario provided in an embodiment of the present application, the application scenario may include a terminal device 101 and a server 102.
In one embodiment, the terminal device 101 may be, for example, a device owned by the target object, such as a mobile phone, a tablet computer (PAD), a notebook computer, a desktop computer, a smart television, a smart car device, a smart wearable device, and the like. The terminal device 101 may be provided with an application having a content recommendation function, for example, a video application, a shopping application, an instant messaging application, etc., where the application related to the embodiment of the present application may be a software client, or may be a client such as a web page or an applet, and the server is a background server corresponding to the software or the web page or the applet, without limiting the specific type of the client.
The server 102 may be a background server corresponding to an application with a content recommendation function installed on the terminal device 101, and may provide background service functions of a recommendation system, for example, implementing the steps of the evaluation model training method and the content recommendation method provided in the embodiments of the present application. The server 102 may be, for example, a stand-alone physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (Content Delivery Network, CDN), big data, and artificial intelligence platforms, but is not limited thereto.
In one embodiment, the training process of the evaluation model may be performed by the server 102 to quickly implement training of the evaluation model using the computing resources of the server 102, or may be performed by the terminal device 101. The use process of the evaluation model may likewise be performed by the server 102, or be performed by the server 102 with the terminal device 101 participating. For example, after the terminal device 101 acquires the attribute features of the candidate media content, it transmits them to the server 102; the server 102 determines the target evaluation value of the candidate media content by using the trained evaluation model, and then performs content recommendation according to the target evaluation value. Alternatively, the terminal device 101 may transmit the scene-specific features of the candidate media content to the server 102; the server 102 obtains the attribute features of the candidate media content based on the scene-specific features, determines the target evaluation value of the candidate media content by using the trained evaluation model, and then performs content recommendation according to the target evaluation value.
The terminal device 101 and the server 102 may be directly or indirectly connected through one or more networks. The network may be a wired network or a wireless network, for example, a mobile cellular network or a Wireless Fidelity (WiFi) network, or may be another possible network, which is not limited in the embodiments of the present application.
In the embodiment of the present application, the number of the terminal devices 101 may be one or more, and similarly, the number of the servers 102 may be one or more, that is, the number of the terminal devices 101 or the servers 102 is not limited.
In one possible application scenario, the related data (such as feature vectors) and model parameters involved in the embodiments of the present application may be stored using cloud storage technology. Cloud storage is a new concept extended and developed from the concept of cloud computing. A distributed cloud storage system refers to a storage system that, through functions such as cluster application, grid technology and distributed storage file systems, integrates a large number of storage devices (or storage nodes) of different types in a network via application software or application interfaces so that they work cooperatively and jointly provide data storage and service access functions to the outside.
Of course, the method provided by the embodiment of the present application is not limited to the application scenario shown in fig. 1, but may be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenario shown in fig. 1 will be described together in the following method embodiments, which are not described in detail herein.
The method flow provided in the embodiments of the present application may be performed by the server 102 or the terminal device 101 in fig. 1, or may be performed jointly by the server 102 and the terminal device 101, and is mainly described herein by taking the server 102 as an example.
In the embodiment of the present application, the training process of the evaluation model is a process of multiple loop iterative training using training samples, and may mainly include a model design stage, a data preparation stage and an iterative training stage, which are described below in turn.
(1) Model design stage
Referring to fig. 2, a schematic diagram of the architecture of an evaluation model according to an embodiment of the application is shown. The evaluation model comprises a meta-network module and a prediction module; the meta-network module is used to process the scene-specific features to obtain corresponding model parameters, and the prediction module is used to evaluate the attribute features (the mixed-ranking features) to obtain the corresponding evaluation value.
As shown in fig. 2, after the meta-network module processes the scene-specific features and obtains the corresponding model parameters, it transmits the model parameters to the prediction module; the prediction module uses the model parameters from the meta-network module as its own model parameters, and then evaluates the attribute features to obtain the corresponding evaluation value.
In one embodiment, the meta-network module includes a plurality of first full-connection layers, and the prediction module includes second full-connection layers whose numbers of nodes are respectively the same as those of the corresponding first full-connection layers. In the following, the number of full-connection layers is taken to be L as an example.
Fig. 3A is a schematic structural diagram of a meta-network module according to an embodiment of the application. The meta-network module comprises L first full-connection layers, and the L first full-connection layers are in parallel relation. Each first full-connection layer is used for carrying out parameter prediction based on scene specific characteristics of sample media content or candidate media content to obtain a corresponding model parameter set, wherein the model parameter set can also be called meta-weight, and L meta-weights can be obtained through L first full-connection layers.
Fig. 3B is a schematic structural diagram of the prediction module according to an embodiment of the application. In the prediction module, the attribute features of the sample media content or the candidate media content are taken as input, and the L meta-weights are dynamically crossed with the attribute features in the manner of a fully connected network to obtain the corresponding evaluation value. Specifically, the prediction module comprises L second full-connection layers corresponding to the L first full-connection layers respectively, and the input of each layer is the output of the previous layer. In fig. 3B, the nodes in the full-connection layers are represented by circles. Assuming that the numbers of nodes of the L layers are D1, D2, ..., DL-1 and 1, the numbers of nodes of the first first full-connection layer and the first second full-connection layer are both D1, and the numbers of nodes of the second first full-connection layer and the second second full-connection layer are both D2; the rest are similar and are not described again.
The data processing performed by the above model will be described in detail in the subsequent flow and is therefore not elaborated here.
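For readers who prefer code, the following is a minimal sketch of how the two modules could be wired together, assuming a PyTorch implementation; the pooling of the attribute features, the activation functions, and all class names and dimensions are illustrative assumptions rather than the application's own implementation.

```python
import torch
import torch.nn as nn

class MetaNetwork(nn.Module):
    """L parallel first full-connection layers: each maps the pooled scene-specific
    feature of a sample to the flattened weight matrix of one second full-connection layer."""
    def __init__(self, dims):                      # dims = [D0, D1, ..., DL-1, 1]
        super().__init__()
        self.dims = dims
        self.layers = nn.ModuleList(
            [nn.Linear(dims[0], dims[i] * dims[i + 1]) for i in range(len(dims) - 1)]
        )

    def forward(self, scene_feat):                  # scene_feat: (batch, F, D0)
        pooled = scene_feat.mean(dim=1)             # mean pooling over F -> (batch, D0)
        return [layer(pooled).view(-1, self.dims[i], self.dims[i + 1])
                for i, layer in enumerate(self.layers)]   # one (D_i, D_{i+1}) matrix per layer

class PredictionModule(nn.Module):
    """L second full-connection layers whose weights are the meta-weights supplied per sample."""
    def forward(self, attr_feat, meta_weights):     # attr_feat: (batch, F, D0)
        h = attr_feat.mean(dim=1, keepdim=True)     # pooled attribute feature -> (batch, 1, D0)
        for i, w in enumerate(meta_weights):
            h = torch.bmm(h, w)                     # linear mapping with the configured parameters
            # Nonlinear mapping; the activation is not named in the text, ReLU/sigmoid assumed.
            h = torch.sigmoid(h) if i == len(meta_weights) - 1 else torch.relu(h)
        return h.view(-1)                           # one evaluation value per sample

# Toy usage with illustrative dimensions D0=12, D1=10, D2=8, final layer 1.
meta, pred = MetaNetwork([12, 10, 8, 1]), PredictionModule()
scene = torch.randn(4, 5, 12)                       # (batch, F, D0) scene-specific features
attr = torch.randn(4, 5, 12)                        # (batch, F, D0) attribute features
scores = pred(attr, meta(scene))                    # evaluation values, shape (4,)
```

In this sketch the meta-network produces a per-sample weight matrix for every second full-connection layer, so two candidates from different data sources are scored with different effective parameters even though the trained meta-network itself stays fixed.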
(2) Data preparation phase
In the embodiment of the application, each piece of sample data is usually multi-source heterogeneous data, that is, data with different data sources and different data types; such sample data may also be called mixed-ranking data. For example, an instant messaging application includes subscription numbers and video numbers: the data type of a subscription-number message is usually graphic-text content, the data type of a video-number message is usually video, and subscription-number messages and video-number messages are therefore multi-source heterogeneous data. Similarly, in shopping applications, commodities and live-streaming rooms are multi-source heterogeneous data, and in video applications, long videos and short videos are multi-source heterogeneous data. In the embodiment of the present application, different data sources can be directly understood as different application scenarios.
In the embodiment of the application, a set of unified feature fields is maintained for all sample data, called the mixed-ranking unified features. The mixed-ranking unified features may also be referred to as attribute features. It should be noted that, in the embodiment of the present application, the sample data may also be referred to as sample media content.
In one implementation, the mixed-ranking unified features (attribute features) of a sample media content may include the object features of the sample object, the content features of the sample media content, and the scene-specific features.
Specifically, the attribute characteristics of any sample media content can be obtained by the following ways:
firstly, for one sample media content, obtaining object characteristics of the sample object based on set object attributes, and obtaining content characteristics of the sample media content based on set content attributes;
secondly, based on historical recommendation information between the sample media content and the sample object, scene specific characteristics of the sample media content are obtained;
finally, based on the scene specific features, the object features and the content features, the attribute features of the sample media content are obtained.
Object attributes include, but are not limited to, object identification, categories in which the object is interested, exposure click rate of the sample object over a period of time (e.g., the past week), and the like. Content attributes include, but are not limited to, content identification, and the like. The object identification and the content identification may be represented by serial numbers (IDs), but are not limited thereto. The history recommendation information includes, but is not limited to, one or more of exposure times, click times, and exposure click rate.
For example, suppose the sample media content is video 1 from video number a, the sample object is Xiaoming, video number a is a popular-science video number, and video 1 is a bird-identification video. Based on the object attribute (object ID), the object feature of Xiaoming is obtained, whose value is useID1001; based on the content attribute (content ID), the content feature of video 1 is obtained, whose value is itemID2334. The scene-specific features of the sample media content are obtained based on the historical recommendation information between Xiaoming and video 1, and the attribute features of the sample media content are then obtained based on the scene-specific features, the object feature and the content feature.
By the implementation manner, the attribute features comprise scene specific features, object features and content features, namely various attributes of the object and the media content, so that the evaluation model can learn the relation between the object and the media content when model training is carried out subsequently, and the probability of clicking the media content by the object is further output.
In one implementation, the scene-specific features include scene features corresponding to the scene attributes of each data source. The scene attributes of a data source may include one or more of the following: the historical recommendation information of the sample object on the sample media content within a first duration; and the historical recommendation information of the sample object on the data source to which the sample media content belongs within a second duration.
Specifically, the scene-specific features of any one sample media content can be obtained as follows:
firstly, extracting information of historical recommendation information between sample media content and sample objects based on target scene attributes set for target data sources to which the sample media content belongs, and obtaining target scene characteristics of the sample media content;
secondly, based on the other scene attributes set for the other data sources, the set attribute values are adopted as the other scene features of the sample media content;
finally, scene specific features are obtained based on the target scene features and other scene features.
The historical recommendation information of the sample object on the data source to which the sample media content belongs in the second duration refers to summary data of the historical recommendation information of the sample object on each media content in the data source in the first duration, and the summary data can be a total value or a mean value, but is not limited to the total value or the mean value. The first duration and the second duration may be a period of time or a plurality of periods of time, which is not limited. The first time period may be the same as the second time period or may be different from the second time period.
Taking the mixed-ranking recommendation scenario of subscription-number messages and video-number messages in an instant messaging application as an example, the scene attributes set for the subscription number include: the historical recommendation information of the sample object on the sample graphic-text message within the first duration, and the historical recommendation information of the sample object on the subscription number to which the sample graphic-text message belongs within the second duration; the scene attributes set for the video number include: the historical recommendation information of the sample object on the sample video within the first duration, and the historical recommendation information of the sample object on the video number to which the sample video belongs within the second duration.
For one piece of sample data, the values of the scene attributes of each data source in the scene-specific features depend on the data source to which the sample media content in the sample data belongs. In one case, if the sample media content is graphic-text information and the target data source to which the graphic-text information belongs is the first data source, information extraction is performed on the historical recommendation information between the graphic-text information and the sample object based on the target scene attributes set for the first data source, so as to obtain the target scene features. In another case, if the sample media content is a video and the target data source to which the video belongs is the second data source, information extraction is performed on the historical recommendation information between the video and the sample object based on the target scene attributes set for the second data source, so as to obtain the target scene features. In the mixed-ranking recommendation scenario of subscription-number messages and video-number messages, the first data source is the subscription number and the second data source is the video number. It should be noted that only two data sources are described in the embodiment of the present application; in practical applications, the number of data sources may be more than two.
That is, referring to fig. 4, the attribute features include an object feature, a content feature and a scene-specific feature, where the scene-specific feature includes a scene attribute of a subscription number and a feature corresponding to the scene attribute of a video number, if the sample media content is a subscription number message, that is, graphics and text information, the scene attribute of the subscription number has a value, and the scene attribute of the video number adopts a set attribute value. If the sample media content is a video number message, the scene attribute of the video number has a value, and the scene attribute of the subscription number adopts a set attribute value. Illustratively, the attribute value is set to 0.
For example, suppose the sample media content is video 1 from video number a, the sample object is Xiaoming, and video 2 and video 3 are also published under video number a. The scene-specific features of the sample data then include: the total exposure times, total click times and total exposure click rate of video 1, video 2 and video 3 in the past 1 day and the past 7 days; the exposure times, click times and exposure click rate of Xiaoming for video 1 in the past 1 day and the past 7 days; and the scene attributes set for the subscription number, namely the total exposure times, total click times and total exposure click rate of Xiaoming for the subscription number in the past 1 day and the past 7 days, as well as the exposure times, click times and exposure click rate of Xiaoming for the sample graphic-text message in the past 1 day and the past 7 days, which are all 0 because the sample media content does not come from the subscription number.
Through this implementation, the scene features of each data source are fused into the scene-specific features, so the model supports the input of multi-source heterogeneous data; media contents from different data sources, that is, media data from different application scenarios, can then be learned by the meta-network module in the model to obtain more accurate model parameters corresponding to each application scenario, which improves the evaluation accuracy of the model and further improves the recommendation effect of the recommendation system.
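As an illustration of this masking scheme, the sketch below builds the scene-specific feature for one piece of content: the scene attributes of the content's own data source are filled from the historical recommendation information, while the scene attributes reserved for every other data source take the set attribute value (0 in the example above). All field names are hypothetical placeholders.

```python
# Hypothetical scene-attribute field names per data source; the real fields come from
# business knowledge (exposure times, click times, exposure click rate, ...).
SCENE_FIELDS = {
    "subscription": ["sub_expose_7d", "sub_click_7d", "sub_ctr_7d"],
    "video": ["vid_expose_7d", "vid_click_7d", "vid_ctr_7d"],
}
DEFAULT_VALUE = 0.0  # the set attribute value used for every non-target data source

def build_scene_specific_feature(target_source, history):
    """history maps the target source's field names to values extracted from the
    historical recommendation information between the object and the content."""
    feature = {}
    for source, fields in SCENE_FIELDS.items():
        for field in fields:
            value = history.get(field, DEFAULT_VALUE) if source == target_source else DEFAULT_VALUE
            feature[field] = value
    return feature

# A video-number message fills only the video fields; the subscription fields stay 0.
feat = build_scene_specific_feature(
    "video", {"vid_expose_7d": 12, "vid_click_7d": 3, "vid_ctr_7d": 0.25})
```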
It should be noted that the scene specific features may be set by prior business knowledge and expert experience, and may be adjusted according to the application scene in the actual application process, which is not limited.
(3) Iterative training phase
In the embodiment of the application, after the training data preparation is completed, the constructed model can be trained by utilizing the training data.
In one embodiment, according to the structure of the model, hyperparameters such as the batch, the number of iterations (epoch) and the learning rate are set, and training is started, so that the evaluation model is finally obtained.
For example, the batch of the evaluation model is set to 128, the epoch is set to 1000, and the learning rate is set to 0.0001; that is, training is iterated 1000 times, and in each iteration the training samples are divided into batches according to the batch setting of 128 for learning. Of course, the training parameters here are just one possible example and may be adjusted as needed in practice.
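A sketch of how these hyperparameters might be wired into an ordinary training loop follows. A deliberately simplified stand-in model is used so the loop is self-contained; only the values 128, 1000 and 0.0001 come from the text, while reading "batch" as the mini-batch size, the Adam optimizer and the binary cross-entropy loss are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in model so the loop runs on its own; not the evaluation model of Fig. 2.
model = nn.Sequential(nn.Linear(12, 10), nn.ReLU(), nn.Linear(10, 1), nn.Sigmoid())
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate 0.0001 (optimizer assumed)
loss_fn = nn.BCELoss()                                     # click / no-click labels (loss assumed)

BATCH, EPOCHS = 128, 1000        # "batch" read here as the mini-batch size (an assumption)
features = torch.randn(1024, 12)                  # toy attribute features
labels = torch.randint(0, 2, (1024, 1)).float()   # toy real labels (clicked or not)

for epoch in range(EPOCHS):
    for start in range(0, len(features), BATCH):
        pred = model(features[start:start + BATCH])
        loss = loss_fn(pred, labels[start:start + BATCH])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```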
Referring to fig. 5, a flow chart of an evaluation model training method according to an embodiment of the present application is shown. In the iterative training process, all training samples are divided into designated batches, and training is performed based on training samples of respective sub-batches, and since steps performed in training for each batch in each iterative process are similar, training for one batch is described here as an example.
S501, respectively inputting respective scene specific characteristics of each sample media content into each first full-connection layer for parameter prediction, and obtaining corresponding model parameter sets output by each first full-connection layer.
In the embodiment of the application, each sample media content can be provided with a corresponding real label, and the real label characterizes whether the sample object clicked the sample media content. For example, if the sample recommended content is video 1 from video number a and the sample object is Xiaoming, the real label corresponding to video 1 characterizes that Xiaoming clicked video 1.
Specifically, referring to fig. 6, a flow chart of a parameter prediction process of a meta-network module in an embodiment of the application is shown. Here, the sample media content x, which is any one of the sample media contents, will be described as an example only.
S601, inputting scene specific features of sample media content x into each first full-connection layer respectively for feature mapping processing to obtain corresponding mapping vectors, wherein each mapping vector comprises model parameters output by the corresponding first full-connection layer.
In one embodiment, since the dimensions of each scene feature in the scene-specific features are multidimensional data, in order to reduce the amount of calculation, the scene-specific features may be pooled, and then the pooled scene features are used as the input of the meta-network module. The pooling process may be, but is not limited to, a mean pooling approach.
Specifically, when S601 is executed, the following steps may be adopted, but are not limited to:
s6011, carrying out pooling treatment on the scene specific features to obtain pooled scene features.
The dimension of the scene-specific features is represented by (F, d0), where F characterizes the dimension of each scene feature and d0 represents the number of features. An average pooling operation is performed on the feature dimension of the scene features to obtain pooled scene features with feature dimension (1, d0).
For example, assume that the dimension of the scene-specific feature is (5, 10), and the scene-specific feature is subjected to pooling processing to obtain a pooled scene feature with the feature dimension of (1, 10).
S6012, respectively inputting the pooled scene features into each first full-connection layer for feature mapping processing to obtain corresponding mapping vectors.
In the embodiment of the application, the first full connection layers are not sequenced.
The first full-connection layers include L layers, namely first full-connection layer 1, first full-connection layer 2, ..., and first full-connection layer L, and D0 is used to represent the total number of features included in the attribute features. Referring to fig. 7, for the first layer, an average pooling operation is performed on the scene features with feature dimension (F, D0) to obtain pooled scene features with feature dimension (1, D0), and the pooled scene features are input to first full-connection layer 1 for feature mapping processing to obtain mapping vector 1, whose dimension is (1, D0×D1). Similarly, for the second layer, the pooled scene features are input to first full-connection layer 2 for feature mapping processing to obtain mapping vector 2, whose dimension is (1, D1×D2); and for the L-th layer, the pooled scene features are input to first full-connection layer L for feature mapping processing to obtain mapping vector L, whose dimension is (1, DL-1×1).
S602, based on the set vector dimensions, splitting the obtained mapping vectors respectively to obtain corresponding feature matrixes, wherein each element contained in each feature matrix is a corresponding model parameter.
In the embodiment of the application, a splitting operation can be adopted to split each obtained mapping vector, and the split model parameters are then respectively recombined into matrices; each matrix corresponds to one model parameter set, and the elements contained in each matrix are the model parameters in the corresponding model parameter set.
Still referring to fig. 7, based on the set vector dimensions, the obtained mapping vector 1 is split to obtain feature matrix 1 with dimension (D0, D1), and the D0×D1 elements contained in feature matrix 1 are the model parameters corresponding to first full-connection layer 1. Similarly, the obtained mapping vector 2 is split to obtain feature matrix 2 with dimension (D1, D2), and the D1×D2 elements contained in feature matrix 2 are the model parameters corresponding to first full-connection layer 2; and the obtained mapping vector L is split to obtain feature matrix L with dimension (DL-1, 1), and the DL-1×1 elements contained in feature matrix L are the model parameters corresponding to first full-connection layer L.
For example, if the dimension of feature matrix 1 is (12, 10), the dimension of feature matrix 2 is (10, 8), and the dimension of feature matrix L is (4, 1), then the 120 elements contained in feature matrix 1 are the model parameters corresponding to first full-connection layer 1, the 80 elements contained in feature matrix 2 are the model parameters corresponding to first full-connection layer 2, and similarly, the 4 elements contained in feature matrix L are the model parameters corresponding to first full-connection layer L.
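To make the shapes concrete, the snippet below walks through S6011, S6012 and the splitting step S602 for illustrative dimensions (F = 5, D0 = 12, D1 = 10), assuming mean pooling and a PyTorch implementation; it only tracks tensor shapes and uses no values from the application.

```python
import torch
import torch.nn as nn

F_dim, D0, D1 = 5, 12, 10                          # illustrative dimensions only

scene_specific = torch.randn(F_dim, D0)            # scene-specific features, shape (F, D0)
pooled = scene_specific.mean(dim=0, keepdim=True)  # S6011: average pooling -> (1, D0)

first_fc_1 = nn.Linear(D0, D0 * D1)                # first full-connection layer 1
mapping_vec_1 = first_fc_1(pooled)                 # S6012: mapping vector 1, shape (1, D0*D1)

feature_matrix_1 = mapping_vec_1.view(D0, D1)      # S602: split and recombine -> (D0, D1)
# The D0*D1 = 120 elements of feature_matrix_1 are the model parameters predicted
# for the corresponding second full-connection layer.
print(pooled.shape, mapping_vec_1.shape, feature_matrix_1.shape)
```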
S502, respectively carrying out parameter configuration on each second full-connection layer based on each obtained model parameter set.
Specifically, for each sample media content, based on the set connection order between the second full connection layers, the following operations are performed for each of the second full connection layers in turn:
determining a first full-connection layer y corresponding to the second full-connection layer x from the first full-connection layers based on the corresponding relation between the first full-connection layers and the second full-connection layers for the second full-connection layer x;
and carrying out parameter configuration on the second full connection layer x based on the model parameter set output by the first full connection layer y.
Carrying out parameter configuration on the second full-connection layer x based on the model parameter set output by the first full-connection layer y means that the model parameter set output by the first full-connection layer y is used as the model parameters of the second full-connection layer x.
Referring to fig. 8, each of the second full connection layers includes L second full connection layers such as a second full connection layer 1, a second full connection layer 2, … …, and a second full connection layer L, and it is assumed that, in the correspondence between each of the first full connection layers and each of the second full connection layers, the first full connection layer 1 corresponds to the second full connection layer 1, the first full connection layer 2 corresponds to the second full connection layer 2, … …, and the first full connection layer L corresponds to the second full connection layer L.
It should be noted that pooling processing is performed on the attribute features to obtain pooled attribute features, and the pooled attribute features are then used as the model input of the prediction module. The feature dimension of the attribute features is (F, D0), and the feature dimension of the pooled attribute features is (1, D0).
For the first layer, the first full-connection layer corresponding to the second full-connection layer 1 is determined, based on the corresponding relation, to be the first full-connection layer 1 among the L first full-connection layers, and the model parameter set output by the first full-connection layer 1, namely the feature matrix 1 with dimensions (D0, D1), is used as the model parameters of the second fully connected layer 1.
For the second layer, the first fully-connected layer corresponding to the second fully-connected layer 2 is determined, based on the corresponding relation, to be the first fully-connected layer 2 among the L first fully-connected layers, and the model parameter set output by the first full-connection layer 2, namely the feature matrix 2 with dimensions (D1, D2), is used as the model parameters of the second fully connected layer 2.
Similarly, for the L-th layer, the first fully-connected layer corresponding to the second fully-connected layer L is determined, based on the corresponding relation, to be the first fully-connected layer L among the L first fully-connected layers, and the model parameter set output by the first full-connection layer L, namely the feature matrix L with dimensions (DL-1, 1), is used as the model parameters of the second full connection layer L.
Taking layer 1 as an example, referring to fig. 9, assume that the number of nodes of the first fully connected layer 1 and the second fully connected layer 1 is 10, the feature dimension of the input feature of the second fully connected layer 1 is (1, 12), and the output of the first fully connected layer 1 is the feature matrix 1 with dimensions (12, 10). The 120 model parameters w1,1, w1,2, w1,3, … …, w12,10 contained in the feature matrix 1 are used as the model parameters of the second fully connected layer 1, where the values of w1,1, w1,2, w1,3, … …, w12,10 are 0, 0.15, 0.25, … … and 0.15, respectively.
Through the implementation manner, the corresponding model parameter set output by the first full-connection layer is used to carry out parameter configuration on the second full-connection layer, so that the model parameters take the form of a fully connected network and exhibit better dynamic adaptability to multi-source heterogeneous content.
S503, respectively inputting the attribute characteristics of each sample media content into each second full-connection layer for prediction to obtain respective evaluation values of each sample media content.
In an embodiment of the present application, the evaluation value includes, but is not limited to, click Through Rate (CTR).
Specifically, for each sample media content, a respective evaluation value may be obtained in the following manner:
taking sample media content x as an example, based on the set connection sequence, for each of the second fully connected layers, the following operations are performed in sequence:
aiming at the second full-connection layer x, carrying out linear transformation on the mapping characteristics output by the previous second full-connection layer to obtain linear mapping characteristics; wherein the input of the first one of the second fully connected layers is the attribute feature;
and carrying out nonlinear mapping on the linear mapping characteristics to obtain mapping characteristics of the output of the second full connection layer x.
Wherein the evaluation value of the sample media content x is obtained based on the mapping characteristics output by the last second full connection layer. Illustratively, sigmoid function mapping is performed on the mapping features output by the last second full-connection layer, so as to obtain the evaluation value of the sample media content x.
For the first layer, performing linear transformation on the attribute feature X with the feature dimension of (1, D0) to obtain a linear mapping feature, wherein the feature dimension of the model parameter of the second full-connection layer 1 is (D0, D1), so that the feature dimension of the linear mapping feature is (1, D1), and then performing nonlinear mapping on the linear mapping feature to obtain a mapping feature X1 output by the second full-connection layer 1, and the feature dimension of the mapping feature X1 is (1, D1).
For the second layer, the mapping feature X1 with feature dimension (1, D1) is subjected to linear transformation to obtain a linear mapping feature; since the feature dimension of the model parameters of the second full-connection layer 2 is (D1, D2), the feature dimension of the linear mapping feature is (1, D2), and the linear mapping feature is then subjected to nonlinear mapping to obtain a mapping feature X2 output by the second full-connection layer 2, the feature dimension of which is (1, D2).
Similarly, for the L-th layer, the mapping feature XL-1 with feature dimension (1, DL-1) is subjected to linear transformation to obtain a linear mapping feature; since the feature dimension of the model parameters of the second full-connection layer L is (DL-1, 1), the feature dimension of the linear mapping feature is (1, 1), and the linear mapping feature is then subjected to nonlinear mapping to obtain a mapping feature XL output by the second full-connection layer L, the feature dimension of which is (1, 1).
And finally, mapping the mapping characteristics XL output by the second full-connection layer L by an activation function (such as a sigmoid function) to obtain an evaluation value of the sample media content x.
Through the implementation manner, the characteristics are processed through the linear mapping and the nonlinear mapping, so that the characteristic vector output by the second full-connection layer of the current layer can be used as the input of the second full-connection layer of the next layer.
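A minimal sketch of the prediction pass just described is given below, assuming ReLU as the nonlinear mapping and using the per-sample feature matrices produced by a meta-network such as the one sketched earlier; the choice of activation and the function names are illustrative assumptions rather than the patent's prescribed implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prediction_module(attribute_features, feature_matrices):
    """attribute_features: (F, D0) attribute features of one sample;
    feature_matrices: per-layer weight matrices generated by the meta-network."""
    x = attribute_features.mean(axis=0, keepdims=True)   # pooled attribute features, (1, D0)
    for w in feature_matrices:
        x = x @ w                                         # linear transformation
        x = np.maximum(x, 0.0)                            # nonlinear mapping (assumed ReLU)
    return float(sigmoid(x)[0, 0])                        # evaluation value, e.g. predicted CTR

# evaluation_value = prediction_module(attr_feats, meta_network(scene_feats, [12, 10, 8, 4, 1]))
```

In practice the nonlinearity on the last layer is often omitted so that the sigmoid receives an unbounded logit; the sketch keeps it only to mirror the step-by-step description above.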
S504, determining the total loss value corresponding to the evaluation model based on the obtained evaluation values and the corresponding real labels.
In the embodiment of the application, whether the sample object clicks the corresponding sample media content can be judged based on the obtained evaluation values, and the total loss value corresponding to the evaluation model is then determined based on the judgment results and the corresponding real labels; however, the method is not limited thereto.
In one possible implementation manner, if the evaluation value is not smaller than a preset evaluation threshold value, the judgment result indicates that the sample object clicks on the sample media content; otherwise, the judgment result indicates that the sample object does not click on the sample media content.
S505, judging whether the evaluation model meets the convergence condition.
In the embodiment of the present application, the convergence condition may include at least one of the following conditions:
(1) The total loss value is not greater than a preset loss value threshold.
(2) The iteration number reaches a preset number upper limit value.
S506: if the determination result in step S505 is no, the parameter adjustment is performed on the evaluation model based on the total loss value.
If any of the above conditions is met, it is determined that the evaluation model meets the convergence condition and training is ended; if not, the model parameters need to be adjusted further, and the adjusted evaluation model enters the next training round, that is, the process jumps back to S501 for execution.
In the embodiment of the application, the evaluation model can be evaluated by AUC (area under the ROC curve): when the AUC value meets a certain condition, the trained evaluation model is output; otherwise, the evaluation model can be retrained. AUC is an index used to evaluate model performance, and the higher the value, the better the performance of the model.
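As a rough sketch of S501–S506, assuming a binary cross-entropy loss over click labels (the patent does not fix the loss form), the training loop below checks the two convergence conditions; `forward` stands for the meta-network plus prediction module, and `update` for whatever parameter-adjustment step (e.g. gradient descent) is used, both hypothetical names.

```python
import numpy as np

def total_loss(eval_values, labels):
    """Binary cross-entropy between evaluation values and real click labels (assumed loss)."""
    p = np.clip(eval_values, 1e-7, 1 - 1e-7)
    return float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p)))

def train(samples, labels, forward, update, loss_threshold=0.05, max_iters=100):
    for _ in range(max_iters):                              # condition (2): iteration upper limit
        eval_values = np.array([forward(s) for s in samples])
        loss = total_loss(eval_values, np.asarray(labels))
        if loss <= loss_threshold:                          # condition (1): loss not greater than threshold
            break
        update(loss)                                        # S506: adjust the evaluation model parameters
    return eval_values
```

After training, AUC on a held-out set (for example computed with a standard routine such as sklearn.metrics.roc_auc_score) can decide whether the trained model is output or retrained, as described above.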
In the embodiment of the application, after the training of the evaluation model is finished, content recommendation can be performed based on the evaluation model obtained by training.
Referring to fig. 10, in combination with the evaluation model shown in fig. 2, a flowchart of a content recommendation method based on the evaluation model according to an embodiment of the application is shown, and the specific flow is as follows.
S1001, acquiring attribute characteristics of candidate media contents, wherein the attribute characteristics comprise scene specific characteristics. The attribute feature acquisition process of the sample media content in the data preparation stage is specifically referred to and will not be described herein.
For example, referring to fig. 11, the candidate media content is graphic information 1, and the attribute feature 1 of the graphic information 1 is obtained. The target object is Xiaoming, and the attribute feature 1 includes an object ID of Xiaoming, a content ID of the graphic information 1, and a scene-specific feature 1. The scene-specific feature 1 includes the scene features of the subscription number and the scene features of the video number. Assuming that the data source of the graphic information 1 is a subscription number A, and the subscription number A also publishes graphic information 2, the scene features of the subscription number include: the total exposure times, total clicking times and total exposure clicking rate of Xiaoming on the graphic information 1 and the graphic information 2 in the past 1 day and the past 7 days; in the scene features of the video number, the total exposure times, total clicking times and total exposure clicking rate of Xiaoming on the graphic information 1 in the past 1 day and the past 7 days are all 0.
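The example above can be pictured with the following sketch of assembling the scene specific feature: statistics are extracted only for the scene the content actually belongs to (here the subscription number), while the slots of the other scene (the video number) are filled with the set value 0. The scene names and the six-statistic layout are assumptions made for illustration.

```python
import numpy as np

def scene_specific_feature(source_scene, source_stats, scenes=("subscription_number", "video_number")):
    """source_stats: [exposures_1d, clicks_1d, ctr_1d, exposures_7d, clicks_7d, ctr_7d] for the source scene."""
    parts = []
    for scene in scenes:
        if scene == source_scene:
            parts.append(np.asarray(source_stats, dtype=float))  # real statistics of the data source's scene
        else:
            parts.append(np.zeros(len(source_stats)))            # other scenes: set attribute value 0
    return np.concatenate(parts)

# graphic information 1 comes from a subscription number, so the video-number slots stay 0
feature_1 = scene_specific_feature("subscription_number", [20, 3, 0.15, 90, 10, 0.11])
```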
S1002, inputting scene specific features into each first full-connection layer to conduct parameter prediction, and obtaining model parameter sets output by each first full-connection layer; each first fully connected layer is obtained through model training. Referring specifically to S501, details are not repeated here.
For example, referring to fig. 11, the scene specific feature 1 is input to each first full-connection layer to perform parameter prediction, so as to obtain a model parameter set output by each first full-connection layer.
S1003, respectively carrying out parameter configuration on each second full-connection layer based on each obtained model parameter set, and respectively inputting attribute characteristics into each second full-connection layer for prediction to obtain an evaluation value of the candidate media content. See specifically S502 and S503, and are not described in detail herein.
For example, referring to fig. 11, based on the obtained model parameter sets, parameter configuration is performed on each second full-connection layer, and the attribute feature 1 is input into each second full-connection layer for prediction, so as to obtain an evaluation value 1 of the graphic information 1, where the evaluation value 1 is 0.9, i.e. the click-through rate is 90%.
And S1004, when the evaluation value meets the preset recommendation condition, taking the candidate media content as the target recommendation content.
In the embodiment of the present application, as a possible implementation manner, the evaluation values of other media contents are obtained, the other media contents and the candidate media content are ranked based on their respective evaluation values, and when the candidate media content is determined to be within the set order range based on the ranking result, the candidate media content is taken as the target recommended content.
The number of other media contents may be one or more. The evaluation values of the other media contents can also be obtained based on the evaluation model, and the specific process is not repeated. The order may be from large to small, or from small to large, and is not limited thereto. Taking the case of sorting the other media contents and the candidate media content in order from large to small as an example, the set order range may be the first k, where the value of k is a positive integer.
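A small sketch of the recommendation condition described above, assuming a descending sort and a top-k order range; the function and variable names are illustrative only.

```python
def select_target_contents(scored_contents, k=3):
    """scored_contents: list of (content_id, evaluation_value) pairs; returns the first k after sorting."""
    ranked = sorted(scored_contents, key=lambda item: item[1], reverse=True)  # from large to small
    return [content_id for content_id, _ in ranked[:k]]

print(select_target_contents([("graphic_info_1", 0.9), ("video_1", 0.7), ("graphic_info_2", 0.9)]))
```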
For example, still referring to fig. 11, the other media contents include 9 items such as video 1 and graphic information 2, where video 1 is a cartoon and graphic information 2 is graphic information about food shopping. Similarly, based on the scene specific feature 2 and the attribute feature 2 of video 1, an evaluation value of 0.7 is obtained for video 1, and based on the scene specific feature 3 and the attribute feature 3 of graphic information 2, an evaluation value of 0.9 is obtained for graphic information 2. Then, based on the respective evaluation values of the other media contents and the candidate media content, the other media contents and the candidate media content are ordered from large to small. Assuming that the set order range is the first 3, the ordering result indicates that graphic information 1 ranks second; since graphic information 1 is within the set order range, graphic information 1 is taken as the target recommended content. Further, if video 1 and graphic information 2 are also within the first 3, video 1 and graphic information 2 can also be taken as target recommended contents.
According to the implementation mode, aiming at the multi-source heterogeneous data, the target recommended content can be determined from all media contents through the evaluation value output by the model, so that accurate recommendation of the media contents is realized, and the recommendation precision of a recommendation system is improved.
It should be noted that, in the embodiment of the present application, only one candidate media content is taken as an example for illustration; the number of candidate media contents may be one or more, and if there are a plurality of candidate media contents, the candidate media contents are ranked based on their evaluation values to determine the target media content. The process of sorting a plurality of candidate media contents based on the evaluation values and determining the target media content is similar to the process of sorting other media contents and the candidate media content based on the evaluation values and determining the target media content, and will not be described again.
The following description is made in connection with a specific application scenario.
In the model training stage, model training is carried out on the evaluation model to be trained by utilizing the media content of each sample in the shopping scene, and the evaluation model is obtained after training.
Referring to fig. 12, assume that the candidate media content is commodity 1, commodity 1 is strawberries, and the attribute feature 4 of commodity 1 is obtained. The target object is Xiao Wang, and the attribute feature includes an object ID of Xiao Wang, a content ID of commodity 1, and a scene-specific feature 4. The scene-specific feature 4 includes the scene features of the commodity and the scene features of the live broadcast room. Assuming that the data source of commodity 1 is fruit store z, and the fruit store z also sells commodities such as pineapples and oranges, the scene features of the commodity include: the total exposure times, total clicking times and total exposure clicking rate of Xiao Wang on each commodity issued by the fruit store z in the past 1 day and the past 7 days; in the scene features of the live broadcast room, the total exposure times, total clicking times and total exposure clicking rate of Xiao Wang on commodity 1 in the past 1 day and the past 7 days are all 0.
For commodity 1, the scene specific feature 4 is input to each first full-connection layer for parameter prediction to obtain the model parameter sets output by the first full-connection layers; then, based on the obtained model parameter sets, parameter configuration is performed on each second full-connection layer, and the attribute feature 4 is input to each second full-connection layer for prediction to obtain an evaluation value 4 of commodity 1, where the evaluation value 4 is 0.9.
The other media contents comprise 100 media contents such as the xx live broadcast room. Taking the xx live broadcast room as an example, the attribute feature 5 of the xx live broadcast room is obtained. The target object is Xiao Wang, and the attribute feature includes the object ID of Xiao Wang, the content ID of the xx live broadcast room, and a scene-specific feature 5. The scene-specific feature 5 includes the scene features of the commodity and the scene features of the live broadcast room. Assuming that the data source of the xx live broadcast room is Xiao Liu, the scene features of the live broadcast room comprise: the total exposure times, total clicking times and total exposure clicking rate of Xiao Wang on each live session of Xiao Liu in the past 1 day and the past 7 days, which are all 0; and the scene features of the commodity are also all 0.
For the xx live broadcast room, the scene specific feature 5 is input to each first full-connection layer for parameter prediction to obtain the model parameter sets output by the first full-connection layers; parameter configuration is performed on each second full-connection layer based on the obtained model parameter sets, and the attribute feature 5 is input to each second full-connection layer for prediction to obtain an evaluation value 5 of the xx live broadcast room, where the evaluation value 5 is 0.6.
Further, based on the evaluation values of the 100 other media contents and commodity 1, the other media contents and the candidate media content are sorted in order from large to small; the sorting result indicates that commodity 1 and the xx live broadcast room are the first two, and commodity 1 and the xx live broadcast room are taken as the target recommended contents.
Obviously, in the embodiment of the application, a meta-learning model (i.e. the meta-network module) is introduced to dynamically screen the different data patterns contained in media contents from different data sources. For the input features [a, b, c, d], the meta-learning model considers that the important data pattern for sample 1 is [a', b', c', d'], while for sample 2 the important data pattern is [a'', b'', c'', d'']; this is a dynamic crossing of the overall features. Further, for the meta-learning model, the extracted characterization vector is f([a, b, c, d], w = g([a, b, c, d])), where f is a mapping function from the input to the target, and the parameter w is dynamically determined by the meta-network. For sample 1, the characterization vector is f([a, b, c, d], w = [a', b', c', d']); for sample 2, the characterization vector is f([a, b, c, d], w = [a'', b'', c'', d'']).
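The formulation f([a, b, c, d], w = g([a, b, c, d])) can be made concrete with the toy sketch below, in which a hypothetical meta-function g produces per-sample weights that parameterise the mapping f, so that two samples sharing the same architecture end up with different effective parameters; the specific forms of f and g here are assumptions, not the patent's.

```python
import numpy as np

rng = np.random.default_rng(0)
meta_weights = rng.standard_normal((4, 4))   # parameters of the meta-network g (trained in practice)

def g(x):
    """Meta-network: input features -> dynamically determined parameter vector w."""
    return np.tanh(meta_weights @ x)

def f(x, w):
    """Target mapping parameterised by the dynamic weights w."""
    return float(np.dot(w, x))

sample_1, sample_2 = rng.standard_normal(4), rng.standard_normal(4)
print(f(sample_1, g(sample_1)), f(sample_2, g(sample_2)))   # different effective parameters per sample
```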
Obviously, by utilizing the scene specific features of the candidate media content and obtaining the model parameters of the prediction module through the meta-network module, the key information related to the data source can be effectively extracted from the complex features of the candidate media content, so that the model parameters of the prediction module can be dynamically adjusted according to the data source, which improves the evaluation accuracy of the evaluation model and further improves the recommendation accuracy. In addition, because the model parameters are obtained by utilizing the scene specific features, even if the content ratios of the multiple information sources differ greatly, the model parameters can better reflect the differences between different information sources, thereby further improving the evaluation accuracy of the evaluation model.
It will be appreciated that in the specific embodiment of the present application, related data such as object attributes, historical usage information, etc. related to user information are referred to, and when the above embodiments of the present application are applied to specific products or technologies, user permission or consent is required, and the collection, usage, and processing of related data is required to comply with related laws and regulations and standards of related countries and regions.
Based on the same inventive concept, the embodiment of the application provides a content recommendation device based on an evaluation model. As shown in fig. 13, which is a schematic structural diagram of a content recommendation device 1300 based on an evaluation model, the evaluation model includes a meta-network module and a prediction module, the meta-network module includes first full-connection layers, the prediction module includes second full-connection layers having the same number of nodes as the first full-connection layers, and the device includes:
an obtaining unit 1301, configured to obtain an attribute feature of the candidate media content, where the attribute feature includes a scene specific feature;
a meta-network unit 1302, configured to input the scene specific features to the first full-connection layers respectively to perform parameter prediction, so as to obtain model parameter sets output by the first full-connection layers respectively; each first full-connection layer is obtained through model training;
The estimating unit 1303 is configured to perform parameter configuration on each second full-connection layer based on each obtained model parameter set, and input the attribute features into each second full-connection layer for prediction, so as to obtain an evaluation value of the candidate media content;
and a recommendation unit 1304 for regarding the candidate media content as a target recommended content when the evaluation value satisfies a preset recommendation condition.
As a possible implementation manner, when the scene specific features are respectively input to the first fully-connected layers to perform parameter prediction, and a model parameter set output by each of the first fully-connected layers is obtained, the meta-network unit 1302 is specifically configured to:
the scene specific features are respectively input into the first full-connection layers to perform feature mapping processing to obtain corresponding mapping vectors, wherein each mapping vector comprises model parameters output by the corresponding first full-connection layer;
based on the set vector dimension, splitting the obtained mapping vectors respectively to obtain corresponding feature matrixes, wherein each element contained in each feature matrix is a corresponding model parameter.
As a possible implementation manner, when the scene specific features are respectively input to the first full-connection layers to perform feature mapping processing, and corresponding mapping vectors are obtained, the meta-network unit 1302 is specifically configured to:
Carrying out pooling treatment on the scene specific features to obtain pooled scene features;
and respectively inputting the obtained pooled scene characteristics to each first full-connection layer for characteristic mapping processing to obtain corresponding mapping vectors.
As a possible implementation manner, when the parameter configuration is performed on each of the second fully-connected layers based on each obtained model parameter set, the estimating unit 1303 is specifically configured to:
based on the set connection sequence between the second full connection layers, the following operations are performed for each of the second full connection layers in turn:
determining, for one second full-connection layer, one first full-connection layer corresponding to the one second full-connection layer from the first full-connection layers based on a correspondence between the first full-connection layers and the second full-connection layers;
And carrying out parameter configuration on the second full-connection layer based on the model parameter set output by the first full-connection layer.
As a possible implementation manner, the second full-connection layers are connected according to a set connection sequence; when the attribute features are respectively input into the second full-connection layers for prediction to obtain the evaluation value of the candidate media content, the estimating unit 1303 is specifically configured to:
based on the set connection order, for each of the second full connection layers, the following operations are sequentially performed:
aiming at one second full-connection layer, carrying out linear transformation on the mapping characteristics output by the previous second full-connection layer to obtain linear mapping characteristics; wherein the input of the first one of the second full connection layers is the attribute feature;
and carrying out nonlinear mapping on the linear mapping characteristics to obtain the mapping characteristics of the output of the second full-connection layer, wherein the evaluation value of the candidate media content is obtained based on the mapping characteristics of the output of the last second full-connection layer.
As a possible implementation manner, when the attribute features of the candidate media content are acquired, the acquiring unit 1301 is specifically configured to:
For a target object and the candidate media content, obtaining object characteristics of the target object based on set object attributes, and obtaining content characteristics of the candidate media content based on set content attributes;
acquiring scene specific characteristics of the candidate media content based on historical recommendation information between the candidate media content and the target object;
and obtaining attribute characteristics of the candidate media content based on the scene specific characteristics, the object characteristics and the content characteristics.
As a possible implementation manner, when the obtaining the scene specific feature of the candidate media content based on the historical recommendation information between the candidate media content and the target object, the obtaining unit 1301 is specifically configured to:
based on target scene attributes set for target data sources to which the candidate media contents belong, extracting information from historical recommendation information between the candidate media contents and the target objects to obtain target scene characteristics of the candidate media contents;
based on other scene attributes set for other data sources, adopting a set attribute value as other scene characteristics of the candidate media content;
And obtaining the scene specific feature based on the target scene feature and the other scene features.
As a possible implementation manner, the obtaining unit 1301 is specifically configured to, when extracting information from historical recommendation information between the candidate media content and the target object based on the target scene attribute set for the target data source to which the candidate media content belongs, obtain the target scene feature of the candidate media content:
if the candidate media information is graphic information and the target data source to which the graphic information belongs is a first data source, extracting information from historical recommendation information between the graphic information and the target object based on target scene attributes set for the first data source to obtain the target scene characteristics;
and if the candidate media information is a video and the target data source to which the video belongs is a second data source, extracting information from historical recommendation information between the video and the target object based on target scene attributes set for the second data source to obtain the target scene characteristics.
As one possible implementation, the evaluation value is used to characterize a probability that the target object clicks on the candidate media content;
The recommendation unit 1304 is specifically configured to, when the evaluation value satisfies the preset recommendation condition and the candidate media content is taken as the target recommended content:
acquiring evaluation values of other media contents, and sorting the other media contents and the candidate media content based on the respective evaluation values of the other media contents and the candidate media content;
and when the candidate media content is determined to be within the set order range based on the sorting result, the candidate media content is taken as target recommended content.
For convenience of description, the above parts are described as being functionally divided into modules (or units). Of course, when implementing the present application, the functions of each module (or unit) may be implemented in one or more pieces of software or hardware.
The specific manner in which the respective units execute the requests in the apparatus of the above embodiment has been described in detail in the embodiment concerning the method, and will not be described in detail here.
Those skilled in the art will appreciate that the various aspects of the application may be implemented as a system, method, or program product. Accordingly, aspects of the application may be embodied in the following forms, namely: an entirely hardware embodiment, an entirely software embodiment (including firmware, micro-code, etc.), or an embodiment combining hardware and software aspects, which may be referred to herein as a "circuit", "module" or "system".
Based on the same inventive concept, the embodiment of the application also provides electronic equipment. In one embodiment, the electronic device may be a server or a terminal device. Referring to fig. 14, which is a schematic structural diagram of one possible electronic device provided in an embodiment of the present application, in fig. 14, an electronic device 1400 includes: a processor 1410, and a memory 1420.
The memory 1420 stores a computer program executable by the processor 1410, and the processor 1410 can execute the steps of the content recommendation method based on the evaluation model by executing the instructions stored in the memory 1420.
The memory 1420 may be a volatile memory (volatile memory), such as a random-access memory (RAM); the memory 1420 may also be a non-volatile memory (non-volatile memory), such as a read-only memory (ROM), a flash memory (flash memory), a hard disk drive (HDD) or a solid state drive (SSD); or the memory 1420 may be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory 1420 may also be a combination of the above.
The processor 1410 may include one or more central processing units (central processing unit, CPU) or digital processing units, etc. A processor 1410 for implementing the above-described content recommendation method based on the evaluation model when executing the computer program stored in the memory 1420.
In some embodiments, processor 1410 and memory 1420 may be implemented on the same chip, and in some embodiments they may be implemented separately on separate chips.
The specific connection medium between the processor 1410 and the memory 1420 is not limited in the embodiments of the present application. In the embodiment of the present application, the processor 1410 and the memory 1420 are connected by a bus, which is depicted in fig. 14 by a bold line; the connection manner between other components is only schematically illustrated and is not limited thereto. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of description, only one thick line is depicted in fig. 14, but this does not mean that there is only one bus or one type of bus.
Based on the same inventive concept, an embodiment of the present application provides a computer readable storage medium comprising a computer program for causing an electronic device to perform the steps of the above-described evaluation model based content recommendation method when the computer program is run on the electronic device. In some possible embodiments, aspects of the content recommendation method based on an evaluation model provided by the present application may also be implemented in the form of a program product comprising a computer program for causing an electronic device to perform the steps of the content recommendation method based on an evaluation model described above when the program product is run on the electronic device, e.g. the electronic device may perform the steps as shown in fig. 5.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (Compact Disk Read Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product of embodiments of the present application may take the form of a CD-ROM and comprise a computer program and may run on an electronic device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a computer program for use by or in connection with a command execution system, apparatus, or device.
The readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave in which a readable computer program is embodied. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a computer program for use by or in connection with a command execution system, apparatus, or device.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (15)

1. A content recommendation method based on an evaluation model, wherein the evaluation model includes a meta-network module and a prediction module, the meta-network module includes first full-connection layers, the prediction module includes second full-connection layers having the same number of nodes as the first full-connection layers, the method includes:
acquiring attribute characteristics of candidate media contents, wherein the attribute characteristics comprise scene specific characteristics;
inputting the scene specific features to the first full-connection layers respectively for parameter prediction to obtain model parameter sets output by the first full-connection layers respectively; each first full-connection layer is obtained through model training;
Based on the obtained parameter sets of the models, respectively carrying out parameter configuration on the second full-connection layers, respectively inputting the attribute features into the second full-connection layers for prediction, and obtaining the evaluation value of the candidate media content;
and when the evaluation value meets a preset recommendation condition, taking the candidate media content as a target recommendation content.
2. The method of claim 1, wherein the inputting the scene-specific features into the first fully-connected layers for parameter prediction to obtain the model parameter sets output by the first fully-connected layers respectively includes:
the scene specific features are respectively input into the first full-connection layers to perform feature mapping processing to obtain corresponding mapping vectors, wherein each mapping vector comprises model parameters output by the corresponding first full-connection layer;
based on the set vector dimension, splitting the obtained mapping vectors respectively to obtain corresponding feature matrixes, wherein each element contained in each feature matrix is a corresponding model parameter.
3. The method of claim 2, wherein the inputting the scene-specific features into the first full-connection layers for feature mapping processing to obtain corresponding mapping vectors includes:
Carrying out pooling treatment on the scene specific features to obtain pooled scene features;
and respectively inputting the obtained pooled scene characteristics to each first full-connection layer for characteristic mapping processing to obtain corresponding mapping vectors.
4. A method according to claim 1, 2 or 3, wherein said configuring parameters of each second fully connected layer based on each obtained model parameter set, respectively, comprises:
based on the set connection sequence between the second full connection layers, the following operations are performed for each of the second full connection layers in turn:
determining, for one second full-connection layer, one first full-connection layer corresponding to the one second full-connection layer from the first full-connection layers based on a correspondence between the first full-connection layers and the second full-connection layers;
and carrying out parameter configuration on the second full-connection layer based on the model parameter set output by the first full-connection layer.
5. A method as claimed in claim 1, 2 or 3, wherein the second full-connection layers are connected according to a set connection sequence, and the attribute features are respectively input into the second full-connection layers for prediction, so as to obtain the evaluation value of the candidate media content, which includes:
Based on the set connection order, for each of the second full connection layers, the following operations are sequentially performed:
aiming at one second full-connection layer, carrying out linear transformation on the mapping characteristics output by the previous second full-connection layer to obtain linear mapping characteristics; wherein the input of the first one of the second full connection layers is the attribute feature;
and carrying out nonlinear mapping on the linear mapping characteristics to obtain the mapping characteristics of the output of the second full-connection layer, wherein the evaluation value of the candidate media content is obtained based on the mapping characteristics of the output of the last second full-connection layer.
6. A method as recited in claim 1, 2 or 3, wherein said obtaining attribute characteristics of candidate media content comprises:
for a target object and the candidate media content, obtaining object characteristics of the target object based on set object attributes, and obtaining content characteristics of the candidate media content based on set content attributes;
acquiring scene specific characteristics of the candidate media content based on historical recommendation information between the candidate media content and the target object;
and obtaining attribute characteristics of the candidate media content based on the scene specific characteristics, the object characteristics and the content characteristics.
7. The method of claim 6, wherein the obtaining scene specific features of the candidate media content based on historical recommendation information between the candidate media content and the target object comprises:
based on target scene attributes set for target data sources to which the candidate media contents belong, extracting information from historical recommendation information between the candidate media contents and the target objects to obtain target scene characteristics of the candidate media contents;
based on other scene attributes set for other data sources, adopting a set attribute value as other scene characteristics of the candidate media content;
and obtaining the scene specific feature based on the target scene feature and the other scene features.
8. The method of claim 7, wherein the extracting information from the historical recommendation information between the candidate media content and the target object based on the target scene attribute set for the target data source to which the candidate media content belongs, to obtain the target scene feature of the candidate media content, comprises:
if the candidate media information is graphic information, and the target data source to which the graphic information belongs is a first data source, extracting information from historical recommendation information between the graphic information and the target object based on target scene attributes set for the first data source to obtain the target scene characteristics;
And if the candidate media information is a video and the target data source to which the video belongs is a second data source, extracting information from historical recommendation information between the video and the target object based on target scene attributes set for the second data source to obtain the target scene characteristics.
9. The method of claim 6, wherein the evaluation value is used to characterize a probability that the target object clicks on the candidate media content;
and when the evaluation value meets a preset recommendation condition, taking the candidate media content as a target recommendation content, wherein the method comprises the following steps:
acquiring evaluation values of other media contents, and sorting the other media contents and the candidate media content based on the respective evaluation values of the other media contents and the candidate media content;
and when the candidate media content is determined to be within the set order range based on the sorting result, the candidate media content is taken as target recommended content.
10. A content recommendation device based on an evaluation model, wherein the evaluation model includes a meta network module and a prediction module, the meta network module includes first full connection layers, the prediction module includes second full connection layers having the same number of nodes as the first full connection layers, the device includes:
The acquisition unit is used for acquiring attribute characteristics of the candidate media content, wherein the attribute characteristics comprise scene specific characteristics;
the meta-network unit is used for inputting the scene specific characteristics to each first full-connection layer respectively for parameter prediction to obtain model parameter sets output by each first full-connection layer respectively; each first full-connection layer is obtained through model training;
the estimating unit is used for respectively carrying out parameter configuration on each second full-connection layer based on each obtained model parameter set, and respectively inputting the attribute characteristics into each second full-connection layer for prediction to obtain an evaluation value of the candidate media content;
and the recommending unit is used for taking the candidate media content as a target recommended content when the evaluation value meets a preset recommending condition.
11. The apparatus of claim 10, wherein the meta-network unit is specifically configured to, when inputting the scene-specific features to the first full-connection layers to perform parameter prediction to obtain the model parameter sets output by the first full-connection layers, respectively:
the scene specific features are respectively input into the first full-connection layers to perform feature mapping processing to obtain corresponding mapping vectors, wherein each mapping vector comprises model parameters output by the corresponding first full-connection layer;
Based on the set vector dimension, splitting the obtained mapping vectors respectively to obtain corresponding feature matrixes, wherein each element contained in each feature matrix is a corresponding model parameter.
12. The apparatus of claim 11, wherein the meta-network unit is specifically configured to, when the scene-specific features are respectively input to the first full-connection layers to perform feature mapping processing to obtain corresponding mapping vectors:
carrying out pooling treatment on the scene specific features to obtain pooled scene features;
and respectively inputting the obtained pooled scene characteristics to each first full-connection layer for characteristic mapping processing to obtain corresponding mapping vectors.
13. An electronic device comprising a processor and a memory, wherein the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of the method of any of claims 1 to 9.
14. A computer readable storage medium, characterized in that it comprises a computer program for causing an electronic device to perform the steps of the method according to any one of claims 1-9 when said computer program is run on the electronic device.
15. A computer program product, characterized in that it comprises a computer program stored in a computer readable storage medium, from which computer readable storage medium a processor of an electronic device reads and executes the computer program, causing the electronic device to perform the steps of the method according to any one of claims 1-9.
CN202211523991.2A 2022-11-30 2022-11-30 Content recommendation method and related device based on evaluation model Pending CN117009556A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211523991.2A CN117009556A (en) 2022-11-30 2022-11-30 Content recommendation method and related device based on evaluation model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211523991.2A CN117009556A (en) 2022-11-30 2022-11-30 Content recommendation method and related device based on evaluation model

Publications (1)

Publication Number Publication Date
CN117009556A true CN117009556A (en) 2023-11-07

Family

ID=88562533

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211523991.2A Pending CN117009556A (en) 2022-11-30 2022-11-30 Content recommendation method and related device based on evaluation model

Country Status (1)

Country Link
CN (1) CN117009556A (en)

Similar Documents

Publication Publication Date Title
CN109919316B (en) Method, device and equipment for acquiring network representation learning vector and storage medium
CN111966914B (en) Content recommendation method and device based on artificial intelligence and computer equipment
CN110728317A (en) Training method and system of decision tree model, storage medium and prediction method
CN114265979B (en) Method for determining fusion parameters, information recommendation method and model training method
CN111368973B (en) Method and apparatus for training a super network
WO2022148186A1 (en) Behavioral sequence data processing method and apparatus
CN113515690A (en) Training method of content recall model, content recall method, device and equipment
CN113609337A (en) Pre-training method, device, equipment and medium of graph neural network
CN117217284A (en) Data processing method and device
CN115186192A (en) Information processing method, device, storage medium and equipment
CN115130536A (en) Training method of feature extraction model, data processing method, device and equipment
CN116910373B (en) House source recommendation method and device, electronic equipment and storage medium
WO2024051707A1 (en) Recommendation model training method and apparatus, and resource recommendation method and apparatus
CN112989182A (en) Information processing method, information processing apparatus, information processing device, and storage medium
CN117573961A (en) Information recommendation method, device, electronic equipment, storage medium and program product
CN116975686A (en) Method for training student model, behavior prediction method and device
CN116910357A (en) Data processing method and related device
Sunitha et al. Political optimizer-based automated machine learning for skin lesion data
CN115631008B (en) Commodity recommendation method, device, equipment and medium
CN117033997A (en) Data segmentation method, device, electronic equipment and medium
CN115858911A (en) Information recommendation method and device, electronic equipment and computer-readable storage medium
CN117009556A (en) Content recommendation method and related device based on evaluation model
CN112417260B (en) Localized recommendation method, device and storage medium
CN111935259A (en) Method and device for determining target account set, storage medium and electronic equipment
CN116501993B (en) House source data recommendation method and device

Legal Events

Date Code Title Description
PB01 Publication