CN113269262A - Method, apparatus and storage medium for training matching degree detection model


Info

Publication number
CN113269262A
Authority
CN
China
Prior art keywords
account
matching degree
activity
attribute information
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110613653.7A
Other languages
Chinese (zh)
Inventor
黄昕
傅鸿城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Music Entertainment Technology Shenzhen Co Ltd
Original Assignee
Tencent Music Entertainment Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Music Entertainment Technology (Shenzhen) Co., Ltd.
Priority to CN202110613653.7A
Publication of CN113269262A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G06N 3/08 Learning methods


Abstract

The application discloses a method, a device, and a storage medium for training a matching degree detection model, belonging to the field of computer technology. The method comprises the following steps: acquiring a sample low-activity account and a sample high-activity account that have the same account attribute information; acquiring first object attribute information of historical display objects corresponding to the sample low-activity account and second object attribute information of historical display objects corresponding to the sample high-activity account; determining, with a matching degree detection model, a first matching degree score between the account attribute information and the first object attribute information and a second matching degree score between the account attribute information and the second object attribute information; adjusting parameters of the matching degree detection model according to the difference between the second matching degree score and the first matching degree score; and, if a preset training end condition is met, taking the parameter-adjusted matching degree detection model as the trained matching degree detection model. The method and device can improve the accuracy of the model output.

Description

Method, apparatus and storage medium for training matching degree detection model
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a storage medium for training a matching degree detection model.
Background
Many applications involve displaying objects to users, where an object may be a video, audio, text, a commodity, a merchant, and so on.
When selecting objects to display to a user, a machine learning model is generally used to determine a matching degree score between each object and the user, and objects are selected for display based on that score. When training such a model, a training sample is usually built from the user's actual operation on a displayed object during a single display session: the user and the object serve as the sample user and sample object, the benchmark matching degree score is set to the highest score if the user praises (likes) the object and to the lowest score if the user does not, and the machine learning model is then trained on the difference between the benchmark matching degree score and the score it actually outputs.
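As a non-authoritative sketch, this related-art labeling scheme can be written as follows; the 0/1 benchmark values and the squared-difference loss are illustrative assumptions, not details from the patent:

```python
def benchmark_score(praised: bool) -> float:
    """Related-art benchmark: highest score (1.0) on praise, lowest (0.0) otherwise."""
    return 1.0 if praised else 0.0

def pointwise_loss(model_score: float, praised: bool) -> float:
    """Train on the difference between the benchmark score and the model's
    actual output (squared here for illustration)."""
    return (benchmark_score(praised) - model_score) ** 2

assert benchmark_score(True) == 1.0
assert abs(pointwise_loss(0.6, False) - 0.36) < 1e-9
```

The coarseness criticized below follows directly from this scheme: every non-praised object gets the same 0.0 label regardless of how strongly it affected the user.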
In the course of implementing the present application, the inventors found that the related art has at least the following problems:
the same non-praised object may influence different users to different degrees, and the same praised object may likewise influence users to different degrees. Setting the benchmark matching degree score to the highest or the lowest score solely according to whether a praise operation was performed is too coarse, and this training approach degrades the accuracy of the model output.
Disclosure of Invention
The embodiments of the present application provide a method, a device, and a storage medium for training a matching degree detection model, which can address the problem of low accuracy of model output. The technical scheme is as follows:
in a first aspect, a method for training a matching degree detection model is provided, the method including:
acquiring a sample low-activity account and a sample high-activity account with the same account attribute information;
acquiring first object attribute information of a history display object corresponding to the sample low-activity account and second object attribute information of the history display object corresponding to the sample high-activity account;
determining a first matching degree score of the account attribute information and the first object attribute information and a second matching degree score of the account attribute information and the second object attribute information by using a matching degree detection model;
adjusting parameters of the matching degree detection model according to the difference value between the second matching degree score and the first matching degree score;
and if the preset training end condition is met, determining the matching degree detection model after parameter adjustment as the trained matching degree detection model.
In a possible implementation manner, if the preset training end condition is not met, another sample low-activity account and another sample high-activity account with the same account attribute information are obtained and input into the matching degree detection model, and the parameter-adjusted model is again adjusted according to the difference between the second matching degree score and the first matching degree score; this repeats until the preset training end condition is met, at which point the parameter-adjusted matching degree detection model is determined to be the trained matching degree detection model.
In a possible implementation manner, the performing parameter adjustment on the matching degree detection model according to a difference between the second matching degree score and the first matching degree score includes:
and adjusting parameters of the matching degree detection model according to the difference value of the second matching degree score and the first matching degree score, so that the difference value of the second matching degree score and the first matching degree score has an increasing trend.
In a possible implementation manner, the performing a parameter adjustment on the match detection model according to a difference between the second match score and the first match score so that the difference between the second match score and the first match score has an increasing trend includes:
determining a Loss value based on the loss function Loss = max(d1 - d2 + margin, 0), wherein d2 is the second matching degree score, d1 is the first matching degree score, and margin is a preset positive number;
and adjusting parameters of the matching degree detection model based on the loss value so that the loss value has a decreasing trend.
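As a minimal, non-authoritative sketch of this margin objective, the following toy example uses a single-parameter linear scorer with a hand-derived gradient step; the margin of 0.2, the learning rate, and the scalar "feature overlap" inputs are illustrative assumptions, not details from the patent:

```python
class ToyMatcher:
    """One-parameter stand-in for the matching degree detection model:
    score = w * overlap, trained with Loss = max(d1 - d2 + margin, 0)."""

    def __init__(self, w=0.0, lr=0.1):
        self.w, self.lr = w, lr

    def score(self, overlap):
        return self.w * overlap

    def adjust(self, overlap_low, overlap_high, margin=0.2):
        d1 = self.score(overlap_low)   # low-activity pair score
        d2 = self.score(overlap_high)  # high-activity pair score
        loss = max(d1 - d2 + margin, 0.0)
        if loss > 0.0:
            # dLoss/dw = overlap_low - overlap_high while the loss is active
            self.w -= self.lr * (overlap_low - overlap_high)
        return loss

m = ToyMatcher()
# Assume the high-activity pair has more feature overlap (0.9) than the
# low-activity pair (0.3); training widens d2 - d1 until the loss reaches 0.
for _ in range(100):
    loss = m.adjust(0.3, 0.9)
assert loss == 0.0 and m.score(0.9) - m.score(0.3) >= 0.2
```

The hinge shape matches the stated goal: the loss decreases exactly when the difference d2 - d1 increases, and becomes zero once that difference exceeds the margin.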
In a possible implementation manner, the obtaining first object attribute information of the history display object corresponding to the sample low-activity account and second object attribute information of the history display object corresponding to the sample high-activity account includes:
determining a first object set according to a first preset number of objects displayed last for the sample low-activity account, and determining a second object set according to a first preset number of objects displayed last for the sample high-activity account;
acquiring object pairs in the first object set and the second object set, wherein the object pairs comprise a first object and a second object which belong to different object sets;
and acquiring first object attribute information of a first object in the object pair and second object attribute information of a second object in the object pair.
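One plausible reading of this pairing step (a sketch under that assumption, not the patent's definitive procedure) forms the cross product of the two sets, so every first object is paired with every second object:

```python
from itertools import product

def make_object_pairs(first_set, second_set):
    """Each pair holds a first object (from the set shown to the low-activity
    account) and a second object (from the set shown to the high-activity
    account); sorting just makes the output deterministic."""
    return list(product(sorted(first_set), sorted(second_set)))

pairs = make_object_pairs({"v1", "v2"}, {"v7"})
assert pairs == [("v1", "v7"), ("v2", "v7")]
```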
In a possible implementation manner, the determining a first set of objects according to a first preset number of objects last shown in the sample low-activity account, and determining a second set of objects according to a first preset number of objects last shown in the sample high-activity account includes:
and obtaining objects meeting the negative feedback condition of the user from the objects with the first preset number displayed for the sample low-activity accounts last to obtain a first object set, and obtaining objects meeting the positive feedback condition of the user from the objects with the first preset number displayed for the sample high-activity accounts last to obtain a second object set.
In a possible implementation manner, the user positive feedback condition comprises that the sample high-activity account performs a praise operation on the object, or that the proportion of the sample high-activity account's playing duration of the object to the object's complete duration is greater than a first proportion threshold;
the user negative feedback condition comprises that the sample low-activity account does not perform the praise operation on the object and that the proportion of the sample low-activity account's playing duration of the object to the object's complete duration is less than a second proportion threshold;
wherein the second proportion threshold is less than the first proportion threshold.
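These conditions can be written as two small predicates; the concrete threshold values (0.8 and 0.2) are illustrative assumptions, since the patent only requires the second threshold to be smaller than the first:

```python
def is_positive_feedback(praised, play_ratio, first_threshold=0.8):
    """Positive feedback: the account praised the object, or played more
    than `first_threshold` of its complete duration."""
    return praised or play_ratio > first_threshold

def is_negative_feedback(praised, play_ratio, second_threshold=0.2):
    """Negative feedback: no praise AND the play ratio is below the
    (smaller) second threshold."""
    return (not praised) and play_ratio < second_threshold

assert is_positive_feedback(True, 0.0)
assert is_positive_feedback(False, 0.95)
assert is_negative_feedback(False, 0.05)
assert not is_negative_feedback(False, 0.5)  # neither positive nor negative
```

Note that an object can satisfy neither predicate (e.g., unpraised but half-played), in which case it contributes to neither object set.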
In a possible implementation manner, the obtained sample low-activity account and sample high-activity account are a plurality of sample low-activity accounts and a plurality of sample high-activity accounts, all having the same account attribute information;
obtaining objects meeting the user negative feedback condition from the first preset number of objects last displayed to the sample low-activity accounts to obtain the first object set, and obtaining objects meeting the user positive feedback condition from the first preset number of objects last displayed to the sample high-activity accounts to obtain the second object set, comprises the following steps:
for each sample low-activity account, obtaining the objects that meet the user negative feedback condition from the first preset number of objects last displayed to that account; merging and de-duplicating the qualifying objects across all sample low-activity accounts; and taking a second preset number of objects from the merged, de-duplicated result to form the first object set;
for each sample high-activity account, obtaining the objects that meet the user positive feedback condition from the first preset number of objects last displayed to that account; merging and de-duplicating the qualifying objects across all sample high-activity accounts; and taking a second preset number of objects from the merged, de-duplicated result to form the second object set.
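The merge-and-deduplicate step above can be sketched as follows; keeping first-seen order and taking the leading objects is an assumption, as the patent does not specify how the second preset number of objects is selected from the de-duplicated result:

```python
def build_object_set(per_account_objects, second_preset_number):
    """Merge qualifying objects across all sample accounts, de-duplicate
    while preserving first-seen order, then keep the first
    `second_preset_number` of them."""
    seen, merged = set(), []
    for objs in per_account_objects:
        for obj in objs:
            if obj not in seen:
                seen.add(obj)
                merged.append(obj)
    return merged[:second_preset_number]

# Qualifying objects from three sample accounts; "b" and "a" repeat.
first_set = build_object_set([["a", "b"], ["b", "c"], ["a", "d"]], 3)
assert first_set == ["a", "b", "c"]
```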
In one possible implementation manner, the matching degree detection model includes an account feature extraction module, an object feature extraction module, and a matching degree detection module;
the determining, by using a matching degree detection model, a first matching degree score of the account attribute information and the first object attribute information and a second matching degree score of the account attribute information and the second object attribute information includes:
extracting the characteristics of the account attribute information to obtain corresponding account characteristic information;
respectively extracting features of the first object attribute information and the second object attribute information to obtain corresponding first object feature information and second object feature information;
and carrying out matching degree detection on the account characteristic information and the first object characteristic information to obtain a first matching degree score, and carrying out matching degree detection on the account characteristic information and the second object characteristic information to obtain a second matching degree score.
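A minimal two-branch sketch of this structure follows. The patent states the model may be a multi-layer perceptron; here each module is reduced to a toy linear map with a dot-product matcher purely for illustration:

```python
def extract_account_features(account_attrs):
    # account feature extraction module (toy: scale raw attribute values)
    return [v / 10.0 for v in account_attrs]

def extract_object_features(object_attrs):
    # object feature extraction module (toy: same scaling)
    return [v / 10.0 for v in object_attrs]

def matching_score(account_feat, object_feat):
    # matching degree detection module (toy: dot product of the features)
    return sum(a * o for a, o in zip(account_feat, object_feat))

acc = extract_account_features([2.0, 4.0])
d1 = matching_score(acc, extract_object_features([1.0, 1.0]))
d2 = matching_score(acc, extract_object_features([5.0, 5.0]))
assert d2 > d1  # the second object matches this account better
```

The split into three modules is what later allows account and object features to be precomputed and cached, with only the cheap matching module run at request time.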
In one possible implementation, the matching degree detection model is a multi-layer perceptron.
In one possible implementation, the object is video or audio.
In a possible implementation manner, the matching degree detection model includes an account feature extraction module, an object feature extraction module, and a matching degree detection module, and after determining the trained matching degree detection model, the method further includes:
respectively inputting the account attribute information of a plurality of accounts into the account feature extraction module to obtain the account feature information of each account;
respectively inputting the object attribute information of a plurality of objects into the object feature extraction module to obtain the object feature information of each object;
when an object display request corresponding to a target account is received, inputting the account characteristic information of the target account and the object characteristic information of each object to be displayed into the matching degree detection module together to obtain the matching degree score of each object to be displayed and the target account;
and performing object display on the target account according to the matching degree score of each object to be displayed and the target account.
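The serving flow above (cache features offline, score candidates on request, display by descending score) can be sketched as follows, again with a dot product standing in for the matching degree detection module:

```python
def matching_score(account_feat, object_feat):
    # matching degree detection module (toy stand-in: dot product)
    return sum(a * o for a, o in zip(account_feat, object_feat))

def rank_objects_for_account(account_feat, object_feats):
    """Score each cached candidate against the target account's cached
    features; return object ids in descending matching-score order."""
    return [oid for oid, _ in sorted(
        object_feats.items(),
        key=lambda kv: -matching_score(account_feat, kv[1]))]

# Cached features for the target account and two candidate objects.
ranking = rank_objects_for_account(
    [0.2, 0.4], {"v1": [0.1, 0.1], "v2": [0.5, 0.5]})
assert ranking == ["v2", "v1"]
```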
In a second aspect, an apparatus for training a matching degree detection model is provided, the apparatus comprising:
the acquisition module is used for acquiring a sample low-activity account and a sample high-activity account with the same account attribute information; acquiring first object attribute information of a history display object corresponding to the sample low-activity account and second object attribute information of the history display object corresponding to the sample high-activity account;
the detection module is used for determining a first matching degree score of the account attribute information and the first object attribute information and a second matching degree score of the account attribute information and the second object attribute information by using a matching degree detection model;
the training module is used for adjusting parameters of the matching degree detection model according to the difference value of the second matching degree score and the first matching degree score; and if the preset training end condition is met, determining the matching degree detection model after parameter adjustment as the trained matching degree detection model.
In a possible implementation manner, the training module is further configured to: if the preset training end condition is not met, obtain another sample low-activity account and another sample high-activity account with the same account attribute information and input them into the matching degree detection model, and again adjust the parameter-adjusted model according to the difference between the second matching degree score and the first matching degree score, until the preset training end condition is met and the parameter-adjusted matching degree detection model is determined to be the trained matching degree detection model.
In one possible implementation, the training module is configured to:
and adjusting parameters of the matching degree detection model according to the difference value of the second matching degree score and the first matching degree score, so that the difference value of the second matching degree score and the first matching degree score has an increasing trend.
In one possible implementation, the training module is configured to:
determining a Loss value based on the loss function Loss = max(d1 - d2 + margin, 0), wherein d2 is the second matching degree score, d1 is the first matching degree score, and margin is a preset positive number;
and adjusting parameters of the matching degree detection model based on the loss value so that the loss value has a decreasing trend.
In a possible implementation manner, the obtaining module is configured to:
determining a first object set according to a first preset number of objects displayed last for the sample low-activity account, and determining a second object set according to a first preset number of objects displayed last for the sample high-activity account;
acquiring object pairs in the first object set and the second object set, wherein the object pairs comprise a first object and a second object which belong to different object sets;
and acquiring first object attribute information of a first object in the object pair and second object attribute information of a second object in the object pair.
In a possible implementation manner, the obtaining module is configured to:
and obtaining objects meeting the negative feedback condition of the user from the objects with the first preset number displayed for the sample low-activity accounts last to obtain a first object set, and obtaining objects meeting the positive feedback condition of the user from the objects with the first preset number displayed for the sample high-activity accounts last to obtain a second object set.
In a possible implementation manner, the user positive feedback condition comprises that the sample high-activity account performs a praise operation on the object, or that the proportion of the sample high-activity account's playing duration of the object to the object's complete duration is greater than a first proportion threshold;
the user negative feedback condition comprises that the sample low-activity account does not perform the praise operation on the object and that the proportion of the sample low-activity account's playing duration of the object to the object's complete duration is less than a second proportion threshold;
wherein the second proportion threshold is less than the first proportion threshold.
In a possible implementation manner, the obtained sample low-activity account and sample high-activity account are a plurality of sample low-activity accounts and a plurality of sample high-activity accounts, all having the same account attribute information;
the obtaining module is configured to:
for each sample low-activity account, obtaining the objects that meet the user negative feedback condition from the first preset number of objects last displayed to that account; merging and de-duplicating the qualifying objects across all sample low-activity accounts; and taking a second preset number of objects from the merged, de-duplicated result to form the first object set;
for each sample high-activity account, obtaining the objects that meet the user positive feedback condition from the first preset number of objects last displayed to that account; merging and de-duplicating the qualifying objects across all sample high-activity accounts; and taking a second preset number of objects from the merged, de-duplicated result to form the second object set.
In one possible implementation manner, the matching degree detection model includes an account feature extraction module, an object feature extraction module, and a matching degree detection module;
the detection module is configured to:
extracting the characteristics of the account attribute information to obtain corresponding account characteristic information;
respectively extracting features of the first object attribute information and the second object attribute information to obtain corresponding first object feature information and second object feature information;
and carrying out matching degree detection on the account characteristic information and the first object characteristic information to obtain a first matching degree score, and carrying out matching degree detection on the account characteristic information and the second object characteristic information to obtain a second matching degree score.
In one possible implementation, the matching degree detection model is a multi-layer perceptron.
In one possible implementation, the object is video or audio.
In a possible implementation manner, the matching degree detection model includes an account feature extraction module, an object feature extraction module, and a matching degree detection module, and the apparatus further includes a presentation module configured to:
respectively inputting the account attribute information of a plurality of accounts into the account feature extraction module to obtain the account feature information of each account;
respectively inputting the object attribute information of a plurality of objects into the object feature extraction module to obtain the object feature information of each object;
when an object display request corresponding to a target account is received, inputting the account characteristic information of the target account and the object characteristic information of each object to be displayed into the matching degree detection module together to obtain the matching degree score of each object to be displayed and the target account;
and performing object display on the target account according to the matching degree score of each object to be displayed and the target account.
In a third aspect, a computer device is provided, where the computer device includes a processor and a memory, where the memory stores at least one instruction, and the at least one instruction is loaded and executed by the processor to implement the operations performed by the method for training a matching degree detection model according to the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, and the at least one instruction is loaded and executed by a processor to implement the operations performed by the method for training a matching degree detection model according to the first aspect.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
in the embodiment of the application, an account enters a low-activity state after browsing a first object with first object attribute information, the account is in a high-activity state after browsing a second object with second object attribute information, and the two accounts have the same account attribute information and can be considered to be the same account approximately. The purpose of the matching degree detection model is to accurately push an object (video and the like) to a user, the purpose of accurately pushing the object is to improve the activity of the user, and the praise operation is only one factor reflecting the activity of the user, so that the model is trained directly based on the activity of the user, and the model is trained according to the praise operation, so that the accuracy of the output of the model is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic diagram of a design concept of a solution provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a design concept of a solution provided by an embodiment of the present application;
FIG. 3 is a flowchart of a method for training a match detection model according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of data processing of a training process provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a training objective provided by an embodiment of the present application;
FIG. 6 is a flowchart of a method for object display according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an apparatus for training a matching degree detection model according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The method for training the matching degree detection model provided by the embodiments of the present application may be executed by a server. The server may be a background server of an application program with an object display function, such as an audio/video application or a consumer application; the audio/video application may be a playback application or a comprehensive application with both playback and karaoke functions. The server may be a single server or a server group. If it is a single server, it is responsible for all processing in the following scheme; if it is a server group, different servers in the group may be responsible for different parts of the processing, with the specific allocation set arbitrarily by technicians according to actual needs, which is not repeated here.
The object referred to in the embodiments of the present application may be any content that can be presented to a user, such as audio, video, text (e.g., news or information feeds), commodities, merchants, and the like, depending on the application.
The embodiments of the present application describe the scheme using an audio/video application as an example, with video as the corresponding object; other cases are similar and are not repeated. The audio/video application has at least an audio/video playback function and may also have a karaoke function and the like.
The server may include components such as a processor, memory, and communication components. The processor is respectively connected with the memory and the communication component.
The processor may be a Central Processing Unit (CPU). The processor may be configured to collect sample accounts, search for sample low-activity accounts and sample high-activity accounts having the same account attribute information, obtain object attribute information of objects historically presented to the sample accounts, perform model processing and perform model training based on processing results, and so on.
The Memory may include a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM (Compact Disc Read-Only Memory), a magnetic disk, an optical data storage device, and the like. The memory can be used for storing data, generated intermediate data, generated result data and the like which need to be prestored in the training and using processes of the matching degree detection model, such as object attribute information of a large number of objects, account attribute information of a large number of accounts, model parameters before and after the matching degree detection model is adjusted, and a matching degree score output by the matching degree detection model.
The communication means may be a wired network connector, a WiFi (Wireless Fidelity) module, a bluetooth module, a cellular network communication module, etc. The communication component may be used for data transmission with other devices, and the other devices may be other servers, terminals, and the like. For example, the communication component may receive an object exposure request sent by the terminal.
Some key terms are presented below.
A low-activity account is an account in a low-activity state; an account enters the low-activity state when it has not logged in for more than a preset duration (e.g., one week). A sample low-activity account is a low-activity account used as a training sample. The accounts here are user accounts, so "account" and "user" are used interchangeably below, and a low-activity account may also be called a low-activity user.
The high-activity account is in a high-activity state, and the time interval of the last login of the account does not exceed the preset time length, so that the account is in the high-activity state. The sample high-activity account is used as a training sample high-activity account. The account here is a user account, so the high-activity account can also be called a high-activity user.
And (4) counterworkers, namely high-activity users and low-activity users with the same attribute.
The design idea of this solution is introduced below. As shown in fig. 1, user attributes, videos, and user activity (high activity or low activity) are associated, with these three parameters taken as nodes. First, users with different attributes clearly differ in churn rate (the proportion of users entering a low-activity state), so there is a correlation edge between user attributes and user activity. Meanwhile, different user attributes lead to different recommended content, so there is a correlation edge between user attributes and videos. The videos a user sees also influence whether the user subsequently enters a low-activity state, so there is a correlation edge between videos and user activity.
In addition, a machine learning model can be used to calculate a matching degree score between user attributes and video attributes, and for the same user there is a positive correlation between user activity and the matching degree score. A high-activity user and a low-activity user with the same attributes (i.e., a mirror pair) can then be selected; the two can be regarded approximately as two different states of the same user, and the difference can be attributed to having watched different videos. Next, as shown in fig. 2, a matching degree score can be calculated by a machine learning model from the low-activity user's attributes and the video attributes of the videos that user recently watched, and likewise a matching degree score can be calculated from the high-activity user's attributes and the video attributes of the videos that user recently watched. When training the machine learning model, the difference between the latter score and the former score is expected to be as large as possible.
Fig. 3 is a flowchart of a method for training a matching degree detection model according to an embodiment of the present disclosure. Referring to fig. 3, the embodiment includes:
301, sample low-activity accounts and sample high-activity accounts having the same account attribute information are obtained.
The account attribute information may be the specific information of a plurality of preset attribute items, where the preset attribute items are attribute items with guiding value for video recommendation, such as gender, age, favorite type, and active period; these attribute items may be preset by a technician. The account attribute information may be stored in vector form and denoted u = (u1, u2, ..., un).
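As a hedged illustration of the vector u = (u1, u2, ..., un), account attribute information could be encoded along these lines; the attribute items and value mappings below are assumptions for illustration only, not values fixed by this disclosure:

```python
# Hypothetical encoding of preset account attribute items into a vector
# u = (u1, u2, ..., un); attribute names and mappings are assumptions.
GENDER = {"female": 0.0, "male": 1.0}
FAVORITE_TYPE = {"music": 0.0, "dance": 1.0, "comedy": 2.0}

def encode_account(attrs):
    """Map an account's preset attribute items to the vector u."""
    return [
        GENDER[attrs["gender"]],
        attrs["age"] / 100.0,          # age, normalized
        FAVORITE_TYPE[attrs["favorite_type"]],
        attrs["active_hour"] / 24.0,   # typical activity period (hour of day)
    ]

u = encode_account({"gender": "female", "age": 35,
                    "favorite_type": "music", "active_hour": 20})
```

Encoding both accounts of a mirror pair this way yields identical vectors, which is what "same account attribute information" means in practice.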
In practice, there are many possible specific modes of operation; one possible mode is described here.
First, a low-activity account is randomly selected as a sample, namely the sample low-activity account. Then the account attribute information of the preset attribute items of the sample low-activity account is obtained, for example, the gender attribute item is "female" and the age attribute item is "35". Further, a high-activity account having the same account attribute information is searched for among the other accounts as a sample, namely the sample high-activity account.
Here, only one sample low-activity account and one sample high-activity account may be acquired, and the two accounts have the same account attribute information. Alternatively, multiple sample low-activity accounts and multiple sample high-activity accounts may also be obtained.
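The sampling described above can be sketched as follows; the account record structure (an `attrs` tuple plus a `low_activity` flag) is an illustrative assumption:

```python
# Sketch of collecting a mirror pair: a random sample low-activity account
# plus a high-activity account with identical account attribute information.
import random

def find_mirror_pair(accounts):
    """accounts: dicts with 'attrs' (hashable) and 'low_activity' (bool)."""
    sample_low = random.choice([a for a in accounts if a["low_activity"]])
    for a in accounts:
        if not a["low_activity"] and a["attrs"] == sample_low["attrs"]:
            return sample_low, a          # found a sample high-activity account
    return sample_low, None               # no mirror exists for this sample
```

Repeating the call yields the multiple mirror pairs mentioned in the alternative above.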
And 302, acquiring first object attribute information of the history display object corresponding to the sample low-activity account and second object attribute information of the history display object corresponding to the sample high-activity account.
The object attribute information may be the specific information of a plurality of preset attribute items, where the preset attribute items are attribute items with guiding value for video recommendation, such as video type, video duration, view count, and like count; these attribute items may be preset by a technician.
In implementation, a video browsed by the sample low-activity account before it entered the low-activity state may be obtained, and then the object attribute information of that video's preset attribute items, namely the first object attribute information, may be obtained. A video recently browsed by the sample high-activity account may likewise be obtained, along with the object attribute information of its preset attribute items, namely the second object attribute information.
Optionally, a plurality of videos browsed before the low-activity account entered the low-activity state may be obtained to form a first video set, and a plurality of videos recently browsed by the high-activity account may be obtained to form a second video set. A video pair is then obtained from the first video set and the second video set, the pair comprising a first video and a second video belonging to different video sets, and the first object attribute information of the first video in the pair and the second object attribute information of the second video in the pair are obtained.
A preset number (denoted N) of objects last shown to the sample low-activity account may be obtained to form a first object set, and the N objects last shown to the sample high-activity account may be obtained to form a second object set. It should be noted that if only one sample low-activity account and one sample high-activity account with the same account attribute information were determined in step 301, it suffices to obtain the N objects last shown to the sample low-activity account as the first object set and the N objects last shown to the sample high-activity account as the second object set. If multiple sample low-activity accounts and multiple sample high-activity accounts with the same account attribute information were determined in step 301, the N objects last shown to each sample low-activity account may be obtained, the N objects corresponding to all sample low-activity accounts are de-duplicated, and all objects remaining after de-duplication form the first object set; likewise, the N objects last shown to each sample high-activity account are obtained, de-duplicated across all sample high-activity accounts, and all objects remaining after de-duplication form the second object set.
After determining the first video set and the second video set, videos in the first video set and the second video set may be randomly paired to obtain a plurality of sample pairs, each sample pair including two videos, where one video is from the first video set and the other video is from the second video set. Each sample pair may be used in a model training process.
Or another pairing mode can be adopted, the videos in the first video set and the second video set are respectively arranged in time sequence, and then the videos in the same sequencing position in the first video set and the videos in the same sequencing position in the second video set are paired to obtain a plurality of sample pairs.
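The two pairing strategies above can be sketched as follows; representing each video as a (timestamp, video_id) tuple is an illustrative assumption:

```python
# Random cross-set pairing and position-wise pairing after sorting by time.
import random

def random_pairs(first_set, second_set, k):
    """Randomly pair videos across the two sets into k sample pairs."""
    return [(random.choice(first_set), random.choice(second_set))
            for _ in range(k)]

def positional_pairs(first_set, second_set):
    """Sort each set chronologically, then pair same-rank videos."""
    return list(zip(sorted(first_set), sorted(second_set)))
```

Either strategy yields sample pairs with one video from each set, as required for training.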
Further, a sample pair may be selected arbitrarily, giving two videos, namely a first video and a second video. The object attribute information of the first video's preset attribute items, namely the first object attribute information, is obtained; it may be stored in vector form and denoted x1 = (x11, x12, ..., x1m). The object attribute information of the second video's preset attribute items, namely the second object attribute information, is obtained; it may be stored in vector form and denoted x2 = (x21, x22, ..., x2m).
303, determining a first matching score of the account attribute information and the first object attribute information and a second matching score of the account attribute information and the second object attribute information by using a matching degree detection model.
The matching degree detection model may be a multi-layer perceptron, and certainly, models of other algorithms, such as a convolutional neural network, may also be adopted. And the multilayer perceptron is adopted, so that higher processing efficiency can be achieved.
The matching degree detection model comprises an account feature extraction module, an object feature extraction module and a matching degree detection module. The account characteristic extraction module is used for extracting the characteristics of the account attribute information, the object characteristic extraction module is used for extracting the characteristics of the object attribute information, and the matching degree detection module is used for determining the matching degree score according to the account characteristic information and the object characteristic information.
In implementation, referring to the processing procedure shown in fig. 4, the account feature extraction module performs feature extraction on the account attribute information to obtain the corresponding account feature information. The object feature extraction module performs feature extraction on the first object attribute information and the second object attribute information respectively, obtaining the corresponding first object feature information and second object feature information. The account feature information and the first object feature information are input into the matching degree detection module to obtain a first matching degree score, and the account feature information and the second object feature information are input into the matching degree detection module to obtain a second matching degree score. That is, as shown in fig. 4, u = (u1, u2, ..., un) is input into the account feature extraction module to obtain tu; x1 = (x11, x12, ..., x1m) is input into the object feature extraction module to obtain tx1; x2 = (x21, x22, ..., x2m) is input into the object feature extraction module to obtain tx2. Then tx1 and tu are input into the matching degree detection module to obtain the first matching degree score d1, and tx2 and tu are input into the matching degree detection module to obtain the second matching degree score d2.
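A minimal sketch of this forward pass follows. The single-layer tanh extractors, the inner-product matching head, and all dimensions are illustrative assumptions; the disclosure only requires that the modules produce tu, tx1, tx2 and the scores d1, d2:

```python
# Pure-Python sketch of the fig. 4 forward pass: an account feature
# extractor gives tu, a shared object feature extractor gives tx1 and tx2,
# and a matching-degree head scores (tu, tx1) and (tu, tx2).
import math, random

random.seed(0)
n, m, h = 4, 6, 8                      # account dim, object dim, feature dim

def make_weights(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

Wu, Wx = make_weights(n, h), make_weights(m, h)   # extractor weights

def extract(w, v):
    """One perceptron layer with tanh activation."""
    return [math.tanh(sum(vi * w[i][j] for i, vi in enumerate(v)))
            for j in range(len(w[0]))]

def match_score(tu, tx):
    return sum(a * b for a, b in zip(tu, tx))     # inner-product matching head

u = [random.gauss(0, 1) for _ in range(n)]        # account attribute vector
x1 = [random.gauss(0, 1) for _ in range(m)]       # first object attributes
x2 = [random.gauss(0, 1) for _ in range(m)]       # second object attributes

tu, tx1, tx2 = extract(Wu, u), extract(Wx, x1), extract(Wx, x2)
d1, d2 = match_score(tu, tx1), match_score(tu, tx2)
```

In a real multilayer perceptron each module would stack several such layers, but the data flow is the same.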
And 304, adjusting parameters of the matching degree detection model according to the difference value of the second matching degree score and the first matching degree score.
In implementation, based on the idea introduced above, the matching degree detection model is parameter-adjusted with maximizing the difference between the second matching degree score and the first matching degree score as the training target; that is, parameters are adjusted according to the difference between the two scores so that the difference tends to increase. Because training takes maximizing (d2 - d1) as its target, the trained matching degree detection model gives the second object attribute information a higher matching degree score with the account attribute information and gives the first object attribute information a lower one, as shown in fig. 5, where a shorter distance between nodes can be taken to indicate a higher matching degree score.
Here, the loss function may be designed as Loss = f(d2 - d1), where f is chosen so that the loss value is negatively correlated with the difference between the second matching degree score d2 and the first matching degree score d1; minimizing the loss during training then maximizes the difference. One usable loss function is given below.
The matching degree detection model is trained based on the loss function Loss = max(d1 - d2 + margin, 0), where margin is a preset positive number. During training, with minimizing the loss value as the goal, an adjustment value is calculated for each model parameter of the matching degree detection model, and each model parameter is adjusted based on its adjustment value to obtain the adjusted matching degree detection model. That is, the matching degree detection model is parameter-adjusted based on the loss value so that the loss value tends to decrease. The model parameters may include the model parameters in the account feature extraction module, the object feature extraction module, and the matching degree detection module.
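The loss Loss = max(d1 - d2 + margin, 0) is a standard margin ranking loss; it is zero exactly when the high-activity score d2 exceeds the low-activity score d1 by at least margin:

```python
# Margin ranking loss as given in the text; margin=1.0 is an assumed default.
def margin_loss(d1, d2, margin=1.0):
    return max(d1 - d2 + margin, 0.0)
```

Once a sample pair is separated by more than the margin, it contributes no gradient, so training focuses on pairs the model still scores incorrectly.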
And 305, if the preset training end condition is met, the parameter-adjusted matching degree detection model is determined as the trained matching degree detection model. If the preset training end condition is not met, another sample low-activity account and another sample high-activity account with the same account attribute information are obtained again and input into the matching degree detection model, and the parameter-adjusted model is adjusted again according to the difference between the second matching degree score and the first matching degree score, until the preset training end condition is met, at which point the parameter-adjusted matching degree detection model is determined as the trained matching degree detection model.
The training end condition may be that a preset number of parameter adjustments is reached, or that a loss value is smaller than a preset threshold, or the like.
After the parameter adjustment processing of steps 301-304, if the training end condition is not satisfied, the processing of steps 301-304 is executed in a loop until the training end condition is satisfied.
Optionally, another processing manner of the step 302 may be: and obtaining objects meeting the negative feedback condition of the user from the objects with the first preset number displayed for the sample low-activity accounts to obtain a first object set, and obtaining objects meeting the positive feedback condition of the user from the objects with the first preset number displayed for the sample high-activity accounts to obtain a second object set. Object pairs are obtained in the first object set and the second object set, wherein the object pairs comprise a first object and a second object belonging to different object sets. First object attribute information of a first object in the object pair and second object attribute information of a second object in the object pair are obtained.
The user positive feedback condition includes that the sample high-activity account has performed a like operation on the object, or that the ratio of the sample high-activity account's playing duration of the object to the object's complete duration is greater than a first ratio threshold. The user negative feedback condition includes that the sample low-activity account has not performed a like operation on the object and that the ratio of the sample low-activity account's playing duration of the object to the object's complete duration is less than a second ratio threshold. The second ratio threshold is less than the first ratio threshold.
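The two feedback conditions can be checked as below; the record fields and the 0.8 / 0.2 threshold values are assumptions for illustration, since the disclosure leaves the thresholds as preset values:

```python
# Illustrative check of the user feedback conditions; thresholds are assumed.
FIRST_RATIO, SECOND_RATIO = 0.8, 0.2   # second threshold < first threshold

def is_positive_feedback(rec):
    """Liked, or played beyond the first ratio threshold of the full duration."""
    return rec["liked"] or rec["play_sec"] / rec["full_sec"] > FIRST_RATIO

def is_negative_feedback(rec):
    """Not liked and played below the second ratio threshold."""
    return (not rec["liked"]
            and rec["play_sec"] / rec["full_sec"] < SECOND_RATIO)
```

Because the second threshold is below the first, objects with middling watch ratios satisfy neither condition and are simply excluded from both sets.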
If the obtained sample low-activity accounts and sample high-activity accounts with the same account attribute information are multiple sample low-activity accounts and multiple sample high-activity accounts with the same account attribute information, determining the first object set and the second object set by the following specific processing modes:
for each sample low-activity account, objects meeting the user negative feedback condition are obtained from the first preset number of objects last displayed for that account; the objects so obtained for all sample low-activity accounts are merged and de-duplicated, and a second preset number of objects is taken from the merged, de-duplicated objects to form the first object set. For each sample high-activity account, objects meeting the user positive feedback condition are obtained from the first preset number of objects last displayed for that account; the objects so obtained for all sample high-activity accounts are merged and de-duplicated, and a second preset number of objects is taken from the merged, de-duplicated objects to form the second object set.
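The filter-merge-deduplicate-sample procedure above can be sketched as one helper that builds either object set; the object field names are assumptions:

```python
# Build one object set from several sample accounts: filter each account's
# last-shown objects by a feedback condition, merge, de-duplicate by id,
# then take up to a second preset number of objects.
import random

def build_object_set(per_account_objects, keep_fn, second_preset_number):
    merged, seen = [], set()
    for objs in per_account_objects:       # last N objects of one account
        for obj in objs:
            if keep_fn(obj) and obj["id"] not in seen:
                seen.add(obj["id"])
                merged.append(obj)
    k = min(second_preset_number, len(merged))
    return random.sample(merged, k)
```

Calling it with the negative feedback condition over the low-activity accounts yields the first object set, and with the positive feedback condition over the high-activity accounts yields the second.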
The whole recommendation system may include a number of working links such as data analysis, recommendation pool construction, recall, ranking, and re-ranking. The number of videos in the database can reach the scale of hundreds of millions; through data analysis and recommendation pool construction, a recommendation pool of roughly 1-3 million videos can be built. The recommendation pool is not specific to any particular account — videos to be displayed are screened based on video-related attributes. The recall link makes a preliminary selection from the recommendation pool, based on account attribute information and object attribute information, yielding roughly 3,000-5,000 videos. The ranking link further sorts and selects among the recalled videos, based on account attribute information and object attribute information, yielding roughly 500 videos. The re-ranking link further sorts and selects among the ranked videos, again based on account attribute information and object attribute information, yielding roughly 10-50 videos.
The recall, ranking, and re-ranking links are all processed based on machine learning models, and each link can train a different model. The matching degree detection model trained by the above process, when implemented as a multilayer perceptron, can be used in the recall link. Of course, models of other algorithms may also be used in other links.
The embodiment of the present application further provides an object display processing method, which uses a trained matching degree detection model to perform object display processing, and as shown in fig. 6, the corresponding processing includes the following steps:
601, inputting the account attribute information of a plurality of accounts into the account feature extraction module respectively to obtain the account feature information of each account.
And 602, inputting the object attribute information of the plurality of objects into the object feature extraction module respectively to obtain the object feature information of each object.
In the two steps, each account and each video in the database can be processed, so that each account in the database correspondingly stores account characteristic information, and each video correspondingly stores object characteristic information. Furthermore, the above two steps can be executed at a certain period, so that the account characteristic information of each account and the object characteristic information of each video stored in the database can be periodically updated.
603, when an object display request corresponding to the target account is received, inputting the account characteristic information of the target account and the object characteristic information of each object to be displayed into the matching degree detection module together, so as to obtain the matching degree score of each object to be displayed and the target account.
When the user operates the terminal's application and enters the corresponding display interface, the terminal is triggered to send an object display request to the server. For example, a comprehensive application with audio/video playing and karaoke functions may provide a recommendation tab; when the user taps the recommendation tab, the recommendation interface is displayed and the terminal is triggered to send an object display request to the server. The object display request may carry the account identifier of the target account. The server may then determine the matching degree scores between the videos in the recommendation pool and the target account.
And 604, performing object display for the target account according to the matching degree score of each object to be displayed and the target account.
Based on the matching degree scores between the videos in the recommendation pool and the target account, the server selects a preset number of videos with the highest scores as the result of the recall link. The ranking and re-ranking links are then processed to obtain the final videos to be displayed and the corresponding ranking information. The videos to be displayed and the ranking information may then be sent to the terminal, and the terminal can display each video in the display interface according to the ranking information for the user to browse.
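The recall selection can be sketched as a top-k scan over the recommendation pool; the `score_fn` callback and the candidate structure are assumptions standing in for the matching degree detection module:

```python
# Score every candidate video's object feature against the target account's
# feature and keep the preset number with the highest matching degree scores.
import heapq

def recall(candidates, account_feat, score_fn, k):
    """candidates: (video_id, object_feat) pairs; returns top-k video ids."""
    scored = ((score_fn(account_feat, feat), vid) for vid, feat in candidates)
    return [vid for _, vid in heapq.nlargest(k, scored)]
```

Using a heap keeps the selection at O(P log k) for a pool of P videos, which matters when the pool holds millions of candidates.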
The experimental result shows that the user retention rate can be improved by 0.1-0.2%, the per-capita playing frequency can be improved by 1.2%, the per-capita playing time can be improved by 0.9%, and the effective playing permeability can be improved by 0.06% by training the matching degree detection model by using the method.
In the embodiment of the application, one account enters a low-activity state after browsing a first object with the first object attribute information, while another account remains in a high-activity state after browsing a second object with the second object attribute information; since the two accounts have the same account attribute information, they can be regarded approximately as the same account. The purpose of the matching degree detection model is to push objects (videos and the like) to users accurately, and the purpose of accurate pushing is to improve user activity; a like operation is only one factor reflecting user activity. Therefore, compared with training the model according to like operations, training the model directly based on user activity improves the accuracy of the model's output.
All the above optional technical solutions may be combined arbitrarily to form the optional embodiments of the present disclosure, and are not described herein again.
The embodiment of the present application further provides a device for training a matching degree detection model, which can be applied to the server in the foregoing embodiment, as shown in fig. 7, the device includes:
an obtaining module 710, configured to obtain a sample low-activity account and a sample high-activity account with the same account attribute information; acquiring first object attribute information of a history display object corresponding to the sample low-activity account and second object attribute information of the history display object corresponding to the sample high-activity account;
a detection module 720, configured to determine, using a matching degree detection model, a first matching degree score of the account attribute information and the first object attribute information, and a second matching degree score of the account attribute information and the second object attribute information;
the training module 730 is configured to perform parameter adjustment on the matching degree detection model according to a difference between the second matching degree score and the first matching degree score; and if the preset training end condition is met, determining the matching degree detection model after parameter adjustment as the trained matching degree detection model.
In a possible implementation manner, the training module 730 is further configured to: if the preset training end condition is not met, obtain another sample low-activity account and another sample high-activity account with the same account attribute information again and input them into the matching degree detection model, and perform parameter adjustment on the parameter-adjusted matching degree detection model according to the difference between the second matching degree score and the first matching degree score, until the preset training end condition is met, and then determine the parameter-adjusted matching degree detection model as the trained matching degree detection model.
In one possible implementation manner, the training module 730 is configured to:
and adjusting parameters of the matching degree detection model according to the difference value of the second matching degree score and the first matching degree score, so that the difference value of the second matching degree score and the first matching degree score has an increasing trend.
In one possible implementation manner, the training module 730 is configured to:
determining a loss value based on the loss function Loss = max(d1 - d2 + margin, 0), wherein d2 is the second matching degree score, d1 is the first matching degree score, and margin is a preset positive number;
and adjusting parameters of the matching degree detection model based on the loss value so that the loss value has a decreasing trend.
In a possible implementation manner, the obtaining module 710 is configured to:
determining a first object set according to a first preset number of objects displayed last for the sample low-activity account, and determining a second object set according to a first preset number of objects displayed last for the sample high-activity account;
acquiring object pairs in the first object set and the second object set, wherein the object pairs comprise a first object and a second object which belong to different object sets;
and acquiring first object attribute information of a first object in the object pair and second object attribute information of a second object in the object pair.
In a possible implementation manner, the obtaining module 710 is configured to:
obtaining objects meeting the user negative feedback condition from the first preset number of objects last displayed for the sample low-activity account to obtain a first object set, and obtaining objects meeting the user positive feedback condition from the first preset number of objects last displayed for the sample high-activity account to obtain a second object set.
In a possible implementation manner, the user positive feedback condition includes that the sample high-activity account performs a praise operation on the object, or that a proportion of a playing time length of the sample high-activity account on the object in a complete time length of the object is greater than a first proportion threshold;
the user negative feedback condition comprises that the sample low-activity account has not performed a like operation on the object, and that the ratio of the sample low-activity account's playing duration of the object to the object's complete duration is less than a second ratio threshold;
wherein the second duty ratio threshold is less than the first duty ratio threshold.
In a possible implementation manner, the obtained sample low-activity accounts and sample high-activity accounts with the same account attribute information are a plurality of sample low-activity accounts and a plurality of sample high-activity accounts with the same account attribute information;
the obtaining module 710 is configured to:
for each sample low-activity account, obtaining objects meeting the user negative feedback condition from the first preset number of objects last displayed for that account, merging and de-duplicating the objects meeting the user negative feedback condition obtained for all sample low-activity accounts, and taking a second preset number of objects from the merged, de-duplicated objects to form a first object set;
for each sample high-activity account, obtaining objects meeting the user positive feedback condition from the first preset number of objects last displayed for that account, merging and de-duplicating the objects meeting the user positive feedback condition obtained for all sample high-activity accounts, and taking a second preset number of objects from the merged, de-duplicated objects to form a second object set.
In one possible implementation manner, the matching degree detection model includes an account feature extraction module, an object feature extraction module, and a matching degree detection module;
the detecting module 720 is configured to:
extracting the characteristics of the account attribute information to obtain corresponding account characteristic information;
respectively extracting features of the first object attribute information and the second object attribute information to obtain corresponding first object feature information and second object feature information;
and carrying out matching degree detection on the account characteristic information and the first object characteristic information to obtain a first matching degree score, and carrying out matching degree detection on the account characteristic information and the second object characteristic information to obtain a second matching degree score.
In one possible implementation, the matching degree detection model is a multi-layer perceptron.
In one possible implementation, the object is video or audio.
In a possible implementation manner, the matching degree detection model includes an account feature extraction module, an object feature extraction module, and a matching degree detection module, and the apparatus further includes a presentation module configured to:
respectively inputting the account attribute information of a plurality of accounts into the account feature extraction module to obtain the account feature information of each account;
respectively inputting the object attribute information of a plurality of objects into the object feature extraction module to obtain the object feature information of each object;
when an object display request corresponding to a target account is received, inputting the account characteristic information of the target account and the object characteristic information of each object to be displayed into the matching degree detection module together to obtain the matching degree score of each object to be displayed and the target account;
and performing object display on the target account according to the matching degree score of each object to be displayed and the target account.
In the embodiment of the application, one account enters a low-activity state after browsing a first object with the first object attribute information, while another account remains in a high-activity state after browsing a second object with the second object attribute information; since the two accounts have the same account attribute information, they can be regarded approximately as the same account. The purpose of the matching degree detection model is to push objects (videos and the like) to users accurately, and the purpose of accurate pushing is to improve user activity; a like operation is only one factor reflecting user activity. Therefore, compared with training the model according to like operations, training the model directly based on user activity improves the accuracy of the model's output.
It should be noted that, when the apparatus for training a matching degree detection model provided in the above embodiment trains the model, the division into the functional modules described above is merely illustrative; in practical applications, the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for training a matching degree detection model and the method for training a matching degree detection model provided in the above embodiments belong to the same concept; their specific implementation processes are described in detail in the method embodiments and are not repeated here.
Fig. 8 is a schematic structural diagram of a server according to an embodiment of the present application. The server 800 may vary considerably in configuration or performance, and may include one or more processors 801 and one or more memories 802, where the memory 802 stores at least one instruction that is loaded and executed by the processor 801 to implement the methods provided by the foregoing method embodiments. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may further include other components for implementing device functions, which are not described here.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as a memory including instructions executable by a processor in a terminal to perform the method of training a matching degree detection model in the above embodiments. The computer-readable storage medium may be non-transitory. For example, the computer-readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (14)

1. A method of training a matching degree detection model, the method comprising:
acquiring a sample low-activity account and a sample high-activity account with the same account attribute information;
acquiring first object attribute information of a history display object corresponding to the sample low-activity account and second object attribute information of the history display object corresponding to the sample high-activity account;
determining a first matching degree score of the account attribute information and the first object attribute information and a second matching degree score of the account attribute information and the second object attribute information by using a matching degree detection model;
adjusting parameters of the matching degree detection model according to the difference between the second matching degree score and the first matching degree score;
and if the preset training end condition is met, determining the matching degree detection model after parameter adjustment as the trained matching degree detection model.
2. The method according to claim 1, wherein, if the preset training end condition is not met, other sample low-activity accounts and other sample high-activity accounts having the same account attribute information are input into the matching degree detection model again, and parameters of the parameter-adjusted matching degree detection model continue to be adjusted according to the difference between the second matching degree score and the first matching degree score, until the preset training end condition is met and the parameter-adjusted matching degree detection model is determined as the trained matching degree detection model.
3. The method of claim 1, wherein the adjusting parameters of the matching degree detection model according to the difference between the second matching degree score and the first matching degree score comprises:
adjusting parameters of the matching degree detection model according to the difference between the second matching degree score and the first matching degree score, so that the difference between the second matching degree score and the first matching degree score has an increasing trend.
4. The method of claim 3, wherein the adjusting parameters of the matching degree detection model according to the difference between the second matching degree score and the first matching degree score, so that the difference has an increasing trend, comprises:
determining a loss value Loss based on a loss function Loss = max(d1 - d2 + margin, 0), wherein d2 is the second matching degree score, d1 is the first matching degree score, and margin is a preset positive number;
and adjusting parameters of the matching degree detection model based on the loss value, so that the loss value has a decreasing trend.
5. The method according to claim 1, wherein the obtaining of the first object attribute information of the history display object corresponding to the sample low-activity account and the second object attribute information of the history display object corresponding to the sample high-activity account comprises:
determining a first object set according to a first preset number of objects last displayed for the sample low-activity account, and determining a second object set according to a first preset number of objects last displayed for the sample high-activity account;
acquiring object pairs in the first object set and the second object set, wherein the object pairs comprise a first object and a second object which belong to different object sets;
and acquiring first object attribute information of a first object in the object pair and second object attribute information of a second object in the object pair.
6. The method of claim 5, wherein the determining a first object set according to a first preset number of objects last displayed for the sample low-activity account, and determining a second object set according to a first preset number of objects last displayed for the sample high-activity account, comprises:
acquiring objects meeting a user negative feedback condition from the first preset number of objects last displayed for the sample low-activity account to obtain the first object set, and acquiring objects meeting a user positive feedback condition from the first preset number of objects last displayed for the sample high-activity account to obtain the second object set.
7. The method of claim 6, wherein the user positive feedback condition comprises that the sample high-activity account has performed a praise operation on the object, or that the proportion of the playing duration of the object by the sample high-activity account in the complete duration of the object is greater than a first proportion threshold;
the user negative feedback condition comprises that the sample low-activity account has not performed a praise operation on the object, and that the proportion of the playing duration of the object by the sample low-activity account in the complete duration of the object is less than a second proportion threshold;
wherein the second proportion threshold is less than the first proportion threshold.
8. The method of claim 6, wherein the acquired sample low-activity accounts and sample high-activity accounts having the same account attribute information are a plurality of sample low-activity accounts and a plurality of sample high-activity accounts having the same account attribute information;
the acquiring objects meeting the user negative feedback condition from the first preset number of objects last displayed for the sample low-activity account to obtain the first object set, and acquiring objects meeting the user positive feedback condition from the first preset number of objects last displayed for the sample high-activity account to obtain the second object set, comprises:
for each sample low-activity account, acquiring objects meeting the user negative feedback condition from the first preset number of objects last displayed for the sample low-activity account, merging and de-duplicating the objects meeting the user negative feedback condition acquired for all sample low-activity accounts, and acquiring a second preset number of objects from the merged and de-duplicated objects to form the first object set;
and for each sample high-activity account, acquiring objects meeting the user positive feedback condition from the first preset number of objects last displayed for the sample high-activity account, merging and de-duplicating the objects meeting the user positive feedback condition acquired for all sample high-activity accounts, and acquiring a second preset number of objects from the merged and de-duplicated objects to form the second object set.
9. The method according to claim 1, wherein the matching degree detection model comprises an account feature extraction module, an object feature extraction module and a matching degree detection module;
the determining, by using a matching degree detection model, a first matching degree score of the account attribute information and the first object attribute information and a second matching degree score of the account attribute information and the second object attribute information includes:
extracting the characteristics of the account attribute information to obtain corresponding account characteristic information;
respectively extracting features of the first object attribute information and the second object attribute information to obtain corresponding first object feature information and second object feature information;
and carrying out matching degree detection on the account characteristic information and the first object characteristic information to obtain a first matching degree score, and carrying out matching degree detection on the account characteristic information and the second object characteristic information to obtain a second matching degree score.
10. The method of claim 1, wherein the matching degree detection model is a multi-layer perceptron.
11. The method of claim 1, wherein the object is video or audio.
12. The method according to any one of claims 1 to 11, wherein the matching degree detection model comprises an account feature extraction module, an object feature extraction module and a matching degree detection module, and after determining the trained matching degree detection model, the method further comprises:
respectively inputting the account attribute information of a plurality of accounts into the account feature extraction module to obtain the account feature information of each account;
respectively inputting the object attribute information of a plurality of objects into the object feature extraction module to obtain the object feature information of each object;
when an object display request corresponding to a target account is received, inputting the account characteristic information of the target account and the object characteristic information of each object to be displayed into the matching degree detection module together to obtain the matching degree score of each object to be displayed and the target account;
and performing object display on the target account according to the matching degree score of each object to be displayed and the target account.
13. A computer device, comprising a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to perform the operations performed by the method of training a matching degree detection model according to any one of claims 1 to 12.
14. A computer-readable storage medium, wherein at least one instruction is stored in the storage medium, and the instruction is loaded and executed by a processor to perform the operations performed by the method of training a matching degree detection model according to any one of claims 1 to 12.
CN202110613653.7A 2021-06-02 2021-06-02 Method, apparatus and storage medium for training matching degree detection model Pending CN113269262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110613653.7A CN113269262A (en) 2021-06-02 2021-06-02 Method, apparatus and storage medium for training matching degree detection model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110613653.7A CN113269262A (en) 2021-06-02 2021-06-02 Method, apparatus and storage medium for training matching degree detection model

Publications (1)

Publication Number Publication Date
CN113269262A true CN113269262A (en) 2021-08-17

Family

ID=77234032

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110613653.7A Pending CN113269262A (en) 2021-06-02 2021-06-02 Method, apparatus and storage medium for training matching degree detection model

Country Status (1)

Country Link
CN (1) CN113269262A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090006368A1 (en) * 2007-06-29 2009-01-01 Microsoft Corporation Automatic Video Recommendation
WO2018059176A1 (en) * 2016-09-27 2018-04-05 腾讯科技(深圳)有限公司 Method and apparatus for generating targeted label and storage medium
WO2019205795A1 (en) * 2018-04-26 2019-10-31 腾讯科技(深圳)有限公司 Interest recommendation method, computer device, and storage medium
CN110837607A (en) * 2019-11-14 2020-02-25 腾讯云计算(北京)有限责任公司 Interest point matching method and device, computer equipment and storage medium
CN111368192A (en) * 2020-03-03 2020-07-03 上海喜马拉雅科技有限公司 Information recommendation method, device, equipment and storage medium
CN111898019A (en) * 2019-05-06 2020-11-06 北京达佳互联信息技术有限公司 Information pushing method and device
CN111931592A (en) * 2020-07-16 2020-11-13 苏州科达科技股份有限公司 Object recognition method, device and storage medium
CN112417284A (en) * 2020-11-23 2021-02-26 北京三快在线科技有限公司 Method and device for pushing display information
CN112818241A (en) * 2021-02-20 2021-05-18 腾讯科技(深圳)有限公司 Content promotion method and device, computer equipment and storage medium
CN112861963A (en) * 2021-02-04 2021-05-28 北京三快在线科技有限公司 Method, device and storage medium for training entity feature extraction model


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YASHAR DELDJOO ET AL.: "Recommender Systems Leveraging Multimedia Content", ACM Computing Surveys, 30 September 2020 (2020-09-30) *
LIU ZHI; LIN ZHENTAO; YAN ZHIWEN; CHEN BO: "Recommendation method based on self-learning of attribute preferences", Journal of Zhejiang University of Technology, no. 02, 9 April 2018 (2018-04-09) *
CHENG TAO; CUI ZONGMIN; YU JING: "An LDA-based deep learning model for video recommendation", Computer Technology and Development, no. 08, 10 August 2020 (2020-08-10) *
MA XUEMING; TONG HUAI: "Design of a video app based on a personalized recommendation system", Computer Knowledge and Technology, vol. 17, no. 8, 31 March 2021 (2021-03-31) *

Similar Documents

Publication Publication Date Title
CN108763502B (en) Information recommendation method and system
US11526799B2 (en) Identification and application of hyperparameters for machine learning
CN110909182B (en) Multimedia resource searching method, device, computer equipment and storage medium
CN110413867B (en) Method and system for content recommendation
US20170169062A1 (en) Method and electronic device for recommending video
CN110490683B (en) Offline collaborative multi-model hybrid recommendation method and system
CN111159563A (en) Method, device and equipment for determining user interest point information and storage medium
CN113535991A (en) Multimedia resource recommendation method and device, electronic equipment and storage medium
CN112100221A (en) Information recommendation method and device, recommendation server and storage medium
CN110008396B (en) Object information pushing method, device, equipment and computer readable storage medium
CN114528474A (en) Method and device for determining recommended object, electronic equipment and storage medium
CN111241400A (en) Information searching method and device
KR20210060375A (en) Method, apparatus and computer program for selecting promising content
CN113836388A (en) Information recommendation method and device, server and storage medium
CN104834728B (en) A kind of method for pushing and device for subscribing to video
CN109697628B (en) Product data pushing method and device, storage medium and computer equipment
CN111125544A (en) User recommendation method and device
CN113269262A (en) Method, apparatus and storage medium for training matching degree detection model
CN111597444B (en) Searching method, searching device, server and storage medium
CN114611022A (en) Method, device, equipment and storage medium for pushing display information
CN109213937B (en) Intelligent search method and device
CN112565904A (en) Video clip pushing method, device, server and storage medium
CN112884538A (en) Item recommendation method and device
CN114417156B (en) Training method and device for content recommendation model, server and storage medium
CN113365095B (en) Live broadcast resource recommendation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination