CN114357301A - Data processing method, device and readable storage medium - Google Patents

Info

Publication number
CN114357301A
CN114357301A
Authority
CN
China
Prior art keywords
sample
media data
initial
trigger
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111675687.5A
Other languages
Chinese (zh)
Inventor
张绍亮
谢若冰
王瑞
杨智鸿
夏锋
林乐宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202111675687.5A priority Critical patent/CN114357301A/en
Publication of CN114357301A publication Critical patent/CN114357301A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9035Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/903Querying
    • G06F16/9038Presentation of query results
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/906Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Abstract

The application discloses a data processing method, a device, and a readable storage medium. The method includes: acquiring a general attribute feature and a first future attribute feature corresponding to each first sample media data for a first sample object; in an initial discriminator, outputting a first trigger probability of the first sample object for each first sample media data according to the general attribute feature and the first future attribute feature, and outputting a second trigger probability for candidate sample media data provided by an initial generator; and adjusting the model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network. By adopting the method and the device, the running time of the probability prediction model can be reduced and the quality of the recommended candidate media data can be improved. The embodiments of the application can be applied to various scenarios such as cloud technology, artificial intelligence, intelligent traffic, and assisted driving.

Description

Data processing method, device and readable storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a data processing method, device, and readable storage medium.
Background
With the advent of the digital age, readers place increasingly high demands on the quality of associated content recommendation. Associated recommendation is applied in scenarios such as databases, multimedia viewing, and travel route analysis, and is a common operation in artificial intelligence models and deep learning. Common associated recommendation models include recommendation algorithms based on traditional methods such as collaborative filtering and logistic regression, recommendation algorithms using deep learning ranking and reinforcement learning, and Masked Language Models (MLM).
Currently common associated recommendation models have wide coverage and can provide associated recommended media data for multiple types of target media data. However, media data are often recommended based on trending content, so the recommended media data are too uniform in type and the recommendation quality is reduced.
Disclosure of Invention
The embodiment of the application provides a data processing method, data processing equipment and a readable storage medium, which can enrich types of recommended media data and improve recommendation quality.
An embodiment of the present application provides a data processing method, including:
determining general attribute characteristics of the first sample object corresponding to each first sample media data according to first historical trigger attribute information associated with the first sample object and media attribute information corresponding to at least two first sample media data respectively;
acquiring a first future attribute feature of the first sample object, which is associated with the first historical trigger attribute information at a future time;
screening candidate sample media data from the at least two second sample media data through an initial generator in an initial adversarial network, and sending the candidate sample media data from the initial generator to an initial discriminator in the initial adversarial network;
in the initial discriminator, outputting a first trigger probability of the first sample object for each first sample media data according to the general attribute feature and the first future attribute feature corresponding to each first sample media data, and outputting a second trigger probability for the candidate sample media data;
and adjusting model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network, and determining a target generator in the target adversarial network as a probability prediction model for predicting the trigger probability of a target object for media data.
Further, the at least two first sample media data comprise first sample media data S_i, where i is a positive integer;
the method further comprises the following steps:
determining object attribute information of the first sample object, trigger record information of the first sample object and distribution environment information of history trigger first sample media data as first history trigger attribute information associated with the first sample object; the history triggering first sample media data is the first sample media data recorded in the triggering record information;
determining the media data tag, the media data content category, and the media data scene category of the first sample media data S_i as the media attribute information corresponding to the first sample media data S_i.
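As a concrete illustration, combining the history trigger attribute information with the media attribute information of sample S_i into a general attribute feature could be sketched as below. Plain concatenation of pre-encoded fields is assumed here; the patent leaves the exact combination unspecified, and the function name is hypothetical:

```python
def general_attribute_feature(history_trigger_info, media_attribute_info):
    """Sketch: fuse the first history trigger attribute information
    (object attributes, trigger records, distribution environment) with
    the media attribute information of sample S_i (tag, content category,
    scene category) into one general attribute feature vector.
    Plain concatenation is an assumption, not the patent's method."""
    return list(history_trigger_info) + list(media_attribute_info)
```

In practice each field would first be encoded (for example, embedded) before fusion.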
Further, screening out candidate sample media data from the at least two second sample media data through an initial generator in the initial adversarial network includes:
determining general attribute characteristics of the second sample object corresponding to each second sample media data according to second historical trigger attribute information associated with the second sample object and media attribute information corresponding to at least two second sample media data;
inputting the general attribute features of the second sample object corresponding to each second sample media data into the initial generator in the initial adversarial network, and generating, through the initial generator, first generation trigger probabilities of the second sample object for each second sample media data;
screening S matched samples from the at least two second sample media data according to the second historical trigger attribute information; the S matched samples are second sample media data which are not triggered by a second sample object; s is a positive integer;
sorting the S matched samples according to the first generation trigger probability to obtain S sorted matched samples, and obtaining K matched samples from the S sorted matched samples as candidate sample media data; K is a positive integer less than or equal to S.
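The screening steps above — keep only samples the second sample object has not triggered, rank them by the generator's trigger probability, and take the top K — can be sketched as follows (a minimal illustration; the function and parameter names are hypothetical):

```python
def screen_candidates(sample_probs, triggered_ids, k):
    """Screen candidate sample media data: keep second sample media data
    not yet triggered by the second sample object, sort them by the first
    generation trigger probability, and return the top K as candidates."""
    # Matched samples: media data the sample object has not triggered.
    matched = [(sid, p) for sid, p in sample_probs.items()
               if sid not in triggered_ids]
    # Sort by generator-produced trigger probability, descending.
    matched.sort(key=lambda sp: sp[1], reverse=True)
    # The K best matched samples become the candidate sample media data.
    return [sid for sid, _ in matched[:k]]
```

The candidates returned here are what the initial generator would hand to the initial discriminator.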
Further, the initial discriminator comprises a feature convolution layer, a neural network perception layer and a full-connection activation layer;
in the initial discriminator, outputting a first trigger probability of the first sample object for the first sample media data according to the general attribute feature and the first future attribute feature of the first sample object for the first sample media data, including:
performing convolution fusion processing on the general attribute feature and the first future attribute feature of the first sample object corresponding to the first sample media data through the feature convolution layer to obtain a first sample convolution fusion feature, and inputting the first sample convolution fusion feature into the neural network perception layer;
performing weighted conversion processing on the first sample convolution fusion feature through a neural network perception layer to obtain a feature to be activated corresponding to the first sample convolution fusion feature, and inputting the feature to be activated into a full-connection activation layer;
and activating the to-be-activated feature through the full-connection activation layer to obtain a first trigger probability of the first sample object for the first sample media data.
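The three discriminator stages above can be sketched in miniature as follows. The elementwise fusion, single-output weighted conversion, and sigmoid activation are simplifying assumptions standing in for the convolution layer, perception layer, and full-connection activation layer; none of the names or weights come from the patent:

```python
import math

def discriminator_forward(general_feat, future_feat, w, b):
    """Minimal sketch of the initial discriminator:
    1) feature convolution layer: fuse general and future attribute features;
    2) neural network perception layer: weighted conversion of the fused
       feature into a 'feature to be activated';
    3) full-connection activation layer: sigmoid to a trigger probability."""
    # 1) Convolution-style fusion: elementwise combine the two features.
    fused = [g + f for g, f in zip(general_feat, future_feat)]
    # 2) Weighted conversion to a single pre-activation value.
    pre_activation = sum(wi * xi for wi, xi in zip(w, fused)) + b
    # 3) Sigmoid activation yields the first trigger probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-pre_activation))
```

A real discriminator would use learned convolution kernels and a multi-layer perceptron rather than these scalar stand-ins.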
Further, outputting, in the initial discriminator, a second trigger probability for the candidate sample media data, comprising:
acquiring a second future attribute feature of the second sample object, which is associated with the second historical trigger attribute information at a future time;
and in the initial discriminator, outputting a second trigger probability of the second sample object for the candidate sample media data according to the general attribute feature and the second future attribute feature of the second sample object for the candidate sample media data.
Further, adjusting the model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network includes:
determining a first loss value for the first sample media data according to the first trigger probability and a real trigger tag for the first sample media data;
determining a second loss value for the candidate sample media data according to the second trigger probability;
adjusting the model parameters of the initial discriminator according to the adversarial discrimination loss function, the first loss value and the second loss value;
adjusting the model parameters of the initial generator according to the adversarial generation loss function, the first loss value and the second loss value;
and generating the target adversarial network according to the adjusted initial discriminator and the adjusted initial generator.
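The first and second loss values above can be illustrated with the standard GAN discrimination loss; the patent names the loss functions but does not give their formulas, so the log-likelihood form below is an assumption:

```python
import math

def discriminator_losses(real_probs, fake_probs):
    """Illustrative adversarial losses. The first loss value scores real
    (historically triggered) samples via the first trigger probability;
    the second scores generator-provided candidates via the second trigger
    probability. The discriminator would minimise their sum."""
    eps = 1e-12  # guard against log(0)
    # First loss: real samples should receive high trigger probability.
    first = -sum(math.log(p + eps) for p in real_probs) / len(real_probs)
    # Second loss: candidates should receive low trigger probability.
    second = -sum(math.log(1.0 - p + eps) for p in fake_probs) / len(fake_probs)
    return first, second
```

Both terms shrink as the discriminator separates real samples from generated candidates, which is the signal the parameter adjustment would follow.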
Further, generating the target adversarial network according to the adjusted initial discriminator and the adjusted initial generator includes:
determining general attribute characteristics of the second sample object corresponding to each second sample media data according to second historical trigger attribute information associated with the second sample object and media attribute information corresponding to at least two second sample media data;
inputting the general attribute features of the second sample object corresponding to each second sample media data into the initial generator, and generating, through the initial generator, first generation trigger probabilities of the second sample object for each second sample media data;
determining a third loss value for the second sample media data according to the first generation trigger probability and the real trigger tag for the second sample media data;
and adjusting the model parameters of the adjusted initial generator according to the supervision loss function and the third loss value to obtain a target generator, and determining the target generator and the adjusted initial discriminator as the target adversarial network.
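The supervision loss and third loss value above can be illustrated with binary cross-entropy against the real trigger tags; the patent names the supervision loss but not its form, so this choice is an assumption:

```python
import math

def supervised_loss(pred_probs, true_tags):
    """Sketch of a supervision loss for the adjusted initial generator:
    compare the first generation trigger probabilities with the real
    trigger tags of the second sample media data (binary cross-entropy
    is assumed here, not taken from the patent)."""
    eps = 1e-12
    total = 0.0
    for p, t in zip(pred_probs, true_tags):
        # t is 1 if the sample object actually triggered the media data.
        total += -(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps))
    return total / len(pred_probs)
```

The loss falls as the generator's trigger probabilities agree with the observed triggers, which is the direction the third-loss adjustment would push the parameters.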
Further, generating the target adversarial network according to the adjusted initial discriminator and the adjusted initial generator includes:
inputting the general attribute features of the first sample object corresponding to each first sample media data into the initial generator, and generating, through the initial generator, a second generation trigger probability of the first sample object for each first sample media data;
acquiring first hidden features of the first sample object corresponding to each first sample media data from an initial generator, and acquiring second hidden features of the first sample object corresponding to each first sample media data from an initial discriminator;
adjusting the adjusted model parameters of the initial generator according to the error between the first triggering probability and the second generating triggering probability and the error between the first hidden feature and the second hidden feature to obtain a target generator;
and adjusting the adjusted model parameters of the initial discriminator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature to obtain a target discriminator, and determining the target generator and the target discriminator as the target adversarial network.
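The two errors above — between the probability outputs and between the hidden features of generator and discriminator — can be combined as in the following sketch. The squared-error form is an assumption; the patent specifies which errors are used but not how they are computed or weighted:

```python
def alignment_error(p_disc, p_gen, hidden_gen, hidden_disc):
    """Illustrative combined error for the further adjustment step:
    the gap between the first trigger probability (discriminator) and the
    second generation trigger probability (generator), plus the gap
    between the first and second hidden features."""
    # Probability error between discriminator and generator outputs.
    prob_err = (p_disc - p_gen) ** 2
    # Hidden-feature error: mean squared difference per dimension.
    feat_err = sum((a - b) ** 2
                   for a, b in zip(hidden_gen, hidden_disc)) / len(hidden_gen)
    return prob_err + feat_err
```

Driving this error toward zero aligns the generator's internal representation with the discriminator's, which is the stated purpose of this final adjustment round.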
An embodiment of the present application provides a data processing apparatus, including:
the universal characteristic determining module is used for determining the universal characteristic of the first sample object corresponding to each first sample media data according to the first historical trigger attribute information associated with the first sample object and the media attribute information corresponding to at least two first sample media data;
the future characteristic acquisition module is used for acquiring a first future attribute characteristic of the first sample object, which is associated with the first historical trigger attribute information at a future time;
the candidate sample screening module is used for screening candidate sample media data from at least two second sample media data through an initial generator in the initial adversarial network, and the initial generator sends the candidate sample media data to an initial discriminator in the initial adversarial network;
a first probability triggering module, configured to, in the initial discriminator, output a first triggering probability of the first sample object for each first sample media data according to the general attribute feature and the first future attribute feature, respectively corresponding to each first sample media data, of the first sample object;
a second probability triggering module for outputting a second triggering probability for the candidate sample media data in the initial discriminator;
and the adversarial model adjusting module is used for adjusting model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network, and determining a target generator in the target adversarial network as a probability prediction model for predicting the trigger probability of the target object for the media data.
Wherein the at least two first sample media data comprise first sample media data S_i, where i is a positive integer;
the data processing apparatus further includes:
the history information triggering module is used for determining the object attribute information of the first sample object, the triggering record information of the first sample object and the distribution environment information of the history triggering first sample media data as the first history triggering attribute information associated with the first sample object; the history triggering first sample media data is the first sample media data recorded in the triggering record information;
a media information determination module for determining the media data tag, the media data content category, and the media data scene category of the first sample media data S_i as the media attribute information corresponding to the first sample media data S_i.
Wherein, the candidate sample screening module comprises:
the characteristic determining unit is used for determining the universal attribute characteristics of the second sample object corresponding to each second sample media data according to the second historical trigger attribute information associated with the second sample object and the media attribute information corresponding to at least two second sample media data;
the generator generating unit is used for inputting the general attribute features of the second sample object corresponding to each second sample media data into the initial generator in the initial adversarial network, and generating, through the initial generator, first generation trigger probabilities of the second sample object for each second sample media data;
the matching sample screening unit is used for screening S matching samples from at least two second sample media data according to the second historical trigger attribute information; the S matched samples are second sample media data which are not triggered by a second sample object; s is a positive integer;
the candidate sample acquisition unit is used for sorting the S matched samples according to the first generation trigger probability to obtain S sorted matched samples, and obtaining K matched samples from the S sorted matched samples as candidate sample media data; K is a positive integer less than or equal to S.
The initial discriminator comprises a feature convolution layer, a neural network perception layer and a full-connection activation layer;
a first probability triggering module comprising:
the feature convolution unit is used for performing convolution fusion processing on the general attribute feature and the first future attribute feature of the first sample object corresponding to the first sample media data through the feature convolution layer to obtain a first sample convolution fusion feature, and inputting the first sample convolution fusion feature into the neural network perception layer;
the feature conversion unit is used for performing weighted conversion processing on the first sample convolution fusion feature through the neural network perception layer to obtain a feature to be activated corresponding to the first sample convolution fusion feature, and inputting the feature to be activated into the full-connection activation layer;
and the feature activation unit is used for activating the feature to be activated through the full-connection activation layer to obtain a first trigger probability of the first sample object for the first sample media data.
Wherein the second probabilistic triggering module comprises:
the characteristic obtaining unit is used for obtaining a second future attribute characteristic of the second sample object, which is associated with the second historical trigger attribute information at a future time;
and the trigger probability output unit is used for outputting a second trigger probability of the second sample object for the candidate sample media data in the initial discriminator according to the general attribute feature and the second future attribute feature of the second sample object corresponding to the candidate sample media data.
Wherein the adversarial model adjusting module comprises:
a first loss determining unit, configured to determine a first loss value for the first sample media data according to the first trigger probability and the true trigger tag for the first sample media data;
a second loss determination unit for determining a second loss value for the candidate sample media data according to a second trigger probability;
the discrimination model adjusting unit is used for adjusting the model parameters of the initial discriminator according to the adversarial discrimination loss function, the first loss value and the second loss value;
the generation model adjusting unit is used for adjusting the model parameters of the initial generator according to the adversarial generation loss function, the first loss value and the second loss value;
and the adversarial network generating unit is used for generating the target adversarial network according to the adjusted initial discriminator and the adjusted initial generator.
Wherein the adversarial network generating unit comprises:
the characteristic determining subunit is configured to determine, according to second history trigger attribute information associated with the second sample object and media attribute information corresponding to at least two second sample media data, a general attribute characteristic corresponding to each second sample media data of the second sample object;
the first probability generation subunit is used for inputting the general attribute features of the second sample object corresponding to each second sample media data into the initial generator, and generating, through the initial generator, first generation trigger probabilities of the second sample object for each second sample media data;
a loss determining subunit, configured to determine a third loss value for the second sample media data according to the first generation trigger probability and the true trigger tag for the second sample media data;
and the first network determining subunit is used for adjusting the model parameters of the adjusted initial generator according to the supervision loss function and the third loss value to obtain a target generator, and determining the target generator and the adjusted initial discriminator as the target adversarial network.
Wherein the adversarial network generating unit comprises:
the second probability generation subunit is used for inputting the general attribute features of the first sample object corresponding to each first sample media data into the initial generator, and generating, through the initial generator, a second generation trigger probability of the first sample object for each first sample media data;
the hidden feature obtaining subunit is configured to obtain, from the initial generator, first hidden features corresponding to the first sample object for each first sample media data, and obtain, from the initial discriminator, second hidden features corresponding to the first sample object for each first sample media data;
the generation model adjusting subunit is used for adjusting the adjusted model parameters of the initial generator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature to obtain a target generator;
and the second network determination subunit is used for adjusting the adjusted model parameters of the initial discriminator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature to obtain a target discriminator, and determining the target generator and the target discriminator as the target adversarial network.
One aspect of the present application provides a computer device, comprising: a processor, a memory, a network interface;
the processor is connected to the memory and the network interface, wherein the network interface is used for providing a data communication function, the memory is used for storing a computer program, and the processor is used for calling the computer program to enable the computer device to execute the method in the embodiment of the application.
An aspect of an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored, where the computer program is adapted to be loaded and executed by a processor to perform the method in the embodiments of the present application.
An aspect of an embodiment of the present application provides a computer program product or a computer program, where the computer program product or the computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium; the processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions to cause the computer device to perform the method in the embodiment of the present application.
In this embodiment of the application, a general attribute feature of a first sample object corresponding to each first sample media data is determined according to first history trigger attribute information associated with the first sample object and media attribute information corresponding to at least two first sample media data; further, a first future attribute feature of the first sample object, associated with the first history trigger attribute information at a future time, may be obtained. In the process of training the initial adversarial network, the initial discriminator can output a first trigger probability of the first sample object for each first sample media data based on the general attribute feature and the first future attribute feature, and can also output a second trigger probability corresponding to the candidate sample media data provided by the initial generator. The model parameters of the initial adversarial network are adjusted according to the first trigger probability and the second trigger probability to obtain a target adversarial network, and a target generator in the target adversarial network is determined as a probability prediction model for predicting the trigger probability of a target object for media data. Because the first future attribute feature at a future time is introduced in the embodiment of the application, the probability prediction model obtained in training can better mine media data that match the target object and are richer in type; therefore, the types of the recommended media data can be enriched and the recommendation quality improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a system architecture diagram according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a scenario for media data association recommendation provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an initial adversarial network provided in an embodiment of the present application;
fig. 5a is a schematic structural diagram of an initial discriminator according to an embodiment of the present application;
fig. 5b is a schematic structural diagram of an initial generator according to an embodiment of the present application;
FIG. 6 is a schematic flow chart diagram of another data processing method provided in the embodiments of the present application;
FIG. 7 is a schematic flow chart diagram of another data processing method provided in the embodiments of the present application;
fig. 8 is a schematic structural diagram of another initial adversarial network provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For ease of understanding, some terms are briefly explained below:
artificial Intelligence (AI) is a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human Intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results. In other words, artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. Artificial intelligence is the research of the design principle and the realization method of various intelligent machines, so that the machines have the functions of perception, reasoning and decision making.
Artificial intelligence technology is a comprehensive discipline that covers a wide range of fields, spanning both hardware-level and software-level technologies. The basic artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes computer vision, speech processing, natural language processing, machine learning/deep learning, automatic driving, intelligent transportation, and the like.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture according to an embodiment of the present application. As shown in fig. 1, the system may include a service server 100 and a terminal cluster, and the terminal cluster may include terminal device 200a, terminal device 200b, terminal device 200c, …, and terminal device 200n. It is understood that the system may include one or more terminal devices, and the number of terminal devices is not limited in this application. A terminal device may be an electronic device, including but not limited to a mobile phone, tablet computer, desktop computer, notebook computer, palm computer, vehicle-mounted device, Augmented Reality/Virtual Reality (AR/VR) device, helmet-mounted display, smart television, wearable device, smart speaker, digital camera, camera, or other Mobile Internet Device (MID) with network access capability, or a terminal device in a scenario such as a train, ship, or aircraft.
Communication connections may exist among the terminal devices in the terminal cluster; for example, a communication connection exists between terminal device 200a and terminal device 200b, and between terminal device 200a and terminal device 200c. Meanwhile, any terminal device in the terminal cluster may have a communication connection with the service server 100; for example, a communication connection exists between terminal device 200a and the service server 100. The connection manner is not limited: the connection may be made directly or indirectly through wired communication, directly or indirectly through wireless communication, or in other manners, which this application does not limit.
It should be understood that each terminal device in the terminal cluster shown in fig. 1 may be installed with an application client; when the application client runs on a terminal device, it may perform data interaction with the service server 100 shown in fig. 1 over the communication connection described above. The application client may be a client with a data sorting function, such as a short video application, live broadcast application, social application, instant messaging application, game application, music application, shopping application, novel application, or browser. The application client may be an independent client, or may be an embedded sub-client integrated in another client (for example, a social client, an educational client, or a multimedia client), which is not limited here.
For ease of subsequent understanding and explanation, please refer to fig. 2, which is a schematic view of a media data recommendation scenario provided by an embodiment of the present application. In fig. 2, the terminal device 200c is playing media data through a target application. At this time, the terminal device 200c may send an association recommendation request to the service server 100, and the service server 100 may obtain, based on the request, first history trigger attribute information of a target object corresponding to the terminal device 200c (for example, determined from the object attribute information of the target object and the media data triggered by the target object). Further, the service server 100 may generate a generic attribute feature for each media data to be recommended, based on the first history trigger attribute information and the media attribute information of the media data to be recommended in the database. The service server 100 may then invoke a probability prediction model and input the generic attribute features corresponding to the media data to be recommended into it; the model outputs a target trigger probability for each media data to be recommended, which can be understood as the target object's degree of interest in that media data. The service server 100 can thus push the media data to be recommended with the higher target trigger probabilities to the terminal device 200c.
For example, if the number of media data pushed to the terminal device 200c is S (S is a positive integer), the terminal device 200c may sort the S media data by their target trigger probabilities and display, in the target application, a media data list containing the sorted S media data; the target object can then trigger media data in the list to play it.
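As a concrete illustration of this sorting step, the sketch below ranks pushed media data by target trigger probability, highest first. The function name and sample values are hypothetical, not taken from the patent.

```python
def build_media_list(candidates):
    """candidates: list of (media_id, target_trigger_probability) pairs.
    Returns media ids sorted by probability, high to low."""
    ranked = sorted(candidates, key=lambda item: item[1], reverse=True)
    return [media_id for media_id, _ in ranked]

candidates = [("video_a", 0.31), ("video_b", 0.87), ("video_c", 0.55)]
print(build_media_list(candidates))  # ['video_b', 'video_c', 'video_a']
```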
For a training process of the probabilistic predictive model, please refer to the corresponding embodiments of fig. 3 to fig. 8 below.
It is understood that, in a specific implementation of the present application, data such as the first historical trigger attribute information requires the user's approval or consent when the above embodiments are applied to specific products or technologies, and the collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions.
It is understood that the method provided by the embodiment of the present application may be executed by a computer device, and the computer device may be a terminal device or a service server, or a system composed of a terminal device and a service server. The service server may be an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as a cloud database, a cloud service, cloud computing, a cloud function, cloud storage, a network service, cloud communication, middleware service, domain name service, security service, a CDN, a big data and artificial intelligence platform, and the like. The terminal devices include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, intelligent household appliances, vehicle-mounted terminals, and the like. The terminal device and the service server may be directly or indirectly connected in a wired or wireless manner, which is not limited in this embodiment of the present application.
It can be understood that the system architecture described above is applicable to media data recommendation scenarios such as related recommendation of target media data, play sorting of media data, target media data analysis, and the like, and specific service scenarios will not be listed here.
Further, please refer to fig. 3, where fig. 3 is a schematic flow chart of a data processing method according to an embodiment of the present application. As shown in fig. 3, the data processing method may include at least the following steps S101 to S105.
Step S101, determining general attribute characteristics of the first sample object corresponding to each first sample media data according to the first history trigger attribute information associated with the first sample object and the media attribute information corresponding to at least two first sample media data;
Specifically, the first sample object may be an object that operates the target application. The first historical trigger attribute information may be obtained by analyzing the historical behavior of the first sample object. For example, if the first sample object has recently clicked on video A, video B, and video C, the first historical trigger attribute information corresponding to the first sample object may be determined based on the object attribute information of the first sample object itself (such as gender, age, and interests) and the browsing duration, click time, and video attribute information (such as video type and video duration) for video A, video B, and video C; that is, the first historical trigger attribute information may be used to represent the recent video-watching habits and points of interest of the first sample object. Further, the first history trigger attribute information may include the object attribute information of the first sample object, the trigger record information of the first sample object, and the distribution environment information of the history-triggered first sample media data. The trigger record information is a record of the first sample media data triggered by the first sample object, and specifically includes parameters such as the media attribute information of the triggered first sample media data, the trigger time, browsing duration, acquisition manner, and trigger frequency, where the acquisition manner may include active recommendation by the service server or retrieval based on a search keyword.
The history-triggered first sample media data is the first sample media data recorded in the trigger record information; therefore, the distribution environment information may include factors such as the network quality and the geographical area in which the terminal corresponding to the first sample object acquired the history-triggered first sample media data.
The first sample media data may be sample media data triggered by the first sample object, or sample media data not triggered by it. The media attribute information corresponding to the first sample media data may include a media data tag, a media data content category, and a media data scene category. Taking video as an example, the media data tag may include a comedy tag, a documentary tag, and an emotion-class tag; the media data content category may include sports, music, literature, history, science and technology, and food; and the media data scene category may include a video support category configured with a multi-function display, an article support category configured with a dedicated screen, an audio support category configured with a speaker, and a composite category combining at least two support categories. Optionally, the media attribute information may further include a data format type, which may include video, audio, articles, pictures, tweets with pictures, and the like.
The object attribute information of the first sample object, the trigger record information of the first sample object and the distribution environment information of the history trigger first sample media data can be obtained from the media database. The media database may include at least two sample objects, object attribute information of the at least two sample objects, trigger record information of the at least two sample objects, sample media data triggered by history of the at least two sample objects, distribution environment information of history trigger sample media data corresponding to each sample object, media data tags of the at least two sample media data, media data content categories of the at least two sample media data, media data scene categories of the at least two sample media data, and the like.
It is to be understood that the at least two first sample media data may comprise first sample media data Si, i being a positive integer; that is, the computer device may determine, as the first history trigger attribute information associated with the first sample object, the object attribute information of the first sample object, the trigger record information of the first sample object, and the distribution environment information of the history trigger first sample media data, which are acquired from the media database, and determine, as the media attribute information corresponding to the first sample media data Si, the media data tag, the media data content category, and the media data scene category of the first sample media data Si, which are acquired from the media database.
After the computer device obtains the first history trigger attribute information and the media attribute information corresponding to the first sample media data Si, it may concatenate the two into a generic attribute feature of the first sample object for the first sample media data Si; by analogy, the computer device may generate a generic attribute feature of the first sample object for each first sample media data. Alternatively, the computer device may perform convolution processing on the first history trigger attribute information and on the media attribute information corresponding to the first sample media data Si, obtaining a first attribute convolution feature and a second attribute convolution feature respectively, and then concatenate or fuse the two convolution features into the generic attribute feature of the first sample object for the first sample media data Si; again by analogy, a generic attribute feature is generated for each first sample media data.
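The concatenation variant described above can be sketched as follows. The vector encodings and the function name are illustrative assumptions, since the patent does not fix a concrete vector format.

```python
def make_generic_feature(history_trigger_vec, media_attr_vec):
    # Concatenation ("splicing") variant; a convolution-based variant would
    # transform each part into a convolution feature before fusing them.
    return list(history_trigger_vec) + list(media_attr_vec)

history_vec = [0.2, 0.7, 0.1]  # encoded object / trigger-record / environment info (assumed)
media_vec = [1.0, 0.0, 0.5]    # encoded tag / content-category / scene-category info (assumed)
feature = make_generic_feature(history_vec, media_vec)
print(len(feature))  # 6
```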
Step S102, acquiring a first future attribute feature of the first sample object, which is associated with the first historical trigger attribute information at a future time;
Specifically, taking the current time as the boundary, any time after the current time may be a future time. The computer device may determine, for the first sample object at a future time, the feature associated with the first historical trigger attribute information as the first future attribute feature corresponding to the first sample object.
It will be appreciated that the historical trigger behavior of the first sample object is highly relevant to its future trigger behavior. Viewed over two consecutive moments, the triggers of the first sample object are coherent, and the future attribute feature can serve as a supplement to the generic attribute feature and as a fine-grained annotation of it. For example, if the history-triggered first sample media data is a kitten, then a dairy cat may be the first future attribute feature derived from the kitten as the first historical trigger attribute feature. If the association is performed in units longer than a moment, such as hours, days, months, or years, the points of interest of the first sample object may migrate among different topics, and the first future attribute feature can help the recommendation system in the target application capture potential associations between different points of interest. For example, if the history-triggered first sample media data is a food product, a cooking tool may be the first future attribute feature derived from the food product as the first historical trigger attribute feature. For another example, if the history-triggered first sample media data is a kitten, several fields can be inferred from the kitten as the first historical trigger attribute feature, such as food loved by kittens, books on kitten habits, and film and television works featuring kittens; the associated fields can be expanded a second time within these fields, and the finally obtained related information of each field can be used as the first future attribute feature.
When obtaining the first future attribute feature related to the history-triggered first sample media data, all related information can be sorted by matching degree, the top N best-matching pieces of related information covering a wide range of fields can be selected, and the selected N pieces of related information can be aggregated into the first future attribute feature. The first future attribute feature may be obtained by average-pooling aggregation of the related tag features, content category features, and scene type features of the media data at the future time.
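The top-N selection and average-pooling aggregation just described can be sketched as follows, assuming (hypothetically) that each piece of related information already carries a matching degree and a feature vector.

```python
def future_attribute_feature(related_info, n):
    """related_info: list of (matching_degree, feature_vector) pairs.
    Returns the element-wise mean (average pooling) of the n best-matching vectors."""
    top_n = sorted(related_info, key=lambda r: r[0], reverse=True)[:n]
    vectors = [vec for _, vec in top_n]
    return [sum(col) / len(vectors) for col in zip(*vectors)]

related = [(0.9, [1.0, 0.0]), (0.4, [0.0, 1.0]), (0.8, [1.0, 1.0])]
print(future_attribute_feature(related, 2))  # [1.0, 0.5]
```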
Step S103, screening candidate sample media data from at least two second sample media data through an initial generator in the initial adversarial network, and sending the candidate sample media data from the initial generator to an initial discriminator in the initial adversarial network;
Specifically, the computer device determines, according to second historical trigger attribute information associated with the second sample object and media attribute information corresponding to at least two second sample media data, a generic attribute feature of the second sample object for each second sample media data. The first sample object differs from the second sample object, and the at least two first sample media data and the at least two second sample media data may be entirely different or may partially overlap. The parameter dimensions included in the second history trigger attribute information may be the same as those in the first history trigger attribute information, so the parameter dimensions of the generic attribute features of the second sample object for each second sample media data may likewise be the same as those of the first sample object for each first sample media data; the specific contents of the second history trigger attribute information and of these generic attribute features are therefore not repeated here.
Further, please refer to fig. 4, which is a schematic structural diagram of an initial adversarial network according to an embodiment of the present application. In fig. 4, the computer device may input the generic attribute features of the second sample object for each second sample media data into an initial generator in the initial adversarial network 300, and generate, through the initial generator, a first generation trigger probability of the second sample object for each second sample media data (for example, the first generation trigger probability may characterize the probability of the second sample object clicking on the second sample media data). In the initial generator, S matching samples can be screened from the at least two second sample media data according to the second historical trigger attribute information. Since the second sample media data triggered by the second sample object are recorded in the second historical trigger attribute information, the second sample media data other than those so recorded can be determined as the matching samples; that is, the S matching samples are the second sample media data not triggered by the second sample object, where S is a positive integer.
Further, the initial generator may sort the S matching samples by the first generation trigger probability, from high to low, to obtain the sorted S matching samples, and take K matching samples from the sorted S matching samples as candidate sample media data, where K is a positive integer less than or equal to S. K can be selected adaptively according to the training requirements of the initial adversarial network.
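A minimal sketch of this screening and top-K selection, with hypothetical identifiers and probabilities standing in for the second sample media data and their first generation trigger probabilities:

```python
def screen_candidates(media_ids, triggered_ids, generation_probs, k):
    # S matching samples: second sample media data not yet triggered by the object
    matching = [m for m in media_ids if m not in triggered_ids]
    # sort by first generation trigger probability, high to low
    ranked = sorted(matching, key=lambda m: generation_probs[m], reverse=True)
    return ranked[:k]  # K candidate sample media data, K <= S

probs = {"m1": 0.9, "m2": 0.2, "m3": 0.6, "m4": 0.8}
print(screen_candidates(["m1", "m2", "m3", "m4"], {"m1"}, probs, 2))  # ['m4', 'm3']
```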
Step S104, in the initial discriminator, outputting a first trigger probability of the first sample object for each first sample media data according to the generic attribute feature and the first future attribute feature corresponding to each first sample media data, and outputting a second trigger probability for the candidate sample media data;
Specifically, referring to fig. 4 again, the computer device may input the generic attribute feature and the first future attribute feature of the first sample object for each first sample media data into the initial discriminator. Please refer to fig. 5a, which is a schematic structural diagram of an initial discriminator according to an embodiment of the present application. As shown in fig. 5a, the initial discriminator comprises a feature convolution layer, a neural network perception layer, and a fully connected activation layer. Taking the first sample media data Si of the at least two first sample media data as an example, the feature convolution layer performs convolution fusion processing on the generic attribute feature and the first future attribute feature of the first sample object for Si to obtain a first sample convolution fusion feature: specifically, a convolution feature A is first generated from the generic attribute feature for Si and a convolution feature B from the first future attribute feature, and the two are then fused into the first sample convolution fusion feature. For example, the feature convolution layer may be an Adaptive Factorization Network (AFN). The initial discriminator then inputs the first sample convolution fusion feature into the neural network perception layer, which performs weighted conversion processing on it to obtain the corresponding feature to be activated; for example, the neural network perception layer may include a Multilayer Perceptron (MLP) and a Gated Recurrent Unit (GRU). The initial discriminator further inputs the feature to be activated into the fully connected activation layer, which performs activation processing on it to obtain the first trigger probability of the first sample object for the first sample media data Si.
The initial discriminator may be deployed on the basic framework of a deep-learning-based list-wise recommendation model, and may use Double Deep Q-Network (Double DQN) as its basic reinforcement learning model; under a joint training framework of reinforcement learning and a generative adversarial network (GAN), the model can be simplified and its robustness improved. The state in the initial discriminator may represent the output feature corresponding to both the generic attribute feature and the first future attribute feature (e.g., the feature to be activated described above), and the feedback (reward) in the initial discriminator may represent the first trigger probability of the first sample object for a given first sample media data.
For example, let Up represent the object attribute information of the first sample object, Uc the trigger record information of the first sample object, Ci the distribution environment information of a given history-triggered first sample media data, Di the media attribute information corresponding to that history-triggered first sample media data, Ffi the first future attribute feature, and d_t the currently identified first sample media data Si; s_t may refer to the feature to be activated corresponding to Si, and q_D to the first trigger probability for Si. The first sample convolution fusion feature may then be f_i^D = AFN(Up, Uc, Ci, Di, Ffi), the feature to be activated may be s_t^D = MLP(GRU({f_1^D, …, f_{t-1}^D})), and the first trigger probability may be q_D(s_t, a_t) = RELU(MLP(Concat(s_t^D, d_t))).
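The three-layer discriminator pipeline can be illustrated structurally with toy stand-ins. The functions below are not the AFN, GRU, or MLP themselves, only placeholders showing how the generic attribute feature and first future attribute feature flow to a probability in (0, 1).

```python
import math

def toy_fuse(generic_feat, future_feat):
    # placeholder for the AFN convolution-fusion of convolution features A and B
    return [g + f for g, f in zip(generic_feat, future_feat)]

def toy_perceive(fused, weights):
    # placeholder for the MLP/GRU weighted conversion into the feature to be activated
    return sum(w * x for w, x in zip(weights, fused))

def toy_activate(score):
    # placeholder for the fully connected activation layer (sigmoid here)
    return 1.0 / (1.0 + math.exp(-score))

fused = toy_fuse([0.5, 1.0], [0.3, -0.2])  # -> [0.8, 0.8]
first_trigger_probability = toy_activate(toy_perceive(fused, [0.7, 0.4]))
print(round(first_trigger_probability, 3))  # 0.707
```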
Further, the computer device may obtain a second future attribute feature of the second sample object, which is associated with the second historical trigger attribute information at a future time, wherein a principle of obtaining the second future attribute feature is the same as a principle of obtaining the first future attribute feature, and details thereof are not repeated here. In the initial discriminator, the second trigger probability of the second sample object for the candidate sample media data is output according to the general attribute feature and the second future attribute feature of the second sample object for the candidate sample media data, and the second trigger probability may also be output based on the model structure of the initial discriminator shown in fig. 5a, that is, the principle of the initial discriminator for outputting the second trigger probability is the same as the principle of outputting the first trigger probability, and details thereof are not repeated here.
Optionally, please refer to fig. 5b, which is a schematic structural diagram of an initial generator according to an embodiment of the present application. The initial generator shown in fig. 5b may be used to output the first generation trigger probability mentioned in step S103. The initial generator may also include a feature convolution layer, a neural network perception layer, and a fully connected activation layer, and the network structure of each layer may be the same as in the initial discriminator; the difference is that the data input to the feature convolution layer of the initial generator includes only the generic attribute feature. Taking the second sample media data Sj of the at least two second sample media data as an example, the initial generator may perform convolution processing, through the feature convolution layer, on the generic attribute feature of the second sample object for Sj to obtain a second sample convolution feature, and input it into the neural network perception layer. The neural network perception layer performs weighted conversion processing on the second sample convolution feature to obtain the corresponding feature to be activated, which the initial generator then inputs into the fully connected activation layer; activation processing there yields the first generation trigger probability of the second sample object for the second sample media data Sj.
For example, let Up' represent the object attribute information of the second sample object, Uc' the trigger record information of the second sample object, Ci' the distribution environment information of the second sample media data Sj, Di' the media attribute information corresponding to the second sample media data triggered by the second sample object, and d_t' the currently identified second sample media data Sj; s_t' may refer to the feature to be activated corresponding to Sj, and q_G to the first generation trigger probability for Sj. The second sample convolution feature may then be f_i^G = AFN(Up', Uc', Ci', Di'), the feature to be activated may be s_t^G = MLP(GRU({f_1^G, …, f_{t-1}^G})), and the first generation trigger probability may be q_G(s_t', a_t') = RELU(MLP(Concat(s_t^G, d_t'))).
Step S105, adjusting model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network, and determining a target generator in the target adversarial network as a probability prediction model for predicting the trigger probability of the target object for media data.
Specifically, the computer device may determine a first loss value for each first sample media data according to the first trigger probability and the real trigger tag of that media data, and may determine a second loss value for the candidate sample media data based on the second trigger probability. The model parameters of the initial discriminator are then adjusted according to the adversarial discrimination loss function, the first loss value, and the second loss value, and the model parameters of the initial generator are adjusted according to the adversarial generation loss function, the first loss value, and the second loss value. If the adjusted initial discriminator and the adjusted initial generator satisfy the model convergence condition, they may be determined as the target adversarial network, and the target generator in the target adversarial network may further be determined as a probability prediction model for predicting the trigger probability of the target object for media data; for example, as shown in fig. 2, the probability prediction model may be used to recommend media data to the target object.
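The opposed maximize/minimize adjustment of discriminator and generator can be sketched with a deliberately toy model: one scalar parameter each, a shared combined loss, and finite-difference gradients. This is a schematic of the adversarial training dynamic only, not the patent's network; all values are invented.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def combined_loss(d_param, g_param):
    # first loss value: discriminator should score a real (triggered) sample high;
    # second loss value: it should score the generator's candidate sample low
    real_score = sigmoid(d_param)
    fake_score = sigmoid(d_param * g_param)
    return math.log(real_score) + math.log(1.0 - fake_score)

def grad(f, x, eps=1e-5):
    # numerical gradient, to keep the sketch dependency-free
    return (f(x + eps) - f(x - eps)) / (2 * eps)

d_param, g_param, lr = 0.5, 2.0, 0.05
for _ in range(50):
    # discriminator ascends (maximizes) the combined loss...
    d_param += lr * grad(lambda d: combined_loss(d, g_param), d_param)
    # ...while the generator descends (minimizes) it, i.e. tries to fool D
    g_param -= lr * grad(lambda g: combined_loss(d_param, g), g_param)
```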
For example, if σ (h)φ(u,d|fc,ff) Is) represents a first probability of triggering,
Figure BDA0003451200030000191
may refer to a first loss value for at least two first sample media data,
Figure BDA0003451200030000192
may refer to a first loss value for a positive sample media data (i.e. a first sample media data triggered by a first sample object) of the at least two first sample media data,
Figure BDA0003451200030000193
may refer to a first loss value for negative sample media data (i.e., first sample media data that has not been triggered by a first sample object) of the at least two first sample media data.
Figure BDA0003451200030000194
May be a second loss value for the candidate sample media data, wherein the loss function against the network may be:
Figure BDA0003451200030000195
the loss functions of the network include a countermeasure discrimination loss function and a countermeasure generation loss function. It is understood that the confrontation discriminant loss function is a model optimization objective with the maximization of the sum of the first loss value and the second loss value, and therefore, the confrontation discriminant loss function may be:
Figure BDA0003451200030000196
Figure BDA0003451200030000197
Figure BDA0003451200030000201
the countering generation loss function is a model optimization objective that minimizes a sum of the first loss value and the second loss value, and thus, the countering generation loss function may be:
Figure BDA0003451200030000202
Figure BDA0003451200030000203
Optionally, the loss function of the countermeasure network may also be:

L_{G,D} = min_θ max_φ [ Σ_{a_t∼E_real} ( y_t·log D_φ(s_t, a_t) + (1 − y_t)·log(1 − D_φ(s_t, a_t)) ) + Σ_{a_t∼G} log(1 − D_φ(s_t, a_t)) ]

wherein D_φ(s_t, a_t) may be the trigger probability output by the initial discriminator for the state s_t and the sample media data a_t, and y_t may be determined based on the real trigger tag (for example, y_t = 1 if the sample media data carries a triggered tag, and y_t = 0 otherwise). The loss function L_{G,D} of the network includes a countermeasure discrimination loss function and a countermeasure generation loss function. It will be appreciated that Σ_{a_t∼E_real} ( y_t·log D_φ(s_t, a_t) + (1 − y_t)·log(1 − D_φ(s_t, a_t)) ) may be the first loss value of the countermeasure discrimination loss function, and Σ_{a_t∼G} log(1 − D_φ(s_t, a_t)) may be the second loss value of the countermeasure discrimination loss function. The countermeasure discrimination loss function takes the maximization of the sum of the first loss value and the second loss value as the model optimization target, so the countermeasure discrimination loss function may be:

φ* = argmax_φ [ Σ_{a_t∼E_real} ( y_t·log D_φ(s_t, a_t) + (1 − y_t)·log(1 − D_φ(s_t, a_t)) ) + Σ_{a_t∼G} log(1 − D_φ(s_t, a_t)) ]
The countermeasure generation loss function takes the minimization of the sum of the first loss value and the second loss value as the model optimization target; since only the second loss value depends on the generator, the countermeasure generation loss function may be:

θ* = argmin_θ Σ_{a_t∼G} log(1 − D_φ(s_t, a_t))
Optionally, after the target countermeasure network is generated, the target generator in the target countermeasure network may be further optimized through reinforcement learning. P_θ(d | u, f_c) may be the first trigger probability of the first sample object u for the first sample media data d, f_c may be the general attribute feature of the first sample object u for the first sample media data d, and f_f may be the first future attribute feature of the first sample object u. The optimization function may be a policy-gradient objective in which P_θ(d | u, f_c) acts as the policy and the discriminator score provides the reward, for example:

J(θ) = Σ_u E_{d∼P_θ(d|u, f_c)} [ log(1 + exp(h_φ(u, d | f_c, f_f))) ]
the target generator is further adjusted based on the optimization function, and the adjusted target generator is determined as a probability prediction model for predicting a trigger probability of the target object for the media data.
It can be understood that, in specific implementations of the present application, where any relevant data of the first sample object, the second sample object, or the target object is obtained, user permission or consent needs to be obtained when the above embodiments of the present application are applied to specific products or technologies, and the collection, use, and processing of the relevant data need to comply with the relevant laws, regulations, and standards of the relevant countries and regions.
In this embodiment of the application, the general attribute feature of the first sample object corresponding to each first sample media data is determined according to the first history trigger attribute information associated with the first sample object and the media attribute information corresponding to the at least two first sample media data, and the first future attribute feature of the first sample object at a future time, associated with the first history trigger attribute information, may further be obtained. In the process of training the initial confrontation network, the initial discriminator can output the first trigger probability of the first sample object for each first sample media data based on the general attribute feature and the first future attribute feature, and can also output the second trigger probability corresponding to the candidate sample media data provided by the initial generator. The model parameters of the initial confrontation network are adjusted according to the first trigger probability and the second trigger probability to obtain the target confrontation network, and the target generator in the target confrontation network is determined as a probability prediction model for predicting the trigger probability of the target object for the media data. Because the first future attribute feature at the future time is introduced in the embodiment of the application, the probability prediction model obtained through training can better mine media data that matches the target object and is richer in type; therefore, the types of the recommended media data can be enriched, and the recommendation quality is improved.
Further, a specific implementation of step S105 may refer to fig. 6, which is a schematic flowchart of a data processing method according to an embodiment of the present application. As shown in fig. 6, the data processing method may include at least the following steps S201 to S205.
Step S201, determining a first loss value for the first sample media data according to the first trigger probability and the real trigger tag for the first sample media data; determining a second loss value for the candidate sample media data according to the second trigger probability; adjusting the model parameters of the initial discriminator according to the confrontation discrimination loss function, the first loss value and the second loss value; and adjusting the model parameters of the initial generator according to the countermeasure generation loss function, the first loss value and the second loss value.
The specific process of this step may refer to the specific description of step S105 in the embodiment corresponding to fig. 3, and is not described here again.
Step S202, determining general attribute characteristics of the second sample object corresponding to each second sample media data according to second history trigger attribute information associated with the second sample object and media attribute information corresponding to at least two second sample media data;
Step S203, inputting the general attribute features of the second sample object corresponding to each second sample media data into the initial generator, and generating, through the initial generator, a first generation trigger probability of the second sample object corresponding to each second sample media data;
for specific implementation manners of S202 and S203, reference may be made to the process of generating the general attribute features corresponding to each second sample media data of the second sample object in S103 in the embodiment corresponding to fig. 3, and the process of generating the first generation trigger probability, which is not described herein again.
Step S204, determining a third loss value for the second sample media data according to the first generation trigger probability and the real trigger tag for the second sample media data;
Specifically, the real trigger tag of any one of the second sample media data may be a tag indicating that the second sample object clicked it or a tag indicating that the second sample object did not click it; second sample media data carrying real trigger tags are denoted a_t′ ∼ E_real. According to the first generation trigger probability q_G(s_t′, a_t′) corresponding to each second sample media data (for q_G(s_t′, a_t′), reference may be made to the corresponding description of fig. 5b above) and the real trigger tag of each second sample media data, a third loss value for each second sample media data may be determined, for example as the squared error

( q_G(s_t′, a_t′) − y_t′ )²

where y_t′ may be determined based on the real trigger tag (for example, y_t′ = 1 if the second sample media data was clicked by the second sample object, and y_t′ = 0 otherwise).
Step S205, adjusting the model parameters of the adjusted initial generator according to the supervision loss function and the third loss value to obtain a target generator, and determining the target generator and the adjusted initial discriminator as a target countermeasure network.
Specifically, the real trigger tags of the second sample object are used as supervision information to obtain the supervision loss function, which may be a mean squared error over the second sample media data:

L_MSE = Σ_{t′} ( q_G(s_t′, a_t′) − y_t′ )²

Thus, the model parameters of the adjusted initial generator may be further adjusted based on the loss value of L_MSE to obtain the target generator.
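Under a squared-error reading of the supervision loss, L_MSE can be sketched as follows (the mean-vs-sum normalization and the 0/1 labels are assumptions, not the patent's exact definition):

```python
def supervised_mse_loss(gen_probs, trigger_labels):
    # gen_probs: first generation trigger probabilities q_G for the
    # second sample media data; trigger_labels: 0/1 real trigger tags y'.
    assert len(gen_probs) == len(trigger_labels)
    return sum((q - y) ** 2 for q, y in zip(gen_probs, trigger_labels)) / len(gen_probs)
```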
In the embodiment of the application, the general attribute feature and the future attribute feature are input into the initial confrontation network, and joint optimization is then performed through the supervision loss function, so that a more robust probability prediction model can be obtained. Because the first future attribute feature at the future time is introduced, the probability prediction model obtained through training can better mine media data that matches the target object and is richer in type. Further, using the real trigger tags of the second sample object as a supervision feature can increase the relevance between the probability prediction model and the sample objects; therefore, the types of the recommended media data can be enriched, and the recommendation quality is improved.
Further, a specific implementation of step S105 may refer to fig. 7, which is a schematic flowchart of a data processing method according to an embodiment of the present application. As shown in fig. 7, the data processing method may include at least the following steps S301 to S305.
Step S301, determining a first loss value for the first sample media data according to the first trigger probability and the real trigger tag for the first sample media data; determining a second loss value for the candidate sample media data according to the second trigger probability; adjusting the model parameters of the initial discriminator according to the confrontation discrimination loss function, the first loss value and the second loss value; and adjusting the model parameters of the initial generator according to the countermeasure generation loss function, the first loss value and the second loss value.
The specific process of this step may refer to the specific description of step S105 in the embodiment corresponding to fig. 3, and is not described here again.
Step S302, inputting the general attribute features of the first sample object corresponding to each first sample media data into the initial generator, and generating, through the initial generator, a second generation trigger probability of the first sample object corresponding to each first sample media data;
specifically, the principle that the initial generator outputs the second generation trigger probability corresponding to the first sample media data may be the same as the principle that the initial generator outputs the first generation trigger probability corresponding to the second sample media data in the embodiment corresponding to fig. 5b, and details are not repeated here.
Step S303, acquiring first hidden features of the first sample object corresponding to each first sample media data from the initial generator, and acquiring second hidden features of the first sample object corresponding to each first sample media data from the initial discriminator;

specifically, the first hidden features of the first sample object corresponding to each first sample media data may be acquired from the initial generator, where a first hidden feature is the to-be-activated feature in fig. 5b described above. Likewise, the second hidden features of the first sample object corresponding to each first sample media data may be acquired from the initial discriminator, where a second hidden feature is the to-be-activated feature in fig. 5a described above.
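Step S303 amounts to reading out an intermediate activation rather than only the final probability. The toy model below makes that concrete by returning both; the single linear layer and the scalar hidden value are illustrative assumptions, not the patent's architecture.

```python
import math

class TinyScorer:
    """Stand-in for either the initial generator or the initial
    discriminator: a linear layer produces the 'to-be-activated'
    (hidden) feature, and a sigmoid activation produces the
    trigger probability."""

    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def forward(self, features):
        hidden = sum(w * x for w, x in zip(self.weights, features)) + self.bias
        prob = 1.0 / (1.0 + math.exp(-hidden))
        return prob, hidden  # expose the hidden feature for distillation
```

During knowledge distillation, the hidden values returned by the generator-side and discriminator-side models play the role of the first and second hidden features.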
Step S304, adjusting the model parameters of the adjusted initial generator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature, to obtain a target generator;
specifically, a knowledge distillation loss function L_KD may be obtained, in which the error between the first trigger probability and the second generation trigger probability is expressed as one term of the knowledge distillation loss function, and the error between the first hidden feature and the second hidden feature is expressed as another term of the knowledge distillation loss function. Based on the loss value generated by the knowledge distillation loss function L_KD, the model parameters of the adjusted initial generator can be adjusted to obtain the target generator.
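One common way to combine the two error terms is a weighted sum of squared differences; the sketch below is such an assumed form of L_KD, not the patent's exact definition (the weights alpha and beta are hypothetical):

```python
def kd_loss(teacher_probs, student_probs, teacher_hiddens, student_hiddens,
            alpha=1.0, beta=1.0):
    # Term 1: error between the discriminator's (teacher's) trigger
    # probabilities and the generator's (student's) generation trigger
    # probabilities.
    prob_term = sum((t - s) ** 2 for t, s in zip(teacher_probs, student_probs))
    # Term 2: error between the corresponding hidden (to-be-activated)
    # feature vectors.
    feat_term = sum(
        sum((ht - hs) ** 2 for ht, hs in zip(vt, vs))
        for vt, vs in zip(teacher_hiddens, student_hiddens)
    )
    return alpha * prob_term + beta * feat_term
```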
Step S305, according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature, adjusting the model parameters of the adjusted initial discriminator to obtain a target discriminator, and determining the target generator and the target discriminator as a target countermeasure network.
Specifically, the model parameters of the adjusted initial discriminator may likewise be adjusted based on the loss value generated by the knowledge distillation loss function L_KD to obtain the target discriminator, and the target generator and the target discriminator are determined as the target countermeasure network.
Optionally, the loss value generated by the knowledge distillation loss function L_KD may also be obtained in another way: the general attribute features and the second future attribute features of the second sample object corresponding to each second sample media data S_j are input into the initial discriminator, and a third trigger probability of the second sample object corresponding to each second sample media data S_j is generated through the initial discriminator (the principle by which the initial discriminator generates the third trigger probability is the same as that by which it generates the first trigger probability, and is not repeated here); third hidden features of the second sample object corresponding to each second sample media data S_j are acquired from the initial generator, and fourth hidden features of the second sample object corresponding to each second sample media data S_j are acquired from the initial discriminator; the model parameters of the adjusted initial generator are adjusted according to the error between the third trigger probability and the first generation trigger probability and the error between the third hidden feature and the fourth hidden feature, to obtain the target generator; and the model parameters of the adjusted initial discriminator are adjusted according to the same errors to obtain the target discriminator, and the target generator and the target discriminator are determined as the target countermeasure network. In this case, the error between the third trigger probability and the first generation trigger probability is expressed as one term of the knowledge distillation loss function, and the error between the third hidden feature and the fourth hidden feature is expressed as another term of the knowledge distillation loss function.
In the embodiment of the application, the general attribute feature and the future attribute feature are input into the initial confrontation network, and joint optimization is then performed through the supervision loss function and the knowledge distillation optimization function, so that a more robust probability prediction model can be obtained. Because the first future attribute feature at the future time is introduced, the probability prediction model obtained through training can better mine media data that matches the target object and is richer in type. Further, knowledge distillation is performed with the initial discriminator as the teacher model and the initial generator as the student model to update the initial generator, which can improve the prediction accuracy and precision of the probability prediction model; therefore, the types of the recommended media data can be enriched, and the recommendation quality is improved.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an initial countermeasure network according to an embodiment of the present application. As shown in fig. 8, the initial confrontation network can be jointly optimized and trained through confrontation training, supervision loss function optimization, and knowledge distillation. As shown in fig. 8, the confrontation training of the initial confrontation network can be realized through the general attribute feature, the first future attribute feature, the first generation trigger probability, the first trigger probability, and the second trigger probability; for example, the confrontation training can be realized through the loss function L_{G,D} of the countermeasure network in step S105 of the embodiment corresponding to fig. 3. Further, the initial generator and the initial discriminator can be jointly optimized through confrontation training, knowledge distillation, and the supervision loss function. In fig. 8, the supervision loss function L_MSE in the embodiment corresponding to fig. 6 can also be used to further optimize and adjust the initial generator, and the knowledge distillation loss function L_KD in the embodiment corresponding to fig. 7 can also be used to optimize and adjust the initial generator and the initial discriminator respectively. The joint optimization function for the initial countermeasure network may be L1 = λ1·L_{G,D} + λ2·L_MSE. The joint optimization function L1 can be understood as: performing model parameter adjustment on the initial generator based on λ1·L_G + λ2·L_MSE, and performing model parameter adjustment on the initial discriminator based on λ1·L_D.
Optionally, the joint optimization function for the initial countermeasure network may also be L2 = λ1·L_{G,D} + λ3·L_KD. The joint optimization function L2 can be understood as: performing model parameter adjustment on the initial generator based on λ1·L_G + λ3·L_KD, performing model parameter adjustment on the initial discriminator based on λ1·L_D, and further optimizing the adjusted initial discriminator based on λ3·L_KD. Optionally, the joint optimization function for the initial countermeasure network may also be L = λ1·L_{G,D} + λ2·L_MSE + λ3·L_KD, which can be understood as: performing model parameter adjustment on the initial generator based on λ1·L_G + λ2·L_MSE + λ3·L_KD, performing model parameter adjustment on the initial discriminator based on λ1·L_D, and further optimizing the adjusted initial discriminator based on λ3·L_KD. Here, λ1, λ2, and λ3 are respectively the weight coefficients in the joint optimization function. Model training for the initial countermeasure network can be implemented through the joint optimization function L1, L2, or L.
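The asymmetric weighting described above (the generator sees the adversarial, supervision, and distillation terms, while the discriminator sees only its adversarial and distillation terms) can be sketched as two small helpers; the default coefficients are placeholders, not values from the patent.

```python
def generator_joint_loss(l_g, l_mse, l_kd, lam1=1.0, lam2=0.5, lam3=0.5):
    # lambda1 * L_G + lambda2 * L_MSE + lambda3 * L_KD drives the
    # generator's parameter update.
    return lam1 * l_g + lam2 * l_mse + lam3 * l_kd

def discriminator_joint_loss(l_d, l_kd, lam1=1.0, lam3=0.5):
    # lambda1 * L_D adjusts the discriminator; lambda3 * L_KD further
    # optimizes the adjusted discriminator.
    return lam1 * l_d + lam3 * l_kd
```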
The target countermeasure network may be referred to as future attribute feature based countermeasure modeling (AFE). The target countermeasure network may be deployed based on the basic framework of a deep-learning-based list-wise recommendation model, or may be deployed based on the basic framework of another model; for example, a point-wise recommendation model based on an automatic feature interaction network may be used to deploy the target generator and the target discriminator of the target countermeasure network. The target countermeasure network has universality for different frameworks.
In this embodiment of the application, the general attribute feature of the first sample object corresponding to each first sample media data is determined according to the first history trigger attribute information associated with the first sample object and the media attribute information corresponding to the at least two first sample media data, and the first future attribute feature of the first sample object at a future time, associated with the first history trigger attribute information, may further be obtained. In the process of training the initial confrontation network, the initial discriminator can output the first trigger probability of the first sample object for each first sample media data based on the general attribute feature and the first future attribute feature, and can also output the second trigger probability corresponding to the candidate sample media data provided by the initial generator. The model parameters of the initial confrontation network are adjusted according to the first trigger probability and the second trigger probability to obtain the target confrontation network, and the target generator in the target confrontation network is determined as a probability prediction model for predicting the trigger probability of the target object for the media data. Because the first future attribute feature at the future time is introduced in the embodiment of the application, the probability prediction model obtained through training can better mine media data that matches the target object and is richer in type; therefore, the types of the recommended media data can be enriched, and the recommendation quality is improved.
Further, please refer to fig. 9, where fig. 9 is a schematic structural diagram of a data processing apparatus according to an embodiment of the present application. The data processing means may be a computer program (including program code) running on a computer device, for example, an application software; the apparatus may be used to perform the corresponding steps in the methods provided by the embodiments of the present application. As shown in fig. 9, the data processing apparatus 1 may include: a general feature determination module 11, a future feature acquisition module 12, a candidate sample screening module 13, a first probability triggering module 14, a second probability triggering module 15, a confrontation model adjustment module 16, a history information triggering module 17, and a media information determination module 18.
A general characteristic determining module 11, configured to determine, according to first history trigger attribute information associated with the first sample object and media attribute information corresponding to at least two first sample media data, a general attribute characteristic corresponding to each first sample media data of the first sample object;
a future characteristic obtaining module 12, configured to obtain a first future attribute characteristic of the first sample object at a future time, where the first future attribute characteristic is associated with the first historical trigger attribute information;
a candidate sample screening module 13, configured to screen candidate sample media data from the at least two second sample media data through an initial generator in the initial confrontation network, where the initial generator sends the candidate sample media data to an initial discriminator in the initial confrontation network;
a first probability triggering module 14, configured to output, in the initial arbiter, a first triggering probability of the first sample object for each first sample media data according to the general attribute feature and the first future attribute feature respectively corresponding to the first sample object for each first sample media data;
a second probability triggering module 15, configured to output a second triggering probability for the candidate sample media data in the initial discriminator;
and the confrontation model adjusting module 16 is configured to adjust a model parameter of the initial confrontation network according to the first trigger probability and the second trigger probability to obtain a target confrontation network, and determine a target generator in the target confrontation network as a probability prediction model for predicting the trigger probability of the target object for the media data.
Specific functional implementation manners of the general feature determining module 11, the future feature obtaining module 12, the candidate sample screening module 13, the first probability triggering module 14, the second probability triggering module 15, and the confrontation model adjusting module 16 may refer to steps S101 to S105 in the embodiment corresponding to fig. 3, which are not described herein again.
Referring again to fig. 9, the at least two first sample media data include first sample media data S_i, where i is a positive integer;
the data processing apparatus 1 may further include: a history information triggering module 17 and a media information determining module 18.
A history information triggering module 17, configured to determine object attribute information of the first sample object, trigger record information of the first sample object, and distribution environment information of history triggered first sample media data as first history trigger attribute information associated with the first sample object; the history triggering first sample media data is the first sample media data recorded in the triggering record information;
the media information determining module 18 is configured to determine the media data tag, the media data content category, and the media data scene category of the first sample media data Si as media attribute information corresponding to the first sample media data Si.
The specific functional implementation manners of the history information triggering module 17 and the media information determining module 18 may refer to step S101 in the embodiment corresponding to fig. 3, which is not described herein again.
Referring again to fig. 9, the candidate sample screening module 13 includes:
a feature determining unit 131, configured to determine, according to second history trigger attribute information associated with the second sample object and media attribute information corresponding to at least two second sample media data, a general attribute feature corresponding to each second sample media data of the second sample object;
a generator generating unit 132, configured to input the second sample object into an initial generator in the initial confrontation network for a universal attribute feature corresponding to each second sample media data, and generate, by the initial generator, a first generation trigger probability corresponding to each second sample media data for the second sample object;
a matching sample screening unit 133, configured to screen S matching samples from the at least two second sample media data according to the second historical trigger attribute information; the S matched samples are second sample media data which are not triggered by a second sample object; s is a positive integer;
the candidate sample obtaining unit 134 is configured to rank the S matching samples according to the first generation trigger probability to obtain ranked S matching samples, and obtain K matching samples from the ranked S matching samples as candidate sample media data; k is a positive integer less than or equal to S.
For specific functional implementation manners of the feature determining unit 131, the generator generating unit 132, the matching sample screening unit 133, and the candidate sample acquiring unit 134, reference may be made to step S103 in the embodiment corresponding to fig. 3, which is not described herein again.
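The screening-and-ranking behaviour of units 133 and 134 can be sketched as a single helper (the identifiers and data shapes are hypothetical): filter out second sample media data already triggered by the second sample object, rank the remaining S matching samples by first generation trigger probability, and keep the top K as candidate sample media data.

```python
def screen_candidates(media_ids, gen_trigger_probs, triggered_ids, k):
    # S matching samples: second sample media data NOT triggered by the
    # second sample object.
    matching = [(prob, mid) for mid, prob in zip(media_ids, gen_trigger_probs)
                if mid not in triggered_ids]
    # Rank by first generation trigger probability, descending.
    matching.sort(key=lambda pair: pair[0], reverse=True)
    # K <= S matching samples become the candidate sample media data.
    return [mid for _, mid in matching[:k]]
```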
Referring to fig. 9, the initial discriminator includes a feature convolution layer, a neural network sensing layer and a full-connection activation layer;
a first probability triggering module 14 comprising:
the feature convolution unit 141 is configured to perform convolution fusion processing on the first sample object for the general attribute feature and the first future attribute feature corresponding to the first sample media data through the feature convolution layer to obtain a first sample convolution fusion feature, and input the first sample convolution fusion feature into the neural network sensing layer;
the feature conversion unit 142 is configured to perform feature conversion on the first sample convolution fusion feature through the neural network sensing layer to obtain a to-be-activated feature, and input the to-be-activated feature into the full-connection activation layer;
the feature activation unit 143 is configured to activate, through the full connection activation layer, the feature to be activated, and obtain a first trigger probability of the first sample object for the first sample media data.
The specific functional implementation manners of the feature convolution unit 141, the feature conversion unit 142, and the feature activation unit 143 may refer to step S104 in the embodiment corresponding to fig. 3, which is not described herein again.
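The three-stage flow of units 141 to 143 (feature convolution layer, then neural network sensing layer, then full-connection activation layer) can be sketched end to end. The elementwise blend standing in for convolution fusion and the single perceptron standing in for the sensing layer are simplifying assumptions, not the patent's layers.

```python
import math

def discriminator_forward(general_feat, future_feat, fuse_w, sense_w, bias):
    # (1) Feature convolution layer: fuse the general attribute feature
    # with the (first) future attribute feature.
    fused = [g * w + f * (1.0 - w)
             for g, f, w in zip(general_feat, future_feat, fuse_w)]
    # (2) Neural network sensing layer: convert the fused feature into
    # the to-be-activated feature.
    to_activate = sum(x * w for x, w in zip(fused, sense_w)) + bias
    # (3) Full-connection activation layer: produce the first trigger
    # probability.
    return 1.0 / (1.0 + math.exp(-to_activate))
```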
Referring to fig. 9 again, the second probabilistic triggering module 15 includes:
a feature obtaining unit 151, configured to obtain a second future attribute feature, which is associated with the second historical trigger attribute information, of the second sample object at a future time;
a trigger probability output unit 152, configured to output, in the initial discriminator, a second trigger probability of the second sample object for the candidate sample media data according to the general attribute feature and the second future attribute feature corresponding to the candidate sample media data of the second sample object.
The specific functional implementation manners of the feature obtaining unit 151 and the trigger probability output unit 152 may refer to step S104 in the embodiment corresponding to fig. 3, which is not described herein again.
Referring again to fig. 9, the confrontation model adjustment module 16 includes:
a first loss determining unit 161, configured to determine a first loss value for the first sample media data according to the first trigger probability and the real trigger tag for the first sample media data;
a second loss determining unit 162 for determining a second loss value for the candidate sample media data according to a second trigger probability;
a discriminant model adjustment unit 163 for adjusting the model parameters of the initial discriminator according to the confrontation discriminant loss function, the first loss value, and the second loss value;
a generative model adjustment unit 164 for adjusting model parameters of the initial generator according to the antagonistic generative loss function, the first loss value, and the second loss value;
and a competing network generating unit 165 for generating a target competing network according to the adjusted initial discriminator and the adjusted initial generator.
The specific functional implementation manners of the first loss determining unit 161, the second loss determining unit 162, the discriminant model adjusting unit 163, the generative model adjusting unit 164, and the opposing network generating unit 165 may refer to step S105 in the foregoing corresponding embodiment in fig. 3, and are not described herein again.
Referring to fig. 9 again, the countering network generation unit 165 includes:
a feature determining subunit 1651, configured to determine, according to second historical trigger attribute information associated with a second sample object and media attribute information corresponding to each of at least two second sample media data, a general attribute feature of the second sample object for each second sample media data;
a first probability generating subunit 1652, configured to input the general attribute feature of the second sample object for each second sample media data into the initial generator, and generate, through the initial generator, a first generation trigger probability of the second sample object for each second sample media data;
a loss determining subunit 1653, configured to determine a third loss value for the second sample media data according to the first generation trigger probability and the real trigger tag of the second sample media data;
a first network determining subunit 1654, configured to adjust the model parameters of the adjusted initial generator according to a supervision loss function and the third loss value to obtain a target generator, and determine the target generator and the adjusted initial discriminator as the target adversarial network.
For the specific functional implementations of the feature determining subunit 1651, the first probability generating subunit 1652, the loss determining subunit 1653, and the first network determining subunit 1654, reference may be made to steps S201 to S204 in the embodiment corresponding to fig. 6, which is not described herein again.
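The supervision loss handled by subunits 1653 and 1654 can be sketched as a binary cross-entropy between the generator's first generation trigger probabilities and the real trigger tags; the function name and exact formulation are assumptions for illustration:

```python
import math

def supervised_loss(gen_trigger_probs, real_trigger_tags):
    # Binary cross-entropy between the generator's first generation trigger
    # probabilities and the real trigger tags of the second sample media
    # data (the third loss value in the text above).
    eps = 1e-12  # numerical guard for log(0)
    return -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for p, y in zip(gen_trigger_probs, real_trigger_tags)
    ) / len(gen_trigger_probs)
```

Subunit 1654 would then fine-tune the already adversarially adjusted generator by minimising this value.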
Referring to fig. 9 again, the adversarial network generating unit 165 includes:
a second probability generating subunit 1655, configured to input the general attribute features of the first sample object for each first sample media data into the initial generator, and generate, through the initial generator, second generation trigger probabilities of the first sample object for each first sample media data;
a hidden feature obtaining subunit 1656, configured to obtain, from the initial generator, first hidden features of the first sample object for each first sample media data, and obtain, from the initial discriminator, second hidden features of the first sample object for each first sample media data;
a generation model adjusting subunit 1657, configured to adjust the model parameters of the adjusted initial generator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature, to obtain a target generator;
a second network determining subunit 1658, configured to adjust the model parameters of the adjusted initial discriminator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature to obtain a target discriminator, and determine the target generator and the target discriminator as the target adversarial network.
For the specific functional implementations of the second probability generating subunit 1655, the hidden feature obtaining subunit 1656, the generation model adjusting subunit 1657, and the second network determining subunit 1658, reference may be made to step S204 in the embodiment corresponding to fig. 6, which is not described herein again.
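The adjustment performed by subunits 1657 and 1658 resembles knowledge distillation: the generator and discriminator are pulled together both at the output level (trigger probabilities) and at the hidden-feature level. A minimal sketch, assuming squared-error terms and a hypothetical weighting factor `alpha` (neither is specified by the text above):

```python
def distillation_loss(teacher_probs, student_probs,
                      teacher_hidden, student_hidden, alpha=0.5):
    # Output-level term: squared error between the discriminator's first
    # trigger probabilities and the generator's second generation trigger
    # probabilities.
    prob_err = sum(
        (t - s) ** 2 for t, s in zip(teacher_probs, student_probs)
    ) / len(teacher_probs)
    # Feature-level term: mean squared error between the second hidden
    # features (from the discriminator) and the first hidden features
    # (from the generator), averaged over samples.
    feat_err = sum(
        sum((a - b) ** 2 for a, b in zip(ht, hs)) / len(ht)
        for ht, hs in zip(teacher_hidden, student_hidden)
    ) / len(teacher_hidden)
    # alpha is a hypothetical factor balancing the two error terms.
    return alpha * prob_err + (1 - alpha) * feat_err
```

The loss is zero only when the two models agree on both probabilities and hidden features, which matches the stated goal of aligning the generator with the discriminator.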
In the embodiment of the present application, the general attribute features and the future attribute features associated with the target media data are input into the initial adversarial network, and joint optimization is then performed with a supervision loss function and a knowledge distillation optimization function, so that a more robust recommendation model can be obtained. The method provided by the embodiment of the present application constructs, based on adversarial modeling of future attribute features (AFE), a general framework applicable to different scenarios, in which various future attribute features help improve the recommendation effect of different recommendation models. A set of joint optimization functions is designed, combining adversarial learning, the supervision loss function, and the knowledge distillation optimization function, which greatly improves the stability of model training. The method provided by the embodiment of the present application can enrich the types of recommended media data and improve the recommendation quality.
Further, please refer to fig. 10, which is a schematic structural diagram of a computer device according to an embodiment of the present application. As shown in fig. 10, the computer device 1000 may include: at least one processor 1001 (such as a CPU), at least one network interface 1004, a user interface 1003, a memory 1005, and at least one communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display and a keyboard, and the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as at least one disk memory. Optionally, the memory 1005 may also be at least one storage device located remotely from the processor 1001. As shown in fig. 10, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a device control application program.
In the computer device 1000 shown in fig. 10, the network interface 1004 may provide a network communication function, the user interface 1003 is mainly an interface for receiving user input, and the processor 1001 may be used to invoke the device control application stored in the memory 1005 to implement:
determining general attribute features of the first sample object for each first sample media data according to first historical trigger attribute information associated with the first sample object and media attribute information corresponding to each of at least two first sample media data;
acquiring a first future attribute feature of the first sample object that is associated with the first historical trigger attribute information at a future time;
screening out candidate sample media data from at least two second sample media data through an initial generator in an initial adversarial network, wherein the initial generator sends the candidate sample media data to an initial discriminator in the initial adversarial network;
outputting, in the initial discriminator, a first trigger probability of the first sample object for each first sample media data according to the general attribute feature and the first future attribute feature of the first sample object for each first sample media data, and outputting, in the initial discriminator, a second trigger probability for the candidate sample media data;
and adjusting the model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network, and determining a target generator in the target adversarial network as a probability prediction model for predicting the trigger probability of a target object for media data.
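The candidate screening step performed by the initial generator (filtering out already-triggered media data, ranking the remaining matched samples by their generation trigger probability, and keeping the top K, as detailed in claim 3 below) can be sketched as follows; the function and parameter names are assumptions:

```python
def screen_candidates(media_ids, gen_trigger_probs, triggered_ids, k):
    # Matched samples: second sample media data that the second sample
    # object has NOT triggered yet.
    matched = [(m, p) for m, p in zip(media_ids, gen_trigger_probs)
               if m not in triggered_ids]
    # Sort by the first generation trigger probability, highest first,
    # and keep the top K as candidate sample media data (K <= S).
    matched.sort(key=lambda mp: mp[1], reverse=True)
    return [m for m, _ in matched[:k]]
```

The selected candidates would then be sent to the initial discriminator, which outputs the second trigger probabilities used in the adversarial parameter updates.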
The first sample object may be an object operating on the target application. The first historical trigger attribute information may be information obtained by analyzing the historical behavior of the first sample object. For example, if the first sample object has recently clicked video A, video B, and video C, the first historical trigger attribute information corresponding to the first sample object may be determined based on the object attribute information of the first sample object itself (such as gender, age, and interests) and the browsing duration, click time, and video attribute information (such as video type and video duration) for video A, video B, and video C; that is, the first historical trigger attribute information may be used to represent the recent video-watching habits and points of interest of the first sample object. Further, the first historical trigger attribute information may include the object attribute information of the first sample object, trigger record information of the first sample object, and distribution environment information of history-triggered first sample media data. The trigger record information is a record of the first sample media data triggered by the first sample object, and may specifically include parameters such as the media attribute information of the triggered first sample media data, the trigger time, the browsing duration, the acquisition manner, and the trigger frequency, where the acquisition manner may include recommendation by the service server and retrieval based on a search keyword.
The history-triggered first sample media data is the first sample media data recorded in the trigger record information; accordingly, the distribution environment information may include factors such as the network quality and the geographical area in which the terminal corresponding to the first sample object acquired the history-triggered first sample media data.
The first sample media data may be sample media data triggered by the first sample object, or sample media data not triggered by the first sample object. The media attribute information corresponding to the first sample media data may include a media data tag, a media data content category, and a media data scene category. Taking video as an example, the media data tag may include a comedy tag, a documentary tag, and an emotion tag; the media data content category may include sports, music, literature, history, science and technology, and food; and the media data scene category may include a video support category configured with a multi-function display, an article support category configured with a dedicated screen, an audio support category configured with a speaker, and a composite category combining at least two support categories. Optionally, the media attribute information may further include a data format type, which may include video, audio, article, picture, tweet with pictures, and the like.
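As a sketch, the media attribute information described above could be carried in a simple record type; all field names here are assumptions made for illustration, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass
class MediaAttributeInfo:
    # Illustrative record for the media attribute information of one
    # sample media data item.
    media_data_tag: str        # e.g. "comedy", "documentary", "emotion"
    content_category: str      # e.g. "sports", "music", "food"
    scene_category: str        # e.g. "video support", "audio support"
    data_format_type: str = "video"  # optional: "audio", "article", ...
```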
It should be understood that the computer device 1000 described in this embodiment of the present application may perform the description of the data processing method in the embodiment corresponding to fig. 2, fig. 3, fig. 4, fig. 5a, fig. 5b, and fig. 6, and may also perform the description of the data processing apparatus 1 in the embodiment corresponding to fig. 9, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program, where the computer program includes program instructions which, when executed by a processor, implement the data processing method provided in each step in fig. 2, fig. 3, fig. 4, fig. 5a, fig. 5b, and fig. 6; reference may be made to the implementations provided in those steps, which are not described herein again. In addition, the beneficial effects of the same method are not described in detail.
The computer readable storage medium may be the data processing apparatus provided in any of the foregoing embodiments or an internal storage unit of the computer device, such as a hard disk or a memory of the computer device. The computer readable storage medium may also be an external storage device of the computer device, such as a plug-in hard disk, a Smart Memory Card (SMC), a Secure Digital (SD) card, a flash card (flash card), and the like, provided on the computer device. Further, the computer-readable storage medium may also include both an internal storage unit and an external storage device of the computer device. The computer-readable storage medium is used for storing the computer program and other programs and data required by the computer device. The computer readable storage medium may also be used to temporarily store data that has been output or is to be output.
Embodiments of the present application also provide a computer program product or computer program comprising computer instructions stored in a computer-readable storage medium. The processor of the computer device reads the computer instruction from the computer-readable storage medium, and executes the computer instruction, so that the computer device can perform the description of the data processing method in the embodiments corresponding to fig. 2, fig. 3, fig. 4, fig. 5a, fig. 5b, and fig. 6, which is not described herein again. In addition, the beneficial effects of the same method are not described in detail.
The term "comprises" and any variations thereof in the description, claims, and drawings of the embodiments of the present application are intended to cover non-exclusive inclusion. For example, a process, method, apparatus, or product that comprises a list of steps or modules is not limited to the listed steps or modules, but may optionally include other steps or modules not listed, or steps or modules inherent to such a process, method, apparatus, or product.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the components and steps of the examples have been described above generally in terms of their functions. Whether such functions are implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functions in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The method and the related apparatus provided by the embodiments of the present application are described with reference to the flowchart and/or the structural diagram of the method provided by the embodiments of the present application, and each flow and/or block of the flowchart and/or the structural diagram of the method, and the combination of the flow and/or block in the flowchart and/or the block diagram can be specifically implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block or blocks of the block diagram. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block or blocks of the block diagram. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block or blocks.
The above disclosure describes only preferred embodiments of the present application and is not intended to limit the scope of the claims of the present application; equivalent variations and modifications made in accordance with the claims therefore still fall within the scope of the present application.

Claims (11)

1. A data processing method, comprising:
determining general attribute features of a first sample object for each first sample media data according to first historical trigger attribute information associated with the first sample object and media attribute information corresponding to each of at least two first sample media data;
acquiring a first future attribute feature of the first sample object that is associated with the first historical trigger attribute information at a future time;
screening out candidate sample media data from at least two second sample media data through an initial generator in an initial adversarial network, wherein the initial generator sends the candidate sample media data to an initial discriminator in the initial adversarial network;
outputting, in the initial discriminator, a first trigger probability of the first sample object for each first sample media data according to the general attribute feature and the first future attribute feature of the first sample object for each first sample media data, and outputting, in the initial discriminator, a second trigger probability for the candidate sample media data;
and adjusting model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network, and determining a target generator in the target adversarial network as a probability prediction model for predicting a trigger probability of a target object for media data.
2. The method according to claim 1, wherein the at least two first sample media data comprise first sample media data Si, i being a positive integer; the method further comprises the following steps:
determining object attribute information of the first sample object, trigger record information of the first sample object and distribution environment information of history trigger first sample media data as first history trigger attribute information associated with the first sample object; the history triggering first sample media data is the first sample media data recorded in the triggering record information;
and determining the media data label, the media data content category and the media data scene category of the first sample media data Si as the media attribute information corresponding to the first sample media data Si.
3. The method of claim 1, wherein the screening out of candidate sample media data from at least two second sample media data through an initial generator in an initial adversarial network comprises:
determining general attribute features of a second sample object for each second sample media data according to second historical trigger attribute information associated with the second sample object and media attribute information corresponding to each of the at least two second sample media data;
inputting the general attribute features of the second sample object for each second sample media data into the initial generator in the initial adversarial network, and generating, through the initial generator, a first generation trigger probability of the second sample object for each second sample media data;
screening S matched samples from the at least two second sample media data according to the second historical trigger attribute information; the S matched samples are second sample media data that have not been triggered by the second sample object; S is a positive integer;
and sorting the S matched samples according to the first generation trigger probabilities to obtain S sorted matched samples, and acquiring K matched samples from the S sorted matched samples as the candidate sample media data; K is a positive integer less than or equal to S.
4. The method of claim 1, wherein the initial discriminator comprises a feature convolution layer, a neural network perception layer, and a fully-connected activation layer;
the outputting, in the initial discriminator, of the first trigger probability of the first sample object for the first sample media data according to the general attribute feature and the first future attribute feature of the first sample object for the first sample media data comprises:
performing convolution fusion processing on the general attribute feature and the first future attribute feature of the first sample object for the first sample media data through the feature convolution layer to obtain a first sample convolution fusion feature, and inputting the first sample convolution fusion feature into the neural network perception layer;
performing weighted conversion processing on the first sample convolution fusion feature through the neural network perception layer to obtain a feature to be activated corresponding to the first sample convolution fusion feature, and inputting the feature to be activated into the fully-connected activation layer;
and activating the feature to be activated through the fully-connected activation layer to obtain the first trigger probability of the first sample object for the first sample media data.
5. The method of claim 3, wherein outputting, in the initial discriminator, a second trigger probability for the candidate sample media data comprises:
acquiring a second future attribute feature of the second sample object, which is associated with the second historical trigger attribute information at a future time;
in the initial discriminator, outputting a second trigger probability of the second sample object for the candidate sample media data according to the general attribute feature and the second future attribute feature of the second sample object for the candidate sample media data.
6. The method of claim 1, wherein adjusting the model parameters of the initial adversarial network according to the first trigger probability and the second trigger probability to obtain a target adversarial network comprises:
determining a first loss value for the first sample media data according to the first trigger probability and a real trigger tag of the first sample media data;
determining a second loss value for the candidate sample media data according to the second trigger probability;
adjusting the model parameters of the initial discriminator according to an adversarial discrimination loss function, the first loss value, and the second loss value;
adjusting the model parameters of the initial generator according to an adversarial generation loss function, the first loss value, and the second loss value;
and generating the target adversarial network according to the adjusted initial discriminator and the adjusted initial generator.
7. The method of claim 6, wherein generating the target adversarial network according to the adjusted initial discriminator and the adjusted initial generator comprises:
determining general attribute features of the second sample object for each second sample media data according to second historical trigger attribute information associated with the second sample object and media attribute information corresponding to each of at least two second sample media data;
inputting the general attribute features of the second sample object for each second sample media data into the initial generator, and generating, through the initial generator, a first generation trigger probability of the second sample object for each second sample media data;
determining a third loss value for the second sample media data according to the first generation trigger probability and a real trigger tag of the second sample media data;
and adjusting the model parameters of the adjusted initial generator according to a supervision loss function and the third loss value to obtain a target generator, and determining the target generator and the adjusted initial discriminator as the target adversarial network.
8. The method of claim 6, wherein generating the target adversarial network according to the adjusted initial discriminator and the adjusted initial generator comprises:
inputting the general attribute features of the first sample object for each first sample media data into the initial generator, and generating, through the initial generator, second generation trigger probabilities of the first sample object for each first sample media data;
acquiring, from the initial generator, first hidden features of the first sample object for each first sample media data, and acquiring, from the initial discriminator, second hidden features of the first sample object for each first sample media data;
adjusting the model parameters of the adjusted initial generator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature, to obtain a target generator;
and adjusting the model parameters of the adjusted initial discriminator according to the error between the first trigger probability and the second generation trigger probability and the error between the first hidden feature and the second hidden feature to obtain a target discriminator, and determining the target generator and the target discriminator as the target adversarial network.
9. A computer device, comprising: a processor, a memory, and a network interface;
the processor is coupled to the memory and the network interface, wherein the network interface is configured to provide data communication functionality, the memory is configured to store program code, and the processor is configured to invoke the program code to perform the method of any of claims 1-7.
10. A computer-readable storage medium, in which a computer program is stored which is adapted to be loaded by a processor and to carry out the method of any one of claims 1 to 7.
11. A computer program product comprising a computer program/instructions, wherein the computer program/instructions, when executed by a processor, implement the steps of the method of any one of claims 1-7.
CN202111675687.5A 2021-12-31 2021-12-31 Data processing method, device and readable storage medium Pending CN114357301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111675687.5A CN114357301A (en) 2021-12-31 2021-12-31 Data processing method, device and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111675687.5A CN114357301A (en) 2021-12-31 2021-12-31 Data processing method, device and readable storage medium

Publications (1)

Publication Number Publication Date
CN114357301A true CN114357301A (en) 2022-04-15

Family

ID=81105502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111675687.5A Pending CN114357301A (en) 2021-12-31 2021-12-31 Data processing method, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN114357301A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115292587A (en) * 2022-07-15 2022-11-04 浙江大学 Recommendation method and system based on knowledge distillation and causal reasoning
CN115292587B (en) * 2022-07-15 2023-07-14 浙江大学 Recommendation method and system based on knowledge distillation and causal reasoning

Similar Documents

Publication Publication Date Title
CN111444428B (en) Information recommendation method and device based on artificial intelligence, electronic equipment and storage medium
CN111209440B (en) Video playing method, device and storage medium
CN111353091A (en) Information processing method and device, electronic equipment and readable storage medium
WO2017183242A1 (en) Information processing device and information processing method
CN110234018B (en) Multimedia content description generation method, training method, device, equipment and medium
WO2021155691A1 (en) User portrait generating method and apparatus, storage medium, and device
US11750866B2 (en) Systems and methods for generating adapted content depictions
CN111783712A (en) Video processing method, device, equipment and medium
CN112434184B (en) Deep interest network sequencing method based on historical movie posters
CN112989212B (en) Media content recommendation method, device and equipment and computer storage medium
CN112395515B (en) Information recommendation method and device, computer equipment and storage medium
CN114357301A (en) Data processing method, device and readable storage medium
CN113395594A (en) Video processing method, device, equipment and medium
US20210266637A1 (en) Systems and methods for generating adapted content depictions
CN115423016A (en) Training method of multi-task prediction model, multi-task prediction method and device
CN112269943B (en) Information recommendation system and method
CN112579884B (en) User preference estimation method and device
CN111597361B (en) Multimedia data processing method, device, storage medium and equipment
CN113821676A (en) Video retrieval method, device, equipment and storage medium
CN114996435A (en) Information recommendation method, device, equipment and storage medium based on artificial intelligence
CN113761272A (en) Data processing method, data processing equipment and computer readable storage medium
CN114970494A (en) Comment generation method and device, electronic equipment and storage medium
CN114817692A (en) Method, device and equipment for determining recommended object and computer storage medium
CN111881352A (en) Content pushing method and device, computer equipment and storage medium
CN113569557B (en) Information quality identification method, device, equipment, storage medium and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination