CN113495966B - Interactive operation information determining method and device and video recommendation system - Google Patents


Info

Publication number
CN113495966B
Authority
CN
China
Prior art keywords
information
interactive operation
determining
prediction model
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010190752.4A
Other languages
Chinese (zh)
Other versions
CN113495966A
Inventor
王君
洪立印
江鹏
胡勇
冷德维
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010190752.4A
Publication of CN113495966A
Application granted
Publication of CN113495966B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/735 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/435 Filtering based on additional data, e.g. user or group profiles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/75 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The disclosure relates to a method and device for determining interactive operation information and to a video recommendation system. The method includes the following steps: acquiring an interactive operation log of a platform account, where the interactive operation log records first interactive operation information generated by the platform account in a first service scenario; inputting the interactive operation log into a trained operation prediction model, where the operation prediction model is trained on historical operation logs; the historical operation logs record second interactive operation information of reference interactive operations performed by reference accounts in at least two service scenarios; the reference interactive operations include a target interactive operation; and determining third interactive operation information of the platform account according to the output of the operation prediction model, where the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario. With this technical solution, whether a platform account will perform a specific interactive operation in a service scenario can be predicted accurately.

Description

Interactive operation information determining method and device and video recommendation system
Technical Field
The disclosure relates to the field of network technologies, and in particular to a method and device for determining interactive operation information, a video recommendation system, an electronic device, and a storage medium.
Background
In fields such as search, recommendation, and advertising, a network platform often needs to estimate whether an interactive operation will be performed and with what probability, for example by computing a pXTR (predicted X-through rate, the estimated probability that operation X occurs). At present, a prediction model is typically used to estimate the execution probability of a specific interactive operation. In most scenarios, when the amount of data is sufficient, a conventional prediction model can estimate the probability of an interactive operation well.
However, the inventors found that training such models in the related art places certain requirements on the amount of data. The amount of log data in the scenario of a new application is often small, and in this case, if a conventional model is directly used to determine the probability of an interactive operation, the accuracy of the prediction is often low.
Disclosure of Invention
The disclosure provides a method and device for determining interactive operation information, a video recommendation system, an electronic device, and a storage medium, so as to at least solve the problem of poor prediction of interactive operation information in the related art. The technical solution of the present disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided a method for determining interactive operation information, including: acquiring an interactive operation log of a platform account, where the interactive operation log records first interactive operation information generated by the platform account in a first service scenario; inputting the interactive operation log into a trained operation prediction model, where the operation prediction model is trained on historical operation logs; the historical operation logs record second interactive operation information of reference interactive operations performed by reference accounts in at least two service scenarios; the at least two service scenarios are associated with each other and include the first service scenario; the reference interactive operations include a target interactive operation; and determining third interactive operation information of the platform account according to the output of the operation prediction model, where the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario.
In an exemplary embodiment, the at least two service scenarios further include a second service scenario; before inputting the interactive operation log into the trained operation prediction model, the method for determining interactive operation information further includes: determining a platform account set and first media information of the first service scenario, where the platform account set includes account information of first candidate accounts in the first service scenario, and the first media information is media information pushed to the first candidate accounts; determining a candidate account set and second media information of a candidate service scenario, where the candidate account set includes account information of second candidate accounts in the candidate service scenario, and the second media information is media information pushed to the second candidate accounts; determining a first correlation between the platform account set and the candidate account set; determining a second correlation between the first media information and the second media information; and determining the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
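The scenario-selection step above can be sketched as follows. This is an illustrative example rather than the patent's actual implementation: the function names are hypothetical, and set overlap (Jaccard similarity) is just one simple way to realize the first correlation (between account sets) and the second correlation (between pushed media).

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity between two sets; 0.0 when both are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def select_second_scenario(first_accounts, first_media, candidates, threshold=0.3):
    """Pick the candidate scenario most correlated with the first scenario.

    `candidates` maps a scenario name to (account_set, media_set). The score
    is the larger of the account-set overlap (first correlation) and the
    media overlap (second correlation), matching the "at least one of"
    criterion in the text. Returns None if no candidate clears the threshold.
    """
    best, best_score = None, threshold
    for name, (accounts, media) in candidates.items():
        score = max(jaccard(first_accounts, accounts),
                    jaccard(first_media, media))
        if score > best_score:
            best, best_score = name, score
    return best
```

A scenario sharing many registered accounts or much pushed media with the first scenario is thereby selected as the associated second scenario whose logs are worth transferring from. The `threshold` value here is an arbitrary illustrative choice.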
In an exemplary embodiment, training the operation prediction model includes: inputting the historical operation log of the second service scenario into a first prediction model to train an embedding layer of the first prediction model; inputting the output of the trained embedding layer and the historical operation log of the first service scenario into a second prediction model to train the second prediction model; and determining the trained second prediction model as the operation prediction model.
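A minimal sketch of this two-stage training, under the assumption that both prediction models are reduced to logistic models over a shared per-account embedding table. A real system would use deep networks; all function names and hyperparameters below are illustrative. Stage 1 learns embeddings from the data-rich second scenario's logs; stage 2 freezes them and trains a fresh output layer on the first scenario's sparse logs.

```python
import math
import random

random.seed(0)


def _sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))


def train_embeddings(logs_b, n_accounts, dim=4, lr=0.5, epochs=300):
    """Stage 1: train a per-account embedding table (standing in for the
    embedding layer of the first prediction model) on the second scenario's
    historical logs. `logs_b` is a list of (account_index, label) pairs,
    where label 1 means the reference interactive operation was performed."""
    emb = [[random.gauss(0, 0.1) for _ in range(dim)] for _ in range(n_accounts)]
    w = [0.0] * dim
    for _ in range(epochs):
        for acc, label in logs_b:
            p = _sigmoid(sum(e * wi for e, wi in zip(emb[acc], w)))
            g = p - label  # gradient of the log-loss w.r.t. the logit
            for i in range(dim):
                w[i] -= lr * g * emb[acc][i]
                emb[acc][i] -= lr * g * w[i]
    return emb


def train_second_model(emb, logs_a, lr=0.5, epochs=300):
    """Stage 2: freeze the embeddings and train the second prediction model
    (here a fresh logistic layer) on the first scenario's sparse logs."""
    dim = len(emb[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for acc, label in logs_a:
            p = _sigmoid(sum(e * wi for e, wi in zip(emb[acc], w)) + b)
            g = p - label
            for i in range(dim):
                w[i] -= lr * g * emb[acc][i]
            b -= lr * g
    return w, b


def predict(emb, w, b, acc):
    """Probability that account `acc` performs the target interactive operation."""
    return _sigmoid(sum(e * wi for e, wi in zip(emb[acc], w)) + b)
```

The point of the transfer is that the embedding table is fitted where data is plentiful, so the second model only has to learn a small output layer from the first scenario's limited logs.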
In an exemplary embodiment, inputting the interactive operation log into the trained operation prediction model includes: inputting the interactive operation log into the operation prediction model to trigger the operation prediction model to determine the probability that the platform account performs the target interactive operation on candidate media information, where the probability is used to determine the third interactive operation information of the platform account, and the candidate media information includes media information of the client in the first service scenario.
In an exemplary embodiment, the candidate media information includes candidate videos; after determining the third interactive operation information of the platform account according to the output of the operation prediction model, the method for determining interactive operation information further includes: sorting the candidate videos according to the third interactive operation information; determining a preset number of top-ranked candidate videos as recommended videos; and pushing the recommended videos to a video display page of the platform account.
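The ranking-and-pushing step can be sketched as below; the function and variable names are illustrative, not from the patent, and the third interactive operation information is taken here to be the predicted probability of the target interactive operation (e.g. a like) for each video.

```python
def recommend_top_n(candidate_videos, third_info, top_n=3):
    """Sort candidate videos by their third interactive operation information
    (predicted probability of the target interactive operation) and keep
    the preset number of top-ranked videos as recommended videos."""
    ranked = sorted(candidate_videos, key=lambda vid: third_info[vid], reverse=True)
    return ranked[:top_n]
```

The returned videos would then be pushed to the video display page of the platform account.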
According to a second aspect of the embodiments of the present disclosure, there is provided an apparatus for determining interactive operation information, including: an operation log obtaining unit configured to obtain an interactive operation log of a platform account, where the interactive operation log records first interactive operation information generated by the platform account in a first service scenario; an operation log input unit configured to input the interactive operation log into a trained operation prediction model, where the operation prediction model is trained on historical operation logs; the historical operation logs record second interactive operation information of reference interactive operations performed by reference accounts in at least two service scenarios; the at least two service scenarios are associated with each other and include the first service scenario; the reference interactive operations include a target interactive operation; and an operation information determining unit configured to determine third interactive operation information of the platform account according to the output of the operation prediction model, where the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario.
In an exemplary embodiment, the at least two service scenarios further include a second service scenario; the apparatus for determining interactive operation information further includes: a first information determining unit configured to determine a platform account set and first media information of the first service scenario, where the platform account set includes account information of first candidate accounts in the first service scenario, and the first media information is media information pushed to the first candidate accounts; a second information determining unit configured to determine a candidate account set and second media information of a candidate service scenario, where the candidate account set includes account information of second candidate accounts in the candidate service scenario, and the second media information is media information pushed to the second candidate accounts; a first correlation determining unit configured to determine a first correlation between the platform account set and the candidate account set; a second correlation determining unit configured to determine a second correlation between the first media information and the second media information; and a service scenario determining unit configured to determine the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
In an exemplary embodiment, the apparatus for determining interactive operation information further includes: a first model training unit configured to input the historical operation log of the second service scenario into a first prediction model to train an embedding layer of the first prediction model; a second model training unit configured to input the output of the trained embedding layer and the historical operation log of the first service scenario into a second prediction model to train the second prediction model; and a model determining unit configured to determine the trained second prediction model as the operation prediction model.
In an exemplary embodiment, the operation log input unit is further configured to input the interactive operation log into the operation prediction model to trigger the operation prediction model to determine the probability that the platform account performs the target interactive operation on candidate media information, where the probability is used to determine the third interactive operation information of the platform account, and the candidate media information includes media information of the client in the first service scenario.
In an exemplary embodiment, the candidate media information includes candidate videos; the apparatus for determining interactive operation information further includes: a first video ranking unit configured to sort the candidate videos according to the third interactive operation information; a first video determining unit configured to determine a preset number of top-ranked candidate videos as recommended videos; and a first video recommending unit configured to push the recommended videos to a video display page of the platform account.
According to a third aspect of the embodiments of the present disclosure, there is provided a video recommendation system, including a server and a client. The client is configured to acquire an interactive operation log of a platform account, where the interactive operation log records first interactive operation information generated by the platform account in a first service scenario. The server is configured to: input the interactive operation log into a trained operation prediction model, where the operation prediction model is trained on historical operation logs; the historical operation logs record second interactive operation information of reference interactive operations performed by reference accounts in at least two service scenarios; the at least two service scenarios are associated with each other and include the first service scenario; the reference interactive operations include a target interactive operation; determine third interactive operation information of the platform account according to the output of the operation prediction model, where the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario; sort candidate videos according to the third interactive operation information, where the candidate videos include videos of the client in the first service scenario; determine a preset number of top-ranked candidate videos as recommended videos; and push the recommended videos to the client. The client is further configured to output the recommended videos to a video display page of the platform account.
According to a fourth aspect of the embodiments of the present disclosure, there is provided an electronic device, including a processor and a memory for storing instructions executable by the processor, where the processor is configured to execute the instructions to implement the method for determining interactive operation information described above.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the method for determining interactive operation information described above.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects. An operation prediction model is trained on historical operation logs of at least two associated service scenarios. When the probability of an operation by a platform account in the first service scenario needs to be predicted, an interactive operation log of the platform account is obtained and input into the trained operation prediction model, and third interactive operation information of the platform account performing a specific interactive operation in the first service scenario is determined from the model's output, so as to determine whether the platform account will perform the target interactive operation in the first service scenario. With the embodiments provided by the disclosure, the operation prediction model can be trained accurately even when the amount of data in the first service scenario is insufficient, so that the interactive operation information of the platform account performing a specific interactive operation in the first service scenario is predicted accurately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
FIG. 1 is an application environment diagram illustrating a method of determining interoperation information according to an example embodiment;
FIG. 2 is a flow chart illustrating a method of determining interoperation information according to an example embodiment;
FIG. 3 is a flowchart illustrating training of an operation prediction model, according to an exemplary embodiment;
FIG. 4 is a block diagram of an operational prediction model, according to an example embodiment;
FIG. 5 is a display interface diagram of a recommended video, shown in accordance with an exemplary embodiment;
FIG. 6 is a block diagram illustrating an interoperation prediction for multiple scenarios in accordance with an exemplary embodiment;
FIG. 7 is a flowchart illustrating a method of determining interoperation information according to another example embodiment;
FIG. 8 is a block diagram illustrating an apparatus for determining interoperation information according to an example embodiment;
FIG. 9 is a block diagram of a recommendation system for video, according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used to distinguish similar objects and not necessarily to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, so that the embodiments of the disclosure described herein can be practiced in sequences other than those illustrated or described herein. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present disclosure. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The method for determining the interactive operation information provided by the embodiment of the disclosure can be applied to the electronic device shown in fig. 1. Referring to fig. 1, an electronic device 100 may include one or more of the following components: a processing component 101, a memory 102, a power supply component 103, a multimedia component 104, an audio component 105, an input/output (I/O) interface 106, a sensor component 107, and a communication component 108.
The processing component 101 generally controls overall operation of the electronic device 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 101 may include one or more processors 109 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 101 may include one or more modules that facilitate interactions between the processing component 101 and other components. For example, the processing component 101 may include a multimedia module to facilitate interaction between the multimedia component 104 and the processing component 101.
The memory 102 is configured to store various types of data to support operations at the electronic device 100. Examples of such data include instructions for any application or method operating on the electronic device 100, contact data, phonebook data, messages, pictures, video, and so forth. The memory 102 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
The power supply assembly 103 provides power to the various components of the electronic device 100. Power supply components 103 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 100.
The multimedia component 104 includes a screen between the electronic device 100 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 104 includes a front-facing camera and/or a rear-facing camera. When the electronic device 100 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 105 is configured to output and/or input audio signals. For example, the audio component 105 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 102 or transmitted via the communication component 108. In some embodiments, the audio component 105 further comprises a speaker for outputting audio signals.
The I/O interface 106 provides an interface between the processing component 101 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 107 includes one or more sensors for providing status assessment of various aspects of the electronic device 100. For example, the sensor assembly 107 may detect an on/off state of the electronic device 100, a relative positioning of components, such as a display and keypad of the electronic device 100, a change in position of the electronic device 100 or a component of the electronic device 100, the presence or absence of a user's contact with the electronic device 100, an orientation or acceleration/deceleration of the electronic device 100, and a change in temperature of the electronic device 100. The sensor assembly 107 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 107 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 107 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 108 is configured to facilitate communication between the electronic device 100 and other devices in a wired or wireless manner. The electronic device 100 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 108 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 108 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
Fig. 2 is a flowchart illustrating a method for determining interactive operation information according to an exemplary embodiment. The method is used in an electronic device and includes the following steps:
In step S201, an interactive operation log of a platform account is obtained, where the interactive operation log records first interactive operation information generated by the platform account in a first service scenario.
The service scene refers to a software scene written for a certain application purpose, and may be a specific application program such as a short video application program, a video playing application program, a live broadcast application program, or a certain version or a certain page of the application program. When using an application program, a network user often needs to register an account, and the registered platform account has information such as an account number and personal information of the user, so that the user can perform corresponding interactive operations in the corresponding application program through the platform account, for example: video playing, live broadcast publishing, praise, forwarding, etc. Specifically, the first service scenario refers to a current application program aimed at by the electronic device in the running process, that is, the interactive operation of the platform account in the first service scenario needs to be predicted currently. The platform account is a network account registered in the first service scene, further, the platform account can be one network account or a plurality of network accounts, if the platform accounts are a plurality of network accounts, the electronic device can respectively predict the interaction operation information of the network accounts for executing specific interaction operation in the first service scene in an asynchronous or synchronous mode, and for convenience of description, the platform account is used as one in the following description.
The interactive operation log may refer to log data generated when the platform account performs reference interactive operations in the first service scenario. It may include data formed by active operations of the platform account, such as the number of praises, the praise time, and the praised account, as well as data formed by passive operations, such as viewing duration (play duration) and page dwell duration. Further, the interactive operation log may be a historical operation log generated by the platform account in a set historical time period (the time period may be determined according to actual conditions); that is, the interactive operations of the platform account in a current or future time period (which may also be determined according to actual conditions) are predicted according to the historical operation log.
A reference interactive operation may refer to an interactive operation that can be performed in a business scenario, where the interactive operations corresponding to different applications may differ. For example, in a short video application, the reference interactive operations may include login, logout, slide, play, pause, follow, praise, comment, view comment, forward, and type tagging; in a video playback platform, the reference interactive operations may include login, logout, slide, play, pause, send bullet screen, forward, upload, and download. It should be noted that the reference interactive operations may form a reference interactive operation set, and in an actual application scenario the platform account may execute all or part of the reference interactive operations in that set.
Login may mean obtaining permission to use the application through the account password, after which corresponding interactive operations can be executed in the application. Logout may mean exiting the logged-in account; after logging out, the application can be closed directly. Sliding may refer to sliding up/down/left/right on a certain interface of the application to implement operations such as switching videos or exiting pages. Playing may mean opening a video playing interface in the application to play a video, which can be triggered by specific operations such as clicking, double clicking, or sliding up, down, left, or right. Pausing may refer to stopping the playback of the current video or live broadcast, which can be triggered by an operation such as clicking. Following may refer to establishing an association with a specific account through a follow control, so that when the followed account updates its information, the following account can obtain the update in real time. Praise (like) may be given through a specific praise control; a corresponding praise label can be attached to the account (or media information) being praised, and when multiple praises are received, the number of praises can be accumulated. Commenting may refer to a user publishing views on media information such as a video through a dialog box, so that the commented account (or other accounts) can see the comment content. Comments can be checked through a specific comment viewing control, through which the platform account can see the views of other accounts on the media information as well as replies of other accounts to its own comments.
Forwarding may refer to migrating media information to other platforms so that accounts on those platforms can acquire the corresponding media information, for example, forwarding a short video published on Kuaishou to WeChat so that a WeChat friend can watch the short video. Type tagging may refer to attaching a tag to media information, for example, tagging a short video as a "funny video". Sending a bullet screen may refer to publishing one's own view of a video or live broadcast through a bullet-screen dialog box. Uploading may refer to uploading local videos, articles, and other media information to the server where the application is located, so that other platform accounts can view or download the corresponding media information. Downloading may refer to downloading media information in the application to the local device.
In many business scenarios, the interactive operations that the platform account will perform at present or in a future period need to be predicted so that information can be pushed to the platform account. A specific prediction process of the interactive operations is as follows:
In step S202, the interactive operation log is input into a trained operation prediction model, where the operation prediction model is obtained by training according to a historical operation log; the historical operation log records second interactive operation information of reference interactive operations performed by reference accounts in at least two business scenarios; the at least two business scenarios are associated with each other and include the first business scenario; and the reference interactive operations include a target interactive operation.
The operation prediction model refers to a model that analyzes input log data and outputs information, such as a probability, that the corresponding platform account will execute a specific interactive operation. The operation prediction model may be a neural network model, which is capable of extracting feature information from input data, analyzing and learning the feature information, and outputting a result. Further, in certain exemplary embodiments, the operation prediction model may be an LR model, an FM model, a DNN model, a DCNN (deep convolutional neural network) model, a DN (deconvolutional network) model, a GAN (generative adversarial network) model, or the like. Further, the operation prediction model may include an input layer, hidden layers (there may be multiple, for example 128 layers), and an output layer; each hidden layer may include multiple neurons, each neuron performs a binary classification on the output of the previous layer, and a final operation prediction result is produced at the output layer through layer-by-layer analysis.
Further, the training process of the operation prediction model may include a forward propagation process (also referred to as a feed-forward process) and a backward propagation process (also referred to as a feedback process); that is, one forward propagation pass and one backward propagation pass together constitute one training iteration of the operation prediction model.
The association of the at least two business scenarios may mean that a certain association relationship exists between them; the association relationship may be similarity of the functions they provide, similarity of the information they provide, similarity of their user systems, and the like. The at least two business scenarios may include a first business scenario and a second business scenario. The first business scenario and the second business scenario necessarily differ in some respect (otherwise they would be the same scenario), so they may be regarded as different applications or different versions of an application, for example: the Kuaishou main application, a large-screen version, and the like.
Further, the reference interactive operations may be determined according to actual interactive operations, and may be the same as or different from the interactive operations recorded in the interactive operation information of the platform account in the first service scenario. In addition, there may be more than one second business scenario; that is, the operation prediction model may fuse log data of at least two business scenarios. To facilitate joint training of the operation prediction model, the log data of the first and second service scenarios may be designed in a unified format, or the logs may be preprocessed into similar formats. In addition, the target interactive operation may be one or more of the reference interactive operations; it may be selected from the reference interactive operations by a developer, or determined by an algorithm according to the functions provided by the first business scenario. For example, if the first service scenario is the Kuaishou main application and detection finds that a frequently used operation in this scenario is playing short videos, the target interactive operation may be determined as playing short videos.
Further, the historical operation log records the interaction information of the reference accounts executing the reference interactive operations in the at least two business scenarios in the past (relative to the current running moment of the electronic device). The historical operation log can, to a large extent, represent the interactive operations that the platform account is likely to perform in the first business scenario, so an operation prediction model trained on the historical operation log has high accuracy.
According to the method, a trained operation prediction model is obtained through joint training on historical operation logs. The trained operation prediction model fuses log data from the first service scenario and can thus reflect the actual operation behavior of users in the first service scenario; at the same time, it fuses log data from other service scenarios, so that when the data volume of the first service scenario is insufficient, more training data can be incorporated, yielding a more accurate trained operation prediction model. The trained operation prediction model can perform operations such as feature extraction and classification on the interactive operation log, and thereby determine the operation information of each reference interactive operation executed by the platform account in the first service scenario. If the prediction result of a model trained on the currently acquired training data differs greatly from the actual result, the corresponding training sample data volume can be considered insufficient. Further, whether the data volume is sufficient can be judged according to the complexity of the functions implemented by the application program; a complex application requires a large amount of training data.
Further, the interactive operation log may be converted into a vector form and then input into the operation prediction model.
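As a hedged illustration of this vectorization step (the field names, reference operation list, and scaling choices below are assumptions for illustration, not part of the disclosure), one interactive operation log entry might be encoded as follows:

```python
# Hypothetical encoding of one interactive-operation log entry as a feature
# vector: a 0/1 flag per reference interactive operation, plus two duration
# features scaled into [0, 1]. All field names and scales are illustrative.
REFERENCE_OPS = ["play", "praise", "follow", "forward", "comment"]

def log_to_vector(entry):
    ops = set(entry.get("ops", []))
    one_hot = [1.0 if op in ops else 0.0 for op in REFERENCE_OPS]
    play = min(entry.get("play_seconds", 0) / 3600.0, 1.0)    # viewing duration
    dwell = min(entry.get("dwell_seconds", 0) / 3600.0, 1.0)  # page dwell time
    return one_hot + [play, dwell]

vec = log_to_vector({"ops": ["play", "praise"], "play_seconds": 180})
# -> [1.0, 1.0, 0.0, 0.0, 0.0, 0.05, 0.0]
```

A real system would likely also embed categorical fields (account IDs, video IDs) rather than use raw flags, but the principle of turning log records into fixed-length numeric vectors is the same.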
In step S203, third interactive operation information of the platform account is determined according to the output of the operation prediction model, where the third interactive operation information indicates whether the platform account will execute the target interactive operation in the first service scenario.
The third interactive operation information may refer to the possibility that the platform account performs the target interactive operation in a future period of time (the specific period may be determined according to the actual situation), and may be represented by a specific probability value. Specifically, the third interactive operation information may be represented by pxtr (predicted X-through rate), that is, the estimated occurrence probability of behavior X, where behavior X may be clicking, praising (like), following (attention), and the like. Further, the third interactive operation information may be directly output by the operation prediction model, or may be computed by the electronic device according to classification information output by the operation prediction model.
According to the method for determining interactive operation information, an operation prediction model is trained on the historical operation logs of at least two associated business scenarios. When the probability of operations of a platform account in a first business scenario needs to be predicted, the interactive operation log of the platform account is obtained and input into the trained operation prediction model, and third interactive operation information of the platform account performing a specific interactive operation in the first business scenario is determined according to the output of the operation prediction model. According to the embodiments provided by the disclosure, even when the data volume in the first service scenario is insufficient, accurate training of the operation prediction model can be completed, so that the interactive operation information of the platform account executing a specific interactive operation in the first service scenario is accurately predicted.
The second business scenario may be determined in a variety of ways, for example: according to the developer of the business scenario (e.g., an application belonging to the same developer is determined as the aforementioned second business scenario), the category it belongs to (e.g., an application belonging to the same short video playing platform is determined as the second business scenario), the account system (a collection of multiple accounts), the provided media information, and the like. The following illustrates the process of determining a second business scenario from the account system and the provided media information:
in an exemplary embodiment, the at least two service scenarios further include a second service scenario; before the step of inputting the interactive operation log into the trained operation prediction model, the method for determining interactive operation information further comprises: determining a platform account set and first media information of the first service scene; the platform account set comprises account information of a first candidate account in the first business scene; the first media information is media information pushed to the first candidate account; determining a candidate account set and second media information of a candidate service scene; the candidate account set comprises account information of a second candidate account in the candidate business scene; the second media information is media information pushed to the second candidate account; determining a first relevance of the platform account set and the candidate account set; determining a second relevance of the first media information and the second media information; and determining the second service scene from the candidate service scenes according to at least one of the first correlation and the second correlation.
An account set may refer to a set formed by all or part of the accounts registered in the application; the accounts in the account set may have association relationships (for example, friends, family, etc.) or may have none. Media information may refer to information pushed by a client, a server, etc. to a corresponding platform account, and may be videos (including short videos), articles, pictures, and the like.
In addition, the candidate service scenario may refer to all or part of the applications available to the electronic device, and the number of the candidate service scenarios may be more than one. Further, a certain association relationship may exist between candidate service scenarios, for example: developed by the same developer, provide similar functionality (e.g., both short video, live applications), etc.
Correlation refers to the degree of association between two variables, such as the degree of association between the accounts contained in the account sets, or the degree of association between media information.
Specifically, in the embodiment of the present disclosure, the first correlation may refer to the degree of association between accounts in the platform account set and the candidate account set, and may specifically be the proportion of common (or associated) accounts, or the correlation between accounts (for example, the ratio of accounts that are friends, colleagues, etc.). Further, taking the proportion of common accounts as an example, the first correlation may be determined as follows: determine the account information of account set A contained in the platform account set; determine the account information of account set B contained in the candidate account set; determine the number C1 of identical accounts in account sets A and B; determine the total number C2 of accounts in account sets A and B; and determine the ratio of C1 to C2 as the first correlation.
In the embodiment of the present disclosure, the second correlation may refer to the degree of similarity between the first media information and the second media information, and may specifically be the proportion of identical media information, the proportion of media information from the same source among all media information, and so on. Taking the proportion of identical media information as an example, the second correlation may be determined as follows: determine the number C3 of identical media items in the first and second media information, determine the total number C4 of media items in the first and second media information, and determine the ratio of C3 to C4 as the second correlation.
According to at least one of the first correlation and the second correlation, the second service scenario may be determined from the candidate service scenarios as follows: 1. if the first correlation is greater than a set threshold, the corresponding candidate service scenario is judged to meet the condition and is determined as the second service scenario; 2. if the second correlation is greater than a set threshold, the corresponding candidate business scenario is judged to meet the condition and is determined as the second business scenario; 3. if both the first correlation and the second correlation are greater than their set thresholds, the corresponding candidate business scenario is judged to meet the condition and is determined as the second business scenario. The threshold conditions to be satisfied by the first and second correlations may be determined according to the actual situation, which the present disclosure does not limit. Of course, the condition that the first/second correlation needs to satisfy may be not only exceeding a certain threshold, but also falling within a certain threshold range or another condition.
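The two correlation computations and the threshold rule can be sketched as follows. This is a minimal illustration under stated assumptions: "total number" is interpreted as the size of the union of the two sets, and the threshold value and scenario names are invented:

```python
# Sketch of the first/second correlation (C1/C2 and C3/C4 in the text) and the
# threshold-based selection of the second service scenario. "Total number" is
# interpreted here as the size of the union of the two sets (an assumption).
def overlap_ratio(set_a, set_b):
    union = set_a | set_b
    return len(set_a & set_b) / len(union) if union else 0.0

def select_second_scenarios(platform_accounts, first_media, candidates, threshold=0.3):
    """candidates: {scenario_name: (account_set, media_set)}; keep scenarios
    whose account overlap (first correlation) or media overlap (second
    correlation) exceeds the threshold."""
    selected = []
    for name, (accounts, media) in candidates.items():
        r1 = overlap_ratio(platform_accounts, accounts)  # first correlation
        r2 = overlap_ratio(first_media, media)           # second correlation
        if r1 > threshold or r2 > threshold:
            selected.append(name)
    return selected

chosen = select_second_scenarios(
    {"u1", "u2", "u3", "u4"}, {"v1", "v2"},
    {"app_a": ({"u1", "u2", "u3"}, {"v9"}),   # 3/4 accounts shared -> selected
     "app_b": ({"u9"}, {"v8"})},              # no overlap -> rejected
)
# -> ["app_a"]
```

The same `overlap_ratio` helper serves both correlations; only the inputs differ (account sets versus media-item sets).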
The embodiment of the disclosure realizes the process of determining a second service scenario related to the first service scenario, mainly through the correlation of the account systems and the media information. The implementation process is simple and the result reliable, which effectively guarantees the reliability of the input data of the operation prediction model.
In an exemplary embodiment, as shown in fig. 3, the training step of the operation prediction model includes:
s301, inputting a historical operation log of the second business scene into a first prediction model to train an embedded layer of the first prediction model.
The first prediction model may refer to a neural network model for determining the probability of performing a certain interactive operation, and may be an LR model, an FM model, a DNN model, a DCNN model, a DN model, a GAN model, or the like. The first prediction model performs machine learning operations such as feature extraction, feature analysis, and classification on the historical operation log of the second business scenario to obtain predicted operation information. The predicted operation information may be the probability, output by the first prediction model after learning the historical operation log of the second business scenario, that each reference account performs a reference interactive operation. Further, the historical operation log of the second business scenario may be input into the embedding layer (input layer) of the first prediction model; the hidden layers connected to the embedding layer analyze it layer by layer and pass the result to the output layer, which produces the predicted operation information. This process can be regarded as the forward propagation process.
Furthermore, training of the first prediction model can be completed through a backward propagation process, during which training of the embedding layer is completed at the same time.
Specifically, the actual account operation information of the reference account is compared with the predicted operation information to obtain a corresponding operation loss value (difference value); a loss function (also called a cost function) is constructed according to the operation loss value, and the weights and other parameters in the first prediction model are adjusted by minimizing the loss function. When the feedback adjustment propagates gradually from the last hidden layer (the one connected to the output layer) back through the first hidden layer to the embedding layer, model training is considered finished. This process can be regarded as the backward propagation process, during which the relevant parameters of the embedding layer are updated as well, thus completing the training of the embedding layer.
S302, the output of the trained embedding layer and the historical operation log of the first business scenario are input into a second prediction model to train the second prediction model.
The second prediction model may also refer to a neural network model for determining a probability of performing a certain interaction, and may be an LR model, an FM model, a DNN model, a DCNN model, a DN model, a GAN model, or the like. Further, the first prediction model and the second prediction model may be the same model or may be different models (for example, the number of layers of the hidden layer may be different).
The training process of the second predictive model may also include a forward propagation process and a backward propagation process. The training process is similar to the first predictive model and will not be described in detail herein.
Jointly training the second prediction model on the output of the trained embedding layer and the historical operation log of the first business scenario can improve the prediction accuracy of the operation prediction model while saving machine resources for training.
In some exemplary embodiments, the second prediction model may also be trained using only the output of the embedding layer, which can reduce the number of training samples and increase the efficiency of model training.
S303, determining the trained second prediction model as the operation prediction model.
The trained second prediction model fuses the input data of the first prediction model and the second prediction model, so it can accurately predict the interactive operation information of the first business scenario. It is therefore used as the operation prediction model for predicting the operation information of the platform account executing the target interactive operation in the first business scenario, thereby obtaining the third interactive operation information.
The above process of training the operation prediction model may be implemented by offline training (offline training) or online training (online training).
In the embodiments of the present disclosure, the second prediction model obtained through joint training on the historical operation logs of the first and second service scenarios is determined as the operation prediction model, and is used to analyze the interactive operation log and determine the third interactive operation information. Log data from both service scenarios are comprehensively considered, so the obtained third interactive operation information has high accuracy.
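The two-stage procedure of steps S301-S303 can be sketched as follows. This is a minimal pure-Python illustration under stated assumptions (a single shared hidden layer stands in for the embedding layer, the log vectors and labels are synthetic, and per-sample gradient descent on log loss replaces whatever optimizer the real system uses); it is not the patented implementation:

```python
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W_emb, w_out):
    """One shared hidden layer (the "embedding") plus a scalar output head."""
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in W_emb]
    return h, sigmoid(sum(wi * hi for wi, hi in zip(w_out, h)))

def train(X, y, W_emb, w_out, lr=0.5, steps=200):
    """Per-sample gradient descent on log loss; updates W_emb and w_out in place."""
    for _ in range(steps):
        for x, t in zip(X, y):
            h, p = forward(x, W_emb, w_out)
            err = p - t                                  # dLoss/dlogit
            for j, hj in enumerate(h):
                grad_h = err * w_out[j] * hj * (1 - hj)  # before w_out update
                w_out[j] -= lr * err * hj
                for i, xi in enumerate(x):
                    W_emb[j][i] -= lr * grad_h * xi
    return W_emb, w_out

def log_loss(X, y, W_emb, w_out):
    eps = 1e-9
    return -sum(t * math.log(forward(x, W_emb, w_out)[1] + eps)
                + (1 - t) * math.log(1 - forward(x, W_emb, w_out)[1] + eps)
                for x, t in zip(X, y)) / len(y)

def make_logs(n):
    """Synthetic vectorized logs; both scenarios share the same hidden pattern."""
    X = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(n)]
    return X, [1.0 if x[0] + 0.5 * x[1] > 0 else 0.0 for x in X]

Xb, yb = make_logs(200)                      # second scenario: abundant logs
Xa, ya = make_logs(30)                       # first scenario: scarce logs

W_emb = [[random.uniform(-0.1, 0.1) for _ in range(4)] for _ in range(3)]
w1 = [random.uniform(-0.1, 0.1) for _ in range(3)]
W_emb, _ = train(Xb, yb, W_emb, w1)          # S301: train embedding on scenario B

w2 = [random.uniform(-0.1, 0.1) for _ in range(3)]
before = log_loss(Xa, ya, W_emb, w2)
W_emb, w2 = train(Xa, ya, W_emb, w2)         # S302: joint training on scenario A
after = log_loss(Xa, ya, W_emb, w2)          # S303: this model is the predictor
```

The point of the sketch is the data flow, not the model: the embedding parameters trained on the data-rich second scenario are carried into the second stage, where the scarce first-scenario logs only need to fine-tune them.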
Further, as shown in fig. 4, taking as an example a more specific framework of the operation prediction model in which the first prediction model and the second prediction model are both DNN models, the training process of the operation prediction model is as follows:
forward propagation process of the first predictive model: the user data log stream (shared common embedding input 1) under the multi-service scene is used as a training sample to be input into a first prediction model, wherein the first prediction model comprises n hidden layers (layer 1/layer 2 … layer n), the size of n can be determined according to practical situations, for example, can be determined according to the complexity of an application program function, if the application program function is complex, n can take a larger value), and the hidden layers process the input log stream and output a result (output) through an output layer, so as to obtain a predicted value.
Backward propagation process of the first prediction model: the predicted value is compared with the actual value, a loss function is constructed according to the comparison result, and the model parameters are updated according to a gradient descent algorithm, whereby shared common embedding input 1 is updated.
Forward propagation process of the second prediction model: the updated shared common embedding input 1, together with the log stream (input) of the currently studied service scenario as training samples, is input into the second prediction model. The second prediction model includes m hidden layers (shared layer 1/shared layer 2 … shared layer m, where the size of m can be determined according to the actual situation, for example according to the complexity of the application's functions; if the functions are complex, m can take a larger value, and m may or may not equal n). The second prediction model performs joint training on the input log stream and obtains corresponding results for different training tasks; as shown in fig. 4, the second prediction model obtains the outputs of k training targets through k training tasks (where the task layer may be more than one layer). The training tasks may be divided according to the types of interactive operations; for example, log data corresponding to interactive operations actively performed by the user (such as clicking and following) form one training task, and passively formed log data form another. Further, one training task may have more than one output result.
Backward propagation process of the second prediction model: parameters such as the weights of each hidden layer and of the input log stream are adjusted in reverse according to the output result of the output layer, and the adjusted second prediction model is determined as the trained operation prediction model.
In an exemplary embodiment, the step of inputting the interactive operation log into a trained operation prediction model includes: inputting the interactive operation log into the trained operation prediction model, extracting operation feature information from the interactive operation log through the operation prediction model, and outputting the probability of the platform account executing each reference interactive operation according to the operation feature information, so as to obtain the third interactive operation information for executing the target interactive operation.
The operation feature information may refer to feature values in the interactive operation log; feature extraction may be performed on the vector corresponding to the interactive operation log by a feature extraction method to obtain a feature vector. Further, a reference interactive operation that was performed may, for example, be represented by the vector [0, 1], and one that was not performed by the vector [1, 0].
A hidden layer may include a plurality of neurons, with the neurons of the current layer connected to each neuron of the next layer, which is also referred to as full connection. The operation feature information of the interactive operation log can be extracted by a hidden layer of the operation prediction model, the extracted feature vector is input into the next layer for further feature extraction, and through this layer-by-layer analysis the final probability is delivered to the output layer.
In the embodiments of the present disclosure, the feature information of the interactive operation log is extracted through the operation prediction model, which can fully fuse the various possible feature information in the interactive operation log and obtain accurate probability prediction information, thereby yielding the third interactive operation information.
Further, in an exemplary embodiment, the step of inputting the interactive operation log into a trained operation prediction model includes: inputting the interactive operation log into the operation prediction model to trigger the operation prediction model to determine the probability of the platform account executing the target interactive operation on candidate media information; the candidate media information comprises media information of the client under the first service scene, and the probability is used for determining third interactive operation information of the platform account.
The candidate media information is media information uploaded through a client by a platform account in the first service scenario (for example, a short video recorded by the client), or media information obtained directly by the server corresponding to the first service scenario (for example, media information retrieved from the network by the server through an information search tool, or generated by a certain algorithm).
The target interactive operation includes at least one of playing, praising, following, forwarding, and commenting. Further, taking playing as the target interactive operation as an example, the execution process of the operation prediction model is as follows: after acquiring the input interactive operation log, the operation prediction model performs feature analysis on it, classifies each feature to determine whether it is related to playing, and integrates the classification results to obtain a probability value, which is the probability that the platform account will perform the playing operation on the candidate media information. Praising, following, forwarding, and commenting are handled in the same way and will not be described again here. Further, when the target interactive operations are two or more of playing, praising, following, forwarding, and commenting, the operation prediction model may be controlled to analyze the interactive operation log synchronously or asynchronously to obtain the probability values. For example, assuming the target interactive operations are playing and praising, after acquiring the input interactive operation log, the operation prediction model performs feature analysis on it, classifies each feature to determine whether it is related to playing or praising, and integrates the classification results to obtain two probability values, which are the probabilities that the platform account will perform the playing and praising operations on the candidate media information.
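Producing one probability per target interactive operation can be illustrated with a multi-head output as follows. The weights, bias values, and feature vector below are invented for illustration; in a real model they would be learned from the historical operation logs:

```python
import math

# Hypothetical multi-target output: one sigmoid "head" per target interactive
# operation, each turning the shared feature vector into a probability (pxtr).
def predict_probs(features, heads):
    """heads maps an operation name to (weights, bias); returns {op: prob}."""
    def logit(w, b):
        return sum(wi * xi for wi, xi in zip(w, features)) + b
    return {op: 1.0 / (1.0 + math.exp(-logit(w, b)))
            for op, (w, b) in heads.items()}

probs = predict_probs(
    [1.0, 0.0, 0.5],                        # shared feature vector (made up)
    {"play": ([0.8, -0.2, 0.4], 0.1),       # head for the playing operation
     "praise": ([0.3, 0.6, -0.1], -0.2)},   # head for the praising operation
)
# probs["play"] == sigmoid(1.1), roughly 0.75
```

Because all heads read the same feature vector, the probabilities for several target interactive operations can be computed in one pass, matching the synchronous analysis described above.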
Specifically, the step of determining the third interactive operation information of the platform account according to the output of the operation prediction model includes: determining, according to the probability output by the operation prediction model, third interactive operation information indicating whether the platform account will perform the target interaction operation on the candidate media information in the first service scene.
Further, when the probability value is higher than a set threshold (which may be determined according to the actual situation, for example, 90%), the platform account is considered likely to perform the target interaction operation on the candidate media information in the first service scene. For example, if the operation prediction model determines from the interactive operation log that the probability of the platform account playing a short video is 95%, the platform account can be considered likely to play that short video in the first service scene.
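The threshold decision described above can be sketched as a simple filter over the model's probability outputs. The helper name and operation names are illustrative; the 0.9 cut-off mirrors the 90% example.

```python
def predicted_operations(probabilities, threshold=0.9):
    """Keep the target interaction operations whose predicted probability
    clears the set threshold (90% in the example above)."""
    return {op for op, p in probabilities.items() if p >= threshold}

# A 95% play probability clears the bar; a 42% like probability does not.
ops = predicted_operations({"play": 0.95, "like": 0.42, "forward": 0.91})
```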
According to the method and the device of the disclosure, the interactive operation log is analyzed by the operation prediction model to determine probability prediction information for interaction operations such as playing and liking performed by the platform account on candidate media information uploaded by the client, so that the third interactive operation information for a specific target interaction operation on the media information can be accurately determined.
Still further, in an exemplary embodiment, the candidate media information includes candidate videos. After the step of determining the third interactive operation information of the platform account according to the output of the operation prediction model, the method for determining interactive operation information further includes: sorting the candidate videos according to the third interactive operation information; determining a preset number of top-ranked candidate videos as recommended videos; and pushing the recommended videos to a video display page of the platform account.
The preset number can be determined according to the actual situation, for example 9 or 10. Further, if the platform account performs a corresponding interactive operation on a recommended video, new video information can be recommended to the platform account, thereby meeting the user's video-watching needs and improving the application's user experience.
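The sorting and top-N selection steps above can be sketched as follows. The video ids, scores and helper name are hypothetical, and a preset number of 2 stands in for the 9 or 10 mentioned above.

```python
def recommend_videos(candidate_videos, play_probability, preset_number=10):
    """Rank candidates by the model's predicted probability for the target
    operation and keep the top preset_number as recommended videos.

    candidate_videos: list of video ids.
    play_probability: dict of video id -> probability (derived from the
    third interactive operation information output by the model).
    """
    ranked = sorted(candidate_videos,
                    key=lambda vid: play_probability.get(vid, 0.0),
                    reverse=True)
    return ranked[:preset_number]

candidates = ["v1", "v2", "v3", "v4"]
scores = {"v1": 0.35, "v2": 0.92, "v3": 0.80, "v4": 0.10}
page = recommend_videos(candidates, scores, preset_number=2)
```

The resulting list (here the two highest-scoring videos) is what would be pushed to the video display page of the platform account.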
The video display page refers to a page for presenting videos in an application program. It can display the corresponding videos (a schematic diagram of recommended video information presented on the video display page is shown in fig. 5) and can also respond to operations of the platform account, for example: when the platform account clicks a recommended video, the corresponding recommended video starts playing in the interface.
According to the embodiment of the disclosure, the purpose of accurately recommending the video to the platform account is achieved through the trained operation prediction model.
The foregoing embodiments illustrate how media information is recommended to a platform account in the first business scenario. In fact, since the training process of the operation prediction model uses log data from the second service scenario, the operation prediction model may also be used to recommend media information to network accounts in the second service scenario.
In an exemplary embodiment, after the step of determining the first probability prediction information of the target interaction operation performed by the platform account in the first service scenario according to the output of the operation prediction model, the method for determining interactive operation information further includes: acquiring an interactive operation log of the platform account in a second service scenario, where the interactive operation log records the first interactive operation information executed by the platform account in the second service scenario; inputting the interactive operation log of the second service scenario into the operation prediction model; determining fourth interactive operation information of the platform account according to the output of the operation prediction model, where the fourth interactive operation information indicates whether the platform account will perform the target interaction operation in the second service scenario; sorting the candidate videos according to the fourth interactive operation information, where the candidate videos include video information uploaded to the second service scenario by the client; determining a preset number of top-ranked candidate videos as recommended videos; and pushing the recommended videos to a video display page of the platform account.
The target interaction operation in this embodiment may be the same as the target interaction operation used when determining the third interactive operation information, or may be a different interaction operation belonging to the same set of interaction operations, for example: the target interaction operation for the first service scene is liking, while the target interaction operation for the second service scene is forwarding.
As shown in the foregoing embodiments, there may be more than one second service scenario. The following takes a plurality of second service scenarios as an example to describe how video recommendation is implemented across multiple application programs. As shown in fig. 6, a multi-task learning model (i.e., the operation prediction model) is trained on N mixed scenes (the value of N may be determined according to the actual situation; one scene corresponds to one application program), and the trained operation prediction model runs on a prediction server (i.e., the electronic device in the foregoing embodiments). According to the output of the operation prediction model, the prediction server provides corresponding interactive operation information for the online services of the N scenes (which may be implemented by the servers corresponding to the respective scenes), so that the online services complete video recommendation and other processes.
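The deployment of fig. 6 — a single prediction server answering the online services of N scenes — can be sketched as below. The class, the toy stand-in model and the scene names are assumptions for illustration; the real server would host the trained multi-task DNN.

```python
class PredictionServer:
    """One prediction service shared by the online services of N scenes."""

    def __init__(self, model):
        self.model = model  # the jointly trained multi-task model

    def predict(self, scene, interaction_log):
        # Every scene's online service calls the same deployed model;
        # the scene id selects the matching task behavior.
        return self.model(scene, interaction_log)

def toy_model(scene, log):
    # Hypothetical stand-in for the trained multi-task DNN: a per-scene
    # base rate adjusted by one log feature, clipped to a probability.
    base = {"scene_a": 0.7, "scene_b": 0.4}[scene]
    return min(1.0, base + 0.1 * log.get("past_plays", 0))

server = PredictionServer(toy_model)
# N online services (here two) all route requests to the one server.
online_services = {s: server for s in ("scene_a", "scene_b")}
p = online_services["scene_b"].predict("scene_b", {"past_plays": 2})
```

Each online service then uses the returned probability to complete its own video recommendation flow, so one deployed model serves all N scenes.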
Furthermore, the online services can adjust the prediction server according to the actual operation behavior of the network accounts, and can even retrain the operation prediction model to obtain a more accurate one.
According to the embodiment of the disclosure, the operation prediction model can be jointly trained according to the log data of a plurality of application programs, so that corresponding information can be provided for a plurality of online services through the trained operation prediction model, and the normal operation of the online services is ensured.
In an exemplary embodiment, an application example of the method for determining the interactive operation information of the present disclosure is provided, as shown in fig. 7, including the following steps:
S701, determining a platform account set and first media information of a first service scene;
S702, determining a candidate account set and second media information of a candidate service scene;
S703, determining a first correlation of the platform account set and the candidate account set; determining a second correlation of the first media information and the second media information;
S704, determining a second service scene from the candidate service scenes according to at least one of the first correlation and the second correlation;
S705, inputting a historical operation log of the second business scene into a first prediction model to train an embedded layer of the first prediction model;
S706, inputting the trained output of the embedded layer and the historical operation log of the first business scene into a second prediction model to train the second prediction model;
S707, determining the trained second prediction model as the operation prediction model;
S708, acquiring an interactive operation log of the platform account;
S709, inputting the interactive operation log of the platform account into the trained operation prediction model;
S710, determining third interactive operation information of the platform account according to the output of the operation prediction model;
S711, sorting the candidate videos according to the third interactive operation information;
S712, determining a preset number of top-ranked candidate videos as recommended videos;
S713, pushing the recommended videos to a video display page of the platform account.
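The scene-selection steps S701 to S704 — picking the second service scene by the correlation of account sets and of media information — can be sketched as below, assuming the correlations are measured as Jaccard overlap of id sets. The measure, the 0.3 cut-off and all ids are illustrative choices, not the disclosure's prescribed formulas.

```python
def jaccard(set_a, set_b):
    """Overlap ratio used here as a stand-in correlation measure."""
    if not set_a and not set_b:
        return 0.0
    return len(set_a & set_b) / len(set_a | set_b)

def select_second_scenes(platform_accounts, first_media,
                         candidate_scenes, min_correlation=0.3):
    """Keep candidate scenes whose accounts or media correlate with the
    first scene (steps S703-S704)."""
    selected = []
    for name, (accounts, media) in candidate_scenes.items():
        account_corr = jaccard(platform_accounts, accounts)  # first correlation
        media_corr = jaccard(first_media, media)             # second correlation
        if account_corr >= min_correlation or media_corr >= min_correlation:
            selected.append(name)
    return selected

scenes = {
    "app_lite": ({"u1", "u2", "u3"}, {"m1", "m2"}),
    "unrelated": ({"u9"}, {"m9"}),
}
second = select_second_scenes({"u1", "u2"}, {"m1"}, scenes)
```

Here the scene sharing two of three accounts with the first scene is selected as the second service scene, while the disjoint one is rejected.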
According to the above method for determining interactive operation information, the operation prediction model is trained with log data from the associated first and second service scenes; the interactive operation log of the platform account in the first service scene is input into the trained operation prediction model, and the interactive operation information of the platform account performing the target interactive operation in the first service scene is then determined according to the output of the operation prediction model. Even when the data volume in the first service scene is insufficient, accurate training of the operation prediction model can be completed, the probability prediction information of specific interactive operations of the platform account in the application program can be accurately predicted, and accurate recommended video information can be obtained.
Typical operation prediction models include LR (logistic regression), FM (factorization machines) and DNN (deep neural networks). In most scenes, when the data volume is sufficient, a DNN model can achieve better operation probability accuracy than traditional machine learning methods such as LR and FM. For a better understanding of the above method, an application example of the method for determining interactive operation information of the disclosure is described in detail below, taking the DNN model as an example, and includes the following steps:
A) Acquire the user data log streams in the multi-service scenes: the client exposure logs, the user behavior logs, and the server logs (containing the context information at the time, etc.) of the multiple scene services are collected.
B) Train the mixed-service-scene multi-task model. The user data log streams of the multiple service scenes are simultaneously input as training samples, a multi-task multi-target DNN learning network is designed, and the pxtr estimation models (operation prediction models) of the multiple scenes are jointly trained.
The specific network design may be as shown in fig. 4. Taking two service scenes as an example (the method is not limited to two service scenes and may cover more) — the discovery-page video recommendation of two independent apps of the short video platform Kuaishou, namely the Kuaishou main app and the Kuaishou large-screen version — the network is divided into two parts: on the left is the xtr network of the Kuaishou main app discovery page (i.e., the first prediction model), and on the right is the xtr prediction network of the large-screen version (i.e., the second prediction model), which may be a multi-task learning network. The two models share the bottom-layer common embedding input (the embedded layer). In the training process, the massive data of the Kuaishou main app is used to train the left-hand network, so that the bottom-layer embedding (the common embedding input) is fully trained; the data of the Kuaishou large-screen version together with the shared bottom-layer common embedding input are then used as inputs to train the right-hand multi-target network. This converges faster and better than using only the large-screen data to train the bottom layer of the multi-target network, and yields a better estimation effect. The trained right-hand network may be used to determine pctr (predicted click-through rate) and similar quantities.
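The two-phase training just described — fully training a shared bottom-layer embedding on the data-rich scene, then training the second scene's network on top of it — can be sketched with a minimal pure-Python stand-in. Logistic heads replace the real multi-task DNN, and all feature ids, log contents and learning-rate choices are hypothetical.

```python
import math
import random

random.seed(0)
DIM = 4
embedding = {}  # shared bottom-layer embedding table: feature id -> vector

def embed(feature_ids):
    # Average the embedding vectors of the log's feature ids
    # (sorted for deterministic vector initialization).
    vecs = [embedding.setdefault(f, [random.uniform(-0.1, 0.1) for _ in range(DIM)])
            for f in sorted(feature_ids)]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]

def sgd_step(head, feature_ids, label, lr=0.5):
    """One logistic-regression step for one task head; the gradient also
    flows into the shared embedding table, which is what lets the
    data-rich scene pre-train it for the data-poor scene."""
    x = embed(feature_ids)
    p = 1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(head, x))))
    err = p - label
    for i in range(DIM):
        for f in feature_ids:
            embedding[f][i] -= lr * err * head[i] / len(feature_ids)
        head[i] -= lr * err * x[i]
    return p

left_head = [0.0] * DIM   # first prediction model head (data-rich scene)
right_head = [0.0] * DIM  # second prediction model head (data-poor scene)

# Phase 1: massive first-scene logs fully train the shared embedding.
for feats, y in [(("user_a", "v_played"), 1), (("user_b", "v_skipped"), 0)] * 200:
    sgd_step(left_head, feats, y)

# Phase 2: scarce second-scene logs train the second head on top of the
# already-trained embedding.
for feats, y in [(("user_a", "v_played"), 1), (("user_b", "v_skipped"), 0)] * 5:
    sgd_step(right_head, feats, y)

x = embed(("user_a", "v_played"))
p_play = 1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(right_head, x))))
```

Even with only ten second-scene samples, the second head reaches a confident prediction because the embedding it consumes was already shaped by the four hundred first-scene samples — the convergence benefit the paragraph above describes.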
C) Online estimation service. In existing schemes, each service generally has its own independent estimation service, and even different tasks or targets of the same service may have independent estimation services. According to the embodiments of the disclosure, the model produced by the mixed-service-scene multi-task training method can be used to deploy a pxtr estimation service that serves a plurality of service scenes simultaneously. For example, the logs of a plurality of service scenes such as the Kuaishou express version, the setup version, the large-screen version and the main app can be jointly trained into a multi-task, multi-target large-scale DNN model, and then a single set of online pxtr estimation services is deployed to provide xtr estimation for the express version, the setup version, the large-screen version and the main app. One set of model and estimation service can serve a plurality of business scenes at the same time, which can improve the model and estimation effect while making deployment and maintenance convenient.
The embodiment of the disclosure can realize the following effects:
1. It solves the problem that models such as DNN are difficult to train when the service log volume is small, and in particular the problem that a newly launched service does not have enough samples to train its model. The mixed-service-scene multi-task multi-target model training method can use the massive data of other similar service scenes to assist in training the model of the new service scene, thereby obtaining a better model effect.
2. Multi-target online learning over mixed service scenes can save training machine resources. At the same time, jointly training multiple scenes can enlarge the sample scale and capture more sample features, yielding a better model effect than training an independent model for each service scene, and thus improving the accuracy of the pxtr estimation model in each service scene. Taking short video recommendation in the Kuaishou express version as an example: when the express version had just been released online, it had only a very small number of users. By adopting the mixed-service-scene model training method, the pxtr estimation task of the express version and the pxtr estimation task of the Kuaishou main app can be jointly trained with a multi-task online learning method, so that the express version can train a large-scale multi-task DNN model even when its user logs are very scarce, greatly improving the model effect and online effect.
It should be understood that, although the steps in the flowcharts of fig. 2 and fig. 7 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of the steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 2 and fig. 7 may include sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
Fig. 8 is a block diagram illustrating an apparatus 800 for determining interoperation information according to an example embodiment. Referring to fig. 8, the apparatus includes an operation log acquisition unit 801, an operation log input unit 802, and an operation information determination unit 803.
An operation log obtaining unit 801 is configured to perform obtaining an interaction operation log of a platform account, where the interaction operation log is used to record first interaction operation information that is performed by the platform account in a first service scenario.
An operation log input unit 802 configured to input the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained according to a historical operation log; the historical operation log records second interactive operation information of reference interaction operations performed by reference accounts in at least two business scenes; the at least two business scenes are associated with each other and comprise the first business scene; and the reference interaction operations comprise a target interaction operation.
An operation information determining unit 803 configured to determine third interactive operation information of the platform account according to an output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario.
According to the interactive operation information determining device provided by the disclosure, an operation prediction model is trained according to the historical operation logs of at least two associated business scenes. When the operations of a platform account in a first business scene need to be predicted, an interactive operation log of the platform account is obtained and input into the trained operation prediction model, and the third interactive operation information of the platform account performing a specific interactive operation in the first business scene is determined according to the output of the operation prediction model. According to the embodiments provided by the disclosure, even when the data volume in the first business scene is insufficient, accurate training of the operation prediction model can be completed, and the interactive operation information of the platform account performing a specific interactive operation can be accurately predicted.
In an exemplary embodiment, the at least two service scenarios further include a second service scenario; the device for determining the interactive operation information further comprises: a first information determining unit configured to perform determining a platform account set and first media information of the first service scenario; the platform account set comprises account information of a first candidate account in the first business scene; the first media information is media information pushed to the first candidate account; a second information determining unit configured to perform determining a candidate account set of the candidate service scenario and second media information; the candidate account set comprises account information of a second candidate account in the candidate business scene; the second media information is media information pushed to the second candidate account; a first relevance determining unit configured to perform determining a first relevance of the platform account set and the candidate account set; a second correlation determination unit configured to perform determination of a second correlation of the first media information and the second media information; and a service scenario determining unit configured to perform determination of the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
In an exemplary embodiment, the apparatus for determining the interactive operation information further includes: a first model training unit configured to perform input of a history operation log of the second business scenario into a first prediction model to train an embedded layer of the first prediction model; a second model training unit configured to perform inputting of the trained output of the embedded layer and the historical operation log of the first business scenario into a second prediction model to train the second prediction model; a model determination unit configured to perform determination of the trained second prediction model as the operation prediction model.
In an exemplary embodiment, the operation log input unit is further configured to perform inputting the interactive operation log into the operation prediction model, so as to trigger the operation prediction model to determine a probability of the platform account performing the target interactive operation on candidate media information, where the probability is used to determine third interactive operation information of the platform account; wherein the candidate media information includes media information of the client under the first service scene.
In an exemplary embodiment, the candidate media information includes candidate videos; the apparatus for determining the interactive operation information further includes: a first video ranking unit configured to perform ranking of the candidate videos according to the third interoperation information; a first video determination unit configured to perform determination of a preset number of candidate videos ranked ahead as recommended videos; the first video recommending unit is configured to execute pushing of the recommended video to a video showing page of the platform account.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method, and will not be repeated here.
FIG. 9 is a block diagram of a video recommendation system, according to an exemplary embodiment. Referring to fig. 9, the system includes a server 901 and a client 902 connected by a network. The client is configured to obtain an interactive operation log of a platform account, where the interactive operation log records first interactive operation information performed by the platform account in a first service scenario. The server is configured to input the interactive operation log into a trained operation prediction model, where the operation prediction model is trained according to a historical operation log; the historical operation log records second interactive operation information of reference interaction operations performed by reference accounts in at least two business scenes; the at least two business scenes are associated with each other and comprise the first business scene; and the reference interaction operations comprise a target interaction operation. The server is further configured to: determine third interactive operation information of the platform account according to the output of the operation prediction model, where the third interactive operation information indicates whether the platform account will perform the target interaction operation in the first service scene; sort the candidate videos according to the third interactive operation information, where the candidate videos comprise videos of clients in the first business scene; determine a preset number of top-ranked candidate videos as recommended videos; and push the recommended videos to the client. The client is further configured to output the recommended videos to a video display page of the platform account.
According to the video recommendation system provided by the disclosure, an operation prediction model is trained according to historical operation logs of at least two associated business scenes, when probability prediction needs to be carried out on operation of a platform account in a first business scene, an interactive operation log of the platform account is obtained, the interactive operation log is input into the trained operation prediction model, and third interactive operation information of specific interactive operation of the platform account in the first business scene is determined according to output of the operation prediction model. According to the embodiment provided by the disclosure, when the data volume in the first service scene is insufficient, accurate training of the operation prediction model can be completed, so that the interactive operation information of the platform account for executing the specific interactive operation in the first service scene is accurately predicted.
In an exemplary embodiment, there is provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of determining the interoperation information as described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is provided, such as memory 102, including instructions executable by processor 109 of electronic device 100 to perform the above-described method. For example, the non-transitory computer readable storage medium may be ROM, random Access Memory (RAM), CD-ROM, magnetic tape, floppy disk, optical data storage device, etc.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (13)

1. A method for determining interactive operation information, comprising:
acquiring an interactive operation log of a platform account, wherein the interactive operation log is used for recording first interactive operation information executed by the platform account in a first service scene;
inputting the interactive operation log into a trained operation prediction model, wherein the operation prediction model is obtained by training according to a historical operation log; the history operation log is second interactive operation information of reference interactive operation for reference accounts in at least two business scenes; the at least two business scenes are associated with each other and comprise the first business scene; the reference interactive operation comprises a target interactive operation;
and determining third interactive operation information of the platform account according to the output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will execute the target interactive operation in the first service scene.
2. The method for determining interactive operation information of claim 1, wherein the at least two service scenarios further comprise a second service scenario; before the step of inputting the interactive operation log into the trained operation prediction model, the method for determining interactive operation information further comprises:
determining a platform account set and first media information of the first service scene; the platform account set comprises account information of a first candidate account in the first business scene; the first media information is media information pushed to the first candidate account;
determining a candidate account set and second media information of a candidate service scene; the candidate account set comprises account information of a second candidate account in the candidate business scene; the second media information is media information pushed to the second candidate account;
Determining a first relevance of the platform account set and the candidate account set;
determining a second relevance of the first media information and the second media information;
and determining the second service scene from the candidate service scenes according to at least one of the first correlation and the second correlation.
3. The method for determining interactive operation information according to claim 2, wherein the training step of the operation prediction model comprises:
inputting a historical operation log of the second business scene into a first prediction model to train an embedded layer of the first prediction model;
inputting the trained output of the embedded layer and the historical operation log of the first business scenario into a second prediction model to train the second prediction model;
determining the trained second predictive model as the operational predictive model.
4. The method for determining interactive operation information according to any one of claims 1 to 3, wherein the step of inputting the interactive operation log into a trained operation prediction model comprises:
inputting the interactive operation log into the operation prediction model to trigger the operation prediction model to determine the probability of the platform account executing the target interactive operation on candidate media information, wherein the probability is used for determining third interactive operation information of the platform account; wherein the candidate media information includes media information of the client under the first service scene.
5. The method of claim 4, wherein the candidate media information comprises candidate videos;
after the step of determining third interoperability information of the platform account according to the output of the operation prediction model, the method for determining the interoperability information further includes:
sorting the candidate videos according to the third interactive operation information;
determining a preset number of candidate videos ranked in front as recommended videos;
pushing the recommended video to a video display page of the platform account.
6. An apparatus for determining interactive operation information, comprising:
an operation log obtaining unit configured to perform obtaining an interactive operation log of a platform account, where the interactive operation log is used to record first interactive operation information executed by the platform account in a first service scenario;
an operation log input unit configured to perform input of the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained according to a historical operation log; the history operation log is second interactive operation information of reference interactive operation for reference accounts in at least two business scenes; the at least two business scenes are associated with each other and comprise the first business scene; the reference interactive operation comprises a target interactive operation;
and an operation information determining unit configured to determine third interactive operation information of the platform account according to the output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scene.
7. The apparatus for determining interactive operation information of claim 6, wherein the at least two business scenarios further comprise a second business scenario; the apparatus for determining interactive operation information further comprises:
a first information determining unit configured to determine a platform account set and first media information of the first business scenario; the platform account set comprises account information of first candidate accounts in the first business scenario; the first media information is media information pushed to the first candidate accounts;
a second information determining unit configured to determine a candidate account set and second media information of a candidate business scenario; the candidate account set comprises account information of second candidate accounts in the candidate business scenario; the second media information is media information pushed to the second candidate accounts;
a first correlation determining unit configured to determine a first correlation between the platform account set and the candidate account set;
a second correlation determining unit configured to determine a second correlation between the first media information and the second media information;
and a business scenario determining unit configured to determine the second business scenario from candidate business scenarios according to at least one of the first correlation and the second correlation.
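One plausible realization of the two correlations in claim 7 is Jaccard similarity over the two account sets (first correlation) and cosine similarity over media feature vectors (second correlation). Both metrics and all sample data below are illustrative assumptions; the claim does not prescribe a specific similarity measure:

```python
import math

def jaccard(a, b):
    """Set overlap between two account sets (first correlation)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cosine(u, v):
    """Cosine similarity between two media feature vectors (second correlation)."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical account sets and media feature vectors for two scenarios.
platform_accounts = {"u1", "u2", "u3", "u4"}
candidate_accounts = {"u2", "u3", "u5"}
first_corr = jaccard(platform_accounts, candidate_accounts)    # 2 shared / 5 total = 0.4
second_corr = cosine([1.0, 0.0, 2.0], [0.5, 0.0, 1.0])         # parallel vectors -> 1.0
print(first_corr, second_corr)
```

A candidate business scenario whose correlation(s) exceed a threshold would then be selected as the second business scenario used for pretraining in claim 8.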
9. The apparatus for determining interactive operation information according to claim 7, wherein the apparatus further comprises:
a first model training unit configured to input a historical operation log of the second business scenario into a first prediction model to train an embedding layer of the first prediction model;
a second model training unit configured to input the output of the trained embedding layer and a historical operation log of the first business scenario into a second prediction model to train the second prediction model;
and a model determining unit configured to determine the trained second prediction model as the operation prediction model.
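The two-stage training in claim 8 is essentially a transfer-learning recipe: derive account representations from logs of the related (second) scenario, then train a predictor on the target (first) scenario on top of those frozen representations. A dependency-free sketch, in which the rate-based "embedding", the action names, and the logistic model are all illustrative assumptions rather than the patented method's actual architecture:

```python
import math

def pretrain_embedding(second_log):
    """Stage 1 (stand-in for the embedding layer): represent each account
    by its like-rate and click-rate observed in the second scenario."""
    stats = {}
    for account, action in second_log:
        likes, clicks, total = stats.get(account, (0, 0, 0))
        stats[account] = (likes + (action == "like"),
                          clicks + (action == "click"),
                          total + 1)
    return {a: [l / t, c / t] for a, (l, c, t) in stats.items()}

def train_second_model(embedding, first_log, epochs=500, lr=1.0):
    """Stage 2: logistic regression over the frozen embeddings, fit on the
    first scenario's (account, performed_target_operation) labels."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for account, label in first_log:
            x = embedding[account]
            p = 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
            g = p - label  # gradient of log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, embedding, account):
    w, b = model
    x = embedding[account]
    return 1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

# Hypothetical logs: second scenario gives behavior, first scenario gives labels.
second_log = [("u1", "like"), ("u1", "like"), ("u1", "click"),
              ("u2", "click"), ("u2", "skip"), ("u2", "skip")]
first_log = [("u1", 1), ("u2", 0)]
emb = pretrain_embedding(second_log)
model = train_second_model(emb, first_log)
print(predict(model, emb, "u1") > predict(model, emb, "u2"))  # True
```

The point of the construction is that labels from a data-rich related scenario shape the representation, so the second model needs fewer labeled examples from the target scenario.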
9. The apparatus according to any one of claims 6 to 8, wherein the operation log input unit is further configured to input the interactive operation log into the operation prediction model to trigger the operation prediction model to determine the probability that the platform account performs the target interactive operation on candidate media information, the probability being used to determine the third interactive operation information of the platform account; wherein the candidate media information comprises media information of the client in the first business scenario.
10. The apparatus for determining interactive operation information of claim 9, wherein the candidate media information comprises candidate videos; the apparatus for determining interactive operation information further comprises:
a first video ranking unit configured to rank the candidate videos according to the third interactive operation information;
a first video determining unit configured to determine a preset number of top-ranked candidate videos as recommended videos;
and a first video recommending unit configured to push the recommended videos to a video display page of the platform account.
11. A video recommendation system, comprising: a server and a client;
the client is configured to acquire an interactive operation log of a platform account, wherein the interactive operation log records first interactive operation information executed by the platform account in a first business scenario;
the server is configured to input the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained on a historical operation log; the historical operation log records second interactive operation information of reference interactive operations performed by reference accounts in at least two business scenarios; the at least two business scenarios are associated with each other and comprise the first business scenario; the reference interactive operations comprise a target interactive operation; determine third interactive operation information of the platform account according to the output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first business scenario; rank candidate videos according to the third interactive operation information, the candidate videos comprising videos of the client in the first business scenario; determine a preset number of top-ranked candidate videos as recommended videos; and push the recommended videos to the client;
the client is further configured to output the recommended videos to a video display page of the platform account.
12. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method for determining interactive operation information according to any one of claims 1 to 5.
13. A storage medium storing instructions that, when executed by a processor of an electronic device, cause the electronic device to perform the method for determining interactive operation information according to any one of claims 1 to 5.
CN202010190752.4A 2020-03-18 2020-03-18 Interactive operation information determining method and device and video recommendation system Active CN113495966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010190752.4A CN113495966B (en) 2020-03-18 2020-03-18 Interactive operation information determining method and device and video recommendation system

Publications (2)

Publication Number Publication Date
CN113495966A CN113495966A (en) 2021-10-12
CN113495966B true CN113495966B (en) 2023-06-23

Family

ID=77993017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010190752.4A Active CN113495966B (en) 2020-03-18 2020-03-18 Interactive operation information determining method and device and video recommendation system

Country Status (1)

Country Link
CN (1) CN113495966B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629665A (en) * 2018-05-08 2018-10-09 北京邮电大学 A kind of individual commodity recommendation method and system
WO2018223194A1 (en) * 2017-06-09 2018-12-13 Alerte Digital Sport Pty Ltd Systems and methods of prediction of injury risk with a training regime
CN109408724A (en) * 2018-11-06 2019-03-01 北京达佳互联信息技术有限公司 Multimedia resource estimates the determination method, apparatus and server of clicking rate
CN109862432A (en) * 2019-01-31 2019-06-07 厦门美图之家科技有限公司 Clicking rate prediction technique and device
CN110276446A (en) * 2019-06-26 2019-09-24 北京百度网讯科技有限公司 The method and apparatus of model training and selection recommendation information
CN110400169A (en) * 2019-07-02 2019-11-01 阿里巴巴集团控股有限公司 A kind of information-pushing method, device and equipment
CN110442790A (en) * 2019-08-07 2019-11-12 腾讯科技(深圳)有限公司 Recommend method, apparatus, server and the storage medium of multi-medium data
CN110717099A (en) * 2019-09-25 2020-01-21 优地网络有限公司 Method and terminal for recommending film

Similar Documents

Publication Publication Date Title
CN108197327B (en) Song recommendation method, device and storage medium
CN109684510B (en) Video sequencing method and device, electronic equipment and storage medium
CN110782034A (en) Neural network training method, device and storage medium
CN109308490B (en) Method and apparatus for generating information
CN109145828B (en) Method and apparatus for generating video category detection model
CN111708941A (en) Content recommendation method and device, computer equipment and storage medium
US20170118298A1 (en) Method, device, and computer-readable medium for pushing information
CN110175223A (en) A kind of method and device that problem of implementation generates
CN111753895A (en) Data processing method, device and storage medium
CN115909127A (en) Training method of abnormal video recognition model, abnormal video recognition method and device
CN111046927A (en) Method and device for processing labeled data, electronic equipment and storage medium
US10592832B2 (en) Effective utilization of idle cycles of users
CN114049529A (en) User behavior prediction method, model training method, electronic device, and storage medium
CN110941727B (en) Resource recommendation method and device, electronic equipment and storage medium
CN113495966B (en) Interactive operation information determining method and device and video recommendation system
CN115994266A (en) Resource recommendation method, device, electronic equipment and storage medium
CN111143608A (en) Information pushing method and device, electronic equipment and storage medium
CN112559673A (en) Language processing model training method and device, electronic equipment and storage medium
CN115203543A (en) Content recommendation method, and training method and device of content recommendation model
CN111553800B (en) Data processing method and device, electronic equipment and storage medium
CN114943336A (en) Model pruning method, device, equipment and storage medium
CN115238126A (en) Method, device and equipment for reordering search results and computer storage medium
US11010935B2 (en) Context aware dynamic image augmentation
CN113886674A (en) Resource recommendation method and device, electronic equipment and storage medium
CN111753266A (en) User authentication method, multimedia content pushing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant