CN113495966A - Method and device for determining interactive operation information, and video recommendation system


Info

Publication number
CN113495966A
Authority
CN
China
Prior art keywords
information
determining
prediction model
interactive operation
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010190752.4A
Other languages
Chinese (zh)
Other versions
CN113495966B (en)
Inventor
Wang Jun (王君)
Hong Liyin (洪立印)
Jiang Peng (江鹏)
Hu Yong (胡勇)
Leng Dewei (冷德维)
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202010190752.4A priority Critical patent/CN113495966B/en
Publication of CN113495966A publication Critical patent/CN113495966A/en
Application granted granted Critical
Publication of CN113495966B publication Critical patent/CN113495966B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73: Querying
    • G06F 16/735: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/40: Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F 16/43: Querying
    • G06F 16/435: Filtering based on additional data, e.g. user or group profiles
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/73: Querying
    • G06F 16/738: Presentation of query results
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/75: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70: Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78: Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/7867: Retrieval characterised by using metadata using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G06N 3/084: Backpropagation, e.g. using gradient descent

Abstract

The disclosure relates to a method and device for determining interactive operation information, and to a video recommendation system. The method includes: obtaining an interactive operation log of a platform account, the log recording first interactive operation information generated by the platform account in a first service scenario; inputting the log into a trained operation prediction model, the model having been trained on historical operation logs, which record second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios, the reference interactive operations including a target interactive operation; and determining, from the model's output, third interactive operation information indicating whether the platform account will perform the target interactive operation in the first service scenario. With this technical solution, the operation information of a platform account performing a specific interactive operation in a service scenario can be predicted accurately.

Description

Method and device for determining interactive operation information, and video recommendation system
Technical Field
The present disclosure relates to the field of network technologies, and in particular, to a method and an apparatus for determining interactive operation information, a video recommendation system, an electronic device, and a storage medium.
Background
In fields such as search, recommendation, and advertising, it is often necessary to estimate whether a user will perform an interactive operation on a network platform, and with what probability, for example by obtaining a predicted X-through rate (pXTR). Currently, a prediction model is typically used to estimate the execution probability of a particular interactive operation. In most scenarios, when the data volume is sufficient, a conventional estimation model can estimate the probability of an interactive operation well.
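The pXTR estimation mentioned above can be sketched as a logistic model over features drawn from a user's interaction log. The feature names and weights below are purely illustrative and not taken from the patent; this is a minimal sketch assuming a pretrained linear model:

```python
import math

def predict_xtr(weights, bias, features):
    """Estimate pXTR: the predicted probability that a user performs an
    interaction X (e.g. a click or a like). `weights`, `bias`, and the
    feature names are hypothetical stand-ins for a trained model."""
    z = bias + sum(w * features.get(name, 0.0) for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic (sigmoid) link

# Hypothetical per-impression features derived from an interaction log.
weights = {"past_likes": 0.8, "watch_ratio": 1.5, "is_followed_author": 0.6}
pxtr = predict_xtr(weights, bias=-2.0,
                   features={"past_likes": 3.0, "watch_ratio": 0.9,
                             "is_followed_author": 1.0})
```

With sufficient log data, fitting such weights is straightforward; the patent's concern is the case where the target scenario lacks that data.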
However, the inventors found that training these models places certain demands on the amount of data. For a newly launched application, the volume of log data in its scenario is often small; in this case, directly applying a conventional model to determine the probability of an interactive operation tends to yield low accuracy.
Disclosure of Invention
The present disclosure provides a method and an apparatus for determining interactive operation information, a video recommendation system, an electronic device, and a storage medium, so as to at least solve the problem of poor prediction of interactive operation information in the related art. The technical solution of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, a method for determining interactive operation information is provided, including: obtaining an interactive operation log of a platform account, the interactive operation log recording first interactive operation information executed by the platform account in a first service scenario; inputting the interactive operation log into a trained operation prediction model, the operation prediction model being trained on a historical operation log, where the historical operation log records second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios, the at least two service scenarios are associated with each other and include the first service scenario, and the reference interactive operations include a target interactive operation; and determining third interactive operation information of the platform account according to the output of the operation prediction model, the third interactive operation information indicating whether the platform account will perform the target interactive operation in the first service scenario.
In an exemplary embodiment, the at least two service scenarios further include a second service scenario, and before inputting the interactive operation log into the trained operation prediction model, the method further includes: determining a platform account set and first media information of the first service scenario, where the platform account set includes account information of first candidate accounts in the first service scenario and the first media information is the media information pushed to the first candidate accounts; determining a candidate account set and second media information of a candidate service scenario, where the candidate account set includes account information of second candidate accounts in the candidate service scenario and the second media information is the media information pushed to the second candidate accounts; determining a first correlation between the platform account set and the candidate account set; determining a second correlation between the first media information and the second media information; and determining the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
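The scenario-selection step above, which compares account sets and pushed media across scenarios, can be sketched with a simple set-overlap (Jaccard) correlation. The account IDs, tag vocabularies, and threshold below are hypothetical; the patent does not prescribe a particular correlation measure:

```python
def jaccard(a, b):
    """Set overlap in [0, 1]; used here both for account sets and for
    the tag vocabularies describing pushed media."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def pick_second_scenario(target_accounts, target_tags, candidates, threshold=0.3):
    """Among candidate scenarios, pick the one most correlated with the
    target (first) scenario by account overlap or media-tag overlap."""
    best_name, best_score = None, threshold
    for name, (accounts, tags) in candidates.items():
        score = max(jaccard(target_accounts, accounts),
                    jaccard(target_tags, tags))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical candidate scenarios: (account set, media-tag set) pairs.
candidates = {
    "live_streaming": ({"u9", "u10"}, {"music"}),
    "short_video":    ({"u1", "u2"},  {"comedy", "music"}),
}
second = pick_second_scenario({"u1", "u2", "u3"}, {"comedy"}, candidates)
```

Here "short_video" wins because two of its three accounts overlap the target scenario's accounts; using either correlation alone, as the embodiment allows, only changes which `jaccard` term is kept.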
In an exemplary embodiment, the training of the operation prediction model includes: inputting the historical operation log of the second service scenario into a first prediction model to train an embedding layer of the first prediction model; inputting the output of the trained embedding layer together with the historical operation log of the first service scenario into a second prediction model to train the second prediction model; and determining the trained second prediction model as the operation prediction model.
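The two-stage training above amounts to transfer learning: an embedding layer is fitted on the data-rich second scenario, then reused as input to a model fitted on the sparse first scenario. The sketch below only mimics the structure of that pipeline; the hash-seeded "embeddings" stand in for weights that stage 1 would actually learn by backpropagation, and all names are hypothetical:

```python
import random

class EmbeddingTable:
    """Pretend embedding layer; real training would learn these vectors."""
    def __init__(self, dim):
        self.dim, self.table = dim, {}
    def lookup(self, key):
        if key not in self.table:
            rng = random.Random(key)  # deterministic stand-in for learned weights
            self.table[key] = [rng.uniform(-0.1, 0.1) for _ in range(self.dim)]
        return self.table[key]

def train_first_model(second_scenario_logs, dim=8):
    """Stage 1: fit the embedding layer on the second scenario's logs
    (here, merely instantiating a vector per seen account/item)."""
    emb = EmbeddingTable(dim)
    for rec in second_scenario_logs:
        emb.lookup(rec["account"])
        emb.lookup(rec["item"])
    return emb

def second_model_features(emb, rec):
    """Stage 2: the second prediction model consumes the pretrained
    embeddings plus the first scenario's own log fields."""
    return emb.lookup(rec["account"]) + emb.lookup(rec["item"]) + [rec["watch_ratio"]]
```

The design point is that accounts and items seen in the rich scenario arrive at stage 2 with informative representations, so the sparse scenario's model has far fewer parameters to learn from its small log.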
In an exemplary embodiment, inputting the interactive operation log into the trained operation prediction model includes: inputting the interactive operation log into the operation prediction model to trigger the model to determine the probability that the platform account performs the target interactive operation on candidate media information, the probability being used to determine the third interactive operation information of the platform account, where the candidate media information includes the media information of the client in the first service scenario.
In an exemplary embodiment, the candidate media information includes candidate videos, and after determining the third interactive operation information of the platform account according to the output of the operation prediction model, the method further includes: sorting the candidate videos according to the third interactive operation information; determining a preset number of the top-ranked candidate videos as recommended videos; and pushing the recommended videos to a video presentation page of the platform account.
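The ranking-and-push step above reduces to a sort over the predicted probabilities followed by a top-N cut. Video IDs and scores below are illustrative:

```python
def top_n_recommendations(candidate_videos, predicted_probs, n=3):
    """Rank candidate videos by the predicted probability of the target
    interaction (the third interactive operation information) and keep
    a preset number n of the top-ranked videos as recommendations."""
    ranked = sorted(candidate_videos, key=lambda v: predicted_probs[v], reverse=True)
    return ranked[:n]

# Hypothetical per-video probabilities produced by the operation prediction model.
probs = {"vid_a": 0.91, "vid_b": 0.12, "vid_c": 0.55, "vid_d": 0.73}
recommended = top_n_recommendations(list(probs), probs, n=2)
```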
According to a second aspect of the embodiments of the present disclosure, an apparatus for determining interactive operation information is provided, including: an operation log obtaining unit configured to obtain an interactive operation log of a platform account, the interactive operation log recording first interactive operation information executed by the platform account in a first service scenario; an operation log input unit configured to input the interactive operation log into a trained operation prediction model, the operation prediction model being trained on a historical operation log, where the historical operation log records second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios, the at least two service scenarios are associated with each other and include the first service scenario, and the reference interactive operations include a target interactive operation; and an operation information determination unit configured to determine third interactive operation information of the platform account according to the output of the operation prediction model, the third interactive operation information indicating whether the platform account will perform the target interactive operation in the first service scenario.
In an exemplary embodiment, the at least two service scenarios further include a second service scenario, and the apparatus further includes: a first information determination unit configured to determine a platform account set and first media information of the first service scenario, where the platform account set includes account information of first candidate accounts in the first service scenario and the first media information is the media information pushed to the first candidate accounts; a second information determination unit configured to determine a candidate account set and second media information of a candidate service scenario, where the candidate account set includes account information of second candidate accounts in the candidate service scenario and the second media information is the media information pushed to the second candidate accounts; a first correlation determination unit configured to determine a first correlation between the platform account set and the candidate account set; a second correlation determination unit configured to determine a second correlation between the first media information and the second media information; and a service scenario determination unit configured to determine the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
In an exemplary embodiment, the apparatus further includes: a first model training unit configured to input the historical operation log of the second service scenario into a first prediction model to train an embedding layer of the first prediction model; a second model training unit configured to input the output of the trained embedding layer together with the historical operation log of the first service scenario into a second prediction model to train the second prediction model; and a model determination unit configured to determine the trained second prediction model as the operation prediction model.
In an exemplary embodiment, the operation log input unit is further configured to input the interactive operation log into the operation prediction model to trigger the model to determine the probability that the platform account performs the target interactive operation on candidate media information, the probability being used to determine the third interactive operation information of the platform account, where the candidate media information includes the media information of the client in the first service scenario.
In an exemplary embodiment, the candidate media information includes candidate videos, and the apparatus further includes: a first video sorting unit configured to sort the candidate videos according to the third interactive operation information; a first video determination unit configured to determine a preset number of the top-ranked candidate videos as recommended videos; and a first video recommendation unit configured to push the recommended videos to a video presentation page of the platform account.
According to a third aspect of the embodiments of the present disclosure, a video recommendation system is provided, including a server and a client. The client is configured to obtain an interactive operation log of a platform account, the interactive operation log recording first interactive operation information executed by the platform account in a first service scenario. The server is configured to: input the interactive operation log into a trained operation prediction model, the operation prediction model being trained on a historical operation log, where the historical operation log records second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios, the at least two service scenarios are associated with each other and include the first service scenario, and the reference interactive operations include a target interactive operation; determine third interactive operation information of the platform account according to the output of the operation prediction model, the third interactive operation information indicating whether the platform account will perform the target interactive operation in the first service scenario; sort candidate videos according to the third interactive operation information, where the candidate videos include the videos of the client in the first service scenario; determine a preset number of the top-ranked candidate videos as recommended videos; and push the recommended videos to the client. The client is further configured to output the recommended videos to a video presentation page of the platform account.
According to a fourth aspect of embodiments of the present disclosure, there is provided an electronic device comprising a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of determining interoperation information as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a storage medium, wherein instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method for determining interoperation information as described above.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects. An operation prediction model is trained on the historical operation logs of at least two associated service scenarios. When the operation of a platform account in a first service scenario needs to be predicted, the interactive operation log of the platform account is obtained and input into the trained operation prediction model, and third interactive operation information of the platform account performing a specific interactive operation in the first service scenario is determined according to the model's output, so as to determine whether the platform account will perform the target interactive operation in the first service scenario. With the embodiments provided by the present disclosure, the operation prediction model can be trained accurately even when the data volume in the first service scenario is insufficient, and the interactive operation information of the platform account performing a specific interactive operation in the first service scenario can then be predicted accurately.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a diagram illustrating an application environment of a method for determining interactive operation information according to an exemplary embodiment;
FIG. 2 is a flowchart illustrating a method for determining interactive operation information according to an exemplary embodiment;
FIG. 3 is a flow diagram illustrating the training of an operation prediction model according to an exemplary embodiment;
FIG. 4 is a block diagram illustrating an operation prediction model according to an exemplary embodiment;
FIG. 5 is a diagram of a display interface for a recommended video according to an exemplary embodiment;
FIG. 6 is a block diagram illustrating interactive operation prediction across multiple scenarios according to an exemplary embodiment;
FIG. 7 is a flowchart illustrating a method for determining interactive operation information according to another exemplary embodiment;
FIG. 8 is a block diagram illustrating an apparatus for determining interactive operation information according to an exemplary embodiment;
FIG. 9 is a block diagram illustrating a video recommendation system according to an exemplary embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The method for determining interactive operation information provided by the embodiments of the present disclosure may be applied to the electronic device shown in FIG. 1. Referring to FIG. 1, the electronic device 100 may include one or more of the following components: a processing component 101, a memory 102, a power component 103, a multimedia component 104, an audio component 105, an input/output (I/O) interface 106, a sensor component 107, and a communication component 108.
The processing component 101 generally controls overall operations of the electronic device 100, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 101 may include one or more processors 109 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 101 may include one or more modules that facilitate interaction between the processing component 101 and other components. For example, the processing component 101 may include a multimedia module to facilitate interaction between the multimedia component 104 and the processing component 101.
The memory 102 is configured to store various types of data to support operations at the electronic device 100. Examples of such data include instructions for any application or method operating on the electronic device 100, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 102 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 103 provides power to the various components of the electronic device 100. Power components 103 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for electronic device 100.
The multimedia component 104 includes a screen that provides an output interface between the electronic device 100 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 104 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 100 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 105 is configured to output and/or input audio signals. For example, the audio component 105 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 100 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 102 or transmitted via the communication component 108. In some embodiments, audio component 105 also includes a speaker for outputting audio signals.
The I/O interface 106 provides an interface between the processing component 101 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 107 includes one or more sensors for providing status assessments of various aspects of the electronic device 100. For example, the sensor component 107 may detect the open/closed state of the electronic device 100 and the relative positioning of components, such as the display and keypad of the electronic device 100. The sensor component 107 may also detect a change in position of the electronic device 100 or one of its components, the presence or absence of user contact with the electronic device 100, the orientation or acceleration/deceleration of the electronic device 100, and a change in its temperature. The sensor component 107 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 107 may further include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 108 is configured to facilitate wired or wireless communication between the electronic device 100 and other devices. The electronic device 100 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 108 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 108 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 100 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
FIG. 2 is a flowchart illustrating a method for determining interactive operation information according to an exemplary embodiment. The method is used in an electronic device and, as shown in FIG. 2, includes the following steps:
in step S201, an interactive operation log of a platform account is obtained, where the interactive operation log is used to record first interactive operation information executed by the platform account in a first service scenario.
A service scenario refers to software built for a certain application purpose; it may be a specific application program, such as a short-video application, a video playback application, or a live-streaming application, or a certain version or page of such an application. When using an application, a network user usually needs to register an account. The registered platform account carries information such as the account number and the user's personal information, and through the platform account the user can perform corresponding interactive operations in the application, for example playing videos, publishing live streams, liking, and sharing. Specifically, the first service scenario is the application currently targeted by the electronic device at runtime, that is, the scenario in which the interactive operations of the platform account currently need to be predicted. The platform account is a network account registered in the first service scenario; there may be one or more such accounts, and if there are several, the electronic device may predict, asynchronously or synchronously, the interactive operation information of each account performing the specific interactive operation in the first service scenario. For ease of description, a single platform account is assumed below.
The interactive operation log may refer to log data generated when the platform account performs reference interactive operations in the first service scenario. It may include data formed by active operations of the platform account, such as the number of likes, like timestamps, and the liked accounts, as well as data formed by passive operations, such as viewing duration (playback duration) and page dwell duration. Further, the interactive operation log may be a historical operation log generated by the platform account within a set historical time period (the length of which may be determined according to actual conditions); that is, the interactive operations of the platform account within a current or future time period (the length of which may likewise be determined according to actual conditions) are predicted from the historical operation log.
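To make the active/passive log distinction concrete, the following is a minimal sketch of what such log records might look like, together with the time-window filtering step described above. All field names are illustrative assumptions, not a format specified by this disclosure.

```python
from datetime import datetime

# Hypothetical interactive-operation log records for one platform account.
interactive_operation_log = [
    {"account_id": "user_001", "operation": "like",            # active operation
     "timestamp": "2020-03-01T10:15:00", "target": "video_42"},
    {"account_id": "user_001", "operation": "play",            # passive data:
     "timestamp": "2020-03-01T10:16:10", "target": "video_43", # play duration
     "play_duration_s": 35.2},
]

def filter_by_window(log, start, end):
    """Keep only records inside the set historical time period."""
    return [r for r in log
            if start <= datetime.fromisoformat(r["timestamp"]) <= end]
```

The filtered records would then be the input used for prediction of the current or future time period.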
A reference interactive operation may refer to any interactive operation that can be performed in a service scenario, and the interactive operations corresponding to different applications may differ. For example, in a short video application, the reference interactive operations may include login, logout, sliding, playing, pausing playback, following, liking, commenting, viewing comments, forwarding, and tagging; in a video playing platform, they may include login, logout, sliding, playing, pausing playback, sending bullet comments, forwarding, uploading, and downloading. It should be noted that the reference interactive operations may form a reference interactive operation set, and in an actual application scenario the platform account may perform all or part of the operations in that set.
Login may refer to obtaining the right to use an application through its account password and then performing corresponding interactive operations in the application. Logout ends use of the application; after logout the application may keep running in the background, or may be closed directly. Sliding may refer to swiping up, down, left, or right on some interface of an application to refresh a video, exit a page, and so on. Playing may refer to opening a video playing interface in the application to play a video, which may be triggered by specific operations such as clicking, double-clicking, or swiping in any direction. Pausing playback may refer to stopping the current video or live broadcast, which may be triggered by a specific operation such as clicking. Following may be established through a specific follow control and associates the account with a followed account; when the followed account updates its information, the following account can obtain the update in real time. Liking may be performed through a specific like control; the liked account (or liked media information) is tagged with a corresponding like label, and when multiple likes are received, the like count is accumulated. Commenting may mean that a user posts his or her own view of media information, such as a video, through a dialog box, so that the commented account (or other accounts) can see the comment content. Viewing comments may be performed through a specific control, allowing the platform account to see other accounts' views on the media information as well as their comments on the user.
Forwarding may refer to transferring media information to another platform so that accounts on that platform can also obtain it, for example, forwarding a short video published on Kuaishou to WeChat so that WeChat friends can view it. Tagging may refer to labeling media information with a type, for example, marking a short video as a "funny video". Sending a bullet comment expresses one's own view of a video or live broadcast through a specific bullet-comment dialog box. Uploading may mean transferring local media information, such as videos or articles, to the server hosting the application, after which other platform accounts can view or download it. Downloading may mean saving media information from the application locally.
In many service scenarios, the interactive operations of a platform account at the present or a future time need to be predicted so that information can then be pushed to the platform account. The prediction process is as follows:
In step S202, the interactive operation log is input into a trained operation prediction model, where the operation prediction model is obtained by training on historical operation logs; the historical operation logs are second interactive operation information of reference interactive operations performed by reference accounts in at least two service scenarios; the at least two service scenarios are associated with each other and include the first service scenario; and the reference interactive operations include a target interactive operation.
The operation prediction model is a model that analyzes the input log data and outputs information such as the probability that the corresponding platform account will perform a specific interactive operation. The operation prediction model may be a neural network model that extracts feature information from the input data, analyzes and learns it, and outputs a result. Further, in some exemplary embodiments, the operation prediction model may be an LR model, an FM model, a DNN model, a DCNN (deep convolutional neural network) model, a DN (deconvolutional neural network) model, a GAN (generative adversarial network) model, or the like. Further, the operation prediction model may include an input layer, hidden layers (there may be multiple, for example, 128), and an output layer; each hidden layer may include multiple neurons, each neuron classifying the input from the previous layer, so that the input is analyzed layer by layer and the final operation prediction result reaches the output layer.
Further, the training of the operation prediction model may include a forward propagation process (also referred to as a feed-forward process) and a backward propagation process (also referred to as a feedback process); that is, one forward propagation pass and one backward propagation pass together constitute one training iteration of the operation prediction model.
That the at least two service scenarios are associated means that a certain association relationship exists between them, which may be similarity of the functions they provide, similarity of the information they provide, similarity of their user systems, and the like. The at least two service scenarios may include the first service scenario and a second service scenario. The first service scenario and the second service scenario are necessarily different (otherwise they would be the same scenario); therefore, they may be regarded as different applications or different versions of an application, for example, the Kuaishou main site and a large-screen version.
Further, the reference interactive operations may be determined according to the actual interactive operations available, and may be the same as or different from the interactive operations recorded in the interactive operation information of the platform account in the first service scenario. In addition, there may be more than one second service scenario; that is, the operation prediction model may fuse log data from at least two service scenarios. To facilitate joint training of the operation prediction model, the log data of the first service scenario and the second service scenario may be designed in a unified format, or the logs may be preprocessed into similar formats. In addition, the target interactive operation may be one or more of the reference interactive operations, and may be selected from them by a developer or determined algorithmically according to the functions provided by the first service scenario. For example, if the first service scenario is the Kuaishou main site and it is detected that the operation users perform most frequently in this scenario is playing short videos, the target interactive operation may be determined to be playing short videos.
Further, the historical operation logs are the interaction information of the reference accounts performing reference interactive operations in the at least two service scenarios in the past (relative to the current running time of the electronic device). The historical operation logs strongly characterize the interactive operations the platform account is likely to perform in the first service scenario, so an operation prediction model trained on them achieves high accuracy.
The trained operation prediction model is obtained by joint training on the historical operation logs. It fuses log data from the first service scenario, which reflects the actual operation behavior of users in that scenario, and it also fuses log data from other service scenarios, so that when the data volume of the first service scenario is insufficient, as much additional training data as possible can be incorporated, yielding a more accurate trained model. The trained operation prediction model can perform operations such as feature extraction and classification on the interactive operation log, and thereby determine the operation information for each reference interactive operation performed by the platform account in the first service scenario. If the difference between the prediction result of a model trained on the currently available training data and the actual result is large, the corresponding amount of training sample data is considered insufficient. Further, whether the data volume is sufficient may be judged according to the complexity of the functions implemented by the application: a complex application requires a large amount of training data.
Further, the interactive operation log can be converted into a vector form and then input into the operation prediction model.
In step S203, third interactive operation information of the platform account is determined according to the output of the operation prediction model, where the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario.
The third interactive operation information may refer to the likelihood that the platform account will perform the target interactive operation within a future period of time (the specific time period may be determined according to actual conditions), and may be represented by a specific probability value. Specifically, the third interactive operation information may be represented by pXtr (predicted X-through rate), that is, the estimated probability of behavior X occurring, where X may be clicking, liking, following, and the like. Further, the third interactive operation information may be output directly by the operation prediction model, or may be computed by the electronic device from the classification information output by the model.
In the method for determining interactive operation information described above, the operation prediction model is trained on the historical operation logs of at least two associated service scenarios. When the probability of an operation of the platform account in the first service scenario needs to be predicted, the interactive operation log of the platform account is obtained and input into the trained operation prediction model, and the third interactive operation information of the platform account performing the specific interactive operation in the first service scenario is determined from the model's output. According to the embodiments provided by this disclosure, even when the data volume in the first service scenario is insufficient, accurate training of the operation prediction model can be completed, and the interactive operation information of the platform account performing the specific interactive operation in the first service scenario can thus be predicted accurately.
The second service scenario may be determined in a number of ways, for example, according to the developer of the service scenario (e.g., applications developed by the same developer are determined to be second service scenarios), its category (e.g., applications of the same short-video-playing type), its account system (a set of accounts), the media information it provides, and so on. The following illustrates determining the second service scenario from the account system and the provided media information:
In an exemplary embodiment, the at least two service scenarios further include a second service scenario, and before the step of inputting the interactive operation log into the trained operation prediction model, the method for determining interactive operation information further includes: determining a platform account set and first media information of the first service scenario, where the platform account set includes account information of first candidate accounts in the first service scenario, and the first media information is the media information pushed to the first candidate accounts; determining a candidate account set and second media information of a candidate service scenario, where the candidate account set includes account information of second candidate accounts in the candidate service scenario, and the second media information is the media information pushed to the second candidate accounts; determining a first correlation between the platform account set and the candidate account set; determining a second correlation between the first media information and the second media information; and determining the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
An account set may refer to a set formed by all or part of the accounts registered in an application; the accounts in the set may have an association relationship (for example, friends, family members, colleagues) or none at all. Media information may refer to information pushed by a client, a server, or the like to the corresponding platform account, and may be videos (including short videos), articles, pictures, and the like.
In addition, the candidate service scenarios may refer to all or part of the applications that the electronic device can access, and there may be more than one. Further, there may be a certain association relationship between candidate service scenarios, for example, they may be developed by the same developer or provide similar functionality (e.g., all being short video or live broadcast applications).
Correlation refers to the degree of association between two variables, for example, the degree of association between the accounts contained in two account sets, or between two bodies of media information.
Specifically, in the embodiments of the present disclosure, the first correlation may refer to the degree of association between the platform account set and the accounts in the candidate account set, and may specifically be the ratio of common (or associated) accounts, or the strength of the associations between accounts (for example, the proportion of accounts that are friends or colleagues). Further, taking the proportion of common accounts as an example, the first correlation may be determined as follows: determine the account information of account set A contained in the platform account set; determine the account information of account set B contained in the candidate account set; determine the number C1 of accounts common to set A and set B; determine the total number C2 of distinct accounts in set A and set B; and determine the ratio of C1 to C2 as the first correlation.
The second correlation in the embodiments of the present disclosure may refer to the degree of similarity between the first media information and the second media information, and may specifically be the ratio of identical media information, the ratio of media information from the same source among all media information, and the like. Taking the ratio of identical media information as an example, the second correlation may be determined as follows: determine the number C3 of media items common to the first media information and the second media information; determine the total number C4 of distinct media items in the first and second media information; and determine the ratio of C3 to C4 as the second correlation.
The second service scenario may be determined from the candidate service scenarios according to at least one of the first correlation and the second correlation, for example as follows: (1) if the first correlation is greater than a set threshold, the corresponding candidate service scenario is judged to satisfy the condition and is determined as the second service scenario; (2) if the second correlation is greater than a set threshold, the corresponding candidate service scenario is judged to satisfy the condition and is determined as the second service scenario; (3) if both the first correlation and the second correlation are greater than their set thresholds, the corresponding candidate service scenario is judged to satisfy the condition and is determined as the second service scenario. The threshold conditions that the first and second correlations need to satisfy may be determined according to actual circumstances, which this disclosure does not limit. Of course, the condition on the first or second correlation need not only be exceeding a certain threshold; it may also be falling within a certain threshold range, or some other condition.
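The correlation computations (ratio C1/C2 and C3/C4) and the threshold-based selection rule can be sketched as follows. This is a minimal sketch: the overlap ratio (common elements over distinct elements) and the 0.3 default threshold are illustrative assumptions consistent with, but not mandated by, the description above.

```python
def overlap_ratio(set_a, set_b):
    """Ratio of common elements (C1 or C3) to total distinct elements (C2 or C4)."""
    union = set_a | set_b
    if not union:
        return 0.0
    return len(set_a & set_b) / len(union)

def is_second_scenario(platform_accounts, candidate_accounts,
                       first_media, second_media,
                       threshold=0.3, require_both=False):
    """Judge whether a candidate service scenario qualifies as the second scenario."""
    first_corr = overlap_ratio(platform_accounts, candidate_accounts)   # account system
    second_corr = overlap_ratio(first_media, second_media)              # media information
    if require_both:          # condition (3): both correlations must exceed the threshold
        return first_corr > threshold and second_corr > threshold
    return first_corr > threshold or second_corr > threshold            # conditions (1)/(2)
```

A caller would evaluate `is_second_scenario` for each candidate scenario and keep those that qualify.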
The above embodiment implements the process of determining the second service scenario associated with the first service scenario. The second service scenario is determined mainly through the correlation of the account systems and of the media information; the implementation is simple and the result reliable, which effectively guarantees the reliability of the input data of the operation prediction model.
In an exemplary embodiment, as shown in fig. 3, the training step of the operation prediction model includes:
s301, inputting the historical operation log of the second service scene into a first prediction model so as to train an embedded layer of the first prediction model.
The first prediction model may refer to a neural network model for determining the probability that a certain interactive operation will be performed, and may be an LR model, an FM model, a DNN model, a DCNN model, a DN model, a GAN model, or the like. The first prediction model performs machine learning operations such as feature extraction, feature analysis, and classification on the historical operation log of the second service scenario to obtain predicted operation information. The predicted operation information may be the probabilities, output by the first prediction model after learning the historical operation log of the second service scenario, that the respective reference accounts perform the reference interactive operations. Further, the historical operation log of the second service scenario may be input into the embedding layer (input layer) of the first prediction model. The hidden layers connected to the embedding layer analyze the historical operation log layer by layer and output to the output layer, from which the predicted operation information is obtained; this constitutes the forward propagation process.
Further, the training of the first prediction model, including its embedding layer, is completed through the backward propagation process.
Specifically, the actual account operation information of the reference accounts is compared with the predicted operation information to obtain a corresponding loss value (difference value), that is, an operation loss value; a loss function is constructed from this operation loss value, and the weights and other parameters of the first prediction model are adjusted by minimizing the loss function. The adjustment propagates backward from the last hidden layer (the one connected to the output layer) through the first hidden layer to the embedding layer, at which point the model training is considered complete. This constitutes the backward propagation process; the parameters of the embedding layer are also updated during it, thereby completing the training of the embedding layer.
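The key point above is that backward propagation updates the embedding parameters along with the model weights. The following is a deliberately tiny sketch of one forward + backward pass: a single embedding row feeding one sigmoid output unit with a binary cross-entropy loss and hand-computed gradients. The architecture, learning rate, and initial values are illustrative assumptions, far smaller than the multi-layer model described in this disclosure.

```python
import math

embedding = {"play": [0.1, -0.2], "like": [0.3, 0.2]}  # trainable embedding table
weights = [0.5, -0.4]                                  # output-unit weights
bias = 0.0
lr = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_step(token, label):
    """One forward pass, loss computation, and backward update."""
    global bias
    x = embedding[token]                                   # forward: embedding lookup
    y_hat = sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)
    loss = -(label * math.log(y_hat) + (1 - label) * math.log(1 - y_hat))
    err = y_hat - label                                    # dL/dz for sigmoid + BCE
    for i in range(len(weights)):
        grad_w = err * x[i]                                # gradient w.r.t. weight
        grad_x = err * weights[i]                          # gradient into embedding row
        weights[i] -= lr * grad_w
        embedding[token][i] -= lr * grad_x                 # embedding is updated too
    bias -= lr * err
    return loss
```

Repeating `train_step` on the same labeled example decreases the loss, and the embedding row for the token changes alongside the weights, which is the property the joint-training scheme relies on.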
S302: input the output of the trained embedding layer and the historical operation log of the first service scenario into a second prediction model so as to train the second prediction model.
The second prediction model may likewise refer to a neural network model for determining the probability that a certain interactive operation will be performed, and may be an LR model, an FM model, a DNN model, a DCNN model, a DN model, a GAN model, or the like. Further, the first prediction model and the second prediction model may be the same model or different models (for example, they may differ in the number of hidden layers).
The training process of the second prediction model may also include a forward propagation process and a backward propagation process; it is similar to that of the first prediction model and is not repeated here.
In this step, the second prediction model is jointly trained on the output of the trained embedding layer and the historical operation log of the first service scenario, which improves the prediction accuracy of the operation prediction model while saving machine resources for training.
In some exemplary embodiments, the output of the embedding layer alone may be used to train the second prediction model, which reduces the number of training samples required and improves the efficiency of model training.
S303, determining the trained second prediction model as the operation prediction model.
The trained second prediction model fuses the input data of the first prediction model and of the second prediction model, and can therefore accurately predict the interactive operation information of the first service scenario, from which the third interactive operation information is obtained.
The above process of training the operation prediction model can be realized by off-line training (offline learning) or online training (online learning).
According to the embodiments of the present disclosure, the second prediction model obtained by joint training on the historical operation logs of the first and second service scenarios is determined as the operation prediction model and is used to analyze the interactive operation log and determine the third interactive operation information. Since the log data of the two service scenarios are considered comprehensively, the resulting third interactive operation information has higher accuracy.
Further, a more specific framework of the operation prediction model may be as shown in Fig. 4. Taking both the first prediction model and the second prediction model to be DNN models as an example, the training process of the operation prediction model is as follows:
Forward propagation process of the first prediction model: a user data log stream from the multi-scenario setting (shared common embedding input 1) is input into the first prediction model as training samples. The first prediction model includes n hidden layers (layer 1 / layer 2 ... layer n, where n may be determined according to actual conditions, for example according to the complexity of the application's functions: the more complex the application, the larger n may be). The hidden layers process the input log stream, and the output layer produces a result (output), that is, an estimated value.
Backward propagation process of the first prediction model: the estimated value is compared with the actual value, a loss function is constructed from the comparison result, and the model parameters are updated according to a gradient descent algorithm to minimize the loss function; the shared common embedding input 1 is also updated in this process.
Forward propagation process of the second prediction model: the log stream (input) of the service scenario currently under study is used as training samples and is input into the second prediction model together with the updated shared common embedding input 1. The second prediction model includes m hidden layers (shared layer 1 / shared layer 2 ... shared layer m, where m may be determined according to actual conditions, for example according to the complexity of the application's functions; m may but need not equal n). The second prediction model performs joint training on the input log streams, and corresponding results are obtained for the different training tasks: as shown in Fig. 4, the second prediction model produces the outputs of k corresponding training targets through k training tasks (the task layer may consist of more than one layer). The training tasks may be divided according to the type of interactive operation, for example, log data corresponding to interactive operations the user performs actively (such as clicking and following) form one training task, while passively formed log data form another; further, one training task may correspond to more than one output result.
Backward propagation process of the second prediction model: the weights of the hidden layers and the input layer, as well as the input log stream, are adjusted backward according to the output of the output layer, and the adjusted second prediction model is determined as the trained operation prediction model.
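The Fig. 4 structure, shared hidden layers feeding k separate task heads, can be sketched as a forward pass. This is a structural illustration only: the layer sizes, weight values, ReLU activation, and the two task names are illustrative assumptions, not the configuration described in this disclosure.

```python
def linear(vec, matrix, bias):
    """One fully connected layer: matrix is a list of rows, one per output neuron."""
    return [sum(w * v for w, v in zip(row, vec)) + b
            for row, b in zip(matrix, bias)]

def relu(vec):
    return [max(0.0, v) for v in vec]

def forward(x, shared_layers, task_heads):
    """shared_layers: list of (matrix, bias) pairs; task_heads: dict name -> (matrix, bias)."""
    h = x
    for matrix, bias in shared_layers:   # shared layer 1 .. shared layer m
        h = relu(linear(h, matrix, bias))
    # every task head reads the same shared representation, giving k outputs
    return {name: linear(h, matrix, bias)[0]
            for name, (matrix, bias) in task_heads.items()}
```

During training, each head's loss would be back-propagated through the shared layers, so the shared parameters are shaped by all k tasks at once.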
In an exemplary embodiment, the step of inputting the interactive operation log into the trained operation prediction model includes: inputting the interactive operation log into the trained operation prediction model, extracting operation feature information from the interactive operation log through the operation prediction model, and outputting, according to the operation feature information, the probability of the platform account performing each reference interactive operation, thereby obtaining the third interactive operation information for the target interactive operation.
The operation feature information may refer to feature values in the interactive operation log; feature extraction may be performed on the vector corresponding to the interactive operation log by a feature extraction method to obtain a feature vector. Further, for example, one interactive operation may be represented by the vector [0, 0, 1], and the follow operation by the vector [0, 1, 1].
A hidden layer may include multiple neurons, and each neuron in the current layer may be connected to the neurons in the next layer, which is referred to as full connection. The hidden layers of the operation prediction model extract the operation feature information of the interactive operation log; each layer's extracted feature vector is input into the next layer for further extraction, and through this layer-by-layer analysis the final probability is output to the output layer.
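Before the layer-by-layer analysis, the interactive operation log has to be turned into a fixed-length vector. A simple sketch of one such encoding is one count per reference interactive operation; the operation vocabulary used here is an illustrative assumption.

```python
# Hypothetical vocabulary of reference interactive operations.
REFERENCE_OPS = ["play", "like", "follow", "forward", "comment"]

def log_to_vector(log_entries):
    """Encode a list of operation names as a fixed-length count vector."""
    counts = {op: 0 for op in REFERENCE_OPS}
    for entry in log_entries:
        if entry in counts:
            counts[entry] += 1
    return [counts[op] for op in REFERENCE_OPS]
```

For example, a log containing two plays and one like becomes the vector `[2, 1, 0, 0, 0]`, which can then be fed into the model's input layer.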
In this embodiment, feature information is extracted from the interactive operation log by the operation prediction model, which can fully fuse the various possible feature information in the log to obtain accurate probability prediction information and thereby the third interactive operation information.
Further, in an exemplary embodiment, the step of inputting the interactive operation log into the trained operation prediction model includes: inputting the interactive operation log into the operation prediction model to trigger the operation prediction model to determine the probability of the platform account performing the target interactive operation on candidate media information, where the candidate media information includes media information of the client in the first service scenario, and the probability is used to determine the third interactive operation information of the platform account.
The candidate media information is media information uploaded through the client by platform accounts in the first service scenario (such as short videos recorded with the client), or media information obtained directly by the server corresponding to the first service scenario (media information the server searches for on the network through an information search tool, or media information generated by a certain algorithm).
The target interactive operation includes at least one of playing, liking, following, forwarding, and commenting. Further, taking playing as an example of the target interactive operation, the execution process of the operation prediction model is as follows: after obtaining the input interactive operation log, the operation prediction model performs feature analysis on it and classifies each feature to determine whether it is related to playing, then integrates the classification results to obtain a probability value, which is the probability of the platform account performing the play operation on the candidate media information. The same principle applies to liking, following, forwarding, and commenting, and is not repeated here. Further, when the target interactive operation is two or more of playing, liking, following, forwarding, and commenting, the operation prediction model may be controlled to analyze the interactive operation log synchronously or asynchronously to obtain the probability values. For example, assuming the target interactive operations are playing and liking, after obtaining the input interactive operation log, the operation prediction model performs feature analysis on it, classifies each feature to determine whether it is related to playing or to liking, and then integrates the classification results to obtain two probability values: the probabilities of the platform account performing the play and like operations on the candidate media information.
Specifically, the step of determining the third interactive operation information of the platform account according to the output of the operation prediction model includes: determining, according to the probability output by the operation prediction model, third interactive operation information indicating whether the platform account will perform the target interactive operation on the candidate media information in the first service scenario.
Further, when the probability value is higher than a set threshold (which may be determined according to actual conditions, for example, 90%), it may be considered that the platform account will perform the target interactive operation on the candidate media information in the first service scenario. For example, if the operation prediction model determines from the interactive operation log that the probability of the platform account playing a certain short video is 95%, the platform account can be considered likely to play that short video in the first service scenario.
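The threshold decision above can be sketched as follows; `will_perform` is a hypothetical helper, and 90% is the illustrative threshold from the text:

```python
def will_perform(probability, threshold=0.90):
    # "Higher than a set threshold" per the text; the threshold is
    # tuned to actual conditions in practice.
    return probability > threshold

# The model predicts a 95% play probability for a short video:
print(will_perform(0.95))  # the account is expected to play it
print(will_perform(0.40))
```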
According to the method and the device, the interactive operation information is analyzed by the operation prediction model to determine the probability prediction information for interactive operations such as playing and liking that the platform account may perform on candidate media information uploaded by the client, so the third interactive operation information, which indicates whether the platform account will perform the specific target interactive operation on the media information, can be determined accurately.
Further, in an exemplary embodiment, the candidate media information includes candidate videos. After the step of determining the third interactive operation information of the platform account according to the output of the operation prediction model, the method for determining the interactive operation information further includes: sorting the candidate videos according to the third interactive operation information; determining a preset number of top-ranked candidate videos as recommended videos; and pushing the recommended videos to a video display page of the platform account.
The preset number can be determined according to actual conditions, for example 9 or 10. Further, if the platform account performs a corresponding interactive operation on a recommended video, new video information can be recommended to the platform account, so as to meet the user's viewing demand and improve the experience of using the application.
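A minimal sketch of the sorting and top-N selection described above; the function name, video identifiers, and probabilities are illustrative:

```python
def recommend_top_n(candidates, play_probability, n=10):
    """Sort candidate videos by the probability carried in the third
    interactive operation information and keep the top n."""
    ranked = sorted(candidates, key=play_probability.get, reverse=True)
    return ranked[:n]

# Predicted play probabilities for four candidate videos:
probs = {"v1": 0.95, "v2": 0.40, "v3": 0.88, "v4": 0.10}
print(recommend_top_n(["v1", "v2", "v3", "v4"], probs, n=2))  # ['v1', 'v3']
```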
The video display page is a page in an application for displaying videos. It can display corresponding videos (a schematic diagram of recommended video information displayed on the video display page may be as shown in fig. 5) and can also respond to operations of the platform account; for example, when the platform account clicks a certain recommended video, the corresponding recommended video starts to play in the interface.
The method and the device achieve the purpose of accurately recommending the video to the platform account through the trained operation prediction model.
The foregoing embodiments illustrate how media information is recommended to the platform account in the first business scenario. In fact, since the training process of the operation prediction model uses log data from the second business scenario, the operation prediction model can also be used to recommend media information to network accounts in the second business scenario.
In an exemplary embodiment, after the step of determining first probability prediction information for the platform account to perform the target interactive operation in the first business scenario according to the output of the operation prediction model, the method for determining the interactive operation information further includes: acquiring an interactive operation log of the platform account in a second service scenario, wherein the interactive operation log records first interactive operation information executed by the platform account in the second service scenario; inputting the interactive operation log of the second service scenario into the operation prediction model; determining fourth interactive operation information of the platform account according to the output of the operation prediction model, wherein the fourth interactive operation information indicates whether the platform account will perform the target interactive operation in the second service scenario; sorting the candidate videos according to the fourth interactive operation information, wherein the candidate videos include video information uploaded to the second service scenario by the client; determining a preset number of top-ranked candidate videos as recommended videos; and pushing the recommended videos to a video display page of the platform account.
The target interactive operation in this embodiment may be the same as the target interactive operation used when determining the third interactive operation information, or a different interactive operation belonging to the same set of interactive operations; for example, the target interactive operation in the first service scenario is liking, while the target interactive operation in the second service scenario is forwarding.
As shown in the foregoing embodiments, there may be more than one second service scenario. The following takes a plurality of second service scenarios as an example to describe an implementation of video recommendation across multiple applications. As shown in fig. 6, a multi-task learning model (i.e., the operation prediction model) is trained on N mixed scenarios (the value of N may be determined according to actual conditions; one scenario corresponds to one application). The trained operation prediction model runs on a prediction server (i.e., the electronic device in the foregoing embodiments), and the prediction server provides corresponding interactive operation information, according to the output of the operation prediction model, to the online services of the N scenarios (which may be implemented by the servers corresponding to those scenarios), so that the online services complete video recommendation and other processes.
Furthermore, the online service can adjust the prediction server according to the actual operation condition of the network account, and even can train the operation prediction model again to obtain a more accurate operation prediction model.
According to the embodiment of the disclosure, the operation prediction model can be jointly trained according to the log data of a plurality of application programs, and further, corresponding information can be provided for a plurality of online services through the trained operation prediction model, so that the normal operation of the online services is ensured.
In an exemplary embodiment, an application example of the determination method of the interoperation information according to the present disclosure is provided, as shown in fig. 7, including the following steps:
s701, determining a platform account set and first media information of a first service scene;
s702, determining a candidate account set and second media information of a candidate service scene;
s703, determining a first correlation between the platform account set and the candidate account set; determining a second correlation of the first media information and the second media information;
s704, determining a second service scene from the candidate service scenes according to at least one of the first correlation and the second correlation;
s705, inputting the historical operation log of the second service scene into a first prediction model so as to train an embedding layer of the first prediction model;
s706, inputting the trained output of the embedding layer and the historical operation log of the first service scene into a second prediction model to train the second prediction model;
s707, determining the trained second prediction model as an operation prediction model;
s708, acquiring an interactive operation log of the platform account;
s709, inputting the interactive operation log of the platform account into a trained operation prediction model;
s710, determining third interactive operation information of the platform account according to the output of the operation prediction model;
s711, sorting the candidate videos according to the third interactive operation information;
s712, determining a preset number of top-ranked candidate videos as recommended videos;
s713, pushing the recommended video to a video display page of the platform account.
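Steps S701 to S704 select the second service scenario by correlating account sets and media information. A minimal sketch, assuming Jaccard similarity as one possible correlation measure (the disclosure does not prescribe a specific one; all names and data are illustrative):

```python
def jaccard(a, b):
    """One simple way to measure the correlation between two sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def pick_second_scenario(platform_accounts, first_media_tags, candidates,
                         min_correlation=0.3):
    """Select the candidate scenario whose account set or media
    information correlates best with the first scenario (S703-S704)."""
    best, best_score = None, min_correlation
    for name, (accounts, media_tags) in candidates.items():
        score = max(jaccard(platform_accounts, accounts),   # first correlation
                    jaccard(first_media_tags, media_tags))  # second correlation
        if score > best_score:
            best, best_score = name, score
    return best

accounts_1 = {"u1", "u2", "u3", "u4"}
tags_1 = {"funny", "music", "dance"}
candidates = {
    "scene_a": ({"u3", "u4", "u5"}, {"music", "dance", "news"}),
    "scene_b": ({"u9"}, {"finance"}),
}
print(pick_second_scenario(accounts_1, tags_1, candidates))  # scene_a
```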
According to the above method for determining interactive operation information, the operation prediction model is trained on log data from the associated first and second service scenarios; the interactive operation log executed by the platform account in the first service scenario is input into the trained operation prediction model, and the interactive operation information indicating whether the platform account will perform the target interactive operation in the first service scenario is then determined according to the output of the model. Even when the amount of data in the first service scenario is insufficient, accurate training of the operation prediction model can be completed, so the probability prediction information for the platform account performing a specific interactive operation in the application can be predicted accurately, yielding accurate recommended video information.
Common operation prediction models include LR (logistic regression), FM (factorization machines), DNN (deep neural networks), and the like. In most scenarios, when the data volume is sufficient, a DNN model can often achieve better prediction accuracy than conventional machine learning methods such as LR and FM. To better understand the above method, an application example of the method for determining interactive operation information is described in detail below, taking a DNN model as an example, and includes the following steps:
A) Acquire user data log streams in multiple service scenarios: client exposure logs, user behavior logs, and server logs (including current context information and the like) under the multi-scenario services.
B) Train a multi-task model on the mixed service scenarios. The user data log streams of the multiple service scenarios are input simultaneously as training samples, a multi-task, multi-target DNN learning network is designed, and a pxtr estimation model (the operation prediction model) is jointly trained across the scenarios.
As shown in fig. 4, a specific network design may be as follows, taking two service scenarios of discovery-page video recommendation on the Kuaishou short video platform, the Kuaishou main site and the Kuaishou large-screen version, as an example (the method is not limited to two service scenarios and may cover more). The network is divided into two parts: the left side is the xtr network of the Kuaishou main-site discovery page (i.e., the first prediction model), and the right side comprises the various xtr prediction networks of the large-screen version (i.e., the second prediction model), which may be a multi-task learning network; both models share the same underlying common embedding input. In the training process, the massive data of the Kuaishou main site is used to train the left-side network, so that the underlying embedding (the common embedding input) is fully trained; the large-screen data, together with the shared underlying common embedding input, is then used to train the right-side multi-target network. Compared with training the underlying embedding using only the large-screen data, this converges faster and better and yields a higher estimation effect. The trained right-side network can then be used to determine pctr (predicted click-through rate) and other data.
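The two-stage training above, training the shared bottom embedding on the data-rich scenario first and then training the second model on the data-poor scenario on top of it, can be sketched in miniature as follows. Scalar embeddings and logistic heads stand in for the DNN towers; all names, data, and hyperparameters are illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(features, embedding, head):
    return sigmoid(head[0] * sum(embedding[f] for f in features))

def train(logs, embedding, head, lr=0.5, epochs=200):
    """Plain SGD on a logistic loss. The embedding table is shared;
    each network contributes only its own head weight."""
    for _ in range(epochs):
        for features, label in logs:
            x = sum(embedding[f] for f in features)
            p = sigmoid(head[0] * x)
            grad = p - label  # d(log-loss)/d(logit)
            for f in features:
                embedding[f] -= lr * grad * head[0]
            head[0] -= lr * grad * x

# One shared bottom embedding table (scalar embeddings for brevity).
embedding = {f: 0.1 for f in ("u1", "u2", "v1", "v2")}

# Stage 1: abundant main-site logs train the shared embedding
# through the left-side network.
main_head = [0.1]
main_site_logs = [(("u1", "v1"), 1), (("u2", "v2"), 0)] * 10
train(main_site_logs, embedding, main_head)

# Stage 2: the scarce large-screen logs train the right-side network's
# own head on top of the already-trained shared embedding.
ls_head = [0.1]
large_screen_logs = [(("u1", "v2"), 1), (("u2", "v1"), 0)]
train(large_screen_logs, embedding, ls_head)
```

The design choice mirrored here is that the data-poor scenario starts from an embedding already shaped by the data-rich one, rather than learning it from scratch.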
C) Deploy the online estimation service. In existing schemes, each business generally has an independent estimation service, and even different tasks or targets of the same business may have independent estimation services. According to the embodiments of the disclosure, with a model generated by the mixed-scenario multi-task training method, a single pxtr estimation service can be deployed that serves a plurality of service scenarios simultaneously. For example, the logs of multiple Kuaishou service scenarios, such as the top version, the setup version, the large-screen version, and the main site, can be used to jointly train a multi-task, multi-target large-scale DNN model, and then one set of online pxtr estimation services is deployed to provide xtr estimation for all of these scenarios. One model and one estimation service serving multiple business scenarios simultaneously improves both the model and the estimation effect, and also makes deployment and maintenance convenient.
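The idea of one estimation service answering requests from several scenarios can be sketched as a simple dispatcher; the scenario names and scoring functions below are illustrative placeholders for the trained task heads:

```python
class PxtrService:
    """A single estimation service shared by several business scenarios.
    In the disclosure, each scenario would be served by the task head
    trained for it inside one multi-task model."""

    def __init__(self):
        self.heads = {}  # scenario name -> scoring callable

    def register(self, scenario, head_fn):
        self.heads[scenario] = head_fn

    def predict(self, scenario, features):
        # Dispatch the request to the head of the requesting scenario.
        return self.heads[scenario](features)

service = PxtrService()
service.register("main_site", lambda f: 0.9 if "liked_before" in f else 0.2)
service.register("large_screen", lambda f: 0.7)
print(service.predict("main_site", {"liked_before"}))  # one deployment,
print(service.predict("large_screen", set()))          # many scenarios
```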
The disclosed embodiment can achieve the following effects:
1. The mixed-service-scenario multi-task, multi-target model training method solves the problem that models such as DNNs are difficult to train when the volume of service logs is small, and in particular the problem that model training lacks sufficient samples when a new service is launched; the massive data of other similar service scenarios can assist the model training of the new service scenario to obtain a better model effect.
2. Multi-target online learning that mixes a plurality of service scenarios saves training machine resources. Meanwhile, joint training across scenarios increases the sample scale and captures more sample characteristics, so that, compared with training a separate model for each service scenario, a better model effect can be obtained at lower cost, and the accuracy of the pxtr estimation model in each service scenario can be improved. Taking short video recommendation in the Kuaishou speed version as an example: when the speed version had just been released online and had only a few users, the mixed-service-scenario model training method allowed the pxtr estimation task of the speed version to be trained jointly with the pxtr estimation task of the Kuaishou main site; with multi-task online learning, the speed version could train a large-scale multi-task DNN model even with few user logs, greatly improving the model effect and the online effect.
It should be understood that although the steps in the flowcharts of figs. 2 and 7 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise, the steps are not strictly limited in order and may be performed in other orders. Moreover, at least some of the steps in figs. 2 and 7 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily sequentially, but possibly in turn or alternately with other steps or with at least some sub-steps or stages of other steps.
Fig. 8 is a block diagram illustrating an apparatus 800 for determining interoperation information according to an example embodiment. Referring to fig. 8, the apparatus includes an operation log acquisition unit 801, an operation log input unit 802, and an operation information determination unit 803.
An operation log obtaining unit 801 configured to perform obtaining of an interaction operation log of a platform account, where the interaction operation log is used to record first interaction operation information performed by the platform account in a first service scenario.
An operation log input unit 802 configured to input the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained on a historical operation log; the historical operation log is second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios; the at least two service scenarios are associated with each other and comprise the first service scenario; and the reference interactive operation comprises the target interactive operation.
An operation information determining unit 803 configured to determine third interactive operation information of the platform account according to the output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first business scenario.
The apparatus for determining interactive operation information provided by the present disclosure trains an operation prediction model on the historical operation logs of at least two associated service scenarios; when the operation of a platform account in the first service scenario needs to be predicted, it acquires the interactive operation log of the platform account, inputs the log into the trained operation prediction model, and determines, according to the output of the model, the third interactive operation information for the platform account performing a specific interactive operation in the first service scenario. With the embodiments provided by the disclosure, accurate training of the operation prediction model can be completed even when the amount of data in the first service scenario is insufficient, and the interactive operation information for the platform account performing a specific interactive operation can then be predicted accurately.
In an exemplary embodiment, the at least two service scenarios further include a second service scenario; the device for determining the interactive operation information further comprises: a first information determination unit configured to perform determining a set of platform accounts and first media information for the first service scenario; the platform account set comprises account information of a first candidate account in the first service scene; the first media information is media information pushed to the first candidate account; a second information determination unit configured to perform determining a candidate account set and second media information of a candidate service scenario; the candidate account set comprises account information of a second candidate account in the candidate service scene; the second media information is the media information pushed to the second candidate account; a first relevance determination unit configured to perform determining a first relevance of the set of platform accounts and the set of candidate accounts; a second correlation determination unit configured to perform determining a second correlation of the first media information and the second media information; a traffic scenario determination unit configured to perform determining the second traffic scenario from the candidate traffic scenarios according to at least one of the first correlation and the second correlation.
In an exemplary embodiment, the apparatus for determining interoperation information further includes: a first model training unit configured to perform input of a historical operation log of the second business scenario into a first prediction model to train an embedding layer of the first prediction model; a second model training unit configured to perform input of the trained output of the embedding layer and the historical operation log of the first business scenario into a second prediction model to train the second prediction model; a model determination unit configured to perform determination of the trained second prediction model as the operation prediction model.
In an exemplary embodiment, the operation log input unit is further configured to perform inputting the interaction operation log into the operation prediction model to trigger the operation prediction model to determine a probability that the platform account performs the target interaction operation on candidate media information, where the probability is used to determine third interaction operation information of the platform account; the candidate media information comprises media information of the client under the first service scene.
In an exemplary embodiment, the candidate media information includes candidate videos; the device for determining the interactive operation information further comprises: a first video sorting unit configured to perform sorting of the candidate videos according to the third interoperation information; a first video determination unit configured to perform determination of a preset number of top-ranked candidate videos as recommended videos; a first video recommendation unit configured to execute pushing the recommended video to a video presentation page of the platform account.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 9 is a block diagram illustrating a video recommendation system according to an exemplary embodiment. Referring to fig. 9, the system includes a server 901 and a client 902 connected via a network. The client is configured to obtain an interactive operation log of a platform account, where the interactive operation log records first interactive operation information executed by the platform account in a first service scenario. The server is configured to input the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained on a historical operation log; the historical operation log is second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios; the at least two service scenarios are associated with each other and comprise the first service scenario; and the reference interactive operation comprises a target interactive operation. The server determines third interactive operation information of the platform account according to the output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first business scenario; sorts candidate videos according to the third interactive operation information, the candidate videos comprising videos of the client in the first service scenario; determines a preset number of top-ranked candidate videos as recommended videos; and pushes the recommended videos to the client. The client is further configured to output the recommended videos to a video display page of the platform account.
The video recommendation system provided by the present disclosure trains an operation prediction model on the historical operation logs of at least two associated service scenarios; when the operation of a platform account in the first service scenario needs to be predicted, it acquires the interactive operation log of the platform account, inputs the log into the trained operation prediction model, and determines, according to the output of the model, the third interactive operation information for the platform account performing a specific interactive operation in the first service scenario. With the embodiments provided by the disclosure, accurate training of the operation prediction model can be completed even when the amount of data in the first service scenario is insufficient, and the interactive operation information of the platform account performing a specific interactive operation in the first service scenario can then be predicted accurately.
In an exemplary embodiment, there is provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement the method of determining interoperation information as described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is provided that includes instructions, such as the memory 102, that are executable by the processor 109 of the electronic device 100 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for determining interoperation information, comprising:
the method comprises the steps of obtaining an interactive operation log of a platform account, wherein the interactive operation log is used for recording first interactive operation information executed by the platform account in a first service scene;
inputting the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained on a historical operation log; the historical operation log is second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios; the at least two service scenarios are associated with each other and comprise the first service scenario; and the reference interactive operation comprises a target interactive operation;
determining third interactive operation information of the platform account according to the output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario.
2. The method of claim 1, wherein the at least two service scenarios further comprise a second service scenario; before the step of inputting the interactive operation log into the trained operation prediction model, the method for determining the interactive operation information further includes:
determining a set of platform accounts and first media information for the first business scenario; the platform account set comprises account information of a first candidate account in the first service scene; the first media information is media information pushed to the first candidate account;
determining a candidate account set and second media information of a candidate service scene; the candidate account set comprises account information of a second candidate account in the candidate service scene; the second media information is the media information pushed to the second candidate account;
determining a first correlation of the set of platform accounts and the set of candidate accounts;
determining a second correlation of the first media information and the second media information;
determining the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
3. The method for determining interactive operation information according to claim 2, wherein the step of training the operation prediction model includes:
inputting the historical operation log of the second service scene into a first prediction model so as to train an embedding layer of the first prediction model;
inputting the trained output of the embedding layer and the historical operation log of the first business scenario into a second prediction model to train the second prediction model;
determining the trained second predictive model as the operational predictive model.
4. The method for determining interoperation information according to any one of claims 1 to 3, wherein the step of inputting the interoperation log into a trained operation prediction model includes:
inputting the interaction operation log into the operation prediction model to trigger the operation prediction model to determine a probability of the platform account performing the target interaction operation on candidate media information, wherein the probability is used for determining third interaction operation information of the platform account; the candidate media information comprises media information of the client under the first service scene.
5. The method of claim 4, wherein the candidate media information comprises candidate videos;
after the step of determining third interoperation information of the platform account according to the output of the operation prediction model, the method of determining the interoperation information further includes:
sorting the candidate videos according to the third interactive operation information;
determining a preset number of top-ranked candidate videos as recommended videos;
and pushing the recommended video to a video display page of the platform account.
6. An apparatus for determining interoperation information, comprising:
the operation log obtaining unit is configured to execute obtaining of an interactive operation log of a platform account, wherein the interactive operation log is used for recording first interactive operation information executed by the platform account in a first service scene;
an operation log input unit configured to perform input of the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained on a historical operation log; the historical operation log is second interactive operation information of reference accounts performing reference interactive operations in at least two service scenarios; the at least two service scenarios are associated with each other and comprise the first service scenario; and the reference interactive operation comprises a target interactive operation;
an operation information determination unit configured to perform determining third interactive operation information of the platform account according to an output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario.
7. The apparatus for determining interactive operation information according to claim 6, wherein the at least two service scenarios further comprise a second service scenario, and the apparatus further comprises:
a first information determination unit configured to determine a platform account set and first media information for the first service scenario, wherein the platform account set comprises account information of first candidate accounts in the first service scenario, and the first media information is media information pushed to the first candidate accounts;
a second information determination unit configured to determine a candidate account set and second media information for a candidate service scenario, wherein the candidate account set comprises account information of second candidate accounts in the candidate service scenario, and the second media information is media information pushed to the second candidate accounts;
a first correlation determination unit configured to determine a first correlation between the platform account set and the candidate account set;
a second correlation determination unit configured to determine a second correlation between the first media information and the second media information;
a service scenario determination unit configured to determine the second service scenario from the candidate service scenarios according to at least one of the first correlation and the second correlation.
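The scenario-selection logic of claim 7 can be sketched as follows, assuming both correlations are computed as Jaccard overlap: the "first correlation" over account sets and the "second correlation" over pushed-media sets. The patent does not specify the correlation measure; the 0.3 threshold, scenario names, and data are all illustrative assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity of two collections, 0.0 when both are empty."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def pick_second_scenarios(platform_accounts, first_media,
                          candidate_scenarios, threshold=0.3):
    """Select candidate service scenarios whose account overlap (first
    correlation) or media overlap (second correlation) meets the threshold."""
    selected = []
    for name, (accounts, media) in candidate_scenarios.items():
        first_corr = jaccard(platform_accounts, accounts)
        second_corr = jaccard(first_media, media)
        if first_corr >= threshold or second_corr >= threshold:
            selected.append(name)
    return selected

candidates = {
    "live":  ({"u2", "u3", "u5"}, {"m9"}),   # account overlap 2/5 = 0.4
    "news":  ({"u9"},             {"m1"}),   # media overlap 1/3 ≈ 0.33
    "games": ({"u8"},             {"m7"}),   # no overlap
}
print(pick_second_scenarios({"u1", "u2", "u3", "u4"},
                            {"m1", "m2", "m3"}, candidates))
# → ['live', 'news']
```

Either correlation alone suffices here, matching the claim's "at least one of the first correlation and the second correlation".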
8. A system for recommending videos, comprising a server and a client, wherein:
the client is configured to acquire an interactive operation log of a platform account, wherein the interactive operation log records first interactive operation information of operations performed by the platform account in a first service scenario;
the server is configured to: input the interactive operation log into a trained operation prediction model, wherein the operation prediction model is trained on historical operation logs; the historical operation logs comprise second interactive operation information of reference interactive operations performed by reference accounts in at least two service scenarios; the at least two service scenarios are associated with each other and comprise the first service scenario; and the reference interactive operations comprise a target interactive operation; determine third interactive operation information of the platform account according to an output of the operation prediction model, wherein the third interactive operation information indicates whether the platform account will perform the target interactive operation in the first service scenario; sort candidate videos according to the third interactive operation information, wherein the candidate videos comprise videos of the client in the first service scenario; determine a preset number of top-ranked candidate videos as recommended videos; and push the recommended videos to the client;
the client is further configured to output the recommended videos to a video presentation page of the platform account.
9. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the method for determining interactive operation information according to any one of claims 1 to 5.
10. A storage medium storing instructions that, when executed by a processor of an electronic device, enable the electronic device to perform the method for determining interactive operation information according to any one of claims 1 to 5.
CN202010190752.4A 2020-03-18 2020-03-18 Interactive operation information determining method and device and video recommendation system Active CN113495966B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010190752.4A CN113495966B (en) 2020-03-18 2020-03-18 Interactive operation information determining method and device and video recommendation system


Publications (2)

Publication Number Publication Date
CN113495966A true CN113495966A (en) 2021-10-12
CN113495966B CN113495966B (en) 2023-06-23

Family

ID=77993017

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010190752.4A Active CN113495966B (en) 2020-03-18 2020-03-18 Interactive operation information determining method and device and video recommendation system

Country Status (1)

Country Link
CN (1) CN113495966B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629665A (en) * 2018-05-08 2018-10-09 北京邮电大学 A kind of individual commodity recommendation method and system
WO2018223194A1 (en) * 2017-06-09 2018-12-13 Alerte Digital Sport Pty Ltd Systems and methods of prediction of injury risk with a training regime
CN109408724A (en) * 2018-11-06 2019-03-01 北京达佳互联信息技术有限公司 Multimedia resource estimates the determination method, apparatus and server of clicking rate
CN109862432A (en) * 2019-01-31 2019-06-07 厦门美图之家科技有限公司 Clicking rate prediction technique and device
CN110276446A (en) * 2019-06-26 2019-09-24 北京百度网讯科技有限公司 The method and apparatus of model training and selection recommendation information
CN110400169A (en) * 2019-07-02 2019-11-01 阿里巴巴集团控股有限公司 A kind of information-pushing method, device and equipment
CN110442790A (en) * 2019-08-07 2019-11-12 腾讯科技(深圳)有限公司 Recommend method, apparatus, server and the storage medium of multi-medium data
CN110717099A (en) * 2019-09-25 2020-01-21 优地网络有限公司 Method and terminal for recommending film


Also Published As

Publication number Publication date
CN113495966B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN109684510B (en) Video sequencing method and device, electronic equipment and storage medium
CN109800325B (en) Video recommendation method and device and computer-readable storage medium
WO2020087979A1 (en) Method and apparatus for generating model
CN109145828B (en) Method and apparatus for generating video category detection model
RU2640632C2 (en) Method and device for delivery of information
CN111353091A (en) Information processing method and device, electronic equipment and readable storage medium
CN105635824A (en) Personalized channel recommendation method and system
CN110175223A (en) A kind of method and device that problem of implementation generates
CN111708941A (en) Content recommendation method and device, computer equipment and storage medium
CN109783656B (en) Recommendation method and system of audio and video data, server and storage medium
CN110225398B (en) Multimedia object playing method, device and equipment and computer storage medium
CN107992937B (en) Unstructured data judgment method and device based on deep learning
US10592832B2 (en) Effective utilization of idle cycles of users
CN115203543A (en) Content recommendation method, and training method and device of content recommendation model
CN115909127A (en) Training method of abnormal video recognition model, abnormal video recognition method and device
CN111046927A (en) Method and device for processing labeled data, electronic equipment and storage medium
CN110728981A (en) Interactive function execution method and device, electronic equipment and storage medium
EP2752853A1 (en) Worklist with playlist and query for video composition by sequentially selecting segments from servers depending on local content availability
CN110941727A (en) Resource recommendation method and device, electronic equipment and storage medium
CN113495966B (en) Interactive operation information determining method and device and video recommendation system
CN114727119B (en) Live broadcast continuous wheat control method, device and storage medium
US11010935B2 (en) Context aware dynamic image augmentation
CN113609380A (en) Label system updating method, searching method, device and electronic equipment
CN113886674A (en) Resource recommendation method and device, electronic equipment and storage medium
CN110929771A (en) Image sample classification method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant