CN112925972B - Information pushing method, device, electronic equipment and storage medium - Google Patents

Information pushing method, device, electronic equipment and storage medium

Info

Publication number
CN112925972B
Authority
CN
China
Prior art keywords
information
feature vector
sample
pushing
user
Prior art date
Legal status
Active
Application number
CN201911241705.1A
Other languages
Chinese (zh)
Other versions
CN112925972A (en)
Inventor
沈琦
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN201911241705.1A
Publication of CN112925972A
Application granted
Publication of CN112925972B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 - Details of database functions independent of the retrieved data types
    • G06F16/95 - Retrieval from the web
    • G06F16/953 - Querying, e.g. by the use of web search engines
    • G06F16/9535 - Search customisation based on user profiles and personalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present application relates to an information pushing method and apparatus, an electronic device, and a storage medium. The method includes: determining a push object of a target application program logged in on a target terminal, and acquiring push object information and content information to be pushed; performing feature extraction on the content information to be pushed and on the push object information respectively, to obtain a feature vector of the content information to be pushed and a feature vector of the push object information; acquiring a main page information feature vector of the push object on a main page, and splicing the feature vector of the content information to be pushed, the feature vector of the push object information, and the main page information feature vector to obtain a target feature vector; acquiring a click probability prediction value of the content information to be pushed according to the target feature vector; and determining target push content information from the content information to be pushed according to the click probability prediction value, and pushing the target push content information to the target terminal. With this method, the accuracy of click probability prediction can be improved, and the pushed target content information is more accurate.

Description

Information pushing method, device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, and in particular to an information pushing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of computer technology, more and more application programs keep emerging. When an application program is closed, selected content is usually pushed in the form of a push notification, so that the user is guided to click the notification and enter the application program, thereby increasing the user activity of the application program. In a conventional information pushing method, the features of push content information historically clicked by a user are learned by a machine learning method, the probability that each candidate content information in a content information candidate pool will be clicked by the user is then predicted by the machine learning model, and content information with a higher click probability is selected for pushing. However, the amount of push content historically clicked by the user is small, so the training data is insufficient and the prediction accuracy of the click probability of push content information is low.
Disclosure of Invention
The present disclosure provides an information pushing method and apparatus, an electronic device, and a storage medium, so as to at least solve the problem in the related art that the click probability of push content information is predicted with low accuracy. The technical solution of the present disclosure is as follows:
According to a first aspect of the embodiments of the present disclosure, there is provided an information pushing method, including:
determining a push object of a target application program logged in on a target terminal, and acquiring push object information and content information to be pushed for the push object;
performing feature extraction on the content information to be pushed and on the push object information respectively, to obtain a feature vector of the content information to be pushed and a feature vector of the push object information;
acquiring a main page information feature vector of a main page of the push object in the target application program, and splicing the feature vector of the content information to be pushed, the feature vector of the push object information, and the main page information feature vector to obtain a target feature vector;
acquiring a click probability prediction value of the content information to be pushed according to the target feature vector; and
determining target push content information from the content information to be pushed according to the click probability prediction value, and pushing the target push content information to the target terminal.
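To make the flow of these steps concrete, the following minimal Python sketch wires them together. It is an illustrative reading of the method, not the claimed implementation: the helper names (extract_features, predict_click_probability) and the use of NumPy concatenation for the splicing step are assumptions introduced here.

    import numpy as np

    def select_target_push_content(push_object_info, contents_to_push, main_page_vector,
                                   extract_features, predict_click_probability):
        # Hypothetical orchestration of the claimed steps; extract_features and
        # predict_click_probability stand in for the feature extraction network and
        # the click probability prediction network described in the disclosure.
        push_object_vector = extract_features(push_object_info)
        click_scores = []
        for content_info in contents_to_push:
            content_vector = extract_features(content_info)
            # Splicing: content feature vector + push object feature vector + main page feature vector.
            target_vector = np.concatenate([content_vector, push_object_vector, main_page_vector])
            click_scores.append(predict_click_probability(target_vector))
        # The content information with the largest click probability prediction value is pushed.
        return contents_to_push[int(np.argmax(click_scores))]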
In one embodiment, the step of acquiring the main page information feature vector of the main page of the push object in the target application program includes:
acquiring main page information of the push object on the main page;
classifying the main page information to obtain user portrait information, author portrait information, and video information of the push object;
acquiring, through an information embedding network, a user portrait feature vector corresponding to the user portrait information, an author portrait feature vector corresponding to the author portrait information, and a video feature vector corresponding to the video information; and
splicing the user portrait feature vector, the author portrait feature vector, and the video feature vector to obtain the main page information feature vector of the push object.
In one embodiment, the training step of the information embedding network includes:
acquiring a first training data set, where the first training data set includes main page information of a sample object and user operation information of the sample object;
classifying the sample main page information to obtain sample user portrait information, sample author portrait information, and sample video information;
acquiring, through the information embedding network, a sample user portrait feature vector corresponding to the sample user portrait information, a sample author portrait feature vector corresponding to the sample author portrait information, and a sample video feature vector corresponding to the sample video information;
splicing the sample user portrait feature vector, the sample author portrait feature vector, and the sample video feature vector to obtain a sample main page information feature vector of the sample object;
inputting the sample main page information feature vector into a user behavior prediction model to obtain predicted behavior information; and
determining a first sample loss value according to the user operation information and the predicted behavior information, and adjusting the parameters of the information embedding network and of the user behavior prediction model according to the first sample loss value until model training is completed.
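A PyTorch-style sketch of how such joint training could look is given below. It is a hypothetical illustration only: the framework, the Adam optimizer, the epoch loop, and the helper names classify_main_page_info and compute_first_sample_loss are all assumptions, and the loss helper is assumed to return a differentiable scalar.

    import torch

    def train_information_embedding_network(embed_nets, behavior_model, first_training_set,
                                             classify_main_page_info, compute_first_sample_loss,
                                             num_epochs=10, lr=1e-3):
        # embed_nets: dict with the three information embedding networks ("user", "author", "video"),
        # each a torch.nn.Module; behavior_model: the user behavior prediction model (torch.nn.Module).
        # classify_main_page_info and compute_first_sample_loss are assumed helpers; the latter must
        # return a differentiable torch scalar (the first sample loss value).
        params = list(behavior_model.parameters())
        for net in embed_nets.values():
            params += list(net.parameters())
        optimizer = torch.optim.Adam(params, lr=lr)

        for _ in range(num_epochs):
            for sample_main_page_info, user_operation_info in first_training_set:
                user_info, author_info, video_info = classify_main_page_info(sample_main_page_info)
                sample_vector = torch.cat([embed_nets["user"](user_info),
                                           embed_nets["author"](author_info),
                                           embed_nets["video"](video_info)], dim=-1)
                predicted_behavior = behavior_model(sample_vector)
                loss = compute_first_sample_loss(user_operation_info, predicted_behavior)
                optimizer.zero_grad()
                loss.backward()   # adjusts both the embedding networks and the behavior prediction model
                optimizer.step()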
In one embodiment, the user operation information includes a user-liked video sequence, a user-clicked video sequence, and a user-followed video sequence, and the predicted behavior information includes a predicted liked video sequence, a predicted clicked video sequence, and a predicted followed video sequence;
the step of determining the first sample loss value according to the user operation information and the predicted behavior information includes:
calculating a like loss value according to the predicted liked video sequence and the user-liked video sequence;
calculating a click loss value according to the predicted clicked video sequence and the user-clicked video sequence;
calculating a follow loss value according to the predicted followed video sequence and the user-followed video sequence; and
calculating the first sample loss value from the like loss value, the click loss value, and the follow loss value.
In one embodiment, the step of performing feature extraction on the content information to be pushed and on the push object information to obtain the feature vector of the content information to be pushed and the feature vector of the push object information includes:
inputting the content information to be pushed and the push object information respectively into a feature extraction network of a click probability prediction model, and acquiring the feature vector of the content information to be pushed and the feature vector of the push object information through the feature extraction network;
and the step of acquiring the click probability prediction value of the content information to be pushed according to the target feature vector includes:
inputting the target feature vector into a click probability prediction network of the click probability prediction model, and acquiring the click probability prediction value of the content information to be pushed through the click probability prediction network.
In one embodiment, the training step of the click probability prediction model includes:
acquiring second training sample data, where the second training sample data includes historical push content information of a sample push object and a standard click label corresponding to the historical push content information;
inputting the historical push content information and sample push object information of the sample push object respectively into the feature extraction network of the click probability prediction model, and acquiring a historical push content information feature vector and a sample push object information feature vector through the feature extraction network;
acquiring a sample main page information feature vector of a main page of the sample push object in the target application program, and splicing the historical push content information feature vector, the sample push object information feature vector, and the sample main page information feature vector to obtain a sample target feature vector;
inputting the sample target feature vector into the click probability prediction network of the click probability prediction model, and acquiring a click type of the historical push content information through the click probability prediction network;
calculating a second sample loss value according to the click type and the standard click label; and
adjusting the parameters of the feature extraction network and of the click probability prediction network of the click probability prediction model according to the second sample loss value until a training end condition is reached.
According to a second aspect of the embodiments of the present disclosure, there is provided an information pushing apparatus, including:
a push object determining unit configured to determine a push object of a target application program logged in on a target terminal, and acquire push object information and content information to be pushed for the push object;
a feature vector acquiring unit configured to perform feature extraction on the content information to be pushed and on the push object information respectively, to obtain a feature vector of the content information to be pushed and a feature vector of the push object information;
a feature vector splicing unit configured to acquire a main page information feature vector of a main page of the push object in the target application program, and splice the feature vector of the content information to be pushed, the feature vector of the push object information, and the main page information feature vector to obtain a target feature vector;
a click probability prediction unit configured to acquire a click probability prediction value of the content information to be pushed according to the target feature vector; and
an information pushing unit configured to determine target push content information from the content information to be pushed according to the click probability prediction value, and push the target push content information to the target terminal.
In one embodiment, the feature vector splicing unit is configured to: acquire main page information of the push object on the main page;
classify the main page information to obtain user portrait information, author portrait information, and video information of the push object;
acquire, through an information embedding network, a user portrait feature vector corresponding to the user portrait information, an author portrait feature vector corresponding to the author portrait information, and a video feature vector corresponding to the video information; and
splice the user portrait feature vector, the author portrait feature vector, and the video feature vector to obtain the main page information feature vector of the push object.
In one embodiment, the apparatus further includes a first model training unit configured to: acquire a first training data set, where the first training data set includes main page information of a sample object and user operation information of the sample object; classify the sample main page information to obtain sample user portrait information, sample author portrait information, and sample video information; acquire, through the information embedding network, a sample user portrait feature vector corresponding to the sample user portrait information, a sample author portrait feature vector corresponding to the sample author portrait information, and a sample video feature vector corresponding to the sample video information; splice the sample user portrait feature vector, the sample author portrait feature vector, and the sample video feature vector to obtain a sample main page information feature vector of the sample object; input the sample main page information feature vector into a user behavior prediction model to obtain predicted behavior information; and determine a first sample loss value according to the user operation information and the predicted behavior information, and adjust the parameters of the information embedding network and of the user behavior prediction model according to the first sample loss value until model training is completed.
In one embodiment, the feature vector acquiring unit is configured to input the content information to be pushed and the push object information respectively into a feature extraction network of a click probability prediction model, and acquire the feature vector of the content information to be pushed and the feature vector of the push object information through the feature extraction network;
the click probability prediction unit is configured to input the target feature vector into a click probability prediction network of the click probability prediction model, and acquire the click probability prediction value of the content information to be pushed through the click probability prediction network.
In one embodiment, the apparatus further includes a second model training unit configured to: acquire second training sample data, where the second training sample data includes historical push content information of a sample push object and a standard click label corresponding to the historical push content information; input the historical push content information and sample push object information of the sample push object respectively into the feature extraction network of the click probability prediction model, and acquire a historical push content information feature vector and a sample push object information feature vector through the feature extraction network; acquire a sample main page information feature vector of a main page of the sample push object in the target application program, and splice the historical push content information feature vector, the sample push object information feature vector, and the sample main page information feature vector to obtain a sample target feature vector; input the sample target feature vector into the click probability prediction network of the click probability prediction model, and acquire a click type of the historical push content information through the click probability prediction network; calculate a second sample loss value according to the click type and the standard click label; and adjust the parameters of the feature extraction network and of the click probability prediction network of the click probability prediction model according to the second sample loss value until a training end condition is reached.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including: a processor; and a memory for storing instructions executable by the processor; where the processor is configured to execute the instructions to implement the information pushing method described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a storage medium having instructions stored thereon which, when executed by a processor of an electronic device, enable the electronic device to perform the information pushing method described above.
The technical solution provided by the embodiments of the present disclosure brings at least the following beneficial effects: after the push object of the target application program logged in on the target terminal is determined and the content information to be pushed is acquired, the feature vector of the content information to be pushed and the feature vector of the push object information are obtained respectively; the main page information feature vector of the push object in the target application program is then acquired, and the feature vector of the content information to be pushed, the feature vector of the push object information, and the main page information feature vector are spliced to obtain the target feature vector; finally, the click probability prediction value of the content information to be pushed is predicted according to the target feature vector, the target push content information is determined from the content information to be pushed according to the click probability prediction value, and the target push content information is pushed to the target terminal. In this method, feature information from the main pages of the push object in the application program is introduced into the click probability prediction of the content information to be pushed, so that more prior feature information is available for the prediction; the accuracy of the click probability prediction is therefore improved, the pushed target content information is more accurate, and the click-through rate of the pushed content is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a diagram of an application environment of an information pushing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an information pushing method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating the acquisition of a main page information feature vector of a push object according to an exemplary embodiment.
FIG. 4 is a block diagram illustrating an information embedding network, a user behavior prediction model, and a click probability prediction model, according to an example embodiment.
Fig. 5 is a block diagram illustrating an information pushing apparatus according to an exemplary embodiment.
Fig. 6 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Fig. 1 is a diagram of an application environment of the information pushing method in an embodiment, where the information pushing method is applied to an information pushing system. The information pushing system includes a terminal 110 and a server 120, which are connected through a network. The terminal 110 may be a desktop terminal or a mobile terminal, and the mobile terminal may be at least one of a mobile phone, a tablet computer, a notebook computer, and the like. The server 120 may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers. As shown in fig. 1, the server 120 determines a push object of a target application program logged in on the target terminal 110, and acquires push object information and content information to be pushed for the push object; performs feature extraction on the content information to be pushed and on the push object information respectively, to obtain a feature vector of the content information to be pushed and a feature vector of the push object information; acquires a main page information feature vector of a main page of the push object in the target application program, and splices the feature vector of the content information to be pushed, the feature vector of the push object information, and the main page information feature vector to obtain a target feature vector; acquires a click probability prediction value of the content information to be pushed according to the target feature vector; and determines target push content information from the content information to be pushed according to the click probability prediction value, and pushes the target push content information to the target terminal 110.
Fig. 2 is a flowchart of an information pushing method according to an exemplary embodiment. As shown in fig. 2, the information pushing method is used in a terminal and includes the following steps.
Step S210: determining a push object of a target application program logged in on a target terminal, and acquiring push object information and content information to be pushed for the push object.
The push object refers to the user logged in on the target terminal, and the push object information refers to the user information in the push scenario. The content information to be pushed is preselected content to be pushed to the target terminal, and may be pushed to the target terminal in the form of a pop-up message notification.
For example, in a short video application program, the push object refers to a viewing user logged in to the application program, and the content information to be pushed refers to a preselected short video that is pushed to the viewing user in the form of a notification message when the application program is closed. As another example, in shopping application software, the push object refers to a buyer user logged in to the application program, and the content information to be pushed refers to preselected merchandise information recommended to the buyer user in the form of a notification message when the application program is closed. It should be understood that there are multiple pieces of content information to be pushed, and they are not all pushed to the target terminal; after the click probability of each piece of content information to be pushed is obtained, one piece of target push content is determined from the multiple pieces and pushed to the target terminal.
Step S220: performing feature extraction on the content information to be pushed and on the push object information respectively, to obtain a feature vector of the content information to be pushed and a feature vector of the push object information.
After the content information to be pushed and the push object information are obtained, feature extraction is performed on each of them, so as to obtain the feature vector corresponding to the content information to be pushed and the feature vector corresponding to the push object information.
Specifically, feature extraction may be performed on the content information to be pushed and on the push object information through a pre-constructed neural network model for feature extraction. The neural network model compresses and encodes the content information to be pushed and the account information of the push object, so as to extract low-dimensional semantic feature vectors that represent the content information to be pushed and the push object information respectively.
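The compression coding described here can be read as a small encoder that maps a high-dimensional input to a low-dimensional semantic feature vector. The following sketch is one plausible form, with illustrative layer sizes; it is an assumption rather than the network prescribed by the disclosure.

    import torch
    from torch import nn

    class FeatureExtractionEncoder(nn.Module):
        # Compresses a high-dimensional encoding of the content information to be pushed, or of the
        # push object (account) information, into a low-dimensional semantic feature vector.
        # The input and output sizes are illustrative assumptions.
        def __init__(self, input_dim=10000, hidden_dim=256, feature_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(input_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, feature_dim),
            )

        def forward(self, x):
            # Example: vec = FeatureExtractionEncoder()(torch.zeros(1, 10000))
            return self.encoder(x)   # low-dimensional feature vector representing the input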
Step S230: acquiring a main page information feature vector of a main page of the push object in the target application program, and splicing the feature vector of the content information to be pushed, the feature vector of the push object information, and the main page information feature vector to obtain a target feature vector.
The main page refers to a main page in the application program, and the main page information feature vector refers to feature vectors acquired in the scenario corresponding to the main page, including, but not limited to, feature vectors representing the user information of the push object and the content information in the main page scenario.
Specifically, after the feature vector of the content information to be pushed and the feature vector of the push object information are obtained, they are spliced with the main page information feature vector to obtain the target feature vector. The target feature vector contains more feature information about the user information of the push object, about the content information to be pushed, and about the relationship between the user and the content information to be pushed, so that the subsequent prediction of the click probability of the content to be pushed is more accurate.
For example, in a short video application program, the main pages include a popular page, a followed page, a same-city page, and the like, and different main pages contain content information such as short videos in their corresponding scenarios. The user information, the short video data, and the user's operations on the short videos in the scenario corresponding to each main page are obtained, and feature extraction then yields main page information feature vectors such as feature vectors representing the user information, feature vectors representing the video data, and feature vectors representing the association between the account information and the video data, which supplement the description of the push object's user information and of the content information to be pushed.
It should be appreciated that the main page information feature vector is introduced into the click probability prediction as a fixed feature vector: when the click probability prediction values of multiple pieces of content information to be pushed are calculated, the spliced main page information feature vector is kept unchanged.
Step S240: acquiring a click probability prediction value of the content information to be pushed according to the target feature vector.
Specifically, after the target feature vector is obtained, it may be input into a neural network model pre-trained for predicting click probability, and the click probability prediction value of the content information to be pushed is predicted through this model. Further, the neural network model for predicting the click probability may be a binary classification model, which predicts the confidence that the content information to be pushed belongs to the clicked-open class and the confidence that it belongs to the ignored-closed class, so that the probability of being clicked open is determined from the confidence of the clicked-open class.
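Under this binary-classification reading, the prediction network could output confidences for a clicked-open class and an ignored-closed class and return the clicked-open confidence as the click probability prediction value. A minimal sketch with assumed dimensions and a softmax head:

    import torch
    from torch import nn

    class ClickProbabilityPredictionNetwork(nn.Module):
        def __init__(self, target_dim=192, hidden_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(target_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 2),   # two classes: clicked open / ignored closed
            )

        def forward(self, target_feature_vector):
            logits = self.net(target_feature_vector)
            confidences = torch.softmax(logits, dim=-1)
            return confidences[..., 0]      # clicked-open confidence = click probability prediction value

In this sketch a softmax over two classes is used; an equivalent single-logit sigmoid head would also match the description.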
Step S250: determining target push content information from the content information to be pushed according to the click probability prediction value, and pushing the target push content information to the target terminal.
After the click probability of each piece of content information to be pushed is obtained, the content information to be pushed with the largest click probability may be determined as the target push content information, and the target push content information is sent to the target terminal, thereby realizing the content information push.
In the above information pushing method, after the push object of the target application program and the content information to be pushed are obtained, the feature vector of the content information to be pushed and the feature vector of the push object information are obtained respectively; the main page information feature vector of the push object in the target application program is then acquired, and the feature vector of the content information to be pushed, the feature vector of the push object information, and the main page information feature vector are spliced to obtain the target feature vector; finally, the click probability prediction value of the content information to be pushed is predicted according to the target feature vector, the target push content information is determined from the content information to be pushed according to the click probability prediction value, and the target push content information is pushed to the target terminal. In this method, feature information from the main pages of the push object in the application program is introduced into the click probability prediction of the content information to be pushed, so that more prior feature information is available for the prediction; the accuracy of the click probability prediction is therefore improved, the pushed target content information is more accurate, and the click-through rate of the pushed content is improved.
In one embodiment, as shown in fig. 3, the step of acquiring the main page information feature vector of the main page of the push object in the target application program includes:
Step S310: acquiring the main page information of the push object on the main page.
Step S320: classifying the main page information to obtain user portrait information, author portrait information, and video information of the push object.
The main page refers to a main page in the application program, and the main page information refers to the content information and user information under the main page, which may include user portrait information, author portrait information, and video information. The user portrait information refers to portrait information of the push object as a consumer of content information, for example, the region, age, and gender of the push object, the video sequence liked by the push object, the video sequence clicked and watched by the push object, and the like. The author portrait information refers to portrait information of the push object as a producer of content information, for example, the video types of the videos produced by the push object, the image information of video covers, the title information of the videos, the behavior information of people in the videos, and the like. The video information refers to the work information of video works included in the main page, the work information of video works produced by other producers that the push object follows, and the like, for example, the type of a video work, the title information of a video, and the behavior information of people in the video.
Specifically, after the push object is determined, its main page information on the main pages can be acquired, where the main page information includes the user portrait information, author portrait information, video information, and the like of the push object.
Step S330: acquiring, through an information embedding network, the user portrait feature vector corresponding to the user portrait information, the author portrait feature vector corresponding to the author portrait information, and the video feature vector corresponding to the video information.
The information embedding network compresses and encodes the various kinds of information acquired from the main page, so as to obtain feature vectors representing the corresponding information.
Specifically, each type of information has its own information embedding network. As shown in fig. 4, the user portrait information is input into a first information embedding network corresponding to it to obtain the user portrait feature vector; the author portrait information is input into a second information embedding network corresponding to it to obtain the author portrait feature vector; and the video information is input into a third information embedding network corresponding to it to obtain the video feature vector. It should be appreciated that the network parameters of the first, second, and third information embedding networks are different.
Step S340: splicing the user portrait feature vector, the author portrait feature vector, and the video feature vector to obtain the main page information feature vector of the push object.
After the user portrait feature vector, the author portrait feature vector, and the video feature vector are obtained, the three types of feature vectors are spliced to obtain the main page information feature vector.
With reference to fig. 4, this embodiment is further described by taking a short video application program as an example. The main pages include a popular page, a followed page, a same-city page, and the like, and the amount of data from the short videos displayed every day and from the users' interactive operations such as clicking, liking, and forwarding is huge. The short video content information, user-related information, and other data in the scenarios corresponding to the main pages are acquired and classified to obtain the user portrait information, author portrait information, and video information, which are then input into the corresponding information embedding networks to obtain the user portrait feature vector representing the user information, the video feature vector representing the video data, and the author portrait feature vector representing the author information output by the respective networks. The user portrait feature vector, the author portrait feature vector, and the video feature vector are spliced to obtain the main page information feature vector, which is then spliced with the feature vector of the content information to be pushed and the feature vector of the push object information to obtain the target feature vector input into the click probability prediction network.
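One possible realization of the three separately parameterized information embedding networks of fig. 4 is sketched below. The input dimensions, layer widths, and use of simple fully connected branches are assumptions; in practice each branch would use an encoder suited to its input type (text, image, or sequence data).

    import torch
    from torch import nn

    class MainPageInfoEmbedding(nn.Module):
        # Three information embedding networks with independent parameters: one each for
        # user portrait information, author portrait information, and video information.
        def __init__(self, user_dim=300, author_dim=300, video_dim=500, embed_dim=64):
            super().__init__()
            def branch(in_dim):
                return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim))
            self.user_embed = branch(user_dim)      # first information embedding network
            self.author_embed = branch(author_dim)  # second information embedding network
            self.video_embed = branch(video_dim)    # third information embedding network

        def forward(self, user_portrait, author_portrait, video_info):
            # Splice the three feature vectors into the main page information feature vector.
            return torch.cat([self.user_embed(user_portrait),
                              self.author_embed(author_portrait),
                              self.video_embed(video_info)], dim=-1)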
In one embodiment, the training step of the information embedding network includes: acquiring a first training data set, where the first training data set includes main page information of a sample object and user operation information of the sample object; classifying the sample main page information to obtain sample user portrait information, sample author portrait information, and sample video information; acquiring, through the information embedding network, a sample user portrait feature vector corresponding to the sample user portrait information, a sample author portrait feature vector corresponding to the sample author portrait information, and a sample video feature vector corresponding to the sample video information; splicing the sample user portrait feature vector, the sample author portrait feature vector, and the sample video feature vector to obtain a sample main page information feature vector of the sample object; inputting the sample main page information feature vector into a user behavior prediction model to obtain predicted behavior information; and determining a first sample loss value according to the user operation information and the predicted behavior information, and adjusting the parameters of the information embedding network and of the user behavior prediction model according to the first sample loss value until model training is completed.
This embodiment describes the training process of the information embedding network. Specifically, a first training data set for training the information embedding network is acquired, where the first training data set includes the main page information of a sample object and the user operation information of the sample object. The user operation information refers to the operations performed by the user on content information, including the sequence of content information operated on by the user. For example, in a short video application program, the user clicks content information (i.e., a short video) displayed on the interface to play the short video, and the operation information may then include the short video identification numbers of all the short videos the user has clicked to play. As another example, after watching a short video the user may like it, and the operation information may further include the identification numbers of all the short videos liked by the user.
After the main page information of the sample object is obtained, the sample main page information is classified to determine which information belongs to the user portrait information, which belongs to the video data, and which belongs to the author portrait information, and the user portrait information, author portrait information, and video information are input into the pre-constructed information embedding networks respectively, to obtain the sample user portrait feature vector corresponding to the sample user portrait information, the sample author portrait feature vector corresponding to the sample author portrait information, and the sample video feature vector corresponding to the sample video information. Specifically, in one embodiment, each type of information has its own information embedding network.
After the sample user portrait feature vector, the sample author portrait feature vector, and the sample video feature vector are obtained, these feature vectors are spliced to obtain the sample main page information feature vector of the sample object, which is input into the user behavior prediction model to obtain the predicted behavior information; the predicted behavior information corresponds to the user operation information. For example, in a short video application program, the operation information may include the short video identification numbers of the short videos clicked by the user; the user behavior prediction model then predicts whether the user will click to watch a given short video, and the output predicted behavior information is the short video identification numbers predicted to be clicked by the user.
A loss value is calculated by comparing the predicted behavior information and the user operation information, and the information embedding network is then trained by back propagation according to the loss value, so as to continuously optimize the information embedding network until the model training is completed. The condition for ending the model training can be set or adjusted according to actual requirements; for example, the training may be considered finished when the training loss value reaches a minimum, or after a certain number of training iterations.
Further, in one embodiment, the user operation information includes a user-liked video sequence, a user-clicked video sequence, and a user-followed video sequence, and the predicted behavior information includes a predicted liked video sequence, a predicted clicked video sequence, and a predicted followed video sequence;
the step of determining the first sample loss value according to the user operation information and the predicted behavior information includes: calculating a like loss value according to the predicted liked video sequence and the user-liked video sequence; calculating a click loss value according to the predicted clicked video sequence and the user-clicked video sequence; calculating a follow loss value according to the predicted followed video sequence and the user-followed video sequence; and calculating the first sample loss value from the like loss value, the click loss value, and the follow loss value.
The user-liked video sequence may include the video identification numbers of all the videos liked by the sample object on the main page, the user-clicked video sequence may include the video identification numbers of all the videos the sample object clicked to watch on the main page, and the user-followed video sequence may include the identification numbers of all the videos on which the sample object performed a follow operation on the main page.
The predicted liked video sequence may include the video identification numbers of all the videos that the user behavior prediction model predicts the sample object will like on the main page, the predicted clicked video sequence may include the video identification numbers of all the videos that the model predicts the sample object will click to watch on the main page, and the predicted followed video sequence may include the identification numbers of all the videos on which the model predicts the sample object will perform a follow operation on the main page.
In one embodiment, the like loss value is calculated according to the predicted liked video sequence and the user-liked video sequence; specifically, the like loss value is determined by comparing the degree of difference between the video identification numbers of the predicted liked video sequence and of the user-liked video sequence. For example, if the user-liked video sequence is {video 1, video 2, video 3, video 5} and the predicted liked video sequence is {video 2, video 3, video 4, video 6}, the difference between the two is 50%, and the like loss value is determined to be 50%. Similarly, the click loss value may be determined by comparing the difference between the video identification numbers of the predicted clicked video sequence and of the user-clicked video sequence, and the follow loss value by comparing the difference between the video identification numbers of the predicted followed video sequence and of the user-followed video sequence.
After the like loss value, the click loss value, and the follow loss value are obtained, the first sample loss value of the information embedding network and the user behavior prediction model may be obtained by a weighted calculation over the like loss value, the click loss value, and the follow loss value, or by averaging the like loss value, the click loss value, and the follow loss value.
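A small sketch of how the three loss terms could be computed and combined is given below, following the 50% difference example above. The set-based difference measure and the weighting scheme are assumptions (the disclosure allows either a weighted combination or a simple average), and the plain-float result is for illustration; a differentiable surrogate would be needed if this value were used directly for gradient-based training.

    def sequence_difference_degree(predicted_ids, actual_ids):
        # Fraction of actual video identification numbers missed by the prediction,
        # e.g. {1, 2, 3, 5} vs. {2, 3, 4, 6} -> 2 of 4 missed -> 0.5 (50%).
        actual = set(actual_ids)
        missed = actual - set(predicted_ids)
        return len(missed) / len(actual) if actual else 0.0

    def first_sample_loss(pred_like, user_like, pred_click, user_click, pred_follow, user_follow,
                          weights=(1.0, 1.0, 1.0)):
        like_loss = sequence_difference_degree(pred_like, user_like)
        click_loss = sequence_difference_degree(pred_click, user_click)
        follow_loss = sequence_difference_degree(pred_follow, user_follow)
        w_like, w_click, w_follow = weights
        # Weighted combination; with equal weights this reduces to the simple average of the three losses.
        return (w_like * like_loss + w_click * click_loss + w_follow * follow_loss) / sum(weights)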
This embodiment is further described with reference to fig. 4, taking a short video application program as an example. The training process of the information embedding network may specifically be as follows: a first training data set for training the information embedding network is acquired, where the first training data set includes the main page information of a sample object and the user operation information of the sample object. The user operation information refers to the operations performed by the user on content information on the main page, and includes the sequence of video identification numbers the user clicked to play, the sequence of video identification numbers the user liked, and the sequence of video identification numbers the user followed. For example, the user clicks short videos on the main page to play them, and the operation information then includes the short video identification numbers of all the short videos clicked by the user; after watching a short video the user may like it, and the operation information may further include the short video identification numbers of all the short videos liked by the user.
After the sample user portrait feature vector, the sample author portrait feature vector, and the sample video feature vector are obtained, these feature vectors are spliced to obtain the sample main page information feature vector of the sample object, which is input into the user behavior prediction model. The user behavior prediction model predicts whether the user will click, like, and follow the short videos included in the sample video information, and finally outputs the predicted liked video sequence, the predicted clicked video sequence, and the predicted followed video sequence. Finally, the first sample loss value is calculated by comparing the predicted liked video sequence, the predicted clicked video sequence, and the predicted followed video sequence with the user-liked video sequence, the user-clicked video sequence, and the user-followed video sequence, and the information embedding network is then trained by back propagation according to the first sample loss value, so as to continuously optimize the information embedding network until the model training is completed.
In one embodiment, the step of performing feature extraction on the content information to be pushed and on the push object information to obtain the feature vector of the content information to be pushed and the feature vector of the push object information includes: inputting the content information to be pushed and the push object information respectively into a feature extraction network of the click probability prediction model, and acquiring the feature vector of the content information to be pushed and the feature vector of the push object information through the feature extraction network; and the step of acquiring the click probability prediction value of the content information to be pushed according to the target feature vector includes: inputting the target feature vector into a click probability prediction network of the click probability prediction model, and acquiring the click probability prediction value of the content information to be pushed through the click probability prediction network.
That is, the content information to be pushed and the user information are input into a pre-constructed click probability prediction model, and the push object feature vector representing the user information in the push scenario and the feature vector of the content information to be pushed are obtained through the feature extraction network of the click probability prediction model.
The click probability prediction model is a network model that predicts, from the content information to be pushed and the user information of the push object, the probability that the content information to be pushed will be clicked open by the push object. It is a trained click probability prediction model and can be used directly to predict the probability that the content information to be pushed will be clicked open by the push object.
Further, the click probability prediction model may include a feature extraction network and a click probability prediction network. The feature extraction network compresses and encodes the content information to be pushed and the push object information of the push object, so as to extract low-dimensional semantic feature vectors representing the content information to be pushed and the user information.
The click probability prediction network predicts the click probability of the content information to be pushed according to the feature vectors representing the user information and the content information to be pushed. Specifically, the click probability prediction network may be a binary classification model; through the click probability prediction network, the confidence that the content information to be pushed belongs to the clicked-open class and the confidence that it belongs to the ignored-closed class are predicted, so that the probability of being clicked open is determined from the confidence of the clicked-open class.
In one embodiment, the training step of the click probability prediction model includes: acquiring second training sample data, wherein the second training sample data comprises historical pushing content information of a sample pushing object and standard click labels corresponding to the historical pushing information; respectively inputting the history push information and sample push object information of a sample push object into a feature extraction network of a click probability prediction model, and acquiring a history push content information feature vector and a sample push object information feature vector through the feature extraction network; acquiring a sample main page information feature vector of a main page of a sample push object in a target application program, and splicing the historical push content information feature vector, the sample push object information feature vector and the sample main page information feature vector to obtain a sample target feature vector; inputting the sample target feature vector into a click probability prediction network of a click probability prediction model, and acquiring the click type of the historical push content information through the click probability prediction network; calculating a second sample loss value according to the click type and the standard click label; and carrying out parameter adjustment on the characteristic extraction network of the click probability prediction model and the click probability prediction network according to the second sample loss value until reaching the training ending condition.
The implementation is a training process of a click probability prediction model. The training process of the click probability prediction model may specifically be that a second training data set for training the click probability prediction model is obtained, where the second training data set includes historical push information of the sample push object and a corresponding standard click label, where whether the corresponding historical push information is clicked by the user or not is recorded in the standard click label. For example, when a push of a certain video content is clicked on by a user, its standard click tag is identified as "1", and when the push of the video content is not clicked on by the user, its standard click tag is identified as "0".
After the sample push object and the historical push content information are determined, the push object information and the historical push content information of the sample push object are input into the pre-constructed feature extraction network to obtain the feature vectors representing the user information and the historical push content information corresponding to the sample push object.
After the historical push content information feature vector and the sample push object information feature vector are obtained, the sample target feature vector is obtained by splicing the historical push content information feature vector, the sample push object information feature vector and the main page information feature vector of the sample push object. The sample target feature vector is then input into the click probability prediction network to predict the click type of the historical push content information; it can be understood that the click types include a clicked-open type and an ignored-closed type.
A sample loss value is calculated by comparing the predicted click type of the historical push content information with its standard click label, and backward learning (back-propagation) is then performed on the feature extraction network and the click probability prediction network of the click probability prediction model according to the sample loss value, so that the click probability prediction model is continuously optimized until model training ends. The model training end condition can be set or adjusted according to actual requirements; for example, training can be regarded as finished when the training loss value converges to a minimum, or when a preset number of training iterations is reached.
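A minimal training-loop sketch consistent with the above description is given below; it reuses the two network classes sketched earlier, and the synthetic data, binary cross-entropy loss, Adam optimizer and fixed step count are assumptions rather than disclosed details:

```python
import torch
import torch.nn as nn

# Synthetic second training sample data (all dimensions are assumptions).
content_x = torch.randn(8, 32)   # historical push content information
user_x = torch.randn(8, 16)      # sample push object information
page_vec = torch.randn(8, 24)    # sample main page information feature vectors
labels = torch.tensor([1., 0., 1., 1., 0., 0., 1., 0.])  # standard click labels

feature_net = FeatureExtractionNetwork(content_dim=32, user_dim=16)  # sketched earlier
click_net = ClickProbabilityNetwork(target_dim=64 + 64 + 24)
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(
    list(feature_net.parameters()) + list(click_net.parameters()), lr=1e-3)

for step in range(100):  # training-end condition assumed to be a fixed step count
    content_vec, user_vec = feature_net(content_x, user_x)
    # Splice the three feature vectors into the sample target feature vector.
    target_vec = torch.cat([content_vec, user_vec, page_vec], dim=-1)
    click_prob = click_net(target_vec)       # confidence of the clicked-open class
    loss = criterion(click_prob, labels)     # second sample loss value
    optimizer.zero_grad()
    loss.backward()                          # backward learning through both networks
    optimizer.step()
```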
Fig. 5 is a block diagram of an information pushing apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes a pushing object determining unit 510, a feature vector acquiring unit 520, a feature vector stitching unit 530, a click probability predicting unit 540, and an information pushing unit 550.
The push object determining unit 510 is configured to perform determining a push object of a target application program logged in a target terminal, and obtain push object information and content information to be pushed of the push object;
the feature vector obtaining unit 520 is configured to perform feature extraction on the to-be-pushed content information and the push object information respectively, so as to obtain a feature vector of the to-be-pushed content information and a feature vector of the push object information;
a feature vector stitching unit 530 configured to perform acquiring a main page information feature vector of a main page of the push object in the target application, and stitching the content information feature vector to be pushed, the push object information feature vector and the main page information feature vector to obtain a target feature vector;
a click probability prediction unit 540 configured to perform obtaining a click probability prediction value of the content information to be pushed according to the target feature vector;
the information pushing unit 550 is configured to determine target push content information from the content information to be pushed according to the click probability prediction value, and push the target push content information to the target terminal.
In one embodiment, the feature vector stitching unit is configured to perform: acquiring main page information of a push object in a main page; classifying the main page information to obtain user portrait information, author portrait information and video information of a pushing object; respectively acquiring a user portrait feature vector corresponding to the user portrait information, an author portrait feature vector corresponding to the author portrait information and a video feature vector corresponding to the video information through an information embedding network; and splicing the user portrait feature vector, the author portrait feature vector and the video feature vector to obtain the main page information feature vector of the push object.
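As an illustrative sketch only, the information embedding network and the splicing step could look as follows; realizing the embedding as simple linear projections, and all dimensions, are assumptions:

```python
import torch
import torch.nn as nn

class InformationEmbeddingNetwork(nn.Module):
    """Embeds the three classes of main page information separately and splices
    the results into the main page information feature vector; the use of linear
    projections and the dimensions are illustrative assumptions."""
    def __init__(self, user_dim, author_dim, video_dim, emb_dim=32):
        super().__init__()
        self.user_embed = nn.Linear(user_dim, emb_dim)      # user portrait information
        self.author_embed = nn.Linear(author_dim, emb_dim)  # author portrait information
        self.video_embed = nn.Linear(video_dim, emb_dim)    # video (work) information

    def forward(self, user_portrait, author_portrait, video_info):
        user_vec = self.user_embed(user_portrait)
        author_vec = self.author_embed(author_portrait)
        video_vec = self.video_embed(video_info)
        # Splice the three feature vectors to obtain the main page information feature vector.
        return torch.cat([user_vec, author_vec, video_vec], dim=-1)

# Usage with synthetic, pre-classified main page information.
net = InformationEmbeddingNetwork(user_dim=10, author_dim=12, video_dim=20)
main_page_vec = net(torch.randn(1, 10), torch.randn(1, 12), torch.randn(1, 20))
print(main_page_vec.shape)  # torch.Size([1, 96])
```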
In one embodiment, the apparatus further comprises a first model training unit configured to perform acquiring a first training data set, wherein the first training data set includes main page information of a sample object and user operation information of the sample object; classifying the sample main page information to obtain sample user portrait information, sample author portrait information and sample video information; respectively acquiring a sample user portrait feature vector corresponding to the sample user portrait information, a sample author portrait feature vector corresponding to the sample author portrait information and a sample video feature vector corresponding to the sample video information through an information embedding network; splicing the sample user portrait feature vector, the sample author portrait feature vector and the sample video feature vector to obtain a sample main page information feature vector of the sample object; inputting the sample main page information feature vector into a user behavior prediction model to obtain predicted behavior information; and determining a first sample loss value according to the user operation information and the predicted behavior information, and carrying out parameter adjustment on the information embedding network and the user behavior prediction model according to the first sample loss value until model training is finished.
In one embodiment, the user operation information includes a user praise video sequence, a user click video sequence and a user attention video sequence; the predicted behavior information includes a predicted praise video sequence, a predicted click video sequence and a predicted attention video sequence. The first model training unit is specifically configured to perform calculating a praise loss value according to the predicted praise video sequence and the user praise video sequence; calculating a click loss value according to the predicted click video sequence and the user click video sequence; calculating an attention loss value according to the predicted attention video sequence and the user attention video sequence; and calculating the first sample loss value from the praise loss value, the click loss value and the attention loss value.
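A minimal sketch of this three-part loss is given below; treating each video sequence as per-video scores with binary targets and summing the three losses with equal weight are assumptions, since the embodiment only states that the first sample loss value is calculated from the three component losses:

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def first_sample_loss(pred_praise, user_praise,
                      pred_click, user_click,
                      pred_attention, user_attention):
    """Combines the praise, click and attention losses into the first sample loss.
    Equal-weight summation is an assumption, not a disclosed detail."""
    praise_loss = bce(pred_praise, user_praise)            # praise loss value
    click_loss = bce(pred_click, user_click)               # click loss value
    attention_loss = bce(pred_attention, user_attention)   # attention loss value
    return praise_loss + click_loss + attention_loss

# Synthetic predicted and observed behaviour sequences over 5 candidate videos.
predicted = torch.randn(3, 5)
observed = torch.randint(0, 2, (3, 5)).float()
loss = first_sample_loss(predicted[0], observed[0],
                         predicted[1], observed[1],
                         predicted[2], observed[2])
print(float(loss))
```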
In one embodiment, the feature vector obtaining unit is configured to perform inputting the content information to be pushed and the pushing object information into a feature extraction network of the click probability prediction model respectively, and obtain the feature vector of the content information to be pushed and the feature vector of the pushing object information through the feature extraction network; and the click probability prediction unit is configured to input the target feature vector into a click probability prediction network of the click probability prediction model, and obtain a click probability prediction value of the content information to be pushed through the click probability prediction network.
In one embodiment, the apparatus further comprises a second model training unit configured to perform acquiring second training sample data, wherein the second training sample data includes historical push content information of the sample push object and standard click labels corresponding to the historical push content information; respectively inputting the historical push content information and the sample push object information of the sample push object into a feature extraction network of a click probability prediction model, and acquiring a historical push content information feature vector and a sample push object information feature vector through the feature extraction network; acquiring a sample main page information feature vector of a main page of the sample push object in a target application program, and splicing the historical push content information feature vector, the sample push object information feature vector and the sample main page information feature vector to obtain a sample target feature vector; inputting the sample target feature vector into a click probability prediction network of the click probability prediction model, and acquiring the click type of the historical push content information through the click probability prediction network; calculating a second sample loss value according to the click type and the standard click label; and carrying out parameter adjustment on the feature extraction network and the click probability prediction network of the click probability prediction model according to the second sample loss value until the training end condition is reached.
The specific manner in which the units of the apparatus in the above embodiments perform their operations has been described in detail in the embodiments of the method and will not be repeated here.
In one embodiment, an electronic device, which may be a server, is provided; its internal structure may be as shown in fig. 6, which is a block diagram of an electronic device according to an exemplary embodiment. The electronic device includes a processor, a memory and a network interface connected by a system bus. The processor of the electronic device provides computing and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the internal memory provides an environment for the operating system and the computer program in the non-volatile storage medium to run. The network interface of the electronic device is used to communicate with external terminals through a network connection. The computer program, when executed by the processor, implements an information pushing method.
Those skilled in the art will appreciate that the structure shown in fig. 6 is merely a block diagram of a portion of the structure associated with aspects of the present disclosure and is not limiting of the electronic device to which aspects of the present disclosure are applied, and that a particular electronic device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, an electronic device is provided that includes a processor, a memory for storing processor-executable instructions; wherein the processor is configured to execute instructions to implement the information push method of any of the embodiments above.
It should be noted that the user/author information involved in the present application is collected and analyzed for subsequent processing only after authorization by the user/author.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (14)

1. An information pushing method is characterized by comprising the following steps:
determining a push object of a target application program logged in a target terminal, and acquiring push object information and content information to be pushed of the push object;
respectively extracting characteristics of the content information to be pushed and the pushing object information to obtain a content information characteristic vector to be pushed and a pushing object information characteristic vector;
acquiring a main page information feature vector of a main page of the push object in a target application program, and splicing the information feature vector of the content to be pushed, the information feature vector of the push object and the main page information feature vector to obtain a target feature vector; the main page information comprises user portrait information, author portrait information and video information of the pushing object; the video information comprises the work information of the video works in the main page and the work information of the video works produced by other producers followed by the pushing object;
acquiring a click probability predicted value of the content information to be pushed according to the target feature vector;
and determining target push content information from the content information to be pushed according to the click probability predicted value, and pushing the target push content information to the target terminal.
2. The information pushing method according to claim 1, wherein the step of acquiring the main page information feature vector of the main page of the pushing object in the target application program includes:
acquiring main page information of the push object in a main page;
classifying the main page information to obtain user portrait information, author portrait information and video information of the pushing object;
respectively acquiring a user portrait feature vector corresponding to the user portrait information, an author portrait feature vector corresponding to the author portrait information and a video feature vector corresponding to the video information through an information embedding network;
and splicing the user portrait feature vector, the author portrait feature vector and the video feature vector to obtain the main page information feature vector of the pushing object.
3. The information pushing method according to claim 2, wherein the training step of the information embedding network includes:
acquiring a first training data set, wherein the first training data set comprises main page information of a sample object and user operation information of the sample object;
classifying the sample main page information to obtain sample user portrait information, sample author portrait information and sample video information;
Respectively acquiring a sample user portrait feature vector corresponding to the sample user portrait information, a sample author portrait feature vector corresponding to the sample author portrait information and a sample video feature vector corresponding to the sample video information through an information embedding network;
splicing the sample user portrait feature vector, the sample author portrait feature vector and the sample video feature vector to obtain a sample main page information feature vector of the sample object;
inputting the sample main page information feature vector into a user behavior prediction model to obtain predicted behavior information;
and determining a first sample loss value according to the user operation information and the predicted behavior information, and carrying out parameter adjustment on the information embedding network and the user behavior prediction model according to the first sample loss value until model training is finished.
4. The information pushing method according to claim 3, wherein the user operation information comprises a user praise video sequence, a user click video sequence and a user attention video sequence; the predicted behavior information comprises a predicted praise video sequence, a predicted click video sequence and a predicted attention video sequence;
The step of determining a first sample loss value according to the user operation information and the predicted behavior information comprises the following steps:
calculating a praise loss value according to the predicted praise video sequence and the user praise video sequence;
calculating a click loss value according to the predicted click video sequence and the user click video sequence;
calculating an attention loss value according to the predicted attention video sequence and the user attention video sequence;
a first sample loss value is calculated from the praise loss value, the click loss value, and the attention loss value.
5. The information pushing method according to claim 1, wherein the step of extracting features of the to-be-pushed content information and the push object information to obtain feature vectors of the to-be-pushed content information and feature vectors of the push object information includes:
respectively inputting the information of the content to be pushed and the information of the pushing object into a feature extraction network of a click probability prediction model, and acquiring a feature vector of the information of the content to be pushed and a feature vector of the information of the pushing object through the feature extraction network;
the step of obtaining the click probability prediction value of the content information to be pushed according to the target feature vector comprises the following steps:
And inputting the target feature vector into a click probability prediction network of the click probability prediction model, and acquiring a click probability prediction value of the content information to be pushed through the click probability prediction network.
6. The information pushing method according to claim 5, wherein the training step of the click probability prediction model includes:
acquiring second training sample data, wherein the second training sample data comprises historical push content information of a sample push object and standard click labels corresponding to the historical push content information;
respectively inputting the historical push content information and the sample push object information of the sample push object into a feature extraction network of a click probability prediction model, and acquiring a historical push content information feature vector and a sample push object information feature vector through the feature extraction network;
acquiring a sample main page information feature vector of a main page of the sample push object in a target application program, and splicing the history push content information feature vector, the sample push object information feature vector and the sample main page information feature vector to obtain the sample target feature vector;
Inputting the sample target feature vector into a click probability prediction network of the click probability prediction model, and acquiring the click type of the historical push content information through the click probability prediction network;
calculating a second sample loss value according to the click type and the standard click label;
and carrying out parameter adjustment on the feature extraction network and the click probability prediction network of the click probability prediction model according to the second sample loss value until the training end condition is reached.
7. An information pushing apparatus, characterized by comprising:
the pushing object determining unit is configured to execute determining a pushing object of a target application program logged in a target terminal, and acquire pushing object information and content information to be pushed of the pushing object;
the feature vector acquisition unit is configured to perform feature extraction on the to-be-pushed content information and the push object information respectively to obtain a feature vector of the to-be-pushed content information and a feature vector of the push object information;
the feature vector splicing unit is configured to acquire a main page information feature vector of a main page of the push object in a target application program, and splice the content information feature vector to be pushed, the push object information feature vector and the main page information feature vector to obtain a target feature vector; the main page information comprises user portrait information, author portrait information and video information of the pushing object; the video information comprises the work information of the video works in the main page and the work information of the video works produced by other producers followed by the pushing object;
The click probability prediction unit is configured to acquire a click probability prediction value of the content information to be pushed according to the target feature vector;
and the information pushing unit is configured to determine target push content information from the content information to be pushed according to the click probability prediction value, and push the target push content information to the target terminal.
8. The information pushing apparatus according to claim 7, wherein the feature vector stitching unit is configured to perform: acquiring main page information of the push object in a main page; classifying the main page information to obtain user portrait information, author portrait information and video information of the pushing object; respectively acquiring a user portrait feature vector corresponding to the user portrait information, an author portrait feature vector corresponding to the author portrait information and a video feature vector corresponding to the video information through an information embedding network; and splicing the user portrait feature vector, the author portrait feature vector and the video feature vector to obtain the main page information feature vector of the pushing object.
9. The information pushing apparatus according to claim 8, further comprising a first model training unit configured to perform acquisition of a first training data set, wherein the first training data set includes main page information of a sample object and user operation information of the sample object; classifying the sample main page information to obtain sample user portrait information, sample author portrait information and sample video information; respectively acquiring a sample user portrait feature vector corresponding to the sample user portrait information, a sample author portrait feature vector corresponding to the sample author portrait information and a sample video feature vector corresponding to the sample video information through an information embedding network; splicing the sample user portrait feature vector, the sample author portrait feature vector and the sample video feature vector to obtain a sample main page information feature vector of the sample object; inputting the sample main page information feature vector into a user behavior prediction model to obtain predicted behavior information; and determining a first sample loss value according to the user operation information and the predicted behavior information, and carrying out parameter adjustment on the information embedding network and the user behavior prediction model according to the first sample loss value until model training is finished.
10. The information pushing apparatus according to claim 9, wherein the user operation information comprises a user praise video sequence, a user click video sequence and a user attention video sequence; the predicted behavior information comprises a predicted praise video sequence, a predicted click video sequence and a predicted attention video sequence; the first model training unit is further configured to perform calculating a praise loss value according to the predicted praise video sequence and the user praise video sequence; calculating a click loss value according to the predicted click video sequence and the user click video sequence; calculating an attention loss value according to the predicted attention video sequence and the user attention video sequence; and calculating a first sample loss value from the praise loss value, the click loss value and the attention loss value.
11. The information pushing apparatus according to claim 7, wherein the feature vector obtaining unit is configured to perform inputting of the content information to be pushed and the pushing object information into a feature extraction network of a click probability prediction model, respectively, and obtain the content information feature vector to be pushed and the pushing object information feature vector through the feature extraction network;
The click probability prediction unit is configured to input the target feature vector into a click probability prediction network of the click probability prediction model, and obtain a click probability prediction value of the content information to be pushed through the click probability prediction network.
12. The information pushing device according to claim 11, further comprising a second model training unit configured to perform acquiring second training sample data, wherein the second training sample data includes historical push content information of the sample push object and standard click labels corresponding to the historical push content information; respectively inputting the historical push content information and the sample push object information of the sample push object into a feature extraction network of a click probability prediction model, and acquiring a historical push content information feature vector and a sample push object information feature vector through the feature extraction network; acquiring a sample main page information feature vector of a main page of the sample push object in a target application program, and splicing the historical push content information feature vector, the sample push object information feature vector and the sample main page information feature vector to obtain the sample target feature vector; inputting the sample target feature vector into a click probability prediction network of the click probability prediction model, and acquiring the click type of the historical push content information through the click probability prediction network; calculating a second sample loss value according to the click type and the standard click label; and carrying out parameter adjustment on the feature extraction network and the click probability prediction network of the click probability prediction model according to the second sample loss value until the training end condition is reached.
13. An electronic device, comprising:
a processor; a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the information push method of any one of claims 1 to 6.
14. A storage medium storing instructions which, when executed by a processor of an electronic device, enable the electronic device to perform the information pushing method of any one of claims 1 to 6.
CN201911241705.1A 2019-12-06 2019-12-06 Information pushing method, device, electronic equipment and storage medium Active CN112925972B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911241705.1A CN112925972B (en) 2019-12-06 2019-12-06 Information pushing method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911241705.1A CN112925972B (en) 2019-12-06 2019-12-06 Information pushing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112925972A CN112925972A (en) 2021-06-08
CN112925972B true CN112925972B (en) 2024-03-08

Family

ID=76161639

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911241705.1A Active CN112925972B (en) 2019-12-06 2019-12-06 Information pushing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112925972B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108551589A (en) * 2018-03-16 2018-09-18 青岛海信电器股份有限公司 A kind of UI Preferences method and terminal
CN109543069A (en) * 2018-10-31 2019-03-29 北京达佳互联信息技术有限公司 Video recommendation method, device and computer readable storage medium
CN109684554A (en) * 2018-12-26 2019-04-26 腾讯科技(深圳)有限公司 The determination method and news push method of the potential user of news
CN109783724A (en) * 2018-12-14 2019-05-21 深圳壹账通智能科技有限公司 Management method, terminal device and the medium of social network information
CN109800325A (en) * 2018-12-26 2019-05-24 北京达佳互联信息技术有限公司 Video recommendation method, device and computer readable storage medium


Also Published As

Publication number Publication date
CN112925972A (en) 2021-06-08

Similar Documents

Publication Publication Date Title
CN111444428A (en) Information recommendation method and device based on artificial intelligence, electronic equipment and storage medium
CN110364146B (en) Speech recognition method, speech recognition device, speech recognition apparatus, and storage medium
CN109783730A (en) Products Show method, apparatus, computer equipment and storage medium
CN111966914B (en) Content recommendation method and device based on artificial intelligence and computer equipment
CN112364204B (en) Video searching method, device, computer equipment and storage medium
CN109275047B (en) Video information processing method and device, electronic equipment and storage medium
CN111382361A (en) Information pushing method and device, storage medium and computer equipment
CN111858973A (en) Multimedia event information detection method, device, server and storage medium
CN111625715A (en) Information extraction method and device, electronic equipment and storage medium
CN113204699B (en) Information recommendation method and device, electronic equipment and storage medium
CN116821475B (en) Video recommendation method and device based on client data and computer equipment
CN114817692A (en) Method, device and equipment for determining recommended object and computer storage medium
CN112330442A (en) Modeling method and device based on ultra-long behavior sequence, terminal and storage medium
CN112925972B (en) Information pushing method, device, electronic equipment and storage medium
CN114245232B (en) Video abstract generation method and device, storage medium and electronic equipment
CN112883256B (en) Multitasking method, apparatus, electronic device and storage medium
CN115374348A (en) Information recommendation method, information recommendation device and readable storage medium
CN113407772B (en) Video recommendation model generation method, video recommendation method and device
CN113806622A (en) Recommendation method, device and equipment
CN113297417A (en) Video pushing method and device, electronic equipment and storage medium
CN114866818B (en) Video recommendation method, device, computer equipment and storage medium
CN117708340B (en) Label text determining method, model training and adjusting method, device and medium
CN117909542A (en) Video recommendation method, device, equipment and storage medium
CN117221623A (en) Resource determination method, device, electronic equipment and storage medium
CN116129181A (en) Display information pushing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant