CN111782928B - Information pushing method, device and computer readable storage medium - Google Patents

Information pushing method, device and computer readable storage medium

Info

Publication number
CN111782928B
Authority
CN
China
Prior art keywords
information
output information
feature
input
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910419027.7A
Other languages
Chinese (zh)
Other versions
CN111782928A (en)
Inventor
党浩明
王金成
严严
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN201910419027.7A priority Critical patent/CN111782928B/en
Publication of CN111782928A publication Critical patent/CN111782928A/en
Application granted granted Critical
Publication of CN111782928B publication Critical patent/CN111782928B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9535Search customisation based on user profiles and personalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses an information pushing method, an information pushing device and a computer readable storage medium, and relates to the field of data processing. The information pushing method comprises the following steps: acquiring one or more attributes of a user and one or more attributes of a candidate object as original attributes; generating input information according to the original attribute; inputting the input information into a feature calculation model to obtain first output information, second output information and third output information, wherein the first output information is determined according to linear calculation results of a plurality of features, the second output information is calculated according to results obtained by performing first nonlinear combination on the features in the input information, and the third output information is calculated according to results obtained by performing second nonlinear combination on the features in the input information; determining a recommended value of the candidate object for the user according to the first output information, the second output information and the third output information; and pushing the candidate object to the user under the condition that the recommended value is larger than a preset threshold value.

Description

Information pushing method, device and computer readable storage medium
Technical Field
The present invention relates to the field of data processing, and in particular, to an information pushing method, an information pushing device, and a computer readable storage medium.
Background
In the related art, a user's degree of interest in objects such as commodities, articles, and microblogs is obtained by estimating the click-through rate, so that targeted recommendations can be made for the user. As deep learning has gradually moved from academia into industry, researchers have found that deep learning can better solve the click-through-rate estimation problem.
Disclosure of Invention
The inventors have realized that deep learning tends to over-generalize. When the interaction behavior matrix between users and objects is sparse and large in scale, the model easily recommends irrelevant commodities, which wastes computing resources, network resources, and storage resources.
One technical problem to be solved by the embodiments of the invention is: how to improve recommendation accuracy so as to save computing resources, network resources, and storage resources.
According to a first aspect of some embodiments of the present invention, there is provided an information pushing method, including: acquiring one or more attributes of a user and one or more attributes of a candidate object as original attributes; generating input information according to the original attribute, wherein the input information comprises a plurality of features; inputting the input information into a feature calculation model to obtain first output information, second output information and third output information output by the feature calculation model, wherein the first output information is determined by the feature calculation model according to linear calculation results of a plurality of features, the second output information is calculated by the feature calculation model according to results obtained by performing first nonlinear combination on the features in the input information, and the third output information is calculated by the feature calculation model according to results obtained by performing second nonlinear combination on the features in the input information; determining a recommended value of the candidate object for the user according to the first output information, the second output information and the third output information; and pushing the candidate object to the user under the condition that the recommended value is larger than a preset threshold value.
In some embodiments, the feature computation model includes a linear computation interface, a feature combination computation interface, and a deep learning computation interface; and inputting the input information into the feature calculation model, and obtaining first output information determined by the linear calculation interface based on the linear recommendation model, second output information determined by the feature combination calculation interface based on the feature combination model and third output information determined by the deep learning calculation interface based on the deep learning model.
In some embodiments, generating the input information from the original attributes includes: generating a first input vector according to the features corresponding to the original attributes and the features corresponding to the preset combination of the original attributes; generating a second input vector according to the features corresponding to the original attributes; and determining the first input vector and the second input vector as the input information, so that the first input vector is input to the linear computing interface and the second input vector is input to the feature combination computing interface and the deep learning computing interface.
In some embodiments, the first input vector is a sparse feature vector and the second input vector is a dense feature vector.
In some embodiments, generating the first input vector according to the features corresponding to the original attributes and the features corresponding to the preset combination of the original attributes includes: obtaining one-hot encoding results of the features corresponding to the original attributes and of the features corresponding to the preset combination of the original attributes; and generating the first input vector according to the one-hot encoding result corresponding to each feature.
In some embodiments, generating the second input vector from the features corresponding to the original attributes includes: determining the embedded feature corresponding to each original attribute according to a preset correspondence; and generating the second input vector according to the embedded feature corresponding to each original attribute.
In some embodiments, obtaining the second output information determined by the feature combination computing interface based on the feature combination model comprises: determining the output result of the (k+1)-th layer of the feature combination model according to the product of the second input vector and the transpose of the output vector of the k-th layer of the feature combination model, where k is a positive integer.
In some embodiments, obtaining the third output information determined by the deep learning computing interface based on the deep learning model includes: determining a first intermediate value according to the weight parameter and the bias parameter between the l-th layer and the (l+1)-th layer of the deep learning model and the output result of the l-th layer, where l is a positive integer; inputting the first intermediate value into an S-shaped growth curve (sigmoid) function to obtain a second intermediate value; and determining the product of the first intermediate value and the second intermediate value as the output result of the (l+1)-th layer of the deep learning model.
In some embodiments, determining the recommended value of the candidate object for the user based on the first output information, the second output information, and the third output information includes: inputting the product of the combination result of the first output information, the second output information and the third output information and a preset weight vector into an activation function; and determining the output result of the activation function as a recommended value of the candidate object for the user.
In some embodiments, the information pushing method further includes: obtaining training data, wherein the training data comprises one or more attributes of a user and one or more attributes of an object as original attributes; determining a marking value of the training data according to the operation information of the corresponding object of the user corresponding to the training data; generating input information for training according to the original attribute of the training data, wherein the input information for training comprises a plurality of characteristics; inputting the input information for training into a feature calculation model to obtain first output information, second output information and third output information output by the feature calculation model; determining a recommended value corresponding to the training data according to the first output information, the second output information and the third output information; and adjusting parameters of the feature calculation model according to the difference between the recommended value and the marked value corresponding to the training data until the difference between the recommended value and the marked value corresponding to the training data is smaller than a preset value.
According to a second aspect of some embodiments of the present invention, there is provided an information pushing apparatus, including: an attribute acquisition module configured to acquire one or more attributes of a user and one or more attributes of a candidate object as original attributes; an input information generation module configured to generate input information according to the original attribute, wherein the input information includes a plurality of features; the characteristic calculation model is configured to output first output information, second output information and third output information according to the input information, wherein the first output information is determined by the characteristic calculation model according to linear calculation results of a plurality of characteristics, the second output information is calculated by the characteristic calculation model according to results obtained by performing first nonlinear combination on the characteristics in the input information, and the third output information is calculated by the characteristic calculation model according to results obtained by performing second nonlinear combination on the characteristics of the input information; a recommendation value determining module configured to determine a recommendation value of the candidate object for the user according to the first output information, the second output information and the third output information; and the pushing module is configured to push the candidate object to the user when the recommended value is greater than a preset threshold value.
In some embodiments, the feature computation model includes a linear computation interface, a feature combination computation interface, and a deep learning computation interface; the linear computing interface is configured to determine first output information based on the linear recommendation model; the feature combination computing interface is configured to determine second output information based on the feature combination model; the deep learning computing interface is configured to determine third output information based on the deep learning model.
According to a third aspect of some embodiments of the present invention, there is provided an information pushing apparatus, including: a memory; and a processor coupled to the memory, the processor configured to perform any one of the foregoing information pushing methods based on instructions stored in the memory.
According to a fourth aspect of some embodiments of the present invention, there is provided a computer-readable storage medium having stored thereon a computer program, wherein the program, when executed by a processor, implements any one of the foregoing information push methods.
Some embodiments of the above invention have the following advantages or benefits: the embodiments of the invention can perform linear calculation, linear combination, and nonlinear combination on the features, so that the recommended value reflects both the original properties of the features and calculation results generalized from those features. The recommended value can therefore reflect the degree of association between the user and the candidate object. When the types and numbers of users and candidate objects are large but interactions between users and candidate objects are few, the embodiments of the invention can push more accurate information, thereby saving computing resources, network resources, and storage resources.
Other features of the present invention and its advantages will become apparent from the following detailed description of exemplary embodiments of the invention, which proceeds with reference to the accompanying drawings.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is apparent that the drawings described below show only some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
Fig. 1 is a flowchart of an information pushing method according to some embodiments of the present invention.
Fig. 2 is a flow chart of a feature calculation method according to some embodiments of the invention.
Fig. 3 is a flow chart of a first input vector determination method according to some embodiments of the invention.
Fig. 4 is a flow chart of a second input vector determination method according to some embodiments of the invention.
FIG. 5 is a flow chart of a feature computation model training method according to some embodiments of the invention.
Fig. 6 is a schematic structural diagram of an information pushing device according to some embodiments of the present invention.
Fig. 7 is a schematic structural diagram of an information pushing device according to other embodiments of the present invention.
Fig. 8 is a schematic structural diagram of an information pushing device according to still other embodiments of the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the invention. The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the invention, its application, or its uses. All other embodiments obtained by a person skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
In some embodiments, the recommendation value for the user for each candidate object in the candidate set may be calculated one by one. An embodiment of the information push method of the present invention is described below with reference to fig. 1.
Fig. 1 is a flowchart of an information pushing method according to some embodiments of the present invention. As shown in fig. 1, the information pushing method of this embodiment includes steps S102 to S110.
In step S102, one or more attributes of the user and one or more attributes of the candidate object are acquired as original attributes.
The attributes of the user may include, for example, the user's age, gender, education level, residential area, family situation, historical behavior data, and the like. The user's historical behavior data may be information on the user's browsing, clicking, purchasing, favoriting, forwarding, and similar operations on objects within a preset time period.
When the candidate object is a physical item, its attributes may include price, category, and size; when the candidate object is a piece of text, its attributes may include, for example, category, length, and genre. In addition, the attributes of the candidate object may also include historical operation data, that is, information on how the candidate object was browsed, clicked, purchased, favorited, forwarded, and so on within a preset time period.
The original attributes may be represented as text, numerical values, or other types. When calculations are performed using the original attributes, they can be converted into features represented by numerical values or vectors.
In step S104, input information is generated according to the original attribute, wherein the input information includes a plurality of features.
In step S106, the input information is input into the feature calculation model, and the first output information, the second output information and the third output information output by the feature calculation model are obtained, where the first output information is determined by the feature calculation model according to the linear calculation results of the plurality of features, the second output information is determined by the feature calculation model according to the result obtained by performing the first nonlinear combination on the features in the input information, and the third output information is determined by the feature calculation model according to the result obtained by performing the second nonlinear combination on the features in the input information. The first output information, the second output information, and the third output information may be represented in the form of vectors.
The linear calculation does not combine the features in the input information, so the individual characteristics of each feature are preserved and the feature calculation model has better memorization. When there is little interaction information between the user and the candidate object, the linear calculation can still produce accurate recommendation results.
The first nonlinear combination may be an explicit combination of features, and the second nonlinear combination may be an implicit combination of features. An explicit combination is one whose combination process can be traced back explicitly; for example, the combination result can be obtained by multiplying elements of different feature vectors. An implicit combination is one whose combination process cannot be traced back. For example, when a network structure is used for implicit feature combination, the calculation result of each layer is input into an activation function to obtain that layer's output.
By using a model to perform nonlinear combination of features, high-dimensional feature combinations can be generated automatically, so the feature calculation model has stronger generalization capability. When the number and types of users and candidate objects increase, accurate recommendation results can still be obtained through this generalization capability.
In some embodiments, the feature computation model may employ interfaces to invoke other models to obtain various output information. For example, the feature computation model may include a linear computation interface, a feature combination computation interface, and a deep learning computation interface. In this case, the input information may be input into the feature calculation model, and the first output information determined by the linear calculation interface based on the linear recommendation model, the second output information determined by the feature combination calculation interface based on the feature combination model, and the third output information determined by the deep learning calculation interface based on the deep learning model are obtained.
In step S108, a recommended value of the candidate object for the user is determined according to the first output information, the second output information, and the third output information.
In some embodiments, the recommendation value may be determined according to a weighted combination of the first output information, the second output information, and the third output information. For example, the product of the combined result of the first output information, the second output information, and the third output information and the preset weight vector may be input into the activation function, and then the output result of the activation function may be determined as a recommended value of the candidate object for the user. For example, the recommended value P may be determined using formula (1).
P = sigmoid([y1ᵀ, y2ᵀ, y3ᵀ] · w)    (1)
where y1ᵀ, y2ᵀ, and y3ᵀ respectively represent the transpose of the first output information, the transpose of the second output information, and the transpose of the third output information; w represents a preset weight parameter; and sigmoid(·) represents an S-shaped growth curve function. In this way, the recommended value is mapped into a fixed interval, which facilitates comparison between the recommended values of different candidate objects.
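To make formula (1) concrete, the sketch below combines the three output vectors with the preset weight vector and maps the result through the sigmoid function. It is a minimal NumPy illustration; the function and variable names are assumptions, not taken from the original disclosure.

```python
import numpy as np

def sigmoid(z):
    # S-shaped growth curve function
    return 1.0 / (1.0 + np.exp(-z))

def recommendation_value(y1, y2, y3, w):
    """Recommended value P from formula (1).

    y1, y2, y3 -- first/second/third output information as 1-D arrays
    w          -- preset weight vector of length len(y1) + len(y2) + len(y3)
    """
    combined = np.concatenate([y1, y2, y3])   # [y1^T, y2^T, y3^T]
    return float(sigmoid(combined @ w))       # mapped into the interval (0, 1)
```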
In step S110, in the case where the recommended value is greater than the preset threshold, the candidate object is pushed to the user. In some embodiments, the plurality of candidate objects to be pushed to the user may be further sorted according to the recommendation value, and the sorted candidate objects are pushed to the user.
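A possible sketch of this thresholding and sorting step, with hypothetical candidate identifiers and a hypothetical threshold of 0.5:

```python
def candidates_to_push(scores, threshold=0.5):
    """scores: mapping from candidate object id to its recommended value."""
    # keep candidates whose recommended value exceeds the preset threshold
    kept = {cid: p for cid, p in scores.items() if p > threshold}
    # sort the remaining candidates by recommended value, highest first
    return sorted(kept, key=kept.get, reverse=True)

# pushes ["item_7", "item_3"] in that order
print(candidates_to_push({"item_3": 0.81, "item_9": 0.12, "item_7": 0.93}))
```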
By the method of this embodiment, the features can be subjected to linear calculation and nonlinear combination, so that the recommended value reflects both the original properties of the features and calculation results generalized from them. The recommended value can therefore reflect the degree of association between the user and the candidate object. When the types and numbers of users and candidate objects are large but interactions between users and candidate objects are few, the embodiments of the invention can push more accurate information, thereby saving computing resources, network resources, and storage resources.
The basis on which the model combines features nonlinearly is mostly determined from training data. For example, the parameters of the feature calculation model may be determined in advance by training with training data, and the feature calculation model then performs linear and nonlinear combinations of the features based on the determined parameters.
In addition, when certain known combinations of attributes achieve a better effect, the features corresponding to those attribute combinations can be generated in advance, so that the linear recommendation model makes predictions based on the preset attribute combinations. The feature combination model and the deep learning model can automatically combine features and make predictions based on the features corresponding to the original attributes. An embodiment of the feature calculation method of the present invention is described below with reference to fig. 2.
Fig. 2 is a flow chart of a feature calculation method according to some embodiments of the invention. As shown in fig. 2, the feature calculation method of this embodiment includes steps S202 to S214.
In step S202, a first input vector is generated according to the features corresponding to the original attributes and the features corresponding to the preset combination of the original attributes. In some embodiments, the predetermined combination of original attributes in the first input vector is a low-order combination.
Exemplary original attributes of the user and the candidate object and preset combinations of those attributes are shown in Table 1, in which column 5 is a preset combination of original attributes. Accordingly, when a first input vector with four dimensions is generated according to Table 1, the 1st to 4th dimensions of the vector may represent user age, user gender, item type, and user gender + item type, respectively.
TABLE 1
In some embodiments, the preset combination of original attributes may be obtained through feature engineering.
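Purely as an illustration (the attribute names and values below are hypothetical), a preset combination such as user gender + item type can be materialized by joining the two attribute values into a single categorical feature before encoding:

```python
def preset_combination(attributes, combo):
    """Form one combined categorical feature from several original attributes.

    attributes -- original attributes, e.g. {"gender": "female", "item_type": "book"}
    combo      -- names of the attributes to combine, e.g. ("gender", "item_type")
    """
    return "+".join(str(attributes[name]) for name in combo)

raw = {"age": "18-25", "gender": "female", "item_type": "book"}
print(preset_combination(raw, ("gender", "item_type")))  # "female+book"
```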
In step S204, a second input vector is generated according to the features corresponding to the original attributes.
In some embodiments, the second input vector may include only features corresponding to the original attributes, and not features corresponding to the preset combination of the original attributes. Since the feature combination calculation interface and the deep learning calculation interface which process the second input vector can automatically combine features, the step of performing attribute combination in advance can be skipped. Thus, the calculation efficiency of the model is improved.
In step S206, the first input vector and the second input vector are determined as input information.
In step S208, a first input vector in the input information is input to the linear computing interface in the feature computing model, and first output information determined by the linear computing interface based on the linear recommendation model is obtained.
In some embodiments, the linear model is calculated as shown in equation (2).
y1 = aᵀ · x1 + b    (2)
where y1 represents the first output information, x1 represents the first input vector, a and b represent model parameters obtained by pre-training, and aᵀ represents the transpose of the vector a.
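A minimal sketch of formula (2); the symbols follow the formula and everything else (shapes, names) is an assumption:

```python
import numpy as np

def linear_interface(x1, a, b):
    """First output information y1 = a^T x1 + b.

    x1 -- first input vector (high-dimensional and sparse in practice)
    a  -- pre-trained weights with the same leading dimension as x1
    b  -- pre-trained bias
    """
    return a.T @ x1 + b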
In step S210, a second input vector of the input information is input to the feature combination calculation interface in the feature calculation model, and second output information determined by the feature combination calculation interface based on the feature combination model is obtained.
The feature combination model is a neural network model. In some embodiments, the output of each layer of the feature combination model is determined from the cross of the output of the previous layer with the second input vector, e.g., from the product of the second input vector and the transpose of the output vector of the previous layer. Optionally, the output result c_{k+1} of the (k+1)-th layer of the feature combination model may be determined using formula (3), in which k is the layer index of the feature combination model and is a positive integer, x2 represents the second input vector, w_k represents the weight parameters between the k-th layer and the (k+1)-th layer of the feature combination model, and b_{k+1} represents the bias parameters between the k-th layer and the (k+1)-th layer of the feature combination model.
Therefore, after each feature combination result is obtained, the feature combination model combines the features in the original second input vector with the last crossing result, so that a high-dimensional feature combination result can be obtained.
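The body of formula (3) is not reproduced in this text. Assuming it takes the cross-network form c_{k+1} = x2 · c_kᵀ · w_k + b_{k+1} + c_k, which matches the description above, one layer of the feature combination model could be sketched as follows (all names illustrative):

```python
import numpy as np

def cross_layer(x2, c_k, w_k, b_k1):
    """One explicit feature-crossing layer of the feature combination model.

    x2   -- second input vector, shape (d,)
    c_k  -- output of layer k, shape (d,); c_0 may be taken as x2
    w_k  -- weight vector between layer k and layer k+1, shape (d,)
    b_k1 -- bias vector between layer k and layer k+1, shape (d,)
    """
    # x2 * c_k^T crosses every pair of features; w_k projects the result back
    # to shape (d,), and adding c_k carries the previous crossing result forward.
    return np.outer(x2, c_k) @ w_k + b_k1 + c_k
```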
In step S212, a second input vector of the input information is input to the deep learning computing interface, and third output information determined by the deep learning computing interface based on the deep learning model is obtained.
In some embodiments, a first intermediate value is determined according to the weight parameter and the bias parameter between the l-th layer and the (l+1)-th layer of the deep learning model and the output result of the l-th layer, where l is a positive integer; the first intermediate value is input into an S-shaped growth curve (sigmoid) function to obtain a second intermediate value; and the product of the first intermediate value and the second intermediate value is determined as the output result of the (l+1)-th layer of the deep learning model.
In some embodiments, based on the Swish activation function, the output result h_{l+1} of the (l+1)-th layer of the deep learning model can be calculated using formula (4):
h_{l+1} = Swish(W_l · h_l + b_l) = (W_l · h_l + b_l) * sigmoid(W_l · h_l + b_l)    (4)
where l is the layer index of the deep learning model and is a positive integer, W_l represents the weight parameters between the l-th layer and the (l+1)-th layer of the deep learning model, b_l represents the bias parameters between the l-th layer and the (l+1)-th layer of the deep learning model, and sigmoid(·) represents an S-shaped growth curve function.
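A minimal sketch of formula (4) for one hidden layer of the deep learning model (array shapes assumed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def deep_layer(h_l, W_l, b_l):
    """Output of layer l+1 using the Swish activation.

    h_l -- output of layer l (h_0 is the second input vector)
    W_l -- weight matrix between layer l and layer l+1
    b_l -- bias vector between layer l and layer l+1
    """
    z = W_l @ h_l + b_l    # first intermediate value
    return z * sigmoid(z)  # Swish(z) = z * sigmoid(z)
```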
In step S214, a recommended value of the candidate object for the user is determined according to the first output information, the second output information, and the third output information.
In some embodiments, the first input vector is a sparse feature vector and the second input vector is a dense feature vector. The methods for determining the first input vector and the second input vector are described below with reference to fig. 3 and fig. 4, respectively.
Fig. 3 is a flow chart of a first input vector determination method according to some embodiments of the invention. As shown in fig. 3, the first input vector determination method of this embodiment includes steps S302 to S304.
In step S302, the one-hot encoding results of the features corresponding to the original attributes and of the features corresponding to the preset combination of the original attributes are obtained.
In step S304, the first input vector is generated according to the one-hot encoding result corresponding to each feature.
The features obtained by one-hot encoding are typically high-dimensional sparse features. Since a linear model handles such features easily, generating the first input vector in this way does not affect computational efficiency, and the original characteristics of the features are preserved.
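A small illustrative sketch of steps S302 to S304; the vocabularies and attribute values are made up for the example:

```python
import numpy as np

def one_hot(value, vocabulary):
    """One-hot encode a single categorical feature value."""
    vec = np.zeros(len(vocabulary))
    vec[vocabulary.index(value)] = 1.0
    return vec

# features for the original attributes plus one preset combination
vocabularies = {
    "gender": ["male", "female"],
    "item_type": ["book", "clothing", "digital"],
    "gender+item_type": ["male+book", "male+clothing", "male+digital",
                         "female+book", "female+clothing", "female+digital"],
}
features = {"gender": "female", "item_type": "book", "gender+item_type": "female+book"}

# concatenating the per-feature one-hot results yields the sparse first input vector
x1 = np.concatenate([one_hot(features[name], vocab)
                     for name, vocab in vocabularies.items()])
print(x1)  # 11-dimensional vector containing three ones
```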
Fig. 4 is a flow chart of a second input vector determination method according to some embodiments of the invention. As shown in fig. 4, the second input vector determination method of this embodiment includes steps S402 to S404.
In step S402, the embedded feature corresponding to each original attribute is determined according to a preset correspondence. For example, each feature may be mapped in advance, through training, to a low-dimensional dense vector that serves as its embedded feature, so that statistical methods can be used to measure the similarity between features.
In step S404, the second input vector is generated from the embedded feature corresponding to each original attribute.
By converting the original attribute into the embedded feature, the number of nodes of the feature combination model and the deep learning model can be reduced, and the calculation efficiency is improved.
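A small illustrative sketch of steps S402 to S404, assuming a pre-trained embedding table keyed by (attribute name, attribute value); the values shown are placeholders:

```python
import numpy as np

# preset correspondence: each attribute value maps to a pre-trained dense vector
embedding_table = {
    ("gender", "female"):  np.array([0.12, -0.40, 0.33, 0.05]),
    ("item_type", "book"): np.array([0.91,  0.08, -0.27, 0.44]),
}

def second_input_vector(attributes):
    """Concatenate the embedded feature of every original attribute."""
    return np.concatenate([embedding_table[(name, value)]
                           for name, value in attributes.items()])

x2 = second_input_vector({"gender": "female", "item_type": "book"})  # shape (8,)
```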
Some embodiments of the present invention may also train the feature computation model in advance to improve the accuracy of the pushing. An embodiment of the feature calculation model training method of the present invention is described below with reference to fig. 5.
FIG. 5 is a flow chart of a feature computation model training method according to some embodiments of the invention. As shown in fig. 5, the feature calculation model training method of this embodiment includes steps S502 to S512.
In step S502, training data is acquired, wherein the training data includes one or more attributes of a user and one or more attributes of an object as original attributes.
In step S504, the tag value of the training data is determined according to the operation information of the user corresponding to the training data on the corresponding object.
The operation information of the user on the corresponding object indicates whether the user performed a preset operation on that object, such as clicking, browsing, favoriting, or purchasing. The corresponding mark value is determined according to the category of the operation information. For example, when a user clicks on an item, the sample is marked as 1, and when the item is not clicked, it is marked as 0; as another example, if the user browses an article for more than 30 seconds, the sample is marked as 1, otherwise it is marked as 0.
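These marking rules can be written down directly; the record field names in the sketch are assumptions, not defined by the original disclosure:

```python
def mark_value(operation):
    """Return the mark value (1 or 0) for one training sample."""
    if operation.get("clicked"):
        return 1
    # alternative rule: browsing the object for more than 30 seconds counts as positive
    if operation.get("browse_seconds", 0) > 30:
        return 1
    return 0

print(mark_value({"clicked": False, "browse_seconds": 42}))  # 1
```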
In step S506, input information for training is generated according to the original attribute of the training data, wherein the input information for training includes a plurality of features.
In step S508, input information for training is input into the feature calculation model, and first, second, and third output information output by the feature calculation model are obtained.
The first output information is determined by a feature calculation model according to the linear calculation results of the plurality of features, the second output information is determined by a feature calculation model according to the result of performing first nonlinear combination on the features in the input information, and the third output information is determined by the result of performing second nonlinear combination on the features in the input information by the feature calculation model.
In step S510, a recommended value corresponding to the training data is determined according to the first output information, the second output information, and the third output information.
In step S512, parameters of the feature calculation model are adjusted according to the difference between the recommended value and the labeled value corresponding to the training data until the difference between the recommended value and the labeled value corresponding to the training data is smaller than a preset value.
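A high-level sketch of the training loop in steps S502 to S512. The model methods it calls (generate_input, forward, recommendation_value, adjust_parameters) are hypothetical wrappers around the components described earlier, not an API defined by this disclosure:

```python
def train(model, training_data, preset_value=0.05, max_rounds=1000):
    """Adjust model parameters until recommended values match the mark values."""
    for _ in range(max_rounds):
        worst_difference = 0.0
        for sample in training_data:
            inputs = model.generate_input(sample["original_attributes"])
            y1, y2, y3 = model.forward(inputs)            # three kinds of output information
            p = model.recommendation_value(y1, y2, y3)    # recommended value for this sample
            difference = abs(p - sample["mark_value"])
            model.adjust_parameters(difference, sample)   # e.g. one gradient step on the error
            worst_difference = max(worst_difference, difference)
        if worst_difference < preset_value:               # stop once every difference is small enough
            return
```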
By the method of this embodiment, the feature calculation model can be adjusted uniformly according to feature calculation results obtained in three different ways, so that it has both memorization capability and generalization capability. The information pushing result determined by the feature calculation model is therefore more accurate.
An embodiment of the information pushing apparatus of the present invention is described below with reference to fig. 6.
Fig. 6 is a schematic structural diagram of an information pushing device according to some embodiments of the present invention. As shown in fig. 6, the information pushing apparatus 60 of this embodiment includes: an attribute acquisition module 610 configured to acquire one or more attributes of a user and one or more attributes of a candidate object as original attributes; an input information generation module 620 configured to generate input information from the original attributes, wherein the input information includes a plurality of features; a feature calculation model 630 configured to output, based on the input information, first output information, the first output information being determined by the feature calculation model based on a result of linear calculation of the plurality of features, second output information, the second output information being determined by the feature calculation model based on a result of first nonlinear combination of the features in the input information, and third output information, the third output information being determined by the feature calculation model based on a result of second nonlinear combination of the features of the input information; a recommendation value determining module 640 configured to determine a recommendation value of the candidate object for the user according to the first output information, the second output information, and the third output information; the pushing module 650 is configured to push the candidate object to the user if the recommended value is greater than a preset threshold.
In some embodiments, feature computation model 630 includes linear computation interface 6310, feature combination computation interface 6320, and deep learning computation interface 6330; the linear computing interface 6310 is configured to determine the first output information based on the linear recommendation model; the feature combination computing interface 6320 is configured to determine second output information based on the feature combination model; the deep learning computing interface 6330 is configured to determine third output information based on the deep learning model.
In some embodiments, the input information generation module 620 is further configured to generate the first input vector according to the features corresponding to the original attributes and the features corresponding to the preset combination of the original attributes; generate the second input vector according to the features corresponding to the original attributes; and determine the first input vector and the second input vector as the input information, so that the first input vector is input to the linear computing interface and the second input vector is input to the feature combination computing interface and the deep learning computing interface.
In some embodiments, the first input vector is a sparse feature vector and the second input vector is a dense feature vector.
In some embodiments, the input information generation module 620 is further configured to obtain the one-hot encoding results of the features corresponding to the original attributes and of the features corresponding to the preset combination of the original attributes, and to generate the first input vector according to the one-hot encoding result corresponding to each feature.
In some embodiments, the input information generation module 620 is further configured to determine the embedded feature corresponding to each original attribute according to a preset correspondence, and to generate the second input vector according to the embedded feature corresponding to each original attribute.
In some embodiments, the feature combination calculation interface 6320 is further configured to determine an output result of a k+1-th layer of the feature combination model from a product of the second input vector and a transpose of an output vector of a k-th layer of the feature combination model, where k is a positive integer.
In some embodiments, the deep learning computing interface 6330 is further configured to determine a first intermediate value from the weight parameter and the bias parameter between the l-th layer and the (l+1)-th layer of the deep learning model and the output result of the l-th layer, where l is a positive integer; input the first intermediate value into an S-shaped growth curve (sigmoid) function to obtain a second intermediate value; and determine the product of the first intermediate value and the second intermediate value as the output result of the (l+1)-th layer of the deep learning model.
In some embodiments, the recommendation value determining module 640 is further configured to input the product of the combination result of the first output information, the second output information, the third output information and the preset weight vector into the activation function; and determining the output result of the activation function as a recommended value of the candidate object for the user.
In some embodiments, the information pushing device 60 further comprises a training module 660 configured to obtain training data, wherein the training data comprises one or more attributes of the user, one or more attributes of the object, as original attributes; determining a marking value of the training data according to the operation information of the corresponding object of the user corresponding to the training data; generating input information for training according to the original attribute of the training data, wherein the input information for training comprises a plurality of characteristics; inputting the input information for training into a feature calculation model to obtain first output information, second output information and third output information output by the feature calculation model; determining a recommended value corresponding to the training data according to the first output information, the second output information and the third output information; and adjusting parameters of the feature calculation model according to the difference between the recommended value and the marked value corresponding to the training data until the difference between the recommended value and the marked value corresponding to the training data is smaller than a preset value.
Fig. 7 is a schematic structural diagram of an information pushing device according to other embodiments of the present invention. As shown in fig. 7, the information push device 70 of this embodiment includes: a memory 710 and a processor 720 coupled to the memory 710, the processor 720 being configured to perform the information pushing method of any of the previous embodiments based on instructions stored in the memory 710.
The memory 710 may include, for example, system memory, a fixed nonvolatile storage medium, and so forth. The system memory stores, for example, an operating system, application programs, a boot loader (Boot Loader), and other programs.
Fig. 8 is a schematic structural diagram of an information pushing device according to still other embodiments of the present invention. As shown in fig. 8, the information pushing apparatus 80 of this embodiment includes a memory 810 and a processor 820, and may further include an input/output interface 830, a network interface 840, a storage interface 850, and the like. These interfaces 830, 840, 850 and the memory 810 and processor 820 may be connected by, for example, a bus 860. The input/output interface 830 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 840 provides a connection interface for various networking devices. The storage interface 850 provides a connection interface for external storage devices such as SD cards and USB flash drives.
An embodiment of the present invention also provides a computer-readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements any one of the foregoing information push methods.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable non-transitory storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (14)

1. An information pushing method, comprising:
acquiring one or more attributes of a user and one or more attributes of a candidate object as original attributes;
generating input information according to the original attribute, wherein the input information comprises a plurality of features;
inputting the input information into a feature calculation model to obtain first output information, second output information and third output information output by the feature calculation model, wherein the first output information is determined by the feature calculation model according to linear calculation results of the plurality of features, the second output information is determined by the feature calculation model according to a result obtained by performing first nonlinear combination on the features in the input information, and the third output information is determined by the feature calculation model according to a result obtained by performing second nonlinear combination on the features in the input information;
determining a recommended value of the candidate object for the user according to the first output information, the second output information and the third output information;
and pushing the candidate object to the user under the condition that the recommended value is larger than a preset threshold value.
2. The information pushing method of claim 1, wherein the feature computation model comprises a linear computation interface, a feature combination computation interface, and a deep learning computation interface;
and inputting the input information into the feature calculation model, and obtaining first output information determined by the linear calculation interface based on the linear recommendation model, second output information determined by the feature combination calculation interface based on the feature combination model and third output information determined by the deep learning calculation interface based on the deep learning model.
3. The information pushing method of claim 2, wherein the generating input information according to the original attribute comprises:
generating a first input vector according to the features corresponding to the original attributes and the features corresponding to the preset combination of the original attributes;
generating a second input vector according to the features corresponding to the original attributes;
the first input vector and the second input vector are determined as input information, so that the first input vector is input to the linear computing interface and the second input vector is input to the feature combination computing interface and the deep learning computing interface.
4. The information pushing method of claim 3, wherein the first input vector is a sparse feature vector and the second input vector is a dense feature vector.
5. The information pushing method according to claim 4, wherein the generating the first input vector according to the feature corresponding to the original attribute and the feature corresponding to the preset combination of the original attribute includes:
obtaining one-hot encoding results of the features corresponding to the original attributes and of the features corresponding to the preset combination of the original attributes;
and generating a first input vector according to the one-hot encoding result corresponding to each feature.
6. The information pushing method according to claim 4, wherein the generating a second input vector according to the feature corresponding to the original attribute includes:
determining embedded features corresponding to each original attribute according to a preset corresponding relation;
and generating a second input vector according to the embedded feature corresponding to each original attribute.
7. The information pushing method of claim 2, wherein obtaining the second output information determined by the feature combination computing interface based on the feature combination model comprises:
and determining the output result of the (k+1)-th layer of the feature combination model according to the product of the second input vector and the transpose of the output vector of the k-th layer of the feature combination model, wherein k is a positive integer.
8. The information pushing method of claim 2, wherein obtaining third output information determined by the deep learning computing interface based on the deep learning model comprises:
determining a first intermediate value according to the weight parameter and the bias parameter between the l-th layer and the (l+1)-th layer of the deep learning model and the output result of the l-th layer, wherein l is a positive integer;
inputting the first intermediate value into an S-shaped growth curve function to obtain a second intermediate value;
determining the product of the first intermediate value and the second intermediate value as the output result of the (l+1)-th layer of the deep learning model.
9. The information pushing method according to any one of claims 1 to 8, wherein the determining a recommended value of the candidate object for the user according to the first output information, the second output information, and the third output information includes:
inputting the product of the combination result of the first output information, the second output information and the third output information and a preset weight vector into an activation function;
and determining an output result of the activation function as a recommended value of the candidate object for the user.
10. The information pushing method according to any one of claims 1 to 8, further comprising:
obtaining training data, wherein the training data comprises one or more attributes of a user and one or more attributes of an object as original attributes;
determining a marking value of the training data according to the operation information of the corresponding object of the user corresponding to the training data;
generating input information for training according to original attributes of training data, wherein the input information for training comprises a plurality of characteristics;
inputting the input information for training into a feature calculation model to obtain first output information, second output information and third output information output by the feature calculation model;
determining a recommended value corresponding to the training data according to the first output information, the second output information and the third output information;
and adjusting parameters of the feature calculation model according to the difference between the recommended value and the marked value corresponding to the training data until the difference between the recommended value and the marked value corresponding to the training data is smaller than a preset value.
11. An information pushing apparatus, comprising:
an attribute acquisition module configured to acquire one or more attributes of a user and one or more attributes of a candidate object as original attributes;
an input information generation module configured to generate input information according to an original attribute, wherein the input information includes a plurality of features;
the feature calculation model is configured to output first output information, second output information and third output information according to the input information, wherein the first output information is determined by the feature calculation model according to linear calculation results of the plurality of features, the second output information is determined by the feature calculation model according to a result obtained by performing first nonlinear combination on the features in the input information, and the third output information is determined by the feature calculation model according to a result obtained by performing second nonlinear combination on the features in the input information;
a recommendation value determining module configured to determine a recommendation value of the candidate object for the user according to the first output information, the second output information, and the third output information;
and a pushing module configured to push the candidate object to the user under the condition that the recommended value is larger than a preset threshold value.
12. The information pushing device of claim 11, wherein the feature computation model comprises a linear computation interface, a feature combination computation interface, and a deep learning computation interface;
the linear computing interface is configured to determine first output information based on a linear recommendation model;
the feature combination computing interface is configured to determine second output information based on a feature combination model;
the deep learning computing interface is configured to determine third output information based on a deep learning model.
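The split in claim 12 resembles a wide / cross / deep architecture. Below is a self-contained NumPy sketch in which a dot product stands in for the linear recommendation model, a factorization-machine second-order term for the feature combination model, and a small ReLU network for the deep learning model; these concrete sub-models and all names are assumptions introduced only to illustrate the three interfaces.

```python
import numpy as np

class FeatureCalculationModel:
    """Illustrative three-interface model returning (first, second, third) output information."""

    def __init__(self, dim, hidden=16, factors=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w_lin = rng.normal(size=dim)            # linear computing interface
        self.V = rng.normal(size=(dim, factors))     # feature combination computing interface
        self.W1 = rng.normal(size=(hidden, dim))     # deep learning computing interface
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(size=hidden)

    def __call__(self, x):
        first = float(self.w_lin @ x)                # linear calculation over the features
        # FM-style second-order term: one nonlinear combination of the features.
        second = 0.5 * float(np.sum((self.V.T @ x) ** 2) - np.sum((self.V.T * x) ** 2))
        # Small feed-forward network: another nonlinear combination of the features.
        hidden = np.maximum(0.0, self.W1 @ x + self.b1)
        third = float(self.W2 @ hidden)
        return first, second, third

# Hypothetical usage on an 8-dimensional input information vector.
model = FeatureCalculationModel(dim=8)
first, second, third = model(np.random.default_rng(1).normal(size=8))
```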
13. An information pushing apparatus, comprising:
a memory; and
a processor coupled to the memory, the processor being configured to perform the information pushing method of any one of claims 1 to 10 based on instructions stored in the memory.
14. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the information pushing method of any one of claims 1 to 10.
CN201910419027.7A 2019-05-20 2019-05-20 Information pushing method, device and computer readable storage medium Active CN111782928B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910419027.7A CN111782928B (en) 2019-05-20 2019-05-20 Information pushing method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111782928A CN111782928A (en) 2020-10-16
CN111782928B true CN111782928B (en) 2023-12-08

Family

ID=72755573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910419027.7A Active CN111782928B (en) 2019-05-20 2019-05-20 Information pushing method, device and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111782928B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112258297A (en) * 2020-11-12 2021-01-22 北京沃东天骏信息技术有限公司 Method, device and computer-readable storage medium for pushing description information of article
CN117670467A (en) * 2023-11-01 2024-03-08 广州市数商云网络科技有限公司 Supply-purchase management and control method and device based on B2B industrial platform

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446970A (en) * 2014-06-10 2016-03-30 华为技术有限公司 Item recommendation method and device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018210124A1 (en) * 2017-05-15 2018-11-22 京东方科技集团股份有限公司 Clothing recommendation method and clothing recommendation device
CN108076154A (en) * 2017-12-21 2018-05-25 广东欧珀移动通信有限公司 Application message recommends method, apparatus and storage medium and server
CN108920641A (en) * 2018-07-02 2018-11-30 北京理工大学 A kind of information fusion personalized recommendation method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Information recommendation method based on denoising autoencoder network and word vectors; Guo Yudong; Guo Zhigang; Xi Yaoyi; Computer Engineering, No. 12; full text *
Latent factor model recommendation algorithm fusing user attributes; Wu Ke; Zhan Yinwei; Li Ying; Computer Engineering, No. 12; full text *

Also Published As

Publication number Publication date
CN111782928A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN109582956A (en) text representation method and device applied to sentence embedding
Schramski et al. Network environ theory, simulation, and EcoNet® 2.0
CN106251174A (en) Information recommendation method and device
CN112085565A (en) Deep learning-based information recommendation method, device, equipment and storage medium
Cioaca et al. Second-order adjoints for solving PDE-constrained optimization problems
CN111782928B (en) Information pushing method, device and computer readable storage medium
KR101635283B1 (en) Method for analyzing data based on matrix factorization model and apparatus therefor
Dehghan et al. On the reflexive and anti-reflexive solutions of the generalised coupled Sylvester matrix equations
CN109410001A (en) A kind of Method of Commodity Recommendation, system, electronic equipment and storage medium
Balomenos et al. Finite element reliability and sensitivity analysis of structures using the multiplicative dimensional reduction method
CN106411683A (en) Determination method and apparatus of key social information
CN113821724B (en) Time interval enhancement-based graph neural network recommendation method
Mikula et al. Evolution of curves on a surface driven by the geodesic curvature and external force
Gordon et al. TSI-GNN: extending graph neural networks to handle missing data in temporal settings
Fan Parameter estimation of stable distributions
Maheshwari et al. Non-homogeneous space-time fractional Poisson processes
JP5826721B2 (en) Missing value prediction device, product recommendation device, method and program
CN109728958B (en) Network node trust prediction method, device, equipment and medium
CN103778329B (en) A kind of construct the method that data supply value
Head Skewed and extreme: Useful distributions for economic heterogeneity
JP5860828B2 (en) Action probability estimation device, method, and program
CN111694945A (en) Legal association recommendation method and device based on neural network
Balakrishna et al. Parameter estimation in minification processes
Duan et al. Plae: Time-series prediction improvement by adaptive decomposition
Wang et al. Cross-domain collaborative recommendation by transfer learning of heterogeneous feedbacks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant