CN111737586A - Information recommendation method, device, equipment and computer readable storage medium - Google Patents


Info

Publication number
CN111737586A
CN111737586A
Authority
CN
China
Prior art keywords
vector
sample
information
dimension
embedded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010835094.XA
Other languages
Chinese (zh)
Other versions
CN111737586B (en)
Inventor
杨建博
周星
卢鑫炎
李武军
姚开浪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010835094.XA priority Critical patent/CN111737586B/en
Publication of CN111737586A publication Critical patent/CN111737586A/en
Application granted granted Critical
Publication of CN111737586B publication Critical patent/CN111737586B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: Physics
    • G06: Computing; Calculating or Counting
    • G06F: Electric digital data processing
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The embodiment of the application provides an information recommendation method, apparatus, device, and computer-readable storage medium, wherein the method comprises the following steps: mapping a first feature vector of the information to be recommended and a second feature vector of the target object, respectively, to correspondingly obtain a first dimension-reduced vector and a second dimension-reduced vector; performing vector embedding processing on the first dimension-reduced vector and the second dimension-reduced vector, respectively, using vectors to be embedded, to correspondingly obtain a first embedded vector of the information to be recommended and a second embedded vector of the target object, where the vectors to be embedded are learned by training an information recommendation model; determining the matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector; and recommending the information to be recommended to the target object when the matching degree is greater than a threshold. Through the embodiment of the application, the storage cost of the feature data and the recommendation model parameters can be reduced without degrading the performance of the information recommendation model.

Description

Information recommendation method, device, equipment and computer readable storage medium
Technical Field
The embodiment of the application relates to the technical field of internet, and relates to but is not limited to an information recommendation method, an information recommendation device, information recommendation equipment and a computer-readable storage medium.
Background
The recommendation algorithm in an information recommendation model generally computes a user's "score" for, or "preference" toward, an item through a dot-product operation between the embedded vectors of the user and the item, where these embedded vectors are obtained by a linear or nonlinear transformation of the feature vectors of the user and the item, respectively. The linear or nonlinear transformation of a feature vector implicitly includes learning one embedded vector for each dimension of the feature, and the number of embedded vectors to be learned determines the number of model parameters, so the storage cost of the model parameters is in direct proportion to the feature dimension.
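The dot-product scoring described above can be sketched as follows; the linear-transformation embedding, the toy dimensions, and all names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def embed(feature_vec, weights):
    # Linear transformation of a feature vector into an embedded vector;
    # each column of `weights` is the embedded vector learned for one
    # feature dimension, so model size grows with the feature dimension.
    return weights @ feature_vec

def preference_score(user_feat, item_feat, w_user, w_item):
    # The user's "score" for the item: dot product of the two embeddings.
    return float(embed(user_feat, w_user) @ embed(item_feat, w_item))

rng = np.random.default_rng(0)
user_feat = rng.random(6)        # toy 6-dimensional user feature vector
item_feat = rng.random(6)        # toy 6-dimensional item feature vector
w_user = rng.random((4, 6))      # 4-dim embedding, one column per feature dim
w_item = rng.random((4, 6))
score = preference_score(user_feat, item_feat, w_user, w_item)
```

Note that `w_user` has one column per feature dimension, which is exactly why parameter storage is proportional to the feature dimension.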
In pursuit of higher recommendation accuracy, high-dimensional features have become a common setting, but they place a great load on the memory of a single server; how to reduce the feature dimensions of users and items is therefore a critical problem to be solved. Common methods for reducing the feature dimension currently include principal component analysis (PCA) and feature selection.
However, the features obtained after dimension reduction by principal component analysis destroy the sparse structure of the original high-dimensional features, so the storage overhead of feature data increases sharply, while feature selection requires additional supervision information. Therefore, neither of these two dimension reduction methods in the related art can effectively reduce the overhead of feature data storage while guaranteeing the performance of the information recommendation model.
Disclosure of Invention
The embodiment of the application provides an information recommendation method, apparatus, device, and computer-readable storage medium, in which mapping processing and vector embedding processing are performed in sequence on a first feature vector of the information to be recommended and a second feature vector of the target object, respectively; a vector to be embedded is introduced during the vector embedding processing, and high-dimensional features are mapped into a low-dimensional space through the vector to be embedded, so that the storage overhead of feature data can be reduced without significantly degrading the performance of the information recommendation model.
The technical scheme of the embodiment of the application is realized as follows:
the embodiment of the application provides an information recommendation method, which comprises the following steps:
respectively extracting features of the information to be recommended and the target object, and correspondingly obtaining a first feature vector with a first dimension and a second feature vector with a second dimension;
mapping the first feature vector and the second feature vector respectively to obtain a first dimension-reduced vector with a third dimension and a second dimension-reduced vector with a fourth dimension; wherein the third dimension is less than the first dimension and the fourth dimension is less than the second dimension;
vector embedding processing is carried out on the first dimension reduction vector and the second dimension reduction vector respectively by adopting vectors to be embedded, and a first embedded vector with a fifth dimension of the information to be recommended and a second embedded vector with a sixth dimension of the target object are obtained correspondingly; the fifth dimension is smaller than the third dimension, the sixth dimension is smaller than the fourth dimension, and the vector to be embedded is obtained by learning after training an information recommendation model;
determining the matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector;
and when the matching degree is greater than a threshold value, recommending the information to be recommended to the target object.
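The steps above can be sketched end to end as follows. This is a rough illustration, not the patent's actual implementation: the dimensions, the random-projection mapping, and the random stand-ins for the learned parameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def map_down(x, proj):
    # Mapping step: reduce the dimension of a feature vector (a fixed
    # random projection stands in here for the patent's mapping step).
    return proj @ x

def embed(x, vecs_to_embed):
    # Embedding step: combine the vectors to be embedded, one per
    # dimension of the reduced space (learned during model training).
    return vecs_to_embed @ x

# Toy dimensions: first/second = 1000, third/fourth = 32, fifth/sixth = 8.
x_item = rng.random(1000)                        # first feature vector
x_user = rng.random(1000)                        # second feature vector
proj_item = rng.standard_normal((32, 1000))      # mapping for the item side
proj_user = rng.standard_normal((32, 1000))      # mapping for the user side
emb_item = rng.standard_normal((8, 32))          # stands in for learned vectors
emb_user = rng.standard_normal((8, 32))

v_item = embed(map_down(x_item, proj_item), emb_item)  # first embedded vector
v_user = embed(map_down(x_user, proj_user), emb_user)  # second embedded vector
match = float(v_item @ v_user)                         # matching degree
recommend = match > 0.0                                # compare with threshold
```

Only the 8x32 embedding matrices would need to be learned here, rather than one embedded vector per original feature dimension, which is the source of the parameter savings.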
In some embodiments, the method further comprises: determining the matching degree between the information to be recommended and the target object by adopting an information recommendation model; wherein a mapper in the information recommendation model comprises at least a vector embedding layer; the parameters in the vector embedding layer include at least: a third weight of the sample recommendation information, a fourth weight of the sample object, a third vector to be embedded of the sample recommendation information, and a fourth vector to be embedded of the sample object;
correspondingly, the correcting the parameters in the mapper according to the loss result comprises:
according to the loss result, correcting at least one of the following: the third weight, the fourth weight, the third vector to be embedded, and the fourth vector to be embedded.
An embodiment of the present application provides an information recommendation device, including:
the feature extraction module is used for respectively extracting features of the information to be recommended and the target object, to correspondingly obtain a first feature vector with a first dimension and a second feature vector with a second dimension;
the dimensionality reduction processing module is used for respectively mapping the first eigenvector and the second eigenvector to correspondingly obtain a first dimensionality reduction vector with a third dimensionality and a second dimensionality reduction vector with a fourth dimensionality; wherein the third dimension is less than the first dimension and the fourth dimension is less than the second dimension;
the vector embedding processing module is used for respectively carrying out vector embedding processing on the first dimension reduction vector and the second dimension reduction vector by adopting a vector to be embedded, and correspondingly obtaining a first embedding vector with a fifth dimension of the information to be recommended and a second embedding vector with a sixth dimension of the target object; the fifth dimension is smaller than the third dimension, the sixth dimension is smaller than the fourth dimension, and the vector to be embedded is obtained by learning after training an information recommendation model;
the determining module is used for determining the matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector;
and the information recommending module is used for recommending the information to be recommended to the target object when the matching degree is greater than a threshold value.
An embodiment of the present application provides an information recommendation device, including: a memory for storing executable instructions; and the processor is used for realizing the method when executing the executable instructions stored in the memory.
Embodiments of the present application provide a computer-readable storage medium storing executable instructions for causing a processor to implement the above-mentioned method when executed.
The embodiment of the application has the following beneficial effects: dimension reduction processing and vector embedding processing are performed in sequence on the first feature vector of the information to be recommended and the second feature vector of the target object, achieving two rounds of dimension reduction and correspondingly yielding the first embedded vector of the information to be recommended and the second embedded vector of the target object; the matching degree between the information to be recommended and the target object is then determined according to the two embedded vectors, realizing recommendation of the information to be recommended. Because the vector to be embedded introduced during vector embedding processing maps high-dimensional features into a low-dimensional space, embedded vectors need be learned only for the features of the low-dimensional space, which reduces the number of model parameters and the storage cost of feature data without degrading the performance of the information recommendation model.
Drawings
FIG. 1 is an alternative architecture diagram of an information recommendation system provided by an embodiment of the present application;
FIG. 2A is a schematic diagram of an alternative structure of the information recommendation system applied to the blockchain system according to the embodiment of the present application;
FIG. 2B is an alternative block diagram according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a server provided in an embodiment of the present application;
FIG. 4 is a schematic flow chart of an alternative information recommendation method provided in the embodiments of the present application;
FIG. 5 is an alternative flow chart of an information recommendation method provided in an embodiment of the present application;
FIG. 6 is an alternative structural diagram of an information recommendation system provided in an embodiment of the present application;
FIG. 7 is an alternative flow chart of an information recommendation method provided in an embodiment of the present application;
FIG. 8 is an alternative flow chart of an information recommendation method provided in an embodiment of the present application;
FIG. 9A is an alternative structural diagram of an information recommendation model provided by an embodiment of the present application;
FIG. 9B is an alternative structural diagram of an information recommendation model provided by an embodiment of the present application;
FIG. 9C is an alternative structural diagram of an information recommendation model provided by an embodiment of the present application;
FIG. 9D is an alternative structural diagram of an information recommendation model provided by an embodiment of the present application;
FIG. 10 is an alternative flowchart of a training method for an information recommendation model according to an embodiment of the present disclosure;
fig. 11 is an alternative flowchart of an information recommendation method provided in an embodiment of the present application;
fig. 12 is a schematic structural diagram of a mapper provided in the embodiment of the present application;
fig. 13 is an architecture diagram of a recommender as provided in embodiments of the present application.
Detailed Description
In order to make the objectives, technical solutions and advantages of the present application clearer, the present application will be described in further detail with reference to the attached drawings, the described embodiments should not be considered as limiting the present application, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts shall fall within the protection scope of the present application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present application belong. The terminology used in the embodiments of the present application is for the purpose of describing the embodiments of the present application only and is not intended to be limiting of the present application.
Before explaining the embodiments of the present application, terms referred to in the present application are first explained:
1) Matrix Completion (MC): given a target matrix of which only some elements are observed, the process of recovering the missing elements of the matrix from those observed elements.
2) Inductive Matrix Completion (IMC): a matrix completion task in which, in addition, a feature vector has been collected for each row or each column of the matrix.
3) Feature Hashing: a dimension reduction technique that maps high-dimensional features into a low-dimensional feature space.
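A minimal sketch of feature hashing, assuming the common signed-bucket variant; the CRC32 hash and the signing scheme are illustrative choices, not specified by this document:

```python
import zlib

def feature_hash(features, num_buckets):
    # Map sparse high-dimensional features {name: value} into a dense
    # low-dimensional vector by hashing each feature name to a bucket.
    out = [0.0] * num_buckets
    for name, value in features.items():
        h = zlib.crc32(name.encode("utf-8"))  # deterministic unsigned 32-bit hash
        bucket = h % num_buckets
        # Using one hash bit as a sign reduces bias from bucket collisions.
        sign = 1.0 if (h >> 31) & 1 == 0 else -1.0
        out[bucket] += sign * value
    return out

vec = feature_hash({"user_id=42": 1.0, "clicked:news": 3.0}, num_buckets=8)
```

The output stays sparse-friendly: only buckets that received at least one feature are nonzero, unlike a PCA projection, which is dense.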
In order to better understand the information recommendation method provided in the embodiment of the present application, first, an information recommendation method in the related art is described:
Because the storage cost of the model parameters is in direct proportion to the feature dimension, the storage cost is small when the features are low-dimensional; but when the features are high-dimensional, the storage cost of the model parameters is large and becomes the bottleneck for successful deployment of the model.
At present, with explosive data growth and a variety of feature mining techniques, the feature dimension can easily reach 10^9 and above, and in pursuit of higher recommendation accuracy, high-dimensional features have become a common setting. In an application scenario with a feature dimension of 10^9, if the dimension of the embedded vector is 32, at least 238 GB of storage space is required to store the model parameters, which places a huge load on the memory of a single server, and it is even more difficult to successfully deploy the model under the setting of multitask parallel operation. In addition, the high storage overhead of the model parameters also reduces the inference speed of the model, because the cache hit rate decreases as memory usage grows. Therefore, how to reduce the storage overhead of the model parameters without significantly degrading model performance is a significant problem.
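The 238 GB figure quoted above can be reproduced with back-of-the-envelope arithmetic, assuming one 32-dimensional embedded vector of 64-bit floats per feature dimension (the 8-byte float width is an assumption, not stated in the text):

```python
feature_dim = 10**9        # 1e9 feature dimensions
embedding_dim = 32         # one 32-dim embedded vector per feature dimension
bytes_per_float = 8        # 64-bit floats (assumed)

total_bytes = feature_dim * embedding_dim * bytes_per_float
total_gib = total_bytes / 2**30
print(round(total_gib))    # ~238 GiB, matching the figure in the text
```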
The key to the above problem is how to reduce the feature dimension of users and articles, because after reducing the feature dimension, the number of embedded vectors that need to be learned for features is reduced, and the number of model parameters is also reduced. In the related art, a common method for reducing feature dimension includes: principal component analysis and feature selection.
The idea of principal component analysis is to minimize reconstruction error: the original feature vectors are projected onto the eigenvector directions corresponding to the top k eigenvalues of the covariance matrix, which minimizes the reconstruction error (k being the dimensionality of the feature vectors after dimension reduction). The idea of feature selection is to retain the features with high information content, where the information content is calculated from the label information of the data by a specific information metric, such as Mutual Information.
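A minimal sketch of the mutual-information metric mentioned above, computed from empirical counts; the toy data and the function are illustrative, not from the patent:

```python
from collections import Counter
from math import log

def mutual_information(xs, ys):
    # Empirical I(X;Y) = sum over (x, y) of p(x,y) * log(p(x,y) / (p(x) p(y))).
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p(x,y) / (p(x) p(y)) rewritten with counts: c*n / (count_x * count_y)
        mi += p_joint * log(c * n / (px[x] * py[y]))
    return mi

# A feature perfectly aligned with the label carries high information;
# a feature independent of the label carries none.
feature = [0, 0, 1, 1]
label   = [0, 0, 1, 1]
noise   = [0, 1, 0, 1]
```

Feature selection would keep `feature` and drop `noise`; this also illustrates why label (supervision) information is indispensable for the method.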
Although the two dimension reduction technologies can be widely applied to a plurality of machine learning tasks and obtain good results, due to the characteristics of the task of the recommendation system, the two dimension reduction technologies cannot well solve the problem of high storage overhead of model parameters in the recommendation system, and both the two dimension reduction technologies have some defects:
although principal component analysis is a good reconstruction of the original features, it has three drawbacks that make it difficult to apply to the recommendation system task: firstly, the features after dimensionality reduction are dense, which can destroy the sparse structure of the original high-dimensional features, thereby sharply increasing the storage overhead of feature data; secondly, the size of a mapping matrix required to be calculated in the principal component analysis method is in a direct proportion relation with the characteristic dimension, and the size is consistent with the size of an embedded vector matrix required to be learned; thirdly, the computational complexity of the principal component analysis method is
Figure 986065DEST_PATH_IMAGE001
Wherein
Figure 762260DEST_PATH_IMAGE002
Is the dimension of the feature(s),
Figure 807576DEST_PATH_IMAGE003
is the number of samples, in recommending system tasks,
Figure 858578DEST_PATH_IMAGE003
and
Figure 204109DEST_PATH_IMAGE002
very large, e.g. up to 109Is a very common setup where the time overhead of principal component analysis is large.
The feature selection method requires supervision information, such as data labels, when calculating the information metric values. Unlike traditional supervised learning, there is currently no effective information metric specially designed for recommendation system tasks.
Based on the above problems in the related art, an embodiment of the present application provides an information recommendation method, where feature extraction is performed on information to be recommended and a target object respectively to obtain a first feature vector and a second feature vector correspondingly, then mapping processing and vector embedding processing are performed on the first feature vector of the information to be recommended and the second feature vector of the target object respectively in sequence to obtain a first embedded vector of the information to be recommended and a second embedded vector of the target object correspondingly, and finally a matching degree between the information to be recommended and the target object is determined according to the first embedded vector and the second embedded vector to implement recommendation of the information to be recommended. Therefore, the two-time dimension reduction process is realized through the mapping process and the vector embedding process, the vector to be embedded is introduced during the vector embedding process, the high-dimensional features are mapped to the low-dimensional space through the vector to be embedded, the vector to be embedded is only learned for the features of the low-dimensional space, the number of model parameters can be successfully reduced, the storage cost of feature data is reduced, and the performance of an information recommendation model cannot be reduced.
An exemplary application of the information recommendation device provided in the embodiments of the present application is described below, and the information recommendation device provided in the embodiments of the present application may be implemented as a notebook computer, a tablet computer, a desktop computer, a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), an intelligent robot, an e-book reader, or any other terminal having computing and data processing capabilities or a terminal having a capability of receiving information to be recommended, and may also be implemented as a server. Next, an exemplary application when the information recommendation apparatus is implemented as a server will be explained.
Referring to fig. 1, fig. 1 is an alternative architecture diagram of an information recommendation system 10 provided in an embodiment of the present application. To implement information recommendation to the user's terminal, the terminals (the first terminal 100-1 and the second terminal 100-2 are exemplarily shown) are connected to the server 300 through the network 200, and the network 200 may be a wide area network or a local area network, or a combination of the two.
Here, the first terminal 100-1 is taken as a target object for explanation, and the first terminal 100-1 displays an interface of an Application (APP) on the current interface 110-1, where the APP may be a shopping APP or a video playing APP, for example. The first terminal 100-1 and the second terminal 100-2 may also display information to be recommended on the current interface. In the embodiment of the application, the server 300 performs feature extraction on information to be recommended to correspondingly obtain a first feature vector, and meanwhile, the server 300 obtains feature data of a target object (namely, the terminal 100-1), performs feature extraction on the target object to correspondingly obtain a second feature vector; then, mapping the first eigenvector and the second eigenvector respectively to obtain a first dimension reduction vector and a second dimension reduction vector correspondingly, so that one-time dimension reduction of the first eigenvector and the second eigenvector is realized; respectively carrying out vector embedding processing on the first dimension reduction vector and the second dimension reduction vector to correspondingly obtain a first embedded vector of information to be recommended and a second embedded vector of a target object, so that secondary dimension reduction is realized; determining the matching degree between the information to be recommended and the target object according to the first embedding vector and the second embedding vector; finally, it is determined that the information to be recommended can be recommended to the first terminal 100-1, so that the information to be recommended is sent to the first terminal 100-1, and after receiving the information to be recommended, the first terminal 100-1 displays the information to be recommended on the current interface 110-1.
In other embodiments, the server 300 may determine matching degrees of the information to be recommended and the first terminal 100-1 and the second terminal 100-2, respectively, then determine a target terminal receiving the information to be recommended in the first terminal 100-1 and the second terminal 100-2 according to the two determined matching degrees, and send the information to be recommended to the target terminal.
The information recommendation system 10 in the embodiment of the present application may also be a distributed system 201 of a blockchain system. Referring to fig. 2A, fig. 2A is an optional structural schematic diagram of the information recommendation system 10 applied to the blockchain system, where the distributed system 201 may be formed by a plurality of nodes 202 (computing devices in any form in the access network, such as servers and user terminals) and clients 203. A Peer-to-Peer (P2P) network is formed between the nodes, and the P2P protocol is an application-layer protocol running on the Transmission Control Protocol (TCP). In a distributed system, any machine, such as a server or a terminal, can join to become a node; a node comprises a hardware layer, a middle layer, an operating-system layer, and an application layer.
In the distributed system 201, each node 202 corresponds to one of the terminal 100 and the server, and data of the node 202 is collected at each node 202. For example, for the first terminal 100-1, an application program running on the first terminal 100-1, interaction data of a user on the first terminal 100-1 (e.g., frequency and number of clicks on a certain application, number of clicks on a certain product in a certain application, etc.) and the like can be collected. The interaction data can depict the user portrait and determine the habit and the preference of the user, so that the data can be collected for determining the matching degree of the information to be recommended and the first terminal 100-1 in the subsequent information recommendation. In some embodiments, the mapping relationship between the information to be recommended and the target object when the information is successfully recommended in the information recommendation process can be collected.
In the embodiment of the application, the data are collected and stored in the uplink mode, so that the stored data can be directly acquired from the block chain system in the subsequent information recommendation process, information support is provided for the subsequent information recommendation process according to the stored data, and the subsequent characteristic data of the target object is determined through the data acquired from the block chain.
In the embodiment of the application, in the blockchain system, the historical interaction data and the recommendation information of each terminal are recorded and cannot be changed, and the server recommends new information along with further operation of a user on the terminal, so that the user portrait is updated, and the data stored in the blockchain is also updated. Therefore, through the block chain system in the embodiment of the application, the interactive data of the terminal can be overlaid and updated in time, so that the feature data of a new target object can be provided subsequently during information recommendation, a new feature vector can be obtained, and the problem of inaccurate information recommendation caused by updating of the user interactive data is solved.
Referring to fig. 2B, fig. 2B is an optional schematic diagram of a Block Structure provided in this embodiment. Each block includes a hash value of the transaction records stored in the block (the hash value of the block itself) and the hash value of the previous block, and the blocks are connected by these hash values to form a blockchain. A block may also include information such as a timestamp at the time of block generation. A blockchain is essentially a decentralized database: a chain of data blocks associated using cryptography, where each data block contains information used to verify the validity (tamper-resistance) of its information and to generate the next block.
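The hash-linked block structure described above can be sketched as follows; the record format and the fixed timestamp are simplifications for illustration, not the patent's actual chain format:

```python
import hashlib
import json

def make_block(records, prev_hash):
    # Each block stores its own records and the previous block's hash;
    # the block's hash covers both, so altering any past block changes
    # every hash after it, which makes tampering evident.
    body = {"records": records, "prev_hash": prev_hash,
            "timestamp": 0}  # fixed timestamp for reproducibility
    encoded = json.dumps(body, sort_keys=True).encode("utf-8")
    return {**body, "hash": hashlib.sha256(encoded).hexdigest()}

genesis = make_block(["user clicked item A"], prev_hash="0" * 64)
second = make_block(["item A recommended to user"], prev_hash=genesis["hash"])
```

Recomputing `second` with tampered records yields a different hash, so the chain downstream no longer matches, which is the anti-tamper property the text relies on.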
Referring to fig. 3, fig. 3 is a schematic structural diagram of a server 300 according to an embodiment of the present application, where the server 300 shown in fig. 3 includes: at least one processor 310, memory 350, at least one network interface 320, and a user interface 330. The various components in server 300 are coupled together by a bus system 340. It will be appreciated that the bus system 340 is used to enable communications among the components connected. The bus system 340 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 340 in fig. 3.
The Processor 310 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like, wherein the general purpose Processor may be a microprocessor or any conventional Processor, or the like.
The user interface 330 includes one or more output devices 331, including one or more speakers and/or one or more visual display screens, that enable presentation of media content. The user interface 330 also includes one or more input devices 332, including user interface components to facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, other input buttons and controls.
The memory 350 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid state memory, hard disk drives, optical disk drives, and the like. The memory 350 optionally includes one or more storage devices physically located remote from the processor 310. The memory 350 may include volatile memory, nonvolatile memory, or both. The nonvolatile memory may be a Read Only Memory (ROM), and the volatile memory may be a Random Access Memory (RAM). The memory 350 described in the embodiments herein is intended to comprise any suitable type of memory. In some embodiments, the memory 350 can store data to support various operations, examples of which include programs, modules, and data structures, or subsets or supersets thereof, as exemplified below.
An operating system 351 including system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
a network communication module 352 for communicating to other computing devices via one or more (wired or wireless) network interfaces 320, exemplary network interfaces 320 including: bluetooth, wireless compatibility authentication (WiFi), and Universal Serial Bus (USB), etc.;
an input processing module 353 for detecting one or more user inputs or interactions from one of the one or more input devices 332 and translating the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of the present application may be implemented in software, and fig. 3 illustrates an information recommendation apparatus 354 stored in the memory 350, where the information recommendation apparatus 354 may be an information recommendation apparatus in the server 300, and may be software in the form of programs and plug-ins, and includes the following software modules: feature extraction module 3541, dimension reduction processing module 3542, vector embedding processing module 3543, determination module 3544, and information recommendation module 3545, which are logical and thus can be arbitrarily combined or further separated depending on the functionality implemented. The functions of the respective modules will be explained below.
In other embodiments, the apparatus provided in the embodiments of the present Application may be implemented in hardware, and for example, the apparatus provided in the embodiments of the present Application may be a processor in the form of a hardware decoding processor, which is programmed to execute the information recommendation method provided in the embodiments of the present Application, for example, the processor in the form of the hardware decoding processor may be one or more Application Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field Programmable Gate Arrays (FPGAs), or other electronic components.
The information recommendation method provided by the embodiment of the present application will be described below in conjunction with an exemplary application and implementation of the server provided by the embodiment of the present application. Referring to fig. 4, fig. 4 is an optional flowchart of an information recommendation method provided in an embodiment of the present application, and will be described with reference to the steps shown in fig. 4.
Step S401, feature extraction is performed on the information to be recommended and on the target object, correspondingly obtaining a first feature vector with a first dimension and a second feature vector with a second dimension.
Here, the information to be recommended may be information in any form; for example, the information to be recommended may be commodity information, advertisements, videos, news, application programs, and the like. Taking commodity information as an example, the method of the embodiment of the application can be applied to a server of a commodity seller or to a third-party recommendation server; through the method of the embodiment of the application, the server determines whether the target object is interested in the commodity, so as to recommend the commodity to the target object. The target object can be a user terminal, and the user terminal collects the interaction data of its user over a preset historical time period to form the target object information. The interaction data may include, but is not limited to: clicking data, browsing data, the frequency and number of times an application is opened, the number of times a product is clicked within an application, like or comment information, information on purchased commodities, reserved service information, terminal usage duration and time periods, and the like. By acquiring the interaction data of the target object, a user portrait of the terminal user can be depicted and the habits and preferences of the user determined, so that accurate information recommendation can be performed for the user.
In the embodiment of the present application, performing feature extraction on the target object may mean performing feature extraction on the obtained target object information. The feature extraction performed on the information to be recommended and on the target object converts the corresponding text data into numerical data.
For example, all texts in the corpus can be pre-segmented, all words collected and numbered, and a vocabulary constructed, where the vocabulary is a dictionary structure whose key is a word and whose value is the index of that word. With the vocabulary in place, a vector can be used to represent a single word. Each word is represented as a vector of n columns, called a word vector, where column 0 of the word vector corresponds to index 0 in the vocabulary, column 1 of the word vector corresponds to index 1 in the vocabulary, and so on. After the word vectors are formed, the corresponding word vectors can be looked up in the vocabulary according to the text data of the information to be recommended and of the target object, and the first feature vector of the information to be recommended and the second feature vector of the target object are formed from the looked-up word vectors. In this embodiment of the present application, the dimension of the first feature vector and the dimension of the second feature vector may be the same or different.
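The vocabulary construction and word-vector lookup described above can be sketched as follows; the toy corpus, whitespace tokenization, and one-hot representation are illustrative assumptions:

```python
# Build a vocabulary (key: word, value: index) from a pre-segmented corpus,
# then represent a single word as an n-column word vector whose column i
# corresponds to index i in the vocabulary.
corpus = ["recommend new phone", "user clicked phone ad"]  # toy corpus

vocabulary = {}
for text in corpus:
    for word in text.split():
        if word not in vocabulary:
            vocabulary[word] = len(vocabulary)  # number words in order seen

def one_hot(word, vocab):
    """Word vector: column i is 1 iff the word has index i in the vocabulary."""
    vec = [0] * len(vocab)
    vec[vocab[word]] = 1
    return vec

feature = one_hot("phone", vocabulary)
```

Feature vectors for the information to be recommended and the target object would then be assembled from such word vectors.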
Step S402, mapping the first feature vector and the second feature vector respectively to obtain a first dimension-reducing vector with a third dimension and a second dimension-reducing vector with a fourth dimension.
Here, the mapping process is a first round of dimension reduction that reduces the high-dimensional first feature vector and second feature vector into the low-dimensional first dimension-reduced vector and second dimension-reduced vector, respectively; that is, the vector dimension of the first dimension-reduced vector is smaller than that of the first feature vector, and the vector dimension of the second dimension-reduced vector is smaller than that of the second feature vector. In the embodiment of the present application, the first dimension-reduced vector has a third dimension and the second dimension-reduced vector has a fourth dimension, where the third dimension is smaller than the first dimension and the fourth dimension is smaller than the second dimension.
In this embodiment of the present application, any mapping processing manner may be adopted to implement dimension reduction of the first feature vector and the second feature vector. Besides dimension reduction through vector mapping, dimension reduction may also be implemented by projecting the feature values of the first feature vector and the second feature vector into a low-dimensional vector space while minimizing a reconstruction error.
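One concrete mapping manner, in the spirit of the preset hash functions used later in steps S702 and S707, is hash-based bucketing of feature indices; the bucket count and hashing scheme below are illustrative assumptions:

```python
import numpy as np

# Hash-based dimension reduction: indices of a high-dimensional sparse feature
# vector are hashed into a smaller number of buckets; colliding indices share
# a bucket, which is the source of the mapping conflicts discussed later.
def hash_reduce(features, low_dim):
    """Map a high-dimensional vector to a `low_dim`-bucket vector by hashing."""
    reduced = np.zeros(low_dim)
    for idx, value in enumerate(features):
        reduced[hash(idx) % low_dim] += value  # bucket chosen by hashed index
    return reduced

high = np.zeros(1000)
high[[3, 17, 256]] = 1.0           # sparse first feature vector (first dimension 1000)
low = hash_reduce(high, 16)        # first dimension-reduced vector (third dimension 16)
```

The third dimension (16 here) is smaller than the first dimension (1000), matching the requirement of step S402.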
Step S403, using vectors to be embedded, vector embedding processing is performed on the first dimension-reduced vector and the second dimension-reduced vector, correspondingly obtaining a first embedded vector, with a fifth dimension, of the information to be recommended and a second embedded vector, with a sixth dimension, of the target object.
Here, the vector embedding process dot-multiplies the first dimension-reduced vector and the second dimension-reduced vector each by a vector to be embedded, so that the first dimension-reduced vector and the second dimension-reduced vector undergo a further round of dimension reduction, correspondingly obtaining the first embedded vector and the second embedded vector. The dimension of the first embedded vector is smaller than that of the first dimension-reduced vector, and the dimension of the second embedded vector is smaller than that of the second dimension-reduced vector.
In some embodiments, the vector to be embedded may be a vector learned by continuously training a specific information recommendation model; that is, the vector to be embedded is pre-trained, and the vector embedding processing is performed on the first dimension-reduced vector and the second dimension-reduced vector through pre-trained vectors. For example, a first vector to be embedded, used for vector embedding processing of the first dimension-reduced vector, may be trained in advance and dot-multiplied with the first dimension-reduced vector to implement the vector embedding processing; likewise, a second vector to be embedded, used for vector embedding processing of the second dimension-reduced vector, may be trained in advance and dot-multiplied with the second dimension-reduced vector. The training processes of the first vector to be embedded and the second vector to be embedded are the same, and their values may be the same or different.
In some embodiments, the vector to be embedded may be obtained by training an information recommendation model: sample data (e.g., sample recommendation information and sample objects) is input into the information recommendation model, which determines and outputs a sample matching degree between the sample recommendation information and the sample objects; then, a preset loss model determines the difference between the sample matching degree and the real matching degree to obtain a loss result; finally, the parameters in the information recommendation model (including but not limited to the vector to be embedded, and parameters such as the first weight and the second weight used for weighted summation during vector embedding) are optimized according to the loss result, until the optimized model can accurately determine the matching degree between the sample recommendation information and the sample object. For ease of understanding, the training process of the information recommendation model will be described in detail below.
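A toy version of this loss-driven training can be sketched as follows; the model form (matching degree as a weighted inner product of the two sample vectors), the squared-error loss, and all values are illustrative assumptions rather than the patent's exact architecture:

```python
import numpy as np

# Toy training loop: optimize a learnable parameter (standing in for the
# vector to be embedded / weights) so the predicted sample matching degree
# approaches the real matching degree under a squared-error loss.
sample_info = np.array([0.5, -1.0, 0.2, 0.8])  # sample recommendation information
sample_obj = np.array([1.0, 0.3, -0.4, 0.6])   # sample object
true_match = 1.0                               # real matching degree (label)

features = sample_info * sample_obj  # joint feature (elementwise product)
weight = np.zeros(4)                 # learnable parameter
lr = 0.1

for _ in range(200):
    pred = float(np.dot(weight, features))         # sample matching degree
    grad = 2.0 * (pred - true_match) * features    # gradient of squared loss
    weight -= lr * grad                            # optimize by the loss result

final_loss = (float(np.dot(weight, features)) - true_match) ** 2
```

In the patent's setting the optimized parameters would include the vectors to be embedded and the first and second weights; here a single weight vector is trained for brevity.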
In this embodiment of the present application, the dimension reduction processing in step S402 realizes the primary dimension reduction of the first feature vector and the second feature vector, and the vector embedding processing in step S403 realizes the secondary dimension reduction; that is, in this embodiment, dimension reduction is performed twice to obtain the first embedded vector and the second embedded vector.
Step S404, determining the matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector.
Here, the inner product of the first embedded vector and the second embedded vector is calculated by multiplying the two vectors, thereby obtaining a distance between them, and the matching degree between the information to be recommended and the target object is determined according to this distance. The larger the distance between the first embedded vector and the second embedded vector, the lower the matching degree between the information to be recommended and the target object; the smaller the distance, the higher the matching degree.
Step S405, recommending the information to be recommended to the target object when the matching degree is greater than the threshold value.
Here, when the matching degree is greater than the threshold, the correlation between the information to be recommended and the target object is high: the target object has a high interest in, and thus a high probability of interacting with, the information to be recommended. The information to be recommended is therefore recommended to the target object.
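The inner-product computation of step S404 and the threshold decision of step S405 can be sketched as follows; the embedded vectors and the threshold value are illustrative assumptions:

```python
import numpy as np

# Step S404: matching degree from the inner product of the two embedded
# vectors; step S405: recommend when the matching degree exceeds a threshold.
first_embedded = np.array([0.2, 0.9, -0.1])   # embedded vector of the info
second_embedded = np.array([0.3, 0.8, -0.2])  # embedded vector of the target object

matching_degree = float(np.dot(first_embedded, second_embedded))

threshold = 0.5
recommend = matching_degree > threshold  # True -> recommend to the target object
```

Because the two vectors point in similar directions here, the inner product is large and the recommendation fires.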
According to the information recommendation method provided by the embodiment of the application, dimension reduction processing and vector embedding processing are sequentially performed on the first feature vector of the information to be recommended and on the second feature vector of the target object, so that dimension reduction is performed twice and the first embedded vector of the information to be recommended and the second embedded vector of the target object are correspondingly obtained; the matching degree between the information to be recommended and the target object is then determined from the two embedded vectors, realizing recommendation of the information. In this way, two rounds of dimension reduction map high-dimensional features to a low-dimensional space, and vectors to be embedded are learned only for the features of the low-dimensional space, so the number of model parameters and the storage cost of feature data are reduced without reducing the performance of the information recommendation model.
In some embodiments, the information recommendation system includes a server and a plurality of terminals, and may determine, from among the plurality of terminals, a terminal with the highest matching degree with the information to be recommended as a target terminal (i.e., a target object), and recommend the information to be recommended to the target terminal. Here, an application scenario of the embodiment of the present application is described by taking an example in which the information recommendation system includes a server and two terminals (a first terminal and a second terminal). Fig. 5 is an optional flowchart of the information recommendation method provided in the embodiment of the present application, and as shown in fig. 5, the method includes the following steps:
step S501, the server extracts the features of the information to be recommended to obtain a first feature vector of the information to be recommended.
Step S502, the server acquires the terminal data of the first terminal.
Here, the terminal data is interactive data of a user of the first terminal on the first terminal, and in this embodiment of the application, the terminal data of the first terminal in a preset historical time period may be acquired.
Step S503, the server performs feature extraction on the terminal data of the first terminal to obtain a feature vector of the first terminal.
Step S504, the server acquires the terminal data of the second terminal.
And step S505, the server extracts the characteristics of the terminal data of the second terminal to obtain the characteristic vector of the second terminal.
The second terminal is a terminal different from the first terminal, and the second terminal corresponds to a different user from the first terminal, so that the terminal data of the first terminal is different from the terminal data of the second terminal, and correspondingly, after the feature extraction is performed, the obtained feature vector of the first terminal is different from the feature vector of the second terminal.
It should be noted that step S501, step S502, and step S504 do not have a strict sequence, and may be performed simultaneously or in any sequence. The three feature extraction processes of feature extraction on the information to be recommended in step S501, feature extraction on the terminal data of the first terminal in step S503, and feature extraction on the terminal data of the second terminal in step S505 may be implemented by the same server or different servers. For example, if the first terminal is a mobile phone and the second terminal is a portable game device, the feature extraction is performed on the terminal data of the first terminal, which can be implemented by an application server installed on the first terminal, so as to obtain a feature vector of the first terminal; extracting the characteristics of the terminal data of the second terminal, which can be realized by a game equipment server, to obtain the characteristic vector of the second terminal; extracting features of the information to be recommended, wherein the feature extraction can be realized by an application server corresponding to the information to be recommended to obtain a first feature vector of the information to be recommended; then, the server of the information recommendation system obtains the feature vector of the first terminal sent by the application server, the feature vector of the second terminal sent by the game device server, and the first feature vector of the information to be recommended sent by the application server.
Step S506, the server performs dimension reduction processing on the first feature vector, the feature vector of the first terminal, and the feature vector of the second terminal, respectively, to obtain a first dimension reduction vector, a dimension reduction vector of the first terminal, and a dimension reduction vector of the second terminal, correspondingly.
Here, the dimension reduction processing on the first feature vector, the feature vector of the first terminal, and the feature vector of the second terminal may be implemented by any dimension reduction processing method.
Step S507, the server performs vector embedding processing on the first dimension reduction vector, the dimension reduction vector of the first terminal, and the dimension reduction vector of the second terminal, respectively, to obtain a first embedded vector of the information to be recommended, an embedded vector of the first terminal, and an embedded vector of the second terminal correspondingly.
The vector embedding processing can be implemented by adopting a preset vector to be embedded, and the first embedding vector of the information to be recommended, the embedding vector of the first terminal and the embedding vector of the second terminal are correspondingly obtained by point-multiplying the first dimension-reducing vector, the dimension-reducing vector of the first terminal and the dimension-reducing vector of the second terminal by different vectors to be embedded.
Step S508, the server determines a first matching degree between the information to be recommended and the first terminal according to the first embedded vector of the information to be recommended and the embedded vector of the first terminal.
In step S509, the server determines a second matching degree between the information to be recommended and the second terminal according to the first embedded vector of the information to be recommended and the embedded vector of the second terminal.
Step S510, the server determines whether the first matching degree is greater than the second matching degree. If so, step S511 is executed; otherwise, step S512 is executed.
In step S511, the server determines the first terminal as the target terminal.
Here, if the first matching degree is greater than the second matching degree, it indicates that the user of the first terminal has a higher interest in the information to be recommended, and therefore, the information to be recommended is recommended to the first terminal.
And step S512, the server determines the second terminal as a target terminal.
Here, if the first matching degree is smaller than the second matching degree, it indicates that the user of the second terminal has a higher interest in the information to be recommended, and therefore, the information to be recommended is recommended to the second terminal. In some embodiments, when the first matching degree is equal to the second matching degree, the first terminal and the second terminal are simultaneously determined as target terminals, and the information to be recommended is simultaneously recommended to the first terminal and the second terminal.
Step S513, the server recommends the information to be recommended to the determined target terminal.
In other embodiments, the relationship between the first matching degree and the second matching degree and a preset threshold may be compared, and when any one or more of the first matching degree and the second matching degree is greater than the preset threshold, the corresponding one or more terminals may be determined as the target terminal.
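The selection logic of steps S508 to S512, including the threshold-based variant just described, can be sketched as follows; all vectors and the threshold are illustrative assumptions:

```python
import numpy as np

# Compute a matching degree for each terminal from the inner product of the
# embedded vectors, then pick the target terminal(s).
info_embedded = np.array([0.4, 0.7, 0.1])  # first embedded vector of the info

terminal_embeddings = {
    "first_terminal": np.array([0.5, 0.6, 0.0]),
    "second_terminal": np.array([-0.2, 0.1, 0.9]),
}

matching = {name: float(np.dot(info_embedded, vec))
            for name, vec in terminal_embeddings.items()}

# steps S510-S512: the terminal with the highest matching degree wins
target = max(matching, key=matching.get)

# threshold variant: every terminal above a preset threshold is a target
threshold = 0.3
targets = [name for name, degree in matching.items() if degree > threshold]
```

The same dictionary-based loop extends naturally to the many-terminal setting of the information recommendation system in fig. 6.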
In an information recommendation system formed by a plurality of terminals, dimension reduction processing and vector embedding processing are performed in sequence on the first feature vector of the information to be recommended and on the feature vector of each terminal, correspondingly obtaining the first embedded vector of the information to be recommended and an embedded vector of each terminal. The matching degree between the information to be recommended and each terminal is determined from the first embedded vector and each terminal's embedded vector, and the one or more terminals most suitable for receiving the information to be recommended are then selected from the plurality of terminals according to the determined matching degrees, realizing recommendation of the information. In this way, the high-dimensional features of the information to be recommended and of each terminal are mapped to a low-dimensional space, and embedded vectors are learned only for the features of that space, so the number of model parameters and the storage cost of feature data are reduced without reducing the performance of the information recommendation model. Moreover, according to the matching degree between the information to be recommended and each terminal, the terminal most suitable for receiving the information can be determined, realizing optimal recommendation of the information.
In some embodiments, an information base to be recommended and a terminal information base (or a user information base) are provided, information to be recommended applicable to each terminal in the terminal information base can be matched from the information base to be recommended, and the matched information to be recommended is recommended to each terminal.
Fig. 6 is an optional structural schematic diagram of an information recommendation system provided in an embodiment of the present application, as shown in fig. 6, an information recommendation system 60 includes a server 61 and a plurality of terminals (for example, a terminal 62, a terminal 63, and a terminal 64 in fig. 6), the server 61 provides an information library 65 to be recommended and a terminal information library 66, where the information library 65 to be recommended includes at least one piece of information 651 to be recommended, and each piece of information 651 to be recommended in the information library 65 to be recommended may be recommended to a terminal corresponding to any piece of terminal information 661 in the terminal information library 66; the terminal information base 66 includes at least one piece of terminal information 661, where each piece of terminal information 661 corresponds to a terminal, and the terminal information 661 can be an identifier of the terminal.
Based on the information recommendation system in fig. 6, for each information 651 to be recommended in the information library 65 to be recommended, the information recommendation method in the embodiment of the present application may be adopted to determine the matching degree between the information 651 to be recommended and the terminal corresponding to each terminal information 661 in the terminal information library 66, and then determine the terminal with the highest matching degree or the terminal whose matching degree meets the preset condition as the recommendation terminal of the information 651 to be recommended, and recommend the information 651 to be recommended to the determined recommendation terminal. It should be noted that the same information 651 to be recommended may be recommended to terminals corresponding to at least two pieces of terminal information in the terminal information base, and different information 651 to be recommended may also be recommended to terminals corresponding to the same piece of terminal information in the terminal information base.
Fig. 7 is an optional flowchart of the information recommendation method according to the embodiment of the present application, and as shown in fig. 7, the method includes the following steps:
Step S701, feature extraction is performed on the information to be recommended to correspondingly obtain a first feature vector, where the first feature vector has a first dimension.
Step S702, a first preset Hash function is adopted to carry out mapping processing on the first characteristic vector to obtain a first dimension reduction vector with a third dimension; wherein the third dimension is smaller than the first dimension.
Step S703, a first weight of the information to be recommended and a first vector to be embedded of the information to be recommended are obtained.
Here, the first vector to be embedded is a preset vector used to perform vector embedding processing on the first dimension-reduced vector of the information to be recommended. Its dimension can be determined according to the embedding requirement of the vector embedding processing, i.e., according to how many dimensions the resulting first embedded vector should have; therefore, the dimension of the first vector to be embedded can be determined from the dimension of the finally obtained first embedded vector. Both the first weight and the first vector to be embedded are learnable parameters, and their optimal values can be obtained by training the model.
In some embodiments, the first weight may also be a weight value obtained by continuously training and learning a specific information recommendation model, that is, the first weight is a pre-trained weight value, and the vector embedding process is subjected to weighted summation by the pre-trained first weight.
Step S704, perform a point multiplication on the component of each dimension in the first dimension-reduced vector, respectively with the first weight and the first to-be-embedded vector, to obtain a first point multiplication result corresponding to the component of each dimension.
Here, the component of each dimension in the first dimension-reduced vector refers to an element value of each dimension in the first dimension-reduced vector, the first weight is a weight preset in the information recommendation model, the first weight may be an n-dimensional vector, and the dimension of the first weight may be the same as or different from that of the first dimension-reduced vector.
Step S705, summing the first point multiplication results corresponding to the at least two components to obtain a first embedded vector with a fifth dimension of the information to be recommended.
Here, after calculating the first point-multiplication result obtained by point-multiplying the component of each dimension with the first weight and the first vector to be embedded, the first point-multiplication results corresponding to all the components may be summed, or invalid first point-multiplication results may be filtered out before summing, to obtain the first embedded vector of the information to be recommended.
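Steps S703 to S705 can be sketched under one possible reading of the text, in which each dimension of the first dimension-reduced vector has a weight and a row of a to-be-embedded table; the table layout, shapes, and values are illustrative assumptions, not the patent's exact parameterization:

```python
import numpy as np

# Step S704: point-multiply each dimension's component with the first weight
# and the first to-be-embedded vector; step S705: sum the per-component
# results into the fifth-dimensional first embedded vector.
third_dim, fifth_dim = 4, 2

first_reduced = np.array([1.0, 0.0, 2.0, 0.5])  # first dimension-reduced vector
first_weight = np.array([0.5, 1.0, 0.25, 1.0])  # first weight (learnable)
to_be_embedded = np.arange(8.0).reshape(third_dim, fifth_dim)  # learnable table

point_results = first_reduced[:, None] * first_weight[:, None] * to_be_embedded
first_embedded = point_results.sum(axis=0)      # fifth-dimensional result
```

Note how the zero component contributes nothing to the sum, mirroring the filtering of invalid point-multiplication results mentioned above.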
In the embodiment of the application, the optimal first weight is obtained through learning, the importance of each feature in the first dimension reduction vector can be obtained through learning, the interpretability of a model result is enhanced, and meanwhile the influence of conflict caused by high-dimension mapping to low-dimension mapping can be reduced. By learning to obtain the optimal first to-be-embedded vector, the non-important features in the first dimension-reduced vector can be removed during vector embedding processing, so that important information in the first dimension-reduced vector is reserved. The vector embedding processing in the embodiment of the present application aims to make a first embedded vector of information to be recommended and a second embedded vector of a target object closer to each other, that is, make features corresponding to elements in the first embedded vector and the second embedded vector more approximate and correlated with each other.
In some embodiments, when the components of each dimension in the first dimension-reduced vector are point-multiplied with the first vector to be embedded, the first dimension-reduced vector may be rotated, reduced, and enlarged; that is, the rotation, reduction, and enlargement of the first dimension-reduced vector are realized through the first vector to be embedded.
In some embodiments, the first vector to be embedded may be further updated, where the updating method includes:
Step S71, a similarity between the matching degree and a first preset matching degree is determined.
Here, the matching degree calculated by the information recommendation model in the last recommendation process may be obtained, and the similarity between the matching degree and a first preset matching degree may be calculated, where the first preset matching degree may be a true matching degree between the information to be recommended and the target object in the last recommendation process. In the embodiment of the present application, the similarity between the matching degree and the first preset matching degree may be calculated by using a preset loss function.
In step S72, a first update parameter of the first vector to be embedded is determined according to the similarity.
Here, if the similarity between the matching degree and the first preset matching degree is high, the first update parameter of the first vector to be embedded is small; if the similarity between the matching degree and the first preset matching degree is low, the first updating parameter of the first vector to be embedded is large.
Step S73, update the first to-be-embedded vector with the first update parameter to obtain an updated first to-be-embedded vector.
Here, the first to-be-embedded vector is corrected by the first update parameter, so that the updated first to-be-embedded vector can more accurately perform vector embedding processing on the dimension-reduced vector, that is, the information recommendation model with the updated first to-be-embedded vector can calculate more accurate matching degree between the information to be recommended and the target object.
After the updated first vector to be embedded is obtained, correspondingly, steps S703 to S705 may also be implemented by: Step S74, performing vector embedding processing on the first dimension-reduced vector by using the updated first vector to be embedded.
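The update rule of steps S71 to S73 can be sketched as below. The patent only states that a lower similarity yields a larger first update parameter; the learning rate, the error form, and the placeholder update direction are all assumptions (in a real model the direction would come from the gradient of the loss with respect to the vector).

```python
import numpy as np

def update_to_be_embedded(v, match, preset_match, lr=0.1):
    # Steps S71-S72: the lower the similarity between the matching degree
    # and the preset matching degree (i.e., the larger the error), the
    # larger the first update parameter.
    error = preset_match - match
    update = lr * error * np.ones_like(v)   # placeholder update direction
    return v + update                        # step S73: corrected vector

v = np.array([0.5, -0.2, 0.1])
small_gap = update_to_be_embedded(v, match=0.85, preset_match=0.9)
large_gap = update_to_be_embedded(v, match=0.40, preset_match=0.9)
```

A larger gap between predicted and preset matching degree moves the vector further, matching the behavior described in step S72.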
Step S706, feature extraction is carried out on the target object, and a second feature vector is correspondingly obtained.
Step S707, a second preset hash function is adopted to perform mapping processing on the second feature vector to obtain a second dimensionality reduction vector with a fourth dimensionality; wherein the fourth dimension is less than the second dimension.
Step S708, obtain a second weight of the target object and a second to-be-embedded vector of the target object.
Here, the learning process of the second weight and the second vector to be embedded is the same as the learning process of the first weight and the first vector to be embedded. The second weight may also be a weight value learned by continuously training a specific information recommendation model, that is, the second weight is a pre-trained weight value, and the weighted summation in the vector embedding processing is performed with this pre-trained second weight.
Step S709, perform dot multiplication on the component of each dimension in the second dimension-reduced vector, with the second weight and the second to-be-embedded vector, respectively, to obtain a second dot multiplication result corresponding to the component of each dimension.
Step S710, summing the second dot product results corresponding to the at least two components to obtain a second embedded vector with a sixth dimension of the target object.
It should be noted that steps S706 to S710 are similar to the implementation process of steps S701 to S705, and the difference is that steps S706 to S710 process the target object to obtain the second embedded vector of the target object, steps S701 to S705 process the information to be recommended to obtain the first embedded vector of the information to be recommended, and the specific processing procedure may refer to the explanation in steps S701 to S705.
In some embodiments, the second vector to be embedded may be further updated, wherein the updating method includes:
Step S75, determining the similarity between the matching degree and a second preset matching degree.
Step S76, determining a second update parameter of the second vector to be embedded according to the similarity.
Step S77, updating the second vector to be embedded by adopting the second update parameter to obtain the updated second vector to be embedded.
After the updated second vector to be embedded is obtained, correspondingly, steps S708 to S710 may also be implemented by: Step S78, performing vector embedding processing on the second dimension-reduced vector by adopting the updated second vector to be embedded.
It should be noted that steps S75 to S77 are similar to the implementation of steps S71 to S73, except that steps S75 to S77 are processes for updating the second vector to be embedded of the target object, and steps S71 to S73 are processes for updating the first vector to be embedded of the information to be recommended, and the specific updating process can refer to the explanations of steps S71 to S73.
Step S711 determines a matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector.
Step S712, recommending the information to be recommended to the target object when the matching degree is greater than the threshold value.
Fig. 8 is an optional flowchart of the information recommendation method according to the embodiment of the present application, and as shown in fig. 8, the step S404 may be implemented by:
step S801 determines a first mapping function corresponding to the first embedding vector and a second mapping function corresponding to the second embedding vector in the recommendation system model. Step S802, the first embedded vector is mapped through a first mapping function to obtain a first mapping vector. Step S803, a second mapping function is used to map the second embedded vector to obtain a second mapping vector.
Here, the first mapping function and the second mapping function are functions in a recommendation system model, which may be a model based on a deep neural network, and therefore, in a hidden layer of the deep neural network, the first embedding vector and the second embedding vector are mapped by the first mapping function and the second mapping function, respectively, so that the first mapping vector and the second mapping vector obtained after mapping satisfy the calculation requirements of each hidden layer in the deep neural network.
Step S804, determining the matching degree between the information to be recommended and the target object according to the first mapping vector and the second mapping vector.
In some embodiments, step S804 may be implemented by:
step S8041, an inner product between the first mapping vector and the second mapping vector is determined. Step S8042, determining a matching degree between the information to be recommended and the target object according to a ratio between the inner product and a preset threshold.
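Steps S8041 and S8042 can be sketched as follows; the preset threshold value used here is an arbitrary placeholder, since the patent does not fix one.

```python
import numpy as np

def matching_degree(map_vec_a, map_vec_b, threshold=10.0):
    # Step S8041: inner product between the first and second mapping vectors.
    inner = float(np.dot(map_vec_a, map_vec_b))
    # Step S8042: the matching degree is the ratio of the inner product
    # to a preset threshold.
    return inner / threshold

score = matching_degree(np.array([1.0, 2.0, 3.0]), np.array([0.5, 1.0, 1.5]))
# inner product = 0.5 + 2.0 + 4.5 = 7.0, so score = 0.7
```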
In some embodiments, the information recommendation method may also be implemented by using an information recommendation model, that is, the information recommendation model is used to determine the matching degree between the information to be recommended and the target object. The information recommendation model is a model used for training to obtain vectors to be embedded (including a first vector to be embedded and a second vector to be embedded), a first weight and a second weight in the above embodiment.
Fig. 9A is an optional structural schematic diagram of the information recommendation model provided in the embodiment of the present application, and as shown in fig. 9A, the information recommendation model 90 includes an input layer 901, a feature extraction layer 902, a mapper 903, a recommender 904, and an output layer 905, where the input layer 901 is configured to input sample recommendation information and a sample object, the feature extraction layer 902 is configured to perform feature extraction on the sample recommendation information and the sample object, respectively, to obtain a first sample feature vector and a second sample feature vector, and input the obtained first sample feature vector and second sample feature vector to the mapper 903; the mapper 903 is configured to sequentially perform mapping processing and vector embedding processing on the first sample feature vector and the second sample feature vector, to obtain a first sample embedding vector of the sample recommendation information and a second sample embedding vector of the sample object correspondingly, and input the first sample embedding vector and the second sample embedding vector to the recommender 904; the recommender 904 is configured to determine a sample matching degree between the sample recommendation information and the sample object according to the first sample embedding vector and the second sample embedding vector, and output the sample matching degree through the output layer 905.
In the information recommendation model shown in fig. 9A, the feature extraction layer 902 can simultaneously perform feature extraction on the sample recommendation information and the sample object respectively; the mapper 903 can simultaneously perform mapping processing and vector embedding processing on the first sample feature vector and the second sample feature vector in sequence. That is, the information recommendation model shown in fig. 9A includes a single feature extraction layer and a single mapper.
Fig. 9B is an alternative structural diagram of the information recommendation model provided in the embodiment of the present application, and as shown in fig. 9B, the information recommendation model 91 includes a first input layer 911, a first feature extraction layer 912, a first mapper 913, a second input layer 914, a second feature extraction layer 915, a second mapper 916, a recommender 917, and an output layer 918, where the first input layer 911 is used for inputting sample recommendation information; the first feature extraction layer 912 is configured to perform feature extraction on the sample recommendation information, obtain a first sample feature vector correspondingly, and input the obtained first sample feature vector to the first mapper 913; the first mapper 913 is configured to perform mapping processing and vector embedding processing on the first sample feature vector in sequence to obtain a first sample embedding vector of the sample recommendation information correspondingly, and input the first sample embedding vector to the recommender 917; the second input layer 914 is used for inputting sample objects; the second feature extraction layer 915 is configured to perform feature extraction on the sample object, correspondingly obtain a second sample feature vector, and input the obtained second sample feature vector to the second mapper 916; the second mapper 916 is configured to sequentially perform mapping processing and vector embedding processing on the second sample feature vector, to obtain a second sample embedding vector of the sample object correspondingly, and input the second sample embedding vector to the recommender 917; the recommender 917 is configured to determine a sample matching degree between the sample recommendation information and the sample object according to the first sample embedding vector and the second sample embedding vector, and output the sample matching degree through the output layer 918.
In the information recommendation model shown in fig. 9B, sample recommendation information and sample objects are input through two input layers, respectively; respectively extracting the characteristics of the sample recommendation information and the sample object through two characteristic extraction layers; and respectively carrying out mapping processing and vector embedding processing on the first sample feature vector and the second sample feature vector through two mappers. That is, the mapping process and the vector embedding process for the sample recommendation information and the sample object in the information recommendation model shown in fig. 9B are performed in different mappers.
Based on the information recommendation model in fig. 9A, an information recommendation model is further provided in the embodiment of the present application, and fig. 9C is an optional structural schematic diagram of the information recommendation model provided in the embodiment of the present application, as shown in fig. 9C, the information recommendation model 92 includes an input layer 901, a feature extraction layer 902, a mapper 903, a recommender 904, and an output layer 905, where the recommender 904 includes a recommendation system model 921 and a prediction model 922.
The recommendation system model 921 is configured to map the first sample embedding vector and the second sample embedding vector at the same time, and obtain a first sample mapping vector and a second sample mapping vector correspondingly; the prediction model 922 is used to predict the sample matching degree between the sample recommendation information and the sample object. In the information recommendation model shown in fig. 9C, the first sample embedding vector and the second sample embedding vector are mapped simultaneously by one recommendation system model 921.
Based on the information recommendation model in fig. 9B, an information recommendation model is further provided in the embodiment of the present application, and fig. 9D is an optional structural schematic diagram of the information recommendation model provided in the embodiment of the present application, as shown in fig. 9D, the information recommendation model 93 includes a first input layer 911, a first feature extraction layer 912, a first mapper 913, a second input layer 914, a second feature extraction layer 915, a second mapper 916, a recommender 917, and an output layer 918, where the recommender 917 includes a first recommendation system model 931, a second recommendation system model 932, and a prediction model 933.
The first recommendation system model 931 is configured to map the first sample embedding vector, and correspondingly obtain a first sample mapping vector; the second recommendation system model 932 is configured to map the second sample embedding vector to obtain a second sample mapping vector; the prediction model 933 is used to predict the sample matching degree between the sample recommendation information and the sample object. In the information recommendation model shown in fig. 9D, the first sample embedding vector and the second sample embedding vector are mapped by two recommendation system models, respectively.
Based on the information recommendation model shown in fig. 9A, an embodiment of the present application provides a training method for an information recommendation model, through the training method of the embodiment of the present application, a model finally used for information recommendation can be obtained through training, and parameters in the final information recommendation model obtained through training are parameters used for calculating a matching degree between information to be recommended and a target object in any one of the embodiments, where the parameters include, but are not limited to, vectors to be embedded (including a first vector to be embedded and a second vector to be embedded), a first weight, and a second weight, which are mentioned in any one of the embodiments.
Fig. 10 is a schematic flow chart of an optional method for training an information recommendation model provided in the embodiment of the present application, where the method includes the following steps:
step S101, inputting the sample recommendation information and the sample object into an information recommendation model.
Here, the sample recommendation information and the sample object are input as sample data into the information recommendation model.
Step S102, respectively performing feature extraction on the sample recommendation information and the sample object through a feature extraction layer in the information recommendation model, correspondingly obtaining a first sample feature vector and a second sample feature vector.
And step S103, respectively carrying out mapping processing and vector embedding processing on the first sample characteristic vector and the second sample characteristic vector through a mapper in the information recommendation model to correspondingly obtain a first sample embedding vector of the sample recommendation information and a second sample embedding vector of the sample object.
And step S104, determining the sample matching degree between the sample recommendation information and the sample object according to the first sample embedding vector and the second sample embedding vector through a recommender in the information recommendation model.
And step S105, inputting the sample matching degree into a preset loss model to obtain a loss result.
Here, the preset loss model is configured to compare the sample matching degree with a preset matching degree to obtain a loss result, where the preset matching degree may be a true matching degree between sample recommendation information preset by a user and a sample object.
The preset loss model comprises a loss function, the similarity between the sample matching degree and the preset matching degree can be calculated through the loss function, and the loss result is determined according to the similarity.
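A minimal sketch of the preset loss model, assuming a squared-error loss (the patent does not fix a particular loss function): a smaller loss result corresponds to a higher similarity between the sample matching degree and the preset matching degree.

```python
def loss_result(sample_match, preset_match):
    # Assumed squared-error loss: zero when the sample matching degree
    # equals the preset (true) matching degree, growing with the gap.
    return (sample_match - preset_match) ** 2

assert loss_result(0.9, 0.9) == 0.0                       # identical -> no loss
assert loss_result(0.2, 0.9) > loss_result(0.8, 0.9)      # larger gap -> larger loss
```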
Step S106, correcting the parameters in the feature extraction layer, the mapper, and the recommender according to the loss result to obtain the trained information recommendation model.
Here, when the similarity is greater than the preset similarity threshold, the loss result indicates that at least one of the feature extraction layer, the mapper, and the recommender in the current information recommendation model cannot accurately perform feature extraction on the sample recommendation information and the sample object, or cannot accurately perform mapping processing and vector embedding processing on the first sample feature vector and the second sample feature vector, or cannot accurately determine the sample matching degree according to the first sample embedding vector and the second sample embedding vector. Therefore, the current information recommendation model needs to be modified. Then, at least one of the feature extraction layer, the mapper and the recommender may be modified according to the similarity, and when the similarity between the sample matching degree output by the information recommendation model and the preset matching degree satisfies the preset condition, the corresponding information recommendation model is determined as the trained information recommendation model.
In some embodiments, the parameters in the mapper (e.g., the vector embedding layer in the mapper) include at least: the third weight of the sample recommendation information, the fourth weight of the sample object, the third vector to be embedded of the sample recommendation information and the fourth vector to be embedded of the sample object; correspondingly, when the parameters in the mapper are corrected, at least one of the following parameters can be corrected according to the loss result: the third weight, the fourth weight, the third vector to be embedded, and the fourth vector to be embedded (it should be noted that the third vector to be embedded and the fourth vector to be embedded obtained by the final training are the vectors to be embedded in the above embodiments).
According to the training method of the information recommendation model, sample data are input into the information recommendation model, are sequentially processed through the feature extraction layer, the mapper and the recommender in the information recommendation model, sample matching degree is obtained, the sample matching degree is input into the preset loss model, and a loss result is obtained. Therefore, at least one of the feature extraction layer, the mapper and the recommender can be modified according to the loss result, the obtained information recommendation model can accurately determine the matching degree according to the information to be recommended and the target object, and accurate information recommendation can be performed on the user.
Based on the information recommendation model shown in fig. 9B, the mapper includes a first mapper and a second mapper, and the first mapper and the second mapper have the same structure, and correspondingly, step S103 may be implemented by:
and step S1031, sequentially performing mapping processing and vector embedding processing on the first sample feature vector through a first mapper to correspondingly obtain a first sample embedding vector. And step S1032, sequentially performing mapping processing and vector embedding processing on the second sample feature vector through a second mapper to obtain a second sample embedded vector correspondingly.
In some embodiments, the mapper includes at least an input layer, a mapping processing layer, a vector embedding layer, and an output layer (not shown in fig. 9A and 9B), and correspondingly, step S103 may be implemented by:
in step S1033, a first sample feature vector and a second sample feature vector are input through the input layer. Step S1034, respectively mapping the first sample feature vector and the second sample feature vector through the mapping processing layer, and correspondingly obtaining a first sample mapping vector and a second sample mapping vector. Step S1035, performing vector embedding processing on the first sample mapping vector and the second sample mapping vector respectively through the vector embedding layer, and obtaining a first sample embedding vector and a second sample embedding vector correspondingly.
Based on the information recommendation model shown in fig. 9C, the recommender at least includes: recommending a system model and a prediction model; correspondingly, step S104 may be implemented by:
step S1041, mapping the first sample embedding vector and the second sample embedding vector respectively through the recommendation system model, and correspondingly obtaining a first sample mapping vector and a second sample mapping vector. Step S1042, the first sample mapping vector and the second sample mapping vector are input to a prediction model. And step S1043, predicting the sample matching degree between the sample recommendation information and the sample object through the prediction model.
Here, the prediction model includes a prediction function, the prediction function is a function for calculating an inner product between the first sample mapping vector and the second sample mapping vector, and for calculating a ratio of the calculated inner product to a preset threshold, and the ratio calculated by the prediction function is determined as a sample matching degree between the sample recommendation information and the sample object.
Based on the information recommendation model shown in fig. 9D, the recommender includes a first recommendation system model, a second recommendation system model and a prediction model; correspondingly, step S104 may be implemented by:
and step S1044, mapping the first sample embedding vector through the first recommendation system model to correspondingly obtain a first sample mapping vector. And step S1045, mapping the second sample embedding vector through a second recommendation system model to correspondingly obtain a second sample mapping vector. Step S1046, inputting the first sample mapping vector and the second sample mapping vector to the prediction model. Step S1047, predicting a sample matching degree between the sample recommendation information and the sample object through the prediction model.
The training method for the information recommendation model provided in the embodiment of the present application provides information recommendation models with different structures and realizes training with the training method corresponding to each structure, so that a trained information recommendation model is finally obtained. In this way, different information recommendation models can be used to recommend information to the user, the requirements of different development environments can be met (that is, information recommendation models with different structures can be built), and more development options are provided for developers.
Next, an exemplary application of the embodiment of the present application in a practical application scenario will be described.
Advertisement triggering is an important link in an online advertisement delivery system and is used for advertisement recall, that is, a candidate advertisement set is retrieved from an advertisement library based on the user portrait and the context, after which a subsequent module computes and selects the best set of advertisements to expose (here, the advertisements in the advertisement library are the information to be recommended).
The main trigger strategies of current advertisement delivery systems include the following two. First, tag triggering: crowd targeting based on a tag system, in which the advertiser determines the target crowd by selecting crowd attribute tags mined by the system. Correspondingly, the advertisement delivery system recalls, through the tag set carried by the current user, all advertisements of the advertisers who purchased the targeted tags as the candidate advertisement set. Second, intelligent triggering: using the first-party effect data accumulated by the advertiser or the historical delivery data of the platform, the matching degree among the scene, the user, and the advertiser's delivery target is calculated in a refined manner, and the target delivery crowd is determined accordingly. From the perspective of the advertisement delivery system, the current user is represented as a high-dimensional vector, and the advertisement set with higher similarity in the advertisement library is recalled as the advertisement candidates through vector retrieval. Intelligent triggering is itself supervised learning: training samples are constructed based on the optimization goals set by the advertiser or "understood" by the system (such as conversion rate and click-through rate). The implicit-feedback recommendation system based on exposure-data-enhanced negative sampling provided by this method is suitable for intelligent trigger scenarios.
The method of the embodiment of the application can be realized based on the idea of feature hashing. The idea of feature hashing is to keep the expectation of the inner product of feature vectors unchanged, and the method is as follows: first, the signs of the elements of the original p-dimensional feature vector are randomly flipped; then, the sign-flipped elements are mapped into k feature buckets (k < p); finally, the sum of the elements in each bucket is used as an element of the resulting k-dimensional vector.
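Classic feature hashing as described above can be sketched as follows; the bucket count and random seed are arbitrary assumptions.

```python
import numpy as np

def feature_hash(x, k, seed=0):
    """Sign-flip each of the p elements, hash each coordinate into one of
    k buckets (k < p), and return the per-bucket sums as a k-dim vector."""
    p = len(x)
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=p)   # random sign flip per coordinate
    buckets = rng.integers(0, k, size=p)      # uniform hash: coordinate -> bucket
    out = np.zeros(k)
    np.add.at(out, buckets, signs * x)        # sum the signed elements in each bucket
    return out

x = np.arange(1.0, 11.0)     # p = 10 original features
h = feature_hash(x, k=4)
```

The random sign flip is what keeps the expected inner product of two hashed vectors equal to the inner product of the originals, since cross-terms from colliding coordinates cancel in expectation.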
The recommendation system with high storage efficiency provided by the embodiment of the application is mainly divided into two parts: a mapper and a recommender. The mapper is responsible for mapping the original high-dimensional feature vector into a low-dimensional feature vector and linearly mapping the low-dimensional feature vector into an embedded vector; the recommender accepts the mapper's embedded vector representation and computes the predicted "score" (i.e., matching degree) of the user (i.e., target object) with the item (i.e., information to be recommended).
Fig. 11 is an optional flowchart of the information recommendation method according to the embodiment of the present application, and as shown in fig. 11, the method includes the following steps:
step S111, inputting information (i, j) to be queried.
Wherein i represents user i; j represents item j; the information (i, j) to be queried refers to the matching degree between the user i to be calculated and the article j.
Step S112, reading the feature data of the user i and the item j (i.e., feature extraction), denoted x_i and x_j respectively; it is assumed that x_i and x_j both have dimension p.
Step S113, inputting x_i and x_j into a mapper, and performing dimension reduction processing and vector embedding processing on x_i and x_j through the mapper.
Here, the mapper processes x_i and x_j through the following steps:
In step S1131, the mapper randomly maps the p original features of x_i and the p original features of x_j into k feature buckets (k < p) through a uniform hash function, where the hash functions employed for user i and item j are different.
In step S1132, the mapper linearly transforms the low-dimensional features composed of the k feature buckets to obtain the embedded vectors e_i and e_j.
Step S114, inputting e_i and e_j into a recommender, and calculating a "score" ŷ_ij (i.e., the matching degree) through the recommender.
Here, the recommender calculates the "score" ŷ_ij through the following steps:
Step S1141, constructing the final embedded vectors z_i and z_j through the recommendation system model.
Step S1142, calculating the "score" ŷ_ij according to the embedded vectors z_i and z_j.
Step S115, judging whether the current prediction is an online prediction; if so, outputting ŷ_ij; otherwise, executing step S116.
Step S116, calculating the true "score" y_ij, and updating the model parameters by optimizing against the error.
Step S117, judging whether to continue training; if so, returning to step S111, and if not, exiting the process.
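Treating the mapper and recommender as black boxes, the Fig. 11 flow of steps S111 to S117 can be sketched as below; the function names and the toy stand-ins are assumptions for illustration only.

```python
def run(samples, mapper, recommender, update, online):
    """One pass over queried (user, item) pairs following the Fig. 11 flow."""
    preds = []
    for user_x, item_x, true_y in samples:         # S111-S112: read features
        e_i, e_j = mapper(user_x), mapper(item_x)  # S113: reduce + embed
        y_hat = recommender(e_i, e_j)              # S114: compute the "score"
        preds.append(y_hat)
        if not online:                             # S115-S116: offline -> update
            update(y_hat, true_y)
    return preds                                   # S117: loop ends with samples

# Toy stand-ins, just to exercise the control flow.
preds = run([([1.0], [2.0], 0.5)],
            mapper=lambda x: x,
            recommender=lambda a, b: a[0] * b[0],
            update=lambda y_hat, y: None,
            online=True)
```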
Fig. 12 is a structural diagram of a mapper provided in the embodiment of the present application, and a core idea of the mapper is based on the feature hash, but is greatly different from the feature hash, where the difference is one of important innovations in the embodiment of the present application. The implementation of the mapper is described below with reference to the input of the user i and shown in fig. 12, and the implementation of the item j is similar, in which the implementation includes the following steps:
step S121, inputting characteristics
Figure 380629DEST_PATH_IMAGE004
Using a uniformly hashed hash function
Figure 401674DEST_PATH_IMAGE013
The coordinates of the original p features are mapped into k feature buckets, respectively.
Step S122, learning the embedding vector for only k buckets, i.e.
Figure 525488DEST_PATH_IMAGE014
Wherein, in the step (A),
Figure 668893DEST_PATH_IMAGE015
is a linear mapping function, representing the feature embedding vector, k and r are positive integers. The traditional recommendation system model requires learning
Figure 143737DEST_PATH_IMAGE016
,p>k, so here the parameters that the model needs to learn can be successfully reduced.
Step S122, a learnable weight vector is maintained for the original p features
Figure 968474DEST_PATH_IMAGE017
Wherein
Figure 212373DEST_PATH_IMAGE018
It is shared by all users. By learning
Figure 136467DEST_PATH_IMAGE017
The importance of each original feature can be learned, and the interpretability of the model result is enhancedMeanwhile, the influence of conflict generated by mapping the high dimension to the low dimension can be reduced.
Step S124, the features x_i weighted by w are point-multiplied, dimension by dimension, with the corresponding feature embedding vectors in V, and the results are summed to obtain e_i, as shown in the following equation (1-1):

$$e_i = \sum_{t=1}^{p} w_t \, x_{i,t} \, V_{h(t)} \tag{1-1}$$

where w_t represents the component of the t-th dimension of the weight vector w; x_{i,t} represents the component of the t-th dimension of x_i; and V_{h(t)} represents the learned feature embedding vector of the bucket to which the t-th dimension is mapped by the hash function h. In Fig. 12, x_i^⊤ represents the transpose of x_i.
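As a concrete illustration, steps S121 to S124 of the mapper can be sketched as follows. This is a minimal sketch under assumed shapes and names (p, k, r, `embed`, and the random bucket assignment standing in for the uniform hash function h are all illustrative, not part of the original disclosure):

```python
import numpy as np

# Illustrative sizes: p original features, k hash buckets (k << p), embedding dim r.
p, k, r = 10_000, 256, 16
rng = np.random.default_rng(0)

h = rng.integers(0, k, size=p)            # stand-in for the uniform hash h: t -> bucket
V = rng.normal(scale=0.01, size=(k, r))   # learnable embeddings, one per bucket (k x r)
w = np.ones(p)                            # learnable per-feature weights, shared by all users

def embed(x):
    """e_i = sum_t w_t * x_t * V[h(t)], i.e. equation (1-1)."""
    return ((w * x)[:, None] * V[h]).sum(axis=0)   # shape (r,)

x_user = rng.random(p)                    # features of one user
e_user = embed(x_user)                    # r-dimensional embedding, here r = 16
```

Note that the learnable parameters shrink from p·r (one embedding per original feature) to k·r + p (one embedding per bucket plus the weight vector), which is the parameter reduction described in step S122.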
Fig. 13 is an architecture diagram of a recommender provided in an embodiment of the present application. As shown in Fig. 13, the recommender 13 includes a first recommendation system model 131, a second recommendation system model 132, and a prediction model 133, where the first recommendation system model 131 includes an input layer 1311, a hidden layer 1312, and an output layer 1313; the second recommendation system model 132 likewise includes an input layer 1321, a hidden layer 1322, and an output layer 1323. It should be noted that the recommendation system model used in the recommender is not limited; Fig. 13 is only an example, and the mapper can be combined with various recommendation system models.
Based on the architecture diagram of the recommender shown in FIG. 13, the method implemented by the recommender includes the following steps:
Step S131, the recommender receives the outputs e_i and e_j from the mapper.
Step S132, the recommendation system model constructs mappings from e_i and e_j to u_i and v_j. Since most models are based on deep neural networks, Fig. 13 depicts the recommendation system model in the general form of a neural network. In the embodiment of the present application, the mappings u_i and v_j can be constructed by the following formulas (1-2) and (1-3):

$$u_i = f(e_i) \tag{1-2}$$

$$v_j = g(e_j) \tag{1-3}$$

where f and g represent the functions of the first recommendation system model 131 and the second recommendation system model 132, respectively.
Step S133, according to u_i and v_j, the predicted "score" ŷ_ij is calculated by the following formula (1-4):

$$\hat{y}_{ij} = u_i^{\top} v_j \tag{1-4}$$
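A minimal sketch of the recommender's scoring path, with single linear-plus-tanh layers standing in for the functions f and g of models 131 and 132 (the layer shapes, the names `W_f` and `W_g`, and the choice of tanh are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
r, d = 16, 8                               # mapper output dim r, mapped dim d (illustrative)

W_f = rng.normal(scale=0.1, size=(d, r))   # stand-in for model 131's function f
W_g = rng.normal(scale=0.1, size=(d, r))   # stand-in for model 132's function g

def predict_score(e_i, e_j):
    """u_i = f(e_i) (1-2), v_j = g(e_j) (1-3), y_hat_ij = <u_i, v_j> (1-4)."""
    u_i = np.tanh(W_f @ e_i)
    v_j = np.tanh(W_g @ e_j)
    return float(u_i @ v_j)

e_i, e_j = rng.random(r), rng.random(r)    # embeddings received from the mapper
score = predict_score(e_i, e_j)
```

Any differentiable pair of mappings would serve here, which is why the text stresses that the recommender is not limited to one recommendation system model.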
In some embodiments, a loss result of the information recommendation model may also be determined by a preset loss model, so that the parameters in the model can be modified according to the loss result. The objective function in the parameter modification process is shown in the following formula (1-5):

$$L(\Theta) = \sum_{(i,j) \in \Omega} \ell\big(\hat{y}_{ij},\, y_{ij}\big) \tag{1-5}$$

where L(Θ) represents the loss result; Θ represents all parameters of the model that need to be learned, and may include, for example, the weights w and the feature embedding vectors V; ℓ represents the loss function; ŷ_ij represents the predicted "score" output by the model; y_ij represents the true "score", i.e., the true "score" that is input in advance; and Ω represents the set of observed elements.
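The objective (1-5) sums a per-pair loss over the observed set Ω; the sketch below uses a squared-error loss as one possible concrete choice of ℓ (the function and variable names are illustrative):

```python
def objective(predict, omega, y_true):
    """L(Theta) = sum over (i, j) in Omega of l(y_hat_ij, y_ij), here with l = squared error."""
    return sum((predict(i, j) - y_true[(i, j)]) ** 2 for (i, j) in omega)

# Toy check with two observed "scores" and a constant predictor:
y_true = {(0, 0): 1.0, (0, 1): 0.0}
omega = list(y_true)
loss = objective(lambda i, j: 0.5, omega, y_true)   # (0.5-1)^2 + (0.5-0)^2 = 0.5
```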
In the embodiment of the present application, Stochastic Gradient Descent (SGD) may be used to optimize the information recommendation model: one sample (i, j) is randomly drawn, the error between the predicted "score" and the true "score" is calculated, the gradient of this error with respect to the model parameters is derived, and the model parameters are updated. The training process of the information recommendation model comprises the following steps:
step S141, reading the data, and initializing the model parameters in the information recommendation model.
Step S142, from the observed elements
Figure 700849DEST_PATH_IMAGE033
Randomly sampling one sample (i, j).
In step S143, the prediction "score" of (i, j) is calculated using the current parameters of the model.
Step S144, calculating the error between the predicted "score" and the true "score", and then deriving the model parameters.
Step S145, updating the model parameters. Wherein, updating the model parameters in step S144 and step S145 can be realized by the following formulas (1-6):
Figure 193011DEST_PATH_IMAGE034
wherein t represents the tth round of the optimization update process;
Figure 965795DEST_PATH_IMAGE035
all parameters required to be learned of the model obtained by the optimization updating of the (t + 1) th round are represented;
Figure 568814DEST_PATH_IMAGE036
all parameters required to be learned of the model obtained by the optimization updating of the t-th round are represented;
Figure 71340DEST_PATH_IMAGE037
indicating the learning rate.
And step S146, repeating the processes from step S142 to step S145 until the model converges or the stop condition is met.
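The training loop of steps S141 to S146 can be sketched as follows, with a plain bilinear model ŷ_ij = U_i·V_j and a squared-error loss standing in for the full recommendation model (all names and sizes here are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_items, d, eta = 20, 30, 4, 0.05      # eta is the learning rate

# S141: read data and initialize parameters (here: plain user/item factor matrices)
U = rng.normal(scale=0.1, size=(n_users, d))
V = rng.normal(scale=0.1, size=(n_items, d))
omega = [(int(rng.integers(n_users)), int(rng.integers(n_items))) for _ in range(200)]
y = {(i, j): float(rng.random()) for (i, j) in omega}   # observed true "scores"

def mse():
    return float(np.mean([(U[i] @ V[j] - y[(i, j)]) ** 2 for (i, j) in omega]))

mse_before = mse()
for step in range(5000):
    i, j = omega[rng.integers(len(omega))]      # S142: sample one observed (i, j)
    y_hat = U[i] @ V[j]                         # S143: predicted "score"
    err = y_hat - y[(i, j)]                     # S144: gradient of l = err^2 / 2
    grad_u, grad_v = err * V[j], err * U[i]
    U[i] -= eta * grad_u                        # S145: update, per equation (1-6)
    V[j] -= eta * grad_v
mse_after = mse()                               # S146: in practice, loop until converged
```

On this toy data, the training error after the loop is lower than before it, mirroring the convergence behavior that step S146 checks for.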
In an actual measurement of the information recommendation method provided in the embodiments of the application, Inductive Matrix Completion (IMC) is used as the recommendation system model in the recommender. In the following measurements, the present method is therefore referred to as HashIMC, and the comparison method as IMC.
Here, actual measurements were performed on three real large-scale datasets: User-APP (user APP), User-Community (user group), and Lookalike. The dataset information is presented in Table 1 below, where p represents the user feature dimension, q represents the item feature dimension, and #Users, #Items, and #Ω represent the number of users, the number of items, and the number of observed "scores", respectively.
[Table 1: dataset information for User-APP, User-Community, and Lookalike]
The results on the three large-scale datasets are presented in Table 2, Table 3, and Table 4, respectively. In the tables below, ρ represents the sampling ratio of the training samples, cr represents the parameter compression ratio of HashIMC compared with IMC, and the numbers in parentheses represent the performance change of HashIMC relative to IMC, where larger values are better. In Table 4, IMC-DS represents the IMC method trained using randomly down-sampled features, and OOM indicates that the method cannot obtain results because the memory occupied by the running model exceeds the maximum memory of the server.
[Table 2: results on the User-APP dataset]

[Table 3: results on the User-Community dataset]

[Table 4: results on the Lookalike dataset]
The actual measurement results on the three large-scale datasets show that the performance of the recommendation system model of the embodiments of the present application is hardly degraded even when the number of parameters is reduced by a factor of 4 to 32, which illustrates the practicability of the scheme of the embodiments of the present application.
In addition, in order to demonstrate the interpretability of HashIMC, an experiment was also conducted on the Lookalike data: by discarding the smaller values in the learned weights of the original features, it can be verified whether the importance of each original feature has been learned. The results are presented in Table 5, where sp indicates the proportion of feature weights discarded, removed starting from the smallest magnitudes. The actual measurement results show that the feature weights in the scheme of the embodiments of the present application do learn the importance of the original features, so that the influence of each original feature on the recommendation result can be explained according to the magnitude of its weight.
[Table 5: interpretability results on the Lookalike dataset for different values of sp]
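The sp pruning described above can be sketched as follows: the fraction sp of learned feature weights with the smallest magnitudes is zeroed out before re-evaluating the model (the names `w` and `prune` and the sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
p = 1000
w = rng.normal(size=p)                       # stand-in for the learned feature weights

def prune(w, sp):
    """Zero out the fraction sp of weights with the smallest magnitude."""
    n_drop = int(sp * len(w))
    keep = np.abs(w).argsort()[n_drop:]      # indices of the largest-|w| entries
    w_pruned = np.zeros_like(w)
    w_pruned[keep] = w[keep]
    return w_pruned

w_half = prune(w, 0.5)                       # sp = 0.5: half of the weights survive
```

If performance degrades little as sp grows, the surviving large-magnitude weights indeed carry the important features, which is the interpretability check reported in Table 5.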
The mapper in the embodiment of the present application is a general technology, and can be used for solving the problem of high storage overhead of parameters of a recommendation system model in the related art, so that the recommendation system model of the recommender portion introduced in the embodiment of the present application may also be other mainstream recommendation system models, such as a NeuMF model, a DIN model, and the like.
Continuing with the exemplary structure of the information recommendation device 354 implemented as software modules provided in the embodiments of the present application: in some embodiments, as shown in Fig. 3, the software modules stored in the information recommendation device 354 of the memory 350 may form an information recommendation device in the server 300, including:
the feature extraction module 3541, configured to perform feature extraction on the information to be recommended and the target object respectively, and correspondingly obtain a first feature vector with a first dimension and a second feature vector with a second dimension;

the dimensionality reduction processing module 3542, configured to perform mapping processing on the first feature vector and the second feature vector respectively, and correspondingly obtain a first dimension-reduced vector with a third dimension and a second dimension-reduced vector with a fourth dimension, wherein the third dimension is less than the first dimension and the fourth dimension is less than the second dimension;

the vector embedding processing module 3543, configured to perform vector embedding processing on the first dimension-reduced vector and the second dimension-reduced vector respectively by using a vector to be embedded, and correspondingly obtain a first embedded vector with a fifth dimension of the information to be recommended and a second embedded vector with a sixth dimension of the target object, wherein the fifth dimension is smaller than the third dimension, the sixth dimension is smaller than the fourth dimension, and the vector to be embedded is obtained by learning when training an information recommendation model;

the determining module 3544, configured to determine, according to the first embedded vector and the second embedded vector, a matching degree between the information to be recommended and the target object;

the information recommending module 3545, configured to recommend the information to be recommended to the target object when the matching degree is greater than a threshold.
In some embodiments, the first feature vector has a first dimension and the second feature vector has a second dimension; the dimension reduction processing module is further configured to: mapping the first feature vector by adopting a first preset hash function to obtain a first dimensionality reduction vector with a third dimensionality; and mapping the second characteristic vector by adopting a second preset hash function to obtain a second dimensionality reduction vector with a fourth dimensionality.
In some embodiments, the vector to be embedded comprises a first vector to be embedded of the information to be recommended; the vector embedding processing module is further configured to: acquire a first weight of the information to be recommended and the first vector to be embedded; perform point multiplication on the component of each dimension in the first dimension-reduced vector with the first weight and the first vector to be embedded respectively, to obtain a first point multiplication result corresponding to the component of each dimension; and sum the first point multiplication results corresponding to at least two components to obtain the first embedded vector with the fifth dimension of the information to be recommended.
In some embodiments, the apparatus further comprises: the first similarity determining module is used for determining the similarity between the matching degree and a first preset matching degree; a first update parameter determination module, configured to determine a first update parameter of the first vector to be embedded according to the similarity; the first updating module is used for updating the first vector to be embedded by adopting the first updating parameter to obtain an updated first vector to be embedded; correspondingly, the vector embedding processing module is further configured to: and carrying out vector embedding processing on the first dimension reduction vector by adopting the updated first vector to be embedded.
In some embodiments, the vector to be embedded comprises a second vector to be embedded of the target object; the vector embedding processing module is further configured to: acquire a second weight of the target object and the second vector to be embedded; perform point multiplication on the component of each dimension in the second dimension-reduced vector with the second weight and the second vector to be embedded respectively, to obtain a second point multiplication result corresponding to the component of each dimension; and sum the second point multiplication results corresponding to at least two components to obtain the second embedded vector with the sixth dimension of the target object.
In some embodiments, the apparatus further comprises: the second similarity determining module is used for determining the similarity between the matching degree and a second preset matching degree; a second update parameter determination module, configured to determine a second update parameter of the second to-be-embedded vector according to the similarity; the second updating module is used for updating the second vector to be embedded by adopting the second updating parameter to obtain an updated second vector to be embedded; correspondingly, the vector embedding processing module is further configured to: and carrying out vector embedding processing on the second dimension reduction vector by adopting the updated second vector to be embedded.
In some embodiments, the determining module is further configured to: determining a first mapping function corresponding to the first embedding vector and a second mapping function corresponding to the second embedding vector in a recommendation system model; mapping the first embedded vector through the first mapping function to obtain a first mapping vector; mapping the second embedded vector through the second mapping function to obtain a second mapping vector; and determining the matching degree between the information to be recommended and the target object according to the first mapping vector and the second mapping vector.
In some embodiments, the determining module is further configured to: determining an inner product between the first mapping vector and the second mapping vector; and determining the matching degree between the information to be recommended and the target object according to the ratio of the inner product to a preset threshold value.
In some embodiments, the apparatus further comprises: the processing module is used for determining the matching degree between the information to be recommended and the target object by adopting an information recommendation model; wherein the information recommendation model is trained by the following steps: inputting sample recommendation information and sample objects into the information recommendation model; respectively extracting features of the sample recommendation information and the sample object through a feature extraction layer in the information recommendation model to correspondingly obtain a first sample feature vector and a second sample feature vector; respectively carrying out mapping processing and vector embedding processing on the first sample characteristic vector and the second sample characteristic vector in sequence through a mapper in the information recommendation model to correspondingly obtain a first sample embedding vector of the sample recommendation information and a second sample embedding vector of the sample object; determining, by a recommender in the information recommendation model, a sample matching degree between the sample recommendation information and the sample object according to the first sample embedding vector and the second sample embedding vector; inputting the sample matching degree into a preset loss model to obtain a loss result; and modifying parameters in the feature extraction layer, the mapper and the recommender according to the loss result to obtain the information recommendation model.
In some embodiments, the mapper comprises a first mapper and a second mapper, the first mapper and the second mapper having the same structure; the information recommendation model is trained through the following steps: sequentially carrying out the mapping processing and the vector embedding processing on the first sample characteristic vector through the first mapper to correspondingly obtain a first sample embedding vector; meanwhile, the second sample characteristic vector is subjected to the mapping processing and the vector embedding processing in sequence through the second mapper, and the second sample embedding vector is correspondingly obtained.
In some embodiments, the mapper comprises at least an input layer, a mapping processing layer, a vector embedding layer, and an output layer; the information recommendation model is trained through the following steps: inputting the first sample feature vector and the second sample feature vector through the input layer; respectively carrying out mapping processing on the first sample characteristic vector and the second sample characteristic vector through the mapping processing layer to correspondingly obtain a first sample mapping vector and a second sample mapping vector; and respectively carrying out the vector embedding processing on the first sample mapping vector and the second sample mapping vector through the vector embedding layer to correspondingly obtain the first sample embedding vector and the second sample embedding vector.
In some embodiments, the parameters in the vector embedding layer include at least: a third weight of the sample recommendation information, a fourth weight of the sample object, a third vector to be embedded of the sample recommendation information, and a fourth vector to be embedded of the sample object; correspondingly, the information recommendation model is trained by the following steps: according to the loss result, correcting at least one of the following: the third weight, the fourth weight, the third vector to be embedded, and the fourth vector to be embedded.
In some embodiments, the recommender comprises at least: recommending a system model and a prediction model; the information recommendation model is trained through the following steps: respectively mapping the first sample embedding vector and the second sample embedding vector through the recommendation system model to correspondingly obtain a first sample mapping vector and a second sample mapping vector; inputting the first sample mapping vector and the second sample mapping vector to the prediction model; predicting, by the prediction model, the sample matching degree between the sample recommendation information and the sample object.
It should be noted that the description of the apparatus in the embodiment of the present application is similar to the description of the method embodiment, and has similar beneficial effects to the method embodiment, and therefore, the description is not repeated. For technical details not disclosed in the embodiments of the apparatus, reference is made to the description of the embodiments of the method of the present application for understanding.
Embodiments of the present application provide a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The processor of the computer device reads the computer instructions from the computer-readable storage medium, and the processor executes the computer instructions, so that the computer device executes the information recommendation method in the embodiment of the present application.
Embodiments of the present application provide a storage medium having stored therein executable instructions, which when executed by a processor, will cause the processor to perform a method provided by embodiments of the present application, for example, the method as illustrated in fig. 4.
In some embodiments, the storage medium may be a computer-readable storage medium, such as a Ferroelectric Random Access Memory (FRAM), a Read Only Memory (ROM), a Programmable Read Only Memory (PROM), an Erasable Programmable Read Only Memory (EPROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read Only Memory (CD-ROM), among other memories; or may be various devices including one or any combination of the above memories.
In some embodiments, executable instructions may be written in any form of programming language (including compiled or interpreted languages), in the form of programs, software modules, scripts or code, and may be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
By way of example, executable instructions may correspond, but do not necessarily have to correspond, to files in a file system, and may be stored in a portion of a file that holds other programs or data, such as in one or more scripts in a hypertext markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub-programs, or portions of code). By way of example, executable instructions may be deployed to be executed on one computing device or on multiple computing devices at one site or distributed across multiple sites and interconnected by a communication network.

Claims (15)

1. An information recommendation method, comprising:
respectively extracting features of the information to be recommended and the target object, and correspondingly obtaining a first feature vector with a first dimension and a second feature vector with a second dimension;
mapping the first feature vector and the second feature vector respectively to obtain a first dimension-reduced vector with a third dimension and a second dimension-reduced vector with a fourth dimension; wherein the third dimension is less than the first dimension and the fourth dimension is less than the second dimension;
vector embedding processing is carried out on the first dimension reduction vector and the second dimension reduction vector respectively by adopting vectors to be embedded, and a first embedded vector with a fifth dimension of the information to be recommended and a second embedded vector with a sixth dimension of the target object are obtained correspondingly; the fifth dimension is smaller than the third dimension, the sixth dimension is smaller than the fourth dimension, and the vector to be embedded is obtained by learning after training an information recommendation model;
determining the matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector;
and when the matching degree is greater than a threshold value, recommending the information to be recommended to the target object.
2. The method according to claim 1, wherein the mapping the first feature vector and the second feature vector respectively to obtain a first dimension-reduced vector with a third dimension and a second dimension-reduced vector with a fourth dimension correspondingly comprises:
performing the mapping processing on the first feature vector by adopting a first preset hash function to obtain the first dimensionality reduction vector with the third dimensionality;
and performing the mapping processing on the second feature vector by adopting a second preset hash function to obtain the second dimensionality-reduced vector with the fourth dimensionality.
3. The method according to claim 1, wherein the vector to be embedded comprises a first vector to be embedded of the information to be recommended;
performing vector embedding processing on the first dimension-reduced vector to correspondingly obtain a first embedded vector with a fifth dimension of the information to be recommended, including:
acquiring a first weight of the information to be recommended and the first vector to be embedded;
performing point multiplication on the component of each dimension in the first dimension-reduced vector with the first weight and the first vector to be embedded respectively to obtain a first point multiplication result corresponding to the component of each dimension;
and summing the first point multiplication results corresponding to at least two components to obtain a first embedded vector with a fifth dimension of the information to be recommended.
4. The method of claim 3, further comprising:
determining the similarity between the matching degree and a first preset matching degree;
determining a first updating parameter of the first vector to be embedded according to the similarity;
updating the first vector to be embedded by using the first updating parameter to obtain an updated first vector to be embedded;
correspondingly, the method further comprises: and carrying out vector embedding processing on the first dimension reduction vector by adopting the updated first vector to be embedded.
5. The method of claim 1, wherein the vector to be embedded comprises a second vector to be embedded of the target object;
performing vector embedding processing on the second dimension-reduced vector to correspondingly obtain a second embedded vector with a sixth dimension of the target object, including:
acquiring a second weight of the target object and the second vector to be embedded;
performing point multiplication on the component of each dimension in the second dimension-reduced vector with the second weight and the second vector to be embedded respectively to obtain a second point multiplication result corresponding to the component of each dimension;
and summing the second point multiplication results corresponding to at least two components to obtain a second embedded vector with a sixth dimension of the target object.
6. The method of claim 5, further comprising:
determining the similarity between the matching degree and a second preset matching degree;
determining a second updating parameter of the second vector to be embedded according to the similarity;
updating the second vector to be embedded by adopting the second updating parameter to obtain an updated second vector to be embedded;
correspondingly, the method further comprises: and carrying out vector embedding processing on the second dimension reduction vector by adopting the updated second vector to be embedded.
7. The method according to claim 1, wherein the determining a matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector comprises:
determining a first mapping function corresponding to the first embedding vector and a second mapping function corresponding to the second embedding vector in a recommendation system model;
mapping the first embedded vector through the first mapping function to obtain a first mapping vector;
mapping the second embedded vector through the second mapping function to obtain a second mapping vector;
and determining the matching degree between the information to be recommended and the target object according to the first mapping vector and the second mapping vector.
8. The method according to claim 7, wherein the determining the matching degree between the information to be recommended and the target object according to the first mapping vector and the second mapping vector comprises:
determining an inner product between the first mapping vector and the second mapping vector;
and determining the matching degree between the information to be recommended and the target object according to the ratio of the inner product to a preset threshold value.
9. The method according to any one of claims 1 to 8, further comprising:
determining the matching degree between the information to be recommended and the target object by adopting an information recommendation model;
wherein the information recommendation model is trained by the following steps:
inputting sample recommendation information and sample objects into the information recommendation model;
respectively extracting features of the sample recommendation information and the sample object through a feature extraction layer in the information recommendation model to correspondingly obtain a first sample feature vector and a second sample feature vector;
respectively carrying out mapping processing and vector embedding processing on the first sample characteristic vector and the second sample characteristic vector in sequence through a mapper in the information recommendation model to correspondingly obtain a first sample embedding vector of the sample recommendation information and a second sample embedding vector of the sample object;
determining, by a recommender in the information recommendation model, a sample matching degree between the sample recommendation information and the sample object according to the first sample embedding vector and the second sample embedding vector;
inputting the sample matching degree into a preset loss model to obtain a loss result;
and modifying parameters in the feature extraction layer, the mapper and the recommender according to the loss result to obtain the information recommendation model.
10. The method of claim 9, wherein the mapper comprises a first mapper and a second mapper, and wherein the first mapper and the second mapper have the same structure;
the sequentially performing mapping processing and vector embedding processing on the first sample feature vector and the second sample feature vector through a mapper in the information recommendation model to correspondingly obtain a first sample embedding vector of the sample recommendation information and a second sample embedding vector of the sample object includes:
sequentially carrying out the mapping processing and the vector embedding processing on the first sample feature vector through the first mapper to correspondingly obtain the first sample embedding vector; and at the same time,
and sequentially carrying out the mapping processing and the vector embedding processing on the second sample characteristic vector through the second mapper to correspondingly obtain the second sample embedding vector.
11. The method of claim 9, wherein the mapper comprises at least an input layer, a mapping processing layer, a vector embedding layer, and an output layer;
the sequentially performing mapping processing and vector embedding processing on the first sample feature vector and the second sample feature vector through a mapper in the information recommendation model to correspondingly obtain a first sample embedding vector of the sample recommendation information and a second sample embedding vector of the sample object includes:
inputting the first sample feature vector and the second sample feature vector through the input layer;
respectively carrying out mapping processing on the first sample characteristic vector and the second sample characteristic vector through the mapping processing layer to correspondingly obtain a first sample mapping vector and a second sample mapping vector;
and respectively carrying out the vector embedding processing on the first sample mapping vector and the second sample mapping vector through the vector embedding layer to correspondingly obtain the first sample embedding vector and the second sample embedding vector.
12. The method of claim 9, wherein the recommender comprises at least a recommendation system model and a prediction model;
the determining, by a recommender in the information recommendation model, a sample matching degree between the sample recommendation information and the sample object according to the first sample embedding vector and the second sample embedding vector comprises:
mapping the first sample embedding vector and the second sample embedding vector, respectively, through the recommendation system model to obtain a first sample mapping vector and a second sample mapping vector;
inputting the first sample mapping vector and the second sample mapping vector into the prediction model;
and predicting, by the prediction model, the sample matching degree between the sample recommendation information and the sample object.
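Claim 12's recommender first applies one further mapping to each embedding and then predicts the matching degree from the pair. The claim does not specify the form of either model, so the sketch below assumes linear mappings and an inner-product-plus-sigmoid prediction model purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 8  # embedding dimension shared by both towers (assumed)

# Recommendation system model: one further (hypothetical, linear) mapping per tower.
W_first = rng.standard_normal((d, d)) / np.sqrt(d)
W_second = rng.standard_normal((d, d)) / np.sqrt(d)

def sample_matching_degree(first_embed, second_embed):
    first_mapped = first_embed @ W_first      # first sample mapping vector
    second_mapped = second_embed @ W_second   # second sample mapping vector
    # Prediction model: inner product squashed into (0, 1) as a matching degree.
    return sigmoid(first_mapped @ second_mapped)

degree = sample_matching_degree(rng.standard_normal(d), rng.standard_normal(d))
print(f"sample matching degree = {float(degree):.3f}")
```

A sigmoid output keeps the matching degree in (0, 1), which makes it directly comparable to the recommendation threshold used elsewhere in the claims.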
13. An information recommendation apparatus, comprising:
a feature extraction module, configured to perform feature extraction on information to be recommended and a target object, respectively, to obtain a first feature vector with a first dimension and a second feature vector with a second dimension;
a dimension reduction module, configured to perform mapping processing on the first feature vector and the second feature vector, respectively, to obtain a first dimension-reduced vector with a third dimension and a second dimension-reduced vector with a fourth dimension, wherein the third dimension is smaller than the first dimension and the fourth dimension is smaller than the second dimension;
a vector embedding module, configured to perform vector embedding processing on the first dimension-reduced vector and the second dimension-reduced vector, respectively, using a vector to be embedded, to obtain a first embedded vector with a fifth dimension for the information to be recommended and a second embedded vector with a sixth dimension for the target object, wherein the fifth dimension is smaller than the third dimension, the sixth dimension is smaller than the fourth dimension, and the vector to be embedded is learned by training an information recommendation model;
a determining module, configured to determine a matching degree between the information to be recommended and the target object according to the first embedded vector and the second embedded vector;
and an information recommending module, configured to recommend the information to be recommended to the target object when the matching degree is greater than a threshold.
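The five modules of claim 13 chain into one pipeline: extract features, reduce dimension, embed, score, and recommend above a threshold. The end-to-end sketch below is a demonstration under stated assumptions only: the dimensions, random projections, and cosine-based matching degree are all stand-ins, not the patented implementation (in the patent, the embedding weights are learned by training the information recommendation model).

```python
import numpy as np

rng = np.random.default_rng(0)

def recommend(item_features, user_features, threshold=0.5):
    """Illustrative chain of the claim-13 modules with random weights."""
    d1, d2 = len(item_features), len(user_features)  # first and second dimensions
    d3, d4, d5, d6 = 16, 16, 4, 4                    # reduced and embedded dimensions (assumed)
    # Dimension reduction module: map to d3 < d1 and d4 < d2.
    item_reduced = item_features @ (rng.standard_normal((d1, d3)) / np.sqrt(d1))
    user_reduced = user_features @ (rng.standard_normal((d2, d4)) / np.sqrt(d2))
    # Vector embedding module: embed to d5 < d3 and d6 < d4
    # (random matrices stand in for the learned "vector to be embedded").
    item_embedded = item_reduced @ (rng.standard_normal((d3, d5)) / np.sqrt(d3))
    user_embedded = user_reduced @ (rng.standard_normal((d4, d6)) / np.sqrt(d4))
    # Determining module: cosine similarity rescaled to [0, 1] as the matching degree.
    cos = item_embedded @ user_embedded / (
        np.linalg.norm(item_embedded) * np.linalg.norm(user_embedded))
    degree = (cos + 1.0) / 2.0
    # Information recommending module: recommend only above the threshold.
    return degree, degree > threshold

degree, do_recommend = recommend(rng.standard_normal(64), rng.standard_normal(128))
print(f"matching degree = {degree:.3f}, recommend = {bool(do_recommend)}")
```

The vector to be embedded being shared and learned is what distinguishes the claimed scheme from a fixed random projection; the sketch only mirrors the shapes of the data flow.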
14. An information recommendation device, comprising:
a memory for storing executable instructions; and
a processor for implementing the method of any one of claims 1 to 12 when executing the executable instructions stored in the memory.
15. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to implement the method of any one of claims 1 to 12.
CN202010835094.XA 2020-08-19 2020-08-19 Information recommendation method, device, equipment and computer readable storage medium Active CN111737586B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010835094.XA CN111737586B (en) 2020-08-19 2020-08-19 Information recommendation method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN111737586A true CN111737586A (en) 2020-10-02
CN111737586B CN111737586B (en) 2020-12-04

Family

ID=72658543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010835094.XA Active CN111737586B (en) 2020-08-19 2020-08-19 Information recommendation method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111737586B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104112018A (en) * 2014-07-21 2014-10-22 南京大学 Large-scale image retrieval method
CN105095435A (en) * 2015-07-23 2015-11-25 北京京东尚科信息技术有限公司 Similarity comparison method and device for high-dimensional image features
US10467122B1 (en) * 2017-04-27 2019-11-05 Intuit Inc. Methods, systems, and computer program product for capturing and classification of real-time data and performing post-classification tasks
CN110717539A (en) * 2019-10-08 2020-01-21 腾讯科技(深圳)有限公司 Dimension reduction model training method, retrieval method and device based on artificial intelligence
CN110781391A (en) * 2019-10-22 2020-02-11 腾讯科技(深圳)有限公司 Information recommendation method, device, equipment and storage medium
CN110874440A (en) * 2020-01-16 2020-03-10 支付宝(杭州)信息技术有限公司 Information pushing method and device, model training method and device, and electronic equipment
CN111079015A (en) * 2019-12-17 2020-04-28 腾讯科技(深圳)有限公司 Recommendation method and device, computer equipment and storage medium
CN111444428A (en) * 2020-03-27 2020-07-24 腾讯科技(深圳)有限公司 Information recommendation method and device based on artificial intelligence, electronic equipment and storage medium
CN111460130A (en) * 2020-03-27 2020-07-28 咪咕数字传媒有限公司 Information recommendation method, device, equipment and readable storage medium
CN111538761A (en) * 2020-04-21 2020-08-14 中南大学 Click rate prediction method based on attention mechanism

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHAO Shan et al., "Image Retrieval Technology Based on Random Rotation Locality-Preserving Hashing", 《工程科学与技术》 (Advanced Engineering Sciences) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112135334A (en) * 2020-10-27 2020-12-25 上海连尚网络科技有限公司 Method and equipment for determining hotspot type of wireless access point
CN112418423A (en) * 2020-11-24 2021-02-26 百度在线网络技术(北京)有限公司 Method, apparatus, and medium for recommending objects to a user using a neural network
CN112418423B (en) * 2020-11-24 2023-08-15 百度在线网络技术(北京)有限公司 Method, apparatus and medium for recommending objects to user using neural network
CN114398558A (en) * 2022-01-19 2022-04-26 北京百度网讯科技有限公司 Information recommendation method and device, electronic equipment and storage medium
CN114780844A (en) * 2022-04-21 2022-07-22 杭州樱熊网络科技有限公司 Novel recommendation method and system based on user habits
CN114780844B (en) * 2022-04-21 2022-10-28 杭州樱熊网络科技有限公司 Novel recommendation method and system based on user habits
CN114840761A (en) * 2022-05-13 2022-08-02 北京达佳互联信息技术有限公司 Push model training method, device, equipment, storage medium and program product
CN114840761B (en) * 2022-05-13 2024-05-28 北京达佳互联信息技术有限公司 Training method, device, equipment, storage medium and program product of push model

Also Published As

Publication number Publication date
CN111737586B (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN111737586B (en) Information recommendation method, device, equipment and computer readable storage medium
CN110717098B (en) Meta-path-based context-aware user modeling method and sequence recommendation method
CN111242310B (en) Feature validity evaluation method and device, electronic equipment and storage medium
WO2023124204A1 (en) Anti-fraud risk assessment method and apparatus, training method and apparatus, and readable storage medium
US11501161B2 (en) Method to explain factors influencing AI predictions with deep neural networks
US10943068B2 (en) N-ary relation prediction over text spans
JP7225395B2 (en) Dynamic Reconfiguration Training Computer Architecture
CN110008397B (en) Recommendation model training method and device
CN111026977B (en) Information recommendation method and device and storage medium
WO2024041483A1 (en) Recommendation method and related device
WO2024002167A1 (en) Operation prediction method and related apparatus
CN110472659B (en) Data processing method, device, computer readable storage medium and computer equipment
CN113656699B (en) User feature vector determining method, related equipment and medium
CN111178986A (en) User-commodity preference prediction method and system
CN113255327B (en) Text processing method and device, electronic equipment and computer readable storage medium
CN114511387A (en) Product recommendation method and device, electronic equipment and storage medium
CN113868466A (en) Video recommendation method, device, equipment and storage medium
WO2023050143A1 (en) Recommendation model training method and apparatus
CN115879508A (en) Data processing method and related device
CN111459990B (en) Object processing method, system, computer readable storage medium and computer device
CN115545738A (en) Recommendation method and related device
CN114529399A (en) User data processing method, device, computer equipment and storage medium
JP7435821B2 (en) Learning device, psychological state sequence prediction device, learning method, psychological state sequence prediction method, and program
KR102666635B1 (en) User equipment, method, and recording medium for creating recommendation keyword
Aenugu et al. Asymmetric Weights and Retrieval Practice in an Autoassociative Neural Network Model of Paired-Associate Learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40030700

Country of ref document: HK