WO2021051515A1 - Recommendation method and apparatus based on vector migration, computer device, and non-volatile readable storage medium - Google Patents

Recommendation method and apparatus based on vector migration, computer device, and non-volatile readable storage medium

Info

Publication number
WO2021051515A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature vector
matrix
implicit
neural network
deep neural
Prior art date
Application number
PCT/CN2019/116921
Other languages
English (en)
French (fr)
Inventor
陈楚
Original Assignee
Ping An Technology (Shenzhen) Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd.
Publication of WO2021051515A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/02Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201Market modelling; Market analysis; Collecting market data
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00Commerce
    • G06Q30/06Buying, selling or leasing transactions
    • G06Q30/0601Electronic shopping [e-shopping]
    • G06Q30/0631Item recommendations

Definitions

  • This application belongs to the field of artificial intelligence technology, and in particular relates to a recommendation method, device, computer equipment, and non-volatile readable storage medium based on vector migration.
  • An LFM (latent semantic model) requires long-term data: its time window is wide, and the users and commodities involved are the full set.
  • The inventor realized that the model must capture a given user's preference for all commodities within a given time window, so the amount of computation is large, and the model results reflect users' long-term, stable preferences.
  • In industrial applications the LFM is trained offline on the full data, so it cannot reflect a user's recent preferences, and its sensitivity is low.
  • The training data of a DNN (deep neural network) is mainly click sequences, and the DNN is a ranking model.
  • The model results reflect relatively short-term preferences.
  • The prediction results are strongly affected by the data set, so both the data set and the model need to be updated frequently.
  • When the model is meant to reflect longer-term preferences, the numbers of commodities and users become large; and when the numbers of commodities and users are large, the parameters of the deep neural network also grow and training takes longer, so the production efficiency of the model is low.
  • The embodiments of the application provide a recommendation method, apparatus, computer device, and non-volatile readable storage medium based on vector migration, designed to solve the problems of the existing DNN in the prior art: the need for frequent updates, and the long training time when the model is meant to reflect long-term preferences.
  • a recommendation method based on vector migration is provided, which includes the steps:
  • This application also provides a recommendation device based on vector migration, including:
  • An acquiring module, configured to acquire a latent semantic model trained with full data, where the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;
  • An extraction module, configured to extract a target feature vector from the latent semantic model, where the target feature vector includes a product feature vector;
  • A migration module, configured to migrate the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network;
  • A recommendation module, configured to output a prediction result for recommendation based on the fused deep neural network, where the prediction result includes a product prediction result.
  • The present application also provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and when the processor executes the computer-readable instructions, the steps of the vector migration-based recommendation method described in any embodiment of the present application are implemented.
  • The present application also provides a non-volatile readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the steps of the vector migration-based recommendation method described in any embodiment of the present application are implemented.
  • This application migrates the trained target feature vector into a preset deep neural network, so that the deep neural network does not need to learn the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency.
  • Moreover, the latent semantic model is trained on the full user and product data, which improves the robustness of the fused deep neural network model; and since the latent semantic model is trained offline, growth in data volume does not affect the production efficiency of the fused deep neural network model.
  • FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;
  • FIG. 2 is a flowchart of an embodiment of a recommendation method based on vector migration provided by an embodiment of the present application
  • FIG. 3 is a flowchart of a specific implementation manner of step S202 in the embodiment of FIG. 2;
  • FIG. 4 is a schematic diagram of matrix decomposition provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a specific implementation manner of step S301 in the embodiment of FIG. 3;
  • FIG. 6 is a flowchart of a specific implementation manner of step S302 in the embodiment of FIG. 3;
  • FIG. 7 is a schematic diagram of a flow chart of a deep neural network preset in an embodiment of the present application.
  • FIG. 8 is a flowchart of a specific implementation manner of step S203 in the embodiment of FIG. 2;
  • FIG. 9 is a flowchart of another embodiment of a recommendation method based on vector migration provided by an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an embodiment of a recommendation device based on vector migration according to an embodiment of the present application.
  • FIG. 11 is a schematic structural diagram of another embodiment of a recommendation device based on vector migration according to an embodiment of the present application.
  • FIG. 12 is a schematic structural diagram of another embodiment of a recommendation device based on vector migration according to an embodiment of the present application.
  • FIG. 13 is a schematic diagram of the structure of a training module of a deep neural network according to an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of another embodiment of a recommendation device based on vector migration according to an embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of another embodiment of a recommendation device based on vector migration according to an embodiment of the present application;
  • FIG. 16 is a schematic structural diagram of an embodiment of the computer device of the present application.
  • This application migrates the target feature vector from the trained latent semantic model into a preset deep neural network. Since the target feature vector is already trained, the deep neural network does not need to learn the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency. In addition, since the latent semantic model is trained on the full user and product data, the target feature vector better reflects users' long-term preferences, so the fused deep neural network model can also reflect users' long-term preferences without frequent updates, which improves its robustness. Meanwhile, the latent semantic model is trained offline, so growth in data volume does not affect the production efficiency of the fused deep neural network model.
  • The system architecture 100 may include a server 105, a network 104, and terminal devices 101, 102, and 103.
  • The network 104 is used to provide a medium for communication links between the server 105 and the terminal devices 101, 102, and 103.
  • The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
  • The terminal devices 101, 102, and 103 may be various electronic devices that have a display screen, can download application software, and can read and write data, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers.
  • A client can use the terminal devices 101, 102, and 103 to interact with the server 105 through the network 104 to receive or obtain information.
  • FIG. 2 is a flowchart of an embodiment of the vector migration-based recommendation method according to the present application.
  • the above-mentioned recommendation method based on vector migration includes the steps:
  • S201 Obtain a latent semantic model trained with full data, where the full data includes user historical preference data, and the trained latent semantic model includes a full matrix obtained by training with the full data.
  • The vector migration-based recommendation method can run on an electronic device equipped with a recommendation system (for example, the mobile terminal shown in FIG. 1).
  • The above-mentioned full data may be the user's historical preference data, and the preference data may be obtained from the user's historical behavior data on products.
  • The user's historical behavior data includes the number of times the user clicks on a certain product, the number of purchases, bookmarking, sharing, scoring, and so on. For example, if a user clicks on product A 18 times, the user has repeatedly browsed product A and can be considered to prefer it. As another example, if a user bookmarks product B so as not to lose track of it, the user can also be considered to prefer product B. As a further example, if a user scores product C as 9 and product D as 1, the user can be considered to prefer product C and dislike product D.
  • The full matrix in the aforementioned latent semantic model may be a scoring matrix (also referred to as a recommendation matrix) obtained by inputting the full data into the latent semantic model for training and learning. For example, the user's historical purchase data is input into the latent semantic model and computed with the parameters in the model, converting the user's historical behavior data into corresponding score values to form a product recommendation matrix.
  • The corresponding score value can be understood as a preference value, used to indicate the degree of the user's preference for products.
  • The product recommendation matrix includes user (User) data and product (Item) data, and may also be referred to as the full matrix.
  • The above-mentioned latent semantic model is a model trained offline on the full data; "trained with full data" means that after training, the latent semantic model can be used directly for prediction.
  • S202 Extract a target feature vector in the latent semantic model, where the target feature vector includes a product feature vector.
  • The feature vectors in the latent semantic model can be represented by the full matrix.
  • The full matrix can include two dimensions, user features (User) and product features (Item); the two dimensions intersect to form matrix units.
  • Each matrix unit represents the corresponding user's preference value for the corresponding product.
  • A group of preference values can be used to represent a feature vector, as shown in Table 1.
  • R11-R35 are the preference values corresponding to the matrix units.
  • The feature vector of the user feature User1 can be [R11, R12, R13, R14, R15].
  • The feature vector of the product feature Item1 can be [R11, R21, R31].
  • The preference value can be obtained by computing a loss function over the user features and the product features, giving the user's preference value for the category.
  • In the latent semantic model, recommendations can be made according to the user's preference values for products.
  • The target feature vector to be extracted can be a product feature vector or a user feature vector. With a product feature vector, products can be recommended to users; with a user feature vector, prospective users can be recommended for a product. In this embodiment, a product feature vector is preferred.
  • The product feature vector can be extracted by locating the data corresponding to it in the trained latent semantic model, or obtained by matrix factorization of the full matrix.
  • The aforementioned target feature vector is preferably a product feature vector.
  • S203 Migrate the target feature vector to a corresponding feature vector layer in a preset deep neural network to obtain a fused deep neural network.
  • The target feature vector can be extracted from the latent semantic model and stored in an update database, and the target feature vector in the update database can be updated whenever the data in the latent semantic model is updated.
  • The target feature vector can be migrated via the update database, or by extracting it directly from the latent semantic model and then performing the migration.
  • The above-mentioned deep neural network can be preset: for example pre-trained, downloaded from the Internet, or built and trained by the user. It should be understood that the deep neural network should include a feature vector layer corresponding to the target feature, so that the migration of the target feature vector can be completed. For example, when the target vector is a product feature vector, the deep neural network includes a product feature vector layer corresponding to the product feature.
  • The fused deep neural network obtained above can be understood as a deep neural network that incorporates the product feature vectors from the latent semantic model. Because the full set of product feature vectors is fused in, the deep neural network can reflect users' long-term, stable preferences, increasing the robustness of the model.
  • Since the fused deep neural network incorporates the product feature vectors of the latent semantic model, the prediction results can reflect users' long-term preferences, and recommending according to those long-term preferences improves recommendation accuracy. Moreover, the feature vectors of the fused deep neural network have already been learned in the latent semantic model, so the full set of target feature vectors does not need to be trained again online, which keeps the deep neural network's training efficient and reduces model training time.
  • This application migrates the target feature vector from the trained latent semantic model into a preset deep neural network. Since the target feature vector is already trained, the deep neural network does not need to learn the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency. In addition, since the latent semantic model is trained on the full user and product data, the target feature vector better reflects users' long-term preferences, so the fused deep neural network model can also reflect users' long-term preferences without frequent updates, which improves its robustness. Meanwhile, the latent semantic model is trained offline, so growth in data volume does not affect the production efficiency of the fused deep neural network model.
  • step S202 specifically includes:
  • S301: Perform matrix factorization on the full matrix in the latent semantic model to obtain an implicit factor matrix including product features, where the full matrix includes product features and user features.
  • S302 Extract a product feature vector based on the implicit factor matrix of the product feature.
  • The full matrix (product recommendation matrix) is obtained from the full data.
  • For example, the full data includes data about users and products, and the full matrix is the matrix relationship between users and products, as shown in Table 2:
  • Item is a commodity
  • User is a user
  • R11-R33 are preference data, which can indicate the number of times the user purchased or clicked on the commodity, or the user's preference value for the commodity.
  • Performing matrix factorization on the full matrix in the latent semantic model includes: decomposing the full matrix, through an implicit factor class, into two implicit factor matrices, namely an implicit factor matrix based on user features and an implicit factor matrix based on commodity features, where multiplying the implicit factor matrix of user features by the implicit factor matrix of commodity features reproduces the full matrix.
  • The implicit factor matrix of commodity features includes the commodity features and the implicit factor features, and the commodity feature vector can be represented by the implicit factor matrix based on commodity features.
  • The implicit factor matrix of user features includes the user features and the implicit factors, and the user feature vector can be represented by the implicit factor matrix based on user features.
  • By matrix factorization of the full matrix, the implicit factor matrix of commodity features can be obtained directly, and the commodity feature vector extracted from that implicit factor matrix is strongly representative, which can increase recommendation accuracy.
  • step S301 specifically includes:
  • S401 Use the stochastic gradient descent algorithm to optimize the loss function, and iteratively calculate the parameters until the parameters converge.
  • S402 Obtain an implicit factor matrix based on the convergent parameter, where the parameter refers to a parameter of a matrix unit in the implicit factor matrix.
  • R_UI represents the preference value, 1 or 0: a positive sample is 1 (preferred) and a negative sample is 0 (not preferred); K indexes the implicit factors; and λ‖P_U‖² + λ‖Q_I‖² is the regularization term, which prevents the loss function from overfitting.
  • In the formulas, α is the learning rate and λ is the regularization parameter.
  • From P_UK, the implicit factor matrix based on user features, [P11, P12, P13], [P21, P22, P23], [P31, P32, P33], is obtained; from Q_KI, the implicit factor matrix based on commodity features is obtained.
  • The above calculation is carried out by the computer; the user only needs to set the calculation formulas and provide the calculation data.
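  • For reference, the loss function and update rules that the surrounding bullets describe can be written out explicitly. The following is the standard LFM formulation they paraphrase, with the symbols as defined above and E_UI denoting the prediction residual:

$$C=\sum_{(U,I)}\Big(R_{UI}-\sum_{K}P_{UK}\,Q_{KI}\Big)^{2}+\lambda\lVert P_{U}\rVert^{2}+\lambda\lVert Q_{I}\rVert^{2}$$

$$P_{UK}\leftarrow P_{UK}+\alpha\big(E_{UI}\,Q_{KI}-\lambda P_{UK}\big),\qquad Q_{KI}\leftarrow Q_{KI}+\alpha\big(E_{UI}\,P_{UK}-\lambda Q_{KI}\big),\qquad E_{UI}=R_{UI}-\sum_{K}P_{UK}\,Q_{KI}$$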
  • step S302 specifically includes:
  • S501 Extract the row or column of the category in the implicit factor matrix of the commodity feature as a target feature vector, where the target feature vector includes matrix units corresponding to at least one category attribute and multiple implicit factors.
  • The implicit factor matrix of commodity features is the implicit factor matrix obtained in the above embodiment.
  • The aforementioned implicit factor matrix based on commodity features includes multiple implicit factors and at least one category attribute; the category attribute may be an attribute such as product name, product category, or product unit price.
  • For example, when the item is the product unit price, the category attribute is the product unit price, and the target feature vector represents the degree of preference at a certain product unit price.
  • The target feature vector refers to a vector formed by one category attribute and multiple implicit factors.
  • In the above method, the presetting of the deep neural network specifically includes:
  • S601 Obtain initial weight parameters of the deep neural network, and train the deep neural network through the training set.
  • The initial weight parameters can be set by the user based on experience, obtained from an open-source site such as GitHub, or imported from weight parameters trained by others.
  • S602: Adjust the weight parameters in the deep neural network to fit the curve of the training set, obtaining the final weight parameters.
  • The deep neural network is trained on the training set, and the weight parameters in the deep neural network are adjusted continuously during training.
  • Specifically, the training process includes weight initialization, and the initialized weights are adjusted using the training data.
  • After the curve fits the training set, the final weight parameters of each layer are obtained. In this way, once the final weight parameters are available, one only needs to substitute the trained product feature vectors into the corresponding layer, and that layer's weight parameters can be used directly.
  • The entire model does not need to be retrained because of the replacement of the feature vectors.
  • Of course, in one possible embodiment the product feature vector has not been trained; in that case, retraining is needed to obtain that layer's weight parameters.
  • step S203 specifically includes:
  • S701: Determine the attribute of the extracted target feature vector.
  • S702: According to the attribute of the target feature vector, match the feature vector layer of the corresponding attribute in the deep neural network, and replace the vectors in that feature vector layer with the target feature vector.
  • The attributes of the target feature vector include user attributes, product attributes, and so on.
  • The attribute of the extracted target feature vector can be determined from the implicit factor matrix to which it belongs: if the target feature vector is a user feature vector, it was extracted from the implicit factor matrix of user features; if it is a product feature vector, it was extracted from the implicit factor matrix of product features. It can also be judged from whether the feature vector is a row vector or a column vector; for example, in FIG. 4, the user feature vectors are row vectors and the product feature vectors are column vectors.
  • Of course, to make the product recommendations that users receive more precise, product attributes can be subdivided into category attributes, such as product name, product category, and product unit price.
  • the above method further includes:
  • S801: Detect whether the full matrix of the latent semantic model has been updated.
  • S802: If the full matrix of the latent semantic model has been updated, re-extract the target feature vector and update the vector data in the corresponding feature vector layer of the fused deep neural network.
  • To ensure the timeliness of the full data, the data of the latent semantic model can be updated periodically, for example once every one or two months.
  • An ordinary deep neural network model has a much shorter update cycle and needs updating every few days. Because the fused deep neural network incorporates the latent semantic model's feature vectors trained on the full data, and those vectors are full and long-term, the fused deep neural network is likewise full, long-term, and stable. Therefore, the update of the fused deep neural network model can be synchronized with the update schedule of the latent semantic model.
  • By migrating the target feature vector from the trained latent semantic model into a preset deep neural network, and because the target feature vector is already trained, the embodiments of the application spare the deep neural network from learning the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency. In addition, since the latent semantic model is trained on the full user and product data, the target feature vector better reflects users' long-term preferences, which improves the robustness of the fused deep neural network model; meanwhile, the latent semantic model is trained offline, so growth in data volume does not affect the production efficiency of the fused deep neural network model.
  • The computer-readable instructions can be stored in a computer-readable non-volatile storage medium, and when the computer-readable instructions are executed, they may include the processes of the above method embodiments.
  • The aforementioned non-volatile readable storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), and so on.
  • FIG. 10 is a schematic diagram of the vector migration-based recommendation device provided by this embodiment.
  • The above-mentioned device 900 includes: an acquisition module 901, an extraction module 902, a migration module 903, and a recommendation module 904, wherein:
  • The acquiring module 901 is configured to acquire a latent semantic model trained with full data, where the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained with the full data.
  • the extraction module 902 is used to extract a target feature vector in the latent semantic model, where the target feature vector includes a product feature vector.
  • The migration module 903 is used to migrate the target feature vector into the corresponding feature vector layer of the preset deep neural network to obtain the fused deep neural network.
  • The recommendation module 904 is configured to output a prediction result for recommendation based on the above-mentioned fused deep neural network, where the prediction result includes a product prediction result.
  • the aforementioned extraction module 902 includes: a decomposition unit 9021 and an extraction unit 9022, wherein:
  • the above-mentioned decomposition unit 9021 is configured to perform matrix decomposition on the full matrix in the implicit semantic model to obtain an implicit factor matrix including product features, and the full matrix includes product features and user features.
  • the above-mentioned extraction unit 9022 is used for extracting commodity feature vectors based on the implicit factor matrix of commodity features.
  • the above-mentioned decomposition unit 9021 includes: a first calculation sub-unit 90211 and a second calculation sub-unit 90212, wherein:
  • the aforementioned first calculation subunit 90211 is used to optimize the loss function using a stochastic gradient descent algorithm, and iteratively calculate the parameters until the parameters converge;
  • the above-mentioned second calculation subunit 90212 is used to obtain an implicit factor matrix based on the convergent parameters, where the parameter refers to the parameter of the matrix unit in the implicit factor matrix.
  • The aforementioned extraction unit 9022 is also used to extract the row or column where the category lies in the implicit factor matrix of commodity features as the target feature vector, where the target feature vector includes the matrix units corresponding to at least one category attribute and a plurality of implicit factors.
  • the above-mentioned device 900 further includes: a training module 905.
  • The training module 905 includes: a weight acquisition unit 9051 and a weight adjustment unit 9052.
  • the above-mentioned weight obtaining unit 9051 is used to obtain the initial weight parameters of the deep neural network, and train the deep neural network through the training set.
  • the above weight adjustment unit 9052 is used to adjust the weight parameters in the deep neural network to fit the curve of the training set to obtain the final weight parameters.
  • the migration module 903 includes: a judgment unit 9031 and a matching replacement unit 9032, wherein:
  • the above judgment unit 9031 is used to judge the attributes of the extracted target feature vector.
  • the above-mentioned matching replacement unit 9032 is configured to match the feature vector layer of the corresponding attribute in the deep neural network according to the attribute of the target feature vector, and replace the vector in the feature vector layer of the corresponding attribute with the target feature vector.
  • The above-mentioned apparatus 900 further includes: a detection module 906 and an update module 907, wherein:
  • the above-mentioned detection module 906 is used to detect whether the full matrix of the latent semantic model is updated.
  • the aforementioned update module 907 is used to re-extract the target feature vector if the full matrix of the implicit semantic model is updated, and update the vector data in the corresponding feature vector layer in the fusion deep neural network.
  • the vector migration-based recommendation device provided by the embodiment of the present application can implement the implementation manners of each vector migration-based recommendation method in the method embodiments of FIG. 2 to FIG. 9 and the corresponding beneficial effects. To avoid repetition, details are not described herein again.
  • FIG. 16 is a block diagram of the basic structure of the computer device in this embodiment.
  • The computer device 15 includes a non-volatile memory 151, a processor 152, and a network interface 153 that are communicatively connected to each other through a system bus. It should be pointed out that only a computer device 15 with components 151-153 is shown in the figure, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions.
  • Its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
  • The computer device can be a desktop computer, a notebook, a palmtop computer, a cloud server, or other computing device. The computer device can interact with the user through a keyboard, mouse, remote control, touchpad, or voice control device.
  • The non-volatile memory 151 includes at least one type of readable storage medium.
  • The readable storage medium includes flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical discs, and so on.
  • In some embodiments, the non-volatile memory 151 may be an internal storage unit of the computer device 15, for example the hard disk or memory of the computer device 15. In other embodiments, the non-volatile memory 151 may also be an external storage device of the computer device 15, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device 15. Of course, the non-volatile memory 151 may also include both the internal storage unit of the computer device 15 and its external storage device.
  • The non-volatile memory 151 is generally used to store the operating system and various application software installed on the computer device 15, such as the computer-readable instructions of the vector migration-based recommendation method.
  • the non-volatile memory 151 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 152 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
  • the processor 152 is generally used to control the overall operation of the computer device 15.
  • The processor 152 is configured to run the computer-readable instructions or process the data stored in the non-volatile memory 151, for example to run the computer-readable instructions of the vector migration-based recommendation method.
  • the network interface 153 may include a wireless network interface or a wired network interface, and the network interface 153 is generally used to establish a communication connection between the computer device 15 and other electronic devices.
  • This application also provides another implementation, namely a non-volatile readable storage medium.
  • The non-volatile readable storage medium stores a vector migration-based recommendation procedure, and the procedure can be executed by at least one processor, so that the at least one processor executes the steps of the vector migration-based recommendation method in each of the above embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Development Economics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Entrepreneurship & Innovation (AREA)
  • General Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A recommendation method, apparatus, computer device, and non-volatile storage medium based on vector migration, applicable to the field of artificial intelligence. The method includes: obtaining a latent semantic model trained with full data (S201), wherein the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data; extracting a target feature vector from the latent semantic model (S202), wherein the target feature vector includes a commodity feature vector; migrating the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network (S203); and, based on the fused deep neural network, outputting a prediction result for recommendation (S204), wherein the prediction result includes a commodity prediction result. The method can reduce the model scale of the deep neural network, shorten model training time, and improve production efficiency.

Description

Recommendation method and apparatus based on vector migration, computer device, and non-volatile readable storage medium

This application is based on, and claims priority from, Chinese patent application No. 201910871369.2, filed on September 16, 2019 and entitled "Recommendation method and apparatus based on vector migration, computer device, and non-volatile readable storage medium".

Technical Field

This application belongs to the field of artificial intelligence technology, and in particular relates to a recommendation method and apparatus based on vector migration, a computer device, and a non-volatile readable storage medium.

Background

At present, with the maturity of online shopping and the rise of artificial intelligence, commodity recommendation systems have been widely applied; prediction is mainly performed with the LFM (latent semantic model) and the DNN (deep neural network). The LFM requires long-term data: its time window is wide and the users and commodities involved are the full set. The inventor realized that the model must capture a given user's preference for all commodities within a given time window, so the amount of computation is large, and the model results reflect users' long-term, stable preferences. In industrial applications the LFM is generally trained offline on the full data, so it cannot reflect a user's recent preferences and its sensitivity is low. The training data of the DNN is mainly click sequences, and it is a ranking model; although it is convenient to train online, the model results reflect relatively short-term preferences, the prediction results are strongly affected by the data set, and both the data set and the model must be updated frequently. In addition, when the model is meant to reflect longer-term preferences, the numbers of commodities and users become large, and when the numbers of commodities and users are large, the DNN's parameters also grow and training takes longer, so the production efficiency of the model is low.

Summary

The embodiments of this application provide a recommendation method and apparatus based on vector migration, a computer device, and a non-volatile readable storage medium, intended to solve the problems of the DNN in the prior art: the need for frequent updates, and the long model training time when long-term preferences are to be reflected.

To solve these problems, the embodiments of this application are implemented as follows. A recommendation method based on vector migration is provided, including the steps of:

obtaining a latent semantic model trained with full data, wherein the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;

extracting a target feature vector from the latent semantic model, wherein the target feature vector includes a commodity feature vector;

migrating the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network; and

outputting a prediction result for recommendation based on the fused deep neural network, wherein the prediction result includes a commodity prediction result.

This application also provides a recommendation apparatus based on vector migration, including:

an acquisition module, configured to obtain a latent semantic model trained with full data, wherein the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;

an extraction module, configured to extract a target feature vector from the latent semantic model, wherein the target feature vector includes a commodity feature vector;

a migration module, configured to migrate the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network; and

a recommendation module, configured to output a prediction result for recommendation based on the fused deep neural network, wherein the prediction result includes a commodity prediction result.

This application also provides a computer device, including a memory and a processor, the memory storing computer-readable instructions, wherein when the processor executes the computer-readable instructions, the steps of the vector migration-based recommendation method described in any embodiment of this application are implemented.

This application also provides a non-volatile readable storage medium on which computer-readable instructions are stored, wherein when the computer-readable instructions are executed by a processor, the steps of the vector migration-based recommendation method described in any embodiment of this application are implemented.

By migrating the trained target feature vector into a preset deep neural network, this application spares the deep neural network from learning the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency. Moreover, because the latent semantic model is trained on the full user and commodity data, the robustness of the fused deep neural network model is improved; and since the latent semantic model is trained offline, growth in data volume does not affect the production efficiency of the fused deep neural network model.
Brief Description of the Drawings

FIG. 1 is an exemplary system architecture diagram to which the present application can be applied;

FIG. 2 is a flowchart of an embodiment of the vector migration-based recommendation method provided by an embodiment of the present application;

FIG. 3 is a flowchart of a specific implementation of step S202 in the embodiment of FIG. 2;

FIG. 4 is a schematic diagram of matrix factorization provided by an embodiment of the present application;

FIG. 5 is a flowchart of a specific implementation of step S301 in the embodiment of FIG. 3;

FIG. 6 is a flowchart of a specific implementation of step S302 in the embodiment of FIG. 3;

FIG. 7 is a schematic flowchart of the presetting of the deep neural network in an embodiment of the present application;

FIG. 8 is a flowchart of a specific implementation of step S203 in the embodiment of FIG. 2;

FIG. 9 is a flowchart of another embodiment of the vector migration-based recommendation method provided by an embodiment of the present application;

FIG. 10 is a schematic structural diagram of an embodiment of the vector migration-based recommendation device according to an embodiment of the present application;

FIG. 11 is a schematic structural diagram of another embodiment of the vector migration-based recommendation device according to an embodiment of the present application;

FIG. 12 is a schematic structural diagram of another embodiment of the vector migration-based recommendation device according to an embodiment of the present application;

FIG. 13 is a schematic structural diagram of the training module of the deep neural network according to an embodiment of the present application;

FIG. 14 is a schematic structural diagram of another embodiment of the vector migration-based recommendation device according to an embodiment of the present application;

FIG. 15 is a schematic structural diagram of another embodiment of the vector migration-based recommendation device according to an embodiment of the present application;

FIG. 16 is a schematic structural diagram of an embodiment of the computer device of the present application.
Detailed Description

To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain this application, and are not intended to limit it.

This application migrates the target feature vector from the trained latent semantic model into a preset deep neural network. Since the target feature vector is already trained, the deep neural network does not need to learn the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency. In addition, since the latent semantic model is trained on the full user and commodity data, the target feature vector better reflects users' long-term preferences, so the fused deep neural network model can also reflect users' long-term preferences without frequent updates, which improves its robustness. Meanwhile, the latent semantic model is trained offline, so growth in data volume does not affect the production efficiency of the fused deep neural network model.
As shown in FIG. 1, the system architecture 100 may include a server 105, a network 104, and terminal devices 101, 102, and 103. The network 104 is used to provide a medium for communication links between the server 105 and the terminal devices 101, 102, and 103. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables. The terminal devices 101, 102, and 103 may be various electronic devices that have a display screen, can download application software, and can read and write data, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers. Clients can use the terminal devices 101, 102, and 103 to interact with the server 105 through the network 104 to receive or obtain information.

It should be understood that the numbers of mobile terminals, networks, and servers in FIG. 1 are merely illustrative; there may be any number of mobile terminals, networks, and servers according to implementation needs.
As shown in FIG. 2, which is a flowchart of an embodiment of the vector migration-based recommendation method according to the present application, the method includes the steps:

S201: Obtain a latent semantic model trained with full data, where the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data.

In this embodiment, the vector migration-based recommendation method can run on an electronic device equipped with a recommendation system (for example, the mobile terminal shown in FIG. 1). The above full data may be the user's historical preference data, and the preference data may be obtained from the user's historical behavior data on commodities. The user's historical behavior data includes the number of times the user clicks on a certain commodity, the number of purchases, bookmarking, sharing, scoring, and so on. For example, if a user clicks on commodity A 18 times, the user has repeatedly browsed commodity A and can be considered to prefer it. As another example, if a user bookmarks commodity B so as not to lose track of it, the user can also be considered to prefer commodity B. As a further example, if a user scores commodity C as 9 and commodity D as 1, the user can be considered to prefer commodity C and dislike commodity D. The full matrix in the latent semantic model may be a scoring matrix (also called a recommendation matrix) obtained by inputting the full data into the latent semantic model for training and learning. For example, the user's historical purchase data is input into the latent semantic model and computed with the parameters in the model, converting the user's historical behavior data into corresponding score values to form a commodity recommendation matrix; the corresponding score value can be understood as a preference value, used to indicate the degree of the user's preference for commodities. The commodity recommendation matrix then includes user (User) data and commodity (Item) data, and may also be called the full matrix. The latent semantic model is a model trained offline on the full data; "trained with full data" means that after training, the latent semantic model can be used directly for prediction.
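As a concrete illustration of how historical behavior data can be converted into such a score matrix, here is a minimal Python sketch; the weighting of clicks, purchases, and ratings is a hypothetical choice for illustration only and is not prescribed by this application:

```python
import numpy as np

# Hypothetical behavior log: (user_id, item_id, clicks, purchases, rating out of 10)
logs = [(0, 0, 18, 1, None), (0, 1, 2, 0, None), (2, 2, 5, 2, 9)]

n_users, n_items = 3, 3
R = np.zeros((n_users, n_items))  # the full matrix of preference values R_UI
for u, i, clicks, buys, rating in logs:
    # Illustrative weighting: purchases count more than clicks;
    # an explicit rating, when present, dominates.
    score = 0.1 * clicks + 1.0 * buys
    if rating is not None:
        score = rating / 10.0
    R[u, i] = score
```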
S202: Extract a target feature vector from the latent semantic model, where the target feature vector includes a commodity feature vector.

In this embodiment, the feature vectors in the latent semantic model can be represented by the full matrix. For example, the full matrix can include two dimensions, user features (User) and commodity features (Item); the two dimensions intersect to form matrix units, and each matrix unit represents the corresponding user's preference value for the corresponding commodity. A group of preference values can be used to represent a feature vector, as shown in Table 1, where R11-R35 are the preference values corresponding to the matrix units: the feature vector of user feature User1 can be [R11, R12, R13, R14, R15], and the feature vector of commodity feature Item1 can be [R11, R21, R31].
   Item1 Item2 Item3 Item4 Item5
User1 R11 R12 R13 R14 R15
User2 R21 R22 R23 R24 R25
User3 R31 R32 R33 R34 R35
Table 1
The preference value can be obtained by computing a loss function over the user features and the commodity features, giving the user's preference value for the category; in the latent semantic model, recommendations can be made according to the user's preference values for commodities. The target feature vector to be extracted can be a commodity feature vector or a user feature vector: with a commodity feature vector, commodities can be recommended to users; with a user feature vector, prospective users can be recommended for a commodity. In this embodiment a commodity feature vector is preferred; it can be extracted by locating the data corresponding to the commodity feature vector in the trained latent semantic model, or obtained by matrix factorization of the full matrix. The target feature vector is preferably a commodity feature vector.
S203: Migrate the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network.

The target feature vector can be extracted from the latent semantic model and stored in an update database, and the target feature vector in the update database can be updated whenever the data in the latent semantic model is updated. The migration can be performed via the update database, or by extracting the target feature directly from the latent semantic model and then migrating it.

The deep neural network can be preset: for example pre-trained, downloaded from the Internet, or built and trained by the user. It should be understood that the deep neural network should include a feature vector layer corresponding to the target feature, so that the migration of the target feature vector can be completed; for example, when the target vector is a commodity feature vector, the deep neural network includes a commodity feature vector layer corresponding to the commodity feature.

The fused deep neural network obtained above can be understood as a deep neural network that incorporates the commodity feature vectors from the latent semantic model. Because the full set of commodity feature vectors is fused in, the deep neural network can reflect users' long-term, stable preferences, increasing the robustness of the model.
S204: Based on the fused deep neural network, output a prediction result for recommendation, where the prediction result includes a commodity prediction result.

Since the fused deep neural network incorporates the commodity feature vectors of the latent semantic model, the prediction result can reflect users' long-term preferences, and recommending according to those long-term preferences improves recommendation accuracy. Moreover, the feature vectors of the fused deep neural network have already been learned in the latent semantic model, so the full set of target feature vector attributes does not need to be trained again online, which keeps the deep neural network's training efficient and reduces model training time.
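Assuming a fused network `model` whose forward pass scores commodity IDs (see the sketch under step S601 further below), step S204 could be as simple as the following illustration:

```python
import torch

# Score every commodity with the fused network and recommend the top N.
scores = model(torch.arange(n_items)).squeeze(1)
top_n = torch.topk(scores, k=5).indices.tolist()  # commodity prediction result
```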
This application migrates the target feature vector from the trained latent semantic model into a preset deep neural network. Since the target feature vector is already trained, the deep neural network does not need to learn the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency. In addition, since the latent semantic model is trained on the full user and commodity data, the target feature vector better reflects users' long-term preferences, so the fused deep neural network model can also reflect users' long-term preferences without frequent updates, which improves its robustness. Meanwhile, the latent semantic model is trained offline, so growth in data volume does not affect the production efficiency of the fused deep neural network model.
Further, as shown in FIG. 3, the above step S202 specifically includes:

S301: Perform matrix factorization on the full matrix in the latent semantic model to obtain an implicit factor matrix including commodity features, where the full matrix includes commodity features and user features.

S302: Extract a commodity feature vector based on the implicit factor matrix of commodity features.

Specifically, the full matrix (commodity recommendation matrix) is obtained from the full data. For example, the full data includes data about users and commodities, and the full matrix is the matrix relationship between users and commodities, as shown in Table 2:
  Item1 Item2 Item3
User1 R11 R12 R13
User2 R21 R22 R23
User3 R31 R32 R33
Table 2
In Table 2, Item denotes a commodity, User denotes a user, and R11-R33 are preference data, which can indicate the number of times the user purchased or clicked on the commodity, or the user's preference value for the commodity.
Specifically, referring to FIG. 4, performing matrix factorization on the full matrix in the latent semantic model includes: decomposing the full matrix, through an implicit factor class, into two implicit factor matrices, namely an implicit factor matrix based on user features and an implicit factor matrix based on commodity features, where multiplying the implicit factor matrix of user features by the implicit factor matrix of commodity features reproduces the full matrix. The implicit factor matrix of commodity features includes the commodity features and the implicit factor features, and the commodity feature vectors can be represented by the implicit factor matrix based on commodity features; for example, the feature vectors of the items are item1 = [Q11, Q21, Q31], item2 = [Q12, Q22, Q32], item3 = [Q13, Q23, Q33]. The implicit factor matrix of user features includes the user features and the implicit factors, and the user feature vectors can be represented by the implicit factor matrix based on user features; for example, the feature vectors of the users are user1 = [P11, P12, P13], user2 = [P21, P22, P23].

By performing matrix factorization on the full matrix, the implicit factor matrix of commodity features can be obtained directly, and the commodity feature vector extracted from this implicit factor matrix is strongly representative, which can increase recommendation accuracy.
Further, as shown in FIG. 5, the above step S301 specifically includes:

S401: Use the stochastic gradient descent algorithm to optimize the loss function, iterating the parameter computation until the parameters converge.

S402: Obtain the implicit factor matrices from the converged parameters, where the parameters refer to the parameters of the matrix units in the implicit factor matrices.
In this embodiment of the application, the principle of matrix factorization is as follows:

$$R_{UI} = P_{U} \cdot Q_{I} = \sum_{K} P_{UK}\, Q_{KI}$$

From the above formula it can be understood that the full matrix can be obtained as the product of the two implicit factor matrices; for example, taking $R_{UI}$ to be R12, we have R12 = P_1 · Q_2 = P11·Q12 + P12·Q22 + P13·Q32.
The loss function is as follows:

$$C = \sum_{(U,I)} \Big( R_{UI} - \sum_{K} P_{UK}\, Q_{KI} \Big)^{2} + \lambda \lVert P_{U} \rVert^{2} + \lambda \lVert Q_{I} \rVert^{2}$$
In the above loss function, $R_{UI}$ represents the preference value, which is 1 or 0: a positive sample is 1 (preferred) and a negative sample is 0 (not preferred); $K$ indexes the implicit factors; and $\lambda\lVert P_{U}\rVert^{2} + \lambda\lVert Q_{I}\rVert^{2}$ is the regularization term, which prevents the loss function from overfitting. Taking partial derivatives with respect to the parameters $P_{UK}$ and $Q_{KI}$ determines the directions of gradient descent:

$$\frac{\partial C}{\partial P_{UK}} = -2\Big(R_{UI} - \sum_{K} P_{UK} Q_{KI}\Big)\, Q_{KI} + 2\lambda P_{UK}$$

$$\frac{\partial C}{\partial Q_{KI}} = -2\Big(R_{UI} - \sum_{K} P_{UK} Q_{KI}\Big)\, P_{UK} + 2\lambda Q_{KI}$$
Through iterative computation, the converged parameters $P_{UK}$ and $Q_{KI}$ are approached via the updates:

$$P_{UK} \leftarrow P_{UK} + \alpha\Big(\big(R_{UI} - \sum_{K} P_{UK} Q_{KI}\big)\, Q_{KI} - \lambda P_{UK}\Big)$$

$$Q_{KI} \leftarrow Q_{KI} + \alpha\Big(\big(R_{UI} - \sum_{K} P_{UK} Q_{KI}\big)\, P_{UK} - \lambda Q_{KI}\Big)$$
In the above formulas, α is the learning rate and λ is the regularization parameter. From $P_{UK}$ the implicit factor matrix based on user features is obtained: [P11, P12, P13], [P21, P22, P23], [P31, P32, P33]. From $Q_{KI}$ the implicit factor matrix based on commodity features is obtained: [Q11, Q12, Q13], [Q21, Q22, Q23], [Q31, Q32, Q33]. The above computation is carried out by the computer; the user only needs to set the calculation formulas and provide the calculation data.

Through the above computation, the two implicit factor matrices of the full matrix can be obtained iteratively, which facilitates the extraction of feature vectors.
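The iteration in steps S401-S402 can be sketched in a few lines of Python; this is a minimal illustration (dense loops over every matrix unit, fixed epoch count), not a production implementation:

```python
import numpy as np

def lfm_factorize(R, k=3, alpha=0.01, lam=0.02, epochs=200):
    """Factor the full matrix R (users x items) into P (users x k) and
    Q (k x items) by SGD on the regularized squared loss above."""
    n_users, n_items = R.shape
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(k, n_items))
    for _ in range(epochs):
        for u in range(n_users):
            for i in range(n_items):
                e = R[u, i] - P[u] @ Q[:, i]      # residual E_UI
                p_old = P[u].copy()               # keep the pre-update value
                P[u]    += alpha * (e * Q[:, i] - lam * p_old)
                Q[:, i] += alpha * (e * p_old   - lam * Q[:, i])
    return P, Q  # implicit factor matrices for user and commodity features

P, Q = lfm_factorize(R)  # R as built in the earlier sketch
```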
Further, as shown in FIG. 6, the above step S302 specifically includes:

S501: Extract the row or column where the category lies in the implicit factor matrix of commodity features as the target feature vector, where the target feature vector includes the matrix units corresponding to at least one category attribute and multiple implicit factors.
In this embodiment of the application, the implicit factor matrix of commodity features is the implicit factor matrix obtained in the above embodiment; this can be understood with reference to FIG. 4. The implicit factor matrix based on commodity features includes multiple implicit factors and at least one category attribute; the category attribute may be an attribute such as commodity name, commodity category, or commodity unit price. For example, when the item is the commodity unit price, the category attribute is the commodity unit price, and the target feature vector represents the degree of preference at a certain commodity unit price. The target feature vector refers to a vector formed by one category attribute and multiple implicit factors; for example, the feature vectors of the category item in FIG. 4 are item1 = [Q11, Q21, Q31], item2 = [Q12, Q22, Q32], item3 = [Q13, Q23, Q33].

In this way, according to different category attributes, different feature vectors can be extracted from the latent semantic matrix for migration, so that the fused deep neural network adapts to recommendation under different conditions.
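Continuing the sketch above, extracting the target feature vector then reduces to taking the relevant row or column of the implicit factor matrix:

```python
item_vec = Q[:, 0]  # commodity feature vector, e.g. item1 = [Q11, Q21, Q31]
user_vec = P[0, :]  # user feature vector,      e.g. user1 = [P11, P12, P13]
```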
Further, as shown in FIG. 7, in the above method, the presetting of the deep neural network specifically includes the steps:

S601: Obtain the initial weight parameters of the deep neural network, and train the deep neural network on the training set.

S602: Adjust the weight parameters in the deep neural network until the curve fits the training set, obtaining the final weight parameters.
In this embodiment of the application, the initial weight parameters can be set by the user based on experience, obtained from an open-source site such as GitHub, or imported from weight parameters trained by others. The deep neural network is trained on the training set, and the weight parameters in the deep neural network are adjusted continuously during training. Specifically, the training process includes weight initialization; the initialized weights are adjusted using the training data, and after the curve fits the training set, the final weight parameters of each layer are obtained. In this way, once the final weight parameters are available, one only needs to substitute the trained commodity feature vectors into the corresponding layer, and that layer's weight parameters can be used directly; the entire model does not need to be retrained because of the replacement of the feature vectors. Of course, in one possible embodiment the commodity feature vector has not been trained; in that case, retraining is needed to obtain that layer's weight parameters.

In this way, the feature vectors trained in the latent semantic model are substituted into the trained deep neural network, and prediction and recommendation can be performed directly, without training the fused deep neural network again, which improves production efficiency.
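As a sketch of the preset network of steps S601-S602, here is a minimal PyTorch-style ranking model whose first layer is the commodity feature vector (embedding) layer; the layer sizes and the use of PyTorch are illustrative assumptions, not requirements of this application:

```python
import torch
import torch.nn as nn

class RankingDNN(nn.Module):
    """Toy ranking model: commodity embedding layer followed by an MLP."""
    def __init__(self, n_items, k):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, k)  # commodity feature vector layer
        self.mlp = nn.Sequential(
            nn.Linear(k, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),       # predicted preference score
        )

    def forward(self, item_ids):
        return self.mlp(self.item_emb(item_ids))

model = RankingDNN(n_items=3, k=3)
# ... train on click sequences here to obtain the final weight parameters ...
```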
Further, as shown in FIG. 8, the above step S203 specifically includes:

S701: Determine the attribute of the extracted target feature vector.

S702: According to the attribute of the target feature vector, match the feature vector layer of the corresponding attribute in the deep neural network, and replace the vectors in that feature vector layer with the target feature vector.
The attributes of the target feature vector include user attributes, commodity attributes, and so on. The attribute of the extracted target feature vector can be determined from the implicit factor matrix to which it belongs: if the target feature vector is a user feature vector, it was extracted from the implicit factor matrix of user features; if it is a commodity feature vector, it was extracted from the implicit factor matrix of commodity features. It can also be judged from whether the feature vector is a row vector or a column vector; for example, in FIG. 4, the user feature vectors are row vectors and the commodity feature vectors are column vectors.

Replacing the feature vector layer in the deep neural network with the target feature vector may consist of deleting the vector data of the feature vector layer and importing the target feature vector data, thereby obtaining the fused neural network.
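Under the same illustrative assumptions as above, the replacement in step S702 amounts to overwriting the matched layer's weights with the trained vectors and freezing them, so the rest of the network's trained weights can be reused without retraining:

```python
# Q from lfm_factorize has shape (k, n_items); nn.Embedding stores (n_items, k).
model.item_emb.weight.data.copy_(torch.from_numpy(Q.T.copy()).float())
model.item_emb.weight.requires_grad = False  # the vectors need not be relearned
```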
Of course, to make the commodity recommendations that users receive more precise, commodity attributes can be subdivided into category attributes; for example, the category attributes may be attributes such as commodity name, commodity category, and commodity unit price.
Further, as shown in FIG. 9, the above method further includes:

S801: Detect whether the full matrix of the latent semantic model has been updated.

S802: If the full matrix of the latent semantic model has been updated, re-extract the target feature vector and update the vector data in the corresponding feature vector layer of the fused deep neural network.
In this embodiment of the application, to ensure the timeliness of the full data, the data of the latent semantic model can be updated periodically, for example once every one or two months. An ordinary deep neural network model has a much shorter update cycle and needs updating every few days. Because the fused deep neural network incorporates the latent semantic model's feature vectors trained on the full data, and those vectors are full and long-term, the fused deep neural network is likewise full, long-term, and stable; therefore, the update of the fused deep neural network model can be synchronized with the update schedule of the latent semantic model.
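Steps S801-S802 can be realized with a periodic check, for example comparing a fingerprint of the trained factor matrix and re-running the migration when it changes; a hypothetical sketch, where load_Q() stands in for whatever accessor the deployment provides:

```python
import hashlib
import torch

def maybe_refresh(model, load_Q, last_digest):
    """Re-migrate the commodity feature vectors if the LFM was retrained."""
    Q = load_Q()  # hypothetical accessor for the trained factor matrix
    digest = hashlib.md5(Q.tobytes()).hexdigest()
    if digest != last_digest:  # the full matrix was updated
        model.item_emb.weight.data.copy_(torch.from_numpy(Q.T.copy()).float())
    return digest
```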
By migrating the target feature vector from the trained latent semantic model into a preset deep neural network, and because the target feature vector is already trained, the embodiments of this application spare the deep neural network from learning the feature vector, which reduces the model scale of the fused deep neural network, shortens model training time, and improves production efficiency. In addition, since the latent semantic model is trained on the full user and commodity data, the target feature vector better reflects users' long-term preferences, which improves the robustness of the fused deep neural network model; meanwhile, the latent semantic model is trained offline, so growth in data volume does not affect the production efficiency of the fused deep neural network model.
A person of ordinary skill in the art can understand that all or part of the processes in the above method embodiments can be implemented by computer-readable instructions instructing the relevant hardware. The computer-readable instructions can be stored in a computer-readable non-volatile readable storage medium, and when executed, they may include the processes of the above method embodiments. The aforementioned non-volatile readable storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (Read-Only Memory, ROM), or a random access memory (Random Access Memory, RAM), etc.

It should be understood that although the steps in the flowcharts of the accompanying drawings are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering constraint on their execution, and they may be executed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
As shown in FIG. 10, which is a schematic diagram of the vector migration-based recommendation device provided by this embodiment, the above device 900 includes: an acquisition module 901, an extraction module 902, a migration module 903, and a recommendation module 904, wherein:

the acquisition module 901 is configured to obtain a latent semantic model trained with full data, where the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;

the extraction module 902 is configured to extract a target feature vector from the latent semantic model, where the target feature vector includes a commodity feature vector;

the migration module 903 is configured to migrate the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network; and

the recommendation module 904 is configured to output a prediction result for recommendation based on the fused deep neural network, where the prediction result includes a commodity prediction result.
Further, as shown in FIG. 11, the extraction module 902 includes a decomposition unit 9021 and an extraction unit 9022, wherein:

the decomposition unit 9021 is configured to perform matrix factorization on the full matrix in the latent semantic model to obtain an implicit factor matrix including commodity features, where the full matrix includes commodity features and user features; and

the extraction unit 9022 is configured to extract a commodity feature vector based on the implicit factor matrix of commodity features.

Further, as shown in FIG. 12, the decomposition unit 9021 includes a first calculation subunit 90211 and a second calculation subunit 90212, wherein:

the first calculation subunit 90211 is configured to use the stochastic gradient descent algorithm to optimize the loss function, iterating the parameter computation until the parameters converge; and

the second calculation subunit 90212 is configured to obtain the implicit factor matrices from the converged parameters, where the parameters refer to the parameters of the matrix units in the implicit factor matrices.

Further, the extraction unit 9022 is also configured to extract the row or column where the category lies in the implicit factor matrix of commodity features as the target feature vector, where the target feature vector includes the matrix units corresponding to at least one category attribute and multiple implicit factors.
Further, as shown in FIG. 13, the above device 900 further includes a training module 905, where the training module 905 includes a weight acquisition unit 9051 and a weight adjustment unit 9052:

the weight acquisition unit 9051 is configured to obtain the initial weight parameters of the deep neural network and train the deep neural network on the training set; and

the weight adjustment unit 9052 is configured to adjust the weight parameters in the deep neural network until the curve fits the training set, obtaining the final weight parameters.
Further, as shown in FIG. 14, the migration module 903 includes a judgment unit 9031 and a matching replacement unit 9032, wherein:

the judgment unit 9031 is configured to determine the attribute of the extracted target feature vector; and

the matching replacement unit 9032 is configured to match, according to the attribute of the target feature vector, the feature vector layer of the corresponding attribute in the deep neural network, and replace the vectors in that feature vector layer with the target feature vector.
Further, as shown in FIG. 15, the above device 900 further includes a detection module 906 and an update module 907, wherein:

the detection module 906 is configured to detect whether the full matrix of the latent semantic model has been updated; and

the update module 907 is configured to, if the full matrix of the latent semantic model has been updated, re-extract the target feature vector and update the vector data in the corresponding feature vector layer of the fused deep neural network.
The vector migration-based recommendation device provided by the embodiments of this application can implement the implementations of each vector migration-based recommendation method in the method embodiments of FIG. 2 to FIG. 9, with the corresponding beneficial effects; to avoid repetition, details are not repeated here.
To solve the above technical problems, an embodiment of this application also provides a computer device. Refer to FIG. 16, which is a block diagram of the basic structure of the computer device of this embodiment.

The computer device 15 includes a non-volatile memory 151, a processor 152, and a network interface 153 that are communicatively connected to each other through a system bus. It should be pointed out that the figure only shows a computer device 15 with components 151-153, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. Those skilled in the art will understand that the computer device here is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.

The computer device can be a desktop computer, a notebook, a palmtop computer, a cloud server, or other computing device. The computer device can interact with the user through a keyboard, mouse, remote control, touchpad, or voice control device.
The non-volatile memory 151 includes at least one type of readable storage medium, which includes flash memory, hard disks, multimedia cards, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical discs, and so on. In some embodiments, the non-volatile memory 151 may be an internal storage unit of the computer device 15, for example the hard disk or memory of the computer device 15. In other embodiments, the non-volatile memory 151 may also be an external storage device of the computer device 15, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device 15. Of course, the non-volatile memory 151 may also include both the internal storage unit of the computer device 15 and its external storage device. In this embodiment, the non-volatile memory 151 is generally used to store the operating system and various application software installed on the computer device 15, such as the computer-readable instructions of the vector migration-based recommendation method. In addition, the non-volatile memory 151 can also be used to temporarily store various types of data that have been output or will be output.
The processor 152 may, in some embodiments, be a central processing unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 152 is generally used to control the overall operation of the computer device 15. In this embodiment, the processor 152 is configured to run the computer-readable instructions or process the data stored in the non-volatile memory 151, for example to run the computer-readable instructions of the vector migration-based recommendation method.

The network interface 153 may include a wireless network interface or a wired network interface, and is generally used to establish communication connections between the computer device 15 and other electronic devices.
This application also provides another implementation, namely a non-volatile readable storage medium storing a vector migration-based recommendation procedure, where the procedure can be executed by at least one processor, so that the at least one processor executes the steps of the vector migration-based recommendation method in each of the above embodiments.
Through the description of the above implementations, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which is stored in a non-volatile readable storage medium (such as a ROM/RAM, magnetic disk, or optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the vector migration-based recommendation method of each embodiment of this application.

The terms "include" and "have" in the specification and claims of this application and in the above description of the drawings, and any variations thereof, are intended to cover non-exclusive inclusion. The terms "first", "second", and so on in the specification and claims of this application or in the above drawings are used to distinguish different objects, not to describe a specific order. Reference herein to an "embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to an independent or alternative embodiment mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.

The above embodiments are only preferred embodiments of this application and are not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall be included within its scope of protection.

Claims (20)

  1. A recommendation method based on vector migration, comprising the steps of:
    obtaining a latent semantic model trained with full data, wherein the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;
    extracting a target feature vector from the latent semantic model, wherein the target feature vector includes a commodity feature vector;
    migrating the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network; and
    outputting a prediction result for recommendation based on the fused deep neural network, wherein the prediction result includes a commodity prediction result.
  2. The recommendation method based on vector migration according to claim 1, wherein the step of extracting the target feature vector from the latent semantic model specifically comprises:
    performing matrix factorization on the full matrix in the latent semantic model to obtain an implicit factor matrix including commodity features, wherein the full matrix includes commodity features and user features; and
    extracting a commodity feature vector based on the implicit factor matrix of commodity features.
  3. The recommendation method based on vector migration according to claim 2, wherein the step of performing matrix factorization on the full matrix in the latent semantic model specifically comprises:
    using the stochastic gradient descent algorithm to optimize the loss function, iterating the parameter computation until the parameters converge; and
    obtaining the implicit factor matrices from the converged parameters, wherein the parameters refer to the parameters of the matrix units in the implicit factor matrices.
  4. The recommendation method based on vector migration according to claim 3, wherein the implicit factor matrices are obtained from the converged parameters as:

    $$P_{UK} \leftarrow P_{UK} + \alpha\Big(\big(R_{UI} - \sum_{K} P_{UK} Q_{KI}\big)\, Q_{KI} - \lambda P_{UK}\Big)$$

    $$Q_{KI} \leftarrow Q_{KI} + \alpha\Big(\big(R_{UI} - \sum_{K} P_{UK} Q_{KI}\big)\, P_{UK} - \lambda Q_{KI}\Big)$$

    wherein α is the learning rate, λ is the regularization parameter, R_UI is the preference value in the full matrix from which the implicit factor matrices are obtained, and P_UK and Q_KI are the two groups of implicit factors: from P_UK the implicit factor matrix based on user features is obtained, and from Q_KI the implicit factor matrix based on commodity features is obtained.
  5. The recommendation method based on vector migration according to claim 2, wherein the step of extracting a commodity feature vector based on the implicit factor matrix of commodity features specifically comprises:
    extracting the row or column where the category lies in the implicit factor matrix of commodity features as the target feature vector, wherein the target feature vector includes the matrix units corresponding to at least one category attribute and multiple implicit factors.
  6. The recommendation method based on vector migration according to claim 1, wherein the presetting of the deep neural network specifically comprises the steps of:
    obtaining the initial weight parameters of the deep neural network, and training the deep neural network on the training set; and
    adjusting the weight parameters in the deep neural network until the curve fits the training set, obtaining the final weight parameters.
  7. The recommendation method based on vector migration according to claim 1, wherein the step of migrating the target feature vector into the corresponding feature vector layer of the preset deep neural network specifically comprises:
    determining the attribute of the extracted target feature vector; and
    matching, according to the attribute of the target feature vector, the feature vector layer of the corresponding attribute in the deep neural network, and replacing the vectors in that feature vector layer with the target feature vector.
  8. The recommendation method based on vector migration according to claim 1, further comprising, after the step of outputting a prediction result for recommendation based on the fused deep neural network:
    detecting whether the full matrix of the latent semantic model has been updated; and
    if the full matrix of the latent semantic model has been updated, re-extracting the target feature vector and updating the vector data in the corresponding feature vector layer of the fused deep neural network.
  9. A recommendation apparatus based on vector migration, comprising:
    an acquisition module, configured to obtain a latent semantic model trained with full data, wherein the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;
    an extraction module, configured to extract a target feature vector from the latent semantic model, wherein the target feature vector includes a commodity feature vector;
    a migration module, configured to migrate the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network; and
    a recommendation module, configured to output a prediction result for recommendation based on the fused deep neural network, wherein the prediction result includes a commodity prediction result.
  10. The recommendation apparatus based on vector migration according to claim 9, wherein the extraction module comprises:
    a decomposition unit, configured to perform matrix factorization on the full matrix in the latent semantic model to obtain an implicit factor matrix including commodity features, wherein the full matrix includes commodity features and user features; and
    an extraction unit, configured to extract a commodity feature vector based on the implicit factor matrix of commodity features.
  11. The recommendation apparatus based on vector migration according to claim 10, wherein the decomposition unit comprises:
    a first calculation subunit, configured to use the stochastic gradient descent algorithm to optimize the loss function, iterating the parameter computation until the parameters converge; and
    a second calculation subunit, configured to obtain the implicit factor matrices from the converged parameters, wherein the parameters refer to the parameters of the matrix units in the implicit factor matrices.
  12. The recommendation apparatus based on vector migration according to claim 9, further comprising:
    a detection module, configured to detect whether the full matrix of the latent semantic model has been updated; and
    an update module, configured to, if the full matrix of the latent semantic model has been updated, re-extract the target feature vector and update the vector data in the corresponding feature vector layer of the fused deep neural network.
  13. A computer device, comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor, when executing the computer-readable instructions, implements the following steps:
    obtaining a latent semantic model trained with full data, wherein the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;
    extracting a target feature vector from the latent semantic model, wherein the target feature vector includes a commodity feature vector;
    migrating the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network; and
    outputting a prediction result for recommendation based on the fused deep neural network, wherein the prediction result includes a commodity prediction result.
  14. The computer device according to claim 13, wherein the step of extracting the target feature vector from the latent semantic model specifically comprises:
    performing matrix factorization on the full matrix in the latent semantic model to obtain an implicit factor matrix including commodity features, wherein the full matrix includes commodity features and user features; and
    extracting a commodity feature vector based on the implicit factor matrix of commodity features.
  15. The computer device according to claim 14, wherein the step of performing matrix factorization on the full matrix in the latent semantic model specifically comprises:
    using the stochastic gradient descent algorithm to optimize the loss function, iterating the parameter computation until the parameters converge; and
    obtaining the implicit factor matrices from the converged parameters, wherein the parameters refer to the parameters of the matrix units in the implicit factor matrices.
  16. The computer device according to claim 14, wherein the step of extracting a commodity feature vector based on the implicit factor matrix of commodity features specifically comprises:
    extracting the row or column where the category lies in the implicit factor matrix of commodity features as the target feature vector, wherein the target feature vector includes the matrix units corresponding to at least one category attribute and multiple implicit factors.
  17. A non-volatile readable storage medium on which executable code is stored, wherein the executable code, when executed by a processor, implements the following steps of the recommendation method based on vector migration:
    obtaining a latent semantic model trained with full data, wherein the full data includes user historical preference data, and the trained latent semantic model includes a full matrix trained from the full data;
    extracting a target feature vector from the latent semantic model, wherein the target feature vector includes a commodity feature vector;
    migrating the target feature vector into the corresponding feature vector layer of a preset deep neural network to obtain a fused deep neural network; and
    outputting a prediction result for recommendation based on the fused deep neural network, wherein the prediction result includes a commodity prediction result.
  18. The non-volatile readable storage medium according to claim 17, wherein the step of extracting the target feature vector from the latent semantic model specifically comprises:
    performing matrix factorization on the full matrix in the latent semantic model to obtain an implicit factor matrix including commodity features, wherein the full matrix includes commodity features and user features; and
    extracting a commodity feature vector based on the implicit factor matrix of commodity features.
  19. The non-volatile readable storage medium according to claim 18, wherein the step of performing matrix factorization on the full matrix in the latent semantic model specifically comprises:
    using the stochastic gradient descent algorithm to optimize the loss function, iterating the parameter computation until the parameters converge; and
    obtaining the implicit factor matrices from the converged parameters, wherein the parameters refer to the parameters of the matrix units in the implicit factor matrices.
  20. The non-volatile readable storage medium according to claim 18, wherein the step of extracting a commodity feature vector based on the implicit factor matrix of commodity features specifically comprises:
    extracting the row or column where the category lies in the implicit factor matrix of commodity features as the target feature vector, wherein the target feature vector includes the matrix units corresponding to at least one category attribute and multiple implicit factors.
PCT/CN2019/116921 2019-09-16 2019-11-10 Recommendation method and apparatus based on vector migration, computer device, and non-volatile readable storage medium WO2021051515A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910871369.2 2019-09-16
CN201910871369.2A CN110838020B (zh) Recommendation method and apparatus based on vector migration, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2021051515A1 (zh)

Family

ID=69574664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/116921 WO2021051515A1 (zh) 2019-09-16 2019-11-10 基于向量迁移的推荐方法、装置、计算机设备及非易失性可读存储介质

Country Status (2)

Country Link
CN (1) CN110838020B (zh)
WO (1) WO2021051515A1 (zh)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553745A (zh) * 2020-05-08 2020-08-18 深圳前海微众银行股份有限公司 基于联邦的模型更新方法、装置、设备及计算机存储介质
CN112347361B (zh) * 2020-11-16 2024-03-01 百度在线网络技术(北京)有限公司 推荐对象的方法、神经网络及其训练方法、设备和介质
CN112418423B (zh) * 2020-11-24 2023-08-15 百度在线网络技术(北京)有限公司 利用神经网络向用户推荐对象的方法、设备和介质
CN115396831A (zh) * 2021-05-08 2022-11-25 中国移动通信集团浙江有限公司 交互模型生成方法、装置、设备及存储介质
CN113656692B (zh) * 2021-08-17 2023-05-30 中国平安财产保险股份有限公司 基于知识迁移算法的产品推荐方法、装置、设备及介质
CN113822776B (zh) * 2021-09-29 2023-11-03 中国平安财产保险股份有限公司 课程推荐方法、装置、设备及存储介质
CN114547482B (zh) * 2022-03-03 2023-01-20 智慧足迹数据科技有限公司 业务特征生成方法、装置、电子设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018095049A1 (zh) * 2016-11-22 2018-05-31 华为技术有限公司 Method and apparatus for generating recommendation results
CN109359793A (zh) * 2018-08-03 2019-02-19 阿里巴巴集团控股有限公司 Prediction model training method and apparatus for new scenarios
CN110019965A (zh) * 2019-02-28 2019-07-16 北京达佳互联信息技术有限公司 Recommendation method and apparatus for emoticon images, electronic device, and storage medium
CN110147882A (zh) * 2018-09-03 2019-08-20 腾讯科技(深圳)有限公司 Neural network model training method, crowd diffusion method, apparatus, and device
CN110162693A (zh) * 2019-03-04 2019-08-23 腾讯科技(深圳)有限公司 Information recommendation method and server

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107451855A (zh) * 2017-07-13 2017-12-08 南京师范大学 一种图构建与l1正则矩阵分解联合学习的推荐方法
CN108090229A (zh) * 2018-01-10 2018-05-29 广东工业大学 一种基于卷积神经网络确定评分矩阵的方法和装置
CN108573399B (zh) * 2018-02-28 2022-03-18 中国银联股份有限公司 基于转移概率网络的商户推荐方法及其系统
CN109241440A (zh) * 2018-09-29 2019-01-18 北京工业大学 一种基于深度学习的面向隐式反馈推荐方法
CN110210933B (zh) * 2019-05-21 2022-02-11 清华大学深圳研究生院 一种基于生成对抗网络的隐语义推荐方法


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113222706A (zh) * 2021-05-25 2021-08-06 深圳和锐网络科技有限公司 商品的二次推送方法、装置、电子设备及存储介质
CN113222706B (zh) * 2021-05-25 2024-01-30 深圳和锐网络科技有限公司 商品的二次推送方法、装置、电子设备及存储介质
CN113688327A (zh) * 2021-08-31 2021-11-23 中国平安人寿保险股份有限公司 融合神经图协同滤波网络的数据预测方法、装置及设备
CN113706204A (zh) * 2021-08-31 2021-11-26 中国平安财产保险股份有限公司 基于深度学习的权益发放方法、装置、设备及存储介质
CN113706204B (zh) * 2021-08-31 2024-04-05 中国平安财产保险股份有限公司 基于深度学习的权益发放方法、装置、设备及存储介质
CN114138323A (zh) * 2021-11-13 2022-03-04 苏州浪潮智能科技有限公司 一种产品多版本开发的管理方法、装置、设备及可读介质
CN114138323B (zh) * 2021-11-13 2023-08-18 苏州浪潮智能科技有限公司 一种产品多版本开发的管理方法、装置、设备及可读介质
CN114596120A (zh) * 2022-03-15 2022-06-07 江苏衫数科技集团有限公司 一种商品销量预测方法、系统、设备及存储介质
CN114596120B (zh) * 2022-03-15 2024-01-05 江苏衫数科技集团有限公司 一种商品销量预测方法、系统、设备及存储介质
CN116522003A (zh) * 2023-07-03 2023-08-01 之江实验室 基于嵌入表压缩的信息推荐方法、装置、设备和介质
CN116522003B (zh) * 2023-07-03 2023-09-12 之江实验室 基于嵌入表压缩的信息推荐方法、装置、设备和介质

Also Published As

Publication number Publication date
CN110838020A (zh) 2020-02-25
CN110838020B (zh) 2023-06-23

Similar Documents

Publication Publication Date Title
WO2021051515A1 (zh) Recommendation method and apparatus based on vector migration, computer device, and non-volatile readable storage medium
US10803377B2 (en) Content presentation based on a multi-task neural network
CN109102127B Commodity recommendation method and apparatus
CN110889747B Commodity recommendation method, apparatus, system, computer device, and storage medium
WO2015148422A1 (en) Recommendation system with dual collaborative filter usage matrix
WO2019061989A1 Loan risk control method, electronic device, and readable storage medium
WO2022016522A1 Recommendation model training method, recommendation method, apparatus, and computer-readable medium
US9715486B2 (en) Annotation probability distribution based on a factor graph
WO2019072128A1 Object recognition method and system
EP2997538A1 (en) Language proficiency detection in social applications
US11403700B2 (en) Link prediction using Hebbian graph embeddings
CN108470052B Shilling-attack-resistant recommendation algorithm based on matrix completion
JP2021103542A Information providing device, information providing method, and program
WO2021174877A1 Processing method for a target detection model based on intelligent decision-making, and related device
CN113781129B Intelligent marketing strategy generation method and system
US10949917B1 (en) Contextual graphical user interfaces
US10909145B2 (en) Techniques for determining whether to associate new user information with an existing user
US11770407B2 (en) Methods and apparatuses for defending against data poisoning attacks in recommender systems
CN110598120A Behavior-data-based financial product recommendation method, apparatus, and device
CN111309834A Method and apparatus for matching wireless hotspots with points of interest
CN116204714A Recommendation method and apparatus, electronic device, and storage medium
CN109063120B Clustering-based collaborative filtering recommendation method and apparatus
CN113779380A Cross-domain recommendation and content recommendation method, apparatus, and device
US11601509B1 (en) Systems and methods for identifying entities between networks
CN114756758A Hybrid recommendation method and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945812

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 18/07/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19945812

Country of ref document: EP

Kind code of ref document: A1