CN115222486A - Article recommendation model training method, article recommendation device and storage medium - Google Patents

Article recommendation model training method, article recommendation device and storage medium

Info

Publication number
CN115222486A
CN115222486A (application CN202210906295.3A)
Authority
CN
China
Prior art keywords
historical
information
article
comment
loss function
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210906295.3A
Other languages
Chinese (zh)
Other versions
CN115222486B (en)
Inventor
王健宗
李泽远
司世景
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN202210906295.3A
Publication of CN115222486A
Application granted
Publication of CN115222486B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06Q: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 30/00: Commerce
    • G06Q 30/06: Buying, selling or leasing transactions
    • G06Q 30/0601: Electronic shopping [e-shopping]
    • G06Q 30/0631: Item recommendations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30: Information retrieval of unstructured textual data
    • G06F 16/33: Querying
    • G06F 16/3331: Query processing
    • G06F 16/334: Query execution
    • G06F 16/3344: Query execution using natural language analysis
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00: Handling natural language data
    • G06F 40/30: Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Finance (AREA)
  • Computational Linguistics (AREA)
  • Accounting & Taxation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiments of the present application provide an article recommendation model training method, an article recommendation device, and a storage medium, belonging to the technical field of artificial intelligence. The method includes: acquiring a plurality of pieces of historical user information, historical article information, historical comment information of the historical article information, real article labels, and real scoring labels; inputting them into a preset prediction model to obtain historical user embedding vectors, historical article embedding vectors, and historical comment embedding vectors, and then determining a point-by-point loss function; determining prediction scoring information according to the historical user embedding vectors and the historical comment embedding vectors, and determining a pairwise loss function according to the prediction scoring information and the real scoring labels; and training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model. With the method and device, the article recommendation model can be prevented from being affected by semantic deviation in comment information, the accuracy of the prediction scoring information is guaranteed, and the accuracy of article recommendation is improved.

Description

Article recommendation model training method, article recommendation device and storage medium
Technical Field
The present application relates to, but not limited to, the technical field of artificial intelligence, and in particular, to an article recommendation model training method, an article recommendation apparatus, and a storage medium.
Background
After purchasing an article, a user generally comments on it. At present, an article recommendation system integrates every piece of comment information from all users into the same document, determines a scoring standard from that document, scores each piece of comment information according to the standard, determines the overall score of each article, and recommends high-scoring articles to users. However, because different users have different wording habits and the same user writes differently in different mood states, the comment information carries semantic deviation that distorts the scoring standard, so the accuracy of article recommendation is low.
Disclosure of Invention
The following is a summary of the subject matter described in detail herein. This summary is not intended to limit the scope of the claims.
The embodiment of the application provides an article recommendation model training method, an article recommendation device and a storage medium, and can improve the recommendation accuracy of an article recommendation system.
In order to achieve the above object, a first aspect of the embodiments of the present application provides an article recommendation model training method, the method including: acquiring a plurality of pieces of historical user information, historical article information, historical comment information of the historical article information, real article labels, and real scoring labels, wherein the historical comment information is matched with the historical user information, the real article labels are matched with the historical article information, and the real scoring labels are matched with the historical comment information; inputting the historical user information, any two pieces of historical article information, and the historical comment information of the two pieces of historical article information into a preset prediction model to obtain a historical user embedding vector, historical article embedding vectors, and historical comment embedding vectors; determining a point-by-point loss function according to the historical user embedding vector, the historical article embedding vectors, and the real article labels; determining prediction scoring information according to the historical user embedding vector and the historical comment embedding vectors, and determining a pairwise loss function according to the prediction scoring information and the real scoring labels; and training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model.
In some embodiments, the predictive model includes a user network, an item network, a review original network, and a review momentum network, wherein the review momentum network has the same network structure as the review original network.
In some embodiments, the historical comment embedded vector comprises a first historical comment pair embedded vector and a second historical comment pair embedded vector; the step of inputting the historical user information, any two pieces of historical article information and any two pieces of historical comment information of the historical article information into a preset prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector includes: inputting the historical user information into the user network to obtain a historical user embedding vector; inputting any two historical article information into the article network to obtain two historical article embedding vectors; inputting historical comment information of any two pieces of historical article information into the comment original network to obtain the first historical comment pair embedding vector; and inputting the historical comment information of any two historical article information into the comment momentum network to obtain the second historical comment pair embedded vector.
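As a rough illustration of the four-network structure described above, the sketch below uses simple embedding tables for the user and item networks and a single linear bag-of-words layer for the two comment networks. All layer choices, dimensions, and ids here are hypothetical stand-ins, not the patent's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class EmbeddingNet:
    """Stand-in for the user/item networks: maps an integer id to a dense vector."""
    def __init__(self, num_ids, dim):
        self.weight = rng.normal(scale=0.1, size=(num_ids, dim))
    def __call__(self, idx):
        return self.weight[idx]

class CommentEncoder:
    """Stand-in for the comment networks: a linear layer over a bag-of-words vector."""
    def __init__(self, vocab, dim):
        self.weight = rng.normal(scale=0.1, size=(vocab, dim))
    def __call__(self, bow):
        return bow @ self.weight
    def copy(self):
        clone = CommentEncoder.__new__(CommentEncoder)
        clone.weight = self.weight.copy()   # same structure and initial parameters
        return clone

dim = 8
user_net = EmbeddingNet(num_ids=100, dim=dim)           # user network
item_net = EmbeddingNet(num_ids=500, dim=dim)           # item network
comment_original = CommentEncoder(vocab=1000, dim=dim)  # comment original network
comment_momentum = comment_original.copy()              # comment momentum network

u = user_net(3)                       # historical user embedding vector
v1, v2 = item_net(10), item_net(42)   # two historical item embedding vectors
bow = np.zeros(1000)
bow[[5, 17]] = 1.0                    # toy bag-of-words for one comment
r1 = comment_original(bow)            # first historical comment pair embedding vector
r2 = comment_momentum(bow)            # second historical comment pair embedding vector
```

Because the comment momentum network is created as a copy of the comment original network, the two produce identical embeddings until the momentum update described later moves their parameters apart.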
In some embodiments, training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model includes: determining a model total loss function according to the point-by-point loss function and the pairwise loss function; updating model parameters of the user network, the item network, and the comment original network according to the model total loss function; determining a momentum update function according to the updated comment original network and a preset momentum update coefficient; and updating the model parameters of the comment momentum network according to the momentum update function, so as to obtain the article recommendation model.
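The update order described in these steps can be sketched as follows. The toy parameter vectors, gradient values, learning rate, and momentum coefficient are all hypothetical, and the gradients of the model total loss function are stood in by fixed arrays:

```python
import numpy as np

def train_step(params, grads, lr=0.1, delta=0.9):
    """One update round in the order described: gradient-update the user network,
    item network, and comment original network from the model total loss, then
    momentum-update the comment momentum network from the refreshed original
    network."""
    for name in ("user", "item", "comment_original"):
        params[name] = params[name] - lr * grads[name]      # SGD-style update
    params["comment_momentum"] = (delta * params["comment_momentum"]
                                  + (1 - delta) * params["comment_original"])
    return params

# Toy one-parameter-vector-per-network setup; grads stand in for real gradients.
params = {k: np.ones(3) for k in ("user", "item", "comment_original", "comment_momentum")}
grads = {k: np.full(3, 0.5) for k in ("user", "item", "comment_original")}
params = train_step(params, grads, lr=0.1, delta=0.9)
```

Note that the momentum network is updated only after the original network's gradient step, so it always lags the original network by an exponential moving average.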
In some embodiments, the formula of the point-by-point loss function is:

L_1 = −Σ_{i=1..m} Σ_{j=1..n} [ y_ij·log(p_ij) + (1 − y_ij)·log(1 − p_ij) ]

wherein L_1 is the point-by-point loss function; y_ij is the real article label corresponding to the i-th historical user information and the j-th historical article information; p_ij is the predicted article information corresponding to the i-th historical user information and the j-th historical article information; m is the number of all the historical user information; and n is the number of all the historical article information. Further, p_ij = u_i · v_j, wherein u_i is the historical user embedding vector corresponding to the i-th historical user information and v_j is the historical article embedding vector corresponding to the j-th historical article information.

The formula of the pairwise loss function is:

L_2 = Σ_{i=1..m} Σ_{s=1..n} Σ_{t=1..n} I(y_is < y_it) · max(0, u_i·r′_s − u_i·r′_t)

wherein L_2 is the pairwise loss function; y_is is the real scoring label corresponding to the i-th historical user information and the s-th historical comment information; y_it is the real scoring label corresponding to the i-th historical user information and the t-th historical comment information; r′_s is the second historical comment pair embedding vector corresponding to the s-th historical comment information; r′_t is the second historical comment pair embedding vector corresponding to the t-th historical comment information; m is the number of all the historical user information; and n is the number of all the historical comment information. When y_is < y_it, I(y_is < y_it) = 1; otherwise, I(y_is < y_it) = 0. The term max(0, u_i·r′_s − u_i·r′_t) takes the maximum of 0 and u_i·r′_s − u_i·r′_t.

The formula of the model total loss function is:

L_total = L_1 + λ_1·L_2 + λ_2·L_reg

wherein L_total is the model total loss function, λ_1 and λ_2 are preset hyper-parameters, and L_reg is a regularization term. The formula of the regularization term is:

L_reg = Σ_{k=1..K} θ_k²

wherein θ_k is the k-th model parameter in the prediction model, and K is the number of all model parameters in the prediction model.
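A small numerical sketch of the three losses follows. A binary cross-entropy form for the point-by-point loss and a hinge over comment pairs for the pairwise loss are assumed from the variable definitions above (the original equation images are not reproduced here), and the sigmoid squashing of p_ij, the random embeddings, and the hyper-parameter values are illustrative only:

```python
import numpy as np

def pointwise_loss(Y, U, V, eps=1e-9):
    """Point-by-point loss as binary cross-entropy over all (user, item) pairs.
    Y[i, j] is the real article label; the prediction p_ij = u_i . v_j is passed
    through a sigmoid here to keep it in (0, 1), which is an assumption."""
    P = 1.0 / (1.0 + np.exp(-(U @ V.T)))
    return float(-np.mean(Y * np.log(P + eps) + (1 - Y) * np.log(1 - P + eps)))

def pairwise_loss(Yr, U, R):
    """Pairwise hinge loss over comment pairs: whenever the real score of comment s
    is below that of comment t (indicator I(y_is < y_it) = 1), penalise scoring
    u_i . r'_s above u_i . r'_t. Averaged here for readability."""
    m, n = Yr.shape
    total = 0.0
    for i in range(m):
        for s in range(n):
            for t in range(n):
                if Yr[i, s] < Yr[i, t]:                       # indicator term
                    total += max(0.0, float(U[i] @ R[s] - U[i] @ R[t]))
    return total / (m * n * n)

def total_loss(l1, l2, params, lam1=0.5, lam2=1e-4):
    """Model total loss L_total = L_1 + lam1 * L_2 + lam2 * L_reg, where L_reg is
    the sum of squared model parameters."""
    l_reg = sum(float(np.sum(p ** 2)) for p in params)
    return l1 + lam1 * l2 + lam2 * l_reg

rng = np.random.default_rng(1)
U = rng.normal(size=(4, 8))                            # user embeddings
V = rng.normal(size=(6, 8))                            # item embeddings
R = rng.normal(size=(5, 8))                            # comment (momentum) embeddings
Y = rng.integers(0, 2, size=(4, 6)).astype(float)      # real article labels (0/1)
Yr = rng.integers(1, 6, size=(4, 5)).astype(float)     # real scoring labels (1..5)

l1 = pointwise_loss(Y, U, V)
l2 = pairwise_loss(Yr, U, R)
l = total_loss(l1, l2, [U, V, R])
```

Since λ_1·L_2 and λ_2·L_reg are both non-negative, the total loss never falls below the point-by-point term alone.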
In some embodiments, the formula of the momentum update function is: w_m = δ·w_m′ + (1 − δ)·w_v, wherein w_m is the updated model parameter of the comment momentum network, δ is the momentum update coefficient, w_m′ is the model parameter of the comment momentum network before the update, and w_v is the updated model parameter of the comment original network.
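The momentum update itself is a single line; a minimal sketch with toy parameter vectors follows (the coefficient value is illustrative):

```python
import numpy as np

def momentum_update(w_m_prev, w_v, delta):
    """w_m = delta * w_m' + (1 - delta) * w_v: the comment momentum network's
    parameters track the updated comment original network's parameters as an
    exponential moving average."""
    return delta * w_m_prev + (1.0 - delta) * w_v

w_m_prev = np.zeros(4)   # comment momentum network parameters before the update
w_v = np.ones(4)         # comment original network parameters after its update
w_m = momentum_update(w_m_prev, w_v, delta=0.9)
```

With δ close to 1, the momentum network changes slowly, which smooths out fluctuations in the comment original network from batch to batch.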
To achieve the above object, a second aspect of an embodiment of the present application provides an item recommendation method, including: obtaining target user information, a plurality of pieces of target article information and target comment information of the target article information, and inputting the target user information, the target article information and the target comment information into an article recommendation model to obtain prediction score information of each piece of target article information, wherein the article recommendation model is obtained by the article recommendation model training method of the first aspect; and determining item recommendation information in a plurality of target item information based on the prediction scoring information.
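The recommendation step described in this aspect can be sketched as follows, assuming the prediction score of each target article is the dot product of the user embedding and the article's comment embedding (the vectors and article names below are illustrative):

```python
import numpy as np

def recommend(user_vec, comment_vecs, item_ids, top_k=2):
    """Score each target article as the dot product of the user embedding with
    the article's comment embedding, and return the top-k article ids."""
    scores = comment_vecs @ user_vec        # predicted score per target article
    order = np.argsort(-scores)             # best first
    return [item_ids[i] for i in order[:top_k]]

rng = np.random.default_rng(2)
user_vec = rng.normal(size=8)
comment_vecs = rng.normal(size=(5, 8))      # one comment embedding per target article
ids = ["tv", "phone", "laptop", "camera", "tablet"]
picks = recommend(user_vec, comment_vecs, ids, top_k=2)
```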
In order to achieve the above object, a third aspect of the embodiments of the present application provides an article recommendation model training apparatus, including: an acquisition unit, configured to acquire a plurality of pieces of historical user information, historical article information, historical comment information of the historical article information, real article labels, and real scoring labels, wherein the historical comment information is matched with the historical user information, the real article labels are matched with the historical article information, and the real scoring labels are matched with the historical comment information; an input unit, configured to input the historical user information, any two pieces of historical article information, and the historical comment information of the two pieces of historical article information into a preset prediction model to obtain a historical user embedding vector, historical article embedding vectors, and historical comment embedding vectors; a first determining unit, configured to determine a point-by-point loss function according to the historical user embedding vector, the historical article embedding vectors, and the real article labels; a second determining unit, configured to determine prediction scoring information according to the historical user embedding vector and the historical comment embedding vectors, and determine a pairwise loss function according to the prediction scoring information and the real scoring labels; and a training unit, configured to train the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model.
In order to achieve the above object, a fourth aspect of the embodiments of the present application provides an electronic device, which includes a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the item recommendation model training method according to the first aspect.
In order to achieve the above object, a fifth aspect of the embodiments of the present application provides a storage medium, which is a computer-readable storage medium storing one or more programs executable by one or more processors to implement the article recommendation model training method according to the first aspect or the article recommendation method according to the second aspect.
The embodiment of the application provides an article recommendation model training method, an article recommendation device and a storage medium, and the method comprises the following steps: acquiring a plurality of historical user information, historical article information, historical comment information of the historical article information, a real article tag and a real scoring tag, wherein the historical comment information is matched with the historical user information, the real article tag is matched with the historical article information, and the real scoring tag is matched with the historical comment information; inputting the historical user information, any two pieces of historical article information and the historical comment information of any two pieces of historical article information into a preset prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector; determining a point-by-point loss function according to the historical user embedding vector, the historical article embedding vector and the real article label; determining prediction scoring information according to the historical user embedding vector and the historical comment embedding vector, and determining a pairwise loss function according to the prediction scoring information and the real scoring tag; and training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model. 
According to the scheme provided by the embodiments of the present application, historical user information, historical article information, and historical comment information are used as training data and input into a prediction model to obtain a historical user embedding vector, historical article embedding vectors, and historical comment embedding vectors; a point-by-point loss function is determined by combining the real article labels; prediction scoring information is then determined from the historical user embedding vector and the historical comment embedding vectors, and a pairwise loss function is determined by combining the real scoring labels. Training the prediction model with the point-by-point loss function lets the article recommendation model predict user preferences more accurately, while training it with the pairwise loss function prevents the article recommendation model from being affected by semantic deviation in comment information, guarantees the accuracy of the prediction scoring information, and improves the accuracy of article recommendation.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
The accompanying drawings are included to provide a further understanding of the claimed subject matter and are incorporated in and constitute a part of this specification, illustrate embodiments of the subject matter and together with the description serve to explain the principles of the subject matter and not to limit the subject matter.
FIG. 1 is a flowchart of an item recommendation model training method provided in one embodiment of the present application;
FIG. 2 is a flow chart of a predictive model information input process provided by another embodiment of the present application;
FIG. 3 is a flow chart of a method for obtaining an item recommendation model according to another embodiment of the present application;
FIG. 4 is a flow chart of an item recommendation method provided in another embodiment of the present application;
FIG. 5 is a system block diagram of an item recommendation model provided in another embodiment of the present application;
FIG. 6 is a schematic structural diagram of an article recommendation model training apparatus according to another embodiment of the present application;
fig. 7 is a schematic hardware structure diagram of an electronic device according to another embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of and not restrictive on the broad application.
In the description of the present application, "several" means one or more, and "a plurality of" means two or more; terms such as "greater than", "less than", and "exceeding" are understood as excluding the stated number, while terms such as "above", "below", and "within" are understood as including the stated number.
It is noted that while functional block divisions are provided in device diagrams and logical sequences are shown in flowcharts, in some cases, steps shown or described may be performed in sequences other than block divisions within devices or flowcharts. The terms "first," "second," and the like in the description, in the claims, or in the foregoing drawings, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
First, several terms referred to in the present application are resolved:
artificial Intelligence (AI): the method is a new technical science for researching and developing theories, methods, technologies and application systems for simulating, extending and expanding human intelligence; artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and produces a new intelligent machine that can react in a manner similar to human intelligence, and research in this field includes robotics, language recognition, image recognition, natural language processing, and expert systems, among others. The artificial intelligence can simulate the information process of human consciousness and thinking. Artificial intelligence is also a theory, method, technique and application system that uses a digital computer or a machine controlled by a digital computer to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use the knowledge to obtain the best results.
After purchasing an article, a user usually reviews it. Currently, an article recommendation system integrates every piece of review information from all users into the same document, determines a scoring standard from that document, scores each piece of review information according to the standard, determines the overall score of each article, and recommends high-scoring articles to users. However, because different users have different wording habits and the same user writes differently in different mood states, the review information carries semantic deviation that distorts the scoring standard, so the accuracy of article recommendation is low.
Aiming at the problem of low article recommendation accuracy caused by semantic deviation in comment information, the present application provides an article recommendation model training method, an article recommendation method, an article recommendation device, and a storage medium, the method including: acquiring a plurality of pieces of historical user information, historical article information, historical comment information of the historical article information, real article labels, and real scoring labels, wherein the historical comment information is matched with the historical user information, the real article labels are matched with the historical article information, and the real scoring labels are matched with the historical comment information; inputting the historical user information, any two pieces of historical article information, and the historical comment information of the two pieces of historical article information into a preset prediction model to obtain a historical user embedding vector, historical article embedding vectors, and historical comment embedding vectors; determining a point-by-point loss function according to the historical user embedding vector, the historical article embedding vectors, and the real article labels; determining prediction scoring information according to the historical user embedding vector and the historical comment embedding vectors, and determining a pairwise loss function according to the prediction scoring information and the real scoring labels; and training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model.
According to the scheme provided by the embodiment of the application, historical user information, historical article information and historical comment information are used as training data, the training data are input into a prediction model to obtain a historical user embedded vector, a historical article embedded vector and a historical comment embedded vector, a point-by-point loss function is determined by combining a real article label, then prediction score information is determined through the historical user embedded vector and the historical comment embedded vector, a pairwise loss function is determined by combining the real score label, and therefore the prediction model is trained by using the point-by-point loss function, the article recommendation model can be more accurate in predicting user preference, the prediction model is trained by using the pairwise loss function, the article recommendation model can be prevented from being influenced by semantic deviation of comment information, accuracy of the prediction score information is guaranteed, and accuracy of article recommendation is improved.
The method for training an item recommendation model, the method for recommending an item, the apparatus for recommending an item, and the storage medium provided in the embodiments of the present application are specifically described in the following embodiments, where the method for training an item recommendation model in the embodiments of the present application is first described.
The embodiment of the application provides an article recommendation model training method, and relates to the technical field of artificial intelligence. The article recommendation model training method provided by the embodiment of the application can be applied to a terminal, a server side and software running in the terminal or the server side. In some embodiments, the terminal may be a smartphone, tablet, laptop, desktop computer, or the like; the server side can be configured as an independent physical server, can also be configured as a server cluster or a distributed system formed by a plurality of physical servers, and can also be configured as a cloud server for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN (content distribution network) and big data and artificial intelligence platforms; the software may be an application or the like that implements an item recommendation model training method, but is not limited to the above form.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In each embodiment of the present application, when data related to the user identity or characteristic, such as user information, user behavior data, user history data, and user location information, is processed, permission or consent of the user is obtained, and the data collection, use, and processing comply with relevant laws and regulations and standards of relevant countries and regions. In addition, when the embodiment of the present application needs to acquire sensitive personal information of a user, individual permission or individual consent of the user is obtained through a pop-up window or a jump to a confirmation page, and after the individual permission or individual consent of the user is definitely obtained, necessary user-related data for enabling the embodiment of the present application to operate normally is acquired.
The embodiments of the present application will be further explained with reference to the drawings.
As shown in fig. 1, fig. 1 is a flowchart of an item recommendation model training method according to an embodiment of the present application. The item recommendation model training method includes but is not limited to the following steps:
step S110, acquiring a plurality of historical user information, historical article information, historical comment information of the historical article information, a real article tag and a real rating tag, wherein the historical comment information is matched with the historical user information, the real article tag is matched with the historical article information, and the real rating tag is matched with the historical comment information;
step S120, inputting historical user information, any two pieces of historical article information and historical comment information of any two pieces of historical article information into a preset prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector;
step S130, determining a point-by-point loss function according to the historical user embedded vector, the historical article embedded vector and the real article label;
step S140, determining prediction scoring information according to the historical user embedded vector and the historical comment embedded vector, and determining a pairwise loss function according to the prediction scoring information and the real scoring tag;
and S150, training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model.
It can be understood that a training data set is obtained first in the training process. The training data set is determined from users' comment records: when a user purchases an article and comments on it, the user information, the purchased article information, and the comment the user made are recorded in a background database, so the historical user information, historical article information, and historical comment information can all be obtained from that database. The historical comment information reveals the user's sentiment toward an article. For example, if a user who bought a television leaves the comment "clear image, intelligent system, and no fogging", the user's positive sentiment toward the television can be identified, along with the television attributes the user is satisfied with; the historical comment information can therefore serve as auxiliary information that improves article recommendation accuracy. Accordingly, training on each piece of historical user information and determining a point-by-point loss function in combination with the real article labels makes it possible to predict each user's article preferences well. Because different users have different wording habits and the same user's mood varies, published comment information carries semantic deviation, so the relationship between comment wording and score level is not absolute, and the same comment may correspond to different scores; determining a pairwise loss function in combination with the real scoring labels avoids the influence of this semantic deviation and improves article recommendation accuracy. On this basis, historical user information, historical article information, and historical comment information are used as training data and input into a prediction model to obtain a historical user embedding vector, historical article embedding vectors, and historical comment embedding vectors; a point-by-point loss function is determined in combination with the real article labels, prediction scoring information is determined from the historical user embedding vector and the historical comment embedding vectors, and a pairwise loss function is determined in combination with the real scoring labels. Training the prediction model with the point-by-point loss function lets the article recommendation model predict user preferences more accurately, while training it with the pairwise loss function prevents the model from being affected by semantic deviation in comment information, guarantees the accuracy of the prediction scoring information, and improves the accuracy of article recommendation.
It should be noted that the value of the real article label is 0 or 1. In the training process, the shopping records of all users are mixed together; that is, all historical user information forms one data set and all historical article information forms another data set. For any pairing of one piece of historical user information with one piece of historical article information, the value of the real article label is 1 when the user has purchased the article, and 0 when the user has not purchased the article. The real scoring label is the real scoring value of the historical comment information, i.e., the scoring value corresponding to the target comment issued by the target user on the target article. The real article label and the real scoring label are manually preset according to actual conditions, which ensures the accuracy of model training.
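One plausible construction of these 0/1 article labels from mixed shopping records (all names and records are hypothetical; as stated above, in practice the labels are preset according to actual conditions) is:

```python
# Hypothetical purchase records: (user, article) pairs observed in the background database.
purchases = {("u1", "tv"), ("u1", "phone"), ("u2", "tv")}
users = ["u1", "u2"]
articles = ["tv", "phone", "lamp"]

# Any (user, article) combination gets a real article label:
# 1 if the user purchased the article, 0 otherwise.
samples = [(u, a, 1 if (u, a) in purchases else 0)
           for u in users for a in articles]
```

Every user-article combination thus contributes one training sample, with unpurchased combinations acting as negatives.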
It should be noted that, because the comment information is introduced, the user data received by each application program can be utilized to the maximum extent, and meanwhile, the comment information can also be used for cross-domain recommendation.
It is worth noting that, because the article embedding vector and the comment embedding vector are trained jointly in combination with the user information, the advantages of both can be integrated, and the generated comment embedding vector can more accurately reflect the user's true sentiment when making the corresponding comment.
In addition, in one embodiment, the prediction model comprises a user network, an item network, a comment original network and a comment momentum network, wherein the comment momentum network has the same network structure as the comment original network.
It can be understood that the user network, the article network, the comment original network and the comment momentum network can be BERT network models, the BERT network models are encoders based on Transformer models, the user network can convert input historical user information into historical user embedded vectors of specific dimensions, the article network can convert input historical article information into historical article embedded vectors of specific dimensions, and the comment original network and the comment momentum network can convert input historical comment information into historical comment embedded vectors of specific dimensions.
It should be noted that the network structure of the article recommendation model is thus constrained: the article recommendation model has a three-layer, four-tower network structure, which ensures the accuracy of the prediction scoring information and further improves the accuracy of article recommendation.
Additionally, referring to FIG. 2, in an embodiment, the historical comment embedded vector includes a first historical comment pair embedded vector and a second historical comment pair embedded vector; step S120 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S210, inputting historical user information into a user network to obtain a historical user embedded vector;
step S220, inputting any two historical article information into an article network to obtain two historical article embedding vectors;
step S230, inputting the historical comment information of any two pieces of historical article information into a comment original network to obtain a first historical comment pair embedded vector;
step S240, inputting the historical comment information of any two historical article information into the comment momentum network to obtain a second historical comment pair embedded vector.
It can be understood that, in the training process of the model, the prediction model needs to be iteratively updated multiple times until it meets the preset model requirement or reaches the preset number of iterations. In one iteration, when training data needs to be determined, one piece of historical user information is selected from the plurality of historical user information and input into the user network to obtain a historical user embedding vector. Then, two pieces of historical article information are taken and input into the article network to obtain two historical article embedding vectors. Next, the corresponding two pieces of historical comment information are input into the comment original network to obtain a first historical comment pair embedding vector, which comprises two embedding vectors. Finally, the corresponding two pieces of historical comment information are input into the comment momentum network to obtain a second historical comment pair embedding vector, which also comprises two embedding vectors. In the current iteration, the historical user embedding vector, the two historical article embedding vectors, the first historical comment pair embedding vector and the second historical comment pair embedding vector are used to update the model parameters.
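As an illustrative sketch only (the BERT towers are replaced here by toy linear projections, and all names, dimensions and data are hypothetical), one iteration's forward pass through the four towers might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM_IN, DIM_EMB = 8, 4  # toy feature and embedding sizes (hypothetical)

def make_tower():
    # stand-in for a BERT encoder: a single linear projection
    return rng.normal(size=(DIM_IN, DIM_EMB))

W_user, W_item, W_orig = make_tower(), make_tower(), make_tower()
W_mom = W_orig.copy()  # momentum tower shares the original comment tower's structure

def encode(W, x):
    return x @ W

# one iteration: one user, two articles, and their two comments
user = rng.normal(size=DIM_IN)
items = rng.normal(size=(2, DIM_IN))
comments = rng.normal(size=(2, DIM_IN))

u = encode(W_user, user)               # historical user embedding vector
v1, v2 = encode(W_item, items)         # two historical article embedding vectors
r1, r2 = encode(W_orig, comments)      # first historical comment pair embedding vector
r1_m, r2_m = encode(W_mom, comments)   # second historical comment pair embedding vector
```

Before any momentum update has been applied, the momentum tower is identical to the original comment tower, so the two comment embeddings coincide; they diverge once the original network is updated by gradient descent while the momentum network drifts slowly.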
Referring to fig. 3, in an embodiment, step S150 in the embodiment shown in fig. 1 includes, but is not limited to, the following steps:
step S310, determining a model total loss function according to the point-by-point loss function and the pairwise loss function;
step S320, updating model parameters of a historical user network, an article network and a comment original network according to a model total loss function;
step S330, determining a momentum updating function according to the updated comment original network and a preset momentum updating coefficient;
and step S340, updating the model parameters of the comment momentum network according to the momentum updating function so as to obtain an article recommendation model.
It can be understood that when only the point-by-point loss function is used for model training, there may be a labeling bias, meaning that different judgment results are obtained for the same comment. For example, for the same user, among articles compared in pairs: in the first set of records, the user's comment score on article A is 0.9 and on article B is 0.6; in the second set of records, the user's comment score on article B is 0.6 and on article C is 0.2. In the first set of records, article A is the positive sample and article B is the negative sample; in the second set of records, article B is the positive sample and article C is the negative sample. For article B, the two judgment results are clearly different, so a labeling bias exists; the directions of the two iterative updates are opposite, which affects the speed and precision of model updating. By setting paired samples and performing model training with the pairwise loss function, the labeling bias can be reduced. Moreover, compared with a comment network optimized only through gradient back-propagation, setting a comment original network and a comment momentum network optimized through momentum update keeps the comment momentum network consistent with the comment original network while evolving slowly, which reduces the influence of samples with labeling bias on the whole model, prevents drastic fluctuation or even sudden change of the model parameters, and effectively prevents the labeling bias problem.
Additionally, in one embodiment, the formula for the point-wise loss function is:
L_1 = −(1/n) · Σ_{j=1}^{n} [ y_ij · log(p_ij) + (1 − y_ij) · log(1 − p_ij) ]

wherein L_1 is the point-by-point loss function, y_ij is the real article label corresponding to the ith historical user information and the jth historical article information, p_ij is the predicted article information corresponding to the ith historical user information and the jth historical article information, and n is the number of all historical article information;

p_ij = u_i · v_j

wherein u_i is the historical user embedding vector corresponding to the ith historical user information, and v_j is the historical article embedding vector corresponding to the jth historical article information;
the formula for the pairwise loss function is:
L_2 = Σ_{i=1}^{m} Σ_{s=1}^{n} Σ_{t=1}^{n} I(y_is < y_it) · max(0, u_i·r′_s − u_i·r′_t)

wherein L_2 is the pairwise loss function, y_is is the real scoring label corresponding to the ith historical user information and the sth historical comment information, y_it is the real scoring label corresponding to the ith historical user information and the tth historical comment information, r′_s is the second historical comment pair embedding vector corresponding to the sth historical comment information, r′_t is the second historical comment pair embedding vector corresponding to the tth historical comment information, m is the number of all historical user information, and n is the number of all historical comment information; when y_is < y_it, I(y_is < y_it) = 1, otherwise I(y_is < y_it) = 0; max(0, u_i·r′_s − u_i·r′_t) takes the maximum of 0 and u_i·r′_s − u_i·r′_t;
the formula of the model total loss function is:
L_total = L_1 + λ_1·L_2 + λ_2·L_reg

wherein L_total is the model total loss function, λ_1 and λ_2 are preset hyper-parameters, and L_reg is a regularization term;
the formula of the regularization term is:
L_reg = Σ_{k=1}^{K} θ_k²

wherein θ_k is the kth model parameter in the prediction model, and K is the number of all model parameters in the prediction model.
It will be appreciated that p_ij is the predicted probability that the real article label equals 1; the sth historical comment information corresponds to an article s, and the tth historical comment information corresponds to an article t. u_i·r′_s is the product of the historical user embedding vector corresponding to the ith historical user information and the second historical comment pair embedding vector corresponding to the sth historical comment information, and characterizes the prediction score of article s; likewise, u_i·r′_t is the product of the historical user embedding vector corresponding to the ith historical user information and the second historical comment pair embedding vector corresponding to the tth historical comment information, and characterizes the prediction score of article t.
It should be noted that, the point-by-point loss function and the pairwise loss function are determined first, and then the total loss function of the model is determined, so as to ensure the reliability of the model training.
It should be noted that the regularization term refers to L2 regularization. Setting the regularization term attenuates the weights and thereby reduces overfitting. Overfitting generally arises because the fitted function tries to account for every point in the sample, so the resulting function fluctuates strongly: a slight change in the independent variable causes a drastic change in the function value, which typically shows up as high accuracy on the training set and low accuracy on the test set. Structurally, the large fluctuation comes from overly large weights (coefficients) in the function; if the weights can be reduced, the fluctuation is reduced and overfitting is alleviated to some extent. It is therefore necessary to add a regularization penalty that constrains the model's overall parameters (limiting the two-norm of the overall model parameters below a certain value) to prevent overfitting.
In addition, λ_1 and λ_2 are hyper-parameters that control the weight of each partial loss function; they are empirical constants optimized through repeated experiments based on grid search.
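As an illustrative sketch only (the patent gives no executable code; the point-wise term is implemented here as binary cross-entropy over the 0/1 article labels, and the λ values are arbitrary placeholders), the three loss terms can be computed as:

```python
import numpy as np

def pointwise_loss(y, p, eps=1e-12):
    # binary cross-entropy over real article labels y ∈ {0, 1};
    # p = u_i·v_j is treated as the predicted probability that y = 1
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def pairwise_loss(u, r_s, r_t, y_s, y_t):
    # hinge term: when article s has the lower real rating (y_s < y_t),
    # its predicted score u·r'_s is penalized for exceeding u·r'_t
    indicator = 1.0 if y_s < y_t else 0.0
    return indicator * max(0.0, float(u @ r_s - u @ r_t))

def total_loss(l1, l2, params, lam1=0.5, lam2=1e-4):
    # L_total = L_1 + λ_1·L_2 + λ_2·L_reg, with L_reg = Σ θ_k² (L2 regularization)
    l_reg = sum(float(np.sum(th ** 2)) for th in params)
    return l1 + lam1 * l2 + lam2 * l_reg
```

The total loss is then back-propagated to update the user network, article network and comment original network, while the comment momentum network is updated separately by momentum update.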
Additionally, in one embodiment, the momentum update function is formulated as:
w_m = δ·w_m′ + (1 − δ)·w_v

wherein w_m is the model parameter of the comment momentum network, δ is the momentum update coefficient, w_m′ is the model parameter of the comment momentum network before the update, and w_v is the model parameter of the updated comment original network.

It will be appreciated that, over multiple rounds of model updating, w_m is the model parameter of the comment momentum network E_m after the current round's update, w_m′ is the model parameter of E_m after the previous round's update, and w_v is the parameter of the comment original network E_v after its gradient update in the current round.
In specific practice, to effectively update the comment momentum network, δ may be set to 0.99.
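A minimal sketch of this momentum update with δ = 0.99 (the parameter arrays and values are hypothetical):

```python
import numpy as np

def momentum_update(w_m_prev, w_v, delta=0.99):
    # w_m = δ·w_m′ + (1−δ)·w_v: the momentum network keeps 99% of its
    # previous parameters and drifts slowly toward the gradient-updated
    # comment original network
    return delta * w_m_prev + (1 - delta) * w_v

w_m = np.array([1.0, 1.0])   # comment momentum network parameters before update
w_v = np.array([0.0, 2.0])   # updated comment original network parameters
w_m = momentum_update(w_m, w_v)  # → [0.99, 1.01]
```

Because each step moves the momentum parameters by only 1% of the gap, a single biased sample cannot cause the drastic parameter fluctuation described above.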
As shown in fig. 4, fig. 4 is a flowchart of an item recommendation method according to an embodiment of the present application. The item recommendation method includes, but is not limited to, the following steps:
step S410, obtaining target user information, a plurality of target article information and target comment information of the target article information, and inputting the target user information, the target article information and the target comment information into an article recommendation model to obtain prediction scoring information of each target article information, wherein the article recommendation model is obtained by the article recommendation model training method described above;
in step S420, item recommendation information is determined among the plurality of target item information based on the prediction score information.
It can be understood that the item recommendation model mentioned in the item recommendation method is obtained by training the item recommendation model training method, so that the accuracy of the prediction scoring information is high, and the accuracy of item recommendation can be ensured.
It should be noted that, for the target user information, the prediction score information corresponding to each target item information can be obtained through the item recommendation model, and then, through comparison, an item with a high prediction score is determined, so as to obtain item recommendation information.
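As a hypothetical sketch of this recommendation step (toy two-dimensional embeddings, not the patent's implementation), scoring each candidate article by the dot product with the target user embedding and keeping the highest-scoring articles might look like:

```python
import numpy as np

def recommend(u, item_embeddings, item_ids, top_k=2):
    # prediction score for each candidate article is the dot product u·v
    scores = item_embeddings @ u
    order = np.argsort(scores)[::-1][:top_k]  # highest scores first
    return [item_ids[i] for i in order]

u = np.array([1.0, 0.5])                       # target user embedding
items = np.array([[0.2, 0.1],                  # candidate article embeddings
                  [0.9, 0.8],
                  [0.5, -0.4]])
top = recommend(u, items, ["tv", "phone", "lamp"])  # → ['phone', 'lamp']
```

The comparison step from the paragraph above is the `argsort` over scores: the articles with the highest prediction scores form the article recommendation information.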
Referring to fig. 5, fig. 5 is a system block diagram of an item recommendation model according to an embodiment of the present application.
The article recommendation model has a three-layer, four-tower network structure, and the comment information and the article information are introduced into joint training as auxiliary information to improve the recommendation effect. In the training process of the model, for a given piece of historical user information, the historical user information is input into the user network E_u to obtain a historical user embedding vector u; then, two pieces of historical article information are taken and input into the article network E_i to obtain two historical article embedding vectors v_1 and v_2; then the corresponding two pieces of historical comment information are input into the comment original network E_v to obtain the first historical comment pair embedding vectors r_1 and r_2; finally, the corresponding two pieces of historical comment information are input into the comment momentum network E_m to obtain the second historical comment pair embedding vectors r′_1 and r′_2, and the model parameters of E_m are updated by momentum update.
In addition, E_m and E_v have the same network structure, which includes but is not limited to: an attention layer, a convolution layer and a max-pooling layer.
In addition, referring to fig. 6, the present application further provides an item recommendation model training apparatus 600, including:
the acquiring unit 610 is configured to acquire a plurality of historical user information, historical item information, historical review information of the historical item information, a real item tag, and a real rating tag, where the historical review information matches the historical user information, the real item tag matches the historical item information, and the real rating tag matches the historical review information;
the input unit 620 is configured to input historical user information, any two pieces of historical item information, and historical comment information of any two pieces of historical item information into a preset prediction model to obtain a historical user embedding vector, a historical item embedding vector, and a historical comment embedding vector;
a first determining unit 630, configured to determine a point-by-point loss function according to the historical user embedding vector, the historical item embedding vector, and the real item tag;
a second determining unit 640, configured to determine prediction scoring information according to the historical user embedded vector and the historical comment embedded vector, and determine a pairwise loss function according to the prediction scoring information and the real scoring tag;
and the training unit 650 is configured to train the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an item recommendation model.
It can be understood that the specific implementation of the article recommendation model training apparatus 600 is substantially the same as the specific embodiments of the article recommendation model training method described above, and is not repeated here. Based on this, historical user information, historical article information and historical comment information are used as training data and input into a prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector; a point-by-point loss function is determined in combination with the real article label; prediction scoring information is then determined through the historical user embedding vector and the historical comment embedding vector, and a pairwise loss function is determined in combination with the real scoring label. Training the prediction model with the point-by-point loss function enables the article recommendation model to predict user preference more accurately, while training it with the pairwise loss function prevents the article recommendation model from being influenced by the semantic deviation of the comment information, ensuring the accuracy of the prediction scoring information and improving the accuracy of article recommendation.
In addition, the present application also provides an article recommendation device, including:
the prediction unit is used for acquiring target user information, a plurality of target article information and target comment information of the target article information, and inputting the target user information, the target article information and the target comment information into an article recommendation model to obtain prediction score information of each target article information;
a recommendation unit configured to determine item recommendation information among the plurality of target item information based on the prediction score information;
and the article recommendation model is obtained by training the article recommendation model training method.
It can be understood that the specific implementation of the item recommendation apparatus is substantially the same as the specific implementation of the item recommendation method, and is not described herein again.
In addition, referring to fig. 7, fig. 7 illustrates a hardware structure of an electronic device of another embodiment, the electronic device including:
the processor 701 may be implemented by a general-purpose CPU (Central Processing Unit), a microprocessor, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute a relevant program to implement the technical solution provided in the embodiment of the present Application;
the Memory 702 may be implemented in the form of a Read Only Memory (ROM), a static storage device, a dynamic storage device, or a Random Access Memory (RAM). The memory 702 may store an operating system and other application programs, and when the technical solution provided in the embodiment of the present specification is implemented by software or firmware, the related program codes are stored in the memory 702 and are called by the processor 701 to execute the item recommendation model training method according to the embodiment of the present application, for example, the method steps S110 to S150 in fig. 1, the method steps S210 to S240 in fig. 2, and the method steps S310 to S340 in fig. 3 described above are executed;
an input/output interface 703 for realizing information input and output;
the communication interface 704 is used for realizing communication interaction between the device and other devices, and can realize communication in a wired manner (for example, USB, network cable, etc.) or in a wireless manner (for example, mobile network, WIFI, bluetooth, etc.);
a bus 705 that transfers information between various components of the device, such as the processor 701, the memory 702, the input/output interface 703, and the communication interface 704;
wherein the processor 701, the memory 702, the input/output interface 703 and the communication interface 704 are communicatively connected to each other within the device via a bus 705.
The present application further provides a storage medium, which is a computer-readable storage medium for a computer-readable storage, and the storage medium stores one or more programs, where the one or more programs are executable by one or more processors to implement the item recommendation model training method, for example, to perform the method steps S110 to S150 in fig. 1, the method steps S210 to S240 in fig. 2, and the method steps S310 to S340 in fig. 3 described above, or to implement the item recommendation method, for example, to perform the method steps S410 to S420 in fig. 4 described above.
The memory, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. Further, the memory may include high speed random access memory, and may also include non-transitory memory, such as at least one disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and these remote memories may be connected to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the article recommendation model training method, the article recommendation method, the article recommendation device and the storage medium provided by the present application, a plurality of historical user information, historical article information, historical comment information of the historical article information, real article labels and real scoring labels are obtained, wherein the historical comment information matches the historical user information, the real article label matches the historical article information, and the real scoring label matches the historical comment information; the historical user information, any two pieces of historical article information and the historical comment information of those two pieces of historical article information are input into a preset prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector; a point-by-point loss function is determined according to the historical user embedding vector, the historical article embedding vector and the real article label; prediction scoring information is determined according to the historical user embedding vector and the historical comment embedding vector, and a pairwise loss function is determined according to the prediction scoring information and the real scoring label; the prediction model is trained based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model. Training the prediction model with the point-by-point loss function enables the article recommendation model to predict user preference more accurately, while training it with the pairwise loss function prevents the article recommendation model from being influenced by the semantic deviation of the comment information, ensuring the accuracy of the prediction scoring information and improving the accuracy of article recommendation.
The embodiments described in the embodiments of the present application are for more clearly illustrating the technical solutions of the embodiments of the present application, and do not constitute a limitation to the technical solutions provided in the embodiments of the present application, and it is obvious to those skilled in the art that the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems with the evolution of technology and the emergence of new application scenarios.
It will be understood by those skilled in the art that the embodiments shown in fig. 1 to 4 do not constitute a limitation of the embodiments of the present application, and may include more or less steps than those shown, or some steps may be combined, or different steps may be included.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in this application, "at least one" means one or more, "a plurality" means two or more. "and/or" is used to describe the association relationship of the associated object, indicating that there may be three relationships, for example, "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of the singular or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b and c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes multiple instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the method of the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing programs, such as a usb disk, a portable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of the claims of the embodiments of the present application is not limited thereby. Any modifications, equivalents, and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present application are intended to be within the scope of the claims of the embodiments of the present application.

Claims (10)

1. An item recommendation model training method, the method comprising:
acquiring a plurality of historical user information, historical article information, historical comment information of the historical article information, a real article tag and a real scoring tag, wherein the historical comment information is matched with the historical user information, the real article tag is matched with the historical article information, and the real scoring tag is matched with the historical comment information;
inputting the historical user information, any two pieces of historical article information and the historical comment information of any two pieces of historical article information into a preset prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector;
determining a point-by-point loss function according to the historical user embedding vector, the historical article embedding vector and the real article label;
determining prediction scoring information according to the historical user embedding vector and the historical comment embedding vector, and determining a pairwise loss function according to the prediction scoring information and the real scoring label;
and training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model.
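As a rough illustration (not part of the claims), the data flow of the method above can be sketched with toy embeddings; the encoder networks, dimensions, and dot-product scoring below are illustrative assumptions, since the claim does not fix them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned networks of claim 1 (illustrative only):
# real embeddings would come from trained user/article/comment encoders.
u = rng.normal(size=(3, 8))   # historical user embedding vectors
v = rng.normal(size=(4, 8))   # historical article embedding vectors
r = rng.normal(size=(4, 8))   # historical comment embedding vectors

# Point-by-point branch: user-article scores compared against the real
# article labels to form the point-by-point loss.
p = u @ v.T

# Pairwise branch: prediction scoring information from user and comment
# embeddings, compared against the real scoring labels to form the
# pairwise loss.
ratings = u @ r.T
```

Both score matrices are then fed into their respective loss terms, and the two losses jointly train the prediction model.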
2. The method of claim 1, wherein the prediction model comprises a user network, an item network, a comment original network, and a comment momentum network, wherein the comment momentum network has the same network structure as the comment original network.
3. The method of claim 2, wherein the historical comment embedding vector comprises a first historical comment pair embedding vector and a second historical comment pair embedding vector;
the step of inputting the historical user information, any two pieces of historical article information and any two pieces of historical comment information of the historical article information into a preset prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector includes:
inputting the historical user information into the user network to obtain a historical user embedding vector;
inputting any two historical article information into the article network to obtain two historical article embedding vectors;
inputting historical comment information of any two pieces of historical article information into the comment original network to obtain the first historical comment pair embedding vector;
and inputting the historical comment information of any two pieces of historical article information into the comment momentum network to obtain the second historical comment pair embedding vector.
4. The method of claim 3, wherein training the predictive model based on the point-by-point loss function and the pairwise loss function to obtain an item recommendation model comprises:
determining a model total loss function according to the point-by-point loss function and the pairwise loss function;
updating model parameters of the user network, the item network and the comment original network according to the model total loss function;
determining a momentum updating function according to the updated comment original network and a preset momentum updating coefficient;
and updating the model parameters of the comment momentum network according to the momentum updating function so as to obtain an article recommendation model.
5. The method of claim 4, wherein the formula of the point-by-point loss function is:
L_1 = -\sum_{j=1}^{n} \left[ y_{ij} \log p_{ij} + (1 - y_{ij}) \log(1 - p_{ij}) \right]
wherein L_1 is the point-by-point loss function, y_ij is the real item label corresponding to the i-th historical user information and the j-th historical item information, p_ij is the predicted item information corresponding to the i-th historical user information and the j-th historical item information, and n is the number of all the historical item information;

p_ij = u_i · v_j

wherein u_i is the historical user embedding vector corresponding to the i-th historical user information, and v_j is the historical item embedding vector corresponding to the j-th historical item information;
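The publication replaces the point-by-point loss equation with an image placeholder; a minimal sketch consistent with the symbols defined above is the binary cross-entropy below, where squashing p_ij = u_i · v_j through a sigmoid is an assumption, not taken from the patent text:

```python
import numpy as np

def pointwise_loss(u, v, y, eps=1e-12):
    """Binary cross-entropy between real item labels y and predicted
    scores derived from p_ij = u_i . v_j (sigmoid-squashed here).
    u: (m, d) user embeddings; v: (n, d) item embeddings; y: (m, n) 0/1 labels."""
    logits = u @ v.T                   # dot-product scores p_ij
    p = 1.0 / (1.0 + np.exp(-logits))  # map scores into (0, 1)
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

A well-aligned user/item pair with a positive label yields a near-zero loss, while a misaligned one is penalized heavily.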
the formula of the pairwise loss function is:
L_2 = \sum_{i=1}^{m} \sum_{s=1}^{n} \sum_{t=1}^{n} I(y_{is} < y_{it}) \max(0, u_i \cdot r_s' - u_i \cdot r_t')
wherein L_2 is the pairwise loss function, y_is is the real rating label corresponding to the i-th historical user information and the s-th historical comment information, y_it is the real rating label corresponding to the i-th historical user information and the t-th historical comment information, r_s' is the second historical comment pair embedding vector corresponding to the s-th historical comment information, r_t' is the second historical comment pair embedding vector corresponding to the t-th historical comment information, m is the number of all historical user information, and n is the number of all historical comment information; I(y_is < y_it) = 1 when y_is < y_it, and I(y_is < y_it) = 0 otherwise; max(0, u_i · r_s' - u_i · r_t') is used for determining the maximum of 0 and u_i · r_s' - u_i · r_t'.
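A minimal sketch of the pairwise hinge term described above, computed for a single user i; the exact summation form in the patent's equation image is reconstructed from the symbol definitions and is therefore an assumption:

```python
import numpy as np

def pairwise_loss(u_i, r, y_i):
    """Hinge-style pairwise loss for one user i.
    u_i: (d,)  user embedding vector
    r:   (n, d) second historical comment pair embedding vectors r'_s
    y_i: (n,)  real rating labels y_is for that user.
    Accumulates max(0, u_i.r'_s - u_i.r'_t) over pairs where y_is < y_it."""
    scores = r @ u_i                # u_i . r'_s for every comment s
    total = 0.0
    n = len(y_i)
    for s in range(n):
        for t in range(n):
            if y_i[s] < y_i[t]:     # indicator I(y_is < y_it)
                total += max(0.0, scores[s] - scores[t])
    return total
```

The loss is zero when lower-rated comments already score below higher-rated ones, and grows with each inverted pair.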
the formula of the model total loss function is:
L_total = L_1 + λ_1 L_2 + λ_2 L_reg

wherein L_total is the model total loss function, λ_1 and λ_2 are preset hyper-parameters, and L_reg is a regularization term;
the formula of the regularization term is:
L_reg = \sum_{k=1}^{K} \theta_k^2
wherein θ_k is the k-th model parameter in the prediction model, and K is the number of all model parameters in the prediction model.
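The total objective L_total = L_1 + λ_1 L_2 + λ_2 L_reg can be sketched as follows; the hyper-parameter values and the squared-L2 form of the regularization term (the original equation image is not reproduced in the text) are assumptions:

```python
import numpy as np

def total_loss(l1, l2, params, lam1=0.5, lam2=0.01):
    """L_total = L1 + lam1*L2 + lam2*L_reg, where L_reg sums the squares
    of all model parameters theta_k. lam1/lam2 are illustrative values,
    not taken from the patent."""
    l_reg = sum(float(np.sum(theta ** 2)) for theta in params)
    return l1 + lam1 * l2 + lam2 * l_reg
```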
6. The method of claim 4, wherein the momentum update function is formulated as:
w_m = δ w_m' + (1 - δ) w_v

wherein w_m is the model parameter of the comment momentum network after updating, δ is the momentum update coefficient, w_m' is the model parameter of the comment momentum network before updating, and w_v is the model parameter of the comment original network after updating.
7. A method for recommending items, the method comprising:
acquiring target user information, a plurality of pieces of target article information and target comment information of the target article information, and inputting the target user information, the target article information and the target comment information into an article recommendation model to obtain prediction score information of each piece of target article information, wherein the article recommendation model is obtained by training with the article recommendation model training method of any one of claims 1 to 6;
and determining item recommendation information in a plurality of target item information based on the prediction scoring information.
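The recommendation step of claim 7 reduces to ranking candidate target articles by their predicted scores; a minimal sketch, where the function and parameter names are hypothetical:

```python
import numpy as np

def recommend(scores, item_ids, top_k=2):
    """Rank candidate target articles by predicted rating and return
    the top-k identifiers as the item recommendation information."""
    order = np.argsort(scores)[::-1]       # indices sorted high-to-low
    return [item_ids[i] for i in order[:top_k]]
```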
8. An item recommendation model training apparatus, the apparatus comprising:
the acquiring unit is used for acquiring a plurality of historical user information, historical article information, historical comment information of the historical article information, real article tags and real scoring tags, wherein the historical comment information is matched with the historical user information, the real article tags are matched with the historical article information, and the real scoring tags are matched with the historical comment information;
the input unit is used for inputting the historical user information, any two pieces of historical article information and the historical comment information of any two pieces of historical article information into a preset prediction model to obtain a historical user embedding vector, a historical article embedding vector and a historical comment embedding vector;
a first determining unit, configured to determine a point-by-point loss function according to the historical user embedding vector, the historical item embedding vector, and the real item tag;
the second determining unit is used for determining prediction scoring information according to the historical user embedding vector and the historical comment embedding vector, and determining a pairwise loss function according to the prediction scoring information and the real scoring label;
and the training unit is used for training the prediction model based on the point-by-point loss function and the pairwise loss function to obtain an article recommendation model.
9. An electronic device comprising a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the item recommendation model training method of any one of claims 1 to 6.
10. A computer-readable storage medium for computer-readable storage, wherein the storage medium stores one or more programs executable by one or more processors to implement the item recommendation model training method according to any one of claims 1 to 6 or the item recommendation method according to claim 7.
CN202210906295.3A 2022-07-29 2022-07-29 Article recommendation model training method, article recommendation method, device and storage medium Active CN115222486B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210906295.3A CN115222486B (en) 2022-07-29 2022-07-29 Article recommendation model training method, article recommendation method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210906295.3A CN115222486B (en) 2022-07-29 2022-07-29 Article recommendation model training method, article recommendation method, device and storage medium

Publications (2)

Publication Number Publication Date
CN115222486A true CN115222486A (en) 2022-10-21
CN115222486B CN115222486B (en) 2024-02-02

Family

ID=83613249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210906295.3A Active CN115222486B (en) 2022-07-29 2022-07-29 Article recommendation model training method, article recommendation method, device and storage medium

Country Status (1)

Country Link
CN (1) CN115222486B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014074961A (en) * 2012-10-02 2014-04-24 Nippon Telegr & Teleph Corp <Ntt> Commercial product recommendation device, method and program
CN111275521A (en) * 2020-01-16 2020-06-12 华南理工大学 Commodity recommendation method based on user comment and satisfaction level embedding
CN113256367A (en) * 2021-04-25 2021-08-13 西安交通大学 Commodity recommendation method, system, equipment and medium based on user behavior historical data
CN113254785A (en) * 2021-06-21 2021-08-13 腾讯科技(深圳)有限公司 Recommendation model training method, recommendation method and related equipment
CN113850654A (en) * 2021-10-26 2021-12-28 北京沃东天骏信息技术有限公司 Training method of item recommendation model, item screening method, device and equipment
WO2022016522A1 (en) * 2020-07-24 2022-01-27 华为技术有限公司 Recommendation model training method and apparatus, recommendation method and apparatus, and computer-readable medium
CN114764471A (en) * 2021-01-12 2022-07-19 腾讯科技(深圳)有限公司 Recommendation method, recommendation device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Yihao; ZHU Xiaofei; XU Chuanyun; DONG Shidu: "Hybrid recommendation method based on deep sentiment analysis of user reviews and multi-view collaborative fusion", Chinese Journal of Computers, no. 06 *

Also Published As

Publication number Publication date
CN115222486B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN107908740B (en) Information output method and device
CN113626719A (en) Information recommendation method, device, equipment, storage medium and computer program product
US11172040B2 (en) Method and apparatus for pushing information
CN109034203B (en) Method, device, equipment and medium for training expression recommendation model and recommending expression
TW202001736A (en) Classification model training method and store classification method and device
CN114298417A (en) Anti-fraud risk assessment method, anti-fraud risk training method, anti-fraud risk assessment device, anti-fraud risk training device and readable storage medium
CN115917535A (en) Recommendation model training method, recommendation device and computer readable medium
US20200311071A1 (en) Method and system for identifying core product terms
CN109117442B (en) Application recommendation method and device
US11269896B2 (en) System and method for automatic difficulty level estimation
CN114511085A (en) Entity attribute value identification method, apparatus, device, medium, and program product
CN109978594B (en) Order processing method, device and medium
CN113836390B (en) Resource recommendation method, device, computer equipment and storage medium
CN116821516B (en) Resource recommendation method, device, equipment and storage medium
CN113592593A (en) Training and application method, device, equipment and storage medium of sequence recommendation model
CN116680481A (en) Search ranking method, apparatus, device, storage medium and computer program product
CN116757270A (en) Data processing method and server based on man-machine interaction model or large model
CN116541602A (en) Recommendation content pushing method and system, electronic equipment and storage medium
CN115828153A (en) Task prediction method, device, equipment and medium based on artificial intelligence
CN115618121A (en) Personalized information recommendation method, device, equipment and storage medium
CN115222486A (en) Article recommendation model training method, article recommendation device and storage medium
CN112182179B (en) Entity question-answer processing method and device, electronic equipment and storage medium
CN115129885A (en) Entity chain pointing method, device, equipment and storage medium
CN111310016B (en) Label mining method, device, server and storage medium
CN112200602A (en) Neural network model training method and device for advertisement recommendation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant