CN111881282A - Training method and recommendation method of responder recommendation model and electronic equipment - Google Patents


Info

Publication number
CN111881282A
CN111881282A
Authority
CN
China
Prior art keywords
matrix
respondent
question
wealth
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010767815.8A
Other languages
Chinese (zh)
Inventor
陈卓
袁玺明
杜军威
葛艳
李涵
姜伟豪
魏锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao University of Science and Technology
Original Assignee
Qingdao University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao University of Science and Technology filed Critical Qingdao University of Science and Technology
Priority to CN202010767815.8A priority Critical patent/CN111881282A/en
Publication of CN111881282A publication Critical patent/CN111881282A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/332 Query formulation
    • G06F16/3329 Natural language query formulation or dialogue systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/20 Ensemble learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Databases & Information Systems (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of data mining, and particularly relates to a training method and a recommendation method for a respondent recommendation model, and an electronic device. The training method comprises: obtaining a sample scoring matrix and a data set without obtained wealth values, the data set comprising respondent features and question features; training a DeepFM module in the respondent recommendation model with the data set to fill the sample scoring matrix; inputting the filled sample scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose it into a respondent matrix and a question matrix; predicting the respondents corresponding to the questions based on the respondent matrix and the question matrix; and updating the parameters of the DeepFM module and the matrix decomposition module with the predicted score values and the target score values corresponding to the questions, thereby determining the respondent recommendation model. The DeepFM module and the matrix decomposition module in the respondent recommendation model form a strong learner, improving the accuracy of the prediction results.

Description

Training method and recommendation method of responder recommendation model and electronic equipment
Technical Field
The invention relates to the technical field of data mining, in particular to a training method and a recommendation method of a responder recommendation model and electronic equipment.
Background
With the rapid development of the Internet, books alone can hardly meet people's growing demand for knowledge, and question-and-answer communities have become a new platform for sharing experience and acquiring knowledge. Take the Haichuan Chemical Forum as an example: it is the largest professional question-and-answer community and communication platform for the chemical industry in China, with about 100,000 daily visitors, more than 95% of whom come from major design institutes, manufacturing enterprises, sales units, and colleges across the country. The forum's data from the last decade contains over 4 million users and over 1 million questions, of which about one third have not been answered satisfactorily; meanwhile, the forum generates a large number of new questions every day that require users with answering ability to solve them. On the one hand, many questions are never discovered by capable respondents, so they remain unanswered for a long time and a large number of unanswered questions accumulate in the community. On the other hand, users who are able to answer questions cannot find questions matching their own specialties amid the mass of irrelevant questions, so their answering efficiency is low and their interest in the community declines.
To address these problems, recommendation has become a key technology for the Haichuan Chemical Forum; its purpose is to recommend questions to users according to each respondent's ability to solve them. When current recommendation systems are applied to the forum, they face two major technical challenges: (1) sparsity: because a user typically interacts with only a few items, it is difficult to train an accurate recommendation model, especially for users or items with few interactions; (2) cold start: of the forum's more than 4 million users, fewer than 300,000 participate in answering questions, so the vast majority of users have no behavior records, and predicting these users' preferences for items is challenging. These two challenges result in low prediction accuracy when existing recommendation methods are used to predict respondents for new questions.
Disclosure of Invention
In view of this, embodiments of the present invention provide a training method and a recommendation method for a respondent recommendation model, and an electronic device, so as to solve the problem that the prediction accuracy of existing recommendation methods is relatively low.
According to a first aspect, an embodiment of the present invention provides a training method for a respondent recommendation model, including:
acquiring a sample scoring matrix and a data set without a wealth value; wherein the dataset of unobtained wealth values includes respondent characteristics and question characteristics;
training a DeepFM module in a responder recommendation model by using the data set without the acquired wealth value to fill the sample scoring matrix;
inputting the filled sample scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the sample scoring matrix to obtain a respondent matrix and a question matrix;
predicting respondents corresponding to the question based on the respondent matrix and the question matrix;
and updating the parameters of the DeepFM module and the matrix decomposition module by using the predicted score value and a target score value corresponding to the question, and determining the respondent recommendation model.
According to the training method for the respondent recommendation model provided by the embodiment of the invention, the DeepFM module fills in wealth values for answers that participated but obtained no wealth value, so that the filled sample scoring matrix is denser and the cold start problem is alleviated, while the matrix decomposition module improves prediction accuracy. The DeepFM module and the matrix decomposition module in the trained respondent recommendation model thus form a strong learner, improving the accuracy of the prediction results.
With reference to the first aspect, in a first implementation manner of the first aspect, the predicting respondents to the question based on the respondent matrix and the question matrix includes:
and calculating the product of the matrix of the respondents and the matrix of the question, and determining the score prediction value of each respondent corresponding to the question.
According to the training method of the respondent recommendation model provided by the embodiment of the invention, the respondent matrix and the question matrix are directly utilized for carrying out point multiplication, so that the score predicted value of each respondent corresponding to the question can be obtained, the calculation process is simplified, and the training efficiency is improved.
With reference to the first implementation manner of the first aspect, in a second implementation manner of the first aspect, the updating the parameters of the DeepFM module and the matrix decomposition module by using the predicted score value and the target score value corresponding to the question, and determining the respondent recommendation model, includes:
acquiring actual wealth values corresponding to the questions, average wealth values obtained in response, maximum wealth values obtained in response and minimum wealth values obtained in response;
calculating the wealth value obtained by answering each question based on the actual wealth value corresponding to each question, the average wealth value obtained by answering, the maximum wealth value obtained by answering and the minimum wealth value obtained by answering;
and calculating a loss function by using the wealth value obtained for each question answer, the filling result of the DeepFM module, the sample scoring matrix, and the filled sample scoring matrix, so as to update the filling result of the DeepFM module and determine the respondent recommendation model.
With reference to the second implementation manner of the first aspect, in a third implementation manner of the first aspect, the wealth value obtained for each question answer is calculated by the following formula:
x' = (x - μ) / (x_max - x_min)
where x' is the wealth value obtained for each question answer; x is the actual wealth value; μ is the average wealth value obtained from answers; x_max is the maximum wealth value obtained from answers; x_min is the minimum wealth value obtained from answers.
With reference to the second implementation manner of the first aspect, in a fourth implementation manner of the first aspect, the loss function is calculated by the following formula:
L = Σ_ij [ (1 - c_ij)(r_ij - p_i^T q_j)^2 + c_ij · d · (r̂_ij - p_i^T q_j)^2 ] + λ(||P||^2 + ||Q||^2)
where r_ij is the score in row i, column j of the sample scoring matrix; r̂_ij is the score in row i, column j of the filled sample scoring matrix; d is the mean square error between the filling result of the DeepFM module and the true result; c_ij indicates whether the entry is a prediction result of the DeepFM module, taking the value 0 or 1; λ is the regularization coefficient; P is the respondent matrix; Q is the question matrix.
According to the training method for the respondent recommendation model provided by the embodiment of the invention, L2 regularization is adopted when calculating the loss function, which avoids the overfitting problem of the model.
With reference to the first aspect or any one of the first to fourth embodiments of the first aspect, in a fifth embodiment of the first aspect, the training method further comprises:
acquiring a test set;
inputting the test set into the respondent recommendation model to obtain a prediction result;
and determining whether the parameters in the respondent recommendation model need to be updated again or not based on the prediction result.
According to the training method of the responder recommendation model provided by the embodiment of the invention, after the responder recommendation model is obtained through training, the responder recommendation model is evaluated by using the test set, so that the reliability of the responder recommendation model is further ensured.
According to a second aspect, an embodiment of the present invention further provides a respondent recommendation method, including:
acquiring the characteristics of a target question and the characteristics of a plurality of respondents to obtain an initial scoring matrix;
coding the features of the target question and the features of the plurality of respondents, inputting the coding result into the DeepFM module in the respondent recommendation model, and filling the initial scoring matrix to obtain a scoring matrix;
inputting the scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the scoring matrix to obtain a respondent matrix and a question matrix;
and determining the target respondent corresponding to the target question based on the respondent matrix and the question matrix.
According to the respondent recommendation method provided by the embodiment of the invention, the DeepFM module fills in wealth values for answers that participated but obtained no wealth value, so that the filled scoring matrix is denser and the cold start problem is alleviated, while the matrix decomposition module improves prediction accuracy. The DeepFM module and the matrix decomposition module in the recommendation method thus form a strong learner, improving the accuracy of the prediction results.
With reference to the second aspect, in a first implementation manner of the second aspect, the determining a target respondent to the target question based on the respondent matrix and the question matrix includes:
calculating the product of the respondent matrix and the question matrix to obtain the score value corresponding to each respondent answering the target question;
and determining the respondent corresponding to the highest scoring value as the target respondent.
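The recommendation step described above can be sketched as follows. This is a minimal illustration, assuming the respondent matrix holds one factor row per respondent and the target question is represented by its factor column; all names and data here are invented for illustration.

```python
# Score every respondent for the target question with a dot product of
# factor vectors, then pick the highest-scoring respondent. Toy data only.

def recommend(P, q_col):
    """P: respondent factor matrix (one row per respondent);
    q_col: factor column of the target question."""
    scores = [sum(p_k * q_k for p_k, q_k in zip(p_row, q_col))
              for p_row in P]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores

P = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]   # three respondents, K = 2
q = [0.3, 0.7]                              # target question factors
best, scores = recommend(P, q)              # best is the target respondent
```

The argmax over the dot products implements "determining the respondent corresponding to the highest scoring value as the target respondent".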
According to a third aspect, an embodiment of the present invention provides an electronic device, including: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, and the processor executing the computer instructions to perform the method for training the respondent recommendation model according to the first aspect or any one of the embodiments of the first aspect, or to perform the method for recommending respondents according to the second aspect or any one of the embodiments of the second aspect.
According to a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer instructions for causing a computer to execute the method for training an respondent recommendation model according to the first aspect or any one of the embodiments of the first aspect, or execute the method for recommending an respondent according to the second aspect or any one of the embodiments of the second aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow diagram of a method of training a respondent recommendation model according to an embodiment of the invention;
FIG. 2 is a block diagram of a respondent recommendation model according to an embodiment of the invention;
FIG. 3 is a flow diagram of a method of training a respondent recommendation model according to an embodiment of the invention;
FIG. 4 is a flow diagram of a method of training a respondent recommendation model according to an embodiment of the invention;
FIGS. 5a-5c are comparison results of a respondent recommendation model against various models, according to embodiments of the present invention;
FIG. 6 is a flow diagram of a method of respondent recommendation according to an embodiment of the present invention;
FIG. 7 is a flow diagram of a method of respondent recommendation according to an embodiment of the present invention;
FIG. 8 is a block diagram of an arrangement of a training apparatus for a respondent recommendation model according to an embodiment of the present invention;
FIG. 9 is a block diagram of the configuration of a respondent recommending apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The training method for the respondent recommendation model and the respondent recommendation method of the present invention can be applied to any forum to recommend suitable respondents for questions, for example the Haichuan Chemical Forum or other forums. In the following description, the Haichuan Chemical Forum is taken as an example, but the scope of the present invention is not limited thereto.
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for training a respondent recommendation model, wherein the steps illustrated in the flowchart of the figure may be performed in a computer system, such as a set of computer executable instructions, and wherein, although a logical order is illustrated in the flowchart, in some cases, the steps illustrated or described may be performed in an order different than that described herein.
In this embodiment, a method for training a respondent recommendation model is provided, which may be used in electronic devices such as computers, mobile phones, and tablet computers. Fig. 1 is a flowchart of a method for training a respondent recommendation model according to an embodiment of the present invention; as shown in fig. 1, the flow includes the following steps:
and S11, acquiring a sample scoring matrix and a data set without acquiring the wealth value.
Wherein the data set of unavailable financial values includes characteristics of respondents and characteristics of questions.
In the Haichuan Chemical Forum, assume there are M respondents U = {u_1, u_2, ..., u_M} and N questions Q = {q_1, q_2, ..., q_N}. The wealth values obtained by the respondents participating in answers form a scoring matrix with M rows and N columns, namely the sample scoring matrix, as shown in Table 1. Meanwhile, each respondent has C features and each question has D features. As shown in Table 1, the entry in row i, column j represents the score of respondent u_i for participating in question q_j. Because some respondents in the forum joined a question late or their answers attracted little attention, they obtained no wealth value, which is represented as 0 in the sample scoring matrix.
TABLE 1 sample Scoring matrix
(Table 1 in the original is an M × N matrix of wealth-value scores r_ij, with 0 for entries where no wealth value was obtained.)
The sample scoring matrix obtained by the electronic device may be formed by collecting wealth values obtained by answers to the N questions by the M respondents, may be directly obtained from the outside, or may be stored in the electronic device, and the source of the sample scoring matrix is not limited.
For the data set without obtained wealth values, where the respondents joined the questions late or their answers attracted little attention and so obtained no wealth value, the electronic device acquires the features of the questions and the features of the respondents. The question features may be obtained by dividing the question content into 10 categories with a latent Dirichlet allocation (LDA) topic model, and the respondent features may be 68 attributes such as the respondent's gender, education background, and mailbox authentication status. Of course, the numbers of question features and respondent features may be set according to the actual situation and are not limited to the 10 categories and 68 features mentioned above.
For example, the Haichuan Chemical Forum, with about 100,000 visitors per working day, is one of the most popular online communication platforms for the chemical industry. Chemical workers from all over the country exchange and interact on the platform; through resource sharing, information transmission, and technical discussion, the platform promotes technical exchange in the industry and broadens participants' professional horizons.
Related statistics show that, as of April 2019, more than 4.46 million learners had registered in the Haichuan Chemical Forum and over 1 million questions had been posted. The forum has 37 boards; commonly used posting boards include machining and manufacturing, production management and storage and transportation, petrochemical pretreatment, mobile equipment technology, and formaldehyde and purification technology, and questioners post questions related to each board's topic under that board. As a learning community, the forum uses reward mechanisms such as wealth values and charm values to incentivize learners who contribute to the community and to promote its sustainable development.
This embodiment uses the question-and-answer data set of the petrochemical pretreatment board, the most active board of the Haichuan Chemical Forum, in which 1169 users posed 6877 questions and 3858 respondents produced 130409 answer records; 35274 of these answers have highly professional content but obtained no wealth value because the respondent joined the question late, the answer attracted little attention, or for similar reasons.
And S12, training a DeepFM module in the respondent recommendation model by using the data set without the acquired wealth value so as to fill a sample scoring matrix.
The DeepFM model comprises two parallel parts: a factorization machine (FM) part and a deep neural network (DNN) part, responsible for extracting low-order and high-order feature interactions respectively. Both parts share the same embedding layer input. The output of DeepFM is:
y = sigmoid(y_FM + y_DNN)
where y_FM is the output of the factorization machine and y_DNN is the output of the deep neural network.
Specifically, the FM part is calculated by means of inner products of latent vectors. It discovers association information among the features: by combining pairs of features and introducing cross-term features, it improves the model score. The formula is:
y_FM = w_0 + Σ_{i=1}^{n} w_i x_i + Σ_{i=1}^{n} Σ_{j=i+1}^{n} w_ij x_i x_j
where w_0 is the initial weight value, i.e. the bias term; w_i is the weight of each feature x_i; and w_ij = ⟨v_i, v_j⟩ is the weight of the interaction between features x_i and x_j, computed as the inner product of their latent vectors.
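The FM score above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation; it uses the standard O(kn) reformulation of the pairwise term from the FM literature, and all names and numbers are invented.

```python
# Minimal factorization machine score, assuming the cross-term weight
# w_ij is the inner product of latent vectors v_i and v_j.

def fm_score(x, w0, w, v):
    """x: feature values; w0: bias; w: linear weights;
    v: one latent vector (length k) per feature."""
    linear = w0 + sum(wi * xi for wi, xi in zip(w, x))
    k = len(v[0])
    # Identity: sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [(sum_i v_if x_i)^2 - sum_i (v_if x_i)^2]
    pairwise = 0.0
    for f in range(k):
        s = sum(v[i][f] * x[i] for i in range(len(x)))
        s_sq = sum((v[i][f] * x[i]) ** 2 for i in range(len(x)))
        pairwise += 0.5 * (s * s - s_sq)
    return linear + pairwise

# Toy input: two features, latent dimension k = 2
y_fm = fm_score([1.0, 2.0], 0.1, [0.2, 0.3], [[0.1, 0.0], [0.4, 0.5]])
```

The reformulation avoids the explicit double sum over feature pairs, which is why FM scales linearly in the number of features.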
The deep neural network is a feed-forward network used to learn high-order feature interactions. The data vector is input into the network; the embedding layer compresses the high-dimensional, sparse input vector into a low-dimensional, dense vector, which is then fed into the first hidden layer. The output of the embedding layer is:
a^(0) = [e_1, e_2, ..., e_m]
where e_i is the embedding of the i-th field and m is the number of fields. a^(0) is then fed into the deep neural network, whose forward process is:
a^(l+1) = σ(W^(l) a^(l) + b^(l))
where l is the layer index, a^(l), W^(l), b^(l) are respectively the output, weight, and bias of the l-th layer, and σ is the activation function. Finally, the result is fed to the output layer for score prediction of the target item:
y_DNN = W^(H+1) a^(H) + b^(H+1)
where H is the number of hidden layers.
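The DNN forward pass and the final DeepFM combination can be sketched as below. This is an illustrative toy, assuming ReLU hidden activations and a linear output layer; the layer sizes, weights, and the helper names are all invented, not from the patent.

```python
import math

# Sketch of the DNN part and the final combination y = sigmoid(y_FM + y_DNN).

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

def dnn_forward(embeddings, layers):
    """embeddings: per-field embedding vectors (concatenated into a^(0));
    layers: list of (W, b) pairs, W given as a list of rows."""
    a = [x for e in embeddings for x in e]      # a^(0) = [e_1, ..., e_m]
    for W, b in layers[:-1]:                    # hidden layers with ReLU
        a = [relu(sum(wij * aj for wij, aj in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    W, b = layers[-1]                           # linear output layer
    return sum(wij * aj for wij, aj in zip(W[0], a)) + b[0]

def deepfm_output(y_fm, y_dnn):
    return sigmoid(y_fm + y_dnn)

emb = [[1.0, 2.0], [0.5, -1.0]]                 # two fields, 2-dim embeddings
layers = [([[0.1, 0.1, 0.1, 0.1], [0.2, 0.0, 0.0, 0.0]], [0.0, 0.0]),
          ([[1.0, 1.0]], [0.1])]
y_dnn = dnn_forward(emb, layers)
y = deepfm_output(0.0, y_dnn)
```

In a real DeepFM both parts would share the same embedding table; here they are kept separate only to keep the sketch short.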
After the electronic device obtains the data set without obtained wealth values, it inputs the data set into the DeepFM module of the respondent recommendation model to predict wealth values for the respondents who obtained none, i.e. to fill the 0 entries in the sample scoring matrix, thereby completing the matrix.
And S13, inputting the filled sample scoring matrix into a matrix decomposition module in the responder recommendation model to decompose the sample scoring matrix to obtain a responder matrix and a question matrix.
The purpose of the matrix decomposition module is to decompose the respondent-question sample scoring matrix R_{M×N} into an M×K respondent matrix and a K×N question matrix, where the hyperparameter K is the size of the latent factor space. The sample scoring matrix can thus be expressed as the product of the respondent matrix and the question matrix:
R_{M×N} ≈ P_{M×K} Q_{K×N}
the electronic equipment decomposes the filled sample scoring matrix into a responder matrix and a question matrix by using a matrix decomposition module in the responder recommendation model.
And S14, predicting respondents corresponding to the question based on the respondent matrix and the question matrix.
After obtaining the respondent matrix and the question matrix, the electronic device can predict the respondent corresponding to each question using these two matrices. In prediction, the respondent for a question can be predicted via the predicted answer score of each respondent for that question. This step is described in detail below.
And S15, updating parameters of the DeepFM module and the matrix decomposition module by using the score predicted value and the target score value corresponding to the question, and determining a recommendation model of the respondent.
After predicting the respondent corresponding to the question in S14, the electronic device may update the parameters of the DeepFM module and the matrix decomposition module in the respondent recommendation model by using the score prediction value and the target score value of the respondent on the question, and further determine the parameters in the two modules to obtain the respondent recommendation model.
Fig. 2 shows the structure of the respondent recommendation model, in which the DeepFM model predicts scores by a factorization machine and a deep neural network working in parallel, and the matrix-decomposition-based collaborative filtering model predicts the target user's score for the target question, thereby predicting the respondent corresponding to the question; the parameters in the respondent recommendation model are updated accordingly to determine the model.
According to this training method for the respondent recommendation model, the DeepFM module fills in wealth values for answers that participated but obtained no wealth value, so that the filled sample scoring matrix is denser and the cold start problem is alleviated, while the matrix decomposition module improves prediction accuracy. The DeepFM module and the matrix decomposition module in the trained respondent recommendation model thus form a strong learner, improving the accuracy of the prediction results.
In this embodiment, a method for training a respondent recommendation model is provided, which may be used in electronic devices such as computers, mobile phones, and tablet computers. Fig. 3 is a flowchart of a method for training a respondent recommendation model according to an embodiment of the present invention; as shown in fig. 3, the flow includes the following steps:
and S21, acquiring a sample scoring matrix and a data set without acquiring the wealth value.
Wherein the data set of unavailable financial values includes characteristics of respondents and characteristics of questions.
Please refer to S11 in fig. 1, which is not described herein again.
And S22, training a DeepFM module in the respondent recommendation model by using the data set without the acquired wealth value so as to fill a sample scoring matrix.
Please refer to S12 in fig. 1, which is not described herein again.
And S23, inputting the filled sample scoring matrix into a matrix decomposition module in the responder recommendation model to decompose the sample scoring matrix to obtain a responder matrix and a question matrix.
Please refer to S13 in fig. 1, which is not described herein again.
And S24, predicting respondents corresponding to the question based on the respondent matrix and the question matrix.
The electronic equipment determines the score predicted value of each respondent corresponding to the question by calculating the product of the matrix of the respondents and the matrix of the question.
The predicted score r̂_ij of respondent u_i for question q_j is calculated as:
r̂_ij = p_i^T q_j = Σ_{k=1}^{K} p_ik q_kj
that is, in the matrix decomposition model, the predicted score of respondent i for question j is the dot product of respondent i's factor vector and question j's factor vector.
And S25, updating parameters of the DeepFM module and the matrix decomposition module by using the score predicted value and the target score value corresponding to the question, and determining a recommendation model of the respondent.
Specifically, the step S25 includes the following steps:
and S251, acquiring actual wealth values corresponding to the questions, average wealth values obtained in response, maximum wealth values obtained in response and minimum wealth values obtained in response.
The actual wealth value corresponding to each question, the average wealth value obtained from answers, the maximum wealth value obtained from answers, and the minimum wealth value obtained from answers may be collected by the electronic device itself or acquired from the outside; their source is not limited here.
And S252, calculating the wealth value obtained by answering each question based on the actual wealth value corresponding to each question, the average wealth value obtained by answering, the maximum wealth value obtained by answering and the minimum wealth value obtained by answering.
Wherein the wealth value obtained by each question answer is calculated by adopting the following formula:

x′ = (x − μ) / (x_max − x_min)

wherein x′ is the wealth value obtained for each of said question answers; x is the actual wealth value; μ is the average wealth value obtained from the answers; x_max is the maximum wealth value obtained from the answers; x_min is the minimum wealth value obtained from the answers.
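As a minimal sketch, the wealth-value scaling described above can be written as a one-line function; the numeric values in the check are made up for illustration:

```python
def scale_wealth(x, mu, x_max, x_min):
    """Scale an actual wealth value x using (x - mu) / (x_max - x_min)."""
    return (x - mu) / (x_max - x_min)

# With hypothetical statistics mu=20, x_max=50, x_min=10,
# an answer that earned 30 wealth points scales to 0.25.
assert scale_wealth(30, mu=20, x_max=50, x_min=10) == 0.25
```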
And S253, calculating a loss function by using the wealth value obtained by answering each question, the filling result of the deep FM module, the sample scoring matrix and the filled sample scoring matrix so as to update the filling result of the deep FM module and determine an answerer recommendation model.
When the scoring matrix is sparse, the matrix decomposition model is prone to overfitting during training, so L2 regularization is adopted to avoid overfitting. The loss function is therefore calculated by adopting the following formula:

Loss = Σ_{i,j} (r̂_ij − p_i · q_j)² + c · D + λ(‖P‖² + ‖Q‖²)

wherein r_ij is the score in row i, column j of the sample scoring matrix; r̂_ij is the score in row i, column j of the filled sample scoring matrix; D is the mean square error between the filling result of the DeepFM module and the real result; c indicates whether the entry is a prediction result of the DeepFM module and takes the value 0 or 1; λ is the regularization coefficient; P is the respondent matrix; Q is the question matrix.
The loss function is usually minimized by a stochastic gradient descent method or by the alternating least squares method (ALS); in this embodiment, alternating least squares is adopted. ALS minimizes the loss alternately: in each iteration one of the two factor matrices is held fixed while the other is solved for, and the roles are exchanged in the next iteration. First, Q is fixed and the partial derivative of the loss with respect to P is set to 0, which gives

p_i = (QᵀQ + λI)⁻¹ Qᵀ r̂_i

then, P is fixed and the partial derivative of the loss with respect to Q is set to 0, which gives

q_j = (PᵀP + λI)⁻¹ Pᵀ r̂_j

wherein r̂_i is the i-th row and r̂_j is the j-th column of the filled scoring matrix, and I is the K×K identity matrix. The two steps are carried out alternately until the loss function converges.
According to the training method of the respondent recommendation model, the respondent matrix and the question matrix are directly used for point multiplication, so that the score prediction values of the respondents corresponding to the questions can be obtained, the calculation process is simplified, and the training efficiency is improved.
In this embodiment, a method for training a recommendation model of an answerer is provided, which may be used in electronic devices, such as a computer, a mobile phone, a tablet computer, and the like, fig. 4 is a flowchart of a method for training a recommendation model of an answerer according to an embodiment of the present invention, and as shown in fig. 4, the flowchart includes the following steps:
and S31, acquiring a sample scoring matrix and a data set without acquiring the wealth value.
Wherein the data set of unavailable financial values includes characteristics of respondents and characteristics of questions.
Please refer to S21 in fig. 3 for details, which are not described herein.
And S32, training a DeepFM module in the respondent recommendation model by using the data set without the acquired wealth value so as to fill a sample scoring matrix.
Please refer to S22 in fig. 3 for details, which are not described herein.
And S33, inputting the filled sample scoring matrix into a matrix decomposition module in the responder recommendation model to decompose the sample scoring matrix to obtain a responder matrix and a question matrix.
Please refer to S23 in fig. 3 for details, which are not described herein.
And S34, predicting respondents corresponding to the question based on the respondent matrix and the question matrix.
Please refer to S24 in fig. 3 for details, which are not described herein.
And S35, updating parameters of the DeepFM module and the matrix decomposition module by using the score predicted value and the target score value corresponding to the question, and determining a recommendation model of the respondent.
Please refer to S25 in fig. 3 for details, which are not described herein.
And S36, acquiring a test set.
In this embodiment, the experimental data set is divided into a training set, a data set in which no wealth value is obtained, and a test set. The training set is used for data mining and processing of respondents to obtain professional credibility of the respondents in different question knowledge fields; the data set without the acquired wealth value is used for outputting a prediction result of deep FM, and a sample scoring matrix is filled; the test set is used for the respondent recommendation, and the performance of the respondent recommendation is evaluated by comparing with other respondent recommendation methods. The basic case of the experimental data set is shown in table 2.
TABLE 2 training data set and test set
In order to evaluate the accuracy and robustness of the model, the present embodiment adopts a random sampling method, and 20% of data is extracted from the original data set as a test set to test the model effect.
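The 20% random hold-out described above can be sketched with the standard library alone; the fixed seed and the record format are arbitrary choices for illustration:

```python
import random

def split_dataset(records, test_ratio=0.2, seed=42):
    """Randomly hold out `test_ratio` of the records as a test set."""
    rng = random.Random(seed)            # fixed seed keeps the split reproducible
    shuffled = list(records)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_ratio)
    return shuffled[cut:], shuffled[:cut]   # (training set, test set)

train, test = split_dataset(range(100))
assert len(train) == 80 and len(test) == 20
```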
And S37, inputting the test set into the respondent recommendation model to obtain a prediction result.
The electronic device inputs the test set into the respondent recommendation model obtained in the training of S35 to obtain the prediction result.
And S38, determining whether the parameters in the respondent recommendation model need to be updated again or not based on the prediction result.
When the electronic equipment obtains the prediction result, the root mean square error (RMSE) and the mean absolute error (MAE) are calculated from the prediction result and the actual result. The electronic device uses these two metrics to evaluate the wealth value predicted by the model for each answer. RMSE and MAE are calculated as follows:

RMSE = sqrt( (1/X) Σ_{(i,j)} (y_ij − R_ij)² )

MAE = (1/X) Σ_{(i,j)} |y_ij − R_ij|

wherein X is the number of answers to questions by respondents in the test set, y_ij is the predicted score of respondent i answering question j, and R_ij is the actual score of respondent i answering question j.
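The two metrics above are straightforward to compute; the sample numbers in the check are invented:

```python
import numpy as np

def rmse_mae(y_pred, y_true):
    """Return (RMSE, MAE) over the X test answers, given flat arrays of
    predicted and actual scores."""
    err = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))

rmse, mae = rmse_mae([1.0, 2.0, 4.0], [1.0, 2.0, 1.0])
assert np.isclose(rmse, 3.0 ** 0.5)   # sqrt((0 + 0 + 9) / 3)
assert np.isclose(mae, 1.0)           # (0 + 0 + 3) / 3
```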
After the RMSE and the MAE are obtained through calculation, the electronic equipment compares each of them with a corresponding preset value. When the RMSE and the MAE are both smaller than their corresponding preset values, the respondent recommendation model meets the requirements; otherwise, the model needs to be retrained to update the parameters in the respondent recommendation model.
In this embodiment, the effectiveness of the trained respondent recommendation model was demonstrated through a number of experiments. Firstly, the effectiveness of the model under different degrees of sparsity is verified by testing training data with different sparsity; then the model is compared horizontally with popular recommendation system models to demonstrate its accuracy.
Fig. 5a shows the comparison of RMSE and MAE of the model in this example with other models, and the specific comparison values are shown in table 3:
TABLE 3 RMSE and MAE for each model
        ALS      FM       DeepFM   Wide&Deep   ALS&DeepFM
RMSE    0.9713   1.8249   1.5972   1.7406      0.9064
MAE     0.6105   1.4025   1.0147   1.3993      0.5979
As shown in table 3, the algorithm fusing ALS and DeepFM outperforms the other algorithms on the Haichuan chemical forum data set, which proves the effectiveness of the proposed model.
Fig. 5a and 5b respectively show the comparison of RMSE and MAE under different sparsity between the model in this embodiment and other models, and the specific comparison values are shown in table 4:
TABLE 4 comparison of RMSE and MAE for each model at different sparseness
Degree of sparseness   0.9           0.8           0.7           0.55          0.6
DeepFM_rmse            1.307         1.3489        1.3271        1.336         1.3109
DeepFM_mae             1.071675691   1.101043524   1.080375222   1.099633608   1.072655402
ALS_rmse               1.043789956   1.117488631   1.193554096   1.308455187   1.265829941
ALS_mae                0.670472127   0.734138246   0.797058177   0.900358312   0.863830444
ALS&DeepFM_rmse        0.969557455   1.034683966   1.095910241   1.17371746    1.141342636
ALS&DeepFM_mae         0.650316573   0.706888409   0.75651957    0.832119333   0.800746117
For data sets with different sparsity, the experimental results are shown in table 4; it can be seen that the improvement of the respondent recommendation model provided by this embodiment over the original models is more obvious when the data are sparser.
In general, when the user–question interaction data are very sparse, the model proposed in this embodiment predicts the obtainable wealth value of an answer more accurately than matrix decomposition alone or DeepFM alone. Although the mean square error is only improved by about 0.07, for a chemical forum with millions of users the prediction results can help thousands of questions find users who can answer them accurately.
To verify the applicability of the algorithm, we applied it to the MovieLens data set and predicted users' scores for movies; the results are shown in table 5:
TABLE 5 comparison of the results of the scores of different models on movies
        ALS      FM       DeepFM   Wide&Deep   DeepFM&ALS   Wide&Deep&DeepFM
RMSE    0.9527   0.9319   0.9885   0.7943      0.9546       0.7423
MAE     0.7574   0.7367   0.7763   0.4754      0.7574       0.4628
As shown in table 5, the MovieLens data set contains only four user features (gender, age, occupation and zip code) and three movie features (title, release time and website), so the number of features is much smaller than in the chemical forum data set. Under these conditions Wide & Deep performs better than DeepFM; the first learner of the algorithm is therefore replaced with Wide & Deep, which obtains better results on a low-dimensional, dense data set such as MovieLens.
In the training method of the respondent recommendation model provided in this embodiment, after the respondent recommendation model is obtained through training, the respondent recommendation model is evaluated by using a test set, so as to further ensure the reliability of the respondent recommendation model.
In accordance with an embodiment of the present invention, there is provided an embodiment of a method for respondent recommendation, it being noted that the steps illustrated in the flowchart of the drawings may be performed in a computer system such as a set of computer executable instructions and that while a logical order is illustrated in the flowchart, in some cases the steps illustrated or described may be performed in an order different than that herein.
In this embodiment, a method for recommending respondents is provided, which may be used in electronic devices, such as computers, mobile phones, tablet computers, and the like, fig. 6 is a flowchart of a method for recommending respondents according to an embodiment of the present invention, as shown in fig. 6, the flowchart includes the following steps:
and S41, acquiring the characteristics of the target question and the characteristics of a plurality of respondents to obtain an initial scoring matrix.
The electronic device obtains an initial scoring matrix by collecting characteristics of the target question and characteristics of the plurality of respondents. The initial scoring matrix may be calculated by using similarity between the features of the target question and the features of the questions in the sample set, and similarity between the features of the respondents and the features of the respondents in the sample set.
And S42, coding the characteristics of the target question and the characteristics of a plurality of respondents, inputting the coding result into a DeepFM module in a respondent recommendation model, and filling the initial scoring matrix to obtain the scoring matrix.
After obtaining the characteristics of the target question and the characteristics of the multiple respondents, the electronic equipment performs one-hot coding on them, inputs the coding result into the DeepFM module in the respondent recommendation model, and fills the initial scoring matrix to obtain the scoring matrix. For details, please refer to S12 in the embodiment shown in fig. 1 or S22 in the embodiment shown in fig. 3, which are not described herein again.
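A minimal sketch of one-hot coding a single categorical feature is shown below; the field name and vocabulary are hypothetical, and a real pipeline would encode every respondent and question field this way before feeding the DeepFM module:

```python
def one_hot(value, vocabulary):
    """Return a one-hot list with a 1 at the position of `value` in `vocabulary`."""
    return [1 if v == value else 0 for v in vocabulary]

# e.g. a made-up respondent field "expertise" with three possible values
vocab = ["chemistry", "mechanics", "safety"]
assert one_hot("mechanics", vocab) == [0, 1, 0]
```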
And S43, inputting the scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the scoring matrix to obtain a respondent matrix and a question matrix.
After the electronic device obtains the scoring matrix in S42, it inputs the scoring matrix into the matrix decomposition module in the respondent recommendation model to decompose it, obtaining a respondent matrix and a question matrix. For details of the process, refer to S13 in the embodiment shown in fig. 1 or S23 in the embodiment shown in fig. 3.
And S44, determining the target respondent corresponding to the target question based on the respondent matrix and the question matrix.
After obtaining the respondent matrix and the question matrix, the electronic device can predict the wealth value each respondent would obtain by answering the target question, and the respondent corresponding to the maximum wealth value is determined as the target respondent. This step will be described in detail below.
According to the respondent recommendation method provided by the embodiment of the invention, the DeepFM module fills in wealth values for answers that participated in answering but obtained no wealth value, so that the filled scoring matrix is denser and the cold-start problem is alleviated, while the matrix decomposition module improves the accuracy of prediction. The DeepFM module and the matrix decomposition module in the recommendation method thus form a strong learner, improving the accuracy of the prediction result.
In this embodiment, a method for recommending respondents is provided, which may be used in electronic devices, such as computers, mobile phones, tablet computers, and the like, fig. 7 is a flowchart of a method for recommending respondents according to an embodiment of the present invention, as shown in fig. 7, the flowchart includes the following steps:
and S51, acquiring the characteristics of the target question and the characteristics of a plurality of respondents to obtain an initial scoring matrix.
Please refer to S41 in fig. 6 for details, which are not described herein.
And S52, coding the characteristics of the target question and the characteristics of a plurality of respondents, inputting the coding result into a DeepFM module in a respondent recommendation model, and filling the initial scoring matrix to obtain the scoring matrix.
Please refer to S42 in fig. 6 for details, which are not described herein.
And S53, inputting the scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the scoring matrix to obtain a respondent matrix and a question matrix.
Please refer to S43 in fig. 6 for details, which are not described herein.
And S54, determining the target respondent corresponding to the target question based on the respondent matrix and the question matrix.
Specifically, the step S54 includes the following steps:
s541, calculating a product of the respondent matrix and the question matrix to obtain a score value corresponding to each respondent answering the target question.
And the electronic equipment calculates the product of the matrix of the respondents and the matrix of the questions to obtain the score value corresponding to each respondent answering the target question.
And S542, determining the respondent corresponding to the highest score value as the target respondent.
And the electronic equipment determines the respondent corresponding to the highest score value as the target respondent.
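Picking the highest-scoring respondent is a single argmax; the score values below are hypothetical:

```python
import numpy as np

# Hypothetical score values of three respondents for the target question,
# e.g. one column of the product of the respondent matrix and the question matrix.
scores = np.array([0.31, 0.87, 0.44])
target_respondent = int(np.argmax(scores))   # index of the highest score
assert target_respondent == 1
```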
According to the respondent recommendation method provided by the embodiment, the respondent matrix and the question matrix are directly used for point multiplication, so that the score prediction value of each respondent corresponding to the question can be obtained, the calculation process is simplified, and the training efficiency is improved.
The sparsity of the chemical forum data set puts great pressure on a recommendation system. The respondent recommendation method provided by this embodiment addresses this challenge with a respondent recommendation algorithm that combines matrix decomposition and DeepFM through ensemble learning, making full use of users' historical question-answering behavior and of the features of users and questions. To alleviate the sparsity and cold-start problems, this embodiment uses the DeepFM module to fill in the wealth values of answers that received no wealth value because the question attracted little attention or the answer was posted late, so that the matrix subsequently decomposed is denser and the accuracy is higher. The effectiveness and feasibility of the method are verified through prediction on real problems.
The embodiment also provides a training device for the respondent recommendation model and a respondent recommendation device, which are used for implementing the above embodiments and preferred embodiments and are not described again after being described. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the means described in the embodiments below are preferably implemented in software, an implementation in hardware, or a combination of software and hardware is also possible and contemplated.
The present embodiment provides a training apparatus for an answerer recommendation model, as shown in fig. 8, including:
a first obtaining module 61, configured to obtain a sample scoring matrix and a data set for which a wealth value is not obtained; wherein the dataset of unobtained wealth values includes respondent characteristics and question characteristics;
a filling module 62, configured to train a deep fm module in the respondent recommendation model with the data set without the obtained wealth value to fill the sample scoring matrix;
the first decomposition module 63 is configured to input the filled sample scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the sample scoring matrix, so as to obtain a respondent matrix and a question matrix;
a prediction module 64, configured to predict respondents to the question based on the respondent matrix and the question matrix;
an updating module 65, configured to update parameters of the deep fm module and the matrix decomposition module by using the target score value corresponding to the score prediction value and the question, and determine the respondent recommendation model.
The present embodiment provides an respondent recommending apparatus, as shown in fig. 9, including:
a second obtaining module 71, configured to obtain features of the target question and features of the multiple respondents to obtain an initial scoring matrix;
the encoding module 72 is configured to encode the features of the target question and the features of the multiple respondents, input an encoding result into a deep fm module in a respondent recommendation model, and fill the initial scoring matrix to obtain a scoring matrix;
the second decomposition module 73 is configured to input the scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the scoring matrix, so as to obtain a respondent matrix and a question matrix;
a determining module 74, configured to determine a target respondent corresponding to the target question based on the respondent matrix and the question matrix.
The training device of the respondent recommendation model and the respondent recommendation device in this embodiment are presented in the form of functional units, where the units refer to ASIC circuits, processors and memories executing one or more software or fixed programs, and/or other devices that can provide the above-described functions.
Further functional descriptions of the modules are the same as those of the corresponding embodiments, and are not repeated herein.
An embodiment of the present invention further provides an electronic device, which includes the training apparatus of the respondent recommendation model shown in fig. 8 or the respondent recommendation apparatus shown in fig. 9.
Referring to fig. 10, fig. 10 is a schematic structural diagram of an electronic device according to an alternative embodiment of the present invention, as shown in fig. 10, the electronic device may include: at least one processor 81, such as a CPU (Central Processing Unit), at least one communication interface 83, memory 84, and at least one communication bus 82. Wherein a communication bus 82 is used to enable the connection communication between these components. The communication interface 83 may include a Display (Display) and a Keyboard (Keyboard), and the optional communication interface 83 may also include a standard wired interface and a standard wireless interface. The Memory 84 may be a high-speed RAM Memory (volatile Random Access Memory) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. The memory 84 may optionally be at least one memory device located remotely from the processor 81. Wherein the processor 81 may be in connection with the apparatus described in fig. 8 or fig. 9, an application program is stored in the memory 84, and the processor 81 calls the program code stored in the memory 84 for performing any of the above-mentioned method steps.
The communication bus 82 may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus. The communication bus 82 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 10, but this is not intended to represent only one bus or type of bus.
The memory 84 may include a volatile memory (RAM), such as a random-access memory (RAM); the memory may also include a non-volatile memory (english: non-volatile memory), such as a flash memory (english: flash memory), a hard disk (english: hard disk drive, abbreviation: HDD), or a solid-state drive (english: SSD); the memory 84 may also comprise a combination of the above types of memory.
The processor 81 may be a Central Processing Unit (CPU), a Network Processor (NP), or a combination of CPU and NP.
The processor 81 may further include a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The aforementioned PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
Optionally, the memory 84 is also used to store program instructions. The processor 81 may invoke program instructions to implement a method of training the respondent recommendation model as shown in the embodiments of fig. 1, 3 and 4 of the present application, or a method of respondent recommendation as shown in the embodiments of fig. 6 and 7 of the present application.
Embodiments of the present invention further provide a non-transitory computer storage medium, where the computer storage medium stores computer-executable instructions, and the computer-executable instructions may perform a training method of an answerer recommendation model or an answerer recommendation method in any of the above method embodiments. The storage medium may be a magnetic Disk, an optical Disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory (Flash Memory), a Hard Disk (Hard Disk Drive, abbreviated as HDD), a Solid State Drive (SSD), or the like; the storage medium may also comprise a combination of memories of the kind described above.
Although the embodiments of the present invention have been described in conjunction with the accompanying drawings, those skilled in the art may make various modifications and variations without departing from the spirit and scope of the invention, and such modifications and variations fall within the scope defined by the appended claims.

Claims (10)

1. A method for training an answerer recommendation model, comprising:
acquiring a sample scoring matrix and a data set without a wealth value; wherein the dataset of unobtained wealth values includes respondent characteristics and question characteristics;
training a DeepFM module in a responder recommendation model by using the data set without the acquired wealth value to fill the sample scoring matrix;
inputting the filled sample scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the sample scoring matrix to obtain a respondent matrix and a question matrix;
predicting respondents corresponding to the question based on the respondent matrix and the question matrix;
and updating the parameters of the deep FM module and the matrix decomposition module by using the scoring prediction value and a target scoring value corresponding to the question, and determining the respondent recommendation model.
2. The training method of claim 1, wherein predicting respondents to the question based on the matrix of respondents and the matrix of questions comprises:
and calculating the product of the matrix of the respondents and the matrix of the question, and determining the score prediction value of each respondent corresponding to the question.
3. The training method according to claim 2, wherein the updating the parameters of the deep fm module and the matrix decomposition module by using the target score value corresponding to the score predicted value and the question to determine the respondent recommendation model comprises:
acquiring actual wealth values corresponding to the questions, average wealth values obtained in response, maximum wealth values obtained in response and minimum wealth values obtained in response;
calculating the wealth value obtained by answering each question based on the actual wealth value corresponding to each question, the average wealth value obtained by answering, the maximum wealth value obtained by answering and the minimum wealth value obtained by answering;
and calculating a loss function by using the wealth value obtained by each question answer, the filling result of the deep FM module, the sample scoring matrix and the filled sample scoring matrix so as to update the filling result of the deep FM module and determine the respondent recommendation model.
4. A training method as claimed in claim 3, wherein the wealth value obtained from each of said question answers is calculated using the formula:

x′ = (x − μ) / (x_max − x_min)

wherein x′ is the wealth value obtained for each of said question answers; x is the actual wealth value; μ is the average wealth value obtained from the answers; x_max is the maximum wealth value obtained from the answers; x_min is the minimum wealth value obtained from the answers.
5. Training method according to claim 3, characterized in that the loss function is calculated using the following formula:

Loss = Σ_{i,j} (r̂_ij − p_i · q_j)² + c · D + λ(‖P‖² + ‖Q‖²)

wherein r_ij is the score in row i, column j of the sample scoring matrix; r̂_ij is the score in row i, column j of the filled sample scoring matrix; D is the mean square error between the filling result of the DeepFM module and the real result; c indicates whether the entry is a prediction result of the DeepFM module and takes the value 0 or 1; λ is the regularization coefficient; P is the respondent matrix; Q is the question matrix.
6. Training method according to any of claims 1-5, characterized in that the training method further comprises:
acquiring a test set;
inputting the test set into the respondent recommendation model to obtain a prediction result;
and determining whether the parameters in the respondent recommendation model need to be updated again or not based on the prediction result.
7. An respondent recommendation method, comprising:
acquiring the characteristics of a target question and the characteristics of a plurality of respondents to obtain an initial scoring matrix;
coding the characteristics of the target question and the characteristics of a plurality of respondents, inputting a coding result into a deep FM module in a respondent recommendation model, and filling the initial scoring matrix to obtain a scoring matrix;
inputting the scoring matrix into a matrix decomposition module in the respondent recommendation model to decompose the scoring matrix to obtain a respondent matrix and a question matrix;
and determining the target respondent corresponding to the target question based on the respondent matrix and the question matrix.
8. The method according to claim 7, wherein the determining the target respondent to the target question based on the matrix of respondents and the matrix of questions comprises:
calculating the product of the respondent matrix and the question matrix to obtain the score value corresponding to each respondent answering the target question;
and determining the respondent corresponding to the highest scoring value as the target respondent.
9. An electronic device, comprising:
a memory and a processor, the memory and the processor being communicatively connected to each other, the memory having stored therein computer instructions, the processor executing the computer instructions to perform the method of training the respondent recommendation model according to any one of claims 1 to 6, or to perform the method of performing the respondent recommendation according to claim 7 or 8.
10. A computer-readable storage medium storing computer instructions for causing a computer to perform a method of training an respondent recommendation model according to any one of claims 1 to 6, or a method of performing an respondent recommendation according to claim 7 or 8.
CN202010767815.8A 2020-08-03 2020-08-03 Training method and recommendation method of responder recommendation model and electronic equipment Pending CN111881282A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010767815.8A CN111881282A (en) 2020-08-03 2020-08-03 Training method and recommendation method of responder recommendation model and electronic equipment


Publications (1)

Publication Number Publication Date
CN111881282A true CN111881282A (en) 2020-11-03

Family

ID=73205380


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130097178A1 (en) * 2011-10-17 2013-04-18 Microsoft Corporation Question and Answer Forum Techniques
CN110413877A (en) * 2019-07-02 2019-11-05 阿里巴巴集团控股有限公司 A kind of resource recommendation method, device and electronic equipment
CN111298445A (en) * 2020-02-07 2020-06-19 腾讯科技(深圳)有限公司 Target account detection method and device, electronic equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Xu Ri; Zhang Mi: "Rating Prediction Framework of Recommender System Based on the Adaboost Algorithm", Computer Systems & Applications, no. 08, 15 August 2017, pages 1-7 *
Cao Zhanwei; Hu Xiaopeng: "A Recommendation Algorithm Combining Topic Model", Application Research of Computers, no. 06, 30 June 2019, pages 1-5 *
Li Yiye; Deng Haojiang: "Collaborative Filtering Recommendation Algorithm Based on Improved Cosine Similarity", Computer and Modernization, no. 01, 15 January 2020 *
Luo Lang; Wang Li; Zhou Zhiping; Zhao Weidong: "Research on Application of Science and Technology Resource Recommendation Based on the DeepFM Model", Application Research of Computers, no. 1, 30 June 2020 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112836081A (en) * 2021-03-01 2021-05-25 腾讯音乐娱乐科技(深圳)有限公司 Neural network model training method, information recommendation method and storage medium
CN113553487A (en) * 2021-07-28 2021-10-26 恒安嘉新(北京)科技股份公司 Website type detection method and device, electronic equipment and storage medium
CN113553487B (en) * 2021-07-28 2024-04-09 恒安嘉新(北京)科技股份公司 Method and device for detecting website type, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
Laukaityte et al. Using plausible values in secondary analysis in large-scale assessments
Tang et al. An exploratory analysis of the latent structure of process data via action sequence autoencoders
CN111538868B (en) Knowledge tracking method and problem recommendation method
Iannario et al. A generalized framework for modelling ordinal data
Almquist et al. Logistic network regression for scalable analysis of networks with joint edge/vertex dynamics
CN111177473B (en) Personnel relationship analysis method, device and readable storage medium
Arashpour et al. Predicting individual learning performance using machine‐learning hybridized with the teaching‐learning‐based optimization
CN109739995B (en) Information processing method and device
Bidah et al. Stability and Global Sensitivity Analysis for an Agree‐Disagree Model: Partial Rank Correlation Coefficient and Latin Hypercube Sampling Methods
Good Explicativity, corroboration, and the relative odds of hypotheses
CN111881282A (en) Training method and recommendation method of responder recommendation model and electronic equipment
JP2017199355A (en) Recommendation generation
Fosdick et al. Multiresolution network models
Yang et al. In-context operator learning with data prompts for differential equation problems
Dao et al. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns
CN110502701B (en) Friend recommendation method, system and storage medium introducing attention mechanism
Brandt et al. Conflict forecasting with event data and spatio-temporal graph convolutional networks
Stewart Latent factor regressions for the social sciences
Huang et al. Response speed enhanced fine-grained knowledge tracing: A multi-task learning perspective
Toribio et al. Discrepancy measures for item fit analysis in item response theory
CN115099711A (en) Member selection method and device, computer equipment and storage medium
Zhou Research on teaching resource recommendation algorithm based on deep learning and cognitive diagnosis
Sassi et al. A methodology using neural network to cluster validity discovered from a marketing database
Alquier et al. Tight risk bound for high dimensional time series completion
CN112035567A (en) Data processing method and device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination