CN112862008A - Training method of preference prediction model and prediction method of user preference - Google Patents
- Publication number
- CN112862008A CN112862008A CN202110333952.5A CN202110333952A CN112862008A CN 112862008 A CN112862008 A CN 112862008A CN 202110333952 A CN202110333952 A CN 202110333952A CN 112862008 A CN112862008 A CN 112862008A
- Authority
- CN
- China
- Prior art keywords
- evaluation data
- user
- training
- historical evaluation
- preference prediction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000012549 training Methods 0.000 title claims abstract description 129
- 238000000034 method Methods 0.000 title claims abstract description 61
- 238000011156 evaluation Methods 0.000 claims abstract description 120
- 238000004422 calculation algorithm Methods 0.000 claims description 6
- 238000001914 filtration Methods 0.000 claims description 6
- 238000004590 computer program Methods 0.000 claims description 5
- 238000010276 construction Methods 0.000 claims description 2
- 230000006870 function Effects 0.000 description 9
- 239000013598 vector Substances 0.000 description 9
- 238000010586 diagram Methods 0.000 description 6
- 230000000694 effects Effects 0.000 description 3
- 230000003287 optical effect Effects 0.000 description 3
- 238000012545 processing Methods 0.000 description 3
- 238000013459 approach Methods 0.000 description 2
- 238000005034 decoration Methods 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000011022 operating instruction Methods 0.000 description 2
- 230000003068 static effect Effects 0.000 description 2
- 230000009286 beneficial effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000003066 decision tree Methods 0.000 description 1
- 238000012417 linear regression Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000002093 peripheral effect Effects 0.000 description 1
- 238000007781 pre-processing Methods 0.000 description 1
- 238000012706 support-vector machine Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/40—Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
- G06F16/43—Querying
- G06F16/435—Filtering based on additional data, e.g. user or group profiles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/60—Information retrieval; Database structures therefor; File system structures therefor of audio data
- G06F16/63—Querying
- G06F16/635—Filtering based on additional data, e.g. user or group profiles
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Databases & Information Systems (AREA)
- Multimedia (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiment of the application provides a training method of a preference prediction model and a prediction method of user preference. The method comprises the following steps: acquiring historical evaluation data of a user on an object; determining whether the data volume of the historical evaluation data meets a preset condition; constructing a training set based on whether the data volume of the historical evaluation data meets the preset condition; and training a preference prediction model based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine (RBM) model. Based on the preference prediction model obtained by the scheme, the hidden features of the user can be better extracted for preference prediction, and the accuracy of user preference prediction is improved.
Description
Technical Field
The application relates to the technical field of data processing, in particular to a training method of a preference prediction model and a prediction method of user preference.
Background
At present, new products emerge in an endless stream, and the number of active users is an important index for evaluating a product. How to maintain rapid growth in the number of active users has become a problem that many companies need to consider and solve, and predicting users' preferences so as to keep them satisfied is a good solution.
One approach to predicting user preferences is the feature-based approach. The idea of this method is to directly use the observable features of a user as input and predict the user's preference by building models such as linear regression, Bayesian networks, support vector machines and decision trees.
Preference prediction based on observable features has a bottleneck: the prediction granularity is limited. Taking the prediction of a user's liking for different videos as an example, the prediction cannot be made precise to the (user, video) pair. For instance, if the observable features of two users are identical, the predicted results for the same video are identical, yet such results may be unreasonable because the two users' actual preferences may differ. Therefore, it is desirable to provide a method for improving the accuracy of user preference prediction.
Disclosure of Invention
The present application aims to solve at least one of the above technical drawbacks. The technical scheme adopted by the application is as follows:
in a first aspect, an embodiment of the present application provides a method for training a preference prediction model, where the method includes:
acquiring historical evaluation data of a user on an object;
determining whether the data volume of the historical evaluation data meets a preset condition;
constructing a training set based on whether the data volume of the historical evaluation data meets a preset condition;
training a preference prediction model based on a training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model.
Optionally, constructing a training set based on whether the data size of the historical evaluation data meets a preset condition, including:
if the data size of the historical evaluation data does not meet the preset condition, constructing a training set based on the historical evaluation data and the observable characteristics;
and if the data volume of the historical evaluation data meets the preset condition, constructing a training set based on the historical evaluation data.
Optionally, training a preference prediction model based on the training set, comprising:
and training a preference prediction model based on the training set according to a collaborative filtering algorithm.
Optionally, the observable features comprise user features, the user features comprising at least one of:
the age of the user;
the gender of the user;
the occupation of the user.
Optionally, the observable features further include object features, and if the object is a video, the object features include at least one of:
keywords extracted from a video name of a video;
time information obtained from the year of release of the video.
Optionally, if the data size of the historical evaluation data does not satisfy the preset condition, after the preference prediction model is trained, the method further includes:
repeatedly acquiring historical evaluation data of the user on the object according to a preset retry rule until the data volume of the acquired historical evaluation data meets a preset condition, updating a training set based on the historical evaluation data, and training a preference prediction model based on the updated training set.
In a second aspect, an embodiment of the present application provides a method for predicting user preferences, where the method includes:
acquiring historical evaluation data of a user on an object;
and inputting the historical evaluation data into a pre-trained preference prediction model to obtain a preference prediction result of the user, wherein the preference prediction model is obtained by training according to the training method of the preference prediction model.
In a third aspect, an embodiment of the present application provides a device for training a preference prediction model, where the device includes:
the data acquisition module is used for acquiring historical evaluation data of the user on the object;
the data volume comparison module is used for determining whether the data volume of the historical evaluation data meets a preset condition or not;
the training set construction module is used for constructing a training set based on whether the data volume of the historical evaluation data meets a preset condition;
and the model training module is used for training a preference prediction model based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model.
Optionally, the training set constructing module is specifically configured to:
if the data size of the historical evaluation data does not meet the preset condition, constructing a training set based on the historical evaluation data and the observable characteristics;
and if the data volume of the historical evaluation data meets the preset condition, constructing a training set based on the historical evaluation data.
Optionally, the model training module is specifically configured to:
and training a preference prediction model based on the training set according to a collaborative filtering algorithm.
Optionally, the observable features comprise user features, the user features comprising at least one of:
the age of the user;
the gender of the user;
the occupation of the user.
Optionally, the observable features further include object features, and if the object is a video, the object features include at least one of:
keywords extracted from a video name of a video;
time information obtained from the year of release of the video.
Optionally, if the data size of the historical evaluation data does not satisfy the preset condition, the model training module is further configured to:
after the preference prediction model is trained, repeatedly acquiring historical evaluation data of the user on the object according to a preset retry rule until the data volume of the acquired historical evaluation data meets a preset condition, updating the training set based on the historical evaluation data, and training the preference prediction model based on the updated training set.
In a fourth aspect, an embodiment of the present application provides an apparatus for predicting user preferences, where the apparatus includes:
the evaluation data acquisition module is used for acquiring historical evaluation data of the user on the object;
and the preference prediction module is used for inputting the historical evaluation data into a pre-trained preference prediction model to obtain a preference prediction result of the user, wherein the preference prediction model is obtained by training according to the training method of the preference prediction model.
In a fifth aspect, an embodiment of the present application provides an electronic device, including: a processor and a memory;
a memory for storing operating instructions;
a processor configured to perform the method as shown in any implementation of the first aspect or any implementation of the second aspect of the present application by calling an operation instruction.
In a sixth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements the method shown in any of the embodiments of the first aspect or any of the embodiments of the second aspect of the present application.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
according to the scheme provided by the embodiment of the application, whether the data volume of the historical evaluation data meets the preset condition or not is determined by acquiring the historical evaluation data of the user on the object, whether the data volume based on the historical evaluation data meets the preset condition or not is determined, a training set is constructed, a preference prediction model is trained based on the training set, and the preference prediction model is established based on a limited Bowman model. Based on the preference prediction model obtained by the scheme, the hidden features of the user can be better extracted for preference prediction, and the accuracy of the preference prediction of the user is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic flowchart of a method for training a preference prediction model according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart illustrating a method for predicting user preferences according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of a training apparatus for a preference prediction model according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an apparatus for predicting user preferences according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present invention.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 shows a schematic flowchart of a method for training a preference prediction model according to an embodiment of the present application, and as shown in fig. 1, the method mainly includes:
step S110: acquiring historical evaluation data of a user on an object;
step S120: determining whether the data volume of the historical evaluation data meets a preset condition;
step S130: constructing a training set based on whether the data volume of the historical evaluation data meets a preset condition;
step S140: training a preference prediction model based on a training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model.
In the embodiment of the application, the object may be a product, such as a video, and the historical evaluation data may be a score of the user on the object.
Hidden features play a very important role in predicting user preferences. For example, two users may appear to score videos similarly because, in essence, some of their hidden features are similar, such as both liking comedy films and disliking horror films. Therefore, in solving the problem of user preference prediction, it is critical to extract, from the already observed user preferences, the hidden features that matter most to the user.
In the embodiment of the application, the preset condition may be a preset data amount, and if the historical evaluation data is less, the trained prediction model may have a poor prediction effect, so that before the prediction model is trained according to the historical evaluation data, whether the data amount of the historical evaluation data meets the preset condition or not can be determined, and a training set is constructed according to whether the data amount of the historical evaluation data meets the preset condition or not, so as to ensure the prediction effect of the trained prediction model.
A Restricted Boltzmann Machine (RBM) can abstract features at deeper levels by stacking, and, as a probabilistic generative model, can be handled flexibly. Therefore, in order to better extract users' hidden features for preference prediction, in the embodiment of the application the user preference is predicted by establishing a Restricted Boltzmann Machine model.
According to the method provided by the embodiment of the application, historical evaluation data of a user on an object is acquired, whether the data volume of the historical evaluation data meets a preset condition is determined, a training set is constructed based on whether the data volume of the historical evaluation data meets the preset condition, and a preference prediction model is trained based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model. Based on the preference prediction model obtained by the scheme, the hidden features of the user can be better extracted for preference prediction, and the accuracy of user preference prediction is improved.
In an optional mode of the embodiment of the application, a training set is constructed based on whether the data size of the historical evaluation data meets a preset condition, including:
if the data size of the historical evaluation data does not meet the preset condition, constructing a training set based on the historical evaluation data and the observable characteristics;
and if the data volume of the historical evaluation data meets the preset condition, constructing a training set based on the historical evaluation data.
In the embodiment of the application, before the prediction model is trained according to the historical evaluation data, whether the data volume of the historical evaluation data meets the preset condition or not can be determined, if the data volume of the historical evaluation data meets the preset condition, the data volume of the historical evaluation data for model training can be considered to meet the requirement of model training, and a training set can be constructed according to the historical evaluation data.
If the data volume of the historical evaluation data does not meet the preset condition, the data volume of the historical evaluation data for model training can be considered to be incapable of meeting the requirement of model training, and in order to ensure the prediction effect of the trained prediction model, the observable features can be added for model training, and a training set is constructed through the observable features and the historical evaluation data.
Due to the introduction of observable features, even if the data volume of the historical evaluation data is small, the hidden features can be well predicted.
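As a minimal sketch of this branching logic (the min_records threshold, the data layout, and the function name are illustrative assumptions rather than part of the application), the construction could look as follows:

```python
def build_training_set(history, observable_features, min_records=1000):
    """history: list of (user_id, item_id, rating) tuples;
    observable_features: user/object feature vectors (assumed layout)."""
    if len(history) >= min_records:   # preset condition met: enough rating data
        return {"ratings": history}
    # too little rating data: fall back to ratings plus observable features
    return {"ratings": history, "features": observable_features}
```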
In an optional mode of the embodiment of the present application, training a preference prediction model based on a training set includes:
and training a preference prediction model based on the training set according to a collaborative filtering algorithm.
For the historical evaluation data, if every user had scored every product, an RBM with M visible nodes could be obtained. In practice, however, most entries of the score matrix are missing, and to cope with missing scores, the RBM in the embodiment of the application is trained according to a collaborative filtering algorithm. Specifically, one RBM may be trained for each user; all of the RBMs have the same number of hidden nodes, but the number of visible nodes of each RBM equals the number of products that user has scored. Thus, although each RBM has only one training example, the hidden-node biases and the video biases and weights are tied together across all RBMs. For example, if two users have both scored the same video, the weight values between that video and the hidden nodes are the same in the RBMs of the two users.
In the embodiment of the application, binarization processing can be performed on the historical evaluation data, that is, all nodes in the RBM have only two states, namely 0 and 1. Constructing a binarized RBM can simplify the training and prediction process.
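The following is a rough sketch, under stated assumptions, of such a binarized RBM trained in a collaborative-filtering manner: one RBM per user, but the item weights, item biases, and hidden biases are shared across users, and each user contributes one CD-1 (single-step contrastive divergence) update over only the items that user has scored. The class name, learning rate, and initialization are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SharedBinaryRBM:
    """Item weights and biases shared across every per-user RBM."""

    def __init__(self, n_items, n_hidden, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 0.01 * self.rng.standard_normal((n_items, n_hidden))  # tied weights
        self.b_item = np.zeros(n_items)      # visible (item) biases, shared
        self.b_hidden = np.zeros(n_hidden)   # hidden biases, shared

    def fit_user(self, item_ids, ratings, lr=0.05):
        """One CD-1 update restricted to the items this user has rated (0/1 ratings)."""
        v0 = ratings.astype(float)
        W = self.W[item_ids]                                   # (k, n_hidden) slice
        h0 = sigmoid(v0 @ W + self.b_hidden)                   # hidden probabilities
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ W.T + self.b_item[item_ids])   # reconstructed visibles
        h1 = sigmoid(v1 @ W + self.b_hidden)
        self.W[item_ids] += lr * (np.outer(v0, h0) - np.outer(v1, h1))
        self.b_item[item_ids] += lr * (v0 - v1)
        self.b_hidden += lr * (h0 - h1)

    def predict(self, item_ids, ratings, candidate_ids):
        """Probability that the user would like each candidate item."""
        h = sigmoid(ratings @ self.W[item_ids] + self.b_hidden)
        return sigmoid(h @ self.W[candidate_ids].T + self.b_item[candidate_ids])
```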
In an optional manner of the embodiment of the present application, the observable feature includes a user feature, and the user feature includes at least one of:
the age of the user;
the gender of the user;
the occupation of the user.
In the embodiment of the application, the user-related information can be used as observable features, such as the user age, the user gender and the user occupation.
In an optional manner of the embodiment of the present application, the observable features further include object features, and if the object is a video, the object features include at least one of the following:
keywords extracted from a video name of a video;
time information obtained from the year of release of the video.
In the embodiment of the present application, the observable features may further include object features, and if the object is a video, the object features may be extracted from related information of the video, for example, some keywords capable of representing the video features are extracted from a video name, or time information is obtained from a release year.
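A small illustrative sketch of deriving such object features is shown below; the keyword vocabulary and the decade bucketing of the release year are assumptions made for illustration only:

```python
def extract_video_features(video_name, release_year, keyword_vocabulary):
    """Keep only title words that appear in a predefined keyword vocabulary,
    and bucket the release year into a decade as coarse time information."""
    keywords = [w for w in video_name.lower().split() if w in keyword_vocabulary]
    return {"keywords": keywords, "decade": (release_year // 10) * 10}
```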
In an optional manner of the embodiment of the application, if the data size of the historical evaluation data does not satisfy the preset condition, after the preference prediction model is trained, the method further includes:
repeatedly acquiring historical evaluation data of the user on the object according to a preset retry rule until the data volume of the acquired historical evaluation data meets a preset condition, updating a training set based on the historical evaluation data, and training a preference prediction model based on the updated training set.
In the embodiment of the application, after the preference prediction model is trained, the historical evaluation data can be repeatedly tried to be acquired, the data volume of the acquired historical evaluation data is detected, when the data volume of the acquired historical evaluation data is determined to meet the preset condition, a training set can be reconstructed based on the currently acquired historical evaluation data, and the preference prediction model is retrained according to the reconstructed training set.
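A minimal sketch of this retry logic follows; the polling interval and the record threshold stand in for the preset retry rule and the preset condition, which the application does not pin down:

```python
import time

def retrain_when_enough_data(fetch_history, build_training_set, train_model,
                             min_records=1000, retry_interval_s=3600):
    """Poll for new rating data until the preset condition is met, then retrain."""
    while True:
        history = fetch_history()
        if len(history) >= min_records:      # preset condition satisfied
            return train_model(build_training_set(history))
        time.sleep(retry_interval_s)         # preset retry rule: wait and try again
```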
As one example, the observable features may be pre-processed, quantifying user features and video features. In order to facilitate the calculation in the model, in this example, both the user features and the video features are mapped into binary vectors, and the specific relationship is as follows:
age: in this example, age is divided into five intervals [-17] [18-24] [25-34] [35-44] [45-], and a binary vector of length 5 is used to indicate which interval the user's age falls into;
gender: a binary vector of length 2 is used, where (1, 0) represents male and (0, 1) represents female;
occupation: represented by a binary vector of length 21; if the user's occupation is the i-th, the i-th element of the vector is 1 and the other elements are 0;
movie genre: represented by a binary vector of length 19; if the genre of the video is the j-th, the j-th element of the vector is 1 and the other elements are 0;
after data preprocessing, a vector with the length of 28 can be obtained for each user to serve as an observable feature; a length 19 vector may be obtained for each video as an observable feature.
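A sketch of this mapping, following the interval boundaries and index conventions of the example above (the helper names and the gender encoding string are illustrative assumptions):

```python
import numpy as np

AGE_BOUNDS = [18, 25, 35, 45]   # splits the five intervals [-17][18-24][25-34][35-44][45-]

def encode_user(age, gender, occupation_index, n_occupations=21):
    age_vec = np.zeros(5)
    age_vec[np.searchsorted(AGE_BOUNDS, age, side="right")] = 1
    gender_vec = np.array([1.0, 0.0]) if gender == "male" else np.array([0.0, 1.0])
    occ_vec = np.zeros(n_occupations)
    occ_vec[occupation_index] = 1                # i-th occupation -> i-th element
    return np.concatenate([age_vec, gender_vec, occ_vec])   # length 5 + 2 + 21 = 28

def encode_video(genre_index, n_genres=19):
    genre_vec = np.zeros(n_genres)
    genre_vec[genre_index] = 1                   # j-th genre -> j-th element
    return genre_vec                             # length 19
```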
Fig. 2 is a flowchart illustrating a user preference prediction method provided in an embodiment of the present application, and as shown in fig. 2, the method mainly includes:
step S210: acquiring historical evaluation data of a user on an object;
step S220: and inputting the historical evaluation data into a pre-trained preference prediction model to obtain a preference prediction result of the user, wherein the preference prediction model is obtained by training according to the training method of the preference prediction model.
In the embodiment of the application, the object may be a product, such as a video, and the historical evaluation data may be a score of the user on the object.
When predicting a user's preference, the user's historical evaluation data can be input into a pre-trained preference prediction model. Since the prediction model is established based on a Restricted Boltzmann Machine model, features can be abstracted at deeper levels by stacking, the user's hidden features can be better extracted for preference prediction, and the predicted user preference is therefore more accurate.
According to the method provided by the embodiment of the application, historical evaluation data of a user on an object is acquired and input into a pre-trained preference prediction model to obtain the user's preference prediction result. Based on this scheme, the preference prediction model established on a Restricted Boltzmann Machine model is used to predict the user's preference, so that the hidden features of the user can be better extracted for preference prediction, and the accuracy of user preference prediction is improved.
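Continuing the earlier RBM sketch, a prediction for one user might look as follows; all variable values are illustrative:

```python
import numpy as np

# assuming SharedBinaryRBM from the earlier sketch is available
rbm = SharedBinaryRBM(n_items=5, n_hidden=4)
rated_items = np.array([0, 2, 3])            # items this user has evaluated
binary_ratings = np.array([1, 0, 1])         # binarised historical evaluation data
rbm.fit_user(rated_items, binary_ratings)    # one training pass on this user's history
scores = rbm.predict(rated_items, binary_ratings, candidate_ids=np.arange(5))
ranking = np.argsort(-scores)                # candidate items ranked by predicted preference
```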
Based on the same principle as the method shown in fig. 1, fig. 3 shows a schematic structural diagram of a training apparatus for a preference prediction model provided by an embodiment of the present application, and as shown in fig. 3, the training apparatus 30 for a preference prediction model may include:
a data obtaining module 310, configured to obtain historical evaluation data of the object by the user;
the data volume comparison module 320 is used for determining whether the data volume of the historical evaluation data meets a preset condition;
a training set constructing module 330, configured to construct a training set based on whether the data size of the historical evaluation data meets a preset condition;
and the model training module 340 is used for training a preference prediction model based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model.
According to the device provided by the embodiment of the application, historical evaluation data of a user on an object is acquired, whether the data volume of the historical evaluation data meets a preset condition is determined, a training set is constructed based on whether the data volume of the historical evaluation data meets the preset condition, and a preference prediction model is trained based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model. Based on the preference prediction model obtained by the scheme, the hidden features of the user can be better extracted for preference prediction, and the accuracy of user preference prediction is improved.
Optionally, the training set constructing module is specifically configured to:
if the data size of the historical evaluation data does not meet the preset condition, constructing a training set based on the historical evaluation data and the observable characteristics;
and if the data volume of the historical evaluation data meets the preset condition, constructing a training set based on the historical evaluation data.
Optionally, the model training module is specifically configured to:
and training a preference prediction model based on the training set according to a collaborative filtering algorithm.
Optionally, the observable features comprise user features, the user features comprising at least one of:
the age of the user;
the gender of the user;
the occupation of the user.
Optionally, the observable features further include object features, and if the object is a video, the object features include at least one of:
keywords extracted from a video name of a video;
time information obtained from the year of release of the video.
Optionally, if the data size of the historical evaluation data does not satisfy the preset condition, the model training module is further configured to:
after the preference prediction model is trained, repeatedly acquiring historical evaluation data of the user on the object according to a preset retry rule until the data volume of the acquired historical evaluation data meets a preset condition, updating the training set based on the historical evaluation data, and training the preference prediction model based on the updated training set.
It is to be understood that the above-described modules of the training apparatus of the preference prediction model in the present embodiment have functions of implementing the corresponding steps of the training method of the preference prediction model in the embodiment shown in fig. 1. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above. The modules can be software and/or hardware, and each module can be implemented independently or by integrating a plurality of modules. For the functional description of each module of the training apparatus for the preference prediction model, reference may be specifically made to the corresponding description of the training method for the preference prediction model in the embodiment shown in fig. 1, and details are not repeated here.
Based on the same principle as the method shown in fig. 2, fig. 4 shows a schematic structural diagram of a user preference prediction apparatus provided by an embodiment of the present application, and as shown in fig. 4, the user preference prediction apparatus 40 may include:
an evaluation data obtaining module 410, configured to obtain historical evaluation data of the user on the object;
and the preference prediction module 420 is configured to input the historical evaluation data into a pre-trained preference prediction model to obtain a preference prediction result of the user, where the preference prediction model is obtained by training according to the above-mentioned training method of the preference prediction model.
The device provided by the embodiment of the application acquires historical evaluation data of a user on an object and inputs the historical evaluation data into a pre-trained preference prediction model to obtain the user's preference prediction result. Based on this scheme, the preference prediction model established on a Restricted Boltzmann Machine model is used to predict the user's preference, so that the hidden features of the user can be better extracted for preference prediction, and the accuracy of user preference prediction is improved.
It is to be understood that the above modules of the user preference prediction apparatus in the embodiment have functions of implementing the corresponding steps of the user preference prediction method in the embodiment shown in fig. 2. The function can be realized by hardware, and can also be realized by executing corresponding software by hardware. The hardware or software includes one or more modules corresponding to the functions described above. The modules can be software and/or hardware, and each module can be implemented independently or by integrating a plurality of modules. For the functional description of each module of the prediction apparatus of the user preference, reference may be specifically made to the corresponding description of the prediction method of the user preference in the embodiment shown in fig. 2, and details are not repeated here.
The embodiment of the application provides an electronic device, which comprises a processor and a memory;
a memory for storing operating instructions;
and the processor is used for executing the method provided by any embodiment of the application by calling the operation instruction.
As an example, fig. 5 shows a schematic structural diagram of an electronic device to which an embodiment of the present application is applicable, and as shown in fig. 5, the electronic device 2000 includes: a processor 2001 and a memory 2003. Wherein the processor 2001 is coupled to a memory 2003, such as via a bus 2002. Optionally, the electronic device 2000 may also include a transceiver 2004. It should be noted that the transceiver 2004 is not limited to one in practical applications, and the structure of the electronic device 2000 is not limited to the embodiment of the present application.
The processor 2001 is applied to the embodiment of the present application to implement the method shown in the above method embodiment. The transceiver 2004 may include a receiver and a transmitter, and the transceiver 2004 is applied to the embodiments of the present application to implement the functions of the electronic device of the embodiments of the present application to communicate with other devices when executed.
The Processor 2001 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 2001 may also be a combination of computing functions, e.g., a combination of one or more microprocessors, or a combination of a DSP and a microprocessor, and the like.
The Memory 2003 may be a ROM (Read Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read Only Memory), a CD-ROM (Compact Disc Read Only Memory) or other optical Disc storage, optical Disc storage (including Compact Disc, laser Disc, optical Disc, digital versatile Disc, blu-ray Disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited to these.
Optionally, the memory 2003 is used for storing application program code for performing the disclosed aspects, and is controlled in execution by the processor 2001. The processor 2001 is used to execute the application program code stored in the memory 2003 to implement the methods provided in any of the embodiments of the present application.
The electronic device provided by the embodiment of the application is applicable to any embodiment of the method, and is not described herein again.
Compared with the prior art, the electronic device acquires historical evaluation data of a user on an object, determines whether the data volume of the historical evaluation data meets a preset condition, constructs a training set based on whether the data volume of the historical evaluation data meets the preset condition, and trains a preference prediction model based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model. Based on the preference prediction model obtained by the scheme, the hidden features of the user can be better extracted for preference prediction, and the accuracy of user preference prediction is improved.
The present application provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the computer program implements the method shown in the above method embodiments.
The computer-readable storage medium provided in the embodiments of the present application is applicable to any of the embodiments of the foregoing method, and is not described herein again.
Compared with the prior art, by means of the computer-readable storage medium, historical evaluation data of a user on an object is acquired, whether the data volume of the historical evaluation data meets a preset condition is determined, a training set is constructed based on whether the data volume of the historical evaluation data meets the preset condition, and a preference prediction model is trained based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model. Based on the preference prediction model obtained by the scheme, the hidden features of the user can be better extracted for preference prediction, and the accuracy of user preference prediction is improved.
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to the order shown, and they may be performed in other orders. Moreover, at least a portion of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least a portion of the sub-steps or stages of other steps.
The foregoing is only a partial embodiment of the present invention, and it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.
Claims (11)
1. A method for training a preference prediction model, comprising:
acquiring historical evaluation data of a user on an object;
determining whether the data volume of the historical evaluation data meets a preset condition;
constructing a training set based on whether the data volume of the historical evaluation data meets a preset condition;
training a preference prediction model based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model.
2. The method according to claim 1, wherein the constructing a training set based on whether the data volume of the historical evaluation data meets a preset condition comprises:
if the data size of the historical evaluation data does not meet the preset condition, constructing a training set based on the historical evaluation data and observable characteristics;
and if the data volume of the historical evaluation data meets a preset condition, constructing a training set based on the historical evaluation data.
3. The method of claim 1, wherein training a preference prediction model based on the training set comprises:
and training a preference prediction model based on the training set according to a collaborative filtering algorithm.
4. The method of claim 1, wherein the observable features comprise user features comprising at least one of:
the age of the user;
the gender of the user;
the occupation of the user.
5. The method of claim 4, wherein the observable features further include object features, and wherein if the object is a video, the object features include at least one of:
keywords extracted from a video name of the video;
time information obtained from the year of release of the video.
6. The method according to claim 2, wherein if the data size of the historical evaluation data does not satisfy a preset condition, after training the preference prediction model, the method further comprises:
repeatedly acquiring historical evaluation data of a user on an object according to a preset retry rule until the data volume of the acquired historical evaluation data meets a preset condition, updating the training set based on the historical evaluation data, and training a preference prediction model based on the updated training set.
7. A method for predicting user preferences, comprising:
acquiring historical evaluation data of a user on an object;
inputting the historical evaluation data into a pre-trained preference prediction model to obtain a preference prediction result of the user, wherein the preference prediction model is obtained by training according to the training method of the preference prediction model of any one of claims 1 to 6.
8. A device for training a preference prediction model, comprising:
the data acquisition module is used for acquiring historical evaluation data of the user on the object;
the data volume comparison module is used for determining whether the data volume of the historical evaluation data meets a preset condition;
the training set construction module is used for constructing a training set based on whether the data volume of the historical evaluation data meets a preset condition or not;
and the model training module is used for training a preference prediction model based on the training set, wherein the preference prediction model is established based on a Restricted Boltzmann Machine model.
9. An apparatus for predicting user preferences, comprising:
the evaluation data acquisition module is used for acquiring historical evaluation data of the user on the object;
a preference prediction module, configured to input the historical evaluation data into a pre-trained preference prediction model to obtain a preference prediction result of the user, where the preference prediction model is obtained by training according to the training method of the preference prediction model according to any one of claims 1 to 6.
10. An electronic device comprising a processor and a memory;
the memory is used for storing operation instructions;
the processor is used for executing the method of any one of claims 1-7 by calling the operation instruction.
11. A computer-readable storage medium, characterized in that the storage medium has stored thereon a computer program which, when being executed by a processor, carries out the method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110333952.5A CN112862008A (en) | 2021-03-29 | 2021-03-29 | Training method of preference prediction model and prediction method of user preference |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110333952.5A CN112862008A (en) | 2021-03-29 | 2021-03-29 | Training method of preference prediction model and prediction method of user preference |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112862008A true CN112862008A (en) | 2021-05-28 |
Family
ID=75993115
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110333952.5A Pending CN112862008A (en) | 2021-03-29 | 2021-03-29 | Training method of preference prediction model and prediction method of user preference |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112862008A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418629A (en) * | 2021-12-30 | 2022-04-29 | 中国电信股份有限公司 | User loss prediction method and device, electronic equipment and readable storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105183748A (en) * | 2015-07-13 | 2015-12-23 | 电子科技大学 | Combined forecasting method based on content and score |
CN109376766A (en) * | 2018-09-18 | 2019-02-22 | 平安科技(深圳)有限公司 | A kind of portrait prediction classification method, device and equipment |
WO2019153518A1 (en) * | 2018-02-08 | 2019-08-15 | 平安科技(深圳)有限公司 | Information pushing method and device, computer device and storage medium |
CN111311324A (en) * | 2020-02-18 | 2020-06-19 | 电子科技大学 | User-commodity preference prediction system and method based on stable neural collaborative filtering |
CN111650453A (en) * | 2020-05-25 | 2020-09-11 | 武汉大学 | Power equipment diagnosis method and system based on windowing characteristic Hilbert imaging |
-
2021
- 2021-03-29 CN CN202110333952.5A patent/CN112862008A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105183748A (en) * | 2015-07-13 | 2015-12-23 | 电子科技大学 | Combined forecasting method based on content and score |
WO2019153518A1 (en) * | 2018-02-08 | 2019-08-15 | 平安科技(深圳)有限公司 | Information pushing method and device, computer device and storage medium |
CN109376766A (en) * | 2018-09-18 | 2019-02-22 | 平安科技(深圳)有限公司 | A kind of portrait prediction classification method, device and equipment |
CN111311324A (en) * | 2020-02-18 | 2020-06-19 | 电子科技大学 | User-commodity preference prediction system and method based on stable neural collaborative filtering |
CN111650453A (en) * | 2020-05-25 | 2020-09-11 | 武汉大学 | Power equipment diagnosis method and system based on windowing characteristic Hilbert imaging |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114418629A (en) * | 2021-12-30 | 2022-04-29 | 中国电信股份有限公司 | User loss prediction method and device, electronic equipment and readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111798273B (en) | Training method of product purchase probability prediction model and purchase probability prediction method | |
CN108920654B (en) | Question and answer text semantic matching method and device | |
CN111460130B (en) | Information recommendation method, device, equipment and readable storage medium | |
CN110008397B (en) | Recommendation model training method and device | |
CN110096617B (en) | Video classification method and device, electronic equipment and computer-readable storage medium | |
CN111782927B (en) | Article recommendation method and device and computer storage medium | |
CN110489574B (en) | Multimedia information recommendation method and device and related equipment | |
CN116601626A (en) | Personal knowledge graph construction method and device and related equipment | |
CN113656699B (en) | User feature vector determining method, related equipment and medium | |
CN110009486A (en) | A kind of method of fraud detection, system, equipment and computer readable storage medium | |
CN113761359B (en) | Data packet recommendation method, device, electronic equipment and storage medium | |
CN112150238A (en) | Deep neural network-based commodity recommendation method and system | |
CN111966916A (en) | Recommendation method and device, electronic equipment and computer readable storage medium | |
CN112989182B (en) | Information processing method, information processing device, information processing apparatus, and storage medium | |
CN114493674A (en) | Advertisement click rate prediction model and method | |
CN112862008A (en) | Training method of preference prediction model and prediction method of user preference | |
CN109886299B (en) | User portrait method and device, readable storage medium and terminal equipment | |
CN108805290B (en) | Entity category determination method and device | |
CN108596412A (en) | Cross-cutting methods of marking and Marking apparatus based on user's similarity | |
CN116467466A (en) | Knowledge graph-based code recommendation method, device, equipment and medium | |
CN110827078A (en) | Information recommendation method, device, equipment and storage medium | |
CN109784406A (en) | A kind of user draws a portrait method, apparatus, readable storage medium storing program for executing and terminal device | |
CN111353001A (en) | Method and device for classifying users | |
CN115758271A (en) | Data processing method, data processing device, computer equipment and storage medium | |
CN115114483A (en) | Method for processing graph data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination |