CN109754067A - Matrix disassembling method, device and electronic equipment based on convolution attention - Google Patents


Info

Publication number: CN109754067A
Application number: CN201811454614.1A
Authority: CN (China)
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 曾碧卿, 商齐
Current and original assignee: South China Normal University
Application filed by South China Normal University
Priority to CN201811454614.1A

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract

The present invention relates to a matrix factorization method, device and electronic equipment based on convolutional attention. The method of the present invention includes the following steps: representing the user description documents of an item as a word-vector matrix; inputting the word-vector matrix into a convolutional attention neural network to obtain the implicit factors of the item; obtaining the implicit factors of users through probabilistic matrix factorization, according to the users' rating information for the item and the item's implicit factors; computing the vector inner product of the item's implicit factors and the users' implicit factors, and building a predicted rating matrix from the inner products; and optimizing the parameters of the convolutional attention neural network and the probabilistic matrix factorization with the error back-propagation algorithm. The method of the present invention has good interpretability, effectively mitigates the item cold-start problem, alleviates data sparsity, and improves the accuracy of rating prediction.

Description

Matrix factorization method, device and electronic equipment based on convolutional attention
Technical field
The present invention relates to the technical field of data processing, and more particularly to a matrix factorization method, device and electronic equipment based on convolutional attention.
Background
Recommendation methods based on matrix factorization are a common basic approach at present. By factorizing a given user-item rating matrix, the original rating matrix is approximated by the product of two low-rank matrices; the goal of the approximation is to minimize the squared error between the predicted rating matrix and the original rating matrix. The two low-rank matrices are the feature matrices of users and items respectively, and the feature vectors in these matrices can be loosely interpreted as users' preferences for different attributes. Compared with content-based filtering methods, matrix factorization adapts better across domains and can handle unstructured data such as music and video. However, it suffers from data sparsity and cold-start problems, because pure matrix factorization ignores the hidden features in item review texts and description texts, which limits its predictive performance.
Existing rating-prediction approaches treat a user's rating of an item as a bare score. They do not examine, from the perspective of users and items, the deeper implications of one user's ratings across different items, or of different users' ratings of the same item. Plain matrix factorization cannot incorporate text features and cannot deeply understand users' review texts for items, so it can solve neither the cold-start problem nor the interpretability of recommendation.
Convolutional neural networks can be used to extract the hidden features of text, but they often neglect feature extraction for individual words. Different words in a text contribute differently to rating prediction, and most existing work does not account for this difference.
Summary of the invention
Based on this, the object of the present invention is to provide a matrix factorization method based on convolutional attention that has good interpretability, effectively mitigates the item cold-start problem, alleviates data sparsity, and improves the accuracy of rating prediction.
The matrix factorization method based on convolutional attention of the present invention is achieved by the following scheme:
A matrix factorization method based on convolutional attention, comprising the following steps:
representing the user description documents of an item as a word-vector matrix;
inputting the word-vector matrix into a convolutional attention neural network to obtain the implicit factors of the item;
obtaining the implicit factors of users through probabilistic matrix factorization, according to the users' rating information for the item and the item's implicit factors;
computing the vector inner product of the item's implicit factors and the users' implicit factors to obtain the users' predicted ratings for the item, and building a predicted rating matrix from the predicted ratings;
optimizing the parameters of the convolutional attention neural network and the probabilistic matrix factorization with the error back-propagation algorithm, according to the loss function between the predicted rating matrix and the true rating matrix.
The matrix factorization method based on convolutional attention of the present invention extracts the implicit factors of items through a convolutional attention neural network, obtains the implicit factors of users from the users' rating information and the items' implicit factors, computes the vector inner products of the item and user implicit factors to obtain predicted ratings, and uses the back-propagation algorithm to reduce the error of the loss function between the predicted rating matrix and the true rating matrix, optimizing the convolutional attention neural network and the probabilistic matrix factorization. The method has good interpretability, effectively mitigates the item cold-start problem, alleviates data sparsity, and improves the accuracy of rating prediction.
In one embodiment, before representing the user description documents of an item as a word-vector matrix, the method further includes the following steps:
removing vocabulary whose frequency in the user description documents is too high;
removing vocabulary whose frequency in the user description documents is too low.
By screening the user description documents in this way, the word-vector matrix of the documents can be obtained more accurately.
In one embodiment, before obtaining the implicit factors of users from their rating information for items, the method further includes the following step:
removing items that have no user description documents.
In one embodiment, before computing the vector inner product of the item's implicit factors and the users' implicit factors, the method further includes the following step:
assigning Gaussian noise of varying magnitude to the item according to its number of ratings, where the fewer ratings an item has, the larger the assigned Gaussian noise.
Assigning Gaussian noise to items according to their number of ratings improves the robustness of the item implicit factors.
Further, the present invention also provides a matrix factorization device based on convolutional attention, comprising:
a word-vector matrix module, for representing the user description documents of an item as a word-vector matrix;
an item implicit factor acquisition module, for inputting the word-vector matrix into a convolutional attention neural network to obtain the implicit factors of the item;
a user implicit factor acquisition module, for obtaining the implicit factors of users through probabilistic matrix factorization, according to the users' rating information for the item and the item's implicit factors;
a probabilistic matrix factorization module, for computing the vector inner product of the item's implicit factors and the users' implicit factors to obtain the users' predicted ratings for the item, and building a predicted rating matrix from the predicted ratings;
an optimization module, for optimizing the parameters of the convolutional attention neural network and the probabilistic matrix factorization with the error back-propagation algorithm, according to the loss function between the predicted rating matrix and the true rating matrix.
The matrix factorization device based on convolutional attention of the present invention extracts the implicit factors of items through a convolutional attention neural network, obtains the implicit factors of users from the users' rating information and the items' implicit factors, computes the vector inner products of the item and user implicit factors to obtain predicted ratings, and uses the back-propagation algorithm to reduce the error of the loss function between the predicted rating matrix and the true rating matrix, optimizing the convolutional attention neural network and the probabilistic matrix factorization. The device has good interpretability, effectively mitigates the item cold-start problem, alleviates data sparsity, and improves the accuracy of rating prediction.
In one embodiment, the device further comprises:
a first preprocessing module, for removing vocabulary whose frequency in the user description documents is too high and vocabulary whose frequency is too low, before the user description documents of the item are represented as a word-vector matrix.
In one embodiment, the device further comprises:
a second preprocessing module, for removing items that have no user description documents, before the implicit factors of users are obtained through probabilistic matrix factorization from the users' rating information and the items' implicit factors.
In one embodiment, the device further comprises:
a Gaussian noise assignment module, for assigning Gaussian noise of varying magnitude to the item according to its number of ratings before the vector inner product of the item's and users' implicit factors is computed, where the fewer ratings an item has, the larger the assigned Gaussian noise.
Further, the present invention also provides a computer-readable medium on which a computer program is stored; when the computer program is executed by a processor, it implements the matrix factorization method based on convolutional attention of any of the above.
Further, the present invention also provides an electronic device comprising a memory, a processor, and a computer program stored in the memory and executable by the processor; when the processor executes the computer program, it implements the matrix factorization method based on convolutional attention of any of the above.
In order to better understand and implement the invention, it will now be described in detail with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a flow chart of the matrix factorization method based on convolutional attention in one embodiment;
Fig. 2 is a flow chart of the preprocessing of item description documents in one embodiment;
Fig. 3 is a schematic diagram of the matrix factorization model based on convolutional attention;
Fig. 4 is a schematic diagram of the structure of the convolutional attention neural network;
Fig. 5 is a schematic diagram of the optimization procedure of the matrix factorization model based on convolutional attention in one embodiment;
Fig. 6 is a flow chart of the matrix factorization method based on convolutional attention in one embodiment;
Fig. 7 is a schematic diagram of the structure of the matrix factorization device based on convolutional attention in one embodiment;
Fig. 8 is a schematic diagram of the structure of the electronic device in one embodiment.
Detailed description of the embodiments
Referring to Fig. 1, in one embodiment the matrix factorization method based on convolutional attention of the present invention includes the following steps:
Step S101: representing the user description documents of an item as a word-vector matrix.
Step S102: inputting the word-vector matrix into a convolutional attention neural network to obtain the implicit factors of the item.
An item is a commodity that a user buys or uses, including physical goods as well as goods such as films, TV series and books. A user description document is a review the user has published about the item, and the user's rating information is the score the user has given the item.
The word-vector matrix is obtained by mapping the item's description documents into a vector space through a word-embedding layer; the distances between vectors characterize the semantic relations between the words of the description documents.
The convolutional attention neural network includes an attention layer for extracting the local features of the user description documents. The implicit factors of the item are the relation matrix between items and latent classes in a latent semantic model.
Step S103: obtaining the implicit factors of users through probabilistic matrix factorization, according to the users' rating information for the item and the item's implicit factors.
The implicit factors of users are the relation matrix between user ratings and latent classes in the latent semantic model.
Step S104: computing the vector inner product of the item's implicit factors and the users' implicit factors to obtain the users' predicted ratings for the item, and building a predicted rating matrix from the predicted ratings.
The goal of probabilistic matrix factorization is to alternately update the item's implicit factors and the users' implicit factors from the existing factors, so as to predict the unknown values in the user-item rating matrix.
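The inner-product prediction step can be sketched as follows. This is a minimal NumPy illustration with arbitrary dimensions, not code from the patent: with user factors U and item factors V stored column-wise, the whole predicted rating matrix is Uᵀ V, and each entry is the inner product of one user's factor vector with one item's factor vector.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_users, n_items = 8, 5, 7          # latent dimension and counts (arbitrary)

U = rng.normal(size=(k, n_users))      # user implicit factors, one column per user
V = rng.normal(size=(k, n_items))      # item implicit factors, one column per item

R_hat = U.T @ V                        # predicted rating matrix, shape (n_users, n_items)

# a single predicted rating is the inner product of the matching factor vectors
assert np.isclose(R_hat[2, 3], U[:, 2] @ V[:, 3])
```

In the full model the unknown entries of the user-item rating matrix are read directly from R_hat.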
Step S105: optimizing the parameters of the convolutional attention neural network and the probabilistic matrix factorization with the error back-propagation algorithm, according to the loss function between the predicted rating matrix and the true rating matrix.
The true rating matrix can be obtained by processing the MovieLens rating data sets and the Amazon data set.
The matrix factorization method based on convolutional attention of the present invention extracts the implicit factors of items through a convolutional attention neural network, obtains the implicit factors of users from the users' rating information and the items' implicit factors, computes the vector inner products of the item and user implicit factors to obtain predicted ratings, and uses the back-propagation algorithm to reduce the error of the loss function between the predicted rating matrix and the true rating matrix, optimizing the convolutional attention neural network and the probabilistic matrix factorization. The method has good interpretability, effectively mitigates the item cold-start problem, alleviates data sparsity, and improves the accuracy of rating prediction.
Referring to Fig. 2, in one embodiment, before representing the user description documents of an item as a word-vector matrix, the method further includes the following steps:
Step S201: removing vocabulary whose frequency in the user description documents is too high.
Step S202: removing vocabulary whose frequency in the user description documents is too low.
In this embodiment, the term frequency-inverse document frequency (TF-IDF) of each word in the item's description documents is computed, and words whose TF-IDF is too high or too low are removed.
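The TF-IDF screening can be sketched on a toy corpus. The corpus, the vocabulary size K = 4, and the scoring rule below are illustrative assumptions (the embodiment described later uses a document-frequency cutoff of 0.5 and an 8000-word vocabulary); the sketch only demonstrates that uninformative words occurring in every document fall out of the vocabulary.

```python
import math
from collections import Counter

# toy corpus of user description documents (illustrative only)
docs = [
    "good movie great plot good actors",
    "boring movie weak plot",
    "great soundtrack great movie",
]
tokenized = [d.split() for d in docs]
n_docs = len(tokenized)

# document frequency: in how many documents each word appears
df = Counter(w for doc in tokenized for w in set(doc))

def tf_idf(word, doc):
    tf = doc.count(word) / len(doc)          # term frequency within one document
    idf = math.log(n_docs / df[word])        # inverse document frequency
    return tf * idf

# score each word by its highest TF-IDF over the corpus;
# words occurring in every document get idf = 0 and fall to the bottom
scores = {w: max(tf_idf(w, doc) for doc in tokenized if w in doc) for w in df}

K = 4  # vocabulary size (the patent's embodiment uses 8000)
vocab = sorted(scores, key=scores.get, reverse=True)[:K]
```

Here "movie" appears in all three documents, so its idf is zero and it is screened out, while distinctive words survive.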
In one embodiment, the method further includes the following step:
removing items that have no user description documents.
In one embodiment, to improve the robustness of the item implicit factors, the method further includes the following step before computing the vector inner product of the item's and users' implicit factors:
assigning Gaussian noise of varying magnitude to the item according to its number of ratings, where the fewer ratings an item has, the larger the assigned Gaussian noise.
Gaussian noise is a type of noise whose probability density function follows a Gaussian (i.e. normal) distribution.
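The patent does not give the exact relationship between rating count and noise magnitude, so the sketch below assumes an illustrative scale σ = σ₀ / (1 + n), which satisfies the stated property that items with fewer ratings receive larger noise.

```python
import numpy as np

def noise_scale(n_ratings, base_sigma=0.5):
    # illustrative choice: the standard deviation shrinks as ratings accumulate
    return base_sigma / (1.0 + n_ratings)

def noisy_item_factor(v, n_ratings, rng):
    # add zero-mean Gaussian noise sized by the item's rating count
    return v + rng.normal(0.0, noise_scale(n_ratings), size=v.shape)

rng = np.random.default_rng(42)
v = np.zeros(4)
cold_item = noisy_item_factor(v, n_ratings=0, rng=rng)    # large perturbation
warm_item = noisy_item_factor(v, n_ratings=100, rng=rng)  # small perturbation
```

Any monotonically decreasing scale would serve the same purpose; the 1/(1+n) form merely keeps the scale finite for unrated items.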
In a specific embodiment, referring to Figs. 3-6: Fig. 3 is a schematic diagram of the matrix factorization model based on convolutional attention (ACMF) used in the method of the present invention, and Fig. 4 is a schematic diagram of the structure of the convolutional attention neural network (ACNN). The ACMF model integrates the convolutional attention neural network into the framework of probabilistic matrix factorization, improving the accuracy of rating prediction. In the figures, R is the rating matrix, U the user implicit factors, V the item implicit factors, X the item description documents, W the weights and biases of the ACNN network, and σ² the variance of a variable.
Referring to Figs. 5 and 6: Fig. 5 is a schematic diagram of the optimization procedure of the ACMF model. When the root-mean-square error between the predicted rating matrix and the true rating matrix does not meet the set condition, training of the convolutional attention neural network (ACNN) and the probabilistic matrix factorization model (PMF) continues.
The matrix factorization method based on convolutional attention of this embodiment includes the following steps:
Step S601: preprocessing the user description documents of the items and representing them as word-vector matrices.
The preprocessing of the user description documents includes the following steps:
Step S6011: truncating each user description document to a length of 300 (for documents longer than 300 words, only the first 300 words are kept).
Step S6012: removing stop words from the user description documents.
Step S6013: computing the TF-IDF of each word in the user description documents.
Step S6014: removing words whose document frequency is higher than 0.5.
Step S6015: selecting the 8000 words with the highest TF-IDF to form the vocabulary.
Step S6016: deleting from the documents the words that do not appear in the vocabulary.
Through the word-embedding layer, a document containing T words is mapped into a d-dimensional word-vector latent space; the word-embedding matrix corresponding to the document is D ∈ R^(d×T), which may be expressed as D = (x_1, x_2, ..., x_T).
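The mapping from a document to the matrix D can be sketched as follows. The four-word vocabulary, the random embedding table, and the padding-token convention are assumptions for illustration; the patent's embodiment uses GloVe-initialised 200-dimensional vectors and documents truncated or padded to T = 300.

```python
import numpy as np

d, T = 200, 300                          # embedding dimension and document length (patent values)
vocab = {"<pad>": 0, "good": 1, "movie": 2, "plot": 3}   # toy vocabulary

rng = np.random.default_rng(1)
E = rng.normal(size=(len(vocab), d))     # word-embedding table (GloVe-initialised in the patent)

tokens = ["good", "movie", "plot"]       # a short preprocessed document
ids = [vocab[w] for w in tokens] + [vocab["<pad>"]] * (T - len(tokens))  # pad to length T
D = E[ids].T                             # each column x_t is one word vector: D in R^(d x T)
```

Stacking the looked-up vectors column-wise yields exactly the matrix D = (x_1, ..., x_T) fed to the network.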
Step S602: preprocessing the users' rating data for the items and removing items that have no user description documents.
For the ML-100k, ML-1m, ML-10m and Amazon rating data, items without description documents are removed. For the Amazon rating data, users with fewer than 6 ratings are additionally removed, yielding AIV-6. The resulting statistics are given in Table 1-1. Compared with ML-100k, ML-1m and ML-10m, the AIV-6 rating data is less dense.
Table 1-1 Statistics of the four data sets
Step S603: inputting the word-vector matrix into the convolutional attention neural network to obtain the implicit factors of the item.
The convolutional attention neural network (ACNN) first extracts text features through a local attention layer and a convolutional layer. The local attention module obtains attention scores over the text sequence through a sliding window, representing the weight of each centre word; the convolutional layer extracts the local features of the text; a pooling layer then reduces the dimensionality of the convolutional layer's output; and finally the implicit factors of the item are output.
The ACNN network parameters are set as follows:
1) word-vector initialization: word vectors are initialized with GloVe, with a dimension of 200;
2) the sliding-window length of the local attention is 5;
3) the convolutional layer uses 50 convolution kernels each of lengths 5 and 1;
4) the activation function of the convolutional layer is ReLU;
5) two Dropout layers are used to prevent overfitting of the ACNN network, with drop rates of 0.4 and 0.2.
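The forward pass just described can be sketched in NumPy: sliding-window attention scores over centre words, a ReLU convolution, max pooling over time, and a dense projection to the item factor. All weights are random stand-ins for learned parameters, the sizes are deliberately tiny, and the exact score/projection functions are assumptions; the sketch only mirrors the order of operations.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, win, n_filters, k = 16, 12, 5, 8, 4     # small illustrative sizes

D = rng.normal(size=(d, T))                   # word-vector matrix from the embedding layer
pad = win // 2
Dp = np.pad(D, ((0, 0), (pad, pad)))          # zero-pad so every word has a full window

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# local attention: one score per centre word, computed from its window
w_att = rng.normal(size=(d * win,))           # attention weights (learned in practice)
scores = softmax(np.array([w_att @ Dp[:, t:t + win].reshape(-1) for t in range(T)]))
D_att = D * scores                            # reweight each word column by its score

# convolution over the reweighted sequence, ReLU, then max pooling over time
W_conv = rng.normal(size=(n_filters, d * win))
Dap = np.pad(D_att, ((0, 0), (pad, pad)))
conv = np.array([[W_conv[f] @ Dap[:, t:t + win].reshape(-1) for t in range(T)]
                 for f in range(n_filters)])
conv = np.maximum(conv, 0.0)                  # ReLU activation
pooled = conv.max(axis=1)                     # one value per filter

# dense projection to the k-dimensional item implicit factor
W_out = rng.normal(size=(k, n_filters))
item_factor = np.tanh(W_out @ pooled)
```

A trained version of this pipeline stands in for acnn_W(X_j) in the factorization below; Dropout is omitted here since it only matters during training.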
Step S604: obtaining the implicit factors of users through probabilistic matrix factorization, according to the users' rating information for the items and the items' implicit factors.
Step S605: assigning Gaussian noise of varying magnitude to the items according to their number of ratings, where the fewer ratings an item has, the larger the assigned Gaussian noise.
The equation is as follows:
v_j = acnn_W(X_j) + ε_j,  with ε_j ~ N(0, σ_V² I),
so the implicit factor of an item follows the conditional distribution:
p(V | W, X, σ_V²) = ∏_j N(v_j | acnn_W(X_j), σ_V² I).
Step S606: computing the vector inner product of the item's implicit factors and the users' implicit factors to obtain the users' predicted ratings for the items, and building a predicted rating matrix from the predicted ratings.
The goal of matrix factorization is to find suitable user and item implicit factors U and V and then predict the unknown ratings through U^T V, where the predicted rating is r̂_ij = u_i^T v_j. The conditional distribution of the known ratings is:
p(R | U, V, σ²) = ∏_i ∏_j N(R_ij | u_i^T v_j, σ²)^(I_ij),
where I_ij equals 1 if user i has rated item j and 0 otherwise, and N(x | μ, σ²) denotes the probability density function of a Gaussian (normal) distribution with mean μ and variance σ².
For the latent user model, a zero-mean spherical Gaussian prior with variance σ_U² is used: p(U | σ_U²) = ∏_i N(u_i | 0, σ_U² I).
Step S607: optimizing the parameters of the convolutional attention neural network and the probabilistic matrix factorization with the error back-propagation algorithm, according to the loss function between the predicted rating matrix and the true rating matrix.
The loss function is the following equation:
L(U, V, W) = Σ_i Σ_j (I_ij / 2)(R_ij − u_i^T v_j)² + (λ_U / 2) Σ_i ||u_i||² + (λ_V / 2) Σ_j ||v_j − acnn_W(X_j)||².
For the optimization of the parameters U and V, the present invention uses coordinate descent (Coordinate Descent). Coordinate descent is a gradient-free optimization algorithm: in each iteration it optimizes the current point along one coordinate direction while holding the other coordinate directions fixed, so as to find a local minimum of the multivariate function, alternating among the different coordinate directions throughout the process.
u_i ← (V I_i V^T + λ_U I_K)^(-1) V R_i
v_j ← (U I_j U^T + λ_V I_K)^(-1) (U R_j + λ_V · acnn_W(X_j))
where I_i and I_j are the diagonal matrices of rating indicators for user i and item j, and I_K is the K×K identity matrix.
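The two closed-form updates can be exercised with a small NumPy sketch. The data is random, a fixed random matrix stands in for acnn_W(X_j), and the λ values are illustrative; one full sweep over users and items should lower the regularised squared-error objective.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_users, n_items = 4, 6, 5
lam_u, lam_v = 1.0, 10.0

R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)   # ratings 1..5
I = (rng.random((n_users, n_items)) < 0.6).astype(float)        # observed-rating mask
cnn_out = rng.normal(size=(k, n_items))   # stand-in for acnn_W(X_j), fixed during this step

U = rng.normal(size=(k, n_users))
V = rng.normal(size=(k, n_items))

def loss(U, V):
    err = (I * (R - U.T @ V) ** 2).sum() / 2
    return err + lam_u / 2 * (U ** 2).sum() + lam_v / 2 * ((V - cnn_out) ** 2).sum()

before = loss(U, V)
for i in range(n_users):        # u_i <- (V I_i V^T + lam_u I_k)^-1 V R_i
    Vi = V * I[i]               # zero out columns for unrated items
    U[:, i] = np.linalg.solve(Vi @ V.T + lam_u * np.eye(k), Vi @ R[i])
for j in range(n_items):        # v_j <- (U I_j U^T + lam_v I_k)^-1 (U R_j + lam_v * cnn_j)
    Uj = U * I[:, j]
    V[:, j] = np.linalg.solve(Uj @ U.T + lam_v * np.eye(k),
                              Uj @ R[:, j] + lam_v * cnn_out[:, j])
after = loss(U, V)
assert after < before           # each closed-form sweep lowers the objective
```

Each solve is the exact minimizer of the loss with the other factors held fixed, which is why the sweep cannot increase the objective.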
For the variable W, the loss function can be regarded as a quadratic function of W, and simplifies to:
L(W) = (λ_V / 2) Σ_j ||v_j − acnn_W(X_j)||² + const.
The present invention uses the back-propagation algorithm to optimize the variable W.
By optimizing the parameters U, V and W, the unknown rating of a user for an item can finally be predicted:
r̂_ij ≈ u_i^T v_j = u_i^T (acnn_W(X_j) + ε_j).
The matrix factorization method based on convolutional attention proposed by the present invention was evaluated on the MovieLens-100k, MovieLens-1m, MovieLens-10m and AIV-6 data sets, four user-item rating data sets of different sparsity. On these data sets the proposed ACMF algorithm achieves a lower root-mean-square error (RMSE) than other common algorithms, showing that the matrix factorization method based on convolutional attention of the present invention improves the accuracy of rating prediction.
The matrix factorization method based on convolutional attention of the present invention extracts the implicit factors of items through a convolutional attention neural network, obtains the implicit factors of users from the users' rating information and the items' implicit factors, computes the vector inner products of the item and user implicit factors to obtain predicted ratings, and uses the back-propagation algorithm to reduce the error of the loss function between the predicted rating matrix and the true rating matrix, optimizing the convolutional attention neural network and the probabilistic matrix factorization. The method has good interpretability, effectively mitigates the item cold-start problem, alleviates data sparsity, and improves the accuracy of rating prediction.
Referring to Fig. 7, in one embodiment, the matrix factorization device 700 based on convolutional attention of the present invention comprises:
a word-vector matrix module 701, for representing the user description documents of an item as a word-vector matrix;
an item implicit factor acquisition module 702, for inputting the word-vector matrix into a convolutional attention neural network to obtain the implicit factors of the item;
a user implicit factor acquisition module 703, for obtaining the implicit factors of users through probabilistic matrix factorization, according to the users' rating information for the item and the item's implicit factors;
a probabilistic matrix factorization module 704, for computing the vector inner product of the item's implicit factors and the users' implicit factors to obtain the users' predicted ratings for the item, and building a predicted rating matrix from the predicted ratings;
an optimization module 705, for optimizing the convolutional attention neural network and the probabilistic matrix factorization according to the loss function between the predicted rating matrix and the true rating matrix.
The matrix factorization device based on convolutional attention of the present invention extracts the implicit factors of items through a convolutional attention neural network, obtains the implicit factors of users from the users' rating information and the items' implicit factors, computes the vector inner products of the item and user implicit factors to obtain predicted ratings, and uses the back-propagation algorithm to reduce the error of the loss function between the predicted rating matrix and the true rating matrix, optimizing the convolutional attention neural network and the probabilistic matrix factorization. The device has good interpretability, effectively mitigates the item cold-start problem, alleviates data sparsity, and improves the accuracy of rating prediction.
In one embodiment, the device further comprises:
a first preprocessing module 706, for removing vocabulary whose frequency in the user description documents is too high and vocabulary whose frequency is too low, before the user description documents of the item are represented as a word-vector matrix.
In one embodiment, the device further comprises:
a second preprocessing module 707, for removing items that have no user description documents, before the implicit factors of users are obtained through probabilistic matrix factorization from the users' rating information and the items' implicit factors.
In one embodiment, the device further comprises:
a Gaussian noise assignment module 708, for assigning Gaussian noise of varying magnitude to the item according to its number of ratings before the vector inner product of the item's and users' implicit factors is computed, where the fewer ratings an item has, the larger the assigned Gaussian noise.
The present invention also provides a computer-readable medium storing a computer program which, when executed by a processor, implements the matrix factorization method based on convolutional attention of any of the above embodiments.
Referring to Fig. 8, in one embodiment, the electronic device 800 of the invention includes a memory 801, a processor 802, and a computer program stored in the memory 801 and executable by the processor 802; when the processor 802 executes the computer program, it implements the matrix factorization method based on convolutional attention of any of the above embodiments.
In this embodiment, the processor 802 may be one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field-programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components. The memory 801 may take the form of a computer program product implemented on one or more storage media containing program code (including but not limited to disk storage, CD-ROM, optical storage, and the like). Computer-readable storage media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules or other data. Examples of computer storage media include, but are not limited to: phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
It is apparent to those skilled in the art that for convenience and simplicity of description, the system of foregoing description, The specific work process of device and unit, can refer to corresponding processes in the foregoing method embodiment, and details are not described herein.
In several embodiments provided herein, it should be understood that disclosed system, device and method can be with It realizes by another way.For example, the apparatus embodiments described above are merely exemplary, for example, the unit It divides, only a kind of logical function partition, there may be another division manner in actual implementation, such as multiple units or components It can be combined or can be integrated into another system, or some features can be ignored or not executed.Another point, it is shown or The mutual coupling, direct-coupling or communication connection discussed can be through some interfaces, the indirect coupling of device or unit It closes or communicates to connect, can be electrical property, mechanical or other forms.
Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
The embodiments described above express only several implementations of the present invention, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the inventive concept, and these all fall within the protection scope of the present invention.

Claims (10)

1. A matrix factorization method based on convolutional attention, characterized by comprising the following steps:
representing a user description document of an item as a word vector matrix;
inputting the word vector matrix into a convolutional attention neural network to obtain an implicit factor of the item;
obtaining an implicit factor of a user through probabilistic matrix factorization according to the user's rating information on items and the implicit factors of the items;
computing a vector inner product of the implicit factor of the item and the implicit factor of the user to obtain the user's predicted rating of the item, and building a predicted rating matrix from the users' predicted ratings of the items;
optimizing parameters of the convolutional attention neural network and the probabilistic matrix factorization by an error back-propagation algorithm according to a loss function between the predicted rating matrix and the true rating matrix.
2. The matrix factorization method based on convolutional attention according to claim 1, characterized in that, before representing the user description document of an item as a word vector matrix, the method further comprises the following steps:
removing words of excessively high frequency from the user description document;
removing words of excessively low frequency from the user description document.
3. The matrix factorization method based on convolutional attention according to claim 1, characterized in that, before obtaining the implicit factor of a user through probabilistic matrix factorization according to the user's rating information on items and the implicit factors of the items, the method further comprises the following step:
removing items that have no user description document.
4. The matrix factorization method based on convolutional attention according to claim 1, characterized in that, before computing the vector inner product of the implicit factor of the item and the implicit factor of the user, the method further comprises the following step:
assigning Gaussian noise of different degrees to the items according to the number of ratings of each item, wherein the fewer an item's ratings, the larger the Gaussian noise assigned to it.
5. A matrix factorization device based on convolutional attention, characterized by comprising:
a word vector matrix module, configured to represent a user description document of an item as a word vector matrix;
an item implicit factor acquisition module, configured to input the word vector matrix into a convolutional attention neural network to obtain an implicit factor of the item;
a user implicit factor acquisition module, configured to obtain an implicit factor of a user through probabilistic matrix factorization according to the user's rating information on items and the implicit factors of the items;
a probabilistic matrix factorization module, configured to compute a vector inner product of the implicit factor of the item and the implicit factor of the user to obtain the user's predicted rating of the item, and to build a predicted rating matrix from the users' predicted ratings of the items;
an optimization module, configured to optimize parameters of the convolutional attention neural network and the probabilistic matrix factorization by an error back-propagation algorithm according to a loss function between the predicted rating matrix and the true rating matrix.
6. The matrix factorization device based on convolutional attention according to claim 5, characterized by further comprising:
a first preprocessing module, configured to remove words of excessively high frequency and words of excessively low frequency from the user description document before the user description document of an item is represented as a word vector matrix.
7. The matrix factorization device based on convolutional attention according to claim 5, characterized by further comprising:
a second preprocessing module, configured to remove items that have no user description document before the implicit factor of a user is obtained through probabilistic matrix factorization according to the user's rating information on items and the implicit factors of the items.
8. The matrix factorization device based on convolutional attention according to claim 5, characterized by further comprising:
a Gaussian noise assignment module, configured to assign Gaussian noise of different degrees to the items according to the number of ratings of each item before the vector inner product of the implicit factor of the item and the implicit factor of the user is computed, wherein the fewer an item's ratings, the larger the Gaussian noise assigned to it.
9. A computer-readable medium on which a computer program is stored, characterized in that:
when executed by a processor, the computer program implements the matrix factorization method based on convolutional attention according to any one of claims 1 to 4.
10. An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable by the processor, characterized in that:
when the processor executes the computer program, the matrix factorization method based on convolutional attention according to any one of claims 1 to 4 is implemented.
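The five method steps of claim 1 can be illustrated end to end. The sketch below is a toy NumPy rendition under assumed dimensions: a single convolutional layer with attention pooling stands in for the convolutional attention neural network (whose real architecture the claims do not detail), and a ridge-regularized closed-form solve stands in for the probabilistic matrix factorization update. All shapes, names, and hyperparameters are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions (not specified in the claims)
n_users, n_items, doc_len, emb_dim, k = 4, 3, 10, 8, 5

# Step 1: each item's user description document as a word vector matrix
docs = rng.normal(size=(n_items, doc_len, emb_dim))

# Step 2: convolution + attention pooling -> item implicit factor.
# Minimal stand-in for the "convolutional attention neural network".
W_conv = rng.normal(size=(3, emb_dim, k)) * 0.1   # window size 3
w_attn = rng.normal(size=k) * 0.1

def item_factor(doc):
    # valid convolution over word windows of width 3
    conv = np.array([np.tanh(sum(doc[t + j] @ W_conv[j] for j in range(3)))
                     for t in range(doc_len - 2)])           # (doc_len-2, k)
    scores = conv @ w_attn                                   # attention scores
    alpha = np.exp(scores - scores.max()); alpha /= alpha.sum()
    return alpha @ conv                                      # attention-weighted pool, (k,)

V = np.stack([item_factor(d) for d in docs])                 # item implicit factors

# Step 3: user implicit factors given ratings R and fixed item factors,
# via a ridge-regularized least-squares solve (a simplified PMF update)
R = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
lam = 0.1
U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))

# Step 4: predicted rating matrix from user-item inner products
R_hat = U @ V.T

# Step 5: the loss between predicted and true ratings would drive
# back-propagation through W_conv, w_attn, U, and V
loss = np.mean((R - R_hat) ** 2)
print(R_hat.shape)
```

In a full implementation, step 5 would repeat steps 2 to 4 inside a training loop, with gradients of the loss propagated back into both the network parameters and the factor matrices.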
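The vocabulary preprocessing of claim 2 (and the first preprocessing module of claim 6) drops words whose frequency is too high or too low. A minimal sketch is below; the document-frequency and count thresholds are illustrative assumptions, not values from the patent.

```python
from collections import Counter

def filter_vocabulary(docs, max_df=0.8, min_count=2):
    """Remove words appearing in more than max_df of documents (too
    frequent) or fewer than min_count times overall (too rare)."""
    n_docs = len(docs)
    df = Counter(w for doc in docs for w in set(doc))  # document frequency
    tf = Counter(w for doc in docs for w in doc)       # total frequency
    keep = {w for w in tf
            if df[w] / n_docs <= max_df and tf[w] >= min_count}
    return [[w for w in doc if w in keep] for doc in docs]

docs = [["the", "good", "movie"], ["the", "bad", "movie"],
        ["the", "fine", "plot"], ["the", "movie", "rocks"]]
print(filter_vocabulary(docs))
# → [['movie'], ['movie'], [], ['movie']]
```

Here "the" is removed for appearing in every document, while the words seen only once fall below the minimum count.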
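Claim 4 (and the Gaussian noise assignment module of claim 8) ties the magnitude of the Gaussian noise to how few ratings an item has, so sparsely rated cold-start items are perturbed more. One plausible reading, with an assumed 1/(1+count) schedule that is not taken from the patent, is:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_rating_count_noise(item_factors, rating_counts, base_sigma=0.1):
    """Perturb item implicit factors with Gaussian noise whose standard
    deviation shrinks as the item's number of ratings grows."""
    counts = np.asarray(rating_counts, dtype=float)
    sigma = base_sigma / (1.0 + counts)        # fewer ratings -> larger sigma
    noise = rng.normal(size=item_factors.shape) * sigma[:, None]
    return item_factors + noise

V = np.zeros((3, 5))                           # 3 items, 5 latent factors
counts = [0, 10, 1000]                         # cold, warm, popular items
V_noisy = add_rating_count_noise(V, counts)
print(np.abs(V_noisy).mean(axis=1))            # noise shrinks with rating count
```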
CN201811454614.1A 2018-11-30 2018-11-30 Matrix disassembling method, device and electronic equipment based on convolution attention Pending CN109754067A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811454614.1A CN109754067A (en) 2018-11-30 2018-11-30 Matrix disassembling method, device and electronic equipment based on convolution attention


Publications (1)

Publication Number Publication Date
CN109754067A true CN109754067A (en) 2019-05-14

Family

ID=66403448

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811454614.1A Pending CN109754067A (en) 2018-11-30 2018-11-30 Matrix disassembling method, device and electronic equipment based on convolution attention

Country Status (1)

Country Link
CN (1) CN109754067A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170132509A1 (en) * 2015-11-06 2017-05-11 Adobe Systems Incorporated Item recommendations via deep collaborative filtering
CN108090229A (en) * 2018-01-10 2018-05-29 Guangdong University of Technology Method and apparatus for determining a rating matrix based on convolutional neural networks
CN108287904A (en) * 2018-05-09 2018-07-17 Chongqing University of Posts and Telecommunications Document-context-aware recommendation method based on social convolutional matrix factorization
CN108536856A (en) * 2018-04-17 2018-09-14 Chongqing University of Posts and Telecommunications Hybrid collaborative filtering movie recommendation model based on bipartite network structure
CN108874914A (en) * 2018-05-29 2018-11-23 Jilin University Information recommendation method based on graph convolution and neural collaborative filtering


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SHANG Qi et al.: "ACMF: Rating Prediction Based on a Convolutional Attention Model", Journal of Chinese Information Processing *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110176012A (en) * 2019-05-28 2019-08-27 Tencent Technology (Shenzhen) Co., Ltd. Object segmentation method in image, pooling method, device and storage medium
CN110176012B (en) * 2019-05-28 2022-12-13 Tencent Technology (Shenzhen) Co., Ltd. Object segmentation method in image, pooling method, device and storage medium
CN112308173A (en) * 2020-12-28 2021-02-02 Ping An Technology (Shenzhen) Co., Ltd. Multi-target object evaluation method based on multi-evaluation factor fusion and related equipment thereof
CN112308173B (en) * 2020-12-28 2021-04-09 Ping An Technology (Shenzhen) Co., Ltd. Multi-target object evaluation method based on multi-evaluation factor fusion and related equipment thereof

Similar Documents

Publication Publication Date Title
CN109446430B (en) Product recommendation method and device, computer equipment and readable storage medium
CN110674407B (en) Hybrid recommendation method based on graph convolution neural network
CN105701191B (en) Pushed information click rate estimation method and device
US9875294B2 (en) Method and apparatus for classifying object based on social networking service, and storage medium
CN111931062A (en) Training method and related device of information recommendation model
CN109800853B Matrix factorization method and device fusing convolutional neural network and explicit feedback, and electronic device
Clinchant et al. A domain adaptation regularization for denoising autoencoders
CN110704739A (en) Resource recommendation method and device and computer storage medium
CN105630767B Text similarity comparison method and device
CN109558533B (en) Personalized content recommendation method and device based on multiple clustering
Claypo et al. Opinion mining for thai restaurant reviews using K-Means clustering and MRF feature selection
CN106776863A Method for determining text relevance, and method and device for pushing query results
CN104462489B Cross-modal retrieval method based on deep model
CN112396492A (en) Conversation recommendation method based on graph attention network and bidirectional long-short term memory network
CN111400615A (en) Resource recommendation method, device, equipment and storage medium
CN109754067A (en) Matrix disassembling method, device and electronic equipment based on convolution attention
CN114494747A (en) Model training method, image processing method, device, electronic device and medium
CN104035978A (en) Association discovering method and system
CN113159892A (en) Commodity recommendation method based on multi-mode commodity feature fusion
CN109033224A Risk text recognition method and device
Muñoz et al. Shears: Unstructured sparsity with neural low-rank adapter search
Godara et al. Support vector machine classifier with principal component analysis and k mean for sarcasm detection
CN109325511B (en) Method for improving feature selection
CN114842247B (en) Characteristic accumulation-based graph convolution network semi-supervised node classification method
CN112784046B (en) Text clustering method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190514