CN107918652A - Method for recommending movies based on social relations by utilizing multi-modal network learning - Google Patents

Method for recommending movies based on social relations by utilizing multi-modal network learning Download PDF

Info

Publication number
CN107918652A
CN107918652A CN201711129690.0A
Authority
CN
China
Prior art keywords
film
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711129690.0A
Other languages
Chinese (zh)
Other versions
CN107918652B (en)
Inventor
赵洲
孟令涛
林志杰
杨启凡
蔡登
何晓飞
庄越挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201711129690.0A priority Critical patent/CN107918652B/en
Publication of CN107918652A publication Critical patent/CN107918652A/en
Application granted granted Critical
Publication of CN107918652B publication Critical patent/CN107918652B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/953 Querying, e.g. by the use of web search engines
    • G06F16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/40 Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
    • G06F16/43 Querying
    • G06F16/435 Filtering based on additional data, e.g. user or group profiles

Abstract

The invention discloses a method for recommending movies based on social relations by utilizing multi-modal network learning. The method mainly comprises the following steps: 1) For a group of movies and users, build an SMR network containing their dependency relations; build sample paths over the SMR network; for the movie and user nodes on the sample paths, form the integrated movie representations and the user mapping table; then update them against a predefined loss function to obtain the final user representations and integrated movie representations. 2) From the obtained user representations and integrated movie representations, produce movie recommendations for each user. Compared with general movie recommendation solutions, the present invention extracts the multi-modal information of movies and forms a final valid representation for each user, so it can more accurately reflect the characteristics of users and movies and produce movie recommendations that better meet users' requirements. The present invention achieves better results on the movie recommendation problem than traditional methods.

Description

Method for recommending movies based on social relations by utilizing multi-modal network learning
Technical field
The present invention relates to movie recommendation generation, and more particularly to a method for recommending movies based on social relations by utilizing multi-modal network learning.
Background technology
With the development of the online film industry, movie recommendation systems based on social relations have become an important network service. The goal of such a service is to recommend relevant movies to each user based on that user's social relations.
Existing techniques build movie recommendation models for users mainly from user feedback and a few hand-selected movie features. These methods suffer from the lack of effective discriminative features for multi-modal movie content and from the sparsity of social-relation-based movie recommendation systems.
The present invention uses a random-walk-based learning method to build a multi-modal neural network, so as to learn representations of multi-modal movie content together with user recommendation models. The learned movie-content representations and user models are then used to perform movie recommendation based on social relations.
The method first constructs a heterogeneous social movie recommendation network (SMR network) and builds sample paths over it by random walk. For the movie nodes on each path, a VGG network and an LSTM network extract the movie's poster information and description information respectively, which are fused into an integrated movie representation; for the user nodes on each path, a user mapping matrix is built. Loss values are then computed from a predefined objective function, and the user and movie matrices are updated by SGD. After several rounds of updates, the final user mapping and movie mapping are obtained, and these are used to produce a ranked list of movie recommendations for each user.
Summary of the invention
The object of the invention is to solve the problems in the prior art. To overcome the lack of effective discriminative features for multi-modal movie content and the sparsity of social-relation-based movie recommendation systems, the present invention provides a method for recommending movies based on social relations by utilizing multi-modal network learning. The concrete technical scheme of the present invention is:
Solving the problem of movie recommendation based on social relations by utilizing multi-modal network learning comprises the following steps:
1. For a group of movies and users, construct an SMR network containing their dependency relations according to the follow relations between users and the users' ratings of the movies.
2. Using the random walk method, build sample paths over the SMR network and proceed as follows: for each movie node on a sampled path, form the movie description mapping from the movie's poster and description information; for each user node, generate the user description mapping. Compute loss values over the movie and user description mappings and perform gradient descent to obtain the final user and movie representations.
3. Use the learned user and movie representations to produce movie recommendations for the user.
The above steps may specifically be implemented as follows:
1. For the given set of movies and users, form the heterogeneous SMR network according to the correlations collected in the real data set.
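For illustration, a minimal Python sketch of this construction is given below; it assumes the data set has been preprocessed into a followee-follower edge list and a (user, movie, rating) list, and the function and variable names (`build_smr_network`, `follow_edges`, `ratings`) are hypothetical, not part of the invention:

```python
import networkx as nx

def build_smr_network(follow_edges, ratings):
    """Build the heterogeneous SMR network: user nodes connected by
    follow relations, and user-movie links carrying rating scores."""
    g = nx.DiGraph()
    # Directed user-user edge from followee to follower, matching the
    # sampling restriction used later in the random walk.
    for followee, follower in follow_edges:
        g.add_node(followee, kind="user")
        g.add_node(follower, kind="user")
        g.add_edge(followee, follower, kind="follow")
    # User-movie edges in both directions so walks can pass through movies.
    for user, movie, score in ratings:
        g.add_node(movie, kind="movie")
        g.add_edge(user, movie, kind="rating", score=score)
        g.add_edge(movie, user, kind="rating", score=score)
    return g
```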
2. For the constructed SMR network, use the random walk method to sample from each node and form the sampled paths. When sampling a path, the walk is restricted to move only from a followee to a follower, because the invention assumes that the movies a followee likes can influence the movie preferences of the follower, but not the other way around.
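A minimal sketch of this restricted walk over the graph built above follows; the walk length and walk count are illustrative parameters, not values specified by the invention:

```python
import random

def sample_paths(g, walk_len=10, walks_per_node=5):
    """Sample paths by random walk over the SMR network built above.
    User-to-user steps only follow the followee -> follower direction."""
    paths = []
    for start in g.nodes:
        for _ in range(walks_per_node):
            path, node = [start], start
            for _ in range(walk_len - 1):
                # The graph stores follow edges as followee -> follower,
                # so stepping along outgoing edges enforces the restriction.
                candidates = list(g.successors(node))
                if not candidates:
                    break
                node = random.choice(candidates)
                path.append(node)
            paths.append(path)
    return paths
```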
For each user node on a constructed path, randomly generate the user mapping matrix U. For each movie node on a path, use the convolutional neural network VGG-Net to generate the visual representation Y = {y_1, y_2, ..., y_n} of the movie's posters I = {i_1, i_2, ..., i_n}, where n is the number of posters and y_i is the VGG-Net output representation of the i-th poster i_i. Use an LSTM network to generate the text representation X = {x_1, x_2, ..., x_n} of the movie's description, where n is the number of description paragraphs, equal to the number of posters, and x_i is the LSTM output for the i-th paragraph d_i. For each paragraph, first split it into sentences. For each word of each sentence, obtain its word mapping d_it by the word embedding method. Feed the mapped word sequence (d_i1, d_i2, ..., d_ik) of each sentence into the LSTM network and take the output of the LSTM network as the sentence representation; pass all sentence representations of a paragraph through an extra mean-pooling layer to obtain the paragraph representation x_i.
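The following sketch illustrates this feature extraction in PyTorch; the embedding and hidden dimensions (300, 512), the vocabulary size, and the choice of VGG-16 as the VGG-Net variant are assumptions made for illustration, not choices fixed by the invention:

```python
import torch
import torch.nn as nn
import torchvision.models as models

vocab_size = 50000                            # hypothetical vocabulary size

vgg = models.vgg16(pretrained=True)
vgg.eval()
# Drop the final classification layer so each 224x224 poster yields a
# 4096-d representation y_i instead of class scores.
vgg.classifier = nn.Sequential(*list(vgg.classifier)[:-1])

embed = nn.Embedding(vocab_size, 300)         # word embedding mapping d_it
lstm = nn.LSTM(300, 512, batch_first=True)    # sentence encoder

def poster_repr(posters):
    """posters: (n, 3, 224, 224) tensor -> (n, 4096) representations Y."""
    with torch.no_grad():
        return vgg(posters)

def paragraph_repr(sentences):
    """sentences: list of LongTensors of word ids, one per sentence.
    Each sentence is encoded by the LSTM final hidden state; the
    paragraph representation x_i is the mean-pooling over sentences."""
    sent_vecs = []
    for ids in sentences:
        words = embed(ids).unsqueeze(0)        # (1, k, 300)
        _, (h, _) = lstm(words)
        sent_vecs.append(h[-1].squeeze(0))     # (512,)
    return torch.stack(sent_vecs).mean(dim=0)  # extra mean-pooling layer
```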
3. Map the poster image representation Y = {y_1, y_2, ..., y_n} and the description text representation X = {x_1, x_2, ..., x_n} of the movie into the same multi-modal feature fusion space according to the following formula, and add the two up to obtain the activation output of the multi-modal fusion layer:
z_j = g(W_d · x_j + α W_i · y_j)
where W_d and W_i are the dimension-transform weight matrices for the movie text description feature component x_j and the movie poster feature component y_j respectively; α is the relative weight with which the two components are added; + denotes element-wise addition of the movie text description feature component and the movie poster feature component. z_j is the final fused representation of the j-th item of movie content. g(·) is the element-wise scaled hyperbolic tangent function:

$$g(x) = 1.7159 \cdot \tanh\!\left(\tfrac{2}{3}x\right)$$
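A minimal PyTorch sketch of this fusion layer is given below; the dimension arguments are illustrative, and α is treated as a fixed hyperparameter as in the formula above:

```python
import torch
import torch.nn as nn

class MultiModalFusion(nn.Module):
    """Computes z_j = g(W_d x_j + alpha * W_i y_j) with the scaled
    hyperbolic tangent g(x) = 1.7159 * tanh(2x/3)."""
    def __init__(self, text_dim=512, img_dim=4096, fused_dim=256, alpha=1.0):
        super().__init__()
        self.w_d = nn.Linear(text_dim, fused_dim, bias=False)   # W_d
        self.w_i = nn.Linear(img_dim, fused_dim, bias=False)    # W_i
        self.alpha = alpha

    def forward(self, x_j, y_j):
        s = self.w_d(x_j) + self.alpha * self.w_i(y_j)  # element-wise sum
        return 1.7159 * torch.tanh((2.0 / 3.0) * s)     # activation g
```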
4. Having obtained the movie matrix V formed by the fused representations Z = (z_1, z_2, ..., z_n) of all movies and the user mapping matrix U formed by the mappings of all users, build the final training network with the following objective function:

$$l(h_i) = \begin{cases} \sum_{u^+, u^- \in W} l_v(h_i, u^+, u^-), & h_i \in V \\ \sum_{v^+, v^- \in W} l_u(h_i, v^+, v^-) + \sum_{u \in W - h_i} \|u - h_i\|^2, & h_i \in U \end{cases}$$

where h_i is the mapping vector of a movie in the movie matrix V or a user in the user mapping matrix U; the loss function l_v(·) measures how much different users relatively like a given movie, and the loss function l_u(·) measures how much a given user relatively likes different movies. The two functions are detailed below:
The concrete form of the loss function l_v(·) is:

$$l_v(h_i, u^+, u^-) = \max\left(0,\ m + f_{u^-}(h_i) - f_{u^+}(h_i)\right), \quad h_i \in V$$

where u^+ denotes a user with a higher preference for movie v_i, u^- denotes a user with a lower preference for movie v_i, and m is the margin in the objective function, 0 < m < 1.
The concrete form of the loss function l_u(·) is:

$$l_u(h_i, v^+, v^-) = \max\left(0,\ m + f_{h_i}(v^-) - f_{h_i}(v^+)\right), \quad h_i \in U$$

where user u_i prefers movie v^+ over movie v^-, and m is the margin in the objective function, 0 < m < 1.
The function f_{u_j}(·) in the above formulas denotes the ranking model of the j-th user over the integrated movie-content representations, and it satisfies the following inequality for any ordered triple (j, i, k) ∈ T = {(j, i, k)}:

$$f_{u_j}(z_i) > f_{u_j}(z_k)$$

where an ordered triple (j, i, k) ∈ T means that the j-th user prefers movie i to movie k, u_j is the mapping vector of user j, and z_k is the integrated representation vector of movie k.
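To make the two hinge losses concrete, here is a sketch assuming the ranking model f_u(z) is an inner product between the user vector and the movie representation; the invention constrains f only through the ordering inequality above, so the inner-product form is an illustrative assumption:

```python
import torch

def f(u, z):
    # Hypothetical ranking model: an inner product between a user
    # mapping vector and an integrated movie representation.
    return torch.dot(u, z)

def loss_v(h_i, u_pos, u_neg, m=0.5):
    """l_v: movie h_i should score higher under the user who likes it
    more (u_pos) than under the one who likes it less (u_neg)."""
    return torch.clamp(m + f(u_neg, h_i) - f(u_pos, h_i), min=0.0)

def loss_u(h_i, v_pos, v_neg, m=0.5):
    """l_u: user h_i should score the preferred movie v_pos above the
    less-preferred movie v_neg."""
    return torch.clamp(m + f(h_i, v_neg) - f(h_i, v_pos), min=0.0)
```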
5. Then, from the training loss values of the objective function built in step 4, form the objective function of the final overall training model as shown below:

$$\min_{\theta} L(\theta) = \sum_{W} \sum_{h_i} l(h_i) + \lambda \|\theta\|^2$$

where θ denotes all parameters of the whole multi-modal network model, and λ is the balance parameter between the training loss of the objective function and the regularization term.
6. For the final objective function in step 5, the present invention updates the parameters by stochastic gradient descent with the Adagrad learning-rate update rule, i.e. for a parameter θ the update at step t follows:

$$\theta_t \leftarrow \theta_{t-1} - \frac{\rho}{\sqrt{\sum_{i=1}^{t} g_i^2}}\, g_t$$

where ρ is the initial learning rate and g_t is the sub-gradient at step t.
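A sketch of this update rule on plain tensors follows; in practice torch.optim.Adagrad implements the same rule, and the small eps term is a standard numerical-stability addition not present in the formula above:

```python
import torch

def adagrad_step(theta, grad, accum, rho=0.01, eps=1e-8):
    """One Adagrad step: theta_t = theta_{t-1} - rho / sqrt(sum_i g_i^2) * g_t.
    `accum` carries the running sum of squared sub-gradients."""
    accum += grad ** 2
    theta -= rho / (torch.sqrt(accum) + eps) * grad
    return theta, accum
```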
7. After the parameters have been cyclically updated a preset number of times, the final user mapping matrix and the integrated movie mapping are formed. Using the formed user mapping matrix and integrated movie mapping, for a given user and the corresponding movies, the user's relative preference for each movie is judged; the movies are sorted accordingly, the top-ranked movies are recommended to the user, and the ranked movie recommendation list for the user is formed.
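A minimal sketch of this ranking step, under the same inner-product ranking assumption used above:

```python
import torch

def recommend(u_j, movie_reprs, movie_ids, top_k=10):
    """Score every candidate movie for user u_j and return the ids of
    the top_k movies, i.e. the user's recommendation list."""
    scores = movie_reprs @ u_j                     # (num_movies,)
    order = torch.argsort(scores, descending=True)
    return [movie_ids[i] for i in order[:top_k].tolist()]
```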
Brief description of the drawings
Fig. 1 is an overall schematic of the multi-modal network learned over the SMR network in the present invention. Fig. 2 is an example schematic of the SMR network used in the present invention.
Detailed description of the embodiments
The present invention is further elaborated and illustrated below with reference to the accompanying drawings and the detailed embodiment.
As shown in Fig. 1, the method of the present invention for recommending movies based on social relations by utilizing multi-modal network learning comprises the following steps:
1) For a group of movies and users, construct an SMR network containing their dependency relations according to the follow relations between users and the users' ratings of the movies;
2) For the SMR network obtained in step 1), build sample paths over the SMR network using the random walk method and proceed as follows: for each movie node on a sampled path, form the movie description mapping from the movie's poster and description information; for each user node, generate the user description mapping; compute loss values over the movie and user mappings and perform gradient descent to obtain the final user and movie representations;
3) Use the user and movie representations learned in step 2) to produce movie recommendations for the user.
Step 2) obtains the final user and movie representations by the multi-modal network parameter update method; its concrete steps are:
2.1) For the SMR network formed in step 1), use the random walk method to sample from each node and form the sampled paths; when sampling a path, the walk may only move from a followee to a follower;
2.2) Over the nodes sampled in step 2.1), first randomly generate the user mapping matrix U for the user nodes; for each movie node on a path, use the convolutional neural network VGG-Net to generate the visual representation Y = {y_1, y_2, ..., y_n} of the movie's posters I = {i_1, i_2, ..., i_n}, where n is the number of posters and y_i is the VGG-Net output representation of the i-th poster i_i; use an LSTM network to generate the text representation X = {x_1, x_2, ..., x_n} of the movie's description, where n is the number of description paragraphs, equal to the number of posters, and x_i is the LSTM output for the i-th paragraph d_i; for each paragraph, first split it into sentences; for each word of each sentence, obtain its word mapping d_it by the word embedding method; feed the mapped word sequence (d_i1, d_i2, ..., d_ik) of each sentence into the LSTM network and take the output of the LSTM network as the sentence representation; pass all sentence representations of a paragraph through an extra mean-pooling layer to obtain the paragraph representation x_i;
2.3) Map the visual representation Y = {y_1, y_2, ..., y_n} of the movie found in step 2.2) and the corresponding text representation X = {x_1, x_2, ..., x_n} of the movie description into the same multi-modal feature fusion space according to the following formula, and add the two up to obtain the activation output of the multi-modal fusion layer:
z_j = g(W_d · x_j + α W_i · y_j)
where W_d and W_i are the dimension-transform weight matrices for the movie text description feature component x_j and the movie poster feature component y_j respectively; α is the relative weight with which the two components are added; + denotes element-wise addition of the movie text description feature component and the movie poster feature component; z_j is the final fused representation of the j-th item of movie content; g(·) is the element-wise scaled hyperbolic tangent function:

$$g(x) = 1.7159 \cdot \tanh\!\left(\tfrac{2}{3}x\right)$$
2.4) For the movie matrix V formed by the fused representations Z = (z_1, z_2, ..., z_n) of all movies obtained in step 2.3) and the user mapping matrix U formed by the mappings of all users obtained in step 2.2), build the final training network with the following objective function:

$$l(h_i) = \begin{cases} \sum_{u^+, u^- \in W} l_v(h_i, u^+, u^-), & h_i \in V \\ \sum_{v^+, v^- \in W} l_u(h_i, v^+, v^-) + \sum_{u \in W - h_i} \|u - h_i\|^2, & h_i \in U \end{cases}$$

where h_i is the mapping vector of a movie in the movie matrix V or a user in the user mapping matrix U; the loss function l_v(·) measures how much different users relatively like a given movie, and the loss function l_u(·) measures how much a given user relatively likes different movies; the two functions are detailed below:
The concrete form of the loss function l_v(·) is:

$$l_v(h_i, u^+, u^-) = \max\left(0,\ m + f_{u^-}(h_i) - f_{u^+}(h_i)\right), \quad h_i \in V$$

where u^+ denotes a user with a higher preference for movie v_i, u^- denotes a user with a lower preference for movie v_i, and m is the margin in the objective function, 0 < m < 1;
The concrete form of the loss function l_u(·) is:

$$l_u(h_i, v^+, v^-) = \max\left(0,\ m + f_{h_i}(v^-) - f_{h_i}(v^+)\right), \quad h_i \in U$$

where user u_i prefers movie v^+ over movie v^-, and m is the margin in the objective function, 0 < m < 1;
The function f_{u_j}(·) in the above formulas denotes the ranking model of the j-th user over the integrated movie-content representations, and it satisfies the following inequality for any ordered triple (j, i, k) ∈ T = {(j, i, k)}:

$$f_{u_j}(z_i) > f_{u_j}(z_k)$$

where an ordered triple (j, i, k) ∈ T means that the j-th user prefers movie i to movie k, u_j is the mapping vector of user j, and z_k is the integrated representation vector of movie k;
2.5) From the training loss values of the objective function built in step 2.4), form the objective function of the final overall training model as shown below:

$$\min_{\theta} L(\theta) = \sum_{W} \sum_{h_i} l(h_i) + \lambda \|\theta\|^2$$

where θ denotes all parameters of the whole multi-modal network model, and λ is the balance parameter between the training loss of the objective function and the regularization term;
2.6) For the objective function in step 2.5), update the parameters by stochastic gradient descent with the Adagrad learning-rate update rule, i.e. for a parameter θ the update at step t follows:

$$\theta_t \leftarrow \theta_{t-1} - \frac{\rho}{\sqrt{\sum_{i=1}^{t} g_i^2}}\, g_t$$

where ρ is the initial learning rate and g_t is the sub-gradient at step t;
2.7) With the parameter update method of step 2.6), after the parameters have been cyclically updated a preset number of times, form the final user mapping matrix and the integrated movie mapping;
Step 3) uses the user and movie representations learned in step 2) to produce movie recommendations for the user; its concrete steps are:
For the final user mapping matrix and integrated movie mapping formed in step 2.7), use them to judge, for a given user and the corresponding movies, the user's relative preference for each movie; sort the movies accordingly, recommend the top-ranked movies to the user, and form the ranked movie recommendation list for the user.
The above method is applied in the following example to demonstrate the technical effect of the present invention; the concrete steps are as described in the embodiment and are not repeated here.
Embodiment
The present invention was experimentally verified on a data set obtained from the Douban website. The data set used by the invention contains the information of 4242 users in total, including their social relations and 99641 movie ranking entries recording each user's relative preference for different movies. The data set also contains the information of 59695 movies, including the movie posters and movie descriptions; the invention resizes each movie poster to 224*224.
In order to objectively evaluate the performance of the algorithm of the present invention, nDCG@10, MAP@10 and Precision@1 were used on the selected test sets to evaluate the effect of the invention, and results were obtained for test data sets amounting to 20%, 40%, 60% and 80% of the training data respectively. Following the steps described in the embodiment, the experimental results obtained are shown in Tables 1-3:
Table 1: Test results of the present invention under the nDCG@10 criterion for different test-set proportions
Table 2: Test results of the present invention under the MAP@10 criterion for different test-set proportions
Table 3: Test results of the present invention under the Precision@1 criterion for different test-set proportions

Claims (3)

  1. A method for recommending movies based on social relations by utilizing multi-modal network learning, characterized by comprising the following steps:
    1) For a group of movies and users, construct an SMR network containing their dependency relations according to the follow relations between users and the users' ratings of the movies;
    2) For the SMR network obtained in step 1), build sample paths over the SMR network using the random walk method and proceed as follows: for each movie node on a sampled path, form the movie description mapping from the movie's poster and description information; for each user node, generate the user description mapping; compute loss values over the movie and user description mappings and perform gradient descent to obtain the final user and movie representations;
    3) Use the user and movie representations learned in step 2) to produce movie recommendations for the user.
  2. The method for recommending movies based on social relations by utilizing multi-modal network learning according to claim 1, characterized in that the concrete steps of step 2) are:
    2.1) For the SMR network formed in step 1), use the random walk method to sample from each node and form the sampled paths; when sampling a path, the walk may only move from a followee to a follower;
    2.2) Over the nodes sampled in step 2.1), first randomly generate the user mapping matrix U for the user nodes; for each movie node on a path, use the convolutional neural network VGG-Net to generate the visual representation Y = {y_1, y_2, ..., y_n} of the movie's posters I = {i_1, i_2, ..., i_n}, where n is the number of posters and y_i is the VGG-Net output representation of the i-th poster i_i; use an LSTM network to generate the text representation X = {x_1, x_2, ..., x_n} of the movie's description, where n is the number of description paragraphs, equal to the number of posters, and x_i is the LSTM output for the i-th paragraph d_i; for each paragraph, first split it into sentences; for each word of each sentence, obtain its word mapping d_it by the word embedding method; feed the mapped word sequence (d_i1, d_i2, ..., d_ik) of each sentence into the LSTM network and take the output of the LSTM network as the sentence representation; pass all sentence representations of a paragraph through an extra mean-pooling layer to obtain the paragraph representation x_i;
    2.3) Map the visual representation Y = {y_1, y_2, ..., y_n} of the movie found in step 2.2) and the corresponding text representation X = {x_1, x_2, ..., x_n} of the movie description into the same multi-modal feature fusion space according to the following formula, and add the two up to obtain the activation output of the multi-modal fusion layer:
    z_j = g(W_d · x_j + α W_i · y_j)
    where W_d and W_i are the dimension-transform weight matrices for the movie text description feature component x_j and the movie poster feature component y_j respectively; α is the relative weight with which the two components are added; + denotes element-wise addition of the movie text description feature component and the movie poster feature component; z_j is the final fused representation of the j-th item of movie content; g(·) is the element-wise scaled hyperbolic tangent function given below:
    $$g(x) = 1.7159 \cdot \tanh\!\left(\tfrac{2}{3}x\right)$$
    2.4) For the movie matrix V formed by the fused representations Z = (z_1, z_2, ..., z_n) of all movies obtained in step 2.3) and the user mapping matrix U formed by the mappings of all users obtained in step 2.2), build the final training network with the following objective function:
    $$l(h_i) = \begin{cases} \sum_{u^+, u^- \in W} l_v(h_i, u^+, u^-), & h_i \in V \\ \sum_{v^+, v^- \in W} l_u(h_i, v^+, v^-) + \sum_{u \in W - h_i} \|u - h_i\|^2, & h_i \in U \end{cases}$$
    where h_i is the mapping vector of a movie in the movie matrix V or a user in the user mapping matrix U; the loss function l_v(·) measures how much different users relatively like a given movie, and the loss function l_u(·) measures how much a given user relatively likes different movies; the two functions are detailed below:
    The concrete form of the loss function l_v(·) is:
    $$l_v(h_i, u^+, u^-) = \max\left(0,\ m + f_{u^-}(h_i) - f_{u^+}(h_i)\right), \quad h_i \in V$$
    where u^+ denotes a user with a higher preference for movie v_i, u^- denotes a user with a lower preference for movie v_i, and m is the margin in the objective function, 0 < m < 1;
    The concrete form of the loss function l_u(·) is:
    $$l_u(h_i, v^+, v^-) = \max\left(0,\ m + f_{h_i}(v^-) - f_{h_i}(v^+)\right), \quad h_i \in U$$
    where user u_i prefers movie v^+ over movie v^-, and m is the margin in the objective function, 0 < m < 1;
    The function f_{u_j}(·) in the above formulas denotes the ranking model of the j-th user over the integrated movie-content representations, and it satisfies the following inequality for any ordered triple (j, i, k) ∈ T = {(j, i, k)}:

    $$f_{u_j}(z_i) > f_{u_j}(z_k)$$

    where an ordered triple (j, i, k) ∈ T means that the j-th user prefers movie i to movie k, u_j is the mapping vector of user j, and z_k is the integrated representation vector of movie k;
    2.5) From the training loss values of the objective function built in step 2.4), form the objective function of the final overall training model as shown below:
    $$\min_{\theta} L(\theta) = \sum_{W} \sum_{h_i} l(h_i) + \lambda \|\theta\|^2$$
    where θ denotes all parameters of the whole multi-modal network model, and λ is the balance parameter between the training loss of the objective function and the regularization term;
    2.6) For the objective function in step 2.5), update the parameters by stochastic gradient descent with the Adagrad learning-rate update rule, i.e. for a parameter θ the update at step t follows:
    $$\theta_t \leftarrow \theta_{t-1} - \frac{\rho}{\sqrt{\sum_{i=1}^{t} g_i^2}}\, g_t$$
    where ρ is the initial learning rate and g_t is the sub-gradient at step t;
    2.7) With the parameter update method of step 2.6), after the parameters have been cyclically updated a preset number of times, form the final user mapping matrix and the integrated movie mapping.
  3. The method for recommending movies based on social relations by utilizing multi-modal network learning according to claim 1, characterized in that step 3) is specifically:
    For the final user mapping matrix and integrated movie mapping formed in step 2.7), use the formed user mapping matrix and integrated movie mapping to judge, for a given user and the corresponding movies, the user's relative preference for each movie; sort the movies accordingly, recommend the top-ranked movies to the user, and form the ranked movie recommendation list for the user.
CN201711129690.0A 2017-11-15 2017-11-15 Method for recommending movies based on social relations by utilizing multi-modal network learning Active CN107918652B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711129690.0A CN107918652B (en) 2017-11-15 2017-11-15 Method for recommending movies based on social relations by utilizing multi-modal network learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711129690.0A CN107918652B (en) 2017-11-15 2017-11-15 Method for recommending movies based on social relations by utilizing multi-modal network learning

Publications (2)

Publication Number Publication Date
CN107918652A true CN107918652A (en) 2018-04-17
CN107918652B CN107918652B (en) 2020-10-02

Family

ID=61896355

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711129690.0A Active CN107918652B (en) 2017-11-15 2017-11-15 Method for recommending movies based on social relations by utilizing multi-modal network learning

Country Status (1)

Country Link
CN (1) CN107918652B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108628990A (en) * 2018-04-28 2018-10-09 京东方科技集团股份有限公司 Recommendation method, computer device and readable storage medium
CN109587527A (en) * 2018-11-09 2019-04-05 青岛聚看云科技有限公司 Method and device for personalized video recommendation
CN112101309A (en) * 2020-11-12 2020-12-18 北京道达天际科技有限公司 Ground object target identification method and device based on deep learning segmentation network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136309A (en) * 2011-11-21 2013-06-05 微软公司 Method for modeling social strength via kernel-based learning
CN103714130A (en) * 2013-12-12 2014-04-09 深圳先进技术研究院 Video recommendation system and method thereof
CN106169083A (en) * 2016-07-05 2016-11-30 广州市香港科大霍英东研究院 Movie recommendation method and system based on visual features
US9659248B1 (en) * 2016-01-19 2017-05-23 International Business Machines Corporation Machine learning and training a computer-implemented neural network to retrieve semantically equivalent questions using hybrid in-memory representations
CN106997387A (en) * 2017-03-28 2017-08-01 中国科学院自动化研究所 Multi-modal automatic summarization method based on text-image matching

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136309A (en) * 2011-11-21 2013-06-05 微软公司 Method for modeling social strength via kernel-based learning
CN103714130A (en) * 2013-12-12 2014-04-09 深圳先进技术研究院 Video recommendation system and method thereof
US9659248B1 (en) * 2016-01-19 2017-05-23 International Business Machines Corporation Machine learning and training a computer-implemented neural network to retrieve semantically equivalent questions using hybrid in-memory representations
CN106169083A (en) * 2016-07-05 2016-11-30 广州市香港科大霍英东研究院 Movie recommendation method and system based on visual features
CN106997387A (en) * 2017-03-28 2017-08-01 中国科学院自动化研究所 Multi-modal automatic summarization method based on text-image matching

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108628990A (en) * 2018-04-28 2018-10-09 京东方科技集团股份有限公司 Recommendation method, computer device and readable storage medium
CN109587527A (en) * 2018-11-09 2019-04-05 青岛聚看云科技有限公司 Method and device for personalized video recommendation
CN109587527B (en) * 2018-11-09 2021-06-04 青岛聚看云科技有限公司 Personalized video recommendation method and device
CN112101309A (en) * 2020-11-12 2020-12-18 北京道达天际科技有限公司 Ground object target identification method and device based on deep learning segmentation network

Also Published As

Publication number Publication date
CN107918652B (en) 2020-10-02

Similar Documents

Publication Publication Date Title
Chandrasekhar et al. Tractable and consistent random graph models
CN111428147A Social recommendation method using a heterogeneous graph convolutional network combining social and interest information
Bobadilla et al. Collaborative filtering adapted to recommender systems of e-learning
CN105260390B A group-oriented item recommendation method based on joint probability matrix factorization
CN103399858B Trust-based social collaborative filtering recommendation method
CN106503623B (en) Facial image age estimation method based on convolutional neural networks
Beynon DS/AHP method: A mathematical analysis, including an understanding of uncertainty
CN105335157B A requirement-class ranking method and system integrating subjective and objective evaluation
CN106779867A Context-aware support vector regression recommendation method and system
CN103077247B A method for building a friend propagation tree in social networks
Mele Does school desegregation promote diverse interactions? An equilibrium model of segregation within schools
CN107918652A Method for recommending movies based on social relations by utilizing multi-modal network learning
CN105913323A Pull request reviewer recommendation method for the GitHub open-source community
CN105760649B A trust measurement method for big data
Mi et al. Probabilistic graphical models for boosting cardinal and ordinal peer grading in MOOCs
CN109376857A A multi-modal deep network embedding method fusing structure and attribute information
Fumanal-Idocin et al. Community detection and social network analysis based on the Italian wars of the 15th century
Wang et al. Matrix representations of the inverse problem in the graph model for conflict resolution
CN111191081B (en) Developer recommendation method and device based on heterogeneous information network
US20020013676A1 (en) Techniques for objectively measuring discrepancies in human value systems and applications therefor
CN110334286A A personalized recommendation method based on trust relationships
CN108256678A A method for user relationship prediction using a two-layer attention network with a ranking metric
CN106022723A A personalized recommendation method for employment information
CN110334278A A web service recommendation method based on improved deep learning
CN104572915B A user event relatedness computation method enhanced by content context

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant