CN106169083A - Visual-feature-based movie recommendation method and system - Google Patents

Visual-feature-based movie recommendation method and system

Info

Publication number
CN106169083A
CN106169083A (application CN201610522342.9A)
Authority
CN
China
Prior art keywords
movie
user
feature
rating
visual feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610522342.9A
Other languages
Chinese (zh)
Other versions
CN106169083B (en)
Inventor
赵莉莉
吕仲琪
杨强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou HKUST Fok Ying Tung Research Institute
Original Assignee
Guangzhou HKUST Fok Ying Tung Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou HKUST Fok Ying Tung Research Institute
Priority to CN201610522342.9A
Publication of CN106169083A
Application granted
Publication of CN106169083B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a visual-feature-based movie recommendation method, comprising: according to the feature factor of the target user, the feature factor of an unrated movie, the similarity coefficients between the unrated movie and each of the remaining movies, the pre-extracted visual features of the unrated movie, and the weight of each feature in those visual features, using a pre-built movie prediction model to predict the target user's rating of the unrated movie, wherein the visual features include a color histogram, SIFT features, CNN features, and movie-genre features; and deciding, according to the predicted rating, whether to recommend the unrated movie to the target user. In addition, an embodiment of the invention also discloses a visual-feature-based movie recommendation system. Using the embodiments of the present invention can improve the accuracy of movie recommendation and improve the user experience.

Description

Visual-feature-based movie recommendation method and system
Technical field
The present invention relates to the field of computer technology, and in particular to a visual-feature-based movie recommendation method and system.
Background technology
Movie recommender systems are among the most popular forms of recommender system. A movie recommender works by analyzing a user's historical viewing behavior to predict which movies the user may like. When a user's historical behavior data are too sparse, however, recommendation quality degrades sharply; this phenomenon is known as the data sparsity problem. Among current recommendation algorithms, matrix factorization has proven to be one of the most effective methods. Depending on the data used, the matrix factorization techniques in recommendation algorithms divide into context-free matrix factorization and context-based matrix factorization.
Context-free matrix factorization can be described as follows: given two-dimensional user-movie data X, fill in the missing values of the matrix from the partially observed values. In a movie recommender system, each observed value X_uv in the matrix represents user u's preference for movie v. In a rating-based movie system, this preference is typically a number between 1 and 5, with a larger number indicating a stronger preference. The goal of matrix factorization is to decompose the user-movie rating matrix into the product of a user factor matrix and a movie factor matrix, X = U·V. Taking the user factor matrix as an example, U maps users into a low-dimensional feature space that captures latent preferences such as a user's taste in plot or genre. The movie recommendation model learns U and V iteratively and can then fill in the missing entries of X, i.e., predict ratings. The advantage of this approach is that the high-dimensional data X is decomposed into the product of two low-dimensional factors, which alleviates the sparsity of X to some degree. On very sparse data, however, the learned model still performs poorly at recommendation time. Context-based matrix factorization arose to make up for this defect.
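The context-free factorization just described can be sketched in a few lines. This is a generic stochastic-gradient implementation of X ≈ U·V fitted on the observed ratings only, not the patent's full model; the function name and all hyperparameters are illustrative.

```python
import numpy as np

def factorize(ratings, k=8, lr=0.05, reg=0.02, epochs=500, seed=0):
    """Context-free matrix factorization sketch: learn U (users x k) and
    V (movies x k) so that U @ V.T approximates the observed ratings.
    `ratings` is a list of (user, movie, score) triples."""
    rng = np.random.default_rng(seed)
    m = 1 + max(u for u, _, _ in ratings)
    n = 1 + max(v for _, v, _ in ratings)
    U = 0.1 * rng.standard_normal((m, k))
    V = 0.1 * rng.standard_normal((n, k))
    for _ in range(epochs):
        for u, v, x in ratings:
            err = x - U[u] @ V[v]                # residual on an observed entry
            U[u] += lr * (err * V[v] - reg * U[u])
            V[v] += lr * (err * U[u] - reg * V[v])
    return U, V
```

Predicting a missing entry X_uv is then just the inner product `U[u] @ V[v]`.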
In so-called context-based matrix factorization, the recommender draws on additional supporting data beyond the movie ratings themselves. These extra data are called context data. Early on, context data consisted mainly of basic attributes of users and movies, such as a user's gender, age, and city, or a movie's director, leading actors, and genre. Later, with the rise of social networks, movie recommendation algorithms also considered the influence of social factors on recommendation. The effect of social data can be illustrated as follows: if two users share a social link, they are likely to share some common ground in movie taste as well. Adding the constraint of social factors further improved the accuracy of movie prediction. Later still, with the development of Web 2.0, user-generated content appeared; tags and movie reviews both belong to this category. Such data carry a strong personal flavor, and because the main goal of movie recommendation is personalization, they became another factor considered in recommendation.
In practice, these data mainly compensate for the lack of user-side features so as to reduce the interference of the sparsity problem with recommendation. Besides the context data described above, non-rating user behaviors such as browsing, clicking, and tagging are also counted as context; these data are known as implicit feedback. In general, implicit-feedback behavior data are used to constrain the learning of the two factor matrices. The great advantage of context-based movie recommendation is that prediction is no longer determined by rating data alone but jointly by ratings and context. When rating data are missing, predictions can still be derived from contextual information.
In existing context-based matrix factorization methods, most models inject the context data into the model as a constraint on the learning of the two factor matrices in order to reduce the impact of sparsity. The underlying premise of this usage is that a user's rating behavior and context are highly correlated. For example, if two users share a common friend, the model assumes that they share common preferences in movie appreciation. In effect, this assumption adds a restriction to the learning of the factor matrices. Therefore, although existing context-based matrix factorization can improve recommendation effectiveness and performance to some degree, it does not guarantee an improvement in recommendation accuracy.
Summary of the invention
The embodiments of the present invention propose a visual-feature-based movie recommendation method and system, which can improve the accuracy of movie recommendation and improve the user experience.
A visual-feature-based movie recommendation method provided by an embodiment of the present invention includes:
obtaining the pre-trained feature factor of the target user;
obtaining the pre-trained feature factor of a movie that the target user has not rated, the similarity coefficients between the unrated movie and each of the remaining movies, and the weight of each feature in the visual features of the unrated movie; wherein the visual features include a color histogram, SIFT features, CNN features, and movie-genre features;
predicting the target user's rating of the unrated movie according to the feature factor of the target user, the feature factor of the unrated movie, the similarity coefficients between the unrated movie and each of the remaining movies, the pre-extracted visual features of the unrated movie, and the weight of each feature in those visual features, using a pre-built movie prediction model;
deciding, according to the predicted rating, whether to recommend the unrated movie to the target user.
Further, predicting the target user's rating of the unrated movie according to the feature factor of the target user, the feature factor of the unrated movie, the similarity coefficients between the unrated movie and each of the remaining movies, the pre-extracted visual features of the unrated movie, and the weight of each feature in those visual features, using the pre-built movie prediction model, specifically includes:
obtaining the visual-similarity feature of the unrated movie by calling a visual-similarity feature function according to the similarity coefficients between the unrated movie and each of the remaining movies;
predicting the target user's rating of the unrated movie according to the pre-obtained user bias of the target user, the pre-obtained baseline average rating, the feature factor of the target user, the feature factor of the unrated movie, the visual-similarity feature of the unrated movie, the pre-obtained visual features of the unrated movie, and the weight of each feature in those visual features, using the pre-built movie prediction model.
The visual-similarity feature function is:

η = |N(θ, v)|^(−1/2) · Σ_{s∈N(θ,v)} θ_sv · χ̃_s · Φ(v);

where η is the visual-similarity feature of the unrated movie v; N(θ, v) is the screening function that selects the movies with high similarity to the unrated movie v; θ_sv is the similarity coefficient between the unrated movie v and a high-similarity movie s; χ̃_s is the visual feature of the high-similarity movie s; and Φ(v) denotes the inner product with the vector formed by the visual features of the unrated movie v.
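As a rough sketch of the function above: if N(θ, v) is read as "keep the top-k movies by similarity coefficient" and the Φ(v) projection is dropped (both interpretive assumptions, since the patent does not fully specify them), η becomes a scaled, similarity-weighted combination of the neighbors' visual features.

```python
import numpy as np

def visual_similarity_feature(theta_v, X, top_k=3):
    """Hedged sketch of eta = |N|^(-1/2) * sum_{s in N} theta_sv * x_s.
    theta_v : similarity coefficients between the unrated movie v and the
              other movies (theta_v[s] pairs with row X[s])
    X       : visual feature matrix of the other movies, one row per movie
    N(theta, v) is read here as 'top_k largest similarity coefficients';
    the Phi(v) projection in the patent's formula is omitted (identity)."""
    order = np.argsort(theta_v)[::-1][:top_k]   # screening function N(theta, v)
    scale = len(order) ** -0.5                  # |N(theta, v)|^(-1/2)
    return scale * sum(theta_v[s] * X[s] for s in order)
```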
Preferably, the movie prediction model is:

x̂_uv = U_*u^T (V_*v + η) + W_*v^T ψ_v + b_u + μ;

where x̂_uv is the predicted rating of the unrated movie v by the target user u; U_*u is the feature factor of the target user u; V_*v is the feature factor of the unrated movie v; η is the visual-similarity feature of the unrated movie; W_*v is the vector formed by the weights of the features in the visual features of the unrated movie v; ψ_v is the visual feature vector of the unrated movie v; b_u is the user bias of the target user u; and μ is the baseline average rating.
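A direct transcription of this prediction formula, with every input passed as a plain vector or scalar (names are illustrative):

```python
import numpy as np

def predict_rating(U_u, V_v, eta, W_v, psi_v, b_u, mu):
    """Transcription of x_hat_uv = U_u^T (V_v + eta) + W_v^T psi_v + b_u + mu.
    Note the movie bias is the linear term W_v^T psi_v in the visual
    features rather than a single scalar."""
    return float(U_u @ (V_v + eta) + W_v @ psi_v + b_u + mu)
```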
Further, before obtaining the pre-trained feature factor of the target user, the method also includes:
obtaining from a rating database, for each of m users, the rating data of every movie that the user has rated among n movies; the m users include the target user, and the n movies include the unrated movie;
obtaining the images of the n movies from a movie-image database, and extracting the visual features of the images of each movie;
computing, for each user, the average rating over all movies that the user has rated among the n movies, to obtain the user bias of that user;
computing the average of all rating data given by the m users to the n movies, to obtain the baseline average rating;
training a pre-built feature analysis model according to each user's ratings of the movies they have rated, the visual features of each movie, the user bias of each user, and the baseline average rating, to obtain the feature factor of each user, the feature factor of each movie, the similarity coefficients between each movie and the remaining movies among the n movies, and the weight of each feature in the visual features of each movie.
The feature analysis model is:

min_{b_*, W_*, θ_*, U_*, V_*} Σ_{(i,j)} [ λ1·b_i² + λ2·‖W_*j‖² + λ3·‖U_*i‖² + λ4·‖V_*j‖² + λ5·θ_sj² + (x_ij − μ − b_i − W_*j^T ψ_j − U_*i^T (V_*j + η))² ], 1 ≤ i ≤ m, 1 ≤ j ≤ n;

where λ1, λ2, λ3, λ4, λ5 are preset coefficients; b_i is the user bias of user i; W_*j is the vector formed by the weights of the features in the visual features of movie j; U_*i is the feature factor of user i; V_*j is the feature factor of movie j; θ_sj is the similarity coefficient between movie j and a high-similarity movie s; x_ij is user i's rating of movie j; μ is the baseline average rating; ψ_j is the visual feature vector of movie j; and η is the visual-similarity feature of movie j.
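The objective above can be sketched as a plain loss function. The λ5·θ_sj² penalty is omitted here for brevity, on the assumption that θ enters only through the precomputed η; `ratings` holds the observed (i, j, x_ij) triples, and all names are illustrative.

```python
import numpy as np

def training_loss(ratings, mu, b, W, U, V, psi, eta, lams):
    """Sketch of the feature-analysis objective: squared prediction error
    on each observed rating plus L2 penalties weighted by the preset
    coefficients lambda_1..lambda_4 (the theta penalty is omitted -- an
    assumption of this sketch). psi[j] and eta[j] are the visual features
    and visual-similarity feature of movie j."""
    l1, l2, l3, l4 = lams
    loss = 0.0
    for i, j, x in ratings:
        pred = mu + b[i] + W[j] @ psi[j] + U[i] @ (V[j] + eta[j])
        loss += (x - pred) ** 2
        loss += l1 * b[i] ** 2 + l2 * (W[j] @ W[j]) \
              + l3 * (U[i] @ U[i]) + l4 * (V[j] @ V[j])
    return loss
```

Minimizing this loss over b, W, U, and V (e.g., by stochastic gradient descent, as in the factorization sketch earlier) yields the trained parameters the prediction step consumes.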
Further, before obtaining from the rating database each of the m users' ratings of the movies they have rated among the n movies, the method also includes:
collecting the viewing data of each user in real time; the viewing data include rating data and the images of the movies the user has rated;
updating the rating database with the rating data, and updating the movie-image database with the movie images;
obtaining data from the updated rating database and the updated movie-image database respectively, so as to retrain the feature analysis model.
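Of the visual features named by the method, the color histogram is the simplest to extract; a minimal sketch for an RGB image follows. SIFT and CNN features would require OpenCV or a pretrained network and are omitted; the bin count and normalization are illustrative choices.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel color histogram for an RGB image given as an (H, W, 3)
    uint8 array, normalized so each channel's histogram sums to 1, then
    concatenated into one feature vector of length 3 * bins."""
    feats = []
    for c in range(3):                                   # R, G, B channels
        hist, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())                  # normalize per channel
    return np.concatenate(feats)
```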
Correspondingly, an embodiment of the present invention also provides a visual-feature-based movie recommendation system, comprising:
a first data acquisition module, for obtaining the pre-trained feature factor of the target user;
a second data acquisition module, for obtaining the pre-trained feature factor of a movie that the target user has not rated, the similarity coefficients between the unrated movie and each of the remaining movies, and the weight of each feature in the visual features of the unrated movie; wherein the visual features include a color histogram, SIFT features, CNN features, and movie-genre features;
a prediction module, for predicting the target user's rating of the unrated movie according to the feature factor of the target user, the feature factor of the unrated movie, the similarity coefficients between the unrated movie and each of the remaining movies, the pre-extracted visual features of the unrated movie, and the weight of each feature in those visual features, using a pre-built movie prediction model; and
a recommendation module, for deciding, according to the predicted rating, whether to recommend the unrated movie to the target user.
Further, the prediction module specifically includes:
a visual-similarity feature acquisition unit, for calling the visual-similarity feature function according to the similarity coefficients between the unrated movie and each of the remaining movies, to obtain the visual-similarity feature of the unrated movie; and
a prediction unit, for predicting the target user's rating of the unrated movie according to the pre-obtained user bias of the target user, the pre-obtained baseline average rating, the feature factor of the target user, the feature factor of the unrated movie, the visual-similarity feature of the unrated movie, the pre-obtained visual features of the unrated movie, and the weight of each feature in those visual features, using the pre-built movie prediction model.
The visual-similarity feature function is:

η = |N(θ, v)|^(−1/2) · Σ_{s∈N(θ,v)} θ_sv · χ̃_s · Φ(v);

where η is the visual-similarity feature of the unrated movie v; N(θ, v) is the screening function that selects the movies with high similarity to the unrated movie v; θ_sv is the similarity coefficient between the unrated movie v and a high-similarity movie s; χ̃_s is the visual feature of the high-similarity movie s; and Φ(v) denotes the inner product with the vector formed by the visual features of the unrated movie v.
Preferably, the movie prediction model is:

x̂_uv = U_*u^T (V_*v + η) + W_*v^T ψ_v + b_u + μ;

where x̂_uv is the predicted rating of the unrated movie v by the target user u; U_*u is the feature factor of the target user u; V_*v is the feature factor of the unrated movie v; η is the visual-similarity feature of the unrated movie; W_*v is the vector formed by the weights of the features in the visual features of the unrated movie v; ψ_v is the visual feature vector of the unrated movie v; b_u is the user bias of the target user u; and μ is the baseline average rating.
Further, the visual-feature-based movie recommendation system also includes:
a rating data acquisition module, for obtaining from the rating database, for each of m users, the rating data of every movie that the user has rated among n movies; the m users include the target user, and the n movies include the unrated movie;
a visual feature extraction module, for obtaining the images of the n movies from the movie-image database, and extracting the visual features of the images of each movie;
a user bias acquisition module, for computing, for each user, the average rating over all movies that the user has rated among the n movies, to obtain the user bias of that user;
a baseline average rating acquisition module, for computing the average of all rating data given by the m users to the n movies, to obtain the baseline average rating; and
a model training module, for training the pre-built feature analysis model according to each user's ratings of the movies they have rated, the visual features of each movie, the user bias of each user, and the baseline average rating, to obtain the feature factor of each user, the feature factor of each movie, the similarity coefficients between each movie and the remaining movies among the n movies, and the weight of each feature in the visual features of each movie.
The feature analysis model is:

min_{b_*, W_*, θ_*, U_*, V_*} Σ_{(i,j)} [ λ1·b_i² + λ2·‖W_*j‖² + λ3·‖U_*i‖² + λ4·‖V_*j‖² + λ5·θ_sj² + (x_ij − μ − b_i − W_*j^T ψ_j − U_*i^T (V_*j + η))² ], 1 ≤ i ≤ m, 1 ≤ j ≤ n;

where λ1, λ2, λ3, λ4, λ5 are preset coefficients; b_i is the user bias of user i; W_*j is the vector formed by the weights of the features in the visual features of movie j; U_*i is the feature factor of user i; V_*j is the feature factor of movie j; θ_sj is the similarity coefficient between movie j and a high-similarity movie s; x_ij is user i's rating of movie j; μ is the baseline average rating; ψ_j is the visual feature vector of movie j; and η is the visual-similarity feature of movie j.
Further, the visual-feature-based movie recommendation system also includes:
a collection module, for collecting the viewing data of each user in real time; the viewing data include rating data and the images of the movies the user has rated;
a data update module, for updating the rating database with the rating data, and updating the movie-image database with the movie images; and
a training module, for obtaining data from the updated rating database and the updated movie-image database respectively, so as to retrain the feature analysis model.
Implementing the embodiments of the present invention has the following beneficial effects:
When predicting a user's rating of a movie the user has not watched, i.e., an unrated movie, the visual-feature-based movie recommendation method and system provided by the embodiments of the present invention introduce the visual features of the unrated movie on top of the feature factor of the user and the feature factor of the unrated movie, making the predicted rating more accurate, thereby improving the accuracy of movie recommendation and the user experience. Incorporating visual features into the learning and training of the feature analysis model compensates for the insufficiency of the rating data, which would otherwise make the learned feature factors of users and movies inaccurate; this improves the accuracy of the learned data and further improves the accuracy of movie recommendation.
Brief description of the drawings
Fig. 1 is a schematic flowchart of an embodiment of the visual-feature-based movie recommendation method provided by the present invention;
Fig. 2 is a schematic structural diagram of an embodiment of the visual-feature-based movie recommendation system provided by the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Referring to Fig. 1, a schematic flowchart of an embodiment of the visual-feature-based movie recommendation method provided by the present invention, the method includes steps S1 to S4, as follows:
S1. Obtain the pre-trained feature factor of the target user.
S2. Obtain the pre-trained feature factor of a movie that the target user has not rated, the similarity coefficients between the unrated movie and each of the remaining movies, and the weight of each feature in the visual features of the unrated movie; wherein the visual features include a color histogram, SIFT features, CNN features, and movie-genre features.
S3. According to the feature factor of the target user, the feature factor of the unrated movie, the similarity coefficients between the unrated movie and each of the remaining movies, the pre-extracted visual features of the unrated movie, and the weight of each feature in those visual features, use the pre-built movie prediction model to predict the target user's rating of the unrated movie.
S4. Decide, according to the predicted rating, whether to recommend the unrated movie to the target user.
It should be noted that, before recommending movies to users, a model is first trained for the m users and n movies to learn the factor matrix U of the m users, the factor matrix V of the n movies, the similarity coefficient matrix θ of the n movies, and the visual feature weight matrix W of the n movies. The factor matrix U contains the feature factor of each user; the factor matrix V contains the feature factor of each movie; the similarity coefficient matrix θ contains the similarity coefficients between each movie and the remaining movies; and the visual feature weight matrix W contains the weight of each feature in the visual features of each movie.
When recommending to each of the m users a movie that the user has not watched among the n movies, i.e., an unrated movie, the feature factor of the target user is first obtained from the factor matrix U according to the user's identifier, and the feature factor of the unrated movie is obtained from the factor matrix V according to the movie's identifier; the similarity coefficients between the unrated movie and the remaining movies are obtained from the similarity coefficient matrix θ; the weight of each feature in the visual features of the unrated movie is obtained from the visual feature weight matrix W; and the pre-extracted visual features of the unrated movie are obtained. Then, according to the obtained feature factor of the target user, the feature factor of the unrated movie, the similarity coefficients between the unrated movie and the remaining movies, and the weights of the features in the visual features of the unrated movie, the pre-built movie prediction model is used to predict the target user's rating of the unrated movie. If the predicted rating is greater than or equal to a preset rating threshold, the unrated movie is recommended to the target user; if the predicted rating is below the threshold, the movie is not recommended. In the same way, the method predicts the target user's ratings of the remaining unrated movies among the n movies, so as to recommend unwatched movies to that user, and predicts each of the m users' ratings of their unrated movies, so as to recommend unwatched movies to each user. By introducing the visual features of the unrated movie on top of the feature factor of the user and the feature factor of the unrated movie, the predicted rating becomes more accurate, which improves the accuracy of movie recommendation and the user experience.
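The threshold rule described above can be sketched as a small helper; the predictor callable and the threshold value are illustrative.

```python
def recommend_unrated(user, unrated_movies, predict, threshold=3.5):
    """Predict a rating for each unrated movie and keep those at or above
    a preset threshold. `predict` is any callable (user, movie) -> score,
    e.g. the trained prediction model."""
    return [v for v in unrated_movies if predict(user, v) >= threshold]
```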
Further, predicting the target user's rating of the unrated movie according to the feature factor of the target user, the feature factor of the unrated movie, the similarity coefficients between the unrated movie and each of the remaining movies, the pre-extracted visual features of the unrated movie, and the weight of each feature in those visual features, using the pre-built movie prediction model, specifically includes:
obtaining the visual-similarity feature of the unrated movie by calling a visual-similarity feature function according to the similarity coefficients between the unrated movie and each of the remaining movies;
predicting the target user's rating of the unrated movie according to the pre-obtained user bias of the target user, the pre-obtained baseline average rating, the feature factor of the target user, the feature factor of the unrated movie, the visual-similarity feature of the unrated movie, the pre-obtained visual features of the unrated movie, and the weight of each feature in those visual features, using the pre-built movie prediction model.
The visual-similarity feature function is:

η = |N(θ, v)|^(−1/2) · Σ_{s∈N(θ,v)} θ_sv · χ̃_s · Φ(v);

where η is the visual-similarity feature of the unrated movie v; N(θ, v) is the screening function that selects the movies with high similarity to the unrated movie v; θ_sv is the similarity coefficient between the unrated movie v and a high-similarity movie s; χ̃_s is the visual feature of the high-similarity movie s; and Φ(v) denotes the inner product with the vector formed by the visual features of the unrated movie v.
Preferably, the movie prediction model is:

x̂_uv = U_*u^T (V_*v + η) + W_*v^T ψ_v + b_u + μ;

where x̂_uv is the predicted rating of the unrated movie v by the target user u; U_*u is the feature factor of the target user u; V_*v is the feature factor of the unrated movie v; η is the visual-similarity feature of the unrated movie; W_*v is the vector formed by the weights of the features in the visual features of the unrated movie v; ψ_v is the visual feature vector of the unrated movie v; b_u is the user bias of the target user u; and μ is the baseline average rating.
It should be noted that, after the similarity coefficients between the unrated film and each of the remaining films are obtained from the similarity coefficient matrix θ, the visual feature function is called to compute the visual similarity feature of the unrated film. The visual feature function is proportional to the similarity between the unrated film and the remaining films, and that similarity is obtained by comparing the visual features of the unrated film with the visual features of the remaining films. In the concrete computation, the film screening function N(θ, v) in the visual feature function first screens out, according to the similarity coefficients between the unrated film v and each of the remaining films among the n films, at least one film s with high similarity to the unrated film v; the visual similarity feature η of the unrated film v is then computed from the visual features of the at least one film s. Next, according to this visual similarity feature η, the acquired feature factor U_{*u} of the user u to be recommended, the feature factor V_{*v} of the unrated film v, the visual features ψ_v of the unrated film v and the weights W_{*v} of each feature in the visual features, the pre-established film prediction model is used to predict the predicted rating of the unrated film v by the user u to be recommended. Note that in the film prediction model the film bias term, traditionally set to a single value, is instead a linear function of the visual features ψ_v, giving the bias a higher expressive ability.
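The two-step computation just described — screen out high-similarity films, form η, then score — can be sketched as follows. This is a minimal illustration with random arrays standing in for trained factors; all names and dimensions are hypothetical, and since the text leaves the exact shape of η open, it is treated here as a scalar that is broadcast-added to the film feature factor:

```python
import numpy as np

rng = np.random.default_rng(0)
n_films, d, k = 6, 16, 8           # films, visual-feature dim, factor dim (hypothetical)

psi   = rng.random((n_films, d))   # visual features psi_v, one row per film
theta = rng.random((n_films, n_films))  # similarity coefficient matrix
U_u   = rng.random(k)              # feature factor of the user to be recommended
V     = rng.random((n_films, k))   # film feature factors
W     = rng.random((n_films, d))   # per-film visual-feature weights
b_u, mu = 0.2, 3.5                 # user bias data and baseline average score

def visual_similar_feature(v, top=3):
    """eta = |N(theta, v)|^(-1/2) * sum_{s in N} theta_sv * <chi_s, Phi(v)>,
    where N screens out the `top` films most similar to the unrated film v."""
    sims = theta[:, v].copy()
    sims[v] = -np.inf                          # a film is not its own neighbour
    N = np.argsort(sims)[-top:]                # highest-similarity films
    return len(N) ** -0.5 * sum(theta[s, v] * (psi[s] @ psi[v]) for s in N)

def predict(v):
    """x_hat_uv = U_u^T (V_v + eta) + W_v^T psi_v + b_u + mu."""
    eta = visual_similar_feature(v)
    return U_u @ (V[v] + eta) + W[v] @ psi[v] + b_u + mu

score = predict(2)                 # predicted rating for hypothetical film 2
```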
Further, before acquiring the pre-trained feature factor of the user to be recommended, the method also includes:
Acquiring, from a rating database, the rating data of each user among m users for every film that the user has rated among n films; the m users include the user to be recommended, and the n films include the unrated film;
Acquiring the film images of the n films from a film image database, and respectively extracting the visual features of the film image of every film;
Calculating the average score of each user over all films that the user has rated among the n films, to obtain the user bias data of each user;
Calculating the average of all rating data of the m users for the n films, to obtain the baseline average score;
Training a pre-established feature analysis model according to the rating data of each user for the films that the user has rated, the visual features of every film, the user bias data of each user and the baseline average score, to obtain the feature factor of each user, the feature factor of every film, the similarity coefficients between every film and each of the remaining films among the n films, and the weight of each feature in the visual features of every film;
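A toy illustration of the two averaging steps above (user bias data and baseline average score) on a hypothetical 4-user × 4-film rating matrix with 0 marking unrated entries. Whether the user bias data is the raw per-user average or its offset from the baseline is not pinned down by the text; the offset form, which matches the separate b_u and μ terms in the prediction model, is assumed here:

```python
import numpy as np

# hypothetical rating matrix X: rows are users, columns are films, 0 = unrated
X = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])
rated = X > 0

mu = X[rated].mean()                            # baseline average score over all ratings
# per-user average over only the films that user actually rated
user_avg = np.array([row[m].mean() for row, m in zip(X, rated)])
b = user_avg - mu                               # user bias data (offset form, assumed)
```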
The feature analysis model is:
min_{b_*, W_*, θ_*, U_*, V_*} Σ_{(i,j)} [ λ_1 b_i² + λ_2 W_{*j}² + λ_3 ||U_{*i}||² + λ_4 ||V_{*j}||² + λ_5 θ_{sj}² + (x_{ij} − μ − b_i − W_{*j}^T ψ_j − U_{*i}^T (V_{*j} + η))² ],  1 ≤ i ≤ m; 1 ≤ j ≤ n;
Wherein, λ_1, λ_2, λ_3, λ_4 and λ_5 are preset coefficients; b_i is the user bias data of user i; W_{*j} is the vector formed by the weights of each feature in the visual features of film j; U_{*i} is the feature factor of user i; V_{*j} is the feature factor of film j; θ_{sj} is the similarity coefficient between film j and a film s with high similarity to it; x_{ij} is the rating data of user i for film j; μ is the baseline average score; ψ_j is the visual features of film j; and η is the visual similarity feature.
It should be noted that, before prediction, the required data must first be trained. A rating matrix X of m users over n films is acquired from the rating database; it contains the rating data of each user i for every film that the user has rated, and the entries for films a user has not rated are null. The film images of the n films, such as film posters and stills, are acquired from the film image database, and the visual features of the film image of every film j among the n films are extracted respectively. The visual features include a color histogram, SIFT features, CNN features and a film genre feature.
Color histogram: for picture-type data, color is the user's first sensory impression, and a film can use color extensively to convey the emotion or mood it wants to pass to the audience; for example, the film "Minions" uses large areas of yellow to express joy. The present example therefore adopts a standard color histogram feature in RGB color space.
SIFT features: SIFT (Scale-Invariant Feature Transform) features possess good stability, so the same object shot from different angles in the same scene can still be effectively recognised. Such data makes up a large proportion of film stills, so this feature works well for effectively clustering still images.
CNN features: deep convolutional neural network features. Because deep convolutional networks possess strong abstract feature expressiveness, they have surpassed human-level performance in picture classification tasks. Since the network has multiple layers, abstract features of different degrees can be extracted from different layers: for example, in an 8-layer convolutional network whose first layer is the data input layer, the second layer may yield pixel-level features, while the fifth layer may yield features such as corners in the picture. Benefiting from developments in the vision field, existing tools can extract multi-layer abstract features from film posters and stills to assist the film recommendation task; the present example adopts an 8-layer convolutional network and extracts the fifth-layer and sixth-layer features respectively.
Film genre feature: used to identify the category to which a film poster or still belongs. This example adopts the category system of ImageNet and uses the same network used to extract the CNN features to predict the category of the poster or still.
Then, the baseline average score μ, the user bias data b_i of each user i, the visual features ψ_j of every film j, the similarity coefficients θ_{sj} between every film j and its high-similarity films, and every rating datum x_{ij} are substituted into the feature analysis model, and through iterative learning the factor matrix U of the m users, the factor matrix V of the n films, the similarity coefficient matrix θ of the n films and the visual feature weight matrix W of the n films are trained for subsequent prediction. Including the visual features in the learning of the factor matrices effectively compensates for the sparsity of the rating matrix X and the resulting inaccuracy of the learned factor matrices, thereby improving the accuracy of factor matrix learning and the accuracy of film recommendation.
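The text says only that the factor matrices are obtained through iterative learning. One plausible realisation — not the patent's prescribed procedure — is stochastic gradient descent over the observed entries of X, sketched below with hypothetical sizes, a single regularisation weight standing in for λ_1…λ_5, a hand-picked step size, and η and the similarity coefficients θ held fixed for brevity (the model also trains θ):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d, k = 4, 5, 6, 3                 # users, films, feature dim, factor dim
lam, lr = 0.02, 0.01                    # shared regulariser and step size (assumptions)

X   = rng.integers(0, 6, (m, n)).astype(float)  # toy ratings, 0 = unrated
psi = rng.random((n, d))                # visual features
eta = rng.random(n)                     # visual similarity feature per film, held fixed
mu  = X[X > 0].mean()
b   = np.zeros(m)
U   = 0.1 * rng.random((m, k))
V   = 0.1 * rng.random((n, k))
W   = 0.1 * rng.random((n, d))

def squared_error():
    """Data-fit part of the objective, summed over observed ratings."""
    return sum((X[i, j] - (U[i] @ (V[j] + eta[j]) + W[j] @ psi[j] + b[i] + mu)) ** 2
               for i, j in zip(*np.nonzero(X)))

def sgd_epoch():
    for i, j in zip(*np.nonzero(X)):    # iterate over observed ratings only
        e = X[i, j] - (U[i] @ (V[j] + eta[j]) + W[j] @ psi[j] + b[i] + mu)
        u_old = U[i].copy()             # use pre-update U[i] in V's gradient
        b[i] += lr * (e - lam * b[i])
        U[i] += lr * (e * (V[j] + eta[j]) - lam * U[i])
        V[j] += lr * (e * u_old - lam * V[j])
        W[j] += lr * (e * psi[j] - lam * W[j])

before = squared_error()
for _ in range(50):
    sgd_epoch()
after = squared_error()                 # data-fit error shrinks as the factors train
```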
Further, before acquiring, from the rating database, the rating data of each user among the m users for every film that the user has rated among the n films, the method also includes:
Collecting the viewing data of each user in real time; the viewing data include rating data and the film images of the films the user has rated;
Updating the rating database according to the rating data, and updating the film image database according to the film images;
Acquiring data respectively from the updated rating database and the updated film image database, so as to retrain the feature analysis model.
It should be noted that the movie recommendation method based on visual features provided by the embodiment of the present invention is described from the server side. The server collects users' viewing data from clients in real time and, according to the collected viewing data, updates the corresponding rating database and film image database in real time: the users' rating data for watched films are written into the rating database, and the posters and stills of the films the users watched are written into the film image database. After the rating database and the film image database have been updated, the rating matrix X of the m users over the n films can be reacquired and the feature analysis model retrained on it, and the updated factor matrix U of the m users, factor matrix V of the n films, similarity coefficient matrix θ of the n films and visual feature weight matrix W of the n films are used for subsequent prediction, with the corresponding films pushed to clients as recommendations in real time according to the prediction results. In addition, while recommending, the users' viewing data continue to be collected from clients so that updating and recommending can proceed continuously.
The movie recommendation method based on visual features provided by the embodiment of the present invention can predict a user's rating of films the user has not watched, i.e. the predicted ratings of unrated films. On the basis of the feature factor of the user and the feature factor of the unrated film, the visual features of the unrated film are introduced into the prediction, making the predicted rating more accurate, thereby improving the accuracy of film recommendation and the user experience. The visual features are also included in the learning and training of the feature analysis model, to compensate for the sparsity of film rating data and the resulting inaccuracy of the learned user and film feature factors, improving the accuracy of the learned data and further improving the accuracy of film recommendation.
Correspondingly, the present invention also provides a movie recommendation system based on visual features, capable of realising all flows of the movie recommendation method based on visual features in the above embodiment.
Referring to Fig. 2, which is a structural schematic diagram of an embodiment of the movie recommendation system based on visual features provided by the present invention, the system is specifically as follows:
A first data acquisition module 1, for acquiring the pre-trained feature factor of the user to be recommended;
A second data acquisition module 2, for acquiring the pre-trained feature factor of the unrated film of the user to be recommended, the similarity coefficients between the unrated film and each of the remaining films, and the weight of each feature in the visual features of the unrated film; wherein the visual features include a color histogram, SIFT features, CNN features and a film genre feature;
A prediction module 3, for predicting, using the pre-established film prediction model, the predicted rating of the unrated film by the user to be recommended, according to the feature factor of the user to be recommended, the feature factor of the unrated film, the similarity coefficients between the unrated film and each of the remaining films, the pre-extracted visual features of the unrated film and the weight of each feature in the visual features; and,
A recommendation module 4, for judging, according to the predicted rating, whether to recommend the unrated film to the user to be recommended.
Further, the prediction module specifically includes:
A visual similarity feature acquisition unit, for calling the visual feature function according to the similarity coefficients between the unrated film and each of the remaining films, to obtain the visual similarity feature of the unrated film; and,
A prediction unit, for predicting, using the pre-established film prediction model, the predicted rating of the unrated film by the user to be recommended, according to the pre-acquired user bias data of the user to be recommended, the pre-acquired baseline average score, the feature factor of the user to be recommended, the feature factor of the unrated film, the visual similarity feature of the unrated film, the pre-acquired visual features of the unrated film and the weight of each feature in the visual features;
The visual feature function is:
η = |N(θ, v)|^{-1/2} · Σ_{s ∈ N(θ, v)} θ_{sv} · χ̃_s Φ(v);
Wherein, η is the visual similarity feature of the unrated film v; N(θ, v) is a film screening function that screens out high-similarity films for the unrated film v; θ_{sv} is the similarity coefficient between the unrated film v and a high-similarity film s; χ̃_s is the visual feature of the high-similarity film s; and Φ(v) is the vector formed by the visual features of the unrated film v, so that χ̃_s Φ(v) denotes their inner product.
Preferably, the film prediction model is:
x̂_{uv} = U_{*u}^T (V_{*v} + η) + W_{*v}^T ψ_v + b_u + μ;
Wherein, x̂_{uv} is the predicted rating of the unrated film v by the user u to be recommended; U_{*u} is the feature factor of the user u to be recommended; V_{*v} is the feature factor of the unrated film v; η is the visual similarity feature of the unrated film; W_{*v} is the vector formed by the weights of each feature in the visual features of the unrated film v; ψ_v is the visual features of the unrated film v; b_u is the user bias data of the user u to be recommended; and μ is the baseline average score.
Further, the movie recommendation system based on visual features also includes:
A rating data acquisition module, for acquiring, from the rating database, the rating data of each user among the m users for every film that the user has rated among the n films; the m users include the user to be recommended, and the n films include the unrated film;
A visual feature extraction module, for acquiring the film images of the n films from the film image database, and respectively extracting the visual features of the film image of every film;
A user bias data acquisition module, for calculating the average score of each user over all films that the user has rated among the n films, to obtain the user bias data of each user;
A baseline average score acquisition module, for calculating the average of all rating data of the m users for the n films, to obtain the baseline average score; and,
A model training module, for training the pre-established feature analysis model according to the rating data of each user for the films that the user has rated, the visual features of every film, the user bias data of each user and the baseline average score, to obtain the feature factor of each user, the feature factor of every film, the similarity coefficients between every film and each of the remaining films among the n films, and the weight of each feature in the visual features of every film;
The feature analysis model is:
min_{b_*, W_*, θ_*, U_*, V_*} Σ_{(i,j)} [ λ_1 b_i² + λ_2 W_{*j}² + λ_3 ||U_{*i}||² + λ_4 ||V_{*j}||² + λ_5 θ_{sj}² + (x_{ij} − μ − b_i − W_{*j}^T ψ_j − U_{*i}^T (V_{*j} + η))² ],  1 ≤ i ≤ m; 1 ≤ j ≤ n;
Wherein, λ_1, λ_2, λ_3, λ_4 and λ_5 are preset coefficients; b_i is the user bias data of user i; W_{*j} is the vector formed by the weights of each feature in the visual features of film j; U_{*i} is the feature factor of user i; V_{*j} is the feature factor of film j; θ_{sj} is the similarity coefficient between film j and a film s with high similarity to it; x_{ij} is the rating data of user i for film j; μ is the baseline average score; ψ_j is the visual features of film j; and η is the visual similarity feature of film j.
Further, the movie recommendation system based on visual features also includes:
A collection module, for collecting the viewing data of each user in real time; the viewing data include rating data and the film images of the films the user has rated;
A data update module, for updating the rating database according to the rating data, and updating the film image database according to the film images; and,
A training module, for acquiring data respectively from the updated rating database and the updated film image database, so as to retrain the feature analysis model.
The movie recommendation system based on visual features provided by the embodiment of the present invention can predict a user's rating of films the user has not watched, i.e. the predicted ratings of unrated films. On the basis of the feature factor of the user and the feature factor of the unrated film, the visual features of the unrated film are introduced into the prediction, making the predicted rating more accurate, thereby improving the accuracy of film recommendation and the user experience. The visual features are also included in the learning and training of the feature analysis model, to compensate for the sparsity of film rating data and the resulting inaccuracy of the learned user and film feature factors, improving the accuracy of the learned data and further improving the accuracy of film recommendation.
The above are preferred embodiments of the present invention. It should be noted that, for those skilled in the art, several improvements and modifications may also be made without departing from the principles of the present invention, and these improvements and modifications are also regarded as falling within the protection scope of the present invention.

Claims (10)

1. A movie recommendation method based on visual features, characterised by comprising:
acquiring the pre-trained feature factor of a user to be recommended;
acquiring the pre-trained feature factor of an unrated film of the user to be recommended, the similarity coefficients between the unrated film and each of the remaining films, and the weight of each feature in the visual features of the unrated film; wherein the visual features comprise a color histogram, SIFT features, CNN features and a film genre feature;
predicting, using a pre-established film prediction model, the predicted rating of the unrated film by the user to be recommended, according to the feature factor of the user to be recommended, the feature factor of the unrated film, the similarity coefficients between the unrated film and each of the remaining films, the pre-extracted visual features of the unrated film and the weight of each feature in the visual features; and
judging, according to the predicted rating, whether to recommend the unrated film to the user to be recommended.
2. The movie recommendation method based on visual features according to claim 1, characterised in that predicting, using the pre-established film prediction model, the predicted rating of the unrated film by the user to be recommended according to the feature factor of the user to be recommended, the feature factor of the unrated film, the similarity coefficients between the unrated film and each of the remaining films, the pre-extracted visual features of the unrated film and the weight of each feature in the visual features specifically comprises:
calling a visual feature function according to the similarity coefficients between the unrated film and each of the remaining films, to obtain the visual similarity feature of the unrated film;
predicting, using the pre-established film prediction model, the predicted rating of the unrated film by the user to be recommended, according to pre-acquired user bias data of the user to be recommended, a pre-acquired baseline average score, the feature factor of the user to be recommended, the feature factor of the unrated film, the visual similarity feature of the unrated film, the pre-acquired visual features of the unrated film and the weight of each feature in the visual features;
the visual feature function being:
η = |N(θ, v)|^{-1/2} · Σ_{s ∈ N(θ, v)} θ_{sv} · χ̃_s Φ(v);
wherein η is the visual similarity feature of the unrated film v; N(θ, v) is a film screening function that screens out high-similarity films for the unrated film v; θ_{sv} is the similarity coefficient between the unrated film v and a high-similarity film s; χ̃_s is the visual feature of the high-similarity film s; and Φ(v) is the vector formed by the visual features of the unrated film v, so that χ̃_s Φ(v) denotes their inner product.
3. The movie recommendation method based on visual features according to claim 2, characterised in that the film prediction model is:
x̂_{uv} = U_{*u}^T (V_{*v} + η) + W_{*v}^T ψ_v + b_u + μ;
wherein x̂_{uv} is the predicted rating of the unrated film v by the user u to be recommended; U_{*u} is the feature factor of the user u to be recommended; V_{*v} is the feature factor of the unrated film v; η is the visual similarity feature of the unrated film; W_{*v} is the vector formed by the weights of each feature in the visual features of the unrated film v; ψ_v is the visual features of the unrated film v; b_u is the user bias data of the user u to be recommended; and μ is the baseline average score.
4. The movie recommendation method based on visual features according to claim 2 or 3, characterised in that before acquiring the pre-trained feature factor of the user to be recommended, the method further comprises:
acquiring, from a rating database, the rating data of each user among m users for every film that the user has rated among n films; the m users including the user to be recommended, and the n films including the unrated film;
acquiring the film images of the n films from a film image database, and respectively extracting the visual features of the film image of every film;
calculating the average score of each user over all films that the user has rated among the n films, to obtain the user bias data of each user;
calculating the average of all rating data of the m users for the n films, to obtain the baseline average score;
training a pre-established feature analysis model according to the rating data of each user for the films that the user has rated, the visual features of every film, the user bias data of each user and the baseline average score, to obtain the feature factor of each user, the feature factor of every film, the similarity coefficients between every film and each of the remaining films among the n films, and the weight of each feature in the visual features of every film;
the feature analysis model being:
min_{b_*, W_*, θ_*, U_*, V_*} Σ_{(i,j)} [ λ_1 b_i² + λ_2 W_{*j}² + λ_3 ||U_{*i}||² + λ_4 ||V_{*j}||² + λ_5 θ_{sj}² + (x_{ij} − μ − b_i − W_{*j}^T ψ_j − U_{*i}^T (V_{*j} + η))² ],  1 ≤ i ≤ m; 1 ≤ j ≤ n;
wherein λ_1, λ_2, λ_3, λ_4 and λ_5 are preset coefficients; b_i is the user bias data of user i; W_{*j} is the vector formed by the weights of each feature in the visual features of film j; U_{*i} is the feature factor of user i; V_{*j} is the feature factor of film j; θ_{sj} is the similarity coefficient between film j and a film s with high similarity to it; x_{ij} is the rating data of user i for film j; μ is the baseline average score; ψ_j is the visual features of film j; and η is the visual similarity feature of film j.
5. The movie recommendation method based on visual features according to claim 4, characterised in that before acquiring, from the rating database, the rating data of each user among the m users for every film that the user has rated among the n films, the method further comprises:
collecting the viewing data of each user in real time; the viewing data including rating data and the film images of the films the user has rated;
updating the rating database according to the rating data, and updating the film image database according to the film images;
acquiring data respectively from the updated rating database and the updated film image database, so as to retrain the feature analysis model.
6. A movie recommendation system based on visual features, characterised by comprising:
a first data acquisition module, for acquiring the pre-trained feature factor of a user to be recommended;
a second data acquisition module, for acquiring the pre-trained feature factor of an unrated film of the user to be recommended, the similarity coefficients between the unrated film and each of the remaining films, and the weight of each feature in the visual features of the unrated film; wherein the visual features comprise a color histogram, SIFT features, CNN features and a film genre feature;
a prediction module, for predicting, using a pre-established film prediction model, the predicted rating of the unrated film by the user to be recommended, according to the feature factor of the user to be recommended, the feature factor of the unrated film, the similarity coefficients between the unrated film and each of the remaining films, the pre-extracted visual features of the unrated film and the weight of each feature in the visual features; and
a recommendation module, for judging, according to the predicted rating, whether to recommend the unrated film to the user to be recommended.
7. The movie recommendation system based on visual features according to claim 6, characterised in that the prediction module specifically comprises:
a visual similarity feature acquisition unit, for calling a visual feature function according to the similarity coefficients between the unrated film and each of the remaining films, to obtain the visual similarity feature of the unrated film; and
a prediction unit, for predicting, using the pre-established film prediction model, the predicted rating of the unrated film by the user to be recommended, according to pre-acquired user bias data of the user to be recommended, a pre-acquired baseline average score, the feature factor of the user to be recommended, the feature factor of the unrated film, the visual similarity feature of the unrated film, the pre-acquired visual features of the unrated film and the weight of each feature in the visual features;
the visual feature function being:
η = |N(θ, v)|^{-1/2} · Σ_{s ∈ N(θ, v)} θ_{sv} · χ̃_s Φ(v);
wherein η is the visual similarity feature of the unrated film v; N(θ, v) is a film screening function that screens out high-similarity films for the unrated film v; θ_{sv} is the similarity coefficient between the unrated film v and a high-similarity film s; χ̃_s is the visual feature of the high-similarity film s; and Φ(v) is the vector formed by the visual features of the unrated film v, so that χ̃_s Φ(v) denotes their inner product.
8. The movie recommendation system based on visual features according to claim 7, characterised in that the film prediction model is:
x̂_{uv} = U_{*u}^T (V_{*v} + η) + W_{*v}^T ψ_v + b_u + μ;
wherein x̂_{uv} is the predicted rating of the unrated film v by the user u to be recommended; U_{*u} is the feature factor of the user u to be recommended; V_{*v} is the feature factor of the unrated film v; η is the visual similarity feature of the unrated film; W_{*v} is the vector formed by the weights of each feature in the visual features of the unrated film v; ψ_v is the visual features of the unrated film v; b_u is the user bias data of the user u to be recommended; and μ is the baseline average score.
9. The movie recommendation system based on visual features according to claim 7 or 8, characterised in that the movie recommendation system based on visual features further comprises:
a rating data acquisition module, for acquiring, from a rating database, the rating data of each user among m users for every film that the user has rated among n films; the m users including the user to be recommended, and the n films including the unrated film;
a visual feature extraction module, for acquiring the film images of the n films from a film image database, and respectively extracting the visual features of the film image of every film;
a user bias data acquisition module, for calculating the average score of each user over all films that the user has rated among the n films, to obtain the user bias data of each user;
a baseline average score acquisition module, for calculating the average of all rating data of the m users for the n films, to obtain the baseline average score; and
a model training module, for training a pre-established feature analysis model according to the rating data of each user for the films that the user has rated, the visual features of every film, the user bias data of each user and the baseline average score, to obtain the feature factor of each user, the feature factor of every film, the similarity coefficients between every film and each of the remaining films among the n films, and the weight of each feature in the visual features of every film;
the feature analysis model being:
min_{b_*, W_*, θ_*, U_*, V_*} Σ_{(i,j)} [ λ_1 b_i² + λ_2 W_{*j}² + λ_3 ||U_{*i}||² + λ_4 ||V_{*j}||² + λ_5 θ_{sj}² + (x_{ij} − μ − b_i − W_{*j}^T ψ_j − U_{*i}^T (V_{*j} + η))² ],  1 ≤ i ≤ m; 1 ≤ j ≤ n;
wherein λ_1, λ_2, λ_3, λ_4 and λ_5 are preset coefficients; b_i is the user bias data of user i; W_{*j} is the vector formed by the weights of each feature in the visual features of film j; U_{*i} is the feature factor of user i; V_{*j} is the feature factor of film j; θ_{sj} is the similarity coefficient between film j and a film s with high similarity to it; x_{ij} is the rating data of user i for film j; μ is the baseline average score; ψ_j is the visual features of film j; and η is the visual similarity feature of film j.
10. The movie recommendation system based on visual features according to claim 9, wherein the movie recommendation system based on visual features further comprises:
An acquisition module, for collecting the viewing data of each user in real time; the viewing data include score data and the film images of the films rated by the user;
A data update module, for updating the score database according to the score data, and updating the image database according to the film images; and,
A training module, for obtaining data from the updated score database and the updated image database respectively, so as to retrain the feature analysis model.
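The collect → update → retrain cycle of claim 10 can be sketched as follows. This is a minimal illustration under assumptions of my own: in-memory dicts stand in for the score and image databases, `retrain` is a placeholder hook (a real system would rerun the optimization of claim 9 there), and every name is illustrative rather than from the patent.

```python
# In-memory stand-ins for the two databases of claim 10.
rating_db = {}   # (user_id, film_id) -> score
image_db = {}    # film_id -> film image (any placeholder object)

def collect(user_id, film_id, score, film_image):
    """Acquisition module: one unit of viewing data (a score plus the rated film's image)."""
    return {"user": user_id, "film": film_id, "score": score, "image": film_image}

def update_databases(viewing):
    """Data update module: push the new score and film image into their databases."""
    rating_db[(viewing["user"], viewing["film"])] = viewing["score"]
    image_db[viewing["film"]] = viewing["image"]

def retrain(rating_db, image_db):
    """Training module: re-fit the feature analysis model on the updated databases.
    Placeholder: returns a summary instead of actually optimizing."""
    return {"n_ratings": len(rating_db), "n_images": len(image_db)}

viewing = collect("u1", "f7", 4.5, b"<jpeg bytes>")
update_databases(viewing)
model = retrain(rating_db, image_db)
```

The point of the cycle is that both databases stay synchronized with real-time viewing data, so each retraining pass sees the newest ratings and the newest film images together.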
CN201610522342.9A 2016-07-05 2016-07-05 Movie recommendation method and system based on visual features Active CN106169083B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610522342.9A CN106169083B (en) 2016-07-05 2016-07-05 Movie recommendation method and system based on visual features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610522342.9A CN106169083B (en) 2016-07-05 2016-07-05 Movie recommendation method and system based on visual features

Publications (2)

Publication Number Publication Date
CN106169083A true CN106169083A (en) 2016-11-30
CN106169083B CN106169083B (en) 2020-06-19

Family

ID=58065953

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610522342.9A Active CN106169083B (en) 2016-07-05 2016-07-05 Movie recommendation method and system based on visual features

Country Status (1)

Country Link
CN (1) CN106169083B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991122A (en) * 2017-02-27 2017-07-28 四川大学 Movie recommendation method based on particle swarm optimization
CN107291845A (en) * 2017-06-02 2017-10-24 北京邮电大学 Trailer-based movie recommendation method and system
CN107688610A (en) * 2017-08-03 2018-02-13 电子科技大学 Movie popularity prediction method and system based on network evolution model
CN107918652A (en) * 2017-11-15 2018-04-17 浙江大学 Method for social-network-based movie recommendation using multi-modal network learning
CN108320187A (en) * 2018-02-02 2018-07-24 合肥工业大学 Recommendation method based on deep social relationships
CN108536856A (en) * 2018-04-17 2018-09-14 重庆邮电大学 Hybrid collaborative filtering movie recommendation model based on bipartite network structure
CN108959429A (en) * 2018-06-11 2018-12-07 苏州大学 Method and system for movie recommendation integrating end-to-end-trained visual features
CN110457517A (en) * 2019-08-19 2019-11-15 山东云缦智能科技有限公司 Implementation method of a picture-similarity-based video-on-demand recommendation scheme

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156747A (en) * 2011-04-21 2011-08-17 清华大学 Method and device for collaborative filtering rating prediction by introducing social tags
CN104615741A (en) * 2015-02-12 2015-05-13 福建金科信息技术股份有限公司 Cloud-computing-based cold-start item recommendation method and device
CN104834710A (en) * 2015-04-30 2015-08-12 广东工业大学 Cold-start processing method for a movie recommendation rating system
US20150370890A1 (en) * 2014-06-24 2015-12-24 International Business Machines Corporation Providing a visual and conversational experience in support of recommendations

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102156747A (en) * 2011-04-21 2011-08-17 清华大学 Method and device for collaborative filtering rating prediction by introducing social tags
US20150370890A1 (en) * 2014-06-24 2015-12-24 International Business Machines Corporation Providing a visual and conversational experience in support of recommendations
CN104615741A (en) * 2015-02-12 2015-05-13 福建金科信息技术股份有限公司 Cloud-computing-based cold-start item recommendation method and device
CN104834710A (en) * 2015-04-30 2015-08-12 广东工业大学 Cold-start processing method for a movie recommendation rating system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LILI ZHAO, ET AL.: "Ontology Based Opinion Mining for Movie Reviews", Springer-Verlag Berlin Heidelberg, 2009 *
戴思: "Research and Implementation of a Video Recommendation System Based on a Visualized Knowledge Framework", China Masters' Theses Full-text Database, Information Science and Technology *
杨宇: "Image Recommendation System Based on Deep Learning Features", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991122A (en) * 2017-02-27 2017-07-28 四川大学 Movie recommendation method based on particle swarm optimization
CN106991122B (en) * 2017-02-27 2021-02-02 四川大学 Movie recommendation method based on particle swarm optimization
CN107291845A (en) * 2017-06-02 2017-10-24 北京邮电大学 Trailer-based movie recommendation method and system
CN107688610A (en) * 2017-08-03 2018-02-13 电子科技大学 Movie popularity prediction method and system based on network evolution model
CN107688610B (en) * 2017-08-03 2020-05-12 电子科技大学 Movie popularity prediction method and system based on network evolution model
CN107918652A (en) * 2017-11-15 2018-04-17 浙江大学 Method for social-network-based movie recommendation using multi-modal network learning
CN108320187A (en) * 2018-02-02 2018-07-24 合肥工业大学 Recommendation method based on deep social relationships
CN108320187B (en) * 2018-02-02 2021-04-06 合肥工业大学 Recommendation method based on deep social relationships
CN108536856A (en) * 2018-04-17 2018-09-14 重庆邮电大学 Hybrid collaborative filtering movie recommendation model based on bipartite network structure
CN108959429A (en) * 2018-06-11 2018-12-07 苏州大学 Method and system for movie recommendation integrating end-to-end-trained visual features
CN108959429B (en) * 2018-06-11 2022-09-09 苏州大学 Method and system for movie recommendation integrating end-to-end-trained visual features
CN110457517A (en) * 2019-08-19 2019-11-15 山东云缦智能科技有限公司 Implementation method of a picture-similarity-based video-on-demand recommendation scheme

Also Published As

Publication number Publication date
CN106169083B (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN106169083A (en) Movie recommendation method and system based on visual features
Zhang et al. Integrating bottom-up classification and top-down feedback for improving urban land-cover and functional-zone mapping
CN106446015A (en) Video content access prediction and recommendation method based on user behavior preference
CN103678431B (en) Recommendation method based on standard tags and item ratings
dos Santos et al. A relevance feedback method based on genetic programming for classification of remote sensing images
CN109614973A (en) Image semantic segmentation method, system, device, and medium for rice seedlings and seedling-stage weeds
CN102376063B (en) Social-tag-based optimization method for a personalized recommendation system
CN102930539B (en) Target tracking method based on dynamic graph matching
CN106682108A (en) Video retrieval method based on a multi-modal convolutional neural network
CN103262118B (en) Attribute value estimation device and attribute value estimation method
CN106529499A (en) Gait recognition method based on fused Fourier descriptor and gait energy image features
CN106991382A (en) Remote sensing scene classification method
CN109299657B (en) Group behavior recognition method and device based on a semantic attention retention mechanism
CN108288014A (en) Intelligent road extraction method and device, extraction model construction method, and hybrid navigation system
CN107480642A (en) Video action recognition method based on temporal segment networks
CN102982107A (en) Recommendation system optimization method integrating user, item, and context attribute information
CN103714349A (en) Image recognition method based on color and texture features
CN107436950A (en) Travel itinerary recommendation method and system
CN103400154B (en) Human motion recognition method based on supervised isometric mapping
CN102750385A (en) Tag-retrieval-based image retrieval method with relevance-quality ranking
CN106408030A (en) SAR image classification method based on mid-level semantic attributes and a convolutional neural network
CN105095863A (en) Human behavior recognition method based on similarity-weighted semi-supervised dictionary learning
CN104298974A (en) Human body behavior recognition method based on depth video sequences
CN104239496A (en) Collaborative filtering method fusing fuzzy-weight similarity measurement and clustering
CN105654198B (en) Brand advertisement effect optimization method with optimal threshold selection

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant