CN102508907B - Dynamic recommendation method based on training set optimization for recommendation system - Google Patents


Info

Publication number
CN102508907B
CN102508907B (application numbers CN2011103568944A / CN201110356894A)
Authority
CN
China
Prior art keywords
training
sample
data
recommendation
iteration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2011103568944A
Other languages
Chinese (zh)
Other versions
CN102508907A (en)
Inventor
欧阳元新
蒋祥涛
罗建辉
熊璋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN2011103568944A priority Critical patent/CN102508907B/en
Publication of CN102508907A publication Critical patent/CN102508907A/en
Application granted granted Critical
Publication of CN102508907B publication Critical patent/CN102508907B/en


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a dynamic recommendation method based on training set optimization for a recommendation system. The method comprises: (1) establishing a preliminary recommendation model: generating an original recommendation model from the original user rating data; (2) performing AdaBoost training: using the original recommendation model as the basis for classification, classifying the data and adjusting the learning counts of the samples through multiple iterations over the training data; (3) screening erroneous samples: after multiple rounds of AdaBoost training, removing the repeatedly selected hard samples as erroneous samples so as to construct a new training data set; (4) reconstructing the recommendation model: regenerating the recommendation model from the new training data in combination with the training results; and (5) generating recommendation results with the new recommendation model. By exploiting the strong content correlation within the original training set, the method removes data that carry no useful reference for the recommendation service, thereby improving the validity of the training data and the precision of the final recommendation model.

Description

Dynamic recommendation method for a recommendation system based on training set optimization
Technical field
The present invention relates to the technical field of user recommendation systems, and in particular to a dynamic recommendation method for a recommendation system based on training set optimization.
Background art
Personalized recommendation services are user-centric: based on an understanding of user preferences, they present customized information to the user, and they are an effective way to extract the information a user needs from massive Internet resources. Compared with generic service models, personalized recommendation services have the following characteristics. First, they can rescue the user from information overload and offer a genuinely human-oriented network information service that is rich, varied and conveniently tailored, greatly improving user experience and satisfaction. Second, they can substantially improve the service quality and access efficiency of a website, and can also discover the user's latent interests, thereby mining potential commercial value and providing a considerable economic return for Internet service providers.
Since recommendation systems based on collaborative filtering emerged, and especially since the proposal of latent-vector recommendation models based on regularized matrix factorization, personalized recommendation technology has achieved a considerable improvement in recommendation precision at the theoretical level. The original rating data, as the key evidence for recommendation, has a decisive influence on the final recommendation results; clearly, a set of highly accurate data yields a good recommendation effect in the final recommendation.
A personalized recommendation service for users is generally built on an accumulated historical data set, and the volume of such a data set is very large. Given this scale, it is hard to avoid collecting unreasonable data, such as mistaken ratings by users or ratings made on a user's behalf by someone else. Such data carry no useful reference and should not be used in the recommendation service. Therefore, processing and selecting the original rating data can greatly help to improve recommendation precision. Screening the training set with a sufficiently accurate judgment method and then building the recommendation model on that basis yields a recommendation model with markedly better recommendation precision.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art and to provide a dynamic recommendation method for a recommendation system based on training set optimization. The method screens the original training data that serve as the basis of personalized recommendation, and obtains a recommendation model of higher precision from the new training set from which erroneous samples have been removed, thereby improving the accuracy of personalized recommendation.
The technical scheme by which the present invention solves the above technical problem is a dynamic recommendation method for a recommendation system based on training set optimization, whose concrete steps are as follows:
Step (1), establishing a preliminary recommendation model: according to the original user rating data, generate an initial recommendation model using the modeling method of the regularized matrix factorization (RMF) recommendation model; a minimal sketch of this model appears after step (5) below;
Step (2), AdaBoost training: build a classifier using the recommendation model generated in step (1) as the basis for the initial classification decision, decide the class of each data item according to the difference between the recommendation value computed by the recommendation model and the original value, learn the original training samples with the AdaBoost algorithm, and generate a new classifier after each round;
Step (3), screening erroneous samples: in every round of AdaBoost training, hard samples must be filtered out; in this method a sample is judged hard from the difference between its predicted value and its actual value, namely when this difference exceeds a certain threshold. After multiple rounds of AdaBoost training, the data that have repeatedly been selected as hard samples are removed as erroneous samples, thereby constructing the training data set required for the next iteration;
Step (4), reconstructing the recommendation model: taking the training data obtained in step (3) as the basis and combining the AdaBoost training results, regenerate the recommendation model;
Step (5), producing recommendation results: taking the user feature vector as input, compute the recommendation results with the recommendation model obtained in step (4) and return them to the user.
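A minimal sketch of step (1), training a regularized matrix factorization model by stochastic gradient descent on a toy rating set; the function name train_rmf, the hyper-parameters and the toy data are illustrative assumptions, not part of the patent.

import numpy as np

def train_rmf(ratings, n_users, n_items, n_factors=10, passes=20,
              learn_rate=0.01, reg=0.02, seed=0):
    """Minimal regularized matrix factorization trained by SGD.

    ratings: list of (user_id, item_id, score) triples.
    Returns the user feature matrix P and item feature matrix Q.
    """
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, n_factors))   # user feature vectors
    Q = 0.1 * rng.standard_normal((n_items, n_factors))   # item feature vectors
    for _ in range(passes):
        for u, i, r in ratings:
            pu = P[u].copy()
            err = r - pu @ Q[i]                            # prediction error
            P[u] += learn_rate * (err * Q[i] - reg * pu)   # regularized SGD update
            Q[i] += learn_rate * (err * pu - reg * Q[i])
    return P, Q

# toy data: (user, item, rating)
toy = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
P, Q = train_rmf(toy, n_users=3, n_items=3)
print("predicted rating of user 0 for item 2:", P[0] @ Q[2])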
The AdaBoost training of the raw data set in step (2) is specifically as follows:
Step 1: modify the regularized matrix factorization recommendation model so that the original rating data set T is no longer split into two subsets T1 and T2 (where T1 would be used for learning and T2 for judging when learning stops); instead, all data in T are learned. Set the number of AdaBoost training iteration rounds I, the number of learning passes R per round and the permitted error range errPermission, and initialize the feature vector sets;
Step 2: in the first round of iteration, learn the training data R times with the regularized matrix factorization recommendation model; on the feature vector sets obtained by training, compute the estimated value r̂_{u,i} of each user's rating of each item in the training data, and obtain its absolute error with respect to the actual value r_{u,i}, namely AbsE = |r̂_{u,i} − r_{u,i}|, as illustrated by the sketch below;
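A minimal sketch of this computation for a single (user, item) pair; the feature vectors, the actual rating and the errPermission value are assumed for illustration.

import numpy as np

# Illustrative feature vectors for one user u and one item i (assumed, not from the patent).
p_u = np.array([0.3, -0.1, 0.5])    # user feature vector
q_i = np.array([0.4,  0.2, 0.1])    # item feature vector
r_ui = 4.0                          # actual rating r_{u,i}

r_hat = p_u @ q_i                   # estimated rating r^_{u,i}
abs_e = abs(r_hat - r_ui)           # AbsE = |r^_{u,i} - r_{u,i}|
err_permission = 0.8
print("AbsE =", abs_e, "hard sample" if abs_e > err_permission else "easy sample")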
Step 3: when the AbsE value computed in Step 2 is greater than errPermission, the corresponding data item is judged to be a hard sample; traverse the whole training data to obtain the total number of hard samples errCount_n, and compute the sample error rate ε_n by the following formula, where |T| is the number of samples in the training set:
ε_n = errCount_n / |T|    (Formula 1)
where ε_n is the sample error rate, |T| is the number of samples in the training set, and errCount_n is the total number of hard samples found by traversing the whole training data;
Step 4: according to the error rate ε_n computed in Step 3, adjust the learning count of each training sample in the next round of iteration. Specifically, when the AbsE value of a training sample is less than errPermission, its learning count in the next round is trainTime_{n+1} = trainTime_n · ε_n (if trainTime_{n+1} < 1, it is set to 1); when the AbsE value of a training sample is greater than errPermission, so that it is judged a hard sample as in Step 3, its learning count in the next round is trainTime_{n+1} = trainTime_n / ε_n. This can be expressed as:
trainTime_{n+1} = trainTime_n / ε_n,           if AbsE ≥ errPermission
trainTime_{n+1} = max(trainTime_n · ε_n, 1),   if AbsE < errPermission    (Formula 2)
where trainTime_n is the learning count of the sample in the n-th round of iteration, trainTime_{n+1} is its learning count in the (n+1)-th round, ε_n is the error rate computed in Step 3, AbsE is the absolute error computed in Step 2, and errPermission is the permitted error range; a sketch of Formulas (1) and (2) follows;
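A minimal Python sketch of Formulas (1) and (2), assuming abs_errors holds the AbsE value of every training sample; the names error_rate and next_train_time and the toy values are illustrative, not terminology from the patent.

def error_rate(abs_errors, err_permission):
    """Formula (1): fraction of hard samples, errCount_n / |T|."""
    err_count = sum(1 for e in abs_errors if e > err_permission)
    return err_count / len(abs_errors)

def next_train_time(train_time, abs_e, eps_n, err_permission):
    """Formula (2): learning count of a sample in the next round."""
    if abs_e >= err_permission:            # hard sample: learn it more often
        return train_time / eps_n
    return max(train_time * eps_n, 1)      # easy sample: learn it less, but at least once

abs_errors = [0.2, 1.5, 0.4, 2.1, 0.3]     # assumed AbsE values of five samples
eps_n = error_rate(abs_errors, err_permission=1.0)
print([next_train_time(1, e, eps_n, 1.0) for e in abs_errors])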
Step 5: after errPermission has been reduced by the fixed proportion declineRate, start a new round of iteration, in which the learning count of each sample follows the computation of Step 4; the per-round bookkeeping is sketched below.
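The per-round bookkeeping of Steps 3 to 5 (counting hard samples, updating each sample's learning count, and shrinking errPermission by declineRate after each round) might look like the following sketch; abs_error is a caller-supplied stand-in for the recommendation model's absolute prediction error, and all names and values are assumptions for illustration.

def run_rounds(samples, abs_error, n_rounds, err_permission, decline_rate):
    """Sketch of the AdaBoost-style rounds: adjust per-sample learning counts
    and decay errPermission each round. abs_error(sample) stands in for the
    model's absolute prediction error on that sample."""
    train_time = {s: 1.0 for s in samples}
    hard_count = {s: 0 for s in samples}
    for _ in range(n_rounds):
        errors = {s: abs_error(s) for s in samples}
        eps = max(sum(e > err_permission for e in errors.values()) / len(samples), 1e-9)
        for s, e in errors.items():
            if e > err_permission:               # hard sample this round
                hard_count[s] += 1
                train_time[s] = train_time[s] / eps
            else:
                train_time[s] = max(train_time[s] * eps, 1)
        err_permission *= (1 - decline_rate)     # Step 5: tighten the threshold
    return train_time, hard_count

# toy run: samples are ids, the "error" is a fixed number per sample (assumed)
fixed = {0: 0.2, 1: 1.4, 2: 0.5, 3: 2.0}
print(run_rounds(list(fixed), fixed.get, n_rounds=3, err_permission=1.0, decline_rate=0.1))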
The method of screening erroneous samples in step (3) is specifically as follows:
Step A: in the AdaBoost training process of step (2), every round of training filters out hard samples, which are marked and counted;
Step B: after multiple rounds of AdaBoost iterative training, traverse the training data and count, for each sample, the number of times it was judged a hard sample;
Step C: according to the removal rate delRate, remove from the training set the samples that were judged hard the most times, thereby obtaining a new training set (see the sketch below).
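Step C can be sketched as follows, assuming hard_count maps each sample to the number of rounds in which it was judged hard; del_rate and the sample identifiers are illustrative.

def screen_errors(samples, hard_count, del_rate):
    """Remove the fraction del_rate of samples judged hard most often."""
    n_remove = int(len(samples) * del_rate)
    worst = sorted(samples, key=lambda s: hard_count.get(s, 0), reverse=True)[:n_remove]
    removed = set(worst)
    return [s for s in samples if s not in removed]

samples = ["r1", "r2", "r3", "r4", "r5"]                 # assumed rating-record ids
hard_count = {"r1": 0, "r2": 5, "r3": 1, "r4": 4, "r5": 0}
print(screen_errors(samples, hard_count, del_rate=0.4))  # drops r2 and r4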
The advantages of the present invention over the prior art are as follows:
The present invention performs automatic clustering on the existing data; through repeated iterative learning it can filter out the erroneous data that are detrimental to the precision of the recommendation model, thereby removing the erroneous data, obtaining high-quality training data, and finally achieving higher recommendation precision. Compared with classical methods, which only work on the recommendation model and do not filter the original training data, this method improves recommendation precision more effectively.
Brief description of the drawings
Fig. 1 is a schematic workflow diagram of the present invention;
Fig. 2 is a detailed workflow diagram of the present invention.
Embodiment
Embodiments of the invention are now described with reference to the accompanying drawings.
As shown in Fig. 2, the present invention comprises four main steps: establishing the recommendation model, AdaBoost training, screening erroneous samples, and reconstructing the recommendation model.
Step (1), establishing the recommendation model: read the original user rating data and the test data, determine the dimensions of the user feature vectors and item feature vectors from the largest user number UserID and item number ItemID in the two data sets, and, following the modeling method of the regularized matrix factorization recommendation model, create and randomly initialize the user feature vectors and item feature vectors, as in the sketch below;
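A minimal sketch of this initialization, in which the feature-vector dimensions are taken from the largest user and item numbers appearing in the training and test data; the 1-based numbering, latent dimension and rating triples are assumptions for illustration.

import numpy as np

train_data = [(1, 1, 5.0), (2, 3, 2.0), (3, 2, 4.0)]   # (UserID, ItemID, rating), assumed
test_data  = [(4, 5, 3.0)]

max_user = max(u for u, _, _ in train_data + test_data)
max_item = max(i for _, i, _ in train_data + test_data)

n_factors = 10                                          # latent dimension, assumed
rng = np.random.default_rng(0)
user_vectors = 0.1 * rng.standard_normal((max_user + 1, n_factors))  # row 0 unused, 1-based ids
item_vectors = 0.1 * rng.standard_normal((max_item + 1, n_factors))
print(user_vectors.shape, item_vectors.shape)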
Step (2), the AdaBoost training stage: build a classifier using the recommendation model generated in step (1) as the basis for the classification decision, and decide the class of each data item according to the difference between the recommendation value computed by the recommendation model and the original value. The concrete steps are:
Step 1: modify the regularized matrix factorization recommendation model so that the original rating data set T is no longer split into two subsets T1 and T2 (with T1 used for learning and T2 used to judge when learning stops); instead, all data in T are learned. Set the number of AdaBoost training iteration rounds I, the number of learning passes R per round and the permitted error range errPermission, and initialize the feature vector sets;
Step 2: in every round of AdaBoost iteration, learn the training data R times with the modified regularized matrix factorization modeling and training method described above. After each learning pass, compute the RMSE of the resulting user-feature-vector and item-feature-vector model on the training data; when the RMSE becomes larger than the value of the previous pass, the smallest RMSE of the round has been reached and the round of iteration can be ended early. After this round of iteration a new recommendation model is obtained, namely the final user feature vectors and item feature vectors of the round. Using these feature vectors, compute the estimated value r̂_{u,i} of each user's rating of each item in the training data, and obtain its absolute error with respect to the actual value r_{u,i}, namely AbsE = |r̂_{u,i} − r_{u,i}|; one such round is sketched below;
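The early-stopping rule of Step 2 might be sketched as follows: at most R learning passes per round, with the round ended early once the training RMSE rises; the SGD update, parameter values and toy data are illustrative assumptions rather than the patent's exact procedure.

import numpy as np

def rmse(P, Q, data):
    """Root mean squared error of the current model on the given ratings."""
    return np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in data]))

def one_round(P, Q, data, R=20, learn_rate=0.01, reg=0.02):
    """One AdaBoost round: at most R learning passes, stop when RMSE rises."""
    best = rmse(P, Q, data)
    for _ in range(R):
        for u, i, r in data:
            pu = P[u].copy()
            err = r - pu @ Q[i]
            P[u] += learn_rate * (err * Q[i] - reg * pu)
            Q[i] += learn_rate * (err * pu - reg * Q[i])
        current = rmse(P, Q, data)
        if current > best:        # RMSE got worse: end this round early
            break
        best = current
    # absolute error AbsE of every sample under the round's final model
    return [(u, i, abs(P[u] @ Q[i] - r)) for u, i, r in data]

rng = np.random.default_rng(0)
P = 0.1 * rng.standard_normal((3, 5))
Q = 0.1 * rng.standard_normal((3, 5))
data = [(0, 0, 5.0), (1, 1, 3.0), (2, 2, 4.0)]
print(one_round(P, Q, data)[:2])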
Step 3: when the AbsE value computed in Step 2 is greater than errPermission, the data item is judged to be a hard sample and its mis-judgment count is increased by 1; traverse the whole training data to obtain the total number of hard samples errCount_n, and compute the sample error rate ε_n by Formula (1):
ε_n = errCount_n / |T|    (Formula 1)
where ε_n is the sample error rate, |T| is the number of samples in the training set, and errCount_n is the total number of hard samples found by traversing the whole training data;
Step 4: according to the error rate ε_n computed in Step 3, adjust the learning count of each training sample in the next round of iteration using Formula (2):
trainTime_{n+1} = trainTime_n / ε_n,           if AbsE ≥ errPermission
trainTime_{n+1} = max(trainTime_n · ε_n, 1),   if AbsE < errPermission    (Formula 2)
where trainTime_n is the learning count of the sample in the n-th round of iteration, trainTime_{n+1} is its learning count in the (n+1)-th round, ε_n is the error rate computed in Step 3, AbsE is the absolute error computed in Step 2, and errPermission is the permitted error range;
Step 5: after errPermission has been reduced by the fixed proportion declineRate, start a new round of iteration, in which the learning count of each sample follows the computation of Step 4; this stage ends once the predetermined number of iterations has been completed;
Step (3), screening erroneous samples: in every round of AdaBoost training, hard samples must be filtered out; a sample is judged hard from the difference between its predicted value and its actual value, namely when this difference exceeds a certain threshold. After multiple rounds of AdaBoost training, the data that have repeatedly been selected as hard samples are removed as erroneous samples, thereby constructing the training data set required for the next iteration:
Step A: in the AdaBoost training process of step (2), every round of training filters out hard samples, which are marked and counted;
Step B: after multiple rounds of AdaBoost iterative training, traverse the training data and count, for each sample, the number of times it was judged a hard sample;
Step C: from the removal rate del_rate, the total number of samples and the statistics of Step B, compute the maximum permitted number of mis-judgments; samples judged hard more often than this value have their training count in the training set set to 0, thereby obtaining a new training set (see the sketch below);
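A sketch of this variant of Step C; because the patent does not spell out how the maximum permitted mis-judgment count is derived from del_rate, the total sample number and the counts of Step B, the quantile rule used below is only an assumed reading, and all names and values are illustrative.

def apply_error_screen(train_time, hard_count, del_rate):
    """Set the training count of the most frequently mis-judged samples to 0.

    The threshold (roughly the (1 - del_rate) quantile of the hard-sample counts)
    is an assumed reading of the patent text."""
    counts = sorted(hard_count.values())
    idx = max(int(len(counts) * (1 - del_rate)) - 1, 0)
    max_allowed = counts[idx]
    return {s: (0 if hard_count[s] > max_allowed else t) for s, t in train_time.items()}

train_time = {"r1": 1, "r2": 4, "r3": 1, "r4": 2, "r5": 1}   # assumed learning counts
hard_count = {"r1": 0, "r2": 6, "r3": 1, "r4": 5, "r5": 0}   # times judged hard
print(apply_error_screen(train_time, hard_count, del_rate=0.4))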
Step (4), reconstructing the recommendation model: taking the training data obtained in step (3) as the basis and combining the AdaBoost training results, regenerate the recommendation model based on the regularized matrix factorization method;
Step (5), producing recommendation results: take the product of a user feature vector and an item feature vector as the estimated rating of the specific user for the specific item, and recommend the items with the highest estimated ratings to that user, as sketched below.
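Step (5) can be sketched as follows: the estimated rating is the dot product of a user feature vector and an item feature vector, and the items with the highest estimated ratings are returned; the feature matrices, n_top and the excluded item are illustrative.

import numpy as np

def recommend(P, Q, user_id, n_top=3, exclude=()):
    """Recommend the items with the highest estimated rating p_u . q_i."""
    scores = Q @ P[user_id]                       # estimated rating of every item
    ranked = np.argsort(scores)[::-1]             # best first
    return [int(i) for i in ranked if i not in exclude][:n_top]

rng = np.random.default_rng(0)
P = rng.standard_normal((4, 8))                   # 4 users, 8 latent factors (assumed)
Q = rng.standard_normal((6, 8))                   # 6 items
print(recommend(P, Q, user_id=0, exclude={2}))    # top items for user 0, item 2 already rated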
The present invention performs automatic clustering on the existing data; through repeated iterative learning it can filter out the erroneous data that are detrimental to the precision of the recommendation model, thereby removing the erroneous data, obtaining high-quality training data, and finally achieving higher recommendation precision. Compared with classical methods, which only work on the recommendation model and do not filter the original training data, this method improves the recommendation precision of the recommendation system more effectively. As shown in Table 1 below, without the screening method of the present invention the RMSE of the RMF method on the test set is 0.792933, while every group of data in Table 1 achieves a smaller RMSE value.
Table 1 RMSE values obtained by the present invention
Parts of the present invention that are not elaborated here belong to techniques well known to those skilled in the art. The above embodiments are intended only to illustrate the technical solution of the present invention and not to limit it to the scope of the embodiments; to those skilled in the art, variations that remain within the scope defined by the claims and within the spirit and scope of the present invention are apparent, and all innovations making use of the inventive concept fall within its protection.

Claims (2)

1. A dynamic recommendation method for a recommendation system based on training set optimization, characterized in that the concrete steps of the method are as follows:
Step (1), establishing a preliminary recommendation model: according to the original user rating data, generate a preliminary recommendation model using the modeling method of the regularized matrix factorization recommendation model;
Step (2), AdaBoost training: build a classifier using the recommendation model generated in step (1) as the basis for the initial classification decision, decide the class of each data item according to the difference between the recommendation value computed by the recommendation model and the original value, learn the original training samples with the AdaBoost algorithm, and generate a new classifier after each round;
wherein the AdaBoost training of the raw data set in step (2) is specifically as follows:
Step 1: modify the regularized matrix factorization recommendation model so that the original rating data set T is no longer split into two subsets T1 and T2 (where T1 would be used for learning and T2 for judging when learning stops); instead, all data in T are learned; set the number of AdaBoost training iteration rounds I, the number of learning passes R per round and the permitted error range errPermission, and initialize the feature vector sets;
Step 2: in the first round of iteration, learn the training data R times with the regularized matrix factorization recommendation model; on the feature vector sets obtained by training, compute the estimated value r̂_{u,i} of each user's rating of each item in the training data, and obtain its absolute error with respect to the actual value r_{u,i}, namely AbsE = |r̂_{u,i} − r_{u,i}|;
Step 3: when the AbsE value computed in Step 2 is greater than errPermission, the data item is judged to be a hard sample; traverse the whole training data to obtain the total number of hard samples errCount_n, and compute the sample error rate ε_n by the following formula, where |T| is the number of samples in the training set:
ε_n = errCount_n / |T|    (Formula 1)
where ε_n is the sample error rate, |T| is the number of samples in the training set, and errCount_n is the total number of hard samples found by traversing the whole training data;
Step 4: according to the error rate ε_n computed in Step 3, adjust the learning count of each training sample in the next round of iteration; specifically, when the AbsE value of a training sample is less than errPermission, its learning count in the next round is trainTime_{n+1} = trainTime_n · ε_n, where trainTime_{n+1} is set to 1 if it is less than 1; when the AbsE value of a training sample is greater than errPermission, so that it is judged a hard sample, its learning count in the next round is trainTime_{n+1} = trainTime_n / ε_n; this can be expressed as:
trainTime_{n+1} = trainTime_n / ε_n,           if AbsE ≥ errPermission
trainTime_{n+1} = max(trainTime_n · ε_n, 1),   if AbsE < errPermission    (Formula 2)
where trainTime_n is the learning count of the sample in the n-th round of iteration, trainTime_{n+1} is its learning count in the (n+1)-th round, ε_n is the error rate computed in Step 3, AbsE is the absolute error computed in Step 2, and errPermission is the permitted error range;
Step 5: after errPermission has been reduced by the fixed proportion declineRate, start a new round of iteration, in which the learning count of each sample follows the computation of Step 4;
Step (3), screening erroneous samples: in every round of AdaBoost training, hard samples must be filtered out; a sample is judged hard from the difference between its predicted value and its actual value, namely when this difference exceeds a certain threshold; after multiple rounds of AdaBoost training, the data that have repeatedly been selected as hard samples are removed as erroneous samples, thereby constructing the training data set required for the next iteration;
Step (4), reconstructing the recommendation model: taking the training data set obtained in step (3) as the basis and training in combination with the AdaBoost algorithm, regenerate the recommendation model;
Step (5), producing recommendation results: taking the user feature vector as input, compute the recommendation results with the recommendation model obtained in step (4) and return them to the user.
2. The dynamic recommendation method for a recommendation system based on training set optimization according to claim 1, characterized in that the method of screening erroneous samples in step (3) is specifically as follows:
Step A: in the AdaBoost training process of step (2), every round of training filters out hard samples, which are marked and counted;
Step B: after multiple rounds of AdaBoost iterative training, traverse the training data and count, for each sample, the number of times it was judged a hard sample;
Step C: according to the removal rate delRate, remove from the training set the samples that were judged hard the most times, thereby obtaining a new training set.
CN2011103568944A 2011-11-11 2011-11-11 Dynamic recommendation method based on training set optimization for recommendation system Expired - Fee Related CN102508907B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011103568944A CN102508907B (en) 2011-11-11 2011-11-11 Dynamic recommendation method based on training set optimization for recommendation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011103568944A CN102508907B (en) 2011-11-11 2011-11-11 Dynamic recommendation method based on training set optimization for recommendation system

Publications (2)

Publication Number Publication Date
CN102508907A CN102508907A (en) 2012-06-20
CN102508907B true CN102508907B (en) 2013-11-20

Family

ID=46220993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011103568944A Expired - Fee Related CN102508907B (en) 2011-11-11 2011-11-11 Dynamic recommendation method based on training set optimization for recommendation system

Country Status (1)

Country Link
CN (1) CN102508907B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156107A (en) * 2015-04-03 2016-11-23 刘岩松 Method for discovering news hotspots

Families Citing this family (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104503842B (en) * 2014-12-22 2018-05-04 广州品唯软件有限公司 Policy execution method and device based on multilayer shunting experimental framework
CN104615989B (en) * 2015-02-05 2018-06-15 北京邮电大学 A kind of outdoor day and night distinguishing method
CN104615779B (en) * 2015-02-28 2017-08-11 云南大学 A kind of Web text individuations recommend method
EP3480767A1 (en) * 2015-04-23 2019-05-08 Rovi Guides, Inc. Systems and methods for improving accuracy in media asset recommendation models
CN105488216B (en) * 2015-12-17 2020-08-21 上海中彦信息科技股份有限公司 Recommendation system and method based on implicit feedback collaborative filtering algorithm
CN106934413B (en) * 2015-12-31 2020-10-13 阿里巴巴集团控股有限公司 Model training method, device and system and sample set optimization method and device
CN105760965A (en) * 2016-03-15 2016-07-13 北京百度网讯科技有限公司 Pre-estimated model parameter training method, service quality pre-estimation method and corresponding devices
CN107590065B (en) * 2016-07-08 2020-08-11 阿里巴巴集团控股有限公司 Algorithm model detection method, device, equipment and system
CN107632995B (en) * 2017-03-13 2018-09-11 平安科技(深圳)有限公司 The method and model training control system of Random Forest model training
CN107766929B (en) * 2017-05-05 2019-05-24 平安科技(深圳)有限公司 Model analysis method and device
CN109754105B (en) 2017-11-07 2024-01-05 华为技术有限公司 Prediction method, terminal and server
CN108108821B (en) * 2017-12-29 2022-04-22 Oppo广东移动通信有限公司 Model training method and device
WO2019223582A1 (en) * 2018-05-24 2019-11-28 Beijing Didi Infinity Technology And Development Co., Ltd. Target detection method and system
CN108804670B (en) * 2018-06-11 2023-03-31 腾讯科技(深圳)有限公司 Data recommendation method and device, computer equipment and storage medium
CN109034188B (en) * 2018-06-15 2021-11-05 北京金山云网络技术有限公司 Method and device for acquiring machine learning model, equipment and storage medium
CN108829846B (en) * 2018-06-20 2021-09-10 中国联合网络通信集团有限公司 Service recommendation platform data clustering optimization system and method based on user characteristics
CN108984756B (en) * 2018-07-18 2022-03-08 网易传媒科技(北京)有限公司 Information processing method, medium, device and computing equipment
CN109670437B (en) * 2018-12-14 2021-05-07 腾讯科技(深圳)有限公司 Age estimation model training method, facial image recognition method and device
CN110069663B (en) * 2019-04-29 2021-06-04 厦门美图之家科技有限公司 Video recommendation method and device
CN110738264A (en) * 2019-10-18 2020-01-31 上海眼控科技股份有限公司 Abnormal sample screening, cleaning and training method, device, equipment and storage medium
CN111522533B (en) * 2020-04-24 2023-10-24 中国标准化研究院 Product modularization design method and device based on user personalized demand recommendation
CN113672798A (en) * 2020-05-15 2021-11-19 第四范式(北京)技术有限公司 Article recommendation method and system based on collaborative filtering model
CN116503695B (en) * 2023-06-29 2023-10-03 天津所托瑞安汽车科技有限公司 Training method of target detection model, target detection method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7853485B2 (en) * 2005-11-22 2010-12-14 Nec Laboratories America, Inc. Methods and systems for utilizing content, dynamic patterns, and/or relational information for data analysis
CN101364222B (en) * 2008-09-02 2010-07-28 浙江大学 Two-stage audio search method
CN102073720B (en) * 2011-01-10 2014-01-22 北京航空航天大学 FR method for optimizing personalized recommendation results

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106156107A (en) * 2015-04-03 2016-11-23 刘岩松 Method for discovering news hotspots
CN106156107B (en) * 2015-04-03 2019-12-13 刘岩松 Method for discovering news hotspots

Also Published As

Publication number Publication date
CN102508907A (en) 2012-06-20

Similar Documents

Publication Publication Date Title
CN102508907B (en) Dynamic recommendation method based on training set optimization for recommendation system
Cui et al. A new hyperparameters optimization method for convolutional neural networks
CN107239529B (en) Public opinion hotspot category classification method based on deep learning
CN109299380B (en) Exercise personalized recommendation method based on multi-dimensional features in online education platform
CN105893609A (en) Mobile APP recommendation method based on weighted mixing
CN107193876A (en) 2017-09-22 A missing data completion method based on the nearest-neighbor KNN algorithm
CN102982107A (en) Recommendation system optimization method with information of user and item and context attribute integrated
CN111414942A (en) Remote sensing image classification method based on active learning and convolutional neural network
CN108287858A (en) The semantic extracting method and device of natural language
CN106951471B (en) SVM-based label development trend prediction model construction method
CN109635010B (en) User characteristic and characteristic factor extraction and query method and system
CN110162591A (en) A kind of entity alignment schemes and system towards digital education resource
CN104794501B (en) Pattern recognition method and device
CN110880019A (en) Method for adaptively training target domain classification model through unsupervised domain
CN104298787A (en) Individual recommendation method and device based on fusion strategy
CN116541911B (en) Packaging design system based on artificial intelligence
CN111932026A (en) Urban traffic pattern mining method based on data fusion and knowledge graph embedding
CN105046323B (en) Regularization-based RBF network multi-label classification method
WO2016009419A1 (en) System and method for ranking news feeds
CN105609116A (en) Speech emotional dimensions region automatic recognition method
CN105574213A (en) Microblog recommendation method and device based on data mining technology
CN110069776A (en) Customer satisfaction appraisal procedure and device, computer readable storage medium
CN110349170A (en) A kind of full connection CRF cascade FCN and K mean value brain tumor partitioning algorithm
CN110472115B (en) Social network text emotion fine-grained classification method based on deep learning
CN103310027B (en) Rules extraction method for map template coupling

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20131120

Termination date: 20211111