CN106886872A - Logistics recommendation method based on clustering and cosine similarity - Google Patents

Logistics recommendation method based on clustering and cosine similarity

Info

Publication number
CN106886872A
Authority
CN
China
Prior art keywords
lorry
data
cargo
data set
cluster
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710041664.6A
Other languages
Chinese (zh)
Inventor
朱全银
赵阳
胡荣林
李翔
肖绍章
瞿学新
于柿民
潘舒新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology
Priority to CN201710041664.6A
Publication of CN106886872A
Current legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/08 Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
    • G06Q10/083 Shipping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Probability & Statistics with Applications (AREA)
  • Marketing (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Quality & Reliability (AREA)
  • Operations Research (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Development Economics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a logistics recommendation method based on clustering and cosine similarity. First, the AP clustering method, the SDbw clustering validity measure and the K-means clustering method are used to determine the optimal K value for K-means clustering of the cargo data set and the lorry data set. The cargo data set and the lorry data set are then clustered according to this optimal K value, and two classifiers are trained with a naive Bayes classifier from the results of the cargo data clustering and the lorry data clustering. The classifiers trained on the lorry data set and the cargo data set are used for classification; the cosine distance between the normalised lorry information and all elements of the same category in the cargo data set is then computed, and goods are finally recommended in descending order of cosine similarity. The invention effectively improves the real-time response speed of the recommendation method.

Description

Logistics recommendation method based on clustering and cosine similarity
Technical field
The invention belongs to the technical field of clustering and recommendation methods, and more particularly relates to a logistics recommendation method based on clustering and cosine similarity.
Background technology
Logistics recommendation methods play an important role in improving the transport efficiency of goods in the logistics field. Traditional logistics only provides the simple displacement of goods, whereas modern logistics provides value-added services, and goods or lorries are still selected manually to meet the demands of the logistics field. In recent years, researchers have proposed personalised recommendation schemes for different recommender systems, such as content-based recommendation, collaborative filtering, association rules, utility-based recommendation and hybrid recommendation.
The existing research foundation of Zhu Quanyin et al. includes: Zhu Quanyin, Pan Lu, Liu Wenru, et al. Web science and technology news classification and extraction algorithm [J]. Journal of Huaiyin Institute of Technology, 2015, 24(5): 18-24; Li Xiang, Zhu Quanyin. Collaborative filtering recommendation based on joint clustering and a shared rating matrix [J]. Journal of Frontiers of Computer Science and Technology, 2014, 8(6): 751-759; Quanyin Zhu, Sunqun Cao. A Novel Classifier-independent Feature Selection Algorithm for Imbalanced Datasets. 2009, p: 77-82; Quanyin Zhu, Yunyang Yan, Jin Ding, Jin Qian. The Case Study for Price Extracting of Mobile Phone Sell Online. 2011, p: 282-285; Quanyin Zhu, Suqun Cao, Pei Zhou, Yunyang Yan, Hong Zhou. Integrated Price Forecast based on Dichotomy Backfilling and Disturbance Factor Algorithm. International Review on Computers and Software, 2011, Vol. 6(6): 1089-1093. Related patents applied for, published and granted by Zhu Quanyin et al.: Zhu Quanyin, Hu Rongjing, He Suqun, et al. A commodity price forecasting method based on linear interpolation and an adaptive sliding window. Chinese patent: ZL 2011 1 0423015.5, 2015.07.01; Zhu Quanyin, Cao Suqun, Yan Yunyang, Hu Rongjing, et al. A commodity price forecasting method based on binary data repair and disturbance factors. Chinese patent: ZL 2011 1 0422274.6, 2013.01.02; Zhu Quanyin, Yin Yonghua, Yan Yunyang, Cao Suqun, et al. A data preprocessing method for multi-variety commodity price forecasting based on neural networks. Chinese patent: ZL 2012 1 0325368.6; Li Xiang, Zhu Quanyin, Hu Ronglin, et al. An intelligent cold-chain logistics stowage recommendation method based on spectral clustering. Chinese patent publication No.: CN105654267A, 2016.06.08; Cao Suqun, Zhu Quanyin, Zuo Xiaoming, et al. A feature selection method for pattern classification. Chinese patent publication No.: CN 103425994 A, 2013.12.04; Zhu Quanyin, Yan Yunyang, Li Xiang, Zhang Yongjun, et al. A scientific and technological information acquisition and push method for text classification and image deep mining. Chinese patent publication No.: CN 104035997 A, 2014.09.10; Zhu Quanyin, Xin Cheng, Li Xiang, Xu Kang, et al. A network behaviour habit clustering method based on K-means and LDA bi-directional verification. Chinese patent publication No.: CN 106202480 A, 2016.12.07.
AP clustering method:
Affinity Propagation clustering, abbreviated AP, is a clustering method published in Science in 2007.
The basic idea of the AP method is to regard all samples as nodes of a network and to compute the cluster centre of each sample through message passing along every edge of the network. During clustering, two kinds of messages are passed between nodes: responsibility and availability. The AP method iteratively updates the responsibility and availability values of every point until m high-quality exemplars are produced, and the remaining data points are assigned to the corresponding clusters.
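For illustration only, the following is a minimal sketch of AP clustering using scikit-learn's AffinityPropagation; the random feature matrix and the parameter values are assumptions and are not part of the patent.

```python
# Minimal AP clustering sketch (illustrative, not the patented implementation).
import numpy as np
from sklearn.cluster import AffinityPropagation

X = np.random.rand(200, 5)  # placeholder: 200 records with 5 features each

ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
num_clusters = len(ap.cluster_centers_indices_)  # number of exemplars found
print("AP found", num_clusters, "clusters")
```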
SDbw clustering validity measure:
SDbw is a density-based index that evaluates clustering validity by comparing intra-cluster compactness with inter-cluster density; the clustering that minimises the index is the optimal clustering, and the result is independent of the clustering method used.
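The S_Dbw index is not part of scikit-learn; the sketch below is a simplified re-implementation of the idea (intra-cluster scatter plus inter-cluster density, lower is better) under our own assumptions, and is not necessarily the exact formula used by the inventors.

```python
# Simplified S_Dbw-style validity index: Scat + Dens_bw, lower is better.
import numpy as np

def s_dbw(X, labels):
    labels = np.asarray(labels)
    clusters = np.unique(labels)
    k = len(clusters)
    if k < 2:
        return float("inf")
    centroids = np.array([X[labels == c].mean(axis=0) for c in clusters])
    variances = np.array([X[labels == c].var(axis=0) for c in clusters])
    # Scat: average intra-cluster variance relative to the variance of the whole data set.
    scat = np.mean([np.linalg.norm(v) for v in variances]) / np.linalg.norm(X.var(axis=0))
    # Neighbourhood radius used for the density terms.
    stdev = np.sqrt(np.mean([np.linalg.norm(v) for v in variances]))
    density = lambda p: np.sum(np.linalg.norm(X - p, axis=1) <= stdev)
    dens_bw = 0.0
    for i in range(k):
        for j in range(k):
            if i == j:
                continue
            mid = (centroids[i] + centroids[j]) / 2.0
            denom = max(density(centroids[i]), density(centroids[j]))
            dens_bw += density(mid) / denom if denom > 0 else 0.0
    dens_bw /= k * (k - 1)
    return scat + dens_bw
```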
K-means clustering method:
K-means is a hard clustering method and a representative prototype-based objective-function clustering method. It takes the distance from data points to prototypes as the objective function to be optimised and derives the iterative update rules by seeking the extremum of this function. Using Euclidean distance as the similarity measure, K-means seeks the optimal partition corresponding to an initial cluster-centre vector V so that the evaluation index J is minimised. The method uses the sum-of-squared-errors criterion as its clustering criterion function.
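A minimal K-means sketch with scikit-learn, assuming the records have already been centred and standardised; the cluster count used here is a placeholder.

```python
# Minimal K-means sketch (cluster count is a placeholder).
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 5)  # placeholder feature matrix
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_            # one cluster label per record
centers = km.cluster_centers_  # prototype (centroid) of each cluster
```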
Cosine similarity:
Cosine similarity evaluates the similarity of two vectors by computing the cosine of the angle between them.
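The cosine similarity of two vectors a and b is (a · b) / (|a| |b|); a value near 1 means the vectors point in almost the same direction. A small sketch:

```python
# Cosine similarity of two feature vectors.
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity([1, 2, 3], [2, 4, 6]))  # 1.0 for collinear vectors
```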
When making logistics recommendations, traditional methods need to search the whole data set for the nearest neighbours of the target user. As the scale of e-commerce systems keeps growing and the numbers of users and items increase sharply, searching the whole data set for the target user's nearest neighbours is very time-consuming and increasingly fails to meet the real-time requirements of recommender systems. Therefore, a method is needed that improves the running efficiency of the recommendation algorithm without degrading the recommendation quality.
The content of the invention
Purpose of the invention: In view of the problems in the prior art, the present invention provides a logistics recommendation method based on clustering and cosine similarity, which processes and then clusters lorry and cargo data, processes the information of cargo owners and lorry owners, recommends transport vehicles to cargo owners and goods to lorry owners, and thereby achieves the purpose of improving transport efficiency.
Technical scheme: In order to solve the above technical problems, the present invention provides a logistics recommendation method based on clustering and cosine similarity, comprising the following steps:
Step 1: Preprocess the cargo data set and the lorry data set, and determine the optimal cluster number K of the cargo data set and the lorry data set using the AP clustering method, the SDbw clustering validity measure and the K-means clustering method;
Step 2: According to the optimal cluster number determined in step 1, cluster the cargo data set and the lorry data set with K-means, and train two classifiers using the clustering results of the cargo data set and the lorry data set respectively;
Step 3: The cargo owner who needs a lorry recommendation inputs cargo information; after normalisation, the cargo information is classified with the classifier trained on the lorry data set in step 2; the lorry owner who needs a cargo recommendation inputs vehicle information; after normalisation, the vehicle information is classified with the classifier trained on the vehicle data set in step 2;
Step 4: Use the cosine similarity method to compute the similarity between the normalised data of the cargo owner or lorry owner from step 3 and all elements of the class obtained from the classifier, sort the cargo data set or lorry data set from high to low similarity, and recommend the result to the user.
Further, the steps of determining the optimal cluster number K of the cargo data set and the lorry data set in step 1 using the AP clustering method, the SDbw clustering validity measure and the K-means clustering method are as follows:
Step 1.1: Define the lorry and cargo data sets and preprocess them;
Step 1.2: Apply the AP clustering method to the lorry and cargo data sets to obtain the number of categories;
Step 1.3: Apply the K-means clustering method to the lorry and cargo data sets, letting the K value range from 2 to the number obtained in step 1.2; measure the clustering effect with the SDbw clustering validity measure and obtain the optimal clustering K value of the lorry and cargo data (see the sketch following this list).
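A sketch of steps 1.1 to 1.3 under the same assumptions as the earlier snippets: AP fixes the upper bound NUM, K-means is run for k = 2 .. NUM, and the simplified s_dbw() index defined above selects the k with the smallest value. Preprocessing of the raw records is assumed to have been done already.

```python
# Sketch of steps 1.1-1.3: choose K by minimising the S_Dbw-style index.
import numpy as np
from sklearn.cluster import AffinityPropagation, KMeans

def best_k(X):
    ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
    num = len(ap.cluster_centers_indices_)        # upper bound from AP (step 1.2)
    scores = {}
    for k in range(2, max(num, 2) + 1):           # step 1.3: try k = 2 .. NUM
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = s_dbw(X, labels)              # simplified index from the sketch above
    return min(scores, key=scores.get)            # k with the smallest index value
```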
Further, the steps of training two classifiers in step 2 using the clustering results of the cargo data set and the lorry data set respectively are as follows:
Step 2.1: Train model ModelA with a naive Bayes classifier; the training data are the result of clustering the lorry data with the K-means clustering method, where K is the optimal lorry clustering K value determined in step 1;
Step 2.2: Train model ModelB with a naive Bayes classifier; the training data are the result of clustering the cargo data with the K-means clustering method, where K is the optimal cargo clustering K value determined in step 1 (a training sketch follows this list).
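A training sketch for steps 2.1 and 2.2 with scikit-learn's GaussianNB standing in for the naive Bayes classifier; the choice of the Gaussian variant and the variable names are assumptions.

```python
# Sketch of steps 2.1-2.2: K-means labels become training targets for naive Bayes.
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB

def train_cluster_classifier(X, k):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    return GaussianNB().fit(X, labels)

# model_a = train_cluster_classifier(lorry_features, lorry_best_k)   # ModelA
# model_b = train_cluster_classifier(cargo_features, cargo_best_k)   # ModelB
```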
Further, the steps of classifying in step 3 with the classifier trained on the lorry data set and the classifier trained on the vehicle data set respectively are as follows:
Step 3.1: The cargo owner who needs a lorry recommendation inputs cargo information; after normalisation, the cargo information is classified with classifier ModelA to obtain a classification label;
Step 3.2: The lorry owner who needs a cargo recommendation inputs vehicle information; after normalisation, the vehicle information is classified with classifier ModelB to obtain a classification label.
Further, the steps in step 4 of using the cosine similarity method to compute the similarity between the normalised data of the cargo owner or lorry owner and all elements of the class obtained from the classifier, sorting the cargo data set or lorry data set from high to low similarity and recommending to the user are as follows:
Step 4.1: Use the cosine similarity method to compute the similarity between the processed cargo-owner information and the records in the vehicle data set that carry the same label as the cargo information, and recommend in descending order of similarity;
Step 4.2: Use the cosine similarity method to compute the similarity between the processed lorry-owner information and the records in the cargo data set that carry the same label as the vehicle information, and recommend in descending order of similarity (a ranking sketch follows this list).
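A ranking sketch for steps 4.1 and 4.2: the normalised query is compared with every candidate row by cosine similarity and the candidates are returned best-first. The top_n cut-off is an assumption.

```python
# Sketch of steps 4.1-4.2: rank candidates by cosine similarity to the query.
import numpy as np

def recommend(query, candidates, top_n=10):
    query = np.asarray(query, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    sims = candidates @ query / (np.linalg.norm(candidates, axis=1) * np.linalg.norm(query))
    order = np.argsort(-sims)[:top_n]   # indices of the most similar candidates
    return order, sims[order]
```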
Further, the detailed steps of determining the optimal cluster number K of the cargo data set and the lorry data set in step 1 using the AP clustering method, the SDbw clustering validity measure and the K-means clustering method are as follows:
Step 101: Let the lorry data volume be N records and the dimension of the lorry data be M; establish the lorry data set Crecords = {C1, C2, ..., CM}, where element Cm = {c1, c2, c3, c4, c5} of Crecords represents the data of lorry m and c1, c2, c3, c4, c5 are the five dimensions of Cm, m ∈ [1, M]; c1 represents the transport price expected by the lorry owner, c2 the remaining load capacity of the lorry, c3 the departure place of the lorry, c4 the destination of the lorry, and c5 the transport time of the lorry;
Step 102: Define the loop variable t and assign the initial value t = 1;
Step 103: If t <= M, perform step 104; otherwise perform step 107;
Step 104: Define the temporary variable dis, representing the distance between the departure place and the destination, and the temporary variable time, representing the time needed from the departure place to the destination;
Step 105: Call the Gaode (AMap) API to calculate the distance and travel time from departure place c3 to destination c4 in Ct, assign them to dis and time respectively, and replace c3 and c4 in the original Ct with dis and time;
Step 106: t = t + 1; continue with step 103;
Step 107: Centre and standardise the data set Crecords to obtain the data set SCrecords = {SC1, SC2, ..., SCM};
Step 108: Reduce the dimensionality of the data set SCrecords with the PCA method to obtain the reduced data set Precord = {P1, P2, ..., PM};
Step 109: Apply the AP clustering method to the reduced data to obtain the class labels Labels = {L1, L2, ..., LM}, and assign the number of categories to NUM;
Step 110: Set the loop variable n and assign the initial value n = 2;
Step 111: If n <= NUM, perform step 112; otherwise perform step 115;
Step 112: Cluster the data set Precord with the K-means clustering method (K = n) to obtain the lorry class labels Labels = {L1, L2, ..., LM};
Step 113: Measure this clustering result with the SDbw clustering validity measure and assign the obtained value to SDn;
Step 114: n = n + 1; continue with step 111;
Step 115: Let SDmin be the minimum value among SD2, SD3, ..., SDNUM;
Step 116: The value of n corresponding to SDmin is the optimal K value for K-means clustering of the lorry data set;
Step 117: Let the cargo data volume be N records and the dimension of the cargo data be M; establish the cargo data set Trecords = {T1, T2, ..., TM}, where element Tm = {t1, t2, t3, t4, t5} of Trecords represents the data of cargo m and t1, t2, t3, t4, t5 are the five dimensions of Tm, m ∈ [1, M]; t1 represents the transport price of the goods, t2 the weight of the goods, t3 the departure place of the goods, t4 the destination of the goods, and t5 the loading time of the goods;
Step 118: Define the loop variable t and assign the initial value t = 1;
Step 119: If t <= M, perform step 120; otherwise perform step 123;
Step 120: Define the temporary variable dis, representing the distance between the departure place and the destination, and the temporary variable time, representing the time needed from the departure place to the destination;
Step 121: Call the Gaode (AMap) API to calculate the distance and travel time from departure place t3 to destination t4 in Tt, assign them to dis and time respectively, and replace t3 and t4 in the original Tt with dis and time;
Step 122: t = t + 1; continue with step 119;
Step 123: Centre and standardise the data set Trecords to obtain the data set STrecords = {ST1, ST2, ..., STM};
Step 124: Reduce the dimensionality of the data set STrecords with the PCA method to obtain the reduced data set Precord = {P1, P2, ..., PM};
Step 125: Apply the AP clustering method to the reduced data to obtain the class labels Labels = {L1, L2, ..., LM}, and assign the number of categories to NUM;
Step 126: Set the loop variable n and assign the initial value n = 2;
Step 127: If n <= NUM, perform step 128; otherwise perform step 131;
Step 128: Cluster the data set Precord with the K-means clustering method (K = n) to obtain the cargo class labels Labels = {L1, L2, ..., LM};
Step 129: Measure this clustering result with the SDbw clustering validity measure and assign the obtained value to SDn;
Step 130: n = n + 1; continue with step 127;
Step 131: Let SDmin be the minimum value among SD2, SD3, ..., SDNUM;
Step 132: The value of n corresponding to SDmin is the optimal K value for K-means clustering of the cargo data set (a preprocessing sketch for steps 107-108 and 123-124 follows this list).
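A preprocessing sketch for steps 107-108 and 123-124: centring and standardisation with StandardScaler followed by PCA. Keeping 95% of the explained variance is an assumption, and the map-API distance/time substitution of steps 104-105 is not reproduced here.

```python
# Sketch of steps 107-108 / 123-124: standardise, then reduce dimensionality with PCA.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

records = np.random.rand(200, 5)                         # placeholder lorry or cargo matrix
scaler = StandardScaler().fit(records)                   # centring and standardisation
scaled = scaler.transform(records)                       # SCrecords / STrecords
reduced = PCA(n_components=0.95).fit_transform(scaled)   # Precord
```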
Further, the detailed steps of training two classifiers in step 2 using the clustering results of the cargo data set and the lorry data set respectively are as follows:
Step 201: Centre and standardise the lorry data set Crecords to obtain the data set SCrecords = {SC1, SC2, ..., SCM};
Step 202: Cluster the data set SCrecords = {SC1, SC2, ..., SCM} with the K-means clustering method, where K is the optimal K value obtained in step 116, to obtain the lorry class labels Labels = {L1, L2, ..., LM};
Step 203: Using a naive Bayes classifier, with training data set SCrecords = {SC1, SC2, ..., SCM} and class labels Labels = {L1, L2, ..., LM}, obtain the classifier ModelA;
Step 204: Centre and standardise the cargo data set Trecords to obtain the data set STrecords = {ST1, ST2, ..., STM};
Step 205: Cluster the data set STrecords = {ST1, ST2, ..., STM} with the K-means clustering method, where K is the optimal K value obtained in step 132, to obtain the cargo class labels Labels = {L1, L2, ..., LM};
Step 206: Using a naive Bayes classifier, with training data set STrecords = {ST1, ST2, ..., STM} and class labels Labels = {L1, L2, ..., LM}, obtain the classifier ModelB.
Further, the detailed steps of classifying in step 3 with the classifier trained on the lorry data set and the classifier trained on the vehicle data set respectively are as follows:
Step 301: Select the recommendation content;
Step 302: If goods recommendation is selected, perform step 307; otherwise perform step 303;
Step 303: Input the cargo information Trecord = {t1, t2, t3, t4, t5}, where t1 represents the cargo transport price, t2 the cargo weight, t3 the cargo departure place, t4 the cargo destination, and t5 the cargo departure time;
Step 304: Define the temporary variable dis, representing the distance from the departure place to the destination, and the temporary variable time, representing the time needed from the departure place to the destination; call the Gaode (AMap) API to calculate the distance and travel time from departure place t3 to destination t4, assign them to dis and time respectively, and then replace t3 and t4 in Trecord with dis and time;
Step 305: Centre and standardise the cargo information Trecord to obtain the data STrecord = {ST1, ST2, ST3, ST4, ST5};
Step 306: Classify the data STrecord with the classifier ModelA obtained in step 203 to obtain the class label Tlabel;
Step 307: Input the lorry information Crecord = {c1, c2, c3, c4, c5}, where c1 represents the transport price expected by the lorry owner, c2 the remaining load capacity of the lorry, c3 the lorry departure place, c4 the lorry destination, and c5 the lorry departure time;
Step 308: Define the temporary variable dis, representing the distance from the departure place to the destination, and the temporary variable time, representing the time needed from the departure place to the destination; call the Gaode (AMap) API to calculate the distance and travel time from departure place c3 to destination c4, assign them to dis and time respectively, and then replace c3 and c4 in Crecord with dis and time;
Step 309: Centre and standardise the lorry information Crecord to obtain the data SCrecord = {SC1, SC2, SC3, SC4, SC5};
Step 310: Classify the data SCrecord with the classifier ModelB obtained in step 206 to obtain the class label Clabel.
Further, the detailed steps in step 4 of using the cosine similarity method to compute the similarity between the normalised data of the cargo owner or lorry owner and all elements of the class obtained from the classifier, sorting the cargo data set or lorry data set from high to low similarity and recommending to the user are as follows:
Step 401: According to the label Tlabel obtained in step 306, extract from the data set STrecords = {ST1, ST2, ..., STM} the records whose class label is Tlabel to form the new data set TTrecord = {TT1, TT2, ..., TTN};
Step 402: Set the loop variable n and assign the initial value n = 1;
Step 403: If n <= N, perform step 404; otherwise perform step 406;
Step 404: Using the cosine similarity method, compute the similarity between the lorry information SCrecord = {SC1, SC2, SC3, SC4, SC5} and TTn in the cargo data set TTrecord = {TT1, TT2, ..., TTN}, and assign the value to SIMn;
Step 405: n = n + 1; continue with step 403;
Step 406: Sort the data set TTrecord = {TT1, TT2, ..., TTN} in descending order of the similarity values SIM;
Step 407: Recommend the sorted TTrecord to the lorry owner from front to back;
Step 408: According to the label Clabel obtained in step 310, extract from the data set SCrecords = {SC1, SC2, ..., SCM} the records whose class label is Clabel to form the new data set SSrecords = {SS1, SS2, ..., SSN};
Step 409: Set the loop variable n and assign the initial value n = 1;
Step 410: If n <= N, perform step 411; otherwise perform step 413;
Step 411: Using the cosine similarity method, compute the similarity between the cargo information STrecord = {ST1, ST2, ST3, ST4, ST5} and SSn in the lorry data set SSrecords = {SS1, SS2, ..., SSN}, and assign the value to SIMn;
Step 412: n = n + 1; continue with step 410;
Step 413: Sort the data set SSrecords = {SS1, SS2, ..., SSN} in descending order of the similarity values SIM;
Step 414: Recommend the sorted SSrecords to the cargo owner from front to back.
Compared with the prior art, the advantages of the invention are as follows:
Compared with existing recommendation methods, the invention combines the AP clustering method, the SDbw clustering validity measure and the K-means clustering method to cluster lorry data and cargo data, obtains the optimal clustering K value of the data sets, clusters the data according to this K value, trains classifiers, classifies the information input by users and finally makes recommendations. This method overcomes the limitation of existing recommendation methods, in which searching the whole data set for the target user's nearest neighbours is very time-consuming, and effectively improves the real-time response speed of the recommendation method.
Brief description of the drawings
Fig. 1 is the overall flow chart of the invention;
Fig. 2 is the flow chart of determining the optimal K value of the lorry data set in Fig. 1;
Fig. 3 is the flow chart of determining the optimal K value of the cargo data set in Fig. 1;
Fig. 4 is the flow chart of training the lorry classification model in Fig. 1;
Fig. 5 is the flow chart of training the cargo classification model in Fig. 1;
Fig. 6 is the flow chart of classifying user-input information in Fig. 1;
Fig. 7 is the flow chart of the goods recommendation method in Fig. 1;
Fig. 8 is the flow chart of the lorry recommendation method in Fig. 1.
Specific embodiment
The present invention is further elucidated below with reference to the accompanying drawings and specific embodiments.
As shown in Fig. 1, the present invention comprises the following steps:
Step 101: Preprocess the cargo data set and the lorry data set, and determine the optimal K value of the cargo data set and the lorry data set using the AP clustering method, the SDbw clustering validity measure and the K-means clustering method;
Step 102: Cluster the cargo data set and the lorry data set according to the calculated optimal K value, and train two classifiers with a naive Bayes classifier from the results of the cargo data clustering and the lorry data clustering;
Step 103: The cargo owner who needs a lorry recommendation inputs cargo information; after normalisation, the cargo information is classified with the classifier trained on the lorry data set; the lorry owner who needs a cargo recommendation inputs vehicle information; after normalisation, the vehicle information is classified with the classifier trained on the vehicle data set;
Step 104: Use the cosine similarity method to compute the similarity between the normalised data of the cargo owner or lorry owner and all elements of the class obtained from the classifier, sort the cargo data set or lorry data set from high to low similarity, and recommend the result to the user.
As shown in Fig. 2 and Fig. 3, the optimal K value calculation of step 101 proceeds from step 201 to step 232:
Step 201: Let the lorry data volume be N records and the dimension of the lorry data be M; establish the lorry data set Crecords = {C1, C2, ..., CM}, where element Cm = {c1, c2, c3, c4, c5} of Crecords represents the data of lorry m and c1, c2, c3, c4, c5 are the five dimensions of Cm, m ∈ [1, M]; c1 represents the transport price expected by the lorry owner, c2 the remaining load capacity of the lorry, c3 the departure place of the lorry, c4 the destination of the lorry, and c5 the transport time of the lorry;
Step 202: Define the loop variable t and assign the initial value t = 1;
Step 203: If t <= M, perform step 204; otherwise perform step 207;
Step 204: Define the temporary variable dis, representing the distance between the departure place and the destination, and the temporary variable time, representing the time needed from the departure place to the destination;
Step 205: Call the Gaode (AMap) API to calculate the distance and travel time from departure place c3 to destination c4 in Ct, assign them to dis and time respectively, and replace c3 and c4 in the original Ct with dis and time;
Step 206: t = t + 1; continue with step 203;
Step 207: Centre and standardise the data set Crecords to obtain the data set SCrecords = {SC1, SC2, ..., SCM};
Step 208: Reduce the dimensionality of the data set SCrecords with the PCA method to obtain the reduced data set Precord = {P1, P2, ..., PM};
Step 209: Apply the AP clustering method to the reduced data to obtain the class labels Labels = {L1, L2, ..., LM}, and assign the number of categories to NUM;
Step 210: Set the loop variable n and assign the initial value n = 2;
Step 211: If n <= NUM, perform step 212; otherwise perform step 215;
Step 212: Cluster the data set Precord with the K-means clustering method (K = n) to obtain the lorry class labels Labels = {L1, L2, ..., LM};
Step 213: Measure this clustering result with the SDbw clustering validity measure and assign the obtained value to SDn;
Step 214: n = n + 1; continue with step 211;
Step 215: Let SDmin be the minimum value among SD2, SD3, ..., SDNUM;
Step 216: The value of n corresponding to SDmin is the optimal K value for K-means clustering of the lorry data set;
Step 217: Let the cargo data volume be N records and the dimension of the cargo data be M; establish the cargo data set Trecords = {T1, T2, ..., TM}, where element Tm = {t1, t2, t3, t4, t5} of Trecords represents the data of cargo m and t1, t2, t3, t4, t5 are the five dimensions of Tm, m ∈ [1, M]; t1 represents the transport price of the goods, t2 the weight of the goods, t3 the departure place of the goods, t4 the destination of the goods, and t5 the loading time of the goods;
Step 218: Define the loop variable t and assign the initial value t = 1;
Step 219: If t <= M, perform step 220; otherwise perform step 223;
Step 220: Define the temporary variable dis, representing the distance between the departure place and the destination, and the temporary variable time, representing the time needed from the departure place to the destination;
Step 221: Call the Gaode (AMap) API to calculate the distance and travel time from departure place t3 to destination t4 in Tt, assign them to dis and time respectively, and replace t3 and t4 in the original Tt with dis and time;
Step 222: t = t + 1; continue with step 219;
Step 223: Centre and standardise the data set Trecords to obtain the data set STrecords = {ST1, ST2, ..., STM};
Step 224: Reduce the dimensionality of the data set STrecords with the PCA method to obtain the reduced data set Precord = {P1, P2, ..., PM};
Step 225: Apply the AP clustering method to the reduced data to obtain the class labels Labels = {L1, L2, ..., LM}, and assign the number of categories to NUM;
Step 226: Set the loop variable n and assign the initial value n = 2;
Step 227: If n <= NUM, perform step 228; otherwise perform step 231;
Step 228: Cluster the data set Precord with the K-means clustering method (K = n) to obtain the cargo class labels Labels = {L1, L2, ..., LM};
Step 229: Measure this clustering result with the SDbw clustering validity measure and assign the obtained value to SDn;
Step 230: n = n + 1; continue with step 227;
Step 231: Let SDmin be the minimum value among SD2, SD3, ..., SDNUM;
Step 232: The value of n corresponding to SDmin is the optimal K value for K-means clustering of the cargo data set;
As shown in Fig. 4 and Fig. 5, clustering according to the optimal K value and training the classifiers with the clustering results (step 102) proceeds from step 301 to step 306:
Step 301: Centre and standardise the lorry data set Crecords to obtain the data set SCrecords = {SC1, SC2, ..., SCM};
Step 302: Cluster the data set SCrecords = {SC1, SC2, ..., SCM} with the K-means clustering method, where K is the optimal K value obtained in step 216, to obtain the lorry class labels Labels = {L1, L2, ..., LM};
Step 303: Using a naive Bayes classifier, with training data set SCrecords = {SC1, SC2, ..., SCM} and class labels Labels = {L1, L2, ..., LM}, obtain the classifier ModelA;
Step 304: Centre and standardise the cargo data set Trecords to obtain the data set STrecords = {ST1, ST2, ..., STM};
Step 305: Cluster the data set STrecords = {ST1, ST2, ..., STM} with the K-means clustering method, where K is the optimal K value obtained in step 232, to obtain the cargo class labels Labels = {L1, L2, ..., LM};
Step 306: Using a naive Bayes classifier, with training data set STrecords = {ST1, ST2, ..., STM} and class labels Labels = {L1, L2, ..., LM}, obtain the classifier ModelB;
As shown in Fig. 6, the classification of the input goods or lorry data (step 103) proceeds from step 401 to step 410:
Step 401: Select the recommendation content;
Step 402: If goods recommendation is selected, perform step 407; otherwise perform step 403;
Step 403: Input the cargo information Trecord = {t1, t2, t3, t4, t5}, where t1 represents the cargo transport price, t2 the cargo weight, t3 the cargo departure place, t4 the cargo destination, and t5 the cargo departure time;
Step 404: Define the temporary variable dis, representing the distance from the departure place to the destination, and the temporary variable time, representing the time needed from the departure place to the destination; call the Gaode (AMap) API to calculate the distance and travel time from departure place t3 to destination t4, assign them to dis and time respectively, and then replace t3 and t4 in Trecord with dis and time;
Step 405: Centre and standardise the cargo information Trecord to obtain the data STrecord = {ST1, ST2, ST3, ST4, ST5};
Step 406: Classify the data STrecord with the classifier ModelA obtained in step 303 to obtain the class label Tlabel;
Step 407: Input the lorry information Crecord = {c1, c2, c3, c4, c5}, where c1 represents the transport price expected by the lorry owner, c2 the remaining load capacity of the lorry, c3 the lorry departure place, c4 the lorry destination, and c5 the lorry departure time;
Step 408: Define the temporary variable dis, representing the distance from the departure place to the destination, and the temporary variable time, representing the time needed from the departure place to the destination; call the Gaode (AMap) API to calculate the distance and travel time from departure place c3 to destination c4, assign them to dis and time respectively, and then replace c3 and c4 in Crecord with dis and time;
Step 409: Centre and standardise the lorry information Crecord to obtain the data SCrecord = {SC1, SC2, SC3, SC4, SC5};
Step 410: Classify the data SCrecord with the classifier ModelB obtained in step 306 to obtain the class label Clabel (a classification sketch follows this list);
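A sketch of steps 401 to 410 under the assumptions of the earlier snippets: the new record (with distance and time already substituted) is scaled with the scaler fitted on the training data and classified with ModelA or ModelB. All names are illustrative.

```python
# Sketch of steps 401-410: normalise a query record and predict its class label.
import numpy as np

def classify_query(record, scaler, model):
    vec = scaler.transform(np.asarray(record, dtype=float).reshape(1, -1))
    return int(model.predict(vec)[0]), vec.ravel()

# tlabel, st_record = classify_query(cargo_record, cargo_scaler, model_a)   # steps 403-406
# clabel, sc_record = classify_query(lorry_record, lorry_scaler, model_b)   # steps 407-410
```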
Such as the accompanying drawing 8 of accompanying drawing 7, result calculating cosine cluster is obtained according to classification and recommendation step 104 is from step 501 to step 514:
Step 501: According to the label Tlabel obtained in step 406, extract from the data set STrecords = {ST1, ST2, ..., STM} the records whose class label is Tlabel to form the new data set TTrecord = {TT1, TT2, ..., TTN};
Step 502: Set the loop variable n and assign the initial value n = 1;
Step 503: If n <= N, perform step 504; otherwise perform step 506;
Step 504: Using the cosine similarity method, compute the similarity between the lorry information SCrecord = {SC1, SC2, SC3, SC4, SC5} and TTn in the cargo data set TTrecord = {TT1, TT2, ..., TTN}, and assign the value to SIMn;
Step 505: n = n + 1; continue with step 503;
Step 506: Sort the data set TTrecord = {TT1, TT2, ..., TTN} in descending order of the similarity values SIM;
Step 507: Recommend the sorted TTrecord to the lorry owner from front to back;
Step 508: According to the label Clabel obtained in step 410, extract from the data set SCrecords = {SC1, SC2, ..., SCM} the records whose class label is Clabel to form the new data set SSrecords = {SS1, SS2, ..., SSN};
Step 509: Set the loop variable n and assign the initial value n = 1;
Step 510: If n <= N, perform step 511; otherwise perform step 513;
Step 511: Using the cosine similarity method, compute the similarity between the cargo information STrecord = {ST1, ST2, ST3, ST4, ST5} and SSn in the lorry data set SSrecords = {SS1, SS2, ..., SSN}, and assign the value to SIMn;
Step 512: n = n + 1; continue with step 510;
Step 513: Sort the data set SSrecords = {SS1, SS2, ..., SSN} in descending order of the similarity values SIM;
Step 514: Recommend the sorted SSrecords to the cargo owner from front to back (a ranking sketch follows this list).
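A sketch of steps 501 to 514: keep only the candidates whose cluster label equals the query's predicted label, rank them by cosine similarity and return them best-first. The variable names and the top-100 cut-off are illustrative assumptions.

```python
# Sketch of steps 501-514: filter by class label, then rank by cosine similarity.
import numpy as np

def recommend_within_class(query_vec, candidates, candidate_labels, query_label, top_n=100):
    query_vec = np.asarray(query_vec, dtype=float)
    candidates = np.asarray(candidates, dtype=float)
    candidate_labels = np.asarray(candidate_labels)
    mask = candidate_labels == query_label                  # steps 501 / 508
    subset = candidates[mask]
    sims = subset @ query_vec / (np.linalg.norm(subset, axis=1) * np.linalg.norm(query_vec))
    order = np.argsort(-sims)[:top_n]                       # steps 506 / 513
    return np.flatnonzero(mask)[order], sims[order]         # best-first (steps 507 / 514)
```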
In order to better illustrate the validity of this method, 14998 logistics data records were clustered and the optimal K value of the data set was found to be 3. Setting K to 3, the K-means clustering method was used again to cluster the data and obtain the data set to be recommended; cosine similarity was used to compute the top one hundred optimal recommendations, and compared with the top one hundred recommendations obtained with traditional cosine similarity the agreement reached 95%. In ten experiments, the highest efficiency improvement of this method was 84.9%, the lowest was 26.5%, and the average improvement was 59.5%.
The present invention can be combined with a computer system to automatically complete the two-way recommendation of goods and lorries in the logistics field.
The invention proposes a method that combines the AP clustering method, the SDbw clustering validity measure and the K-means clustering method to cluster lorry data and cargo data, obtains the optimal clustering K value of the data sets, clusters the data according to this K value, trains classifiers, classifies the information input by users and finally makes recommendations.
The logistics recommendation method based on clustering and cosine similarity proposed by the present invention can be used not only for goods-lorry recommendation in the logistics field, but also for the recommendation of other consumer goods.
The foregoing is only an embodiment of the present invention and is not intended to limit the invention. All equivalent changes made within the principles of the present invention shall be included within the protection scope of the present invention. Content not elaborated in the present invention belongs to the prior art known to those skilled in this technical field.

Claims (9)

1. A logistics recommendation method based on clustering and cosine similarity, characterised in that it comprises the following steps:
Step 1: Preprocess the cargo data set and the lorry data set, and determine the optimal cluster number K of the cargo data set and the lorry data set using the AP clustering method, the SDbw clustering validity measure and the K-means clustering method;
Step 2: According to the optimal cluster number determined in step 1, cluster the cargo data set and the lorry data set with K-means, and train two classifiers using the clustering results of the cargo data set and the lorry data set respectively;
Step 3: The cargo owner who needs a lorry recommendation inputs cargo information; after normalisation, the cargo information is classified with the classifier trained on the lorry data set in step 2; the lorry owner who needs a cargo recommendation inputs vehicle information; after normalisation, the vehicle information is classified with the classifier trained on the vehicle data set in step 2;
Step 4: Use the cosine similarity method to compute the similarity between the normalised data of the cargo owner or lorry owner from step 3 and all elements of the class obtained from the classifier, sort the cargo data set or lorry data set from high to low similarity, and recommend the result to the user.
2. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the steps of determining the optimal cluster number K of the cargo data set and the lorry data set in step 1 using the AP clustering method, the SDbw clustering validity measure and the K-means clustering method are as follows:
Step 1.1: Define the lorry and cargo data sets and preprocess them;
Step 1.2: Apply the AP clustering method to the lorry and cargo data sets to obtain the number of categories;
Step 1.3: Apply the K-means clustering method to the lorry and cargo data sets, letting the K value range from 2 to the number obtained in step 1.2; measure the clustering effect with the SDbw clustering validity measure and obtain the optimal clustering K value of the lorry and cargo data.
3. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the steps of training two classifiers in step 2 using the clustering results of the cargo data set and the lorry data set respectively are as follows:
Step 2.1: Train model ModelA with a naive Bayes classifier; the training data are the result of clustering the lorry data with the K-means clustering method, where K is the optimal lorry clustering K value determined in step 1;
Step 2.2: Train model ModelB with a naive Bayes classifier; the training data are the result of clustering the cargo data with the K-means clustering method, where K is the optimal cargo clustering K value determined in step 1.
4. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the steps of classifying in step 3 with the classifier trained on the lorry data set and the classifier trained on the vehicle data set respectively are as follows:
Step 3.1: The cargo owner who needs a lorry recommendation inputs cargo information; after normalisation, the cargo information is classified with classifier ModelA to obtain a classification label;
Step 3.2: The lorry owner who needs a cargo recommendation inputs vehicle information; after normalisation, the vehicle information is classified with classifier ModelB to obtain a classification label.
5. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the steps in step 4 of using the cosine similarity method to compute the similarity between the normalised data of the cargo owner or lorry owner and all elements of the class obtained from the classifier, sorting the cargo data set or lorry data set from high to low similarity and recommending to the user are as follows:
Step 4.1: Use the cosine similarity method to compute the similarity between the processed cargo-owner information and the records in the vehicle data set that carry the same label as the cargo information, and recommend in descending order of similarity;
Step 4.2: Use the cosine similarity method to compute the similarity between the processed lorry-owner information and the records in the cargo data set that carry the same label as the vehicle information, and recommend in descending order of similarity.
6. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the detailed steps of determining the optimal cluster number K of the cargo data set and the lorry data set in step 1 using the AP clustering method, the SDbw clustering validity measure and the K-means clustering method are as follows:
Step 101: Let the lorry data volume be N records and the dimension of the lorry data be M; establish the lorry data set Crecords = {C1, C2, ..., CM}, where element Cm = {c1, c2, c3, c4, c5} of Crecords represents the data of lorry m and c1, c2, c3, c4, c5 are the five dimensions of Cm, m ∈ [1, M]; c1 represents the transport price expected by the lorry owner, c2 the remaining load capacity of the lorry, c3 the departure place of the lorry, c4 the destination of the lorry, and c5 the transport time of the lorry;
Step 102: Define the loop variable t and assign the initial value t = 1;
Step 103: If t <= M, perform step 104; otherwise perform step 107;
Step 104: Define the temporary variable dis, representing the distance between the departure place and the destination, and the temporary variable time, representing the time needed from the departure place to the destination;
Step 105: Call the Gaode (AMap) API to calculate the distance and travel time from departure place c3 to destination c4 in Ct, assign them to dis and time respectively, and replace c3 and c4 in the original Ct with dis and time;
Step 106: t = t + 1; continue with step 103;
Step 107: Centre and standardise the data set Crecords to obtain the data set SCrecords = {SC1, SC2, ..., SCM};
Step 108: Reduce the dimensionality of the data set SCrecords with the PCA method to obtain the reduced data set Precord = {P1, P2, ..., PM};
Step 109: Apply the AP clustering method to the reduced data to obtain the class labels Labels = {L1, L2, ..., LM}, and assign the number of categories to NUM;
Step 110: Set the loop variable n and assign the initial value n = 2;
Step 111: If n <= NUM, perform step 112; otherwise perform step 115;
Step 112: Cluster the data set Precord with the K-means clustering method (K = n) to obtain the lorry class labels Labels = {L1, L2, ..., LM};
Step 113: Measure this clustering result with the SDbw clustering validity measure and assign the obtained value to SDn;
Step 114: n = n + 1; continue with step 111;
Step 115: Let SDmin be the minimum value among SD2, SD3, ..., SDNUM;
Step 116: The value of n corresponding to SDmin is the optimal K value for K-means clustering of the lorry data set;
Step 117: Let the cargo data volume be N records and the dimension of the cargo data be M; establish the cargo data set Trecords = {T1, T2, ..., TM}, where element Tm = {t1, t2, t3, t4, t5} of Trecords represents the data of cargo m and t1, t2, t3, t4, t5 are the five dimensions of Tm, m ∈ [1, M]; t1 represents the transport price of the goods, t2 the weight of the goods, t3 the departure place of the goods, t4 the destination of the goods, and t5 the loading time of the goods;
Step 118: Define the loop variable t and assign the initial value t = 1;
Step 119: If t <= M, perform step 120; otherwise perform step 123;
Step 120: Define the temporary variable dis, representing the distance between the departure place and the destination, and the temporary variable time, representing the time needed from the departure place to the destination;
Step 121: Call the Gaode (AMap) API to calculate the distance and travel time from departure place t3 to destination t4 in Tt, assign them to dis and time respectively, and replace t3 and t4 in the original Tt with dis and time;
Step 122: t = t + 1; continue with step 119;
Step 123: Centre and standardise the data set Trecords to obtain the data set STrecords = {ST1, ST2, ..., STM};
Step 124: Reduce the dimensionality of the data set STrecords with the PCA method to obtain the reduced data set Precord = {P1, P2, ..., PM};
Step 125: Apply the AP clustering method to the reduced data to obtain the class labels Labels = {L1, L2, ..., LM}, and assign the number of categories to NUM;
Step 126: Set the loop variable n and assign the initial value n = 2;
Step 127: If n <= NUM, perform step 128; otherwise perform step 131;
Step 128: Cluster the data set Precord with the K-means clustering method (K = n) to obtain the cargo class labels Labels = {L1, L2, ..., LM};
Step 129: Measure this clustering result with the SDbw clustering validity measure and assign the obtained value to SDn;
Step 130: n = n + 1; continue with step 127;
Step 131: Let SDmin be the minimum value among SD2, SD3, ..., SDNUM;
Step 132: The value of n corresponding to SDmin is the optimal K value for K-means clustering of the cargo data set.
7. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the detailed steps of training two classifiers in step 2 using the clustering results of the cargo data set and the lorry data set respectively are as follows:
Step 201: Centre and standardise the lorry data set Crecords to obtain the data set SCrecords = {SC1, SC2, ..., SCM};
Step 202: Cluster the data set SCrecords = {SC1, SC2, ..., SCM} with the K-means clustering method, where K is the optimal K value obtained in step 116, to obtain the lorry class labels Labels = {L1, L2, ..., LM};
Step 203: Using a naive Bayes classifier, with training data set SCrecords = {SC1, SC2, ..., SCM} and class labels Labels = {L1, L2, ..., LM}, obtain the classifier ModelA;
Step 204: Centre and standardise the cargo data set Trecords to obtain the data set STrecords = {ST1, ST2, ..., STM};
Step 205: Cluster the data set STrecords = {ST1, ST2, ..., STM} with the K-means clustering method, where K is the optimal K value obtained in step 132, to obtain the cargo class labels Labels = {L1, L2, ..., LM};
Step 206: Using a naive Bayes classifier, with training data set STrecords = {ST1, ST2, ..., STM} and class labels Labels = {L1, L2, ..., LM}, obtain the classifier ModelB.
8. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the detailed steps of classifying in step 3 with the classifier trained on the lorry data set and the classifier trained on the vehicle data set respectively are as follows:
Step 301: Select the recommendation content;
Step 302: If goods recommendation is selected, perform step 307; otherwise perform step 303;
Step 303: Input the cargo information Trecord = {t1, t2, t3, t4, t5}, where t1 represents the cargo transport price, t2 the cargo weight, t3 the cargo departure place, t4 the cargo destination, and t5 the cargo departure time;
Step 304: Define the temporary variable dis, representing the distance from the departure place to the destination, and the temporary variable time, representing the time needed from the departure place to the destination; call the Gaode (AMap) API to calculate the distance and travel time from departure place t3 to destination t4, assign them to dis and time respectively, and then replace t3 and t4 in Trecord with dis and time;
Step 305: Centre and standardise the cargo information Trecord to obtain the data STrecord = {ST1, ST2, ST3, ST4, ST5};
Step 306: Classify the data STrecord with the classifier ModelA obtained in step 203 to obtain the class label Tlabel;
Step 307: Input the lorry information Crecord = {c1, c2, c3, c4, c5}, where c1 represents the transport price expected by the lorry owner, c2 the remaining load capacity of the lorry, c3 the lorry departure place, c4 the lorry destination, and c5 the lorry departure time;
Step 308: Define the temporary variable dis, representing the distance from the departure place to the destination, and the temporary variable time, representing the time needed from the departure place to the destination; call the Gaode (AMap) API to calculate the distance and travel time from departure place c3 to destination c4, assign them to dis and time respectively, and then replace c3 and c4 in Crecord with dis and time;
Step 309: Centre and standardise the lorry information Crecord to obtain the data SCrecord = {SC1, SC2, SC3, SC4, SC5};
Step 310: Classify the data SCrecord with the classifier ModelB obtained in step 206 to obtain the class label Clabel.
9. The logistics recommendation method based on clustering and cosine similarity according to claim 1, characterised in that the detailed steps in step 4 of using the cosine similarity method to compute the similarity between the normalised data of the cargo owner or lorry owner and all elements of the class obtained from the classifier, sorting the cargo data set or lorry data set from high to low similarity and recommending to the user are as follows:
Step 401: According to the label Tlabel obtained in step 306, extract from the data set STrecords = {ST1, ST2, ..., STM} the records whose class label is Tlabel to form the new data set TTrecord = {TT1, TT2, ..., TTN};
Step 402: Set the loop variable n and assign the initial value n = 1;
Step 403: If n <= N, perform step 404; otherwise perform step 406;
Step 404: Using the cosine similarity method, compute the similarity between the lorry information SCrecord = {SC1, SC2, SC3, SC4, SC5} and TTn in the cargo data set TTrecord = {TT1, TT2, ..., TTN}, and assign the value to SIMn;
Step 405: n = n + 1; continue with step 403;
Step 406: Sort the data set TTrecord = {TT1, TT2, ..., TTN} in descending order of the similarity values SIM;
Step 407: Recommend the sorted TTrecord to the lorry owner from front to back;
Step 408: According to the label Clabel obtained in step 310, extract from the data set SCrecords = {SC1, SC2, ..., SCM} the records whose class label is Clabel to form the new data set SSrecords = {SS1, SS2, ..., SSN};
Step 409: Set the loop variable n and assign the initial value n = 1;
Step 410: If n <= N, perform step 411; otherwise perform step 413;
Step 411: Using the cosine similarity method, compute the similarity between the cargo information STrecord = {ST1, ST2, ST3, ST4, ST5} and SSn in the lorry data set SSrecords = {SS1, SS2, ..., SSN}, and assign the value to SIMn;
Step 412: n = n + 1; continue with step 410;
Step 413: Sort the data set SSrecords = {SS1, SS2, ..., SSN} in descending order of the similarity values SIM;
Step 414: Recommend the sorted SSrecords to the cargo owner from front to back.
CN201710041664.6A 2017-01-20 2017-01-20 Logistics recommendation method based on clustering and cosine similarity Pending CN106886872A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710041664.6A CN106886872A (en) 2017-01-20 2017-01-20 Method is recommended in a kind of logistics based on cluster and cosine similarity

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710041664.6A CN106886872A (en) 2017-01-20 2017-01-20 Method is recommended in a kind of logistics based on cluster and cosine similarity

Publications (1)

Publication Number Publication Date
CN106886872A true CN106886872A (en) 2017-06-23

Family

ID=59176404

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710041664.6A Pending CN106886872A (en) 2017-01-20 2017-01-20 Method is recommended in a kind of logistics based on cluster and cosine similarity

Country Status (1)

Country Link
CN (1) CN106886872A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109117872A (en) * 2018-07-24 2019-01-01 贵州电网有限责任公司信息中心 A kind of user power utilization behavior analysis method based on automatic Optimal Clustering
CN109767264A (en) * 2018-12-20 2019-05-17 深圳壹账通智能科技有限公司 Product data method for pushing, device, computer equipment and storage medium
CN110175656A (en) * 2019-06-04 2019-08-27 北京交通大学 The city Clustering Model of raising train marshalling list efficiency based on group of cities heroin flow
CN110276503A (en) * 2018-03-14 2019-09-24 吉旗物联科技(上海)有限公司 A kind of method of automatic identification cold chain vehicle task
CN111428145A (en) * 2020-03-19 2020-07-17 重庆邮电大学 Recommendation method and system fusing tag data and naive Bayesian classification
CN112000801A (en) * 2020-07-09 2020-11-27 山东师范大学 Government affair text classification and hot spot problem mining method and system based on machine learning
CN113177103A (en) * 2021-04-13 2021-07-27 广东省农业科学院茶叶研究所 Evaluation comment-based tea sensory quality comparison method and system
CN114399251A (en) * 2021-12-30 2022-04-26 淮阴工学院 Cold-chain logistics recommendation method and device based on semantic network and cluster preference

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685458A (en) * 2008-09-27 2010-03-31 华为技术有限公司 Recommendation method and system based on collaborative filtering
CN103793800A (en) * 2013-04-11 2014-05-14 李敬泉 Intelligent paring technology for vehicle-cargo on-line loading
CN105117879A (en) * 2015-08-19 2015-12-02 广州增信信息科技有限公司 Vehicle and cargo intelligent matching method, device and system
CN105654267A (en) * 2016-03-01 2016-06-08 淮阴工学院 Cold-chain logistic stowage intelligent recommendation method based on spectral cl9ustering

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101685458A (en) * 2008-09-27 2010-03-31 华为技术有限公司 Recommendation method and system based on collaborative filtering
CN103793800A (en) * 2013-04-11 2014-05-14 李敬泉 Intelligent paring technology for vehicle-cargo on-line loading
CN105117879A (en) * 2015-08-19 2015-12-02 广州增信信息科技有限公司 Vehicle and cargo intelligent matching method, device and system
CN105654267A (en) * 2016-03-01 2016-06-08 淮阴工学院 Cold-chain logistic stowage intelligent recommendation method based on spectral cl9ustering

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276503A (en) * 2018-03-14 2019-09-24 吉旗物联科技(上海)有限公司 A kind of method of automatic identification cold chain vehicle task
CN109117872A (en) * 2018-07-24 2019-01-01 贵州电网有限责任公司信息中心 A kind of user power utilization behavior analysis method based on automatic Optimal Clustering
CN109767264A (en) * 2018-12-20 2019-05-17 深圳壹账通智能科技有限公司 Product data method for pushing, device, computer equipment and storage medium
CN110175656A (en) * 2019-06-04 2019-08-27 北京交通大学 The city Clustering Model of raising train marshalling list efficiency based on group of cities heroin flow
CN110175656B (en) * 2019-06-04 2021-08-31 北京交通大学 Urban clustering model for improving train marshalling efficiency based on urban white goods flow
CN111428145A (en) * 2020-03-19 2020-07-17 重庆邮电大学 Recommendation method and system fusing tag data and naive Bayesian classification
CN111428145B (en) * 2020-03-19 2022-12-27 重庆邮电大学 Recommendation method and system fusing tag data and naive Bayesian classification
CN112000801A (en) * 2020-07-09 2020-11-27 山东师范大学 Government affair text classification and hot spot problem mining method and system based on machine learning
CN113177103A (en) * 2021-04-13 2021-07-27 广东省农业科学院茶叶研究所 Evaluation comment-based tea sensory quality comparison method and system
CN114399251A (en) * 2021-12-30 2022-04-26 淮阴工学院 Cold-chain logistics recommendation method and device based on semantic network and cluster preference

Similar Documents

Publication Publication Date Title
CN106886872A (en) Method is recommended in a kind of logistics based on cluster and cosine similarity
CN110968701A (en) Relationship map establishing method, device and equipment for graph neural network
CN109783639A (en) A kind of conciliation case intelligence allocating method and system based on feature extraction
CN107705066A (en) Information input method and electronic equipment during a kind of commodity storage
CN111222681A (en) Data processing method, device, equipment and storage medium for enterprise bankruptcy risk prediction
CN103294817A (en) Text feature extraction method based on categorical distribution probability
CN109739844A (en) Data classification method based on decaying weight
CN104199822A (en) Method and system for identifying demand classification corresponding to searching
CN111680225B (en) WeChat financial message analysis method and system based on machine learning
Chen et al. Research on location fusion of spatial geological disaster based on fuzzy SVM
CN114880486A (en) Industry chain identification method and system based on NLP and knowledge graph
CN108897805A (en) A kind of patent text automatic classification method
CN109740642A (en) Invoice category recognition methods, device, electronic equipment and readable storage medium storing program for executing
CN107766323A (en) A kind of text feature based on mutual information and correlation rule
CN107729377A (en) Customer classification method and system based on data mining
CN104142960A (en) Internet data analysis system
CN109191181B (en) Digital signage advertisement audience and crowd classification method based on neural network and Huff model
CN106372964A (en) Behavior loyalty identification and management method, system and terminal
CN113204603A (en) Method and device for marking categories of financial data assets
CN114596031A (en) Express terminal user portrait model based on full life cycle data
CN111612583A (en) Individualized shopping guide system based on clustering
Ali et al. An efficient quality inspection of food products using neural network classification
Pane et al. A PSO-GBR solution for association rule optimization on supermarket sales
Saraswat et al. An optimal feature selection approach using ibbo for histopathological image classification
Liu et al. An active learning algorithm for multi-class classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170623