WO2017167097A1 - Random forest-based model training method and apparatus - Google Patents

Random forest-based model training method and apparatus

Info

Publication number
WO2017167097A1
WO2017167097A1 · PCT/CN2017/077704 · CN2017077704W
Authority
WO
WIPO (PCT)
Prior art keywords
sample data
attribute information
weight
value
working node
Prior art date
Application number
PCT/CN2017/077704
Other languages
English (en)
French (fr)
Inventor
姜晓燕
王少萌
杨旭
Original Assignee
阿里巴巴集团控股有限公司
姜晓燕
王少萌
杨旭
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 阿里巴巴集团控股有限公司, 姜晓燕, 王少萌, 杨旭 filed Critical 阿里巴巴集团控股有限公司
Publication of WO2017167097A1 publication Critical patent/WO2017167097A1/zh
Priority to US16/146,907 priority Critical patent/US11276013B2/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F16/2228Indexing structures
    • G06F16/2246Trees, e.g. B+trees
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • G06N5/045Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5066Algorithms for mapping a plurality of inter-dependent sub-tasks onto a plurality of physical CPUs

Definitions

  • the present application relates to the technical field of computer processing, and in particular to a model training method based on random forest and a model training device based on random forest.
  • At present, the random forest (Random Forest) algorithm is often used for model training, mining these massive amounts of data to perform classification, recommendation and other operations.
  • A random forest is a collection of tree classifiers {h(x,k), k=1,...}, where the meta-classifier h(x,k) is generally an unpruned decision tree constructed by the CART (Classification And Regression Tree) algorithm, x is an input vector, and k is an independent, identically distributed random vector that determines the growth process of a single tree.
  • the output of the random forest is usually obtained by majority voting.
  • Since the scale of the sample data reaches hundreds of millions or even billions of records, the stand-alone version of the random forest can no longer handle such a massive scale, so a parallel version of the random forest is usually used.
  • Assuming the full set of sample data is D and 100 decision trees are to be trained, the parallel implementation generally starts 100 worker nodes simultaneously, and each worker node randomly samples a subset S of sample data from D.
  • the size of S is generally much smaller than D, so that a single computer can process it.
  • each single worker then trains a decision tree based on S using the CART algorithm.
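  • For illustration only, this baseline scheme can be sketched as follows (a minimal sketch, assuming scikit-learn's CART implementation and a local process pool as stand-ins for the 100 worker nodes; all sizes and names here are illustrative):

```python
# Baseline parallel random forest: every worker bootstrap-samples its own subset S
# from the full set D and trains one CART tree on it. Note that each worker still
# receives/scans the complete D, which is exactly the cost criticized below.
import numpy as np
from multiprocessing import Pool
from sklearn.tree import DecisionTreeClassifier

def train_one_tree(args):
    X, y, seed = args
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=max(1, len(X) // 100), replace=True)  # |S| << |D|
    return DecisionTreeClassifier(criterion="gini").fit(X[idx], y[idx])

if __name__ == "__main__":
    X = np.random.rand(10_000, 8)              # toy stand-in for the full sample set D
    y = np.random.randint(0, 2, 10_000)
    with Pool(4) as pool:                       # 4 local processes stand in for 100 workers
        forest = pool.map(train_one_tree, [(X, y, s) for s in range(100)])
    print(len(forest), "trees trained")
```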
  • when training the decision tree, for non-continuous features, the Gini coefficient of the feature is generally calculated, and the split is based on the optimal Gini coefficient.
  • when calculating the Gini coefficient, it is usually necessary to use an exhaustive method: assuming there are n feature values and the CART tree is binary, the possible branch combinations number (2^(n-1) − 1), so the Gini coefficient must be calculated (2^(n-1) − 1) times, giving a complexity of O(2^(n-1) − 1).
  • this computational complexity is exponential, so training a decision tree takes a lot of time, which also makes the iterative update time of the model long and training inefficient.
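  • A tiny illustration of this exhaustive cost (the helper below is purely illustrative): enumerating every unordered binary partition of an enumerated attribute with n values yields exactly 2^(n-1) − 1 candidate splits, which is what makes the exhaustive search exponential.

```python
# Count the candidate binary splits of an enumerated attribute with n values:
# every unordered partition of the value set into two non-empty subsets.
from itertools import combinations

def exhaustive_splits(values):
    values = list(values)
    for r in range(1, len(values)):
        for left in combinations(values, r):
            right = tuple(v for v in values if v not in left)
            if left < right:                     # count each {left, right} pair once
                yield left, right

vals = ["cold-blooded", "constant", "none", "sometimes"]           # n = 4
splits = list(exhaustive_splits(vals))
print(len(splits), "splits; expected", 2 ** (len(vals) - 1) - 1)   # 7 == 7
```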
  • the embodiment of the present application discloses a model training method based on a random forest, including:
  • dividing working nodes into one or more groups;
  • randomly sampling, by the working nodes in each group, from preset sample data to obtain target sample data;
  • training, by the working nodes in each group, one or more decision tree objects using the target sample data.
  • the working node in each group includes one or more first working nodes and one or more second working nodes;
  • the step of randomly sampling, by the working nodes in each group, from the preset sample data to obtain the target sample data includes:
  • part of the sample data is read from the preset sample data by each first working node
  • the partial sample data read by each first working node is randomly distributed to each of the second working nodes to distribute the sample data distributed to the second working node as the target sample data.
  • the step of training the one or more decision tree objects by the working node in each group by using the target sample data comprises:
  • one decision tree object is trained by each second working node using the target sample data.
  • the step of training the one or more decision tree objects by the working node in each group by using the target sample data comprises:
  • the tree node of the decision tree object is split according to the Gini coefficient.
  • the step of calculating the weight of the value of the attribute information comprises:
  • the frequencies are normalized to obtain weights.
  • the step of calculating the weight of the value of the attribute information comprises:
  • the weight probability matrix is multiplied by the eigenvector to obtain a weight.
  • the step of calculating the Gini coefficient by using the value of the sorted attribute information comprises:
  • the Gini coefficient is calculated by sequentially using the two attribute subsets.
  • the embodiment of the present application also discloses a model training device based on random forest, including:
  • a grouping module configured to divide the working nodes into one or more groups
  • a random sampling module configured to perform random sampling from the preset sample data by the working node in each group to obtain target sample data
  • a decision tree training module is configured to train one or more decision tree objects by the working node in each group using the target sample data.
  • the working node in each group includes one or more first working nodes and one or more second working nodes;
  • the random sampling module includes:
  • a partial data reading submodule for reading, in each group, part of the sample data from the preset sample data by each first working node
  • the data random distribution sub-module is configured to randomly distribute the partial sample data read by each first working node to each second working node to distribute the sample data distributed to the second working node as the target sample data.
  • the decision tree training module includes:
  • the node training sub-module is configured to train, in each group, a decision tree object by using the target sample data by each second working node.
  • the decision tree training module includes:
  • a weight calculation submodule configured to calculate a weight of the value of the attribute information when a value of the attribute information of the target sample data is an enumerated value
  • a sorting submodule configured to sort the values of the attribute information according to the weight
  • a Gini coefficient calculation sub-module for calculating a Gini coefficient using the value of the sorted attribute information
  • a splitting submodule configured to perform split processing on the tree node of the decision tree object according to the Gini coefficient.
  • the weight calculation submodule comprises:
  • a frequency calculation unit configured to calculate, when the classification column of the attribute information is a binary classification, the frequency of the value of the attribute information with respect to the classification column
  • a normalization unit configured to normalize the frequency to obtain a weight.
  • the weight calculation submodule comprises:
  • a weight probability matrix calculation unit configured to calculate, when the classification column of the attribute information is a multi-class classification, a weight probability matrix of the value of the attribute information with respect to the classification column, wherein the abscissa of the weight probability matrix is the value of the attribute information and the ordinate is the value of the classification column;
  • a principal component analysis unit configured to perform principal component analysis on the weight probability matrix to obtain the eigenvector corresponding to the maximum eigenvalue
  • a weight obtaining unit configured to multiply the weight probability matrix by the eigenvector to obtain a weight.
  • the Gini coefficient calculation submodule comprises:
  • a subset dividing unit configured to sequentially divide the value of the sorted attribute information into two attribute subsets according to the sorting order
  • a subset calculation unit is configured to sequentially calculate the Gini coefficient by using the two attribute subsets.
  • In the embodiments of the present application, the working nodes are divided into one or more groups, and the working nodes in each group randomly sample from the preset sample data to obtain target sample data and then train decision tree objects.
  • Therefore, the working nodes in each group read only part of the sample data and do not need to scan the complete sample data once, which greatly reduces the amount of data read and the time consumed, thereby reducing the iterative update time of the model and improving training efficiency.
  • In the embodiments of the present application, for attributes with enumerated values, the splitting point is calculated by sorting the values by importance, so no exhaustive enumeration is needed, which greatly reduces the amount of computation for the splitting point.
  • Assuming the attribute has n values, calculating the splitting point by importance sorting reduces the computational complexity from O(2^(n-1) − 1) for the exhaustive method to O(n), which reduces the training time, thereby reducing the iterative update time of the model and improving training efficiency.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a random forest-based model training method of the present application
  • FIG. 2 is a diagram showing an example of grouping according to an embodiment of the present application.
  • FIG. 3 is a diagram showing an example of a process of performing model training in a Hadoop grouping according to an embodiment of the present application
  • FIG. 4 is a structural block diagram of an embodiment of a random forest-based model training device of the present application.
  • FIG. 1 a flow chart of steps of an embodiment of a random forest-based model training method according to the present application is shown, which may specifically include the following steps:
  • Step 101: Divide the working nodes into one or more groups;
  • the working node may be a computing node of the training model, and may be deployed in a single computer or in a computer cluster, such as a distributed system.
  • For a single computer, the working node (worker) can be a core of the CPU (Central Processing Unit).
  • For a computer cluster, the working node can be a single computer.
  • In the embodiments of the present application, the working nodes may be divided into one or more groups (the dashed-box portions in FIG. 2) according to factors such as the amount of sample data and the number of decision trees, and the working nodes in each group include one or more first working nodes and one or more second working nodes.
  • Each group is responsible for processing a complete sample data, the first working node in the group randomly distributes the sample data to the second working node, and the second working node uses the distributed sample data to train the decision tree.
  • Generally, taking into account the system's capacity and computation speed, the number of groups is proportional to the number of decision tree objects (for example, number of groups = number of decision trees / 100); within a single group, the number of first working nodes is proportional to the amount of sample data, and each second working node trains one decision tree.
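  • As a minimal sketch of this layout (the ratio of one group per 100 trees follows the example above; the per-node data volume, the dataclass and the node names are assumptions made only for illustration):

```python
# Plan the groups: each group gets first working nodes (to read and randomly
# distribute data) in proportion to the data volume, and one second working node
# per decision tree to be trained inside that group.
from dataclasses import dataclass, field

@dataclass
class Group:
    first_nodes: list = field(default_factory=list)    # read & randomly distribute data
    second_nodes: list = field(default_factory=list)   # each trains one decision tree

def plan_groups(num_trees, num_samples, samples_per_first_node=1_000_000):
    num_groups = max(1, num_trees // 100)               # groups proportional to number of trees
    trees_per_group = num_trees // num_groups
    first_per_group = max(1, num_samples // samples_per_first_node)   # proportional to data volume
    return [Group(first_nodes=[f"map-{g}-{i}" for i in range(first_per_group)],
                  second_nodes=[f"reduce-{g}-{j}" for j in range(trees_per_group)])
            for g in range(num_groups)]

plan = plan_groups(num_trees=200, num_samples=5_000_000)
print(len(plan), "groups,", len(plan[0].first_nodes), "first /",
      len(plan[0].second_nodes), "second nodes per group")
```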
  • Hadoop is described as an embodiment of a computer cluster.
  • Hadoop mainly consists of two parts, one is the Distributed File System (HDFS), and the other is the distributed computing framework, MapReduce.
  • HDFS is a highly fault-tolerant system that provides high-throughput data access for applications with large data sets.
  • MapReduce is a programming model that extracts the analysis elements from the massive source data and returns the result set.
  • the basic principle can be to divide the large data analysis into small pieces and analyze them one by one, and then summarize the extracted data.
  • In Hadoop, there are two machine roles for executing MapReduce: the JobTracker, which can be used to schedule work, and the TaskTracker, which can be used to perform work.
  • the TaskTracker in Hadoop may refer to a processing node of the distributed system, and the processing node may include one or more Map nodes and one or more Reduce nodes.
  • In distributed computing, MapReduce handles complex problems in parallel programming such as distributed storage, work scheduling, load balancing, fault tolerance, and network communication, and highly abstracts the processing into two functions: the map (mapping) function and the reduce (reduction) function.
  • the map function can decompose the task into multiple tasks
  • the reduce function can summarize the results of the decomposed multitasking.
  • each MapReduce task can be initialized to a Job, and each Job can be divided into two phases: the map phase and the reduce phase. These two phases are represented by two functions, the map function and the reduce function.
  • the map function can accept an input of the form <key, value> and then produce an intermediate output likewise of the form <key, value>.
  • the reduce function can accept an input of the form <key, (list of values)> and then process this set of values; each reduce function produces 0 or 1 outputs, and the output of the reduce function is also of the form <key, value>.
  • the first working node may be a Map node
  • the second working node may be a Reduce node.
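  • The <key, value> flow described above can be illustrated with a toy, framework-free sketch (this only mimics the shape of the data flow; it is not the Hadoop API itself):

```python
# map emits intermediate <key, value> pairs, the framework groups them by key,
# and reduce receives <key, (list of values)> and produces 0 or 1 outputs per key.
from collections import defaultdict

def map_fn(record):
    key, value = record                  # input already in <key, value> form
    yield key, value                     # emit an intermediate <key, value> pair

def reduce_fn(key, values):
    yield key, len(values)               # e.g. count the values gathered for the key

def run_job(records):
    grouped = defaultdict(list)
    for rec in records:                  # map phase
        for k, v in map_fn(rec):
            grouped[k].append(v)         # shuffle / group-by-key done by the framework
    out = []
    for k, vs in grouped.items():        # reduce phase
        out.extend(reduce_fn(k, vs))
    return out

print(run_job([("B1", "row-a"), ("B2", "row-b"), ("B1", "row-c")]))  # [('B1', 2), ('B2', 1)]
```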
  • Step 102 Randomly sampling from the preset sample data by the working node in each group to obtain target sample data
  • In a specific implementation, in each group, each of the first working nodes may read part of the sample data from the preset sample data, that is, a subset of the sample data.
  • the partial sample data read by each first working node is randomly distributed to each of the second working nodes to distribute the sample data distributed to the second working node as the target sample data.
  • each piece of sample data is read once by a first working node, but whether it is distributed to a given second working node is uncertain, i.e., the distribution is random (sampling).
  • For example, as shown in FIG. 2, a certain piece of sample data is read by the first working node A1, and a random value is generated for each of the second working nodes B1, B2, B3, B4 and B5; if the random value is greater than 0.5, the sample data is distributed to that second working node, and otherwise it is not. For this piece of sample data, the random distribution decision is therefore made 5 times (the number of second working nodes).
  • sample data read by the first working nodes A2 and A3 may also be randomly distributed to the second working nodes B1, B2, B3, B4, and B5.
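  • A minimal sketch of this per-row random distribution rule (the 0.5 threshold and the five second working nodes B1–B5 come from the example above; the function and the in-memory output list are illustrative, and in Hadoop this would be the map output):

```python
# For every row read by a first working node, draw one random value per second
# working node and forward the row to that node only when the value exceeds 0.5.
import random

SECOND_NODES = ["B1", "B2", "B3", "B4", "B5"]

def distribute(row, rng=random):
    """Return the (second_node, row) pairs this row is randomly sent to."""
    return [(node, row) for node in SECOND_NODES if rng.random() > 0.5]

random.seed(0)
row = ("human", "constant", "hair", "yes", "no", "no", "no", "yes", "no", "mammal")
print(distribute(row))       # each row ends up on roughly half of the second nodes
```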
  • As shown in FIG. 3, in Hadoop, the Map nodes and Reduce nodes of one group process one complete copy of the sample data; each Map node reads part of the sample data and randomly distributes it to the Reduce nodes.
  • the map function can be defined as a random distribution to distribute the sample data of the Map node to the Reduce node.
  • the Map node extracts the key-value pairs from the input sample data, and each key-value pair is passed as a parameter to the map function, and the intermediate key-value pairs generated by the map function are cached in the memory.
  • the output of the map function in the Map node is processed by the MapReduce framework and finally distributed to the reduce function in the Reduce node.
  • Step 103 Train one or more decision tree objects by the working node in each group using the target sample data.
  • Each sample data usually includes a sample object, one or more attribute information, and a classification label.
  • the target sample data after random sampling is a data set, generally in the form of a two-dimensional array, that is, includes a set of sample objects, one or more sets of attribute information, and a set of classification labels (also called classification columns).
  • Table 1 – an example of target sample data:

    Sample object | Body temperature | Surface cover | Viviparous | Lays eggs | Can fly | Aquatic | Has legs | Hibernates | Classification column
    Human | constant | hair | yes | no | no | no | yes | no | mammal
    Python | cold-blooded | scales | no | yes | no | no | no | yes | reptile
    Salmon | cold-blooded | scales | no | yes | no | yes | no | no | fish
    Whale | constant | hair | yes | no | no | yes | no | no | mammal
    Frog | cold-blooded | none | no | yes | no | sometimes | yes | yes | amphibian
    Monitor lizard | cold-blooded | scales | no | yes | no | no | yes | no | reptile
    Bat | constant | hair | yes | no | yes | no | yes | no | mammal
    Cat | constant | skin | yes | no | no | no | yes | no | mammal
    Leopard shark | cold-blooded | scales | yes | no | no | yes | no | no | fish
    Turtle | cold-blooded | scales | no | yes | no | sometimes | yes | no | reptile
    Porcupine | constant | bristles | yes | no | no | no | yes | yes | mammal
    Eel | cold-blooded | scales | no | yes | no | yes | no | no | fish
    Salamander | cold-blooded | none | no | yes | no | sometimes | yes | yes | amphibian
  • the attribute information includes body temperature, surface coverage, viviparous, laying eggs, capable of flying, aquatic, legged, and hibernating.
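  • In the two-dimensional-array form mentioned above, a few rows of Table 1 would look as follows (the column names are simply the translated attribute names; the representation itself is illustrative):

```python
# Target sample data as a 2D array: one row per sample object, its attribute
# values, and the classification column as the last element.
COLUMNS = ["sample", "body_temperature", "surface_cover", "viviparous", "lays_eggs",
           "can_fly", "aquatic", "has_legs", "hibernates", "classification"]

TARGET_SAMPLE_DATA = [
    ["human",  "constant",     "hair",   "yes", "no",  "no", "no",        "yes", "no",  "mammal"],
    ["python", "cold-blooded", "scales", "no",  "yes", "no", "no",        "no",  "yes", "reptile"],
    ["salmon", "cold-blooded", "scales", "no",  "yes", "no", "yes",       "no",  "no",  "fish"],
    ["frog",   "cold-blooded", "none",   "no",  "yes", "no", "sometimes", "yes", "yes", "amphibian"],
]

for row in TARGET_SAMPLE_DATA:
    print(dict(zip(COLUMNS, row)))
```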
  • a decision tree is a tree structure composed of nodes and directed edges. When training, each non-leaf node is classified for an attribute.
  • one decision tree object is trained by each second working node using the target sample data.
  • As shown in FIG. 3, in Hadoop, if the partial sample data read by a Map node is randomly distributed to a Reduce node, the Reduce node can train a decision tree using the distributed sample data (i.e., the target sample data).
  • each non-leaf node is split and iterated for a certain attribute until the samples on each leaf node are in a single category or each attribute is selected.
  • the leaf node represents the result of the classification.
  • the complete path from the root node to the leaf node represents a decision process.
  • the training essence of the decision tree is how the node splits.
  • the decision tree obtained by training is generally a binary tree. In a few cases, there are also cases of non-binary trees.
  • the specific training process is as follows (a compact sketch of this loop follows the list):
  • (1) construct the root node of the decision tree as the set T of all target training sample data;
  • (2) select the attribute of T with the highest discrimination by calculating the information gain or the Gini coefficient, and split on it to form a left child node and a right child node;
  • (3) in the remaining attribute space, repeat step (2) for the sample data of each child node; a node is marked as a leaf node and its splitting ends if one of the following conditions is met:
  • a. all sample data on the node belong to the same classification;
  • b. no remaining attributes are available for splitting;
  • c. the number of sample data of the current data set is less than a given value;
  • d. the depth of the decision tree is greater than a set value.
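  • A compact, recursive sketch of steps (1)–(3) and the stopping conditions a–d (best_split here is a simple placeholder that tries every (attribute, value) pair; the Gini-based split search actually used for enumerated values is described in sub-steps S11–S14 below):

```python
# Train one decision tree: split on the attribute/value with the lowest weighted
# Gini, recurse into the child nodes, and stop on conditions a-d (pure node, no
# attributes left, too few samples, or maximum depth reached).
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels, attrs):
    best = None
    for a in attrs:
        for v in set(r[a] for r in rows):
            left = [i for i, r in enumerate(rows) if r[a] == v]
            right = [i for i, r in enumerate(rows) if r[a] != v]
            if not left or not right:
                continue
            score = (len(left) * gini([labels[i] for i in left]) +
                     len(right) * gini([labels[i] for i in right])) / len(rows)
            if best is None or score < best[0]:
                best = (score, a, v, left, right)
    return best

def build_tree(rows, labels, attrs, depth=0, min_size=2, max_depth=5):
    if len(set(labels)) == 1 or not attrs or len(rows) < min_size or depth > max_depth:
        return Counter(labels).most_common(1)[0][0]          # leaf = majority class
    split = best_split(rows, labels, attrs)
    if split is None:
        return Counter(labels).most_common(1)[0][0]
    _, a, v, left, right = split
    rest = [x for x in attrs if x != a]
    return {"attr": a, "value": v,
            "left":  build_tree([rows[i] for i in left],  [labels[i] for i in left],  rest, depth + 1),
            "right": build_tree([rows[i] for i in right], [labels[i] for i in right], rest, depth + 1)}

rows = [{"body_temperature": "constant", "lays_eggs": "no"},
        {"body_temperature": "cold-blooded", "lays_eggs": "yes"},
        {"body_temperature": "cold-blooded", "lays_eggs": "yes"}]
print(build_tree(rows, ["mammal", "reptile", "fish"], ["body_temperature", "lays_eggs"]))
```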
  • In the embodiments of the present application, the working nodes are divided into one or more groups, and the working nodes in each group randomly sample from the preset sample data to obtain target sample data and then train decision tree objects.
  • Therefore, the working nodes in each group read only part of the sample data and do not need to scan the complete sample data once, which greatly reduces the amount of data read and the time consumed, thereby reducing the iterative update time of the model and improving training efficiency.
  • step 103 may include the following sub-steps:
  • Sub-step S11 when the value of the attribute information of the target sample data is an enumeration value, calculating a weight of the value of the attribute information;
  • the value of the attribute information is generally divided into a continuous value and an enumerated value, and the enumerated value is also called a discrete value, that is, a discontinuous value.
  • the values of body temperature in Table 1 are cold blood and constant temperature, which are enumerated values.
  • In the embodiments of the present application, for attribute information with enumerated values, the values are sorted by importance (weight) to calculate the optimal splitting point, which improves the speedup ratio.
  • In one example, when the classification column of the attribute information is a binary classification (i.e., has two classes), the frequency of each value of the attribute information with respect to the classification column is calculated, and the frequencies are normalized to obtain the weights.
  • In another example, when the classification column of the attribute information is a multi-class classification (i.e., has three or more classes), a weight probability matrix of the values of the attribute information with respect to the classification column is calculated, wherein the abscissa of the weight probability matrix is the value of the attribute information and the ordinate is the value of the classification column.
  • Principal Component Analysis (PCA) is then performed on the weight probability matrix to obtain the eigenvector corresponding to the maximum eigenvalue, and the weight probability matrix is multiplied by this eigenvector to obtain the weights.
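  • Both weight computations can be sketched as follows (a minimal sketch under our reading of the text: for the binary case, each attribute value is weighted by the normalized frequency of the positive class among rows holding that value; for the multi-class case, numpy's eigendecomposition of the covariance matrix stands in for the PCA step):

```python
import numpy as np

def binary_weights(attr_values, labels, positive):
    """Weight per attribute value: normalized frequency w.r.t. the positive class."""
    values = sorted(set(attr_values))
    freq = np.array([
        sum(1 for a, y in zip(attr_values, labels) if a == v and y == positive) /
        max(1, sum(1 for a in attr_values if a == v))
        for v in values])
    total = freq.sum()
    return dict(zip(values, freq / total if total else freq))   # normalize

def multiclass_weights(attr_values, labels):
    """Weight per attribute value via the weight probability matrix and PCA."""
    values, classes = sorted(set(attr_values)), sorted(set(labels))
    P = np.zeros((len(values), len(classes)))        # abscissa: value, ordinate: class
    for a, y in zip(attr_values, labels):
        P[values.index(a), classes.index(y)] += 1
    P /= P.sum(axis=1, keepdims=True)                # row-wise probabilities
    eigvals, eigvecs = np.linalg.eigh(np.cov(P, rowvar=False))   # PCA on the matrix
    top = eigvecs[:, np.argmax(eigvals)]             # eigenvector of the max eigenvalue
    return dict(zip(values, P @ top))                # matrix multiplied by eigenvector

body_temp = ["constant", "cold-blooded", "cold-blooded", "constant", "cold-blooded"]
label     = ["mammal",   "reptile",      "fish",         "mammal",   "amphibian"]
print(multiclass_weights(body_temp, label))
print(binary_weights(["hair", "scales", "hair", "scales"], ["yes", "no", "yes", "no"], "yes"))
```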
  • Sub-step S12 sorting the values of the attribute information according to the weight
  • In a specific implementation, the values of the attribute information may be sorted in ascending or descending order of weight, which is not limited in this embodiment of the present application.
  • Sub-step S13 calculating the Gini coefficient by using the value of the sorted attribute information
  • the Gini coefficient Gini can be used to split the nodes of the decision tree. The more chaotic the categories contained in the sample population, the larger the Gini index.
  • In practical applications, the values of the sorted attribute information may be divided, in the sorted order, into two attribute subsets; for example, if the sorted value sequence is f = (a1, a2, a3, ..., an), the left subtree (attribute subset) can be a1–ai and the right subtree (attribute subset) ai+1–an, where i = 1, 2, ..., n−1.
  • the Gini coefficient is then calculated for each such pair of attribute subsets in turn.
  • Assuming there are k classifications and the probability that a piece of sample data belongs to the i-th classification is p_i, the Gini index is defined as: Gini(D) = 1 − Σ_{i=1}^{k} p_i²
  • If the data set D is divided into two parts D1 and D2, then under this condition the Gini gain of the set D is defined as the size-weighted sum of the Gini indexes of the two parts: Gini_gain(D) = (|D1|/|D|)·Gini(D1) + (|D2|/|D|)·Gini(D2)
  • Sub-step S14 splitting the tree node of the decision tree object according to the Gini coefficient.
  • the Gini index represents the uncertainty of the data set: the larger the Gini index, the greater the uncertainty that a sample belongs to a certain classification, so the best feature split is the one that minimizes the Gini index of the data set.
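  • Sub-steps S12–S14 can then be sketched as a single pass over the sorted values (a minimal sketch; gini() repeats the definition above, and the weights are assumed to come from sub-step S11): only the n − 1 ordered two-subset splits are evaluated and the one with the smallest weighted Gini is kept, instead of the 2^(n-1) − 1 splits of the exhaustive method.

```python
# Sort the enumerated values by weight, evaluate the n-1 ordered splits, and keep
# the split whose size-weighted Gini of the two attribute subsets is smallest.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_ordered_split(attr_values, labels, weights):
    order = sorted(set(attr_values), key=lambda v: weights[v])   # sub-step S12
    best = None
    for i in range(1, len(order)):                               # only n-1 candidates
        left_vals = set(order[:i])
        left  = [y for a, y in zip(attr_values, labels) if a in left_vals]
        right = [y for a, y in zip(attr_values, labels) if a not in left_vals]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
        if best is None or score < best[0]:
            best = (score, order[:i], order[i:])
    return best                                                  # (gini, left subset, right subset)

surface = ["hair", "scales", "scales", "hair", "none", "scales", "hair"]
label   = ["mammal", "reptile", "fish", "mammal", "amphibian", "reptile", "mammal"]
weights = {"hair": 0.9, "scales": 0.3, "none": 0.1}              # assumed output of sub-step S11
print(best_ordered_split(surface, label, weights))
```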
  • In the embodiments of the present application, for attributes with enumerated values, the splitting point is calculated by sorting the values by importance, so no exhaustive enumeration is needed, which greatly reduces the amount of computation for the splitting point.
  • Assuming the attribute has n values, calculating the splitting point by importance sorting reduces the computational complexity from O(2^(n-1) − 1) for the exhaustive method to O(n), which reduces the training time, thereby reducing the iterative update time of the model and improving training efficiency.
  • FIG. 4 a structural block diagram of an embodiment of a random forest-based model training device of the present application is shown, which may specifically include the following modules:
  • a grouping module 401, configured to divide the working nodes into one or more groups
  • the random sampling module 402 is configured to perform random sampling from the preset sample data by the working node in each group to obtain target sample data;
  • the decision tree training module 403 is configured to train one or more decision tree objects by the working node in each group by using the target sample data.
  • the working node in each group includes one or more first working nodes and one or more second working nodes;
  • the random sampling module 402 can include the following sub-modules:
  • a partial data reading submodule for reading, in each group, part of the sample data from the preset sample data by each first working node
  • the data random distribution sub-module is configured to randomly distribute the partial sample data read by each first working node to each second working node to distribute the sample data distributed to the second working node as the target sample data.
  • the decision tree training module 403 may include the following sub-modules:
  • the node training sub-module is configured to train, in each group, a decision tree object by using the target sample data by each second working node.
  • the decision tree training module 403 may include the following sub-modules:
  • a weight calculation submodule configured to calculate a weight of the value of the attribute information when a value of the attribute information of the target sample data is an enumerated value
  • a sorting submodule configured to sort the values of the attribute information according to the weight
  • a Gini coefficient calculation sub-module for calculating a Gini coefficient using the value of the sorted attribute information
  • a splitting submodule configured to perform split processing on the tree node of the decision tree object according to the Gini coefficient.
  • the weight calculation sub-module may include the following units:
  • a frequency calculation unit configured to calculate, when the classification column of the attribute information is a binary classification, the frequency of the value of the attribute information with respect to the classification column
  • a normalization unit configured to normalize the frequency to obtain a weight.
  • the weight calculation sub-module may include the following units:
  • a weight probability matrix calculation unit configured to calculate, when the classification column of the attribute information is a multi-class classification, a weight probability matrix of the value of the attribute information with respect to the classification column, wherein the abscissa of the weight probability matrix is the value of the attribute information and the ordinate is the value of the classification column;
  • a principal component analysis unit configured to perform principal component analysis on the weight probability matrix to obtain the eigenvector corresponding to the maximum eigenvalue
  • a weight obtaining unit configured to multiply the weight probability matrix by the eigenvector to obtain a weight.
  • the Gini coefficient calculation sub-module may include the following units:
  • a subset dividing unit configured to sequentially divide the value of the sorted attribute information into two attribute subsets according to the sorting order
  • a subset calculation unit is configured to sequentially calculate the Gini coefficient by using the two attribute subsets.
  • Since the device embodiment is basically similar to the method embodiment, its description is relatively simple; for relevant parts, reference may be made to the description of the method embodiment.
  • embodiments of the embodiments of the present application can be provided as a method, apparatus, or computer program product. Therefore, the embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, embodiments of the present application can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
  • the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • the memory may include non-persistent memory, random access memory (RAM), and/or non-volatile memory in a computer readable medium, such as read only memory (ROM) or flash memory.
  • Memory is an example of a computer readable medium.
  • Computer readable media includes both permanent and non-persistent, removable and non-removable media.
  • Information storage can be implemented by any method or technology. The information can be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device.
  • As defined herein, computer readable media does not include transitory computer readable media, such as modulated data signals and carrier waves.
  • Embodiments of the present application are described with reference to flowcharts and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the present application. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions.
  • These computer program instructions can be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor or other programmable data processing terminal device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing terminal device produce means for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing terminal device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • the instruction device implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

A random forest-based model training method and apparatus, the method comprising: dividing working nodes into one or more groups (101); randomly sampling, by the working nodes in each group, from preset sample data to obtain target sample data (102); and training, by the working nodes in each group, one or more decision tree objects using the target sample data (103). The method does not need to scan the complete sample data once, which greatly reduces the amount of data read and the time consumed, thereby reducing the iterative update time of the model and improving training efficiency.

Description

一种基于随机森林的模型训练方法和装置 技术领域
本申请涉及计算机处理的技术领域,特别是涉及一种基于随机森林的模型训练方法和一种基于随机森林的模型训练装置。
背景技术
随着互联网的快速发展,人们生活的方方面面都与互联网产生了联系,在人们使用互联网的相关功能时,产生了海量的数据。
目前,经常使用随机森林(Random forest)算法进行模型训练,对这些海量的数据进行挖掘,从而进行分类、推荐等操作。
随机森林是一个树型分类器{h(x,k),k=1,…}的集合,元分类器h(x,k)一般是用CART(Classification And Regression Tree,分类回归树)算法构建的没有剪枝的决策树,其中,x是输入向量,k是独立同分布的随机向量,决定了单颗树的生长过程,随机森林的输出通常采用多数投票法得到。
由于样本数据的规模达到几亿甚至几十亿,单机版的随机森林已经不能处理海量规模的,通常使用并行版的随机森林。
假设样本数据的全集为D,要训练100棵决策树,并行实现方案一般如下:
1、样本随机采样;
同时启动100个工作节点worker,每个worker从D中随机采样出一个样本数据的子集S,S的大小一般远远小于D,单台计算机可处理。
2、单个worker基于S、应用CART算法训练决策树。
在训练决策树时,对于非连续特征,一般是计算该特征的基尼系数Gini,基于最佳基尼系数Gini进行分裂。
在这种方案中,由于每个worker都是从样本数据的全集中采样子集, 因此,需要扫面一次样本数据的全集,数据读取量大,耗费较多的时间进行读取,使得模型的迭代更新时间较长、训练效率较低。
在计算基尼系数Gini时,通常需要使用穷举法,即假设有n个特征,且CART树是二分类的,则所有分支的组合有(2^(n-1)-1)种,需要计算(2^(n-1)-1)次基尼系数Gini,复杂度为O(2^(n-1)-1),计算的复杂度为指数级别,在训练决策树时耗费大量的时间,同样使得模型的迭代更新时间较长、训练效率较低。
发明内容
鉴于上述问题,提出了本申请实施例以便提供一种克服上述问题或者至少部分地解决上述问题的一种基于随机森林的模型训练方法和相应的一种基于随机森林的模型训练装置。
为了解决上述问题,本申请实施例公开了一种基于随机森林的模型训练方法,包括:
将工作节点划分成一个或多个分组;
由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据;
由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象。
优选地,每个分组中的工作节点包括一个或多个第一工作节点以及一个或多个第二工作节点;
所述由每个分组中的工作节点从预置的样本数据中进行随机采样的,获得目标样本数据步骤包括:
在每个分组中,由每个第一工作节点从预置的样本数据中读取部分样本数据;
由每个第一工作节点将读取的部分样本数据随机分发至每个第二工作节点中,以分发至第二工作节点的样本数据作为目标样本数据。
优选地,所述由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象的步骤包括:
在每个分组中,由每个第二工作节点采用所述目标样本数据训练一个决策树对象。
优选地,所述由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象的步骤包括:
当所述目标样本数据的属性信息的值为枚举值时,计算所述属性信息的值的权重;
按照所述权重对所述属性信息的值进行排序;
采用排序后的属性信息的值计算基尼系数;
按照所述基尼系数针对决策树对象的树节点进行分裂处理。
优选地,所述计算所述属性信息的值的权重的步骤包括:
当所述属性信息的分类列为二分类时,计算所述属性信息的值对于所述分类列的频率;
对所述频率进行归一化,获得权重。
优选地,所述计算所述属性信息的值的权重的步骤包括:
当所述属性信息的分类列为多分类时,计算所述属性信息的值针对所述分类列的权重概率矩阵,其中,所述权重概率矩阵的横坐标为所述属性信息的值、纵坐标为所述分类列的值;
对所述权重概率矩阵进行主成分分析,获得最大特征值对应的特征向量;
将所述权重概率矩阵乘以所述特征向量,获得权重。
优选地,所述采用排序后的属性信息的值计算基尼系数的步骤包括:
按照排序的顺序依次将排序后的属性信息的值划分为两个属性子集;
依次采用所述两个属性子集计算基尼系数。
本申请实施例还公开了一种基于随机森林的模型训练装置,包括:
分组划分模块,用于将工作节点划分成一个或多个分组;
随机采样模块,用于由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据;
决策树训练模块,用于由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象。
优选地,每个分组中的工作节点包括一个或多个第一工作节点以及一个或多个第二工作节点;
所述随机采样模块包括:
部分数据读取子模块,用于在每个分组中,由每个第一工作节点从预置的样本数据中读取部分样本数据;
数据随机分发子模块,用于由每个第一工作节点将读取的部分样本数据随机分发至每个第二工作节点中,以分发至第二工作节点的样本数据作为目标样本数据。
优选地,所述决策树训练模块包括:
节点训练子模块,用于在每个分组中,由每个第二工作节点采用所述目标样本数据训练一个决策树对象。
优选地,所述决策树训练模块包括:
权重计算子模块,用于在所述目标样本数据的属性信息的值为枚举值时,计算所述属性信息的值的权重;
排序子模块,用于按照所述权重对所述属性信息的值进行排序;
基尼系数计算子模块,用于采用排序后的属性信息的值计算基尼系数;
分裂子模块,用于按照所述基尼系数针对决策树对象的树节点进行分裂处理。
优选地,所述权重计算子模块包括:
频率计算单元,用于在所述属性信息的分类列为二分类时,计算所述属性信息的值对于所述分类列的频率;
归一化单元,用于对所述频率进行归一化,获得权重。
优选地,所述权重计算子模块包括:
权重概率矩阵计算单元,用于在所述属性信息的分类列为多分类时,计算所述属性信息的值针对所述分类列的权重概率矩阵,其中,所述权重概率矩阵的横坐标为所述属性信息的值、纵坐标为所述分类列的值;
主成分分析单元,用于对所述权重概率矩阵进行主成分分析,获得最大特征值对应的特征向量;
权重获得单元,用于将所述权重概率矩阵乘以所述特征向量,获得权重。
优选地,所述基尼系数计算子模块包括:
子集划分单元,用于按照排序的顺序依次将排序后的属性信息的值划分为两个属性子集;
子集计算单元,用于依次采用所述两个属性子集计算基尼系数。
本申请实施例包括以下优点:
本申请实施例将工作节点划分成一个或多个分组,由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据,进而训练决策树对象,因此,每个分组中的工作节点仅是读取部分的样本数据,而不需要扫描一次完整的样本数据,大大降低了数据的读取量,减少了耗费的时间,进而减少模型的迭代更新时间、提高训练效率。
本申请实施例对此枚举值的属性,通过重要性排序的方式计算分裂点,无需进行穷举,大大减少了分裂点的计算量,假设属性有n个值,通过重要性排序的方式计算分裂点的方式,计算的复杂度可以从穷举法的O(2^(n-1)-1),降低到O(n),减少了训练时间的耗费,进而减少模型的迭代更新时间、提高训练效率。
附图说明
图1是本申请的一种基于随机森林的模型训练方法实施例的步骤流程图;
图2是本申请实施例的一种分组示例图;
图3是本申请实施例的一种在Hadoop的分组中进行模型训练的流程示例图;
图4是本申请的一种基于随机森林的模型训练装置实施例的结构框图。
具体实施方式
为使本申请的上述目的、特征和优点能够更加明显易懂,下面结合附图和具体实施方式对本申请作进一步详细的说明。
参照图1,示出了本申请的一种基于随机森林的模型训练方法实施例的步骤流程图,具体可以包括如下步骤:
步骤101,将工作节点划分成一个或多个分组;
在本申请实施例中,工作节点可以为训练模型的计算节点,可以部署在单台计算机中,也可以应用在计算机集群中,如分布式系统,本申请实施例对此不加以限制。
对于单台计算机而言,工作节点(worker)可以为CPU(Central  Processing Unit,中央处理器)的内核(Core),对于计算机集群,工作节点可以为单台计算机。
在本申请实施例中,可以按照样本数据的数据量、决策树的数量等因素,如图2所示,将工作节点划分为一个或多个分组(虚线框部分),每个分组中的工作节点包括一个或多个第一工作节点以及一个或多个第二工作节点。
其中,每个分组负责处理一份完整的样本数据,组内第一工作节点随机分发样本数据至第二工作节点,第二工作节点采用分发的样本数据训练决策树。
一般而言,考虑了系统的承受能力以及运算速度,分组的数目与决策树对象的数量成正比,例如,分组的数目=决策树的数量/100。
单个分组内,第一工作节点的数量与样本数据的数据量成正比,一个第二工作节点训练一棵决策树。
为使本领域技术人员更好地理解本申请实施例,在本申请实施例中,将Hadoop作为计算机集群的一种实施例进行说明。
Hadoop主要包括两部分,一是分布式文件系统(Hadoop Distributed File System,HDFS),另外是分布式计算框架,即MapReduce。
HDFS是一个高度容错性的系统,能提供高吞吐量的数据访问,适合那些有着超大数据集(large data set)的应用程序。
MapReduce是一套从海量源数据提取分析元素最后返回结果集的编程模型,其基本原理可以是将大的数据分析分成小块逐个分析,最后再将提取出来的数据汇总分析。
在Hadoop中,用于执行MapReduce的机器角色有两个:一个是JobTracker,另一个是TaskTracker。JobTracker可以用于调度工作,TaskTracker可以用于执行工作。
进一步而言,在Hadoop中TaskTracker可以指所述分布式系统的处理节点,该处理节点可以包括一个或多个映射(Map)节点和一个或多个化简(Reduce)节点。
在分布式计算中,MapReduce负责处理了并行编程中分布式存储、工作调度、负载均衡、容错均衡、容错处理以及网络通信等复杂问题,把处理过程高度抽象为两个函数:映射函数(map函数)和规约函数(reduce函数),map函数可以把任务分解成多个任务,reduce函数可以把分解后的多任务处理的结果汇总起来。
在Hadoop中,每个MapReduce的任务可以被初始化为一个Job,每个Job又可以分为两种阶段:map阶段和reduce阶段。这两个阶段分别用两个函数表示,即map函数和reduce函数。
map函数可以接收一个<key,value>形式的输入(Input),然后同样产生一个<key,value>形式的中间输出(Output),Hadoop函数可以接收一个如<key,(list of values)>形式的输入(Input),然后对这个value集合进行处理,每个reduce函数产生0或1个输出(Output),reduce函数的输出也是<key,value>形式的。
对于分组而言,第一工作节点可以为Map节点,第二工作节点可以为Raduce节点。
步骤102,由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据;
在具体实现中,在每个分组中,读取预置的样本数据,即样本数据的全集,可以由每个第一工作节点从预置的样本数据中读取部分样本数据,即样本数据的子集。
由每个第一工作节点将读取的部分样本数据随机分发至每个第二工作节点中,以分发至第二工作节点的样本数据作为目标样本数据。
对于每条样本数据,第一工作节点均读取一次,但是否会分发到第二工作节点中是不确定的,即随机分发(采样)。
例如,如图2所示,某一条样本数据由第一工作节点A1读取,针对第二工作节点B1、B2、B3、B4、B5,分别生成一随机值,如果该随机值大于0.5,则分发到该第二工作节点中,反之,则不分发到该第二工作节点,对于该条样本数据,分发随机了5(第二工作节点的数量)次。
同理,对于第一工作节点A2、A3读取的样本数据,也可以随机分发至第二工作节点B1、B2、B3、B4、B5。
如图3所示,在Hadoop中,一个分组的Map节点和Raduce节点处理一份完整的样本数据,每个Map节点读取部分样本数据,随机分发至Raduce节点中。
即在Map节点中,可以定义map函数为随机分发,以将Map节点的样本数据分发到Reduce节点中。
Map节点从输入的样本数据中抽取出键值对,每一个键值对都作为参数传递给map函数,map函数产生的中间键值对被缓存在内存中。
Map节点中的map函数的输出经由MapReduce框架处理后,最后分发到Reduce节点中的reduce函数。
步骤103,由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象。
每条样本数据,通常包括一个样本对象、一个或多个属性信息、一个分类标签。
对于随机采样之后的目标样本数据为一个数据集合,一般为二维数组的形式,即包括一组样本对象、一组或多组属性信息、一组分类标签(又称分类列)。
一个目标样本数据的示例如下表所示:
表1
样本对象 体温 表面覆盖 胎生 产蛋 能飞 水生 有腿 冬眠 分类列
恒温 毛发 哺乳类
巨蟒 冷血 鳞片 爬行类
鲑鱼 冷血 鳞片 鱼类
恒温 毛发 哺乳类
冷血 有时 两栖类
巨蜥 冷血 鳞片 爬行类
蝙蝠 恒温 毛发 哺乳类
恒温 哺乳类
豹纹鲨 冷血 鳞片 鱼类
海龟 冷血 鳞片 有时 爬行类
豪猪 恒温 刚毛 哺乳类
冷血 鳞片 鱼类
蝾螈 冷血 有时 两栖类
其中,属性信息包括体温、表面覆盖、胎生、产蛋、能飞、水生、有腿、冬眠。
决策树(对象)是一种由节点和有向边构成的树状结构,训练时,在每一个非叶子节点针对某一属性进行分类。
在具体实现中,在每个分组中,由每个第二工作节点采用所述目标样本数据训练一个决策树对象。
如图3所示,在Hadoop中,若Map节点读取的部分样本数据随机分发至Raduce节点中,则Raduce节点可以采用该分发的样本数据(即目标样本数据)训练决策树。
在训练决策树时,在每一个非叶子节点针对某一属性进行分裂、迭代这一过程,直到每个叶子节点上的样本均处于单一的类别或者每个属性都被选择过为止。叶子节点代表分类的结果,从根节点到叶子节点的完整路径代表一种决策过程,决策树的训练本质是节点如何进行分裂。
训练得到的决策树一般是二叉树,少数情况下也存在非二叉树的情况,具体的训练过程如下:
(1)、构造决策树的根节点,为全体目标训练样本数据的集合T;
(2)、通过计算信息增益或基尼系数选择出T中区分度最高的属性,分割形成左子节点和右子节点;
(3)、在剩余的属性空间中,针对每一个子节点的样本数据,重复步骤2的过程,若满足以下条件之一则标记为叶子节点,此节点分裂结束:
a、该节点上所有样本数据都属于同一个分类;
b、没有剩余的属性可用以分裂;
c、当前数据集的样本数据个数小于某个给定的值;
d、决策树的深度大于设定的值。
本申请实施例将工作节点划分成一个或多个分组,由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据,进而训练决策树对象,因此,每个分组中的工作节点仅是读取部分的样本数据,而不需要扫描一次完整的样本数据,大大降低了数据的读取量,减少了耗费的时间,进而减少模型的迭代更新时间、提高训练效率。
在本申请的一个实施例中,步骤103可以包括如下子步骤:
子步骤S11,当所述目标样本数据的属性信息的值为枚举值时,计算所述属性信息的值的权重;
在实现应用中,属性信息的值一般分为连续值和枚举值,枚举值又称离散值,即不连续的值。
例如,表1中体温的值为冷血、恒温,属于枚举值。
在本申请实施例中,针对枚举值的属性信息,利用其重要性(权重)排序来计算最佳分裂点,来提升加速比。
在一个示例中,当属性信息的分类列为二分类(即具有两个分类)时,计算该属性信息的值对于分类列的频率,对频率进行归一化,获得权重。
在另一个示例中,当属性信息的分类列为多分类(即具有三个或三个以上的分类)时,计算属性信息的值针对分类列的权重概率矩阵,其中,权重概率矩阵的横坐标为属性信息的值、纵坐标为分类列的值。
对所述权重概率矩阵进行主成分分析(Principal Component Analysis,PCA),获得最大特征值对应的特征向量,将权重概率矩阵乘以特征向量,获得权重。
子步骤S12,按照所述权重对所述属性信息的值进行排序;
在具体实现中,可以按照权重对属性信息的值进行顺序排序,也可以倒序排序,本申请实施例对此不加以限制。
子步骤S13,采用排序后的属性信息的值计算基尼系数;
基尼系数Gini,可以用于决策树的节点的分裂标准,样本总体内包含的类别越杂乱,Gini指数就越大。
在实际应用中,可以按照排序的顺序依次将排序后的属性信息的值划分为两个属性子集。
假设按权重排序得到的有序属性信息的值序列为f=(a1,a2, a3……an),那么,可以划分为左子树(属性子集)为a1~ai,右子树(属性子集)为ai+1~an,其中,i=1,2,…,n-1。
依次采用两个属性子集计算基尼系数Gini。
假设有k个分类,样本数据属于第i类的概率为pi,则基尼指数Gini定义为:
Gini(D) = 1 − Σ_{i=1}^{k} p_i²
如果数据集合D被划分成D1和D2两部分,则在该条件下,集合D的基尼增益定义为:
Gini_gain(D) = (|D1|/|D|)·Gini(D1) + (|D2|/|D|)·Gini(D2)
子步骤S14,按照所述基尼系数针对决策树对象的树节点进行分裂处理。
基尼指数Gini表示数据集合的不确定性,基尼指数Gini的值越大,样本属于某个分类的不确定性也就越大。因此,最好的选取特征划分就是使得数据集合的基尼指数Gini最小的划分。
本申请实施例对此枚举值的属性,通过重要性排序的方式计算分裂点,无需进行穷举,大大减少了分裂点的计算量,假设属性有n个值,通过重要性排序的方式计算分裂点的方式,计算的复杂度可以从穷举法的O(2^(n-1)-1),降低到O(n),减少了训练时间的耗费,进而减少模型的迭代更新时间、提高训练效率。
需要说明的是,对于方法实施例,为了简单描述,故将其都表述为一系列的动作组合,但是本领域技术人员应该知悉,本申请实施例并不受所描述的动作顺序的限制,因为依据本申请实施例,某些步骤可以采用其他顺序或者同时进行。其次,本领域技术人员也应该知悉,说明书中所描述 的实施例均属于优选实施例,所涉及的动作并不一定是本申请实施例所必须的。
参照图4,示出了本申请的一种基于随机森林的模型训练装置实施例的结构框图,具体可以包括如下模块:
分组划分模块401,用于将工作节点划分成一个或多个分组;
随机采样模块402,用于由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据;
决策树训练模块403,用于由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象。
在本申请的一个实施例中,每个分组中的工作节点包括一个或多个第一工作节点以及一个或多个第二工作节点;
所述随机采样模块401可以包括如下子模块:
部分数据读取子模块,用于在每个分组中,由每个第一工作节点从预置的样本数据中读取部分样本数据;
数据随机分发子模块,用于由每个第一工作节点将读取的部分样本数据随机分发至每个第二工作节点中,以分发至第二工作节点的样本数据作为目标样本数据。
在本申请的一个实施例中,所述决策树训练模块403可以包括如下子模块:
节点训练子模块,用于在每个分组中,由每个第二工作节点采用所述目标样本数据训练一个决策树对象。
在本申请的一个实施例中,所述决策树训练模块403可以包括如下子模块:
权重计算子模块,用于在所述目标样本数据的属性信息的值为枚举值时,计算所述属性信息的值的权重;
排序子模块,用于按照所述权重对所述属性信息的值进行排序;
基尼系数计算子模块,用于采用排序后的属性信息的值计算基尼系数;
分裂子模块,用于按照所述基尼系数针对决策树对象的树节点进行分裂处理。
在本申请的一个实施例中,所述权重计算子模块可以包括如下单元:
频率计算单元,用于在所述属性信息的分类列为二分类时,计算所述属性信息的值对于所述分类列的频率;
归一化单元,用于对所述频率进行归一化,获得权重。
在本申请的一个实施例中,所述权重计算子模块可以包括如下单元:
权重概率矩阵计算单元,用于在所述属性信息的分类列为多分类时,计算所述属性信息的值针对所述分类列的权重概率矩阵,其中,所述权重概率矩阵的横坐标为所述属性信息的值、纵坐标为所述分类列的值;
主成分分析单元,用于对所述权重概率矩阵进行主成分分析,获得最大特征值对应的特征向量;
权重获得单元,用于将所述权重概率矩阵乘以所述特征向量,获得权重。
在本申请的一个实施例中,所述基尼系数计算子模块可以包括如下单元:
子集划分单元,用于按照排序的顺序依次将排序后的属性信息的值划分为两个属性子集;
子集计算单元,用于依次采用所述两个属性子集计算基尼系数。
对于装置实施例而言,由于其与方法实施例基本相似,所以描述的比较简单,相关之处参见方法实施例的部分说明即可。
本说明书中的各个实施例均采用递进的方式描述,每个实施例重点说明的都是与其他实施例的不同之处,各个实施例之间相同相似的部分互相参见即可。
本领域内的技术人员应明白,本申请实施例的实施例可提供为方法、装置、或计算机程序产品。因此,本申请实施例可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。
在一个典型的配置中,所述计算机设备包括一个或多个处理器(CPU)、输入/输出接口、网络接口和内存。内存可能包括计算机可读介质中的非永久性存储器,随机存取存储器(RAM)和/或非易失性内存等形式,如只读存储器(ROM)或闪存(flash RAM)。内存是计算机可读介质的示例。计算机可读介质包括永久性和非永久性、可移动和非可移动媒体可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括,但不限于相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储 器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。按照本文中的界定,计算机可读介质不包括非持续性的电脑可读媒体(transitory media),如调制的数据信号和载波。
本申请实施例是参照根据本申请实施例的方法、终端设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理终端设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理终端设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理终端设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理终端设备上,使得在计算机或其他可编程终端设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程终端设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管已描述了本申请实施例的优选实施例,但本领域内的技术人员一旦得知了基本创造性概念,则可对这些实施例做出另外的变更和修改。所以,所附权利要求意欲解释为包括优选实施例以及落入本申请实施例范围的所有变更和修改。
最后,还需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者终端设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者终端设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者终端设备中还存在另外的相同要素。
以上对本申请所提供的一种基于随机森林的模型方法和一种基于随机森林的模型装置,进行了详细介绍,本文中应用了具体个例对本申请的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本申请 的方法及其核心思想;同时,对于本领域的一般技术人员,依据本申请的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本申请的限制。

Claims (14)

  1. 一种基于随机森林的模型训练方法,其特征在于,包括:
    将工作节点划分成一个或多个分组;
    由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据;
    由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象。
  2. 根据权利要求1所述的方法,其特征在于,每个分组中的工作节点包括一个或多个第一工作节点以及一个或多个第二工作节点;
    所述由每个分组中的工作节点从预置的样本数据中进行随机采样的,获得目标样本数据步骤包括:
    在每个分组中,由每个第一工作节点从预置的样本数据中读取部分样本数据;
    由每个第一工作节点将读取的部分样本数据随机分发至每个第二工作节点中,以分发至第二工作节点的样本数据作为目标样本数据。
  3. 根据权利要求2所述的方法,其特征在于,所述由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象的步骤包括:
    在每个分组中,由每个第二工作节点采用所述目标样本数据训练一个决策树对象。
  4. 根据权利要求1或2或3所述的方法,其特征在于,所述由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象的步骤包括:
    当所述目标样本数据的属性信息的值为枚举值时,计算所述属性信息的值的权重;
    按照所述权重对所述属性信息的值进行排序;
    采用排序后的属性信息的值计算基尼系数;
    按照所述基尼系数针对决策树对象的树节点进行分裂处理。
  5. 根据权利要求4所述的方法,其特征在于,所述计算所述属性信息的值的权重的步骤包括:
    当所述属性信息的分类列为二分类时,计算所述属性信息的值对于所述分类列的频率;
    对所述频率进行归一化,获得权重。
  6. 根据权利要求4所述的方法,其特征在于,所述计算所述属性信息的值的权重的步骤包括:
    当所述属性信息的分类列为多分类时,计算所述属性信息的值针对所述分类列的权重概率矩阵,其中,所述权重概率矩阵的横坐标为所述属性信息的值、纵坐标为所述分类列的值;
    对所述权重概率矩阵进行主成分分析,获得最大特征值对应的特征向量;
    将所述权重概率矩阵乘以所述特征向量,获得权重。
  7. 根据权利要求4所述的方法,其特征在于,所述采用排序后的属性信息的值计算基尼系数的步骤包括:
    按照排序的顺序依次将排序后的属性信息的值划分为两个属性子集;
    依次采用所述两个属性子集计算基尼系数。
  8. 一种基于随机森林的模型训练装置,其特征在于,包括:
    分组划分模块,用于将工作节点划分成一个或多个分组;
    随机采样模块,用于由每个分组中的工作节点从预置的样本数据中进行随机采样,获得目标样本数据;
    决策树训练模块,用于由每个分组中的工作节点采用所述目标样本数据训练一个或多个决策树对象。
  9. 根据权利要求8所述的装置,其特征在于,每个分组中的工作节点包括一个或多个第一工作节点以及一个或多个第二工作节点;
    所述随机采样模块包括:
    部分数据读取子模块,用于在每个分组中,由每个第一工作节点从预置的样本数据中读取部分样本数据;
    数据随机分发子模块,用于由每个第一工作节点将读取的部分样本数据随机分发至每个第二工作节点中,以分发至第二工作节点的样本数据作为目标样本数据。
  10. 根据权利要求9所述的装置,其特征在于,所述决策树训练模块包括:
    节点训练子模块,用于在每个分组中,由每个第二工作节点采用所述目标样本数据训练一个决策树对象。
  11. 根据权利要求8或9或10所述的装置,其特征在于,所述决策树训练模块包括:
    权重计算子模块,用于在所述目标样本数据的属性信息的值为枚举值时,计算所述属性信息的值的权重;
    排序子模块,用于按照所述权重对所述属性信息的值进行排序;
    基尼系数计算子模块,用于采用排序后的属性信息的值计算基尼系数;
    分裂子模块,用于按照所述基尼系数针对决策树对象的树节点进行分裂处理。
  12. 根据权利要求11所述的装置,其特征在于,所述权重计算子模块包括:
    频率计算单元,用于在所述属性信息的分类列为二分类时,计算所述属性信息的值对于所述分类列的频率;
    归一化单元,用于对所述频率进行归一化,获得权重。
  13. 根据权利要求11所述的装置,其特征在于,所述权重计算子模块包括:
    权重概率矩阵计算单元,用于在所述属性信息的分类列为多分类时,计算所述属性信息的值针对所述分类列的权重概率矩阵,其中,所述权重概率矩阵的横坐标为所述属性信息的值、纵坐标为所述分类列的值;
    主成分分析单元,用于对所述权重概率矩阵进行主成分分析,获得最大特征值对应的特征向量;
    权重获得单元,用于将所述权重概率矩阵乘以所述特征向量,获得权重。
  14. 根据权利要求11所述的装置,其特征在于,所述基尼系数计算子模块包括:
    子集划分单元,用于按照排序的顺序依次将排序后的属性信息的值划分为两个属性子集;
    子集计算单元,用于依次采用所述两个属性子集计算基尼系数。
PCT/CN2017/077704 2016-03-31 2017-03-22 一种基于随机森林的模型训练方法和装置 WO2017167097A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/146,907 US11276013B2 (en) 2016-03-31 2018-09-28 Method and apparatus for training model based on random forest

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610201626.8 2016-03-31
CN201610201626.8A CN107292186B (zh) 2016-03-31 2016-03-31 一种基于随机森林的模型训练方法和装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/146,907 Continuation US11276013B2 (en) 2016-03-31 2018-09-28 Method and apparatus for training model based on random forest

Publications (1)

Publication Number Publication Date
WO2017167097A1 true WO2017167097A1 (zh) 2017-10-05

Family

ID=59962562

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077704 WO2017167097A1 (zh) 2016-03-31 2017-03-22 一种基于随机森林的模型训练方法和装置

Country Status (4)

Country Link
US (1) US11276013B2 (zh)
CN (1) CN107292186B (zh)
TW (1) TW201737058A (zh)
WO (1) WO2017167097A1 (zh)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257354A (zh) * 2018-09-25 2019-01-22 平安科技(深圳)有限公司 基于模型树算法的异常流量分析方法及装置、电子设备
CN109783967A (zh) * 2019-01-25 2019-05-21 深圳大学 一种滑坡预测方法及系统
CN109857862A (zh) * 2019-01-04 2019-06-07 平安科技(深圳)有限公司 基于智能决策的文本分类方法、装置、服务器及介质
CN110084377A (zh) * 2019-04-30 2019-08-02 京东城市(南京)科技有限公司 用于构建决策树的方法和装置
CN111061968A (zh) * 2019-11-15 2020-04-24 北京三快在线科技有限公司 排序方法、装置、电子设备及可读存储介质
WO2020116727A1 (ko) * 2018-12-04 2020-06-11 주식회사 엘지생활건강 자외선 차단지수 산출 장치, 자외선 차단지수 산출 방법
CN111738297A (zh) * 2020-05-26 2020-10-02 平安科技(深圳)有限公司 特征选择方法、装置、设备及存储介质
CN113254494A (zh) * 2020-12-04 2021-08-13 南理工泰兴智能制造研究院有限公司 一种新能源研发分类记录方法
CN115374763A (zh) * 2022-10-24 2022-11-22 北京睿企信息科技有限公司 一种获取用户优先级的系统
CN111738297B (zh) * 2020-05-26 2024-11-19 平安科技(深圳)有限公司 特征选择方法、装置、设备及存储介质

Families Citing this family (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107844308A (zh) * 2017-10-18 2018-03-27 阿里巴巴集团控股有限公司 一种信息处理方法、装置及设备
CN109993391B (zh) * 2017-12-31 2021-03-26 中国移动通信集团山西有限公司 网络运维任务工单的派发方法、装置、设备及介质
CN108304354B (zh) * 2018-01-25 2021-08-24 腾讯科技(深圳)有限公司 一种预测模型训练方法及装置、存储介质、电子设备
CN110232393B (zh) * 2018-03-05 2022-11-04 腾讯科技(深圳)有限公司 数据的处理方法、装置、存储介质和电子装置
CN110827131B (zh) * 2018-07-23 2022-06-28 中国软件与技术服务股份有限公司 一种分布式自动特征组合的纳税人信用评估方法
CN109145959A (zh) * 2018-07-27 2019-01-04 东软集团股份有限公司 一种特征选择方法、装置及设备
CN109242012A (zh) * 2018-08-27 2019-01-18 平安科技(深圳)有限公司 分组归纳方法及装置、电子装置及计算机可读存储介质
CN109214671B (zh) * 2018-08-27 2022-03-01 平安科技(深圳)有限公司 人员分组方法、装置、电子装置及计算机可读存储介质
CN110889308A (zh) * 2018-09-07 2020-03-17 中国石油化工股份有限公司 一种基于机器学习的地震震相初至识别方法及识别系统
CN109284382B (zh) * 2018-09-30 2021-05-28 武汉斗鱼网络科技有限公司 一种文本分类方法及计算装置
US11625640B2 (en) * 2018-10-05 2023-04-11 Cisco Technology, Inc. Distributed random forest training with a predictor trained to balance tasks
TWI721331B (zh) * 2018-11-06 2021-03-11 中華電信股份有限公司 分類裝置及分類方法
CN109587000B (zh) * 2018-11-14 2020-09-15 上海交通大学 基于群智网络测量数据的高延迟异常检测方法及系统
CN109697049A (zh) * 2018-12-28 2019-04-30 拉扎斯网络科技(上海)有限公司 数据处理方法、装置、电子设备及计算机可读存储介质
US11532132B2 (en) * 2019-03-08 2022-12-20 Mubayiwa Cornelious MUSARA Adaptive interactive medical training program with virtual patients
CN110321945A (zh) * 2019-06-21 2019-10-11 深圳前海微众银行股份有限公司 扩充样本方法、终端、装置及可读存储介质
CN110569659B (zh) * 2019-07-01 2021-02-05 创新先进技术有限公司 数据处理方法、装置和电子设备
CN110298709B (zh) * 2019-07-09 2023-08-01 广州品唯软件有限公司 一种超大规模数据的预估方法和装置
CN112437469B (zh) * 2019-08-26 2024-04-05 中国电信股份有限公司 服务质量保障方法、装置和计算机可读存储介质
CN110837911B (zh) * 2019-09-06 2021-02-05 沈阳农业大学 一种大尺度地表节肢动物空间分布模拟方法
CN110633667B (zh) * 2019-09-11 2021-11-26 沈阳航空航天大学 一种基于多任务随机森林的动作预测方法
CN110691073A (zh) * 2019-09-19 2020-01-14 中国电子科技网络信息安全有限公司 一种基于随机森林的工控网络暴力破解流量检测方法
CN110705683B (zh) * 2019-10-12 2021-06-29 腾讯科技(深圳)有限公司 随机森林模型的构造方法、装置、电子设备及存储介质
CN110837875B (zh) * 2019-11-18 2022-07-05 国家基础地理信息中心 地表覆盖数据质量异常判断方法及装置
CN111126434B (zh) * 2019-11-19 2023-07-11 山东省科学院激光研究所 基于随机森林的微震初至波到时自动拾取方法及系统
US11704601B2 (en) * 2019-12-16 2023-07-18 Intel Corporation Poisson distribution based approach for bootstrap aggregation in a random forest
CN111159369B (zh) * 2019-12-18 2023-12-05 平安健康互联网股份有限公司 多轮智能问诊方法、装置及计算机可读存储介质
CN111309817B (zh) * 2020-01-16 2023-11-03 秒针信息技术有限公司 行为识别方法、装置及电子设备
CN111259975B (zh) * 2020-01-21 2022-07-22 支付宝(杭州)信息技术有限公司 分类器的生成方法及装置、文本的分类方法及装置
CN111814846B (zh) * 2020-06-19 2023-08-01 浙江大华技术股份有限公司 属性识别模型的训练方法、识别方法及相关设备
CN111813581B (zh) * 2020-07-24 2022-07-05 成都信息工程大学 一种基于完全二叉树的容错机制的配置方法
CN112052875B (zh) * 2020-07-30 2024-08-20 华控清交信息科技(北京)有限公司 一种训练树模型的方法、装置和用于训练树模型的装置
CN112183623A (zh) * 2020-09-28 2021-01-05 湘潭大学 基于风电运维人员紧张程度的运维操作方法
CN113067522B (zh) * 2021-03-29 2023-08-01 杭州吉易物联科技有限公司 基于rf-ga-svm算法的升降机输出电压控制方法
CN113516178A (zh) * 2021-06-22 2021-10-19 常州微亿智造科技有限公司 工业零部件的缺陷检测方法、缺陷检测装置
CN113379301A (zh) * 2021-06-29 2021-09-10 未鲲(上海)科技服务有限公司 通过决策树模型对用户进行分类的方法、装置和设备
CN113553514B (zh) * 2021-09-22 2022-08-19 腾讯科技(深圳)有限公司 基于人工智能的对象推荐方法、装置及电子设备
CN114399000A (zh) * 2022-01-20 2022-04-26 中国平安人寿保险股份有限公司 树模型的对象可解释性特征提取方法、装置、设备及介质
CN115081509A (zh) * 2022-05-06 2022-09-20 大同公元三九八智慧养老服务有限公司 用于嵌入式设备的跌倒判断模型构建方法及嵌入式设备
CN115001763B (zh) * 2022-05-20 2024-03-19 北京天融信网络安全技术有限公司 钓鱼网站攻击检测方法、装置、电子设备及存储介质
CN114666590A (zh) * 2022-05-25 2022-06-24 宁波康达凯能医疗科技有限公司 一种基于负载均衡的全视场视频编码方法与系统
CN116186628B (zh) * 2023-04-23 2023-07-07 广州钛动科技股份有限公司 App应用自动打标方法和系统
CN117370899B (zh) * 2023-12-08 2024-02-20 中国地质大学(武汉) 一种基于主成分-决策树模型的控矿因素权重确定方法
CN117540830B (zh) * 2024-01-05 2024-04-12 中国地质科学院探矿工艺研究所 基于断层分布指数的泥石流易发性预测方法、装置及介质
CN118471404B (zh) * 2024-07-10 2024-10-11 浙江七星纺织有限公司 抗静电面料的抗静电性能测试方法及装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006236153A (ja) * 2005-02-25 2006-09-07 Dainippon Sumitomo Pharma Co Ltd 機能性核酸配列解析方法
CN103473231A (zh) * 2012-06-06 2013-12-25 深圳先进技术研究院 分类器构建方法和系统
CN104392250A (zh) * 2014-11-21 2015-03-04 浪潮电子信息产业股份有限公司 一种基于MapReduce的图像分类方法
CN105303262A (zh) * 2015-11-12 2016-02-03 河海大学 一种基于核主成分分析和随机森林的短期负荷预测方法

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7028250B2 (en) * 2000-05-25 2006-04-11 Kanisa, Inc. System and method for automatically classifying text
US6996575B2 (en) * 2002-05-31 2006-02-07 Sas Institute Inc. Computer-implemented system and method for text-based document processing
US8578041B2 (en) * 2005-06-03 2013-11-05 Adobe Systems Incorporated Variable sampling rates for website visitation analysis
US8935249B2 (en) * 2007-06-26 2015-01-13 Oracle Otc Subsidiary Llc Visualization of concepts within a collection of information
US8194933B2 (en) * 2007-12-12 2012-06-05 3M Innovative Properties Company Identification and verification of an unknown document according to an eigen image process
CN103258049A (zh) * 2013-05-27 2013-08-21 重庆邮电大学 一种基于海量数据的关联规则挖掘方法
US9331943B2 (en) 2013-09-10 2016-05-03 Robin Systems, Inc. Asynchronous scheduling informed by job characteristics and anticipatory provisioning of data for real-time, parallel processing
US10635644B2 (en) * 2013-11-11 2020-04-28 Amazon Technologies, Inc. Partition-based data stream processing framework
US10318882B2 (en) * 2014-09-11 2019-06-11 Amazon Technologies, Inc. Optimized training of linear machine learning models
US20160132787A1 (en) * 2014-11-11 2016-05-12 Massachusetts Institute Of Technology Distributed, multi-model, self-learning platform for machine learning
CN104750800A (zh) * 2014-11-13 2015-07-01 安徽四创电子股份有限公司 一种基于出行时间特征的机动车聚类方法
CN104679911B (zh) * 2015-03-25 2018-03-27 武汉理工大学 一种基于离散弱相关的云平台决策森林分类方法
CN105373606A (zh) * 2015-11-11 2016-03-02 重庆邮电大学 一种改进c4.5决策树算法下的不平衡数据抽样方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006236153A (ja) * 2005-02-25 2006-09-07 Dainippon Sumitomo Pharma Co Ltd 機能性核酸配列解析方法
CN103473231A (zh) * 2012-06-06 2013-12-25 深圳先进技术研究院 分类器构建方法和系统
CN104392250A (zh) * 2014-11-21 2015-03-04 浪潮电子信息产业股份有限公司 一种基于MapReduce的图像分类方法
CN105303262A (zh) * 2015-11-12 2016-02-03 河海大学 一种基于核主成分分析和随机森林的短期负荷预测方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU , KAI ET AL.: "Realization of RBM Training Based on Hadoop-GPU", MICROELECTRONICS & COMPUTER, vol. 32, no. 9, 30 September 2015 (2015-09-30), pages 70 - 72, XP055424631, ISSN: 1000-7180 *

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257354B (zh) * 2018-09-25 2021-11-12 平安科技(深圳)有限公司 基于模型树算法的异常流量分析方法及装置、电子设备
CN109257354A (zh) * 2018-09-25 2019-01-22 平安科技(深圳)有限公司 基于模型树算法的异常流量分析方法及装置、电子设备
WO2020116727A1 (ko) * 2018-12-04 2020-06-11 주식회사 엘지생활건강 자외선 차단지수 산출 장치, 자외선 차단지수 산출 방법
CN109857862A (zh) * 2019-01-04 2019-06-07 平安科技(深圳)有限公司 基于智能决策的文本分类方法、装置、服务器及介质
CN109857862B (zh) * 2019-01-04 2024-04-19 平安科技(深圳)有限公司 基于智能决策的文本分类方法、装置、服务器及介质
CN109783967A (zh) * 2019-01-25 2019-05-21 深圳大学 一种滑坡预测方法及系统
CN110084377B (zh) * 2019-04-30 2023-09-29 京东城市(南京)科技有限公司 用于构建决策树的方法和装置
CN110084377A (zh) * 2019-04-30 2019-08-02 京东城市(南京)科技有限公司 用于构建决策树的方法和装置
CN111061968B (zh) * 2019-11-15 2023-05-30 北京三快在线科技有限公司 排序方法、装置、电子设备及可读存储介质
CN111061968A (zh) * 2019-11-15 2020-04-24 北京三快在线科技有限公司 排序方法、装置、电子设备及可读存储介质
CN111738297A (zh) * 2020-05-26 2020-10-02 平安科技(深圳)有限公司 特征选择方法、装置、设备及存储介质
CN111738297B (zh) * 2020-05-26 2024-11-19 平安科技(深圳)有限公司 特征选择方法、装置、设备及存储介质
CN113254494A (zh) * 2020-12-04 2021-08-13 南理工泰兴智能制造研究院有限公司 一种新能源研发分类记录方法
CN113254494B (zh) * 2020-12-04 2023-12-08 南理工泰兴智能制造研究院有限公司 一种新能源研发分类记录方法
CN115374763A (zh) * 2022-10-24 2022-11-22 北京睿企信息科技有限公司 一种获取用户优先级的系统
CN115374763B (zh) * 2022-10-24 2022-12-23 北京睿企信息科技有限公司 一种获取用户优先级的系统

Also Published As

Publication number Publication date
US11276013B2 (en) 2022-03-15
TW201737058A (zh) 2017-10-16
US20190034834A1 (en) 2019-01-31
CN107292186A (zh) 2017-10-24
CN107292186B (zh) 2021-01-12

Similar Documents

Publication Publication Date Title
WO2017167097A1 (zh) 一种基于随机森林的模型训练方法和装置
Sardar et al. An analysis of MapReduce efficiency in document clustering using parallel K-means algorithm
Anghel et al. Benchmarking and optimization of gradient boosting decision tree algorithms
Shahrivari et al. Single-pass and linear-time k-means clustering based on MapReduce
Bharill et al. Fuzzy based scalable clustering algorithms for handling big data using apache spark
Huang et al. Graphgdp: Generative diffusion processes for permutation invariant graph generation
US8515956B2 (en) Method and system for clustering datasets
Sarazin et al. SOM clustering using spark-mapreduce
Chen et al. Efficient maximum closeness centrality group identification
Al-Sawwa et al. Performance evaluation of a cost-sensitive differential evolution classifier using spark–Imbalanced binary classification
Sahoo et al. A novel approach for distributed frequent pattern mining algorithm using load-matrix
Lu et al. An improved k-means distributed clustering algorithm based on spark parallel computing framework
Chen et al. Parallel mining frequent patterns over big transactional data in extended mapreduce
Aparajita et al. Comparative analysis of clustering techniques in cloud for effective load balancing
Choi et al. Dynamic nonparametric random forest using covariance
Sukanya et al. Benchmarking support vector machines implementation using multiple techniques
Maithri et al. Parallel agglomerative hierarchical clustering algorithm implementation with hadoop MapReduce
Koli et al. Parallel decision tree with map reduce model for big data analytics
Rastogi et al. Unsupervised Classification of Mixed Data Type of Attributes Using Genetic Algorithm (Numeric, Categorical, Ordinal, Binary, Ratio-Scaled)
Łukasik et al. Efficient astronomical data condensation using approximate nearest neighbors
Lu et al. The research of decision tree mining based on Hadoop
Vardhan et al. A comprehensive analysis of the most common hard clustering algorithms
Soufi et al. A Survey on Big Data and Knowledge Acquisition Techniques
Verma et al. A Heuristic Approach To Redefine FIS By Matrix Implementation Through Update Apriori ‘HuApriori’In Textual Data Set
Cai et al. CDFRS: A scalable sampling approach for efficient big data analysis

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17773127

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17773127

Country of ref document: EP

Kind code of ref document: A1