WO2016090290A1 - Method and apparatus for decision tree based search result ranking - Google Patents
Method and apparatus for decision tree based search result ranking
- Publication number
- WO2016090290A1 (PCT/US2015/064069)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- splitting
- feature
- training
- nodes
- decision tree
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
- G06N5/045—Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/24—Querying
- G06F16/245—Query processing
- G06F16/2457—Query processing with adaptation to user needs
- G06F16/24578—Query processing with adaptation to user needs using ranking
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
A method of decision tree based search result ranking includes obtaining a training data set for generating at least one decision tree, the training data set having N training features and N greater than or equal to 2. The method further includes dividing a computational system of decision trees into N feature work groups corresponding to the N training features respectively, and by use of the feature work groups, computing splitting nodes and splitting values corresponding to the splitting nodes for the decision trees. The method also includes generating the decision trees using the computed splitting nodes and the corresponding splitting values; and ranking search results using the decision trees.
Description
METHOD AND APPARATUS FOR DECISION TREE BASED SEARCH RESULT RANKING
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application claims the benefit of Chinese Patent Application No. 201410742828.4, filed on Dec. 5, 2014, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0002] The present disclosure relates generally to data searching, and more particularly to decision tree based search result ranking.
BACKGROUND
[0003] With the rapid development of Internet technologies, search engines are becoming a primary approach for users to obtain information of interest. In general, a user enters key words or key phrases into a search engine to search for such information of interest. Different search engines generally utilize different ranking factors to rank the returned search results and then present them to the user in a ranked order.
[0004] For existing search engines, because users have varying habits of entering key words or key phrases, and because each search engine computes degrees of relevance between search results and key words/phrases differently, the ranking results vary accordingly. In order to obtain search results satisfactory to users, a commonly practiced approach nowadays is to utilize machine learning methods to establish ranking models, and then apply the established ranking models to rank the search results. The decision tree model, a classic model among machine learning methods, handles both classification and regression analysis. The GBDT (Gradient Boosting Decision Tree), one of the decision tree models, essentially utilizes regression decision trees to solve ranking problems.
[0005] However, regardless of which type of decision tree is utilized to establish a ranking model, the ranking model can only be established by training with training data sets in which the relevance between the search key words/phrases and the search results is known. In general, training data sets include hundreds of millions of records, and training a ranking model with such a large amount of data is significantly time-consuming. Further, for different search key words/phrases entered in different fields, a large number of different ranking models need to be established, let alone the problem of data updating. Therefore, there exists a need to improve the efficiency of establishing ranking models.
SUMMARY
[0006] An object of the present invention is to provide a decision tree based search result ranking method and apparatus that, when training with data sets of large volumes, e.g., hundreds of millions of records, greatly decrease the amount of computational time, improve ranking efficiency and ranking flexibility, and lower ranking associated costs.
[0007] To solve the above described technical problems, according to an exemplary embodiment in accordance with the present disclosure, a method of decision tree based search result ranking includes obtaining a training data set for generating at least one decision tree which is used for ranking, the training data set having N training features and N being a natural number greater than or equal to 2. The method further includes dividing the computational system of the decision trees into N feature work groups, each feature work group corresponding to a training feature of the N training features. The method also includes, by use of the feature work groups, computing splitting nodes and splitting values corresponding to the splitting nodes for the decision trees. The method also includes generating the decision trees using the computed splitting nodes and the corresponding splitting values; and ranking search results using the decision trees.
[0008] According to another exemplary embodiment in accordance with the present disclosure, an apparatus for ranking search results based on decision trees includes a processor and a non-transitory computer-readable medium operably coupled to the processor. The non-transitory computer-readable medium has computer-readable instructions stored thereon to be executed when accessed by the processor. The instructions include an acquisition module, a division module, a computing module and a ranking module. The acquisition module is configured for obtaining a training data set for generating at least one decision tree, the training data set having N training features and N greater than or equal to 2. The division module is configured for dividing a computational system of decision trees into N feature work groups corresponding to the N training features respectively. The computing module is configured for, by use of the feature work groups, computing splitting nodes and splitting values corresponding to the splitting nodes for the decision trees; and for generating the decision trees using the computed splitting nodes and the corresponding splitting values. The ranking module is configured for ranking search results using the decision trees.
[0009] In comparison with existent technologies, embodiments in accordance with the present disclosure provide for the following differences and effects: the usage of dividing the computational system of decision trees into feature work groups based on training features, and the parallel computation and transmission of information based on the feature work groups, provides for training
with training data sets of significantly large volumes, e.g., hundreds of millions of data, with decreased computational time. Especially for search engines with large correspondent databases, it provides for fast and precise training for a good quality decision tree to be used for ranking, increasing ranking efficiency and ranking flexibility, as well as lowering ranking associated costs.
[0010] Furthermore, the usage of dividing the computational system of decision trees in the two dimensions of training features and training samples at the same time further provides for increased training efficiency for training data sets. For example, for a training data set with three hundred million records, a good quality decision tree model can be trained within a few hours.
[0011] The details of one or more embodiments of the disclosure are set forth in the
accompanying drawings and the description below. Other features, objects, and advantages of the disclosure will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF THE DRAWINGS
[0012] The accompanying drawings, which are incorporated in and form a part of this specification and in which like numerals depict like elements, illustrate embodiments of the present disclosure and, together with the description, serve to explain the principles of the disclosure.
[0013] FIG. 1 is a flow chart of an exemplary method of decision tree based search result ranking in accordance with a first embodiment of the present disclosure;
[0014] FIG. 2 is a schematic diagram of exemplary feature work groups of a computational system of decision trees divided using MPI protocols, in accordance with a second embodiment of the present disclosure;
[0015] FIG. 3 is a schematic diagram of an exemplary distributed memory data structure of a feature work group of a computational system of decision trees divided using MPI protocols, in accordance with a second embodiment of the present disclosure; and
[0016] FIG. 4 is a block diagram of an exemplary apparatus for decision tree based search result ranking in accordance with a third embodiment of the present disclosure.
DETAILED DESCRIPTION
[0017] In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. However, it will become obvious to those skilled in the art that the present disclosure may be practiced without these specific details. The descriptions and representations herein are the common means used by those experienced or skilled in the art to
most effectively convey the substance of their work to others skilled in the art. In other instances, well-known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the present disclosure.
[0018] Reference herein to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the disclosure. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the order of blocks in process flowcharts or diagrams representing one or more embodiments of the disclosure do not inherently indicate any particular order nor imply any limitations in the disclosure.
[0019] Embodiments of the present disclosure are discussed herein with reference to FIGS. 1-4. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the disclosure extends beyond these limited embodiments.
[0020] Referring to FIG. 1, a flow chart of an exemplary method of decision tree based search result ranking is shown in accordance with a first embodiment of the present disclosure. The method 100 starts in step 101, where a training data set is obtained for generating at least one decision tree which is used for ranking search results. The training data set has N training features, where N is a natural number greater than or equal to 2. In step 102, the computational system of decision trees is divided into N feature work groups, each feature work group corresponding to a training feature of the N training features respectively. Then in step 103, by use of the feature work groups, optimal splitting nodes and optimal splitting values corresponding to the optimal splitting nodes are computed for each decision tree. Based on the computed optimal splitting nodes and optimal splitting values, each decision tree is generated accordingly. The method 100 concludes after step 104, where the search results are ranked using all the generated decision trees.
[0021] In some preferred embodiments of the present disclosure, the number of decision trees is greater than or equal to 2; and the step 103 further includes the step of determining whether the total number of the splitting nodes computed for the present decision tree exceeds a pre-determined threshold value. If so, the step 103 concludes computing optimal splitting nodes and their corresponding splitting values, and starts to generate a next decision tree, or proceeds to step 104.
[0022] If not, for each feature work group, the step 103 further includes the steps of independently computing a present optimal splitting value for the training feature corresponding to the feature work group; and transmitting amongst the feature work groups, where a present optimal splitting value for the present decision tree is selected from all the present optimal splitting values computed by the feature work groups, and the training feature corresponding to the feature work group from which the selected present optimal splitting value is computed is assigned as a present optimal splitting node for the present decision tree. The step 103 further includes, by use of the feature work group corresponding to the selected present optimal splitting value, splitting the training data set based on the present decision tree's present optimal splitting value and present optimal splitting node to form present splitting nodes, where the splitting results of the splitting nodes are transmitted to the computational system of decision trees.
[0023] Furthermore, in some other preferred embodiments of the present disclosure, the above described step 104 includes the steps of fitting all the decision trees to obtain a ranking decision tree, and ranking search results based on degrees of relevance. The search results are retrieved using a search query, and the degrees of relevance are computed between the search results and the search query using the ranking decision tree.
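To make the ranking step concrete, the following is a minimal Python sketch of ranking retrieved search results with the fitted trees, where the degree of relevance is taken as the sum of the trees' outputs; the dict-based tree representation, the helper names (predict_tree, relevance, rank_results), and the example feature vectors are illustrative assumptions rather than the disclosure's implementation.

```python
# Hypothetical sketch: rank search results by summing the predictions of all
# generated decision trees (the fitted "ranking decision tree").
# The dict-based tree representation and the toy data are assumptions.

def predict_tree(tree, features):
    """Walk one regression tree; each leaf carries a relevance contribution."""
    node = tree
    while "leaf" not in node:
        fid, split_value = node["fid"], node["split_value"]
        node = node["left"] if features[fid] <= split_value else node["right"]
    return node["leaf"]

def relevance(trees, features):
    """Degree of relevance = sum of all tree outputs."""
    return sum(predict_tree(t, features) for t in trees)

def rank_results(trees, query_results):
    """query_results: list of (result_id, feature_vector) for one search query."""
    scored = [(rid, relevance(trees, f)) for rid, f in query_results]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# Example with two tiny hand-made trees (values are illustrative only).
trees = [
    {"fid": 0, "split_value": 0.5,
     "left": {"leaf": 0.1}, "right": {"leaf": 0.9}},
    {"fid": 1, "split_value": 2.0,
     "left": {"leaf": -0.2}, "right": {"leaf": 0.4}},
]
results = [("doc_a", [0.7, 3.0]), ("doc_b", [0.2, 1.0])]
print(rank_results(trees, results))  # doc_a ranks above doc_b
```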
[0024] In yet some other preferred embodiments of the present disclosure, the step 101 includes the step of obtaining the training data set from search histories collected on an e-commerce platform.
[0025] In accordance with embodiments of the present disclosure, each work group can communicate information in an inter-group manner amongst work groups, as well as in an intra-group manner amongst communication nodes, forming a communication domain. Further, all work groups can perform data processing in parallel.
[0026] The usage of dividing the computational system of decision trees into feature work groups based on training features, and the parallel computation and transmission of information amongst the feature work groups, provides for training with training data sets of significantly large volumes, e.g., hundreds of millions of data, with computational time decreased to a great extent. Especially for search engines with large correspondent databases, it provides for fast and precise training for a good quality decision tree which can be used for ranking, increasing ranking efficiency and ranking flexibility, as well as lowering ranking associated costs.
[0027] A second embodiment in accordance with the present disclosure relates to a method of decision tree based search result ranking. The second embodiment improves upon the first embodiment of the present disclosure, the improvements being dividing the computational system of decision trees in the two dimensions of training features and training samples at the same time, further providing for increased training efficiency for training data sets, and therefore increased ranking efficiency. For example, for a training data set with three hundred million records, a good quality decision tree model can be trained within a few hours.
[0028] In particular, the above described training data set includes M training samples, where M is a natural number greater than or equal to 2. The above described step 102 further includes the step of dividing each feature work group into M communication nodes corresponding to the M training samples respectively, where communication nodes belonging to different feature work groups but the same training sample form one sample work group. And the above described step of "for each feature work group, independently computing an optimal splitting value for the training feature corresponding to the feature work group" further includes the steps of: based on the generated decision trees corresponding to the training data set, for each sample work group, independently computing a gradient for each training sample of the sample work group; and based on the computed gradients, for each feature work group, independently computing an optimal splitting value for the training feature corresponding to the feature work group.
[0029] Further, it is understood that, in accordance with other alternative embodiments of the present disclosure, based on generated decision trees, by use of sample work groups, mis-classification information can be computed for each training sample. In other words, with an AdaBoost decision tree model, mis-classification information can be used to compute the optimal splitting nodes and optimal splitting values for the present to-be-generated decision tree.
Furthermore, each decision tree can also be generated independently, and all the generated decision trees then fitted into a final decision tree used for ranking, i.e., with a random forest model.
[0030] In accordance with other alternative embodiments of the present disclosure, a feature work group can be divided into fewer than M communication nodes, i.e., each sample work group can correspond to at least 2 training samples. For M training samples, each feature work group can be divided into K groups, where K is a natural number less than M, i.e., K does not necessarily equal M; for example, when K equals 2, the M training samples are divided into 2 groups, each feature work group having samples from 2 sample work groups.
[0031] In order to generate the first decision tree, each training sample can be assumed to have an initial value of 0 for the purpose of computing its gradient.
[0032] In accordance with another preferred embodiment of the present disclosure, the computational system of decision trees uses Message Passing Interface (MPI) protocols to accomplish the above described dividing into feature work groups and information communication amongst feature work groups. As shown in FIG. 2, using MPI, the computational system of decision trees is divided into N by M communication nodes, with N feature work groups 240_0, 240_1, . . ., 240_n and M sample work groups 220_0, 220_1, . . ., 220_m, where n and m are natural numbers. Data of the training data set has a schema of <target> <qid:queryID> <featureID1:value1> <featureID2:value2> . . . <featureIDn:valuen>, where <target> represents the target value for the present training sample, <qid:queryID> represents the present query identification, and <featureIDi:valuei> represents the value for the i-th feature, i being a natural number between 1 and n.
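As a small illustration of this schema, the sketch below parses one training record, assuming a whitespace-separated, colon-delimited textual form such as "2 qid:103 1:0.53 2:7.0"; the concrete delimiters and the helper name are assumptions, since the disclosure only specifies the abstract schema.

```python
# Hypothetical parser for one training record following the schema
# "<target> <qid:queryID> <featureID1:value1> ... <featureIDn:valuen>".
# The whitespace-separated, colon-delimited layout is an assumed concrete form.

def parse_training_line(line):
    parts = line.split()
    target = float(parts[0])
    qid = parts[1].split(":", 1)[1]          # "qid:103" -> "103"
    features = {}
    for token in parts[2:]:
        fid, value = token.split(":")
        features[int(fid)] = float(value)    # featureID -> feature value
    return target, qid, features

print(parse_training_line("2 qid:103 1:0.53 2:7.0 3:0.0"))
# (2.0, '103', {1: 0.53, 2: 7.0, 3: 0.0})
```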
[0033] Each feature work group can communicate in an intra-group manner; each feature work group includes M communication nodes, and each sample work group includes N communication nodes. During the entire computing process, data is stored in a distributed manner in memory. As shown in FIG. 3, data of the training data sets is divided based on training features, and each training feature is stored by its corresponding feature work group as feature work group memory data 300 in memory. Separately, the training features are further divided such that each communication node of a feature work group stores partial data, for example, in the form of Fi work memory data 320_0, 320_1, . . ., 320_m. As shown in FIG. 3, for example, with data divided based on queries, feature work groups' communication nodes (i.e., Fi_work_m, F2_work_1, etc.) store the following data: (1) training samples' feature values after division, (2) training samples' query ids after division, (3) training samples' target change values. Each sample work group also needs to store additional training related information such as (1) a training sample's negative gradient after division and (2) a training sample's present predictive value after division.
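One possible way to realize the N-by-M grid of communication nodes with MPI is sketched below using mpi4py communicator splits: ranks sharing a training feature form a feature work group, and ranks sharing a sample slice form a sample work group. The grid size, the rank-to-grid mapping, and the mpi4py usage are assumptions about how the described division could be implemented, not the patent's implementation.

```python
# Hypothetical mpi4py sketch: split COMM_WORLD into N feature work groups and
# M sample work groups, mirroring the N-by-M grid of FIG. 2. Run with N*M ranks,
# e.g. "mpiexec -n 6 python grid.py" for N = 3 features, M = 2 sample groups.
from mpi4py import MPI

N_FEATURES, M_SAMPLES = 3, 2           # assumed grid size for illustration

world = MPI.COMM_WORLD
rank = world.Get_rank()

feature_id = rank % N_FEATURES         # which training feature this rank serves
sample_id = rank // N_FEATURES         # which slice of training samples it holds

# All ranks sharing a feature form one feature work group (intra-group comm).
feature_group = world.Split(color=feature_id, key=sample_id)
# All ranks sharing a sample slice form one sample work group.
sample_group = world.Split(color=sample_id, key=feature_id)

print(f"world rank {rank}: feature group {feature_id} "
      f"(local rank {feature_group.Get_rank()}), "
      f"sample group {sample_id} (local rank {sample_group.Get_rank()})")
```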
[0034] The following, using the GBDT model as an example, illustrates a method of generating a GBDT ranking decision tree based on MPI protocols, in accordance with an embodiment of the present disclosure.
[0035] In generating a ranking decision tree using the GBDT model, there are two important steps: obtaining training samples' negative gradients, and generating decision trees.
[0036] (1) Obtain Training Samples' Negative Gradients
[0037] Data stored on the sample work groups' communication nodes is evenly divided (in other alternative embodiments of the present disclosure, data can be divided using other methods, depending on circumstances). For example, if the total number of sample queries is q_total, then sample work group 0 stores the (0, q_total/M) sequence of data, sample work group 1 stores the (q_total/M, q_total/M*2) sequence of data, and so on. Sample work groups are independent from each other; to establish the present decision tree, each sample work group independently computes, based on the previously established decision trees, the negative gradients of its own divided samples. If there are M sample work groups, then every sample work group only computes one sample's negative gradient. If there are fewer than M sample work groups, then every sample work group computes negative gradients for more than one sample. The communication nodes of a sample work group can co-operate to compute the gradients, with each communication node computing part of the sample gradients and, after computation, using intra-work-group communication to obtain all the gradients for the sample work group.
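As a minimal sketch of this step, assuming a squared-error loss (so that a sample's negative gradient equals its target minus the ensemble's current prediction) and the even per-query division described above; the group index and the toy data are illustrative assumptions.

```python
# Hypothetical sketch of computing negative gradients for one sample work group.
# Assumes squared-error loss: negative gradient = target - current prediction.

def negative_gradients(targets, predictions, start, end):
    """Negative gradients for this sample work group's slice [start, end)."""
    return [targets[i] - predictions[i] for i in range(start, end)]

# Evenly divide q_total queries over the M sample work groups, as described.
q_total, M = 10, 2
group_id = 1                         # this sample work group's index (assumed)
start = q_total // M * group_id      # group 1 holds the (q_total/M, q_total/M*2) range
end = q_total // M * (group_id + 1)

targets = [1.0] * q_total            # toy data
predictions = [0.0] * q_total        # initial predictions are 0 for the first tree
print(negative_gradients(targets, predictions, start, end))  # five gradients of 1.0
```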
[0038] (2) Establish Decision Trees
[0039] The process of generating decision trees is primarily to compute, for the decision tree presently to be generated, optimal splitting points and their respective optimal splitting values, and to perform the splitting of the training data set accordingly.
[0040] A) Work group computing optimal splitting points
[0041] Each feature work group computes its respective training feature's optimal splitting points; with the statistics of all the feature work groups, the global optimal splitting node (fid) and the corresponding optimal splitting value (split_value) can be obtained.
[0042] When a feature work group computes an optimal splitting value (split_value) for the present feature, since each communication node of the feature work group only stores part of the data, it is necessary to access the data stored at all the communication nodes of the feature work group in order to compute the optimal splitting value. Detailed computation of an exemplary feature work group is illustrated in the following:
[0043] All the communication nodes of each feature work group compute, for their regional samples, each candidate splitting value's left_sum (the sum of negative gradients at the left node after splitting) and left_count (a count of the number of samples at the left node after splitting), forming a three-element unit with a schema of <split_value, left_sum, left_count>. Here, there is no right_sum (the sum of negative gradients at the right node after splitting) or right_count (a count of the number of samples at the right node after splitting), because right_sum can be computed by subtracting left_sum from the present node's total gradient sum, and right_count by subtracting left_count from the present node's total sample count, for the purpose of reducing the amount of communication inside the feature work group.
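The following sketch, under the assumption that candidate splitting values are taken as midpoints between adjacent distinct feature values, shows how one communication node could accumulate the <split_value, left_sum, left_count> triples for its local slice of a training feature; the candidate-selection rule and helper name are assumptions.

```python
# Hypothetical sketch: build <split_value, left_sum, left_count> triples for the
# slice of one training feature held by a single communication node.
# Candidate split values are taken as midpoints between adjacent distinct
# feature values (an assumption; the disclosure does not fix the candidates).

def candidate_triples(feature_values, neg_gradients):
    order = sorted(range(len(feature_values)), key=lambda i: feature_values[i])
    triples, left_sum, left_count = [], 0.0, 0
    for pos in range(len(order) - 1):
        i, j = order[pos], order[pos + 1]
        left_sum += neg_gradients[i]     # sample i now falls to the left side
        left_count += 1
        if feature_values[i] != feature_values[j]:
            split_value = (feature_values[i] + feature_values[j]) / 2.0
            triples.append((split_value, left_sum, left_count))
    return triples

print(candidate_triples([0.2, 0.9, 0.4, 0.9], [1.0, -0.5, 0.3, 0.2]))
# [(0.3, 1.0, 1), (0.65, 1.3, 2)]
```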
[0044] Communication node zero of each feature work group collects from the other communication nodes of the feature work group their computed three-element unit information; computes for each candidate splitting value a gain Critmax = left_sum * left_sum / left_count + right_sum * right_sum / right_count; and sets the candidate splitting value corresponding to the largest Critmax as the optimal splitting point for the corresponding training feature of the feature work group. It is understood that in other alternative embodiments of the present disclosure, a communication node of the feature work group other than node zero can also be implemented, without any particular limitation, to collect the three-element unit information from the rest of the communication nodes of the feature work group.
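A plain-Python sketch of this selection step is given below, leaving out the actual MPI gather; it assumes that all communication nodes report triples for a shared set of candidate splitting values, so that node zero can merge them, derive the right-side statistics from the node totals, and pick the largest Critmax. The function name and the toy inputs are assumptions.

```python
# Hypothetical sketch: node zero merges per-node triples (split_value, left_sum,
# left_count) and selects the candidate with the largest gain
#   Critmax = left_sum^2 / left_count + right_sum^2 / right_count,
# deriving right_sum / right_count from the present node's totals so the
# right-side statistics never need to be transmitted.
from collections import defaultdict

def best_split(per_node_triples, total_sum, total_count):
    merged = defaultdict(lambda: [0.0, 0])
    for triples in per_node_triples:            # one list per communication node
        for split_value, left_sum, left_count in triples:
            merged[split_value][0] += left_sum
            merged[split_value][1] += left_count
    best = None
    for split_value, (left_sum, left_count) in merged.items():
        right_sum = total_sum - left_sum
        right_count = total_count - left_count
        if left_count == 0 or right_count == 0:
            continue                            # degenerate split, skip
        crit = left_sum ** 2 / left_count + right_sum ** 2 / right_count
        if best is None or crit > best[0]:
            best = (crit, split_value)
    return best                                 # (Critmax, optimal split_value)

node_a = [(0.3, 1.0, 1), (0.65, 1.3, 2)]
node_b = [(0.3, -0.2, 2), (0.65, 0.4, 3)]
print(best_split([node_a, node_b], total_sum=2.0, total_count=8))
```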
[0045] The optimal splitting value of the feature work group with the largest Critmax is selected as the present optimal splitting value for the present decision tree, and the training feature corresponding to the feature work group with the largest Critmax is selected as the present optimal splitting node for the present decision tree. It is also understood that in other alternative embodiments of the present disclosure, other methods can be used to compute the optimal splitting nodes and the optimal splitting values, not limited to the above described Critmax based computation.
[0046] B) Splitting at the optimal splitting nodes
[0047] Each communication node of the feature work group maintains a table of node ids for the present work group's training samples. At splitting, the table of node ids is updated. When the optimal splitting feature (i.e., optimal splitting node) (fid) and the corresponding optimal splitting value (split_value) are determined, only the feature work group corresponding to the optimal splitting node performs the splitting using the optimal splitting node and updates the node id table accordingly, because the other feature work groups do not have the feature values for fid. Detailed implementation of an exemplary splitting is illustrated as the following: the feature work group of fid performs the splitting, records information indicating whether each sample is split into the left node or the right node (for example, utilizing 0 or 1 as indicators, where 0 indicates the left node and 1 indicates the right node), saves the indication information into a bitmap, and broadcasts it to the other feature work groups.
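A hedged mpi4py/NumPy sketch of this broadcast is shown below; the rank chosen as the owner of fid, the packbits encoding, and the heap-style child numbering (left child 2*id+1, right child 2*id+2) are assumptions beyond the bitmap itself.

```python
# Hypothetical sketch: the feature work group owning fid encodes, per sample,
# 0 = left child / 1 = right child, packs the indicators into a bitmap and
# broadcasts it so every other feature work group can update its node id table.
# NumPy packbits is an assumed encoding; the disclosure only asks for a bitmap.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n_samples = 10
OWNER = 0                          # assumed rank of the fid feature work group

if rank == OWNER:
    feature_values = np.random.rand(n_samples)
    split_value = 0.5
    goes_right = (feature_values > split_value).astype(np.uint8)
    bitmap = np.packbits(goes_right)             # 8 indicators per byte
else:
    bitmap = np.empty((n_samples + 7) // 8, dtype=np.uint8)

comm.Bcast(bitmap, root=OWNER)                    # one broadcast per split

goes_right = np.unpackbits(bitmap, count=n_samples)
# Every group updates its node id table (assumed heap-style numbering).
node_id = np.zeros(n_samples, dtype=np.int64)
node_id = np.where(goes_right == 1, 2 * node_id + 2, 2 * node_id + 1)
print(f"rank {rank}: node ids {node_id}")
```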
[0048] To generate a GBDT ranking model based on multiple decision trees, an exemplary work flow is illustrated as the following: (1) load the operative parameters and each sample's data sets for the computational system; (2) to generate the i-th decision tree, using the sample work groups, compute a negative gradient for each sample based on the prior i-1 decision trees (when i=1, each sample's initial value is set to 0 for computing negative gradients, for example, computing with the loss function as a constant). Next, with the computed negative gradients, use the feature work groups to compute the present decision tree's optimal splitting nodes and corresponding optimal splitting values. In the process of computing the j-th optimal splitting node, it is necessary to determine whether the total number of the nodes of the present decision tree exceeds a pre-determined threshold number, or whether there is no longer a feature suitable as an optimal splitting node. If neither is the case, then compute the j-th optimal splitting node. Otherwise, conclude the computation of optimal splitting nodes, generate the i-th decision tree, and start the computation to build the next decision tree, or directly fit the generated i decision trees into a ranking decision tree, i.e., a GBDT ranking model.
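The single-process Python sketch below ties the work flow together: for each tree, negative gradients are computed against the current predictions, a tree is grown on them, and the trees are summed into a ranking model. The grow_tree stub, the learning rate, and the omission of the node-count threshold check are simplifying assumptions standing in for the distributed feature and sample work groups.

```python
# Hypothetical, single-process sketch of the GBDT work flow described above.
# The distributed split search of the feature work groups is replaced by a
# one-split stub, and the per-tree node-count threshold check is omitted.

def grow_tree(features, residuals):
    """Stub tree: one split on feature 0, predicting the mean residual per side."""
    fid = 0
    split_value = sorted(f[fid] for f in features)[len(features) // 2]
    left = [r for f, r in zip(features, residuals) if f[fid] <= split_value]
    right = [r for f, r in zip(features, residuals) if f[fid] > split_value]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return {"fid": fid, "split_value": split_value,
            "left": mean(left), "right": mean(right)}

def predict(tree, f):
    return tree["left"] if f[tree["fid"]] <= tree["split_value"] else tree["right"]

def train_gbdt(features, targets, n_trees=3, learning_rate=0.5):
    predictions = [0.0] * len(targets)      # initial value 0 when i = 1
    trees = []
    for _ in range(n_trees):
        # Negative gradients under squared-error loss are the residuals.
        residuals = [t - p for t, p in zip(targets, predictions)]
        tree = grow_tree(features, residuals)
        trees.append(tree)
        predictions = [p + learning_rate * predict(tree, f)
                       for p, f in zip(predictions, features)]
    return trees

features = [[0.1], [0.4], [0.6], [0.9]]
targets = [0.0, 0.0, 1.0, 1.0]
model = train_gbdt(features, targets)
scores = [sum(0.5 * predict(t, f) for t in model) for f in features]
print(scores)   # ensemble scores after three boosting rounds
```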
[0049] In addition, it is understood that, in other alternative embodiments of the present disclosure, other parallel transmission communication protocols can also be implemented to divide the computational system.
[0050] Embodiments of the present disclosure can be implemented using software, hardware, firmware, and/or the combinations thereof. Regardless of being implemented using software, hardware, firmware or the combinations thereof, instruction code can be stored in any kind of computer readable media (for example, permanent or modifiable, volatile or non-volatile, solid or non-solid, fixed or changeable medium, etc.). Similarly, such mediums can be implemented using, for example, programmable array logic (PAL), random access memory (RAM), programmable read only memory (PROM), read only memory (ROM), electrically erasable programmable ROM (EEPROM), magnetic storage, optical storage, digital versatile disc (DVD), or the like.
[0051] Referring to FIG. 4, a block diagram of an exemplary apparatus for decision tree based search result ranking in accordance with a third embodiment of the present disclosure is shown. The apparatus 400 includes an acquisition module 402 configured for obtaining a training data set to establish at least one decision tree. The training data set has N associated training features, where N is a natural number greater than or equal to 2. The apparatus 400 further includes a division module 404 configured for dividing the computational system of decision trees into N feature work groups, each feature work group corresponding to a training feature of the N training features respectively. The apparatus 400 also includes a computing module 406 configured for using the feature work groups to compute optimal splitting nodes and optimal splitting values corresponding to the optimal splitting nodes for each decision tree, and to generate each decision tree using the computed optimal splitting nodes and optimal splitting values. The apparatus 400 further includes a ranking module 408 configured for ranking search results using the decision trees.
[0052] In some embodiments in accordance with the present disclosure, the total number of the above described decision trees is greater than or equal to 2; and the above described computing module includes the following sub-modules: a counting sub-module configured for determining whether the number of optimal splitting nodes computed for a present decision tree exceeds a pre-determined threshold value; a computation conclusion sub-module configured for, when the counting sub-module returns an exceeding-the-threshold-value condition, concluding the computation of optimal splitting nodes and optimal splitting values, and starting to generate the next decision tree, or proceeding to the ranking module. The computing module also includes an independent computing sub-module configured for, when the counting sub-module returns a not-exceeding-the-threshold-value condition, for each feature work group, independently computing an optimal splitting value for the training feature corresponding to the feature work group. The computing module further includes a node assigning sub-module configured for transmitting amongst the feature work groups, where a present optimal splitting value for the present decision tree is selected from all the optimal splitting values computed by the feature work groups, and the training feature corresponding to the feature work group with which the selected present optimal splitting value is computed is assigned as a present optimal splitting node for the present decision tree. The computing module also includes a node splitting sub-module configured for, by use of the feature work group corresponding to the present optimal splitting value, splitting the training data set based on the present decision tree's present optimal splitting value and present optimal splitting node to form present splitting nodes, wherein the splitting results are transmitted to the computational system of decision trees.
[0053] In a preferred embodiment of the present disclosure, the above described ranking module includes a decision tree fitting sub-module configured for fitting the generated decision trees to form a ranking decision tree; and a decision tree based ranking sub-module configured for ranking search results based on degrees of relevance, where the search results are retrieved using search queries and the degrees of relevance are computed between the search results and the search queries using the ranking decision tree.
[0054] In another preferred embodiment of the present disclosure, the above described acquisition module further includes a training data set acquisition sub-module configured for obtaining the training data set from search histories collected on an e-commerce platform.
[0055] The instant embodiment corresponds to the first embodiment of the present disclosure and can be implemented in cooperation with the first embodiment. The technical details described in the first embodiment apply to the instant embodiment and are not repeated herein for brevity. Likewise, the technical details described in the instant embodiment apply to the first embodiment.
[0056] The fourth embodiment of the present disclosure relates to an exemplary apparatus for ranking search results using decision trees. It improves upon the third embodiment of the present disclosure, the primary improvement being the division of the computational system of decision trees along the two dimensions of training features and training samples, further increasing the training efficiency over the training data and therefore the ranking efficiency. For example, for three hundred million training records, a good quality decision tree model can be created within a few hours.
[0057] In particular, the above described training data set includes M training samples, where M is a natural number greater than or equal to 2. The above described division module includes a
feature group division sub-module configured for dividing each feature work group into M communication nodes corresponding to the M training samples, where the communication nodes belonging to different feature work groups but to the same training sample form a sample work group.
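As a sketch of this two-dimensional division (the N × M grid layout below is an assumption used for illustration, not the claimed data layout), a row of communication nodes forms a feature work group, and a column, covering the same training sample or sample partition across all features, forms a sample work group:

```python
# Hypothetical N x M grid of communication nodes; indices only, for illustration.

def build_work_groups(n_features, m_samples):
    """Communication node (i, j) holds feature i's values for training sample/partition j."""
    grid = [[(i, j) for j in range(m_samples)] for i in range(n_features)]
    feature_work_groups = grid                                      # one row per training feature
    sample_work_groups = [[grid[i][j] for i in range(n_features)]   # one column per training sample
                          for j in range(m_samples)]
    return feature_work_groups, sample_work_groups

feature_wgs, sample_wgs = build_work_groups(n_features=3, m_samples=4)
print(len(feature_wgs), len(sample_wgs))  # 3 feature work groups, 4 sample work groups
```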
[0058] The above described independent computing sub-module further includes a gradient computing sub-module configured for, based on the generated decision trees corresponding to the training data set, for each sample work group, independently computing a gradient for each training sample of the sample work group; and a splitting value computing sub-module configured for, based on the computed gradients, for each feature work group, independently computing an optimal splitting value for the training feature corresponding to the feature work group.
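For example, assuming a squared-error loss (an assumption for illustration; the disclosure does not fix a particular loss function), the gradient computed within a sample work group reduces to the residual between a sample's label and the current ensemble prediction, and those gradients are then consumed by the per-feature split search sketched above:

```python
# Hypothetical gradient step for one sample work group; squared-error loss assumed.

def sample_group_gradients(labels, ensemble_predictions):
    """Negative gradients (residuals) y - F(x), one per training sample in the group."""
    return [y - p for y, p in zip(labels, ensemble_predictions)]

# Each feature work group then runs its independent split search on these gradients,
# e.g. best_split_for_feature(feature_column, gradients) from the earlier sketch.
residuals = sample_group_gradients([1.0, 0.0, 1.0], [0.6, 0.3, 0.4])
print(residuals)  # [0.4, -0.3, 0.6]
```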
[0059] In a preferred embodiment of the present disclosure, the computational system of decision trees utilizes the Message Passing Interface (MPI) protocols to accomplish feature work group division and information transmission amongst the feature work groups.
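A minimal mpi4py sketch of this arrangement (assuming one MPI rank per feature work group and an allgather-based exchange of split candidates; these choices and all names are assumptions for illustration, not the claimed protocol usage):

```python
# Hypothetical mpi4py sketch: one MPI rank per feature work group.
# Run with, e.g.: mpiexec -n 4 python split_exchange.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()  # index of this feature work group

# Placeholder for the group's independently computed best split of its own feature.
local_gain = float(rank % 3)      # stand-in gain value
local_threshold = 0.5 * rank      # stand-in splitting value

# Transmit every group's candidate amongst the feature work groups.
candidates = comm.allgather((local_gain, rank, local_threshold))

# All ranks agree on the globally best candidate: the present optimal
# splitting node (feature index) and splitting value for the tree.
best_gain, best_feature, best_threshold = max(candidates)
if rank == best_feature:
    print(f"group {rank} owns the present splitting node, threshold {best_threshold}")
```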
[0060] The instant embodiment corresponds to the second embodiment of the present disclosure and can be implemented in cooperation with the second embodiment. The technical details described in the second embodiment apply to the instant embodiment and are not repeated herein for brevity. Likewise, the technical details described in the instant embodiment apply to the second embodiment.
[0061] It is necessary to point out that the modules or blocks described in embodiments of the present disclosure are logical modules or logical blocks. Physically, a logical module or logical block can be a physical module or a physical block, a part of a physical module or a physical block, or a combination of more than one physical module or physical block. The physical implementation of those logical modules or logical blocks is not of the essence; the functionalities realized by the modules, blocks, and the combinations thereof are what is key to solving the problems addressed by the present disclosure. Further, in order to highlight the novelties of the present disclosure, the above described embodiments do not describe those modules or blocks that are less related to solving the problems addressed by the present disclosure, which does not mean that the above described embodiments cannot include other modules or blocks.
[0062] It is also necessary to point out that, in the claims and specification of the present disclosure, terms such as "first" and "second" are used only to distinguish one embodiment or operation from another embodiment or operation, and do not require or imply any actual relationship or order between those embodiments or operations. Further, as used herein, the terms "comprising," "including," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Absent further limitation, an element recited by the phrase "comprising a" does not exclude the process, method, article, or apparatus that comprises such element from including other identical elements.
[0063] While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered as examples because many other architectures can be implemented to achieve the same functionality.
[0064] The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various example methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
[0065] While various embodiments have been described and/or illustrated herein in the context of fully functional computing systems, one or more of these example embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer- readable medium used to actually carry out the distribution. The embodiments disclosed herein may also be implemented using software modules that perform certain tasks. These software modules may include script, batch, or other executable files that may be stored on a computer-readable storage media or in a computing system. These software modules may configure a computing system to perform one or more of the example embodiments disclosed herein. One or more of the software modules disclosed herein may be implemented in a cloud computing environment. Cloud computing environments may provide various services and applications via the Internet. These cloud-based services (e.g., software as a service, platform as a service, infrastructure as a service, etc.) may be accessible through a Web browser or other remote interface. Various functions described herein may be provided through a remote desktop environment or any other cloud-based computing environment.
[0066] Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best utilize the disclosure and the various embodiments with various modifications as may be suited to the particular use contemplated.
[0067] Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the present disclosure. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
[0068] Embodiments according to the present disclosure are thus described. While the present disclosure has been described in particular embodiments, it should be appreciated that the disclosure should not be construed as limited by such embodiments, but rather construed according to the below claims.
Claims
1. A method of decision tree based search result ranking, the method comprising:
obtaining a training data set for generating at least one decision tree, the training data set having N training features, wherein N is greater than or equal to 2;
dividing a computational system of decision trees into N feature work groups corresponding to the N training features respectively;
computing, based upon the feature work groups, optimal splitting nodes and optimal splitting values corresponding to the optimal splitting nodes for the decision trees;
generating the decision trees using the computed optimal splitting nodes and the corresponding optimal splitting values; and
ranking search results using the decision trees.
2. The method of claim 1, further comprising:
computing, when a total number of the generated decision trees is greater than or equal to 2, splitting nodes by determining whether a number of splitting nodes computed for a decision tree exceeds a pre-determined threshold value.
3. The method of claim 2, further comprising:
computing, if the number of the optimal splitting nodes does not exceed the threshold value for each feature work group, an optimal splitting value for the training feature
corresponding to the feature work group;
transmitting, when a present splitting value for a present decision tree is selected from substantially all the optimal splitting values for the training feature of the feature work groups, amongst the feature work groups,
assigning the training feature corresponding to the feature group from which the selected present splitting value is computed as a present optimal splitting node for the present decision tree; and
splitting, by use of the feature work group corresponding to the selected splitting value and based upon a decision tree's splitting value and splitting node, the training data set to form
splitting nodes, wherein a splitting result of the splitting nodes is transmitted to the computational system.
4. The method of claim 3, further comprising:
dividing, when the training data set has M training samples and M is greater than or equal to 2, the computational system of decision trees by dividing each feature work group into M communication nodes corresponding to the M training samples, wherein communication nodes belonging to different feature work groups but to a same training sample form a sample work group; and
computing a splitting value by computing, based upon the generated decision trees corresponding to the training data set for each sample work group, gradients for training samples of the sample work group; and
computing, based on the computed gradients, for each feature work group, a splitting value for the training feature corresponding to the feature work group.
5. The method of claim 1, wherein ranking search results comprises:
fitting the generated decision trees to form a ranking decision tree; and
ranking search results based on degrees of relevance, wherein the search results are retrieved using search queries and the degrees of relevance are computed between the search results and the search queries using the ranking decision tree.
6. The method of claim 1, wherein the computational system of decision trees utilizes Message Passing Interface (MPI) protocols to divide feature work groups and to transmit information amongst groups.
7. The method of claim 1, further comprising obtaining the training data set from search histories collected on an e-commerce platform.
8. An apparatus for ranking search results based on decision trees, the apparatus comprising:
a processor; and
a non-transitory computer-readable medium operably coupled to the processor, the non- transitory computer-readable medium having computer-readable instructions stored thereon to be executed when accessed by the processor, the instructions comprising:
an acquisition module configured for obtaining a training data set for generating at least one decision tree, the training data set having N training features, wherein N is greater than or equal to 2;
a division module configured for dividing a computational system of decision trees into N feature work groups corresponding to the N training features respectively; a computing module configured for computing, based upon the feature work groups, optimal splitting nodes and optimal splitting values corresponding to the optimal splitting nodes for the decision trees; and for generating the decision trees using the optimal splitting nodes and the corresponding optimal splitting values; and
a ranking module configured for ranking search results using the decision trees.
9. The apparatus of claim 8, wherein the computing module comprises:
a counting sub-module configured for determining, when a total number of the generated decision trees is greater than or equal to 2, whether a number of splitting nodes computed for a decision tree exceeds a pre-determined threshold value.
10. The apparatus of claim 9, wherein the computing module further comprises:
an independent computing sub-module configured for computing, if the number of the optimal splitting nodes does not exceed the threshold value for each feature work group, an optimal splitting value for the training feature corresponding to the feature work group;
a node assigning sub-module configured for transmitting, when a present splitting value for a present decision tree is selected from substantially all the optimal splitting values for the training feature of the feature work groups, amongst the feature work groups, and assigning the training feature corresponding to the feature group from which the selected present splitting value is computed as a present optimal splitting node for the present decision tree; and
a node splitting sub-module configured for splitting, by use of the feature work group corresponding to the selected splitting value and based upon a decision tree's splitting value and splitting node, the training data set to form splitting nodes, wherein a splitting result of the splitting nodes is transmitted to the computational system.
11. The apparatus of claim 10, wherein the division module comprises:
a feature group division sub-module configured for dividing, when the training data set has M training samples and M is greater than or equal to 2, each feature work group into M
communication nodes corresponding to the M training samples, wherein communication nodes belonging to different feature work groups but to a same training sample form a sample work group; and the computing sub-module comprises:
a gradient computing sub-module configured for computing, based upon the generated decision trees corresponding to the training data set for each sample work group, gradients for training samples of the sample work group; and
a splitting value computing sub-module configured for computing, based on the computed gradients, for each feature work group, a splitting value for the training feature corresponding to the feature work group.
12. The apparatus of claim 8, wherein the ranking module comprises:
a decision tree fitting sub-module configured for fitting the generated decision trees to form a ranking decision tree; and
a decision tree based ranking sub-module configured for ranking search results based on degrees of relevance, wherein the search results are retrieved using search queries, and the degrees of relevance are computed between the search results and the search queries using the ranking decision tree.
13. The apparatus of claim 8, wherein the computational system of decision trees utilizes Message Passing Interface (MPI) protocols to divide feature work groups and to transmit information amongst groups.
14. The apparatus of claim 8, wherein the acquisition module comprises a training data set acquisition sub-module configured for obtaining the training data set from search histories collected on an e-commerce platform.
15. A non-transitory computer readable storage medium having embedded therein program instructions which, when executed by one or more processors of a device, cause the device to execute a process for decision tree based search result ranking, the process comprising:
obtaining a training data set for generating at least one decision tree, the training data set having N training features, wherein N is greater than or equal to 2;
dividing a computational system of decision trees into N feature work groups corresponding to the N training features respectively;
computing, based upon the feature work groups, optimal splitting nodes and optimal splitting values corresponding to the splitting nodes for the decision trees;
generating the decision trees using the computed optimal splitting nodes and the corresponding optimal splitting values; and
ranking search results using the decision trees.
16. The non-transitory computer readable storage medium of claim 15, wherein the process further comprises computing, when a total number of the generated decision trees is greater than or equal to 2, splitting nodes by determining whether a number of splitting nodes computed for a decision tree exceeds a pre-determined threshold value.
17. The non-transitory computer readable storage medium of claim 16, wherein the process further comprises:
computing, if the number of the optimal splitting nodes does not exceed the threshold value for each feature work group, an optimal splitting value for the training feature
corresponding to the feature work group;
transmitting, when a present splitting value for a present decision tree is selected from substantially all the optimal splitting values for the training feature of the feature work groups, amongst the feature work groups,
assigning the training feature corresponding to the feature group from which the selected present splitting value is computed as a present optimal splitting node for the present decision tree; and
splitting, by use of the feature work group corresponding to the selected splitting value and based upon a decision tree's splitting value and splitting node, the training data set to form splitting nodes, wherein a splitting result of the splitting nodes is transmitted to the
computational system.
18. The non-transitory computer readable storage medium of claim 17, wherein the process further comprises:
dividing, when the training data set has M training samples and M is greater than or equal to 2, the computational system of decision trees by dividing each feature work group into M
communication nodes corresponding to the M training samples, wherein communication nodes belonging to different feature work groups but to a same training sample form a sample work group; and
computing a splitting value by computing, based upon the generated decision trees corresponding to the training data set for each sample work group, gradients for training samples of the sample work group; and
computing, based on the computed gradients, for each feature work group, a splitting value for the training feature corresponding to the feature work group.
19. The non-transitory computer readable storage medium of claim 15, wherein ranking search results comprises:
fitting the generated decision trees to form a ranking decision tree; and
ranking search results based on degrees of relevance, wherein the search results are retrieved using search queries and the degrees of relevance are computed between the search results and the search queries using the ranking decision tree.
20. The non-transitory computer readable storage medium of claim 15, wherein the
computational system of decision trees utilizes Message Passing Interface (MPI) protocols to divide feature work groups and to transmit information amongst groups.
21. The non-transitory computer readable storage medium of claim 15, wherein obtaining the training data set for at least one decision tree comprises obtaining the training data set from search histories collected on an e-commerce platform.