WO2020042579A1 - Grouping induction method and apparatus, electronic device, and storage medium - Google Patents

Grouping induction method and apparatus, electronic device, and storage medium

Info

Publication number
WO2020042579A1
WO2020042579A1 · PCT/CN2019/077223 · CN2019077223W
Authority
WO
WIPO (PCT)
Prior art keywords
service type
condition attribute
sample data
largest
condition
Prior art date
Application number
PCT/CN2019/077223
Other languages
English (en)
French (fr)
Inventor
邓悦
金戈
徐亮
肖京
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020042579A1 publication Critical patent/WO2020042579A1/zh


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/24323Tree-organised classifiers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • the present application relates to a group induction method, a group induction device, an electronic device, and a storage medium.
  • the existing group induction method basically relies on manual induction based on service types (such as the diligent type, the resource type, etc.).
  • however, this method is subject to the subjective influence of individuals, and when the number of groups is large and each group involves a large number of features, the induction of the groups cannot be completed effectively by hand.
  • a preferred embodiment of the present application provides a grouping induction method, including: obtaining multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute; training a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group; classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type; determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and selecting the service type with the largest weight factor, and classifying the group into that service type.
  • a preferred embodiment of the present application also provides a grouping induction device, including: an acquisition module for obtaining multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute; a training module for training a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group; a classification module for classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type; a calculation module for determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and an induction module for selecting the service type with the largest weight factor and classifying the group into that service type.
  • a preferred embodiment of the present application further provides an electronic device including a processor and a memory, wherein the memory stores a group induction program, and the processor is configured to execute the group induction program to implement the group induction method described above.
  • a preferred embodiment of the present application further provides a non-volatile readable storage medium.
  • the non-volatile readable storage medium stores a group induction program, and when the group induction program is executed by a processor, the group induction method described above is implemented.
  • each group is objectively summarized based on the number of occurrences, within the same service type, of the condition attributes involved in that group, so that the induction standard is unified; moreover, the grouping results are matched with the types of service requirements, which is beneficial to assisting service analysis.
  • FIG. 1 is a flowchart of a group induction method provided by a preferred embodiment of the present application.
  • FIG. 2 is a schematic diagram of a decision tree trained by the group induction method of FIG. 1.
  • FIG. 3 is a schematic structural diagram of a group induction device provided by a preferred embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an electronic device according to a preferred embodiment of the present application.
  • FIG. 1 is a flowchart of a group induction method provided by a preferred embodiment of the present application.
  • the group induction method is applied to an electronic device 1. According to different requirements, the order of the steps of the group induction method may be changed, and some steps may be omitted or combined.
  • the group induction method includes the following steps:
  • step S11: multiple sets of sample data are obtained, each set of sample data including multiple condition attributes and a corresponding decision attribute, where the decision attribute is the performance of the sample data.
  • the sample data needs to include data of both higher-performing persons (i.e., high performers) and poorly-performing persons (i.e., low performers), and the sample data may be stored in the electronic device.
  • the electronic device may also connect to an external sample library through a network to further obtain sample data stored in the sample library.
  • the electronic device may also collect and establish the sample library by means of big data.
  • condition attributes may be behavior trajectories (such as business trips), app activity, business expansion, consumption, interests, hobbies, participation in training, attendance rate, and so on.
  • in the following, the condition attributes include business trips, business expansion, training participation, and attendance rate.
  • the sample data is shown in Table 1.
  • a decision tree model is trained according to the sample data.
  • the decision tree model includes multiple leaf nodes, and each leaf node represents a group.
  • the decision tree algorithm belongs to a supervised learning classification algorithm, and the decision tree model represents a mapping relationship between object attributes and object values.
  • the decision tree model has N layers (N is a natural number, N > 2), and the electronic device sets the value of each node (including the root node, internal nodes, and leaf nodes) in the decision tree model according to the condition attributes.
  • the first layer of the decision tree model is a root node
  • the second layer of the decision tree model represents a plurality of nodes obtained by segmenting the root node with a first-level condition attribute.
  • the third layer of the decision tree model represents multiple nodes obtained by segmenting the nodes in the previous layer with the second-level condition attributes, and so on.
  • training the decision tree model specifically includes:
  • step S121 the sample data is used as a training set to calculate the information gain of each condition attribute.
  • Step S122: selecting the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes; the larger the information gain of a condition attribute, the more information selecting that attribute provides for classification, and the more effectively the sample data can be classified. As shown in FIG. 2, if the condition attribute with the largest information gain is "travel situation", the "travel situation" condition attribute is selected as the root node to segment the sample data.
  • Step S123 Recalculate the information gain of each condition attribute using the sample data contained in each node as a training set.
  • step S124 the condition attribute with the largest information gain is selected to divide the node to obtain the next-level node.
  • as shown in FIG. 2, for the node containing samples with zero business trips per month (C=0), if the condition attribute with the largest information gain is "business development situation", the node is segmented according to the "business development situation" condition attribute to obtain the next-level nodes.
  • for the node containing samples with two or more business trips per month (C≧2), if the condition attribute with the largest information gain is "participation in training", the node is segmented according to the "participation in training" condition attribute to obtain the next-level nodes.
  • the conditional attributes used to segment multiple nodes on the same layer are usually different. The nodes formed after each segmentation have higher data purity than the nodes in the previous layer.
  • each leaf node represents a group.
  • each group contains a fixed ratio between the number of high performers and the number of low performers, and the ratios of different groups may differ from each other.
  • for example, the ratio of the leaf node "Group 1" may be 1:8.
  • of course, a leaf node may also contain only high performers or only low performers.
  • Each set of sample data can only be divided into one of the leaf nodes, that is, each set of sample data cannot belong to two or more leaf nodes at the same time.
  • in this example, the next segmentation is stopped when the number of layers of the decision tree reaches a preset number of layers (e.g., four); in another embodiment, the next segmentation is stopped when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
  • the information gain Gain(S, A) of each condition attribute can be calculated as follows:
  • Gain(S, A) = Entropy(S) − Entropy(S, A)
  • where Gain(S, A) represents the information gain of condition attribute A on the training set S, Entropy(S) represents the information entropy of the training set S, and Entropy(S, A) represents the information entropy of S after it is partitioned by attribute A.
  • as shown in Table 1, the training set has a total of 16 sets of sample data: 11 sets with excellent performance and 5 sets with poor performance.
  • the information gain calculation for the "travel situation" condition attribute is taken as an example; the information gain calculation for the other condition attributes proceeds in the same way.
  • step S13 the condition attributes are classified to determine multiple service types, and each service type corresponds to at least one condition attribute, and the condition attribute is used as an evaluation factor of the service type.
  • the service type may include a resource type, a hard-working type, and an open type.
  • Resource-based can refer to people with strong business ability and work ability.
  • Hard-working type can refer to people who study for a long time and work long hours each day, and open-type can refer to people who are active and social.
  • the evaluation factor is an evaluation index capable of characterizing important characteristics of the service type.
  • Each service type can correspond to one evaluation factor or at least two evaluation factors.
  • for the "resource" service type, the corresponding evaluation factor may be the business expansion situation.
  • for the "diligent" service type, the corresponding evaluation factors may be training participation, attendance rate, and so on.
  • for the "open" service type, the corresponding evaluation factors may be behavior trajectories (such as business trips), app activity, and so on.
  • the number of service types is M (M > 1, M is a natural number), and the number of groups is N (N > 1, N is a natural number); M may or may not be equal to N.
  • Step S14: determine the condition attributes involved in the process of dividing to obtain each group, count, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and use that number as the weight factor of each service type involved in the group.
  • for the sample data contained in "Group 8", the condition attributes involved are "travel situation", "participation in training", and "attendance rate".
  • the service type of the evaluation factor corresponding to the "travel situation" condition attribute is "open", and the service type of the evaluation factors corresponding to "participation in training" and "attendance rate" is "diligent".
  • therefore, among the condition attributes involved in "Group 8", a condition attribute corresponding to the "open" service type appears once, so the weight factor of the "open" service type is 1; condition attributes corresponding to the "diligent" service type appear twice, so the weight factor of the "diligent" service type is 2.
  • Step S15: select the service type with the largest weight factor, and classify the group into that service type.
  • selecting the service type with the largest weight factor and classifying the group into that service type includes:
  • step S151: when the service type with the largest weight factor has been selected, determine the number of such service types.
  • step S152: when there is only one service type with the largest weight factor, classify the group directly into that service type; when there are at least two service types with the largest weight factor, classify the group randomly into one of them.
  • of course, in other embodiments, when there are at least two service types with the largest weight factor, since the group simultaneously matches the characteristics of at least two service types, the group is allocated to those different service types simultaneously.
  • FIG. 3 is a schematic structural diagram of a group induction device 300 according to a preferred embodiment of the present application.
  • the group induction device 300 operates in an electronic device.
  • the group induction device 300 may include a plurality of functional modules composed of program code segments.
  • the program code of each program segment of the group induction device 300 may be stored in a memory of the electronic device and executed by at least one processor to implement the group induction function.
  • the group induction device 300 may be divided into a plurality of functional modules according to functions performed by the group induction device 300.
  • the group induction device 300 includes: an acquisition module 301, a training module 302, a classification module 303, a calculation module 304, and an induction module 305.
  • a module referred to in the present application is a series of computer-readable instruction segments, stored in a memory, that can be executed by at least one processor to perform fixed functions. In this embodiment, the functions of each module will be described in detail in the subsequent embodiments.
  • the obtaining module 301 is configured to obtain multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute, where the decision attribute is the performance of the sample data.
  • the sample data needs to include data of both higher-performing persons (i.e., high performers) and poorly-performing persons (i.e., low performers), and the sample data may be stored in the electronic device.
  • the electronic device may also connect to an external sample library through a network to further obtain sample data stored in the sample library.
  • the electronic device may also collect and establish the sample library by means of big data.
  • condition attributes may be behavior trajectories (such as business trips), app activity, business expansion, consumption, interests, hobbies, participation in training, attendance rate, and so on.
  • the condition attributes include business trips, business expansions, participation in training, and attendance rates, and the sample data is shown in Table 1 above.
  • the training module 302 is configured to train a decision tree model according to the sample data.
  • the decision tree model includes multiple leaf nodes, and each leaf node represents a group.
  • the decision tree algorithm belongs to a supervised learning classification algorithm, and the decision tree model represents a mapping relationship between object attributes and object values.
  • the decision tree model has N layers (N is a natural number, N > 2), and the training module 302 sets the value of each node (including the root node, internal nodes, and leaf nodes) in the decision tree model according to the condition attributes.
  • the first layer of the decision tree model is a root node
  • the second layer of the decision tree model represents a plurality of nodes obtained by segmenting the root node with a first-level condition attribute.
  • the third layer of the decision tree model represents multiple nodes obtained by segmenting the nodes in the previous layer with the second-level condition attributes, and so on.
  • the training module 302 uses the sample data as a training set to calculate the information gain of each condition attribute, and selects the condition attribute with the largest information gain as the root node of the decision tree model to Segment the sample data to get the next level of nodes.
  • as shown in FIG. 2, if the condition attribute with the largest information gain is "travel situation", the "travel situation" condition attribute is selected as the root node to segment the sample data.
  • the training module 302 further uses the sample data contained in each node as a training set to recalculate the information gain of each condition attribute, and selects the condition attribute with the largest information gain to segment the node to obtain the next-level node.
  • as shown in FIG. 2, for the node containing samples with zero business trips per month (C=0), if the condition attribute with the largest information gain is "business development situation", the node is segmented according to the "business development situation" condition attribute to obtain the next-level nodes.
  • for the node containing samples with two or more business trips per month (C≧2), if the condition attribute with the largest information gain is "participation in training", the node is segmented according to the "participation in training" condition attribute to obtain the next-level nodes.
  • conditional attributes used to segment multiple nodes on the same layer are usually different.
  • the nodes formed after each segmentation have higher data purity than the nodes in the previous layer.
  • each leaf node represents a group.
  • each group contains a fixed ratio between the number of high performers and the number of low performers, and the ratios of different groups may differ from each other.
  • for example, the ratio of the leaf node "Group 1" may be 1:8.
  • of course, a leaf node may also contain only high performers or only low performers.
  • Each set of sample data can only be divided into one of the leaf nodes, that is, each set of sample data cannot belong to two or more leaf nodes at the same time.
  • in this example, the next segmentation is stopped when the number of layers of the decision tree reaches a preset number of layers (e.g., four); in another embodiment, the next segmentation is stopped when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
  • the information gain Gain(S, A) of each condition attribute can be calculated as follows:
  • Gain(S, A) = Entropy(S) − Entropy(S, A)
  • where Gain(S, A) represents the information gain of condition attribute A on the training set S, Entropy(S) represents the information entropy of the training set S, and Entropy(S, A) represents the information entropy of S after it is partitioned by attribute A.
  • as shown in Table 1, the training set has a total of 16 sets of sample data: 11 sets with excellent performance and 5 sets with poor performance.
  • the information gain calculation for the "travel situation" condition attribute is taken as an example; the information gain calculation for the other condition attributes proceeds in the same way.
  • the classification module 303 is configured to classify the condition attributes to determine multiple service types, and each service type corresponds to at least one condition attribute, and the condition attribute serves as an evaluation factor for the service type.
  • the service type may include a resource type, a hard-working type, and an open type.
  • Resource-based can refer to people with strong business ability and work ability.
  • Hard-working type can refer to people who study for a long time and work long hours each day, and open-type can refer to people who are active and social.
  • the evaluation factor is an evaluation index capable of characterizing important characteristics of the service type.
  • Each service type can correspond to one evaluation factor or at least two evaluation factors.
  • for the "resource" service type, the corresponding evaluation factor may be the business expansion situation.
  • for the "diligent" service type, the corresponding evaluation factors may be training participation, attendance rate, and so on.
  • for the "open" service type, the corresponding evaluation factors may be behavior trajectories (such as business trips), app activity, and so on.
  • the number of service types is M (M > 1, M is a natural number), and the number of groups is N (N > 1, N is a natural number); M may or may not be equal to N.
  • the calculation module 304 is configured to determine the condition attributes involved in the process of dividing to obtain each group, count, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and use that number as the weight factor of each service type involved in the group.
  • for the sample data contained in "Group 8", the condition attributes involved are "travel situation", "participation in training", and "attendance rate".
  • the service type of the evaluation factor corresponding to the "travel situation" condition attribute is "open", and the service type of the evaluation factors corresponding to "participation in training" and "attendance rate" is "diligent".
  • therefore, among the condition attributes involved in "Group 8", a condition attribute corresponding to the "open" service type appears once, so the weight factor of the "open" service type is 1; condition attributes corresponding to the "diligent" service type appear twice, so the weight factor of the "diligent" service type is 2.
  • the induction module 305 is configured to select the service type with the largest weight factor and classify the group into that service type.
  • when the induction module 305 has selected the service type with the largest weight factor, it determines the number of such service types. When there is only one service type with the largest weight factor, the induction module 305 classifies the group directly into that service type; when there are at least two, the induction module 305 randomly classifies the group into one of them. Of course, in other embodiments, when there are at least two service types with the largest weight factor, since the group simultaneously matches the characteristics of at least two service types, the induction module 305 allocates the group to those different service types simultaneously.
  • FIG. 4 is a schematic structural diagram of an electronic device 1 that implements the group induction method in a preferred embodiment of the present application.
  • the electronic device 1 includes a memory 101, a processor 102, and computer-readable instructions 103 stored in the memory 101 and executable on the processor 102, such as a group induction program.
  • step S11: multiple sets of sample data are obtained, each set of sample data including multiple condition attributes and a corresponding decision attribute, where the decision attribute is the performance of the sample data.
  • the sample data needs to include data of both higher-performing persons (i.e., high performers) and poorly-performing persons (i.e., low performers), and the sample data may be stored in the electronic device.
  • the electronic device may also connect to an external sample library through a network to further obtain sample data stored in the sample library.
  • the electronic device may also collect and establish the sample library by means of big data.
  • condition attributes may be behavior trajectories (such as business trips), app activity, business expansion, consumption, interests, hobbies, participation in training, attendance rate, and so on.
  • the condition attributes include business trips, business expansions, participation in training, and attendance rates, and the sample data is shown in Table 1 above.
  • a decision tree model is trained according to the sample data.
  • the decision tree model includes multiple leaf nodes, and each leaf node represents a group.
  • the decision tree algorithm belongs to a supervised learning classification algorithm, and the decision tree model represents a mapping relationship between object attributes and object values.
  • the decision tree model has N layers (N is a natural number, N > 2), and the electronic device sets the value of each node (including the root node, internal nodes, and leaf nodes) in the decision tree model according to the condition attributes.
  • the first layer of the decision tree model is a root node
  • the second layer of the decision tree model represents a plurality of nodes obtained by segmenting the root node with a first-level condition attribute.
  • the third layer of the decision tree model represents multiple nodes obtained by segmenting the nodes in the previous layer with the second-level condition attributes, and so on.
  • training the decision tree model specifically includes:
  • step S121 the sample data is used as a training set to calculate the information gain of each condition attribute.
  • Step S122: selecting the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes; the larger the information gain of a condition attribute, the more information selecting that attribute provides for classification, and the more effectively the sample data can be classified. As shown in FIG. 2, if the condition attribute with the largest information gain is "travel situation", the "travel situation" condition attribute is selected as the root node to segment the sample data.
  • Step S123 Recalculate the information gain of each condition attribute using the sample data contained in each node as a training set.
  • step S124 the condition attribute with the largest information gain is selected to divide the node to obtain the next-level node.
  • as shown in FIG. 2, for the node containing samples with zero business trips per month (C=0), if the condition attribute with the largest information gain is "business development situation", the node is segmented according to the "business development situation" condition attribute to obtain the next-level nodes.
  • for the node containing samples with two or more business trips per month (C≧2), if the condition attribute with the largest information gain is "participation in training", the node is segmented according to the "participation in training" condition attribute to obtain the next-level nodes.
  • the conditional attributes used to segment multiple nodes on the same layer are usually different. The nodes formed after each segmentation have higher data purity than the nodes in the previous layer.
  • each leaf node represents a group.
  • each group contains a fixed ratio between the number of high performers and the number of low performers, and the ratios of different groups may differ from each other.
  • for example, the ratio of the leaf node "Group 1" may be 1:8.
  • of course, a leaf node may also contain only high performers or only low performers.
  • Each set of sample data can only be divided into one of the leaf nodes, that is, each set of sample data cannot belong to two or more leaf nodes at the same time.
  • in this example, the next segmentation is stopped when the number of layers of the decision tree reaches a preset number of layers (e.g., four); in another embodiment, the next segmentation is stopped when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
  • the information gain Gain(S, A) of each condition attribute can be calculated as follows:
  • Gain(S, A) = Entropy(S) − Entropy(S, A)
  • where Gain(S, A) represents the information gain of condition attribute A on the training set S, Entropy(S) represents the information entropy of the training set S, and Entropy(S, A) represents the information entropy of S after it is partitioned by attribute A.
  • as shown in Table 1, the training set has a total of 16 sets of sample data: 11 sets with excellent performance and 5 sets with poor performance.
  • the information gain calculation for the "travel situation" condition attribute is taken as an example; the information gain calculation for the other condition attributes proceeds in the same way.
  • step S13 the condition attributes are classified to determine multiple service types, and each service type corresponds to at least one condition attribute, and the condition attribute is used as an evaluation factor of the service type.
  • the service type may include a resource type, a hard-working type, and an open type.
  • Resource-based can refer to people with strong business ability and work ability.
  • Hard-working type can refer to people who study for a long time and work long hours each day, and open-type can refer to people who are active and social.
  • the evaluation factor is an evaluation index capable of characterizing important characteristics of the service type.
  • Each service type can correspond to one evaluation factor or at least two evaluation factors.
  • for the "resource" service type, the corresponding evaluation factor may be the business expansion situation.
  • for the "diligent" service type, the corresponding evaluation factors may be training participation, attendance rate, and so on.
  • for the "open" service type, the corresponding evaluation factors may be behavior trajectories (such as business trips), app activity, and so on.
  • the number of service types is M (M > 1, M is a natural number), and the number of groups is N (N > 1, N is a natural number); M may or may not be equal to N.
  • Step S14: determine the condition attributes involved in the process of dividing to obtain each group, count, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and use that number as the weight factor of each service type involved in the group.
  • for the sample data contained in "Group 8", the condition attributes involved are "travel situation", "participation in training", and "attendance rate".
  • the service type of the evaluation factor corresponding to the "travel situation" condition attribute is "open", and the service type of the evaluation factors corresponding to "participation in training" and "attendance rate" is "diligent".
  • therefore, among the condition attributes involved in "Group 8", a condition attribute corresponding to the "open" service type appears once, so the weight factor of the "open" service type is 1; condition attributes corresponding to the "diligent" service type appear twice, so the weight factor of the "diligent" service type is 2.
  • Step S15: select the service type with the largest weight factor, and classify the group into that service type.
  • selecting the service type with the largest weight factor and classifying the group into that service type includes:
  • step S151: when the service type with the largest weight factor has been selected, determine the number of such service types.
  • step S152: when there is only one service type with the largest weight factor, classify the group directly into that service type; when there are at least two service types with the largest weight factor, classify the group randomly into one of them.
  • of course, in other embodiments, when there are at least two service types with the largest weight factor, since the group simultaneously matches the characteristics of at least two service types, the group is allocated to those different service types simultaneously.
  • alternatively, when the processor 102 executes the computer-readable instructions 103, the functions of the modules/units in the group induction device embodiment described above are implemented, for example, units 301-305 in FIG. 3.
  • each group is objectively summarized based on the number of occurrences of the conditional attributes involved in each group in the same service type, so that the induction standard is unified; and, the grouping result is matched with the type of service requirements, which is beneficial to auxiliary services analysis.
  • the computer-readable instructions 103 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 101 and executed by the processor 102 to complete this application.
  • the one or more modules / units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 103 in the electronic device 1.
  • the computer-readable instructions 103 may be divided into an acquisition module 301, a training module 302, a classification module 303, a calculation module 304, and an induction module 305 in FIG. 3.
  • the electronic device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • those skilled in the art can understand that the schematic diagram is only an example of the electronic device 1 and does not constitute a limitation on the electronic device 1; the electronic device 1 may include more or fewer components than shown in the figure, combine some components, or have different components; for example, the electronic device 1 may further include input/output devices, network access devices, buses, and the like.
  • the so-called processor 102 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor, etc.; the processor 102 is the control center of the electronic device 1 and uses various interfaces and lines to connect the various parts of the entire electronic device 1.
  • the memory 101 may be configured to store the computer-readable instructions 103 and/or modules/units, and the processor 102 implements the various functions of the electronic device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 101 and by recalling the data stored in the memory 101.
  • the memory 101 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playback function, an image playback function, etc.), etc.; the data storage area may store data created according to the use of the electronic device 1 (such as audio data, a phonebook, etc.).
  • in addition, the memory 101 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
  • when the integrated module/unit of the electronic device 1 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may also be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a non-volatile readable storage medium, and when executed by a processor, the computer-readable instructions can implement the steps of the foregoing method embodiments.
  • the computer-readable instructions include computer-readable instruction codes, and the computer-readable instruction codes may be in a source code form, an object code form, an executable file, or some intermediate form.
  • the non-volatile readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on.
  • it should be noted that the content contained in the non-volatile readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-volatile readable medium does not include electric carrier signals and telecommunication signals.
  • each functional unit in each embodiment of the present application may be integrated in the same processing unit, or each unit may exist separately physically, or two or more units may be integrated in the same unit.
  • the integrated unit can be implemented in the form of hardware, or in the form of hardware plus software functional modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A grouping induction method, including: obtaining multiple sets of sample data; training a decision tree model, the decision tree model including multiple leaf nodes; classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type; determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and selecting the service type with the largest weight factor, and classifying the group into that service type. Also disclosed are a grouping induction device, an electronic device, and a storage medium. The method unifies the induction standard, helps improve the efficiency of data analysis during sample processing, and facilitates assisting service analysis.

Description

Grouping induction method and apparatus, electronic device, and storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on August 27, 2018, with application number 201810983116.X and the title "Grouping induction method and apparatus, electronic device, and computer-readable storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to a grouping induction method, a grouping induction device, an electronic device, and a storage medium.
Background
Existing group induction methods basically rely on manual induction based on service types (e.g., the diligent type, the resource type). However, this approach is subject to the subjective influence of individuals, and when the number of groups is large and each group involves a large number of features, the induction of the groups cannot be completed effectively by hand.
Summary
In view of the above, it is necessary to provide a grouping induction method, a grouping induction device, an electronic device, and a storage medium that can solve the above problems.
A preferred embodiment of the present application provides a grouping induction method, including: obtaining multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute; training a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group; classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type; determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and selecting the service type with the largest weight factor, and classifying the group into that service type.
A preferred embodiment of the present application also provides a grouping induction device, including: an acquisition module for obtaining multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute; a training module for training a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group; a classification module for classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type; a calculation module for determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and an induction module for selecting the service type with the largest weight factor and classifying the group into that service type.
A preferred embodiment of the present application further provides an electronic device, including a processor and a memory, the memory storing a grouping induction program, and the processor being configured to execute the grouping induction program to implement the grouping induction method described above.
A preferred embodiment of the present application further provides a non-volatile readable storage medium storing a grouping induction program which, when executed by a processor, implements the grouping induction method described above.
The embodiments of the present application objectively summarize each group based on the number of occurrences, within the same service type, of the condition attributes involved in that group, so that the induction standard is unified; moreover, the grouping results are matched with the types of service requirements, which is beneficial to assisting service analysis.
Brief Description of the Drawings
FIG. 1 is a flowchart of a grouping induction method provided by a preferred embodiment of the present application.
FIG. 2 is a schematic diagram of a decision tree trained by the grouping induction method of FIG. 1.
FIG. 3 is a schematic structural diagram of a grouping induction device provided by a preferred embodiment of the present application.
FIG. 4 is a schematic structural diagram of an electronic device provided by a preferred embodiment of the present application.
Detailed Description
In order to make the above objects, features, and advantages of the present application clearer, the present application is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, as long as they do not conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present application; the described embodiments are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of the present application. The terms used in the specification of the present application are for the purpose of describing specific embodiments only and are not intended to limit the present application.
FIG. 1 is a flowchart of a grouping induction method provided by a preferred embodiment of the present application. The grouping induction method is applied to an electronic device 1. According to different requirements, the order of the steps of the grouping induction method may be changed, and some steps may be omitted or combined. The grouping induction method includes the following steps:
Step S11: obtain multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute, where the decision attribute is the performance of the sample data.
The sample data needs to include data of both higher-performing persons (i.e., high performers) and poorly-performing persons (i.e., low performers), and the sample data may be stored in the electronic device. In another embodiment, the electronic device may also connect to an external sample library through a network to obtain the sample data stored in the sample library. In other embodiments, the electronic device may also collect data by big-data means and build the sample library.
In this embodiment, the condition attributes may be behavior trajectories (e.g., business trips), app activity, business expansion, consumption, interests and hobbies, training participation, attendance rate, and so on. The following takes the case where the condition attributes include business trips, business expansion, training participation, and attendance rate as an example; the sample data is shown in Table 1.
Table 1: sample data (16 sets of sample data; the table is published as an image in the original document and is not reproduced here).
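Since Table 1 is available only as an image, a purely hypothetical pair of rows (illustrative values, not the patent's actual data) shows the shape such sample data could take:

```python
# Hypothetical rows (Table 1 itself is published as an image and is not
# reproduced here): each set of sample data pairs four condition attributes
# with the decision attribute "performance"; values are discretized.
samples = [
    {"business trips": "C=0", "business expansion": "strong",
     "training participation": "frequent", "attendance rate": "high",
     "performance": "excellent"},
    {"business trips": "C>=2", "business expansion": "weak",
     "training participation": "rare", "attendance rate": "low",
     "performance": "poor"},
]
```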
Step S12: train a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group.
The decision tree algorithm is a supervised-learning classification algorithm, and a decision tree model represents a mapping relationship between object attributes and object values. The decision tree model has N layers (N a natural number, N > 2), and the electronic device sets the value of each node (including the root node, internal nodes, and leaf nodes) in the decision tree model according to the condition attributes. The first layer of the decision tree model is the root node, the second layer represents the multiple nodes obtained by segmenting the root node with the first-level condition attribute, the third layer represents the multiple nodes obtained by segmenting the previous layer's nodes with the second-level condition attributes, and so on. As shown in FIG. 2, the decision tree model includes four layers, that is, N=4.
In this embodiment, training the decision tree model specifically includes:
Step S121: use the sample data as a training set to calculate the information gain of each condition attribute.
Step S122: select the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes; the larger the information gain of a condition attribute, the more information selecting that attribute provides for classification, and the more effectively the sample data can be classified. As shown in FIG. 2, if the condition attribute with the largest information gain is "travel situation", the "travel situation" condition attribute is selected as the root node to segment the sample data.
Step S123: use the sample data contained in each node as a training set to recalculate the information gain of each condition attribute.
Step S124: select the condition attribute with the largest information gain to segment the node and obtain the next-level nodes. As shown in FIG. 2, for the node containing samples with zero business trips per month (C=0), if the condition attribute with the largest information gain is "business development situation", the node is segmented according to the "business development situation" condition attribute to obtain the next-level nodes. For the node containing samples with two or more business trips per month (C≧2), if the condition attribute with the largest information gain is "participation in training", the node is segmented according to the "participation in training" condition attribute to obtain the next-level nodes. In actual training, the condition attributes used to segment multiple nodes on the same layer are usually different, and the nodes formed after each segmentation have higher data purity than the nodes in the previous layer.
Step S125: perform steps S123 and S124 recursively until the segmentation stops. At this point, each leaf node represents one group; each group contains a fixed ratio between the number of high performers and the number of low performers, and the ratios of different groups may differ from each other. For example, the ratio of the leaf node "Group 1" may be 1:8. Of course, a leaf node may also contain only high performers or only low performers. Each set of sample data can be divided into only one of the leaf nodes; that is, a set of sample data cannot belong to two or more leaf nodes at the same time.
In this example, the next segmentation is stopped when the number of layers of the decision tree reaches a preset number of layers (e.g., four). In another embodiment, the next segmentation is stopped when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
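As an illustration of steps S121 to S125 (a minimal ID3-style sketch, not the patent's own code; condition attributes are assumed to take discrete values, as in the C=0 / C=1 / C≧2 buckets of the business-trip example, and the function names are ours):

```python
# ID3-style sketch of steps S121-S125: recursively split on the attribute
# with the largest information gain, stopping at a preset layer count or
# below a preset sample count.
from collections import Counter
from math import log2

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    parts = {}
    for row, y in zip(rows, labels):
        parts.setdefault(row[attr], []).append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys) for ys in parts.values())
    return entropy(labels) - remainder   # Gain(S, A) = Entropy(S) - Entropy(S, A)

def build_tree(rows, labels, attrs, depth=1, max_depth=4, min_samples=2):
    # Stop splitting at the preset layer count, below the preset sample
    # count, on a pure node, or when no attributes remain; the node then
    # becomes a leaf, i.e. one group, whose label counts give the ratio of
    # high performers to low performers described above.
    if depth >= max_depth or len(rows) < min_samples or len(set(labels)) == 1 or not attrs:
        return {"leaf": True, "counts": Counter(labels)}
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))   # steps S121/S123
    node = {"leaf": False, "attr": best, "children": {}}
    for value in {row[best] for row in rows}:                     # steps S122/S124
        pairs = [(r, y) for r, y in zip(rows, labels) if r[best] == value]
        sub_rows, sub_labels = zip(*pairs)
        node["children"][value] = build_tree(list(sub_rows), list(sub_labels),
                                             [a for a in attrs if a != best],
                                             depth + 1, max_depth, min_samples)
    return node
```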
The information gain Gain(S, A) of each condition attribute can be calculated as follows:
Gain(S, A) = Entropy(S) − Entropy(S, A)
where Gain(S, A) represents the information gain of condition attribute A on the training set S, Entropy(S) represents the information entropy of the training set S, and Entropy(S, A) represents the information entropy of S after it is partitioned by attribute A.
For example, as shown in Table 1, in step S121 the training set has 16 sets of sample data in total: 11 sets with excellent performance and 5 sets with poor performance. The information gain calculation for the "travel situation" condition attribute is taken as an example; the calculation for the other condition attributes proceeds in the same way. For C=0 there are 5 sets of sample data in total, 4 excellent and 1 poor; for C=1 there are 4 sets in total, 2 excellent and 2 poor; for C≧2 there are 7 sets in total, 5 excellent and 2 poor. Therefore, the information gain of the "travel situation" condition attribute is calculated as follows:
Entropy(S) = −(11/16)·log₂(11/16) − (5/16)·log₂(5/16) ≈ 0.896
Entropy(S, A) = (5/16)·Entropy(C=0) + (4/16)·Entropy(C=1) + (7/16)·Entropy(C≧2) = (5/16)×0.722 + (4/16)×1.000 + (7/16)×0.863 ≈ 0.853
Gain(S, A) = 0.896 − 0.853 ≈ 0.043
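The arithmetic above can be checked with a short script (a sketch assuming the standard ID3 definitions of entropy and information gain; the helper name is illustrative):

```python
# Quick numeric check of the worked "travel situation" example.
from math import log2

def H(*counts):  # information entropy of a class distribution
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c)

gain = H(11, 5) - (5/16 * H(4, 1) + 4/16 * H(2, 2) + 7/16 * H(5, 2))
print(round(gain, 3))  # 0.043
```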
Step S13: classify the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type.
The service types may include a resource type, a diligent type, an open type, and so on. The resource type may refer to persons with strong business ability and strong working ability; the diligent type may refer to persons who study for a long time and work long hours every day; the open type may refer to persons with an active personality who enjoy socializing. The evaluation factor is an evaluation index capable of characterizing important features of the service type. Each service type may correspond to one evaluation factor or to at least two evaluation factors.
For example, for the "resource" service type, the corresponding evaluation factor may be the business expansion situation; for the "diligent" service type, the corresponding evaluation factors may be training participation, attendance rate, and so on; for the "open" service type, the corresponding evaluation factors may be behavior trajectories (e.g., business trips), app activity, and so on.
The number of service types is M (M > 1, M a natural number), and the number of groups is N (N > 1, N a natural number); M may or may not be equal to N.
Step S14: determine the condition attributes involved in the process of dividing to obtain each group, count, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and use that number as the weight factor of each service type involved in the group.
For example, for the sample data contained in "Group 8", the condition attributes involved are "travel situation", "participation in training", and "attendance rate". The service type of the evaluation factor corresponding to the "travel situation" condition attribute is "open", and the service type of the evaluation factors corresponding to "participation in training" and "attendance rate" is "diligent". Therefore, among the condition attributes involved in "Group 8", a condition attribute corresponding to the "open" service type appears once, so the weight factor of the "open" service type is 1; condition attributes corresponding to the "diligent" service type appear twice, so the weight factor of the "diligent" service type is 2.
Step S15: select the service type with the largest weight factor, and classify the group into that service type.
For example, if in the sample data contained in "Group 8" the weight factor of the "open" service type is 1 and the weight factor of the "diligent" service type is 2, "Group 8" is classified into the "diligent" service type.
In this embodiment, selecting the service type with the largest weight factor and classifying the group into that service type includes:
Step S151: when the service type with the largest weight factor has been selected, determine the number of such service types.
Step S152: when there is only one service type with the largest weight factor, classify the group directly into that service type; when there are at least two service types with the largest weight factor, classify the group randomly into one of them. Of course, in other embodiments, when there are at least two service types with the largest weight factor, since the group simultaneously matches the characteristics of at least two service types, the group is allocated to those different service types simultaneously.
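Steps S14 and S15 then amount to counting and a maximum selection with a tie rule. A minimal sketch follows; the attribute-to-service-type mapping and function names below are illustrative assumptions based on the examples in the text, not part of the patent:

```python
# Sketch of steps S14-S15: count how often the condition attributes on a
# group's root-to-leaf path fall under each service type, then classify the
# group into the type(s) with the largest weight factor.
import random
from collections import Counter

SERVICE_TYPE_OF = {            # assumed mapping, following the description
    "business trips": "open",  # behavior trajectory -> open type
    "app activity": "open",
    "business expansion": "resource",
    "training participation": "diligent",
    "attendance rate": "diligent",
}

def classify_group(path_attributes, multi_assign=False):
    """Pick the service type(s) for one group (steps S151/S152)."""
    weights = Counter(SERVICE_TYPE_OF[a] for a in path_attributes)
    top = max(weights.values())
    best = [t for t, w in weights.items() if w == top]
    if len(best) == 1 or multi_assign:
        return best                  # unique winner, or assign to all ties
    return [random.choice(best)]     # random tie-break (step S152)

# "Group 8": one "open" attribute, two "diligent" -> ["diligent"]
print(classify_group(["business trips", "training participation", "attendance rate"]))
```

With the "Group 8" path attributes the weights are {"diligent": 2, "open": 1}, so the group is classified into the "diligent" service type, matching the example above; `multi_assign=True` corresponds to the alternative embodiment that allocates a tied group to every tied service type.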
FIG. 3 is a schematic structural diagram of a grouping induction device 300 provided by a preferred embodiment of the present application. In some embodiments, the grouping induction device 300 runs in an electronic device. The grouping induction device 300 may include multiple functional modules composed of program code segments. The program code of each program segment of the grouping induction device 300 may be stored in a memory of the electronic device and executed by at least one processor to implement the grouping induction function.
In this embodiment, the grouping induction device 300 may be divided into multiple functional modules according to the functions it performs. As shown in FIG. 3, the grouping induction device 300 includes: an acquisition module 301, a training module 302, a classification module 303, a calculation module 304, and an induction module 305. A module referred to in this application is a series of computer-readable instruction segments, stored in a memory, that can be executed by at least one processor to perform fixed functions. The functions of each module will be detailed in the subsequent embodiments.
The acquisition module 301 is configured to obtain multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute, where the decision attribute is the performance of the sample data.
The sample data needs to include data of both high performers and low performers, and the sample data may be stored in the electronic device. In another embodiment, the electronic device may also connect to an external sample library through a network to obtain the sample data stored in the sample library. In other embodiments, the electronic device may also collect data by big-data means and build the sample library.
In this embodiment, the condition attributes may be behavior trajectories (e.g., business trips), app activity, business expansion, consumption, interests and hobbies, training participation, attendance rate, and so on. The following takes the case where the condition attributes include business trips, business expansion, training participation, and attendance rate as an example; the sample data is shown in Table 1 above.
The training module 302 is configured to train a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group.
The decision tree algorithm is a supervised-learning classification algorithm, and a decision tree model represents a mapping relationship between object attributes and object values. The decision tree model has N layers (N a natural number, N > 2), and the training module 302 sets the value of each node (including the root node, internal nodes, and leaf nodes) in the decision tree model according to the condition attributes. The first layer of the decision tree model is the root node, the second layer represents the multiple nodes obtained by segmenting the root node with the first-level condition attribute, the third layer represents the multiple nodes obtained by segmenting the previous layer's nodes with the second-level condition attributes, and so on. As shown in FIG. 2, the decision tree model includes four layers, that is, N=4.
In this embodiment, the training module 302 uses the sample data as a training set to calculate the information gain of each condition attribute, and selects the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes. The larger the information gain of a condition attribute, the more information selecting that attribute provides for classification, and the more effectively the sample data can be classified. As shown in FIG. 2, if the condition attribute with the largest information gain is "travel situation", the "travel situation" condition attribute is selected as the root node to segment the sample data.
The training module 302 further uses the sample data contained in each node as a training set to recalculate the information gain of each condition attribute, and selects the condition attribute with the largest information gain to segment the node and obtain the next-level nodes. As shown in FIG. 2, for the node containing samples with zero business trips per month (C=0), if the condition attribute with the largest information gain is "business development situation", the node is segmented according to the "business development situation" condition attribute to obtain the next-level nodes. For the node containing samples with two or more business trips per month (C≧2), if the condition attribute with the largest information gain is "participation in training", the node is segmented according to the "participation in training" condition attribute to obtain the next-level nodes. In actual training, the condition attributes used to segment multiple nodes on the same layer are usually different, and the nodes formed after each segmentation have higher data purity than the nodes in the previous layer.
The training module 302 further performs the step of recalculating the information gain of each condition attribute and the step of selecting the condition attribute with the largest information gain to segment the node recursively, until the segmentation stops. At this point, each leaf node represents one group; each group contains a fixed ratio between the number of high performers and the number of low performers, and the ratios of different groups may differ from each other. For example, the ratio of the leaf node "Group 1" may be 1:8. Of course, a leaf node may also contain only high performers or only low performers. Each set of sample data can be divided into only one of the leaf nodes; that is, a set of sample data cannot belong to two or more leaf nodes at the same time.
In this example, the next segmentation is stopped when the number of layers of the decision tree reaches a preset number of layers (e.g., four). In another embodiment, the next segmentation is stopped when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
The information gain Gain(S, A) of each condition attribute can be calculated as follows:
Gain(S, A) = Entropy(S) − Entropy(S, A)
where Gain(S, A) represents the information gain of condition attribute A on the training set S, Entropy(S) represents the information entropy of the training set S, and Entropy(S, A) represents the information entropy of S after it is partitioned by attribute A.
For example, as shown in Table 1, when the training module 302 selects the root node, the training set has 16 sets of sample data in total: 11 sets with excellent performance and 5 sets with poor performance. The information gain calculation for the "travel situation" condition attribute is taken as an example; the calculation for the other condition attributes proceeds in the same way. For C=0 there are 5 sets of sample data in total, 4 excellent and 1 poor; for C=1 there are 4 sets in total, 2 excellent and 2 poor; for C≧2 there are 7 sets in total, 5 excellent and 2 poor. Therefore, the information gain of the "travel situation" condition attribute is calculated as follows:
Entropy(S) = −(11/16)·log₂(11/16) − (5/16)·log₂(5/16) ≈ 0.896
Entropy(S, A) = (5/16)·Entropy(C=0) + (4/16)·Entropy(C=1) + (7/16)·Entropy(C≧2) = (5/16)×0.722 + (4/16)×1.000 + (7/16)×0.863 ≈ 0.853
Gain(S, A) = 0.896 − 0.853 ≈ 0.043
The classification module 303 is configured to classify the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type.
The service types may include a resource type, a diligent type, an open type, and so on. The resource type may refer to persons with strong business ability and strong working ability; the diligent type may refer to persons who study for a long time and work long hours every day; the open type may refer to persons with an active personality who enjoy socializing. The evaluation factor is an evaluation index capable of characterizing important features of the service type. Each service type may correspond to one evaluation factor or to at least two evaluation factors.
For example, for the "resource" service type, the corresponding evaluation factor may be the business expansion situation; for the "diligent" service type, the corresponding evaluation factors may be training participation, attendance rate, and so on; for the "open" service type, the corresponding evaluation factors may be behavior trajectories (e.g., business trips), app activity, and so on.
The number of service types is M (M > 1, M a natural number), and the number of groups is N (N > 1, N a natural number); M may or may not be equal to N.
The calculation module 304 is configured to determine the condition attributes involved in the process of dividing to obtain each group, count, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and use that number as the weight factor of each service type involved in the group.
For example, for the sample data contained in "Group 8", the condition attributes involved are "travel situation", "participation in training", and "attendance rate". The service type of the evaluation factor corresponding to the "travel situation" condition attribute is "open", and the service type of the evaluation factors corresponding to "participation in training" and "attendance rate" is "diligent". Therefore, among the condition attributes involved in "Group 8", a condition attribute corresponding to the "open" service type appears once, so the weight factor of the "open" service type is 1; condition attributes corresponding to the "diligent" service type appear twice, so the weight factor of the "diligent" service type is 2.
The induction module 305 is configured to select the service type with the largest weight factor and classify the group into that service type.
For example, if in the sample data contained in "Group 8" the weight factor of the "open" service type is 1 and the weight factor of the "diligent" service type is 2, "Group 8" is classified into the "diligent" service type.
In this embodiment, when the induction module 305 has selected the service type with the largest weight factor, it determines the number of such service types. When there is only one service type with the largest weight factor, the induction module 305 classifies the group directly into that service type; when there are at least two, the induction module 305 randomly classifies the group into one of them. Of course, in other embodiments, when there are at least two service types with the largest weight factor, since the group simultaneously matches the characteristics of at least two service types, the induction module 305 allocates the group to those different service types simultaneously.
As shown in FIG. 4, FIG. 4 is a schematic structural diagram of an electronic device 1 implementing the grouping induction method in a preferred embodiment of the present application. The electronic device 1 includes a memory 101, a processor 102, and computer-readable instructions 103 (for example, a grouping induction program) stored in the memory 101 and executable on the processor 102.
When the processor 102 executes the computer-readable instructions 103, the steps of the grouping induction method in the above embodiments are implemented:
Step S11: obtain multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute, where the decision attribute is the performance of the sample data.
The sample data needs to include data of both high performers and low performers, and the sample data may be stored in the electronic device. In another embodiment, the electronic device may also connect to an external sample library through a network to obtain the sample data stored in the sample library. In other embodiments, the electronic device may also collect data by big-data means and build the sample library.
In this embodiment, the condition attributes may be behavior trajectories (e.g., business trips), app activity, business expansion, consumption, interests and hobbies, training participation, attendance rate, and so on. The following takes the case where the condition attributes include business trips, business expansion, training participation, and attendance rate as an example; the sample data is shown in Table 1 above.
Step S12: train a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group.
The decision tree algorithm is a supervised-learning classification algorithm, and a decision tree model represents a mapping relationship between object attributes and object values. The decision tree model has N layers (N a natural number, N > 2), and the electronic device sets the value of each node (including the root node, internal nodes, and leaf nodes) in the decision tree model according to the condition attributes. The first layer of the decision tree model is the root node, the second layer represents the multiple nodes obtained by segmenting the root node with the first-level condition attribute, the third layer represents the multiple nodes obtained by segmenting the previous layer's nodes with the second-level condition attributes, and so on. As shown in FIG. 2, the decision tree model includes four layers, that is, N=4.
In this embodiment, training the decision tree model specifically includes:
Step S121: use the sample data as a training set to calculate the information gain of each condition attribute.
Step S122: select the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes; the larger the information gain of a condition attribute, the more information selecting that attribute provides for classification, and the more effectively the sample data can be classified. As shown in FIG. 2, if the condition attribute with the largest information gain is "travel situation", the "travel situation" condition attribute is selected as the root node to segment the sample data.
Step S123: use the sample data contained in each node as a training set to recalculate the information gain of each condition attribute.
Step S124: select the condition attribute with the largest information gain to segment the node and obtain the next-level nodes. As shown in FIG. 2, for the node containing samples with zero business trips per month (C=0), if the condition attribute with the largest information gain is "business development situation", the node is segmented according to the "business development situation" condition attribute to obtain the next-level nodes. For the node containing samples with two or more business trips per month (C≧2), if the condition attribute with the largest information gain is "participation in training", the node is segmented according to the "participation in training" condition attribute to obtain the next-level nodes. In actual training, the condition attributes used to segment multiple nodes on the same layer are usually different, and the nodes formed after each segmentation have higher data purity than the nodes in the previous layer.
Step S125: perform steps S123 and S124 recursively until the segmentation stops. At this point, each leaf node represents one group; each group contains a fixed ratio between the number of high performers and the number of low performers, and the ratios of different groups may differ from each other. For example, the ratio of the leaf node "Group 1" may be 1:8. Of course, a leaf node may also contain only high performers or only low performers. Each set of sample data can be divided into only one of the leaf nodes; that is, a set of sample data cannot belong to two or more leaf nodes at the same time.
In this example, the next segmentation is stopped when the number of layers of the decision tree reaches a preset number of layers (e.g., four). In another embodiment, the next segmentation is stopped when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
The information gain Gain(S, A) of each condition attribute can be calculated as follows:
Gain(S, A) = Entropy(S) − Entropy(S, A)
where Gain(S, A) represents the information gain of condition attribute A on the training set S, Entropy(S) represents the information entropy of the training set S, and Entropy(S, A) represents the information entropy of S after it is partitioned by attribute A.
For example, as shown in Table 1, in step S121 the training set has 16 sets of sample data in total: 11 sets with excellent performance and 5 sets with poor performance. The information gain calculation for the "travel situation" condition attribute is taken as an example; the calculation for the other condition attributes proceeds in the same way. For C=0 there are 5 sets of sample data in total, 4 excellent and 1 poor; for C=1 there are 4 sets in total, 2 excellent and 2 poor; for C≧2 there are 7 sets in total, 5 excellent and 2 poor. Therefore, the information gain of the "travel situation" condition attribute is calculated as follows:
Entropy(S) = −(11/16)·log₂(11/16) − (5/16)·log₂(5/16) ≈ 0.896
Entropy(S, A) = (5/16)·Entropy(C=0) + (4/16)·Entropy(C=1) + (7/16)·Entropy(C≧2) = (5/16)×0.722 + (4/16)×1.000 + (7/16)×0.863 ≈ 0.853
Gain(S, A) = 0.896 − 0.853 ≈ 0.043
Step S13: classify the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type.
The service types may include a resource type, a diligent type, an open type, and so on. The resource type may refer to persons with strong business ability and strong working ability; the diligent type may refer to persons who study for a long time and work long hours every day; the open type may refer to persons with an active personality who enjoy socializing. The evaluation factor is an evaluation index capable of characterizing important features of the service type. Each service type may correspond to one evaluation factor or to at least two evaluation factors.
For example, for the "resource" service type, the corresponding evaluation factor may be the business expansion situation; for the "diligent" service type, the corresponding evaluation factors may be training participation, attendance rate, and so on; for the "open" service type, the corresponding evaluation factors may be behavior trajectories (e.g., business trips), app activity, and so on.
The number of service types is M (M > 1, M a natural number), and the number of groups is N (N > 1, N a natural number); M may or may not be equal to N.
Step S14: determine the condition attributes involved in the process of dividing to obtain each group, count, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and use that number as the weight factor of each service type involved in the group.
For example, for the sample data contained in "Group 8", the condition attributes involved are "travel situation", "participation in training", and "attendance rate". The service type of the evaluation factor corresponding to the "travel situation" condition attribute is "open", and the service type of the evaluation factors corresponding to "participation in training" and "attendance rate" is "diligent". Therefore, among the condition attributes involved in "Group 8", a condition attribute corresponding to the "open" service type appears once, so the weight factor of the "open" service type is 1; condition attributes corresponding to the "diligent" service type appear twice, so the weight factor of the "diligent" service type is 2.
Step S15: select the service type with the largest weight factor, and classify the group into that service type.
For example, if in the sample data contained in "Group 8" the weight factor of the "open" service type is 1 and the weight factor of the "diligent" service type is 2, "Group 8" is classified into the "diligent" service type.
In this embodiment, selecting the service type with the largest weight factor and classifying the group into that service type includes:
Step S151: when the service type with the largest weight factor has been selected, determine the number of such service types.
Step S152: when there is only one service type with the largest weight factor, classify the group directly into that service type; when there are at least two service types with the largest weight factor, classify the group randomly into one of them. Of course, in other embodiments, when there are at least two service types with the largest weight factor, since the group simultaneously matches the characteristics of at least two service types, the group is allocated to those different service types simultaneously.
Alternatively, when the processor 102 executes the computer-readable instructions 103, the functions of the modules/units in the grouping induction device embodiment described above are implemented, for example, units 301-305 in FIG. 3.
The embodiments of the present application objectively summarize each group based on the number of occurrences, within the same service type, of the condition attributes involved in that group, so that the induction standard is unified; moreover, the grouping results are matched with the types of service requirements, which is beneficial to assisting service analysis.
Exemplarily, the computer-readable instructions 103 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 101 and executed by the processor 102 to complete this application. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 103 in the electronic device 1. For example, the computer-readable instructions 103 may be divided into the acquisition module 301, the training module 302, the classification module 303, the calculation module 304, and the induction module 305 in FIG. 3.
The electronic device 1 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. Those skilled in the art can understand that the schematic diagram is only an example of the electronic device 1 and does not constitute a limitation on the electronic device 1; the electronic device 1 may include more or fewer components than shown in the figure, combine some components, or have different components; for example, the electronic device 1 may further include input/output devices, network access devices, buses, and the like.
The so-called processor 102 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor or any conventional processor. The processor 102 is the control center of the electronic device 1 and uses various interfaces and lines to connect the various parts of the entire electronic device 1.
The memory 101 may be configured to store the computer-readable instructions 103 and/or modules/units, and the processor 102 implements the various functions of the electronic device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 101 and by recalling the data stored in the memory 101. The memory 101 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application required by at least one function (such as a sound playback function, an image playback function, etc.), etc.; the data storage area may store data created according to the use of the electronic device 1 (such as audio data, a phonebook, etc.). In addition, the memory 101 may include a high-speed random access memory, and may also include a non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
When the integrated module/unit of the electronic device 1 is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a non-volatile readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of this application may also be completed by computer-readable instructions instructing the relevant hardware; the computer-readable instructions may be stored in a non-volatile readable storage medium, and when executed by a processor, the computer-readable instructions can implement the steps of the foregoing method embodiments. The computer-readable instructions include computer-readable instruction code, which may be in source code form, object code form, an executable file, or some intermediate form. The non-volatile readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the non-volatile readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the non-volatile readable medium does not include electric carrier signals and telecommunication signals.
In the several embodiments provided in this application, it should be understood that the disclosed electronic device and method may be implemented in other ways. For example, the electronic device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other divisions in actual implementation.
In addition, the functional units in the embodiments of the present application may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, the embodiments should be regarded in every respect as exemplary and non-limiting, and the scope of the present application is defined by the appended claims rather than by the above description; all changes falling within the meaning and scope of equivalent elements of the claims are therefore intended to be embraced in the present application. No reference sign in a claim should be construed as limiting the claim concerned. Moreover, it is obvious that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or electronic devices recited in an electronic device claim may also be implemented by the same unit or electronic device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent replacements can be made to the technical solutions of the present application without departing from the spirit and scope of the technical solutions of the present application.

Claims (20)

  1. A grouping induction method, comprising:
    obtaining multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute;
    training a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group;
    classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type;
    determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and
    selecting the service type with the largest weight factor, and classifying the group into that service type.
  2. The grouping induction method according to claim 1, wherein training the decision tree model comprises:
    using the sample data as a training set to calculate the information gain of each condition attribute;
    selecting the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes;
    using the sample data contained in each node as a training set to recalculate the information gain of each condition attribute;
    selecting the condition attribute with the largest information gain to segment the node and obtain the next-level nodes; and
    performing the step of recalculating the information gain of each condition attribute and the step of selecting the condition attribute with the largest information gain to segment the node recursively, until the segmentation stops.
  3. The grouping induction method according to claim 2, wherein the next segmentation is stopped when the number of layers of the decision tree reaches a preset number of layers.
  4. The grouping induction method according to claim 2, wherein the next segmentation is stopped when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
  5. The grouping induction method according to claim 1, wherein selecting the service type with the largest weight factor and classifying the group into that service type comprises:
    when the service type with the largest weight factor has been selected, determining the number of such service types; and
    when there is only one service type with the largest weight factor, classifying the group directly into that service type, and when there are at least two service types with the largest weight factor, classifying the group randomly into one of them.
  6. The grouping induction method according to claim 1, wherein selecting the service type with the largest weight factor and classifying the group into that service type comprises:
    when the service type with the largest weight factor has been selected, determining the number of such service types; and
    when there is only one service type with the largest weight factor, classifying the group directly into that service type, and when there are at least two service types with the largest weight factor, allocating the group to the different service types simultaneously.
  7. The grouping induction method according to claim 1, wherein the decision attribute is the performance of the sample data, and the sample data includes data of both high performers and low performers.
  8. A grouping induction device, comprising:
    an acquisition module, configured to obtain multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute;
    a training module, configured to train a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group;
    a classification module, configured to classify the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type;
    a calculation module, configured to determine the condition attributes involved in the process of dividing to obtain each group, count, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and use that number as the weight factor of each service type involved in the group; and
    an induction module, configured to select the service type with the largest weight factor and classify the group into that service type.
  9. An electronic device, comprising a processor and a memory, the memory storing at least one computer-readable instruction, and the processor being configured to execute the computer-readable instruction to implement the following steps:
    obtaining multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute;
    training a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group;
    classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type;
    determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and
    selecting the service type with the largest weight factor, and classifying the group into that service type.
  10. The electronic device according to claim 9, wherein, when training the decision tree model, the processor executes the computer-readable instruction to implement the following steps:
    using the sample data as a training set to calculate the information gain of each condition attribute;
    selecting the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes;
    using the sample data contained in each node as a training set to recalculate the information gain of each condition attribute;
    selecting the condition attribute with the largest information gain to segment the node and obtain the next-level nodes; and
    performing the step of recalculating the information gain of each condition attribute and the step of selecting the condition attribute with the largest information gain to segment the node recursively, until the segmentation stops.
  11. The electronic device according to claim 10, wherein the processor further executes the computer-readable instruction to implement the following step:
    stopping the next segmentation when the number of layers of the decision tree reaches a preset number of layers.
  12. The electronic device according to claim 10, wherein the processor further executes the computer-readable instruction to implement the following step:
    stopping the next segmentation when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
  13. The electronic device according to claim 9, wherein, when selecting the service type with the largest weight factor and classifying the group into that service type, the processor executes the computer-readable instruction to implement the following steps:
    when the service type with the largest weight factor has been selected, determining the number of such service types; and
    when there is only one service type with the largest weight factor, classifying the group directly into that service type, and when there are at least two service types with the largest weight factor, classifying the group randomly into one of them.
  14. The electronic device according to claim 9, wherein, when selecting the service type with the largest weight factor and classifying the group into that service type, the processor executes the computer-readable instruction to implement the following steps:
    when the service type with the largest weight factor has been selected, determining the number of such service types; and
    when there is only one service type with the largest weight factor, classifying the group directly into that service type, and when there are at least two service types with the largest weight factor, allocating the group to the different service types simultaneously.
  15. A non-volatile readable storage medium, wherein at least one computer-readable instruction is stored on the non-volatile readable storage medium, and the at least one computer-readable instruction is executed by a processor to implement the following steps:
    obtaining multiple sets of sample data, each set of sample data including multiple condition attributes and a corresponding decision attribute;
    training a decision tree model according to the sample data, the decision tree model including multiple leaf nodes, each leaf node representing one group;
    classifying the condition attributes to determine multiple service types, each service type corresponding to at least one condition attribute, the condition attribute serving as an evaluation factor of the service type;
    determining the condition attributes involved in the process of dividing to obtain each group, counting, according to the service type to which the evaluation factor corresponding to each condition attribute belongs, the number of occurrences of the condition attributes corresponding to the same service type, and using that number as the weight factor of each service type involved in the group; and
    selecting the service type with the largest weight factor, and classifying the group into that service type.
  16. The storage medium according to claim 15, wherein, when training the decision tree model, the computer-readable instruction is executed by the processor to implement the following steps:
    using the sample data as a training set to calculate the information gain of each condition attribute;
    selecting the condition attribute with the largest information gain as the root node of the decision tree model to segment the sample data and obtain the next-level nodes;
    using the sample data contained in each node as a training set to recalculate the information gain of each condition attribute;
    selecting the condition attribute with the largest information gain to segment the node and obtain the next-level nodes; and
    performing the step of recalculating the information gain of each condition attribute and the step of selecting the condition attribute with the largest information gain to segment the node recursively, until the segmentation stops.
  17. The storage medium according to claim 16, wherein the computer-readable instruction, when executed by the processor, further implements the following step:
    stopping the next segmentation when the number of layers of the decision tree reaches a preset number of layers.
  18. The storage medium according to claim 16, wherein the computer-readable instruction, when executed by the processor, further implements the following step:
    stopping the next segmentation when the number of samples contained in each node of the current layer of the decision tree is less than a preset number.
  19. The storage medium according to claim 15, wherein, when selecting the service type with the largest weight factor and classifying the group into that service type, the computer-readable instruction is executed by the processor to implement the following steps:
    when the service type with the largest weight factor has been selected, determining the number of such service types; and
    when there is only one service type with the largest weight factor, classifying the group directly into that service type, and when there are at least two service types with the largest weight factor, classifying the group randomly into one of them.
  20. The storage medium according to claim 15, wherein, when selecting the service type with the largest weight factor and classifying the group into that service type, the computer-readable instruction is executed by the processor to implement the following steps:
    when the service type with the largest weight factor has been selected, determining the number of such service types; and
    when there is only one service type with the largest weight factor, classifying the group directly into that service type, and when there are at least two service types with the largest weight factor, allocating the group to the different service types simultaneously.
PCT/CN2019/077223 2018-08-27 2019-03-06 Grouping induction method and apparatus, electronic device, and storage medium WO2020042579A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810983116.X 2018-08-27
CN201810983116.XA CN109242012A (zh) 2018-08-27 Grouping induction method and apparatus, electronic device, and computer-readable storage medium

Publications (1)

Publication Number Publication Date
WO2020042579A1 (zh) 2020-03-05

Family

ID=65069305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/077223 2018-08-27 2019-03-06 Grouping induction method and apparatus, electronic device, and storage medium WO2020042579A1 (zh)

Country Status (2)

Country Link
CN (1) CN109242012A (zh)
WO (1) WO2020042579A1 (zh)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109242012A (zh) 2018-08-27 2019-01-18 平安科技(深圳)有限公司 Grouping induction method and apparatus, electronic device, and computer-readable storage medium
CN109902129B (zh) * 2019-01-25 2023-06-20 平安科技(深圳)有限公司 Insurance agent classification method based on big data analysis and related device
CN109992699B (zh) * 2019-02-28 2023-08-11 平安科技(深圳)有限公司 User group optimization method and apparatus, storage medium, and computer device
CN111144495B (zh) * 2019-12-27 2024-03-22 浙江宇视科技有限公司 Service distribution method, apparatus, and medium
CN112835682B (zh) * 2021-02-25 2024-04-05 平安消费金融有限公司 Data processing method and apparatus, computer device, and readable storage medium


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107203774A (zh) * 2016-03-17 2017-09-26 阿里巴巴集团控股有限公司 Method and apparatus for predicting the category to which data belongs
CN107292186A (zh) * 2016-03-31 2017-10-24 阿里巴巴集团控股有限公司 Random-forest-based model training method and apparatus
CN108205570A (zh) * 2016-12-19 2018-06-26 华为技术有限公司 Data detection method and apparatus
CN108108455A (zh) * 2017-12-28 2018-06-01 广东欧珀移动通信有限公司 Destination pushing method and apparatus, storage medium, and electronic device
CN109242012A (zh) 2018-08-27 2019-01-18 平安科技(深圳)有限公司 Grouping induction method and apparatus, electronic device, and computer-readable storage medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113782121A (zh) * 2021-08-06 2021-12-10 中国中医科学院中医药信息研究所 Random grouping method and apparatus, computer device, and storage medium
CN113782121B (zh) * 2021-08-06 2024-03-19 中国中医科学院中医药信息研究所 Random grouping method and apparatus, computer device, and storage medium
CN116562769A (zh) * 2023-06-15 2023-08-08 深圳爱巧网络有限公司 Cargo data analysis method and system based on cargo attribute classification

Also Published As

Publication number Publication date
CN109242012A (zh) 2019-01-18

Similar Documents

Publication Publication Date Title
WO2020042579A1 (zh) Grouping induction method and apparatus, electronic device, and storage medium
WO2020042580A1 (zh) Personnel grouping method and apparatus, electronic device, and storage medium
US11238310B2 (en) Training data acquisition method and device, server and storage medium
WO2022126971A1 (zh) Density-based text clustering method and apparatus, device, and storage medium
US10268414B2 (en) Data stream processor and method to throttle consumption of message data in a distributed computing system
WO2020220758A1 (zh) Method and apparatus for detecting abnormal transaction nodes
US11863439B2 (en) Method, apparatus and storage medium for application identification
WO2020042583A1 (zh) Method, system, computer device, and medium for identifying types of potential high performers
CN109710413B (zh) Overall computation method for a rule engine system for semi-structured text data
WO2020119053A1 (zh) Picture clustering method and apparatus, storage medium, and terminal device
WO2021017290A1 (zh) Knowledge-graph-based data augmentation method and system for entity recognition
WO2019232927A1 (zh) Distributed data deletion flow-control method and apparatus, electronic device, and storage medium
WO2015154484A1 (zh) Traffic data classification method and apparatus
WO2019232926A1 (zh) Data consistency check flow-control method and apparatus, electronic device, and storage medium
WO2019233089A1 (zh) Method and apparatus for large-scale reduction of Internet testbed topology
WO2021027331A1 (zh) Graph-data-based full relationship calculation method, apparatus, device, and storage medium
CN115600128A (zh) Semi-supervised encrypted traffic classification method, apparatus, and storage medium
CN115273191A (zh) Face archiving method, face recognition method, apparatus, device, and medium
WO2019119635A1 (zh) Seed user expansion method, electronic device, and computer-readable storage medium
CN110876072A (zh) Method, storage medium, electronic device, and system for identifying batch-registered users
CN111222032B (zh) Public opinion analysis method and related device
CN113207101A (zh) Information processing method based on 5G urban component sensors and IoT cloud platform
CN114723652A (zh) Cell density determination method and apparatus, electronic device, and storage medium
CN116611914A (zh) Salary prediction method and device based on grouped statistics
CN109522915B (zh) Virus file clustering method, apparatus, and readable medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19854207

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19854207

Country of ref document: EP

Kind code of ref document: A1