WO2017159403A1 - Prediction system, method, and program - Google Patents
Prediction system, method, and program
- Publication number
- WO2017159403A1 PCT/JP2017/008489 (JP2017008489W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- cluster
- customer
- prediction
- master data
- prediction model
- Prior art date
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING
- G06N5/048—Fuzzy inferencing
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/285—Clustering or classification (under G06F16/20 structured data; G06F16/28 database models; G06F16/284 relational databases)
- G06N20/00—Machine learning
- G06N20/10—Machine learning using kernel methods, e.g. support vector machines [SVM]
- G06N7/01—Probabilistic graphical models, e.g. probabilistic networks
- G06Q10/067—Enterprise or organisation modelling
- G06Q30/0201—Market modelling; Market analysis; Collecting market data
- G06F17/11—Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
Definitions
- the present invention relates to a prediction system, a prediction method, and a prediction program that predict an unknown value of an attribute.
- Supervised learning, represented by regression and classification, is used for various analysis processes such as product demand prediction at retail stores and power consumption prediction. Supervised learning learns the relationship between input and output from given input-output pairs and, when given an unknown input, predicts the output based on the learned relationship.
- Non-Patent Document 1 describes a technique that uses a mixture model, one form of Mixture of Experts.
- the technology described in Non-Patent Document 1 clusters data (for example, product ID) based on data properties (for example, product price), and generates a prediction model for each cluster.
- A prediction model is generated from “data having similar properties” belonging to the same cluster. Therefore, compared with generating a single prediction model for the entire data set, the technique described in Non-Patent Document 1 can generate prediction models that capture finer detail, and the prediction accuracy improves.
- FIG. 23 is a diagram exemplifying the results of graphing the age and the number of times of use for the six persons.
- the x-axis indicates age
- the y-axis indicates the number of uses.
- The function can be represented as the straight line shown in FIG. 23. The value of y obtained by substituting age x into this function is the predicted value of the number of uses. As can be seen from FIG. 23, the difference between this predicted value and the actual number of uses is large, and the prediction accuracy is low.
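The situation in FIG. 23 can be reproduced with a small sketch: when two groups with different behavior are mixed, a single regression line fits neither group well. The ages and usage counts below are invented for illustration, not the document's actual data:

```python
import numpy as np

# Hypothetical data for six customers: (age, annual number of uses).
# Two latent groups are mixed: three high-usage customers and three
# low-usage customers.
ages = np.array([25.0, 30.0, 35.0, 40.0, 50.0, 60.0])
uses = np.array([20.0, 18.0, 16.0, 2.0, 3.0, 1.0])

# A single linear model y = a*x + b fitted to all six customers at once.
a, b = np.polyfit(ages, uses, deg=1)
predicted = a * ages + b

# Mean absolute error of the single global model: large, because one
# straight line cannot follow two distinct groups.
mae = np.mean(np.abs(predicted - uses))
print(round(float(mae), 2))  # → 3.0
```

Fitting one model per group instead (as in FIG. 24) would reduce this error substantially.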
- FIG. 24 shows an example of the age and the number of uses for each cluster and the prediction model in this case.
- FIG. 24A is a graph corresponding to the “beauty group”, and FIG. 24B is a graph corresponding to the “liquor lover” group.
- the x-axis indicates the age
- the y-axis indicates the number of uses.
- Non-Patent Document 2 describes learning using IRM (Infinite Relational Model).
- the learning described in Non-Patent Document 2 does not allow an unknown value to exist in the data set.
- the data set used for learning is a set of customer IDs and various attribute values of the customer.
- In Non-Patent Document 1, a data set (for example, customer information) is clustered using attribute values of the data itself (for example, customer age), and a prediction model of an unknown attribute (for example, customer income) is generated for each customer cluster having similar attributes. It is assumed that the attribute is unknown for some of the data while data whose attribute value is known also exists; in the above example, data in which the customer's income is known and data in which it is unknown are mixed. By generating prediction models in this way, a model that captures the characteristics of each cluster can be generated, and the prediction accuracy can be improved.
- an object of the present invention is to provide a prediction system, a prediction method, and a prediction program capable of predicting an unknown attribute value with high accuracy.
- The prediction system is provided with co-clustering means for co-clustering a first ID, which is the ID of a record in first master data, and a second ID, which is the ID of a record in second master data, based on fact data indicating the relationship between the first ID and the second ID; prediction model generation means for generating a prediction model for each cluster of the first ID output by the co-clustering means; and prediction means for, when a first ID and an objective variable that is one of the attributes included in the first master data are specified, predicting the value of the objective variable corresponding to the specified first ID based on the prediction model and the membership probability that the first ID belongs to each cluster.
- The prediction system uses first master data including customers and customer attributes, second master data including products and product attributes, and fact data indicating the relationship between customers and products.
- The system is provided with co-clustering means for co-clustering customers and products, prediction model generation means for generating a prediction model for each customer cluster output by the co-clustering means, and prediction means for, when a customer and an objective variable that is one of the customer attributes are specified, predicting the value of the objective variable corresponding to the specified customer based on the prediction model and the membership probability that the specified customer belongs to each cluster.
- In the prediction method, a first ID, which is the ID of a record in first master data, and a second ID, which is the ID of a record in second master data, are co-clustered based on fact data indicating the relationship between the first ID and the second ID; a prediction model is generated for each cluster of the first ID; and when a first ID and an objective variable that is one of the attributes included in the first master data are specified, the value of the objective variable corresponding to the first ID is predicted based on the prediction model and the membership probability that the first ID belongs to each cluster.
- In the prediction method, customers and products are co-clustered based on first master data including customers and customer attributes, second master data including products and product attributes, and fact data indicating the relationship between customers and products; a prediction model is generated for each customer cluster; and when a customer and an objective variable that is one of the customer attributes are specified, the value of the objective variable corresponding to the specified customer is predicted based on the prediction model and the membership probability that the specified customer belongs to each cluster.
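The final prediction step combines the per-cluster models through the membership probabilities. A minimal sketch, in which the two cluster models and the membership vector are hypothetical placeholders rather than the actual learned models:

```python
# Per-cluster prediction models: here simple linear functions of age,
# standing in for whatever model type (e.g. support vector regression)
# was designated in the clustering settings.
def model_cluster_a(age):   # hypothetical model for one customer cluster
    return 30.0 - 0.4 * age

def model_cluster_b(age):   # hypothetical model for another customer cluster
    return 5.0 - 0.05 * age

def predict(age, membership):
    """Weight each cluster model's output by the customer's membership
    probability in that cluster (soft clustering); with a hard
    assignment this reduces to a single cluster's model."""
    models = [model_cluster_a, model_cluster_b]
    return sum(p * m(age) for p, m in zip(membership, models))

# A customer aged 30 who belongs to the first cluster with probability 0.9.
y = predict(30.0, [0.9, 0.1])
print(y)  # → 16.55
```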
- The prediction program causes a computer to execute: co-clustering processing for co-clustering a first ID, which is the ID of a record in first master data, and a second ID, which is the ID of a record in second master data, based on fact data indicating the relationship between the first ID and the second ID; prediction model generation processing for generating a prediction model for each cluster of the first ID output by the co-clustering processing; and prediction processing for, when a first ID and an objective variable that is one of the attributes included in the first master data are specified, predicting the value of the objective variable corresponding to the first ID based on the prediction model and the membership probability that the first ID belongs to each cluster.
- The prediction program causes a computer to execute: co-clustering processing for co-clustering customers and products based on first master data including customers and customer attributes, second master data including products and product attributes, and fact data indicating the relationship between customers and products; prediction model generation processing for generating a prediction model for each customer cluster output by the co-clustering processing; and prediction processing for, when a customer and an objective variable that is one of the customer attributes are specified, predicting the value of the objective variable corresponding to the specified customer based on the prediction model and the membership probability that the specified customer belongs to each cluster.
- According to the present invention, an unknown value of an attribute can be predicted with high accuracy.
- FIG. 4 is an explanatory diagram illustrating an example of the result of integrating the first master data, the second master data, and the fact data illustrated in FIGS. 1 to 3. Further figures include an explanatory diagram showing an example of the first master data, an explanatory diagram showing an example of the second master data, an explanatory diagram showing an example of the fact data, a functional block diagram showing an example of the prediction system of the second embodiment of the present invention, and a flowchart showing an example of the processing flow of the second embodiment.
- First master data, second master data, and fact data are provided.
- the master data may be referred to as dimension data.
- first master data and the second master data may be referred to as first dimension data and second dimension data, respectively.
- fact data may be referred to as transaction data or performance data.
- the first master data and the second master data each include a plurality of records.
- the ID of the record of the first master data is referred to as a first ID.
- the ID of the record of the second master data is referred to as a second ID.
- the first ID and the attribute value corresponding to the first ID are associated with each other.
- Regarding a specific attribute, the value is unknown in some records.
- the second ID is associated with the attribute value corresponding to the second ID.
- the value may be unknown in some records regarding a specific attribute.
- the case where all the attribute values are defined in the second master data will be described as an example.
- In the following description, the first ID is a customer ID and the second ID is a product ID; however, the first ID and the second ID are not limited to a customer ID and a product ID.
- FIG. 1 is an explanatory diagram showing an example of first master data.
- “?” indicates that the value is unknown.
- “age”, “annual income”, and “the number of times the esthetic salon is used annually” are illustrated as attributes corresponding to the customer ID (first ID).
- In some records, a value of “the number of times the esthetic salon is used per year” is set; in other records, the value of “the number of times the esthetic salon is used per year” is unknown.
- the values of other attributes (“age”, “annual income”) are determined in each record. It can be said that the master data illustrated in FIG. 1 is customer data.
- FIG. 2 is an explanatory diagram showing an example of second master data.
- “Product name” and “price” are illustrated as attributes corresponding to the product ID (second ID). All the attribute values shown in FIG. 2 are defined.
- the master data illustrated in FIG. 2 is product data.
- the fact data is data indicating the relationship between the first ID and the second ID.
- FIG. 3 is an explanatory diagram showing an example of fact data.
- a relationship is indicated as to whether or not the customer specified by the customer ID (first ID) has a record of purchasing the product specified by the product ID (second ID).
- “1” indicates that the customer has purchased the product
- “0” indicates that there is no record.
- “Customer 1” has purchased “Product 1” but has not purchased “Product 2”.
- the value indicating the relationship between the first ID and the second ID is not limited to binary (“0” and “1”).
- the value indicating the relationship between the customer ID and the product ID may be the number of products purchased by the customer.
- the fact data illustrated in FIG. 3 can be said to be purchase record data.
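The purchase-record fact data of FIG. 3 can be represented as a binary matrix whose rows are customer IDs and whose columns are product IDs. Except for the fact that customer 1 bought product 1 and not product 2, which is stated above, the entries below are invented for illustration:

```python
import numpy as np

# Rows: customers 1..3, columns: products 1..4.
# fact[i, j] == 1 means customer i+1 has a record of buying product j+1.
fact = np.array([
    [1, 0, 1, 0],   # customer 1 bought product 1 but not product 2
    [1, 1, 0, 0],
    [0, 0, 1, 1],
])

# As noted in the text, the value need not be binary; it could instead
# be, e.g., the number of units the customer purchased.
print(fact[0, 0], fact[0, 1])  # → 1 0
```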
- Clustering is a task of dividing data into a plurality of groups called clusters.
- In clustering, some kind of property is defined on the data, and the data is divided so that data having similar properties belong to the same cluster.
- Clustering includes hard clustering and soft clustering.
- FIG. 4 is a schematic diagram illustrating an example of a result of hard clustering.
- FIG. 5 is a schematic diagram illustrating an example of the result of soft clustering.
- Hard clustering can be regarded as clustering in which each data item's membership probability is “1.0” in one cluster and “0.0” in all remaining clusters. That is, the result of hard clustering can also be expressed by binary membership probabilities. Further, in the process of deriving the result of hard clustering, membership probabilities in the range 0.0 to 1.0 may be used; finally, for each data item, the membership probability of the cluster with the maximum value may be set to “1.0” and that of every other cluster to “0.0”.
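The relation between the two can be sketched directly: a soft membership vector is turned into a hard one by placing probability 1.0 on the arg-max cluster. This is a minimal illustration of the final step described above, not the document's full algorithm:

```python
def harden(membership):
    """Convert soft membership probabilities (summing to 1.0) into a
    hard assignment: 1.0 for the most probable cluster, 0.0 elsewhere."""
    best = max(range(len(membership)), key=lambda k: membership[k])
    return [1.0 if k == best else 0.0 for k in range(len(membership))]

soft = [0.2, 0.7, 0.1]      # soft clustering result over three clusters
hard = harden(soft)
print(hard)  # → [0.0, 1.0, 0.0]
```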
- Embodiment 1. The inventor of the present invention examined a process that, using the IRM described in Non-Patent Document 2, co-clusters the first ID and the second ID when the first master data, the second master data, and the fact data are given. The flow of this process is described below, followed by the co-clustering process performed in the first embodiment of the present invention under the same inputs.
- a probability model is held between each cluster of the first ID and each cluster of the second ID (on the product space of the clusters).
- a probability model is typically a Bernoulli distribution that represents the strength of the relationship between clusters.
- The membership probability of an ID in a cluster depends on the value of the probability model between that cluster and each cluster of the other ID (in this example, the second ID). For example, the probability that a certain customer ID belongs to a certain customer ID cluster is determined by how many of the products indicated by product IDs belonging to the product ID clusters closely related to that customer ID cluster the customer indicated by the customer ID has purchased.
- the belonging probability to each cluster of the first ID (each cluster having the first ID as an element) and the belonging probability to each cluster of the second ID (each cluster having the second ID as an element) are updated.
- the affiliation probability is determined from fact data (for example, purchase record data illustrated in FIG. 3) and attributes corresponding to the first ID and the second ID (for example, the age of the customer and the price of the product).
- the weight (prior probability) of each cluster of the first ID and the weight (prior probability) of each cluster of the second ID are updated. For example, when there are many records of young people in the first master data (see FIG. 1), the prior probability that the first ID belongs to the cluster of the younger generation is increased.
- the cluster model information is information indicating the statistical properties of the attribute values corresponding to the IDs belonging to the cluster. It can be said that the model information of a cluster expresses the properties of typical elements of the cluster. For example, the cluster model information can be represented by the average or variance of attribute values corresponding to IDs belonging to the cluster.
- Since the membership probability of the first ID in each cluster and the membership probability of the second ID in each cluster are known, the cluster model information (for example, the average age of customers and the average price of products) can be calculated.
- the probability model held between each cluster of the first ID and each cluster of the second ID is updated based on the belonging probability of each ID. For example, the relationship between a certain customer ID cluster and a certain product ID cluster becomes stronger as there is a relationship (for example, purchase results) between the customer ID and the product ID belonging to those clusters.
- the prediction model is updated using the value of the attribute corresponding to the first ID belonging to the cluster. For example, the weight of the support vector machine is updated.
- the belonging probability to each cluster of the first ID (each cluster having the first ID as an element) and the belonging probability to each cluster of the second ID (each cluster having the second ID as an element) are updated.
- the affiliation probability is determined from fact data (for example, purchase record data illustrated in FIG. 3) and attributes corresponding to the first ID and the second ID (for example, the age of the customer and the price of the product).
- the prediction model for each cluster is also taken into consideration. For example, regarding a certain first ID, the higher the prediction accuracy by the prediction model, the higher the belonging probability of the first ID.
- The weight (prior probability) of each cluster of the first ID and the weight (prior probability) of each cluster of the second ID are updated. For example, when there are many records of young people in the first master data (see FIG. 1), the prior probability that the first ID belongs to the cluster of the younger generation is increased. (3-2) For each cluster having the first ID as an element and each cluster having the second ID as an element, the cluster model information is updated based on the current cluster assignment. Since the membership probability of the first ID in each cluster and the membership probability of the second ID in each cluster are known, the cluster model information (for example, the average age of customers and the average price of products) can be calculated.
- the probability model held between each cluster of the first ID and each cluster of the second ID is updated based on the belonging probability of each ID. For example, the relationship between a certain customer ID cluster and a certain product ID cluster becomes stronger as there is a relationship (for example, purchase results) between the customer ID and the product ID belonging to those clusters.
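The repeated update steps above can be summarized as a skeleton loop. All helper functions below are placeholders that only record their execution order; they stand in for the actual updates (prediction model learning, membership update, prior update, cluster model information update, and cluster relationship update) described in the text:

```python
# Placeholder state and helpers: each function represents one update
# step and here merely records that it ran, so the loop structure
# itself can be executed and inspected.
def initialize(first_master, second_master, fact):
    return {"iteration": 0, "steps": []}

def learn_prediction_models(state):      state["steps"].append("models")
def update_memberships(state):           state["steps"].append("memberships")
def update_cluster_weights(state):       state["steps"].append("weights")
def update_cluster_model_info(state):    state["steps"].append("model_info")
def update_cluster_relationships(state): state["steps"].append("relations")

def converged(state):
    state["iteration"] += 1
    return state["iteration"] >= 3   # stand-in for a real end condition

def co_cluster(first_master, second_master, fact, max_iter=100):
    state = initialize(first_master, second_master, fact)
    for _ in range(max_iter):
        learn_prediction_models(state)        # one model per first-ID cluster
        update_memberships(state)             # uses fact data and model accuracy
        update_cluster_weights(state)         # cluster prior probabilities
        update_cluster_model_info(state)      # e.g. mean/variance per cluster
        update_cluster_relationships(state)   # probability model between clusters
        if converged(state):
            break
    return state

state = co_cluster(None, None, None)
print(state["iteration"])  # → 3
```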
- FIG. 6 is a functional block diagram illustrating an example of the co-clustering system according to the first embodiment of this invention.
- the co-clustering system 1 includes a data input unit 2, a processing unit 3, a storage unit 4, and a result output unit 5.
- the processing unit 3 includes an initialization unit 31 and a clustering unit 32.
- the clustering unit 32 includes a prediction model learning unit 321, a cluster allocation unit 322, a cluster information calculation unit 323, a cluster relationship calculation unit 324, and an end determination unit 325.
- the data input unit 2 acquires a data group used for co-clustering and a set value for clustering.
- the data input unit 2 may access an external device to acquire a data group and a set value for clustering.
- the data input unit 2 may be an input interface to which a data group and a set value for clustering are input.
- the data group used for co-clustering includes first master data (for example, customer data illustrated in FIG. 1), second master data (for example, product data illustrated in FIG. 2), and fact data (for example, Purchase result data illustrated in FIG. 3).
- Among the attributes of the first master data, with respect to a specific attribute, the value is unknown in some records.
- the technology described in Non-Patent Document 2 does not allow an attribute whose value is not determined to exist in input data. That is, the technique described in Non-Patent Document 2 does not allow a missing attribute value. Therefore, the point that the value of a specific attribute is unknown in some records is different from the technique described in Non-Patent Document 2.
- The set values for clustering include, for example, the maximum number of clusters of the first ID, the maximum number of clusters of the second ID, the designation of the master data for which the prediction model is generated, the attributes used as explanatory variables in the prediction model, and the type of prediction model.
- the prediction model is used to predict the value of a specific attribute whose value is not fixed. Therefore, in this example, the first master data is designated as the master data for generating the prediction model.
- the specific attribute (for example, “the number of times the esthetic salon is used per year” shown in FIG. 1) is designated as the attribute that is the objective variable in the prediction model.
- the prediction model type includes, for example, support vector machine, support vector regression, logistic regression, and the like.
- One of various prediction models is designated as the type of prediction model.
- the initialization unit 31 receives the first master data, the second master data, the fact data, and the set values for clustering from the data input unit 2, and stores them in the storage unit 4.
- the initialization unit 31 initializes various parameters used for clustering.
- the clustering unit 32 realizes co-clustering of the first ID and the second ID by iterative processing. Hereinafter, each part with which the clustering part 32 is provided is demonstrated. It is assumed that first master data is designated as master data for generating a prediction model.
- the prediction model learning unit 321 learns a prediction model of an attribute corresponding to the objective variable for each cluster related to master data (first master data) for generating a prediction model (that is, for each cluster of the first ID).
- the prediction model learning unit 321 uses the value of the attribute corresponding to the first ID belonging to the cluster as teacher data when generating a prediction model corresponding to the cluster.
- FIG. 7 is an explanatory diagram of the teacher data used when the prediction model learning unit 321 generates a prediction model.
- The prediction model learning unit 321 generates a prediction model corresponding to cluster 1 using the attribute values corresponding to customers 1 and 2 as teacher data, and generates a prediction model corresponding to cluster 2 using the attribute values corresponding to customer 3 as teacher data.
- The prediction model learning unit 321 uses the attribute values of all records that do not include an unknown value as teacher data when generating a prediction model corresponding to a cluster. At this time, the prediction model learning unit 321 weights the attribute values of each record by the membership probability of each first ID in the cluster, and generates the prediction model using the weighted result. Therefore, teacher data corresponding to a first ID with a high membership probability in the cluster strongly influences the prediction model corresponding to that cluster, while teacher data corresponding to a first ID with a low membership probability has little influence on that prediction model.
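This weighting scheme can be sketched as a weighted least-squares fit, using each record's membership probability in the cluster as its sample weight. The data and weights below are invented for illustration, and a linear model stands in for whatever model type was designated:

```python
import numpy as np

# Attribute values (age -> number of uses) for records with no unknown values.
x = np.array([25.0, 30.0, 35.0, 40.0, 50.0, 60.0])
y = np.array([20.0, 18.0, 16.0, 2.0, 3.0, 1.0])

# Membership probability of each record's customer ID in the cluster
# whose model we are fitting: only the first three customers belong.
p = np.array([0.9, 0.8, 0.95, 0.0, 0.0, 0.0])

# np.polyfit's w multiplies the residuals before squaring, so passing
# sqrt(p) weights each squared residual by p; records with zero
# membership have no influence on this cluster's model.
a, b = np.polyfit(x, y, deg=1, w=np.sqrt(p))
print(round(float(a), 3), round(float(b), 3))  # → -0.4 30.0
```

Here the three in-cluster points lie on y = 30 - 0.4x, so the weighted fit recovers that cluster's relationship exactly, ignoring the out-of-cluster records.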
- the cluster allocation unit 322 performs cluster allocation for each first ID and each second ID. It can also be said that the cluster assignment unit 322 co-clusters the first ID and the second ID. As already described, the result of hard clustering can also be expressed by a binary affiliation probability. Further, in the process of deriving the result of hard clustering, a membership probability in the range of 0.0 to 1.0 may be used. Here, the operation of the cluster assigning unit 322 will be described using the affiliation probability without distinguishing between hard clustering and soft clustering.
- the cluster allocation unit 322 refers to two pieces of information when executing cluster allocation.
- the first information is fact data.
- The probability that a certain customer ID belongs to a certain customer ID cluster is determined by how many of the products specified by product IDs belonging to the product ID clusters closely related to that customer ID cluster the customer specified by the customer ID has purchased. The same applies to the probability that a certain product ID belongs to a certain product ID cluster.
- the cluster allocating unit 322 refers to the fact data when obtaining the affiliation probability of the first ID to each cluster and the affiliation probability of the second ID to each cluster. Details of this operation will be described later.
- the second information is the accuracy of the prediction model.
- a prediction model is generated for each customer ID cluster (first ID cluster).
- The cluster allocation unit 322 applies the record corresponding to a customer ID belonging to a customer ID cluster to the prediction model corresponding to that cluster, calculates the predicted value of the attribute serving as the objective variable, and calculates the difference from the correct value (the actual value shown in the record). This difference represents the accuracy of the prediction model.
- When this difference is large, the membership probability of the customer ID in that cluster is corrected downward.
- the cluster assigning unit 322 performs this correction for each customer ID cluster. By this operation, the clustering result is adjusted so that the accuracy of the prediction model is improved.
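A minimal sketch of this correction: each cluster's membership probability is multiplied by a factor that shrinks with that cluster's prediction error for the customer, then renormalized. The exponential scaling rule is an invented stand-in for the actual update, not the document's formula:

```python
import math

def correct_membership(membership, errors, scale=1.0):
    """Down-weight membership in clusters whose prediction model
    predicts this customer poorly. `errors` holds the absolute
    difference between predicted and actual objective-variable values
    under each cluster's model; the exp factor is illustrative."""
    adjusted = [p * math.exp(-scale * e) for p, e in zip(membership, errors)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# A customer initially split 50/50 between two clusters; cluster 0's model
# predicts the customer well (error 0.5), cluster 1's model poorly (error 3.0).
probs = correct_membership([0.5, 0.5], [0.5, 3.0])
print(probs[0] > probs[1])  # → True
```

This adjustment is what nudges the clustering result toward assignments under which the per-cluster prediction models are accurate.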
- the cluster information calculation unit 323 refers to the cluster assignment (affiliation probability) of each first ID and each second ID, calculates model information of each cluster of the first ID and each cluster of the second ID, and is stored in the storage unit 4 Update model information for each cluster.
- the cluster model information is information representing the statistical properties of the attribute values corresponding to the IDs belonging to the cluster. For example, in each customer ID cluster, when the annual income of each customer follows a normal distribution, the model information of each customer ID cluster is an average value and a variance value in the normal distribution.
- the cluster model information is used for determining cluster allocation and calculating the cluster relationship described later.
- the cluster relationship calculation unit 324 calculates a cluster relationship between each cluster of the first ID and each cluster of the second ID, and updates the cluster relationship stored in the storage unit 4.
- a cluster relationship is a value that represents the nature of a combination of clusters.
- The cluster relationship calculation unit 324 calculates a cluster relationship for each combination of a first ID cluster and a second ID cluster based on the fact data. Accordingly, the number of cluster relationships calculated equals the product of the number of clusters of the first ID and the number of clusters of the second ID.
- FIG. 8 is a schematic diagram illustrating an example of cluster relationships. In the example shown in FIG. 8, the cluster relationship between customer ID cluster 2 and product ID cluster 1 is 0.1, a value close to 0. This means that customers specified by customer IDs belonging to customer ID cluster 2 rarely purchase products specified by product IDs belonging to product ID cluster 1 (the relationship is weak).
- the cluster relationship calculation unit 324 may calculate the cluster relationship by calculating the following formula (A).
- k 1 represents the ID of the first ID cluster
- k 2 represents the ID of the second ID cluster
- a[1]_k1k2 and b[1]_k1k2 are parameters used for the calculation of the cluster relationship. The larger a[1]_k1k2 is, the stronger the relationship between k1 and k2; the larger b[1]_k1k2 is, the weaker the relationship between k1 and k2.
- the hat symbol shown in the mathematical formula is omitted.
- the cluster relationship calculation unit 324 may calculate a [1] k1k2 by the following equation (B). Further, the cluster relationship calculation unit 324 may calculate b [1] k1k2 by the following equation (C).
- d 1 represents the order of the first IDs
- D (1) represents the total number of the first IDs
- d 2 represents the order of the second IDs
- D (2) represents the total number of the second IDs.
- π d1,k1 (1) is the probability that the d 1 -th first ID belongs to the cluster k 1 .
- π d2,k2 (2) is the probability that the d 2 -th second ID belongs to the cluster k 2 .
- x d1d2 is a value in the fact data corresponding to the combination of d 1 and d 2 .
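- One plausible reading of equations (B) and (C) is that a [1] k1k2 and b [1] k1k2 accumulate membership-weighted counts of observed (x = 1) and absent (x = 0) relations, and the cluster relationship summarises them as a/(a+b). The sketch below illustrates this reading with assumed toy data; the exact priors and symbols of the specification are not reproduced.

```python
import numpy as np

def cluster_relationship(pi1, pi2, x):
    # pi1: (D1, K1) affiliation probabilities of the first IDs
    # pi2: (D2, K2) affiliation probabilities of the second IDs
    # x:   (D1, D2) binary fact data (1 = relation observed, 0 = not)
    a = pi1.T @ x @ pi2          # membership-weighted count of 1s (reading of eq. (B))
    b = pi1.T @ (1 - x) @ pi2    # membership-weighted count of 0s (reading of eq. (C))
    return a / (a + b)           # one relationship value per (k1, k2) combination

# Toy data: 3 customers in 2 clusters (hard memberships), 2 products in 1 cluster.
pi1 = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pi2 = np.array([[1.0], [1.0]])
x = np.array([[1, 1], [1, 0], [0, 0]])
print(cluster_relationship(pi1, pi2, x))  # cluster-1 customers buy often (0.75), cluster-2 customers never (0.0)
```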
- the customer ID (first ID) is represented by a variable i.
- the product ID (second ID) is represented by a variable j.
- x is a value in fact data (see FIG. 10) corresponding to a combination of subscripts i and j. Therefore, in the example shown in FIG. 10, x is 1 or 0.
- η k1k2 is the cluster relationship corresponding to the combination of subscripts k 1 and k 2 .
- E q is an operation for obtaining an expected value of probability
- E q [log p(x i1,j )] is the expected value, under the distribution q, of the log of the probability that the customer i 1 buys the product j.
- the cluster allocation unit 322 also obtains the probability that the customer ID of interest belongs to another customer ID cluster by the same calculation. In the case of hard clustering, the cluster allocating unit 322 may determine that the customer ID of interest belongs only to the customer ID cluster having the highest affiliation probability obtained as a result. The cluster assigning unit 322 also calculates the probability of belonging to each customer ID cluster for other customer IDs.
- the cluster assigning unit 322 also obtains the probability that each product ID belongs to each product ID cluster by the same calculation.
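- As a minimal illustration of the hard-clustering rule described above (keep only the cluster with the highest affiliation probability), assuming a toy membership matrix:

```python
import numpy as np

def hard_assign(membership):
    # membership: (num_ids, num_clusters) matrix of affiliation probabilities.
    # Hard clustering keeps, for each ID, only the highest-probability cluster.
    return np.argmax(membership, axis=1)

membership = np.array([
    [0.7, 0.2, 0.1],  # this ID belongs to cluster 0 with the highest probability
    [0.1, 0.3, 0.6],  # this ID belongs to cluster 2 with the highest probability
])
print(hard_assign(membership))  # [0 2]
```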
- the cluster allocation unit 322 may perform the affiliation probability correction using the prediction model.
- the clustering unit 32 repeats the processing by the prediction model learning unit 321, the processing by the cluster allocation unit 322, the processing by the cluster information calculation unit 323, and the processing by the cluster relationship calculation unit 324.
- the end determination unit 325 determines whether or not to end the above series of processing. When the end condition is satisfied, the end determination unit 325 determines to end the above-described series of processing, and when the end condition is not satisfied, the end determination unit 325 determines to continue the repetition.
- the number of repetitions of the above-described series of processing may be specified in the set values for clustering.
- the end determination unit 325 may determine to end the repetition when the number of repetitions of the series of processes reaches a predetermined number.
- the clustering accuracy may be derived and stored in the storage unit 4.
- the end determination unit 325 calculates the amount of change from the previously derived clustering accuracy to the most recently derived clustering accuracy, and may determine to end the repetition when the change is small (specifically, when the absolute value of the change is less than or equal to a predetermined threshold).
- in the case of soft clustering, the cluster allocation unit 322 may calculate, for example, the likelihood of the clustering model as the clustering accuracy. In the case of hard clustering, the cluster allocation unit 322 may calculate, for example, Pseudo F as the clustering accuracy.
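- The end condition described above (a predetermined repetition count, or a sufficiently small change in clustering accuracy between iterations) can be sketched as follows; the threshold and the iteration limit are assumed values:

```python
def should_stop(accuracy_history, max_iters=100, tol=1e-4):
    # accuracy_history: one clustering-accuracy value per completed iteration
    if len(accuracy_history) >= max_iters:
        return True  # repetition count reached the predetermined number
    if len(accuracy_history) >= 2:
        change = abs(accuracy_history[-1] - accuracy_history[-2])
        return change <= tol  # accuracy no longer changes appreciably
    return False

print(should_stop([10.0, 10.5]))            # False: accuracy is still improving
print(should_stop([10.0, 10.5, 10.50004]))  # True: accuracy has converged
```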
- the storage unit 4 is a storage device that stores various data acquired by the data input unit 2 and various data obtained by the processing of the processing unit 3.
- the storage unit 4 may be a main storage device of a computer or a secondary storage device. In the case where the storage unit 4 is a secondary storage device, the clustering unit 32 can suspend processing and resume processing thereafter.
- alternatively, the storage unit 4 may be divided into a main storage device and a secondary storage device, and the processing unit 3 may store part of the data in the main storage device and the rest in the secondary storage device.
- the result output unit 5 outputs the result of the processing by the clustering unit 32 stored in the storage unit 4. Specifically, the result output unit 5 outputs all or part of the prediction model, cluster assignment, cluster relationship, and cluster model information as the processing result.
- the cluster assignment consists of the probability with which each first ID belongs to each cluster and the probability with which each second ID belongs to each cluster.
- alternatively, the cluster allocation may be information directly indicating which cluster each first ID belongs to and which cluster each second ID belongs to.
- the manner in which the result output unit 5 outputs the result is not particularly limited.
- the result output unit 5 may output the result to another device.
- the result output unit 5 may display the result on the display device.
- the clustering unit 32 (including the prediction model learning unit 321, the cluster allocation unit 322, the cluster information calculation unit 323, the cluster relationship calculation unit 324, and the end determination unit 325), the data input unit 2, the initialization unit 31, and the result output unit 5 are realized by, for example, a CPU of a computer that operates according to a program (co-clustering program). In this case, the CPU may read the program from a program recording medium such as a program storage device of the computer (not shown in FIG. 6) and, according to the program, operate as the data input unit 2, the initialization unit 31, the clustering unit 32, and the result output unit 5.
- each element in the co-clustering system 1 shown in FIG. 6 may be realized by dedicated hardware.
- system 1 of the present invention may have a configuration in which two or more physically separated devices are connected by wire or wirelessly. This also applies to each embodiment described later.
- FIG. 11 is a flowchart illustrating an example of processing progress of the first embodiment.
- the data input unit 2 acquires a data group (first master data, second master data, and fact data) used for co-clustering and a set value for clustering (step S1).
- the initialization unit 31 causes the storage unit 4 to store the first master data, the second master data, the fact data, and the clustering setting value.
- the initialization unit 31 sets initial values for “cluster model information”, “cluster assignment”, and “cluster relation”, and stores the initial values in the storage unit 4 (step S2).
- the initial value in step S2 may be arbitrary.
- the initialization unit 31 may derive each initial value as shown below, for example.
- the initialization unit 31 may calculate an average value of attribute values in the first master data, and may determine the average value as model information of clusters in all clusters of the first ID. Similarly, the initialization unit 31 may calculate an average value of attribute values in the second master data, and may determine the average value as model information of clusters in all clusters of the second ID.
- the initialization unit 31 may determine the initial value of the cluster allocation as follows. In the case of hard clustering, the initialization unit 31 randomly assigns each first ID to one of the clusters, and similarly assigns each second ID to one of the clusters at random. In the case of soft clustering, the initialization unit 31 sets the affiliation probability of each first ID to each cluster uniformly. For example, when the number of clusters of the first ID is two, the affiliation probabilities of each first ID to the first cluster and to the second cluster are both set to 0.5. Similarly, the initialization unit 31 sets the affiliation probability of each second ID to each cluster uniformly.
- the initialization unit 31 may set the cluster relationship to the same value (for example, 0.5) for each combination of the first ID cluster and the second ID cluster.
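- The initialization of step S2 can be sketched as follows for the soft-clustering case, assuming attribute averages as the initial cluster model information, uniform affiliation probabilities, and a common initial cluster relationship of 0.5:

```python
def initialize(first_master_attrs, num_first_ids, k1, k2):
    # Cluster model information: attribute averages, identical for every cluster.
    cols = list(zip(*first_master_attrs))
    averages = [sum(c) / len(c) for c in cols]
    cluster_model = [list(averages) for _ in range(k1)]
    # Cluster assignment: uniform affiliation probabilities (soft clustering).
    assignment = [[1.0 / k1] * k1 for _ in range(num_first_ids)]
    # Cluster relationship: the same value (0.5) for every cluster combination.
    relation = [[0.5] * k2 for _ in range(k1)]
    return cluster_model, assignment, relation

# Two first IDs with attributes (age, annual income); two clusters on each side.
model, assign, rel = initialize([[30, 400], [50, 600]], num_first_ids=2, k1=2, k2=2)
print(model)   # [[40.0, 500.0], [40.0, 500.0]]
print(assign)  # [[0.5, 0.5], [0.5, 0.5]]
```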
- after step S2, the clustering unit 32 repeats the processing of steps S3 to S7 until the end condition is satisfied.
- steps S3 to S7 will be described.
- the prediction model learning unit 321 refers to the information stored in the storage unit 4 and, for each cluster of the first ID, learns a prediction model whose objective variable is the attribute whose value is unknown in some records of the first master data (step S3).
- the prediction model learning unit 321 stores the learned prediction models in the storage unit 4.
- the cluster allocation unit 322 updates the cluster allocation of each first ID and the cluster allocation of the second ID stored in the storage unit 4 (step S4).
- the cluster allocation unit 322 reads the cluster allocation, the fact data, and the cluster relationship stored in the storage unit 4 and, based on these, newly determines the cluster allocation of each first ID and each second ID.
- for each cluster of the first ID, the cluster allocation unit 322 calculates a predicted value of the attribute serving as the objective variable using the prediction model corresponding to the cluster, and calculates the difference between the predicted value and the correct value (the prediction model accuracy).
- the cluster allocation unit 322 corrects the affiliation probability of each first ID so that the smaller the difference, the higher the probability of belonging to the cluster of interest, and the larger the difference, the lower that probability.
- the cluster allocation unit 322 need not perform this process for clusters for which no prediction model has been generated (that is, the clusters of the second ID).
- the cluster allocation unit 322 stores the updated cluster allocation of each first ID and the cluster allocation of each second ID in the storage unit 4.
- the cluster information calculation unit 323 refers to the first master data and the cluster assignment of each first ID and, for each cluster of the first ID, recalculates the cluster model information using the attribute values of the first IDs belonging to that cluster. Similarly, the cluster information calculation unit 323 refers to the second master data and the cluster assignment of each second ID and, for each cluster of the second ID, recalculates the cluster model information using the attribute values of the second IDs belonging to that cluster. The cluster information calculation unit 323 updates the cluster model information stored in the storage unit 4 with the newly calculated cluster model information (step S5).
- the cluster relationship calculation unit 324 refers to the cluster assignment of each first ID, the cluster assignment of each second ID, and the fact data, and recalculates the cluster relationship for each combination of a first ID cluster and a second ID cluster.
- the cluster relationship calculation unit 324 updates the cluster relationship stored in the storage unit 4 with the newly calculated cluster relationship (step S6).
- the end determination unit 325 determines whether or not the end condition is satisfied (step S7). If the end condition is not satisfied (No in step S7), the end determination unit 325 determines to repeat steps S3 to S7. Then, the clustering unit 32 executes steps S3 to S7 again.
- when the end condition is satisfied (Yes in step S7), the end determination unit 325 determines to end the repetition of steps S3 to S7. In this case, the result output unit 5 outputs the result of the processing by the clustering unit 32 at that time, and the processing of the co-clustering system 1 ends.
- in the present embodiment, the cluster allocation unit 322 refers to the fact data and executes co-clustering of the first ID and the second ID.
- the prediction model learning unit 321 generates a prediction model for each cluster. As a result, a different prediction model is obtained for each cluster.
- the fact data represents the relationship between the first ID and the second ID. For example, the fact data represents a relationship such that “customer 1” has purchased “product 1” but “product 2” has never purchased it.
- the clustering result of the first ID in the present embodiment provides a more appropriate cluster as compared to the clustering result when the first ID is clustered based simply on the attribute value in the first master data.
- the cluster allocation unit 322 adjusts the affiliation probability of each ID belonging to a cluster according to the prediction accuracy of that cluster's prediction model. This also yields more appropriate clusters, so the prediction accuracy of the prediction model for each cluster can be further improved.
- the customer data illustrated in FIG. 1 has been described with an example in which the value of a specific attribute is unknown in some records.
- conversely, all attribute values in the customer data may be determined, while in the product data illustrated in FIG. 2 the value of a specific attribute is unknown in some records.
- the co-clustering system 1 may perform the same processing as in the first embodiment, with the product data as the first master data and the customer data as the second master data.
- further, in both the first master data and the second master data, the value of a specific attribute may be unknown in some records.
- the prediction model learning unit 321 may learn the prediction model for each cluster of the first ID and learn the prediction model for each cluster of the second ID.
- the cluster allocation unit 322 may use the accuracy of the prediction model corresponding to the cluster of the second ID when determining the affiliation probability to each cluster regarding the second ID.
- apart from the method of the first embodiment, the following method is conceivable: integrate the first master data, the second master data, and the fact data by adding the information indicated by the second master data and the fact data to each record of the first master data, and learn a prediction model based on the integrated data without performing clustering. However, the prediction accuracy of the prediction model obtained by this method is lower than that of the prediction model obtained in the first embodiment described above. This point will be specifically described.
- FIG. 12 is an explanatory diagram showing an example of the result of integrating the first master data, the second master data, and the fact data shown in FIGS. 1 to 3.
- in the columns corresponding to product names such as “carbonated water” and “shochu”, “1” or “0” is stored based on the fact data (see FIG. 3). “1” means that the customer has purchased the product, and “0” means that the customer has never purchased it.
- FIG. 12 illustrates the case where the price of the product is stored in the column next to the product name such as “carbonated water” and “shochu”.
- the integration result shown in FIG. 12 is expressed in a format in which each column other than the customer ID is an attribute of the customer ID. This means that some information indicated by the master data before integration is lost.
- the price of carbonated water is not originally an attribute of a customer ID, but is formally expressed as an attribute of a customer ID.
- consequently, the information indicated in the second master data (see FIG. 2) before integration, namely that the price of “carbonated water” is “150”, is lost.
- therefore, the prediction accuracy of the prediction model learned from the integrated data is lower than the prediction accuracy of the prediction model obtained in the first embodiment.
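- The naive integration criticised above can be sketched as follows; the customers, products, and prices are assumed toy values. Note how each product's price is duplicated into every customer row instead of being stored once as a property of the product:

```python
customers = {"customer1": {"age": 30}, "customer2": {"age": 50}}
products = {"carbonated water": {"price": 150}, "shochu": {"price": 1200}}
purchases = {("customer1", "carbonated water"): 1, ("customer1", "shochu"): 0,
             ("customer2", "carbonated water"): 0, ("customer2", "shochu"): 1}

integrated = {}
for cid, attrs in customers.items():
    row = dict(attrs)
    for pid, pattrs in products.items():
        row[pid] = purchases[(cid, pid)]       # fact-data column ("1"/"0")
        row[pid + " price"] = pattrs["price"]  # product price, duplicated per row
    integrated[cid] = row

# The product-level fact "carbonated water costs 150" now exists only as a
# formal attribute repeated in every customer record.
print(integrated["customer1"])
```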
- Embodiment 2. In the second embodiment, a prediction system that executes co-clustering, generates a prediction model for each cluster of the first ID, and further executes prediction based on the prediction models will be described.
- the first master data, the second master data, and the fact data are also input to the prediction system according to the second embodiment of the present invention.
- the first master data, the second master data, and the fact data in the second embodiment are respectively the same as the first master data, the second master data, and the fact data in the first embodiment.
- in the first master data, the value of a specific attribute is unknown in some records.
- the first ID (the ID of the record of the first master data) is the customer ID
- the first master data represents the correspondence between the customer and the attribute of the customer.
- the second ID (the ID of the record of the second master data) is the product ID
- the second master data represents the correspondence between the product and the attribute of the product.
- the customer ID represents a customer
- the customer ID may be simply referred to as a customer
- the product ID may be simply referred to as a product.
- the second embodiment will be described with reference to the first master data illustrated in FIG. 13 and the second master data illustrated in FIG. 14.
- attributes other than the attributes shown in FIG. 13 may be indicated.
- attributes other than the attributes shown in FIG. 14 may be indicated.
- the fact data is data indicating the relationship between the first ID (customer ID) and the second ID (product ID).
- the fact data indicates a relationship as to whether or not a customer has a record of purchasing a product.
- “1” indicates that the customer has a record of purchasing the product and “0” indicates that there is no record.
- FIG. 16 is a functional block diagram showing an example of the prediction system of the second embodiment of the present invention.
- a prediction system 500 according to the second embodiment of the present invention includes a co-clustering unit 501, a prediction model generation unit 502, and a prediction unit 503.
- the first master data, the second master data, and the fact data are input to the prediction system 500.
- the co-clustering unit 501 co-clusters the first ID (customer ID) and the second ID (product ID) based on the first master data, the second master data, and the fact data. It can also be said that the co-clustering unit 501 co-clusters customers and products based on the first master data, the second master data, and the fact data.
- the method in which the co-clustering unit 501 co-clusters the customer ID and the product ID based on the first master data, the second master data, and the fact data may be a known co-clustering method. Further, the co-clustering unit 501 may execute soft clustering or hard clustering as co-clustering.
- in the first embodiment, the generation of the prediction model and the co-clustering process (more specifically, the processing of steps S3 to S7) are repeated until it is determined that a predetermined condition is satisfied.
- the prediction model generation unit 502 described later generates a prediction model after the co-clustering of the customer ID and the product ID by the co-clustering unit 501 is completed.
- when the co-clustering by the co-clustering unit 501 is completed, the prediction model generation unit 502 generates a prediction model for each cluster of customer IDs.
- the prediction model generation unit 502 generates a prediction model having an attribute in the first master data whose value is unknown in some records as an objective variable. For example, the prediction model generation unit 502 generates a prediction model having “an annual number of times of using an esthetic salon” illustrated in FIG. 13 as an objective variable.
- the prediction model generation unit 502 generates a prediction model having some or all of the attributes in the first master data having no unknown value as explanatory variables. For example, the prediction model generation unit 502 generates a prediction model having “age” and “annual income” shown in FIG. 13 as explanatory variables. For example, the prediction model generation unit 502 may generate a prediction model having “age” alone (or “annual income” only) as an explanatory variable.
- the prediction model generation unit 502 may use as explanatory variables not only the attributes in the first master data but also aggregate values calculated from attribute values in the second master data. In that case, the prediction model generation unit 502 uses as an explanatory variable the statistic of attribute values in the records of the second master data that are determined, based on the fact data, to be related to the customer ID.
- examples of “the statistic of attribute values in the records of the second master data determined by the fact data to be related to the customer ID” include “the maximum price among the products purchased by the customer” and “the average price of the products purchased by the customer”, but the statistic is not limited thereto.
- “a product purchased by the customer” corresponds to a record in the second master data determined to be related to the customer ID by the fact data.
- the prediction model generation unit 502 may use price statistics (for example, maximum value, average value, etc.) in such records as explanatory variables.
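- The aggregate explanatory variables described above (for example, the maximum and the average price among the products a customer purchased) can be sketched as follows. The price of "carbonated_P" (130) follows the example in the text; the other prices and the purchase lists are assumptions:

```python
# Price of "carbonated_P" (130) follows the text; the other values are assumed.
prices = {"confectionery1": 100, "carbonated_P": 130, "carbonated_Q": 120}
purchased = {"customer1": ["carbonated_P"],
             "customer2": ["confectionery1", "carbonated_P"]}

def price_stats(customer_id):
    # Restrict the second master data to the records related to this customer
    # by the fact data, then summarise the "price" attribute.
    ps = [prices[p] for p in purchased[customer_id]]
    return max(ps), sum(ps) / len(ps)

print(price_stats("customer1"))  # (130, 130.0)
print(price_stats("customer2"))  # (130, 115.0)
```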
- the prediction model generation unit 502 focuses on customer IDs for which both the values of the explanatory variables and the value of the objective variable can be specified, specifies those values, and may generate a prediction model by learning with these values as teacher data. The prediction model generation unit 502 may perform this process for each cluster.
- for “customer 1” and “customer 2”, the explanatory variables and the objective variable can be specified. For example, the values of “age” and “annual income” and “the number of times of using the esthetic salon per year” of “customer 1” and “customer 2” can be specified from the first master data. Further, based on the fact data (see FIG. 15), the prediction model generation unit 502 determines that the only product purchased by “customer 1” is “carbonated beverage P”, and can specify “130” as the attribute statistic from the record of “carbonated beverage P” in the second master data. That is, the prediction model generation unit 502 can specify the maximum value among the prices of the products purchased by customer 1 by referring to the fact data.
- similarly, the prediction model generation unit 502 determines that the products purchased by “customer 2” are “confectionery 1” and “carbonated beverage P”, and can specify “130” as the attribute statistic from the records of “confectionery 1” and “carbonated beverage P” in the second master data. That is, the prediction model generation unit 502 can specify the maximum value among the prices of the products purchased by customer 2 by referring to the fact data. Therefore, the data related to “customer 1” and “customer 2” can be used as teacher data.
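- Learning one prediction model per customer-ID cluster from such teacher data can be sketched as an ordinary least-squares fit. The cluster partition and the toy (age, objective) pairs below are assumptions; the specification does not prescribe a particular learning algorithm:

```python
import numpy as np

def fit_cluster_models(teacher_data):
    # teacher_data: {cluster_id: (X, y)} with X the explanatory variables and
    # y the objective variable of the customer IDs assigned to that cluster.
    models = {}
    for k, (X, y) in teacher_data.items():
        X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # add an intercept column
        coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # ordinary least squares
        models[k] = coef
    return models

teacher_data = {
    0: (np.array([[20.0], [30.0], [40.0]]), np.array([2.0, 3.0, 4.0])),
    1: (np.array([[20.0], [30.0], [40.0]]), np.array([8.0, 6.0, 4.0])),
}
models = fit_cluster_models(teacher_data)

def predict(cluster_id, x):
    return float(np.dot(np.append(x, 1.0), models[cluster_id]))

print(round(predict(0, [25.0]), 3))  # 2.5: cluster-0 customers trend upward
print(round(predict(1, [25.0]), 3))  # 7.0: cluster-1 customers trend downward
```

The point of fitting per cluster is that the two clusters here get opposite trends, which a single pooled model could not represent.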
- the teacher data values may be weighted according to the affiliation probability with which the customer ID belongs to each cluster.
- the prediction unit 503 receives designation of a customer ID and a target variable (in the embodiment, an attribute called “the number of times of using an esthetic salon per year”) from a user of the prediction system 500, for example. Then, the prediction unit 503 predicts the value of the objective variable corresponding to the designated customer ID using the prediction model generated by the prediction model generation unit 502.
- the prediction unit 503 identifies the cluster to which the specified customer ID belongs, and predicts the value of the objective variable corresponding to the customer ID using the prediction model corresponding to that cluster.
- specifically, the prediction unit 503 may specify the values of the explanatory variables for the specified customer ID, and may calculate the predicted value by applying those values to the prediction model corresponding to the cluster to which the specified customer ID belongs.
- the explanatory variables are “age” and “maximum value of the price of the product purchased by the customer”.
- “customer 4” shown in FIG. 13 is designated.
- the prediction unit 503 specifies the age “50” of “customer 4” from the first master data. Further, the prediction unit 503 determines that the products purchased by the “customer 4” are “confectionery 1”, “carbonated beverage P”, and “carbonated beverage Q” based on the fact data (see FIG. 15).
- the prediction unit 503 may then apply the explanatory variable values “50” (the age) and “130” (the maximum price among those products) to the prediction model corresponding to the cluster to which “customer 4” belongs.
- the prediction unit 503 predicts the value of the objective variable corresponding to the designated customer ID for each prediction model corresponding to each cluster of customer IDs.
- the operation of predicting the value of the objective variable by focusing on one prediction model is the same as the above operation, and the description thereof is omitted.
- after obtaining a predicted value for each prediction model corresponding to each cluster, the prediction unit 503 weights each predicted value by the affiliation probability with which the specified customer ID belongs to the corresponding cluster, adds them, and determines the result as the value of the objective variable.
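- The soft-clustering prediction described above (weight each cluster's predicted value by the affiliation probability, then add) reduces to a weighted sum; the probabilities and predictions below are assumed toy values:

```python
def soft_predict(per_cluster_predictions, affiliation_probs):
    # Weight each cluster-specific prediction by the customer's affiliation
    # probability for that cluster, then add the weighted values.
    assert abs(sum(affiliation_probs) - 1.0) < 1e-9
    return sum(p * w for p, w in zip(per_cluster_predictions, affiliation_probs))

# A customer belonging to cluster 0 with probability 0.8 and cluster 1 with 0.2.
print(soft_predict([10.0, 20.0], [0.8, 0.2]))  # 12.0
```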
- the co-clustering unit 501, the prediction model generation unit 502, and the prediction unit 503 are realized by a CPU of a computer that operates according to a program (prediction program), for example.
- in this case, for example, the CPU reads the program from a program recording medium such as a program storage device of the computer (not shown in FIG. 16) and may operate as the co-clustering unit 501, the prediction model generation unit 502, and the prediction unit 503 according to the program.
- the co-clustering unit 501, the prediction model generation unit 502, and the prediction unit 503 may be realized by dedicated hardware, respectively.
- FIG. 17 is a flowchart illustrating an example of processing progress of the second embodiment.
- the co-clustering unit 501 co-clusters the customer ID and the product ID based on the first master data, the second master data, and the fact data (step S101).
- the co-clustering method in step S101 may be a known co-clustering method.
- the co-clustering unit 501 outputs each cluster obtained as a result of the co-clustering to the prediction model generation unit 502.
- when the co-clustering of the customer ID and the product ID is completed, the prediction model generation unit 502 generates a prediction model for each cluster of customer IDs output by the co-clustering unit 501 (step S102). Since the details of the operation of the prediction model generation unit 502 have already been described, the description is omitted here.
- after step S102, when the prediction unit 503 receives the designation of a customer ID and an objective variable, the prediction unit 503 predicts the value of the objective variable corresponding to the designated customer ID using the prediction models generated in step S102 (step S103). Since the details of the operation of the prediction unit 503 have already been described, the description is omitted here.
- in the present embodiment, the co-clustering unit 501 co-clusters the customer ID (first ID) and the product ID (second ID) based on the first master data, the second master data, and the fact data. Therefore, the clustering accuracy for both the customer ID and the product ID is improved compared with clustering the customer ID based only on the first master data or clustering the product ID based only on the second master data.
- for each cluster of customer IDs clustered with such good accuracy, the prediction model generation unit 502 generates a prediction model. Accordingly, the accuracy of the prediction model is improved, and the accuracy of the predicted value of the objective variable obtained from the prediction model also increases. That is, according to the prediction system of the second embodiment, prediction can be performed with high accuracy.
- the prediction model generation unit 502 preferably uses as explanatory variables not only the attributes of the first master data but also the statistic of attribute values in the records of the second master data determined by the fact data to be related to the customer ID. By using such a statistic as an explanatory variable, the accuracy of the prediction model can be further improved, and as a result, the accuracy of the predicted value obtained from the prediction model is also further improved.
- Embodiment 3. In the second embodiment, unlike the first embodiment, a system that generates a prediction model after co-clustering is completed, without repeating the generation of a prediction model and the co-clustering process, was described.
- the co-clustering system according to the third embodiment of the present invention co-clusters the first ID and the second ID by repeating the processing of steps S3 to S7, and generates a prediction model corresponding to each cluster. Furthermore, the co-clustering system of the third embodiment of the present invention predicts the value of the objective variable when test data is input.
- FIG. 18 is a functional block diagram illustrating an example of the co-clustering system according to the third embodiment of this invention.
- the same elements as those in the first embodiment are denoted by the same reference numerals as those in FIG.
- the co-clustering system 1 of the third embodiment further includes a test data input unit 6, a prediction unit 7, and a prediction result output unit 8.
- the following description assumes that the processing unit 3 has completed the processing described in the first embodiment, so that the first IDs and the second IDs have been classified into clusters and a prediction model has been generated for each cluster of the first ID.
- the test data input unit 6 acquires test data.
- the test data input unit 6 may obtain test data by accessing an external device, for example.
- the test data input unit 6 may be an input interface through which test data is input.
- the test data includes a record of a new first ID in which the objective variable (for example, “the number of times of use of the esthetic salon per year” in the first master data shown in FIG. 1) is unknown, and data indicating the relationship between the new first ID and the second IDs in the second master data.
- the new first ID record is, for example, a record of a member who has just registered as a member of a certain service.
- in this record, it is assumed that the values of attributes other than the attribute corresponding to the objective variable (for example, “age” and “annual income”) are defined.
- as an example of the data indicating the relationship between the new first ID and the second IDs in the second master data, the product purchase history data of the customer specified by the new first ID can be cited. It can also be said that this data is fact data relating to the new first ID.
- the prediction unit 7 specifies the cluster to which the new first ID included in the test data belongs. At this time, the prediction unit 7 may specify the cluster based on the attribute values included in the record of the new first ID. For example, the prediction unit 7 may compare the attribute values included in the record of the new first ID (for example, the values of “age” and “annual income”) with the attribute values in the records of the first IDs belonging to each cluster, and may specify the cluster whose first IDs have attribute values closest to those of the new record. The prediction unit 7 may regard that cluster as the cluster to which the new first ID belongs.
- alternatively, based on the data indicating the relationship between the new first ID and the second ID in the second master data (for example, product purchase history data), the prediction unit 7 may identify the product purchase tendency of the customer specified by the new first ID and then identify a cluster of first IDs that shares that purchase tendency. The prediction unit 7 may regard that cluster as the cluster to which the new first ID belongs.
- after identifying the cluster to which the new first ID belongs, the prediction unit 7 applies the attribute values included in the new first ID record to the prediction model corresponding to that cluster, thereby predicting the value of the objective variable corresponding to the new first ID.
- alternatively, the prediction unit 7 may obtain, for each cluster of the first ID, the affiliation probability that the new first ID belongs to that cluster. For example, the prediction unit 7 may compare the attribute values included in the new first ID record (for example, the values of “age” and “annual income”) with the attribute values in each first ID record belonging to each cluster, and obtain the affiliation probability of the new first ID in each cluster according to how close the attribute values of the first IDs belonging to the cluster are to the attribute values in the new first ID record.
- alternatively, based on the data indicating the relationship between the new first ID and the second ID in the second master data (for example, product purchase history data), the prediction unit 7 may identify the product purchase tendency of the customer specified by the new first ID, and obtain the affiliation probability of the new first ID in each cluster according to how close that purchase tendency is to the purchase tendency of each cluster of the first ID.
- in this case, the prediction unit 7 applies the attribute values included in the new first ID record to each prediction model corresponding to each cluster of the first ID and predicts a value of the objective variable for each model. After obtaining a predicted value from each prediction model, the prediction unit 7 may weight each predicted value by the affiliation probability of the new first ID in the corresponding cluster, add the weighted values, and determine the result as the value of the objective variable.
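The soft-assignment variant just described (predict with every cluster's model, then weight by affiliation probability) can be sketched as follows; the two linear models and the probability values are hypothetical placeholders for the learned per-cluster models.

```python
import numpy as np

def soft_predict(x, models, membership_probs):
    """Predict with each cluster's model, weight each prediction by the
    record's affiliation probability for that cluster, and sum."""
    preds = np.array([m(x) for m in models])
    return float(np.dot(preds, membership_probs))

# Hypothetical per-cluster prediction models (objective variable vs. age).
models = [lambda age: 0.5 * age - 5.0,    # model of cluster 0
          lambda age: -0.2 * age + 30.0]  # model of cluster 1
probs = np.array([0.8, 0.2])              # affiliation probabilities (sum to 1)
print(round(soft_predict(40.0, models, probs), 2))  # → 16.4
```

With full confidence in one cluster (probability 1.0), this reduces to the hard-assignment prediction described earlier.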
- the prediction result output unit 8 outputs the value of the objective variable predicted by the prediction unit 7.
- the manner in which the prediction result output unit 8 outputs the predicted value of the objective variable is not particularly limited.
- the prediction result output unit 8 may output the predicted value of the objective variable to another device.
- the prediction result output unit 8 may display the predicted value of the objective variable on the display device.
- the test data input unit 6, the prediction unit 7, and the prediction result output unit 8 are also realized, for example, by the CPU of a computer that operates according to a program (the co-clustering program).
- an unknown value in given test data can be predicted.
- the master data may be referred to as a data set.
- the first master data may be referred to as “data set 1”
- the second master data may be referred to as “data set 2”.
- fact data may be referred to as related data.
- in the following specific example, the first master data (data set 1) is assumed to be master data related to customers, and the second master data (data set 2) is assumed to be master data related to products. It is also assumed that the first master data contains an attribute whose value is unknown in some records.
- ψ is the digamma function.
- ⁇ is a parameter that can be set by the system administrator and takes a value in the range of 0 to 1. The closer the value of ⁇ is to 0, the stronger the learning effect in co-clustering; that is, the affiliation probability of an ID in a cluster is more readily determined so as to improve the accuracy of the prediction model.
- the following part of Equation (1) represents the score obtained when the attribute value of customer d in data set 1 is predicted by the prediction model of cluster k1.
- the parameter update formula for data set 1 is expressed by the following formulas (5) and (6).
- the parameter update formula for data set 2 is expressed by the following formulas (7) and (8).
- the parameter update formula is expressed by the following formulas (11) and (12).
- the parameter update formula is expressed by the following formula (14).
- ⁇ k1 (1) is represented by Expression (16) shown below.
- FIG. 19 and FIG. 20 are flowcharts showing an example of processing progress in the specific example of the first embodiment.
- the data input unit 2 acquires data (step S300).
- the initialization unit 31 initializes the cluster (step S302).
- the prediction model learning unit 321 obtains the parameter ⁇ by solving Expression (15) for each cluster of data set 1 (step S304).
- the prediction model learning unit 321 updates the SVM model q(⁇k1(1)) according to Expression (14) in each cluster of data set 1 (step S306).
- the cluster information calculation unit 323 updates the model q(vk1(1)) of each cluster of data set 1 according to Equation (6) (step S316).
- the cluster information calculation unit 323 updates the model q(vk2(2)) of each cluster of data set 2 according to Equation (8) (step S318).
- the cluster relationship calculation unit 324 updates the cluster relevance q(⁇k1k2[1]) according to Equation (12) for each combination of clusters in data sets 1 and 2 (step S320).
- in step S322, it is determined whether or not the end condition is satisfied.
- if the end condition is not satisfied, the clustering unit 32 repeats the processes from step S304 onward.
- if the end condition is satisfied, the result output unit 5 outputs the processing result obtained by the clustering unit 32 at that time, and the processing ends.
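The loop of steps S304 to S322 alternates between fitting per-cluster prediction models and updating cluster assignments. A heavily simplified sketch of that alternation might look as follows; note the simplifying assumptions: ordinary least-squares lines replace the SVM models and variational updates of Equations (5) to (15), and hard assignment by smallest prediction error replaces the affiliation probabilities.

```python
import numpy as np

def cluster_wise_regression(X, y, n_clusters=2, n_iters=10, seed=0):
    """Alternate between (a) fitting one linear model per cluster and
    (b) reassigning each record to the cluster whose model predicts its
    objective variable with the smallest error."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, n_clusters, size=len(X))   # initialize clusters
    coefs = np.zeros((n_clusters, 2))                   # (slope, intercept)
    for _ in range(n_iters):
        for k in range(n_clusters):                     # fit per-cluster model
            mask = assign == k
            if mask.sum() >= 2:
                coefs[k] = np.polyfit(X[mask], y[mask], 1)
        preds = np.array([np.polyval(c, X) for c in coefs])  # shape (K, N)
        assign = np.argmin(np.abs(preds - y), axis=0)   # smaller error wins
    return coefs, assign

X = np.linspace(0.0, 9.0, 10)
y = 2.0 * X                      # toy data lying on a single line
coefs, assign = cluster_wise_regression(X, y)
print(coefs.shape, assign.shape)  # → (2, 2) (10,)
```

The patent's scheme is soft rather than hard: membership probabilities, not argmin assignments, couple the clustering and the prediction models.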
- FIG. 21 is a schematic block diagram showing a configuration example of a computer according to each embodiment of the present invention.
- the computer 1000 includes a CPU 1001, a main storage device 1002, an auxiliary storage device 1003, and an interface 1004.
- the system of each embodiment (co-clustering system in the first and third embodiments, prediction system in the second embodiment) is implemented in the computer 1000.
- the operation of the system of each embodiment is stored in the auxiliary storage device 1003 in the form of a program.
- the CPU 1001 reads out the program from the auxiliary storage device 1003, develops it in the main storage device 1002, and executes the above processing according to the program.
- the auxiliary storage device 1003 is an example of a non-transitory tangible medium. Other examples of non-transitory tangible media include magnetic disks, magneto-optical disks, CD-ROMs, DVD-ROMs, and semiconductor memories connected via the interface 1004.
- when this program is distributed to the computer 1000 via a communication line, the computer 1000 receiving the distribution may load the program into the main storage device 1002 and execute the above processing.
- the program may be for realizing a part of the above-described processing.
- the program may be a differential program that realizes the above-described processing in combination with another program already stored in the auxiliary storage device 1003.
- part or all of each component of each device may be realized by general-purpose or dedicated circuitry, processors, or the like, or a combination thereof. These may be configured as a single chip or as a plurality of chips connected via a bus. Part or all of each component of each device may also be realized by a combination of the above-described circuitry and a program.
- when part or all of each component of each device is realized by a plurality of information processing devices, circuits, or the like, those information processing devices, circuits, or the like may be arranged centrally or in a distributed manner.
- for example, the information processing devices, circuits, and the like may be realized in a form in which they are connected via a communication network, such as a client-server system or a cloud computing system.
- FIG. 22 is a block diagram showing an outline of the prediction system of the present invention.
- the prediction system of the present invention includes co-clustering means 81, prediction model generation means 82, and prediction means 83.
- the co-clustering means 81 (for example, the co-clustering unit 501) co-clusters the first ID and the second ID based on the first master data, the second master data, and fact data indicating the relationship between the first ID, which is the ID of a record in the first master data, and the second ID, which is the ID of a record in the second master data.
- the prediction model generation means 82 (for example, the prediction model generation unit 502) generates a prediction model for each cluster of the first ID output by the co-clustering means 81.
- the prediction means 83 (for example, the prediction unit 503) predicts the value of the objective variable corresponding to the first ID based on the prediction model and the affiliation probability that the first ID belongs to each cluster.
- (Supplementary note 1) A prediction system comprising: co-clustering means for co-clustering a first ID and a second ID based on first master data, second master data, and fact data indicating a relationship between the first ID, which is an ID of a record in the first master data, and the second ID, which is an ID of a record in the second master data; prediction model generation means for generating a prediction model for each cluster of the first ID output by the co-clustering means; and prediction means for, when the first ID and an objective variable that is one of the attributes included in the first master data are designated, predicting a value of the objective variable corresponding to the first ID based on the prediction model and an affiliation probability that the first ID belongs to each cluster.
- (Supplementary note 2) The prediction system according to Supplementary note 1, wherein the prediction model generation means generates, for each cluster of the first ID, a prediction model whose explanatory variables are the attributes in the first master data and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the first ID.
- (Supplementary note 3) The prediction system according to Supplementary note 1 or 2, wherein the prediction means identifies the cluster to which the designated first ID belongs and predicts the value of the objective variable corresponding to the first ID using the prediction model corresponding to that cluster.
- (Supplementary note 4) The prediction system according to Supplementary note 1 or 2, wherein the prediction means predicts, for each prediction model corresponding to each cluster of the first ID, the value of the objective variable corresponding to the designated first ID, weights each predicted value by the affiliation probability that the designated first ID belongs to the corresponding cluster, adds the weighted values, and determines the result as the value of the objective variable.
- (Supplementary note 5) A prediction system comprising: co-clustering means for co-clustering a customer and a product based on first master data including customers and attributes of the customers, second master data including products and attributes of the products, and fact data indicating a relationship between the customer and the product; prediction model generation means for generating a prediction model for each cluster of the customers output by the co-clustering means; and prediction means for, when a customer and an objective variable that is one of the attributes of the customer are designated, predicting a value of the objective variable corresponding to the designated customer based on the prediction model and an affiliation probability that the designated customer belongs to each cluster.
- (Supplementary note 6) The prediction system according to Supplementary note 5, wherein the prediction model generation means generates, for each cluster of the customers, a prediction model whose explanatory variables are the attributes of the customer and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the customer.
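The explanatory variables described above combine a record's own attributes with statistics over related records in the second master data. A minimal sketch of that feature construction follows; the IDs, attribute names, and the choice of statistics (mean price and purchase count) are hypothetical, chosen only for illustration.

```python
import numpy as np

customers = {1: {"age": 28, "income": 320}}            # first master data
products = {10: {"price": 1200}, 11: {"price": 800}}   # second master data
facts = [(1, 10), (1, 11)]                             # (customer ID, product ID)

def explanatory_variables(cid):
    """Customer attributes plus statistics of the prices of products
    related to the customer according to the fact data."""
    attrs = customers[cid]
    prices = [products[pid]["price"] for c, pid in facts if c == cid]
    return [attrs["age"], attrs["income"],
            float(np.mean(prices)) if prices else 0.0,  # mean related price
            float(len(prices))]                         # number of purchases

print(explanatory_variables(1))  # → [28, 320, 1000.0, 2.0]
```

A per-cluster prediction model would then be trained on such vectors for the customers belonging to that cluster.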
- the present invention is preferably applied to a prediction system that predicts an unknown value of an attribute.
- Prediction system: 501 co-clustering unit, 502 prediction model generation unit, 503 prediction unit
Description
For example, consider the prediction problem of predicting, from age, the number of times a member of a certain service uses an esthetic salon per year. This prediction problem is the problem of finding a function that takes age as input and outputs the number of uses. Here, assume the entire data set covers six people. FIG. 23 illustrates the result of plotting the ages and numbers of uses of those six people on a graph. In the graph shown in FIG. 23, the x-axis indicates age and the y-axis indicates the number of uses. When a prediction model (the above function) is generated from the entire data of the six people by linear regression, the function can be depicted as the straight line shown in FIG. 23. The value of y obtained by substituting an age x into this function is the predicted number of uses. As can be seen from FIG. 23, the difference between this predicted value and the actual number of uses is large, and the prediction accuracy is low.
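The single-regression setting above can be reproduced numerically. The six (age, visits) pairs below are hypothetical stand-ins for the data of FIG. 23; they contain two opposite trends, so one regression line fits both poorly.

```python
import numpy as np

# Hypothetical six members: ages and yearly esthetic-salon visit counts.
age    = np.array([20.0, 25.0, 30.0, 50.0, 55.0, 60.0])
visits = np.array([12.0, 10.0,  8.0,  2.0,  6.0, 10.0])

slope, intercept = np.polyfit(age, visits, 1)  # one linear model for everyone
pred = slope * age + intercept
rmse = float(np.sqrt(np.mean((pred - visits) ** 2)))
print(round(rmse, 2))  # large residual: a single line cannot fit both trends
```

Splitting the members into groups with similar trends and fitting one model per group is exactly the motivation for the cluster-wise prediction models of this invention.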
Using the IRM described in Non-Patent Literature 2, the inventor of the present invention examined a process of co-clustering the first IDs and the second IDs when the first master data, the second master data, and the fact data are given. The flow of that process is described below, followed by the process of co-clustering the first IDs and the second IDs when the first master data, the second master data, and the fact data are given in the first embodiment of the present invention.
The co-clustering process using the IRM described in Non-Patent Literature 2 repeats the following steps.
(2-1) Update the weight (prior probability) of each cluster of the first ID and the weight (prior probability) of each cluster of the second ID. For example, when the first master data (see FIG. 1) contains many records of young people, the prior probability that a first ID belongs to the young-age cluster is increased.
(2-2) For each cluster whose elements are first IDs and each cluster whose elements are second IDs, update the cluster's model information based on the current cluster assignment. The model information of a cluster is information representing the statistical properties of the attribute values corresponding to the IDs belonging to that cluster; it can be said to express the properties of the cluster's representative elements. For example, the model information of a cluster can be represented by the mean and variance of the attribute values corresponding to the IDs belonging to the cluster. Since the affiliation probability of each first ID in each cluster and the affiliation probability of each second ID in each cluster are known, the model information of each cluster (for example, the average age of customers or the average price of products) can be calculated.
In the co-clustering process of the first embodiment of the present invention, a prediction model is held for each cluster of the IDs of the records (that is, the first IDs) in the master data in which the value of a specific attribute is unknown in some records (here, the first master data). In this embodiment, first IDs with similar attribute values are assigned to the same cluster, and a different prediction model is generated for each cluster, thereby improving the prediction accuracy for unknown values of the specific attribute. In addition, in determining the cluster assignment, the affiliation probability that a first ID belongs to each cluster is made higher as the prediction error of the prediction model corresponding to that cluster becomes smaller, thereby improving the clustering accuracy.
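The key idea in this paragraph — higher affiliation probability for clusters whose prediction model has a smaller prediction error — can be sketched as a softmax over negative squared errors. The concrete weighting scheme, the parameter name rho (standing in for the 0-to-1 strength parameter mentioned elsewhere in the text), and the two linear models are assumptions for illustration, not the patent's exact update rule.

```python
import numpy as np

def membership_probs(x, y, models, rho=0.5):
    """Affiliation probabilities that rise as a cluster model's squared
    prediction error falls; rho close to 0 strengthens the effect,
    rho = 1 ignores the prediction error entirely."""
    errors = np.array([(m(x) - y) ** 2 for m in models])
    logits = -(1.0 - rho) * errors
    w = np.exp(logits - logits.max())   # numerically stable softmax
    return w / w.sum()

models = [lambda x: 2.0 * x,            # hypothetical model of cluster 0
          lambda x: -1.0 * x + 20.0]    # hypothetical model of cluster 1
p = membership_probs(3.0, 6.0, models)  # model 0 predicts y = 6 exactly
print(p[0] > p[1])  # → True: cluster 0 gets the higher probability
```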
(3-1) Update the weight (prior probability) of each cluster of the first ID and the weight (prior probability) of each cluster of the second ID. For example, when the first master data (see FIG. 1) contains many records of young people, the prior probability that a first ID belongs to the young-age cluster is increased.
(3-2) For each cluster whose elements are first IDs and each cluster whose elements are second IDs, update the cluster's model information based on the current cluster assignment. Since the affiliation probability of each first ID in each cluster and the affiliation probability of each second ID in each cluster are known, the model information of each cluster (for example, the average age of customers or the average price of products) can be calculated.
In the second embodiment of the present invention, a prediction system is described that executes co-clustering, generates a prediction model for each cluster of the first ID, and further executes prediction using the prediction models.
Unlike the first embodiment, the second embodiment described a system that generates the prediction models after co-clustering is completed, without iterating between prediction model generation and the co-clustering process.
Claims (14)
1. A prediction system comprising: co-clustering means for co-clustering a first ID and a second ID based on first master data, second master data, and fact data indicating a relationship between the first ID, which is an ID of a record in the first master data, and the second ID, which is an ID of a record in the second master data; prediction model generation means for generating a prediction model for each cluster of the first ID output by the co-clustering means; and prediction means for, when the first ID and an objective variable that is one of the attributes included in the first master data are designated, predicting a value of the objective variable corresponding to the first ID based on the prediction model and an affiliation probability that the first ID belongs to each cluster.
2. The prediction system according to claim 1, wherein the prediction model generation means generates, for each cluster of the first ID, a prediction model whose explanatory variables are the attributes in the first master data and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the first ID.
3. The prediction system according to claim 1 or 2, wherein the prediction means identifies the cluster to which the designated first ID belongs and predicts the value of the objective variable corresponding to the first ID using the prediction model corresponding to that cluster.
4. The prediction system according to claim 1 or 2, wherein the prediction means predicts, for each prediction model corresponding to each cluster of the first ID, the value of the objective variable corresponding to the designated first ID, weights each predicted value by the affiliation probability that the designated first ID belongs to the corresponding cluster, adds the weighted values, and determines the result as the value of the objective variable.
5. A prediction system comprising: co-clustering means for co-clustering a customer and a product based on first master data including customers and attributes of the customers, second master data including products and attributes of the products, and fact data indicating a relationship between the customer and the product; prediction model generation means for generating a prediction model for each cluster of the customers output by the co-clustering means; and prediction means for, when a customer and an objective variable that is one of the attributes of the customer are designated, predicting a value of the objective variable corresponding to the designated customer based on the prediction model and an affiliation probability that the designated customer belongs to each cluster.
6. The prediction system according to claim 5, wherein the prediction model generation means generates, for each cluster of the customers, a prediction model whose explanatory variables are the attributes of the customer and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the customer.
7. A prediction method comprising: co-clustering a first ID and a second ID based on first master data, second master data, and fact data indicating a relationship between the first ID, which is an ID of a record in the first master data, and the second ID, which is an ID of a record in the second master data; generating a prediction model for each cluster of the first ID; and, when the first ID and an objective variable that is one of the attributes included in the first master data are designated, predicting a value of the objective variable corresponding to the first ID based on the prediction model and an affiliation probability that the first ID belongs to each cluster.
8. The prediction method according to claim 7, wherein a prediction model whose explanatory variables are the attributes in the first master data and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the first ID is generated for each cluster of the first ID.
9. A prediction method comprising: co-clustering a customer and a product based on first master data including customers and attributes of the customers, second master data including products and attributes of the products, and fact data indicating a relationship between the customer and the product; generating a prediction model for each cluster of the customers; and, when a customer and an objective variable that is one of the attributes of the customer are designated, predicting a value of the objective variable corresponding to the designated customer based on the prediction model and an affiliation probability that the designated customer belongs to each cluster.
10. The prediction method according to claim 9, wherein a prediction model whose explanatory variables are the attributes of the customer and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the customer is generated for each cluster of the customers.
11. A prediction program for causing a computer to execute: a co-clustering process of co-clustering a first ID and a second ID based on first master data, second master data, and fact data indicating a relationship between the first ID, which is an ID of a record in the first master data, and the second ID, which is an ID of a record in the second master data; a prediction model generation process of generating a prediction model for each cluster of the first ID output by the co-clustering process; and a prediction process of, when the first ID and an objective variable that is one of the attributes included in the first master data are designated, predicting a value of the objective variable corresponding to the first ID based on the prediction model and an affiliation probability that the first ID belongs to each cluster.
12. The prediction program according to claim 11, causing the computer to generate, in the prediction model generation process, a prediction model whose explanatory variables are the attributes in the first master data and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the first ID, for each cluster of the first ID.
13. A prediction program for causing a computer to execute: a co-clustering process of co-clustering a customer and a product based on first master data including customers and attributes of the customers, second master data including products and attributes of the products, and fact data indicating a relationship between the customer and the product; a prediction model generation process of generating a prediction model for each cluster of the customers output by the co-clustering process; and a prediction process of, when a customer and an objective variable that is one of the attributes of the customer are designated, predicting a value of the objective variable corresponding to the designated customer based on the prediction model and an affiliation probability that the designated customer belongs to each cluster.
14. The prediction program according to claim 13, causing the computer to generate, in the prediction model generation process, a prediction model whose explanatory variables are the attributes of the customer and statistics of the attribute values in each record in the second master data determined by the fact data to be related to the customer, for each cluster of the customers.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/750,335 US20180225581A1 (en) | 2016-03-16 | 2017-03-03 | Prediction system, method, and program |
JP2018505812A JP6414363B2 (ja) | 2016-03-16 | 2017-03-03 | 予測システム、方法およびプログラム |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-052738 | 2016-03-16 | ||
JP2016052738 | 2016-03-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017159403A1 true WO2017159403A1 (ja) | 2017-09-21 |
Family
ID=59850923
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/008489 WO2017159403A1 (ja) | 2016-03-16 | 2017-03-03 | 予測システム、方法およびプログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US20180225581A1 (ja) |
JP (1) | JP6414363B2 (ja) |
WO (1) | WO2017159403A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019188101A1 (ja) * | 2018-03-27 | 2019-10-03 | カルチュア・コンビニエンス・クラブ株式会社 | 顧客の属性情報を解析する装置、方法、およびプログラム |
JP2020140572A (ja) * | 2019-02-28 | 2020-09-03 | 富士通株式会社 | 配分方法、抽出方法、配分プログラム、抽出プログラム、配分装置及び抽出装置 |
JP2021103339A (ja) * | 2018-03-27 | 2021-07-15 | カルチュア・コンビニエンス・クラブ株式会社 | 顧客の属性情報を解析する装置、方法、およびプログラム |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016157707A1 (ja) * | 2015-03-30 | 2016-10-06 | 日本電気株式会社 | 表操作システム、方法およびプログラム |
US10423781B2 (en) * | 2017-05-02 | 2019-09-24 | Sap Se | Providing differentially private data with causality preservation |
US10922335B1 (en) | 2018-01-29 | 2021-02-16 | Facebook, Inc. | User targeting using an unresolved graph |
US10803094B1 (en) * | 2018-01-29 | 2020-10-13 | Facebook, Inc. | Predicting reach of content using an unresolved graph |
JP7155074B2 (ja) * | 2019-07-03 | 2022-10-18 | 富士フイルム株式会社 | 情報提案システム、情報提案方法、プログラムおよび記録媒体 |
US11551024B1 (en) * | 2019-11-22 | 2023-01-10 | Mastercard International Incorporated | Hybrid clustered prediction computer modeling |
US11620542B2 (en) * | 2019-12-05 | 2023-04-04 | At&T Intellectual Property I, L.P. | Bias scoring of machine learning project data |
KR102501496B1 (ko) * | 2020-06-11 | 2023-02-20 | 라인플러스 주식회사 | 개인화를 통한 연합 학습의 다중 모델 제공 방법, 시스템, 및 컴퓨터 프로그램 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007164346A (ja) * | 2005-12-12 | 2007-06-28 | Toshiba Corp | 決定木変更方法、異常性判定方法およびプログラム |
US20090055139A1 (en) * | 2007-08-20 | 2009-02-26 | Yahoo! Inc. | Predictive discrete latent factor models for large scale dyadic data |
WO2014179724A1 (en) * | 2013-05-02 | 2014-11-06 | New York University | System, method and computer-accessible medium for predicting user demographics of online items |
-
2017
- 2017-03-03 WO PCT/JP2017/008489 patent/WO2017159403A1/ja active Application Filing
- 2017-03-03 JP JP2018505812A patent/JP6414363B2/ja active Active
- 2017-03-03 US US15/750,335 patent/US20180225581A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
MASAFUMI OYAMADA ET AL.: "Mugen Kongo SVM no Kankei Model-ka", 2016 NENDO PROCEEDINGS OF THE ANNUAL CONFERENCE OF JSAI (JSAI2016, 6 June 2016 (2016-06-06), pages 1 - 4, Retrieved from the Internet <URL:http://kaigi.org/jsai/webprogram/2016/paper-310.html> * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019188101A1 (ja) * | 2018-03-27 | 2019-10-03 | カルチュア・コンビニエンス・クラブ株式会社 | 顧客の属性情報を解析する装置、方法、およびプログラム |
JP2021103339A (ja) * | 2018-03-27 | 2021-07-15 | カルチュア・コンビニエンス・クラブ株式会社 | 顧客の属性情報を解析する装置、方法、およびプログラム |
JP7198591B2 (ja) | 2018-03-27 | 2023-01-04 | カルチュア・コンビニエンス・クラブ株式会社 | 顧客の属性情報を解析する装置、方法、およびプログラム |
JP2020140572A (ja) * | 2019-02-28 | 2020-09-03 | 富士通株式会社 | 配分方法、抽出方法、配分プログラム、抽出プログラム、配分装置及び抽出装置 |
JP7310171B2 (ja) | 2019-02-28 | 2023-07-19 | 富士通株式会社 | 配分方法、抽出方法、配分プログラム、抽出プログラム、配分装置及び抽出装置 |
Also Published As
Publication number | Publication date |
---|---|
US20180225581A1 (en) | 2018-08-09 |
JP6414363B2 (ja) | 2018-10-31 |
JPWO2017159403A1 (ja) | 2018-06-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6414363B2 (ja) | 予測システム、方法およびプログラム | |
JP6311851B2 (ja) | 共クラスタリングシステム、方法およびプログラム | |
TWI631518B (zh) | 具有一或多個計算裝置的電腦伺服系統及訓練事件分類器模型的電腦實作方法 | |
CN112085172B (zh) | 图神经网络的训练方法及装置 | |
Yao et al. | New fairness metrics for recommendation that embrace differences | |
US10984343B2 (en) | Training and estimation of selection behavior of target | |
US8768866B2 (en) | Computer-implemented systems and methods for forecasting and estimation using grid regression | |
WO2020135642A1 (zh) | 一种基于生成对抗网络的模型训练方法及设备 | |
CN111966886A (zh) | 对象推荐方法、对象推荐装置、电子设备及存储介质 | |
JP2017199355A (ja) | レコメンデーション生成 | |
WO2023103527A1 (zh) | 一种访问频次的预测方法及装置 | |
CN107392217B (zh) | 计算机实现的信息处理方法及装置 | |
US11301763B2 (en) | Prediction model generation system, method, and program | |
Shamsabadi et al. | Confidential-PROFITT: confidential PROof of fair training of trees | |
CN113886697A (zh) | 基于聚类算法的活动推荐方法、装置、设备及存储介质 | |
CN113591881A (zh) | 基于模型融合的意图识别方法、装置、电子设备及介质 | |
CN112560105A (zh) | 保护多方数据隐私的联合建模方法及装置 | |
US11704598B2 (en) | Machine-learning techniques for evaluating suitability of candidate datasets for target applications | |
US20210133853A1 (en) | System and method for deep learning recommender | |
JPWO2018088276A1 (ja) | 予測モデル生成システム、方法およびプログラム | |
CN114493674A (zh) | 一种广告点击率预测模型及方法 | |
Kuznietsova et al. | Business intelligence techniques for missing data imputation | |
CN112463964A (zh) | 文本分类及模型训练方法、装置、设备及存储介质 | |
CN111368337A (zh) | 保护隐私的样本生成模型构建、仿真样本生成方法及装置 | |
JP7309673B2 (ja) | 情報処理装置、情報処理方法、及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018505812 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15750335 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17766406 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17766406 Country of ref document: EP Kind code of ref document: A1 |