WO2023149138A1 - Estimator learning device - Google Patents

Estimator learning device

Info

Publication number
WO2023149138A1
Authority
WO
WIPO (PCT)
Prior art keywords
condition
qubo
estimator
learning device
unit
Prior art date
Application number
PCT/JP2022/048176
Other languages
English (en)
Japanese (ja)
Inventor
晃一郎 八幡
彰規 淺原
好弘 刑部
秀和 森田
Original Assignee
株式会社日立製作所
Priority date
Filing date
Publication date
Application filed by 株式会社日立製作所
Publication of WO2023149138A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 5/00 Computing arrangements using knowledge-based models
    • G06N 5/04 Inference or reasoning models
    • G06N 99/00 Subject matter not provided for in other groups of this subclass

Definitions

  • the present invention relates to an estimator learning device.
  • the technology for estimating objective variables from explanatory variable data is one of the most basic technologies of machine learning and artificial intelligence.
  • such estimation techniques are used in many situations. For example, in the field of material development, experimenting with every combination (condition) of multiple materials in order to develop a material with a high value of a specific material property takes an enormous amount of time and money. If the material property values can be estimated in advance from the experimental conditions, experiments with little prospect of success can be omitted, enabling efficient material development. The material property values should therefore be estimated with high accuracy. Decision trees and their derived algorithms are used for estimating objective variables from explanatory variable data because of their high accuracy.
  • Ising machines are machines that can solve QUBO (Quadratic Unconstrained Binary Optimization) problems, that is, quadratic optimization problems over binary variables, and are used to solve combinatorial optimization problems. Therefore, if the problem of searching for a decision tree that minimizes the estimation error can be converted into a QUBO problem, the strengths of the Ising machine can be exploited in learning decision trees.
  • Patent Document 1 discloses an Ising machine data input device and a method of inputting data to the Ising machine.
  • the Ising machine data input device includes a conversion unit that performs a conversion process to convert an input expression in a format not suitable for input to the Ising machine into a format suitable for input.
  • the conversion unit derives a mathematical expression and evaluates whether the derived expression satisfies a preset quality criterion.
  • the derived expression is input to the Ising machine when it is evaluated as satisfying the criterion. When the derived expression is evaluated as not satisfying the criterion, the conversion unit repeats the conversion process using a different input expression.
  • that is, Patent Document 1 converts the input problem into a QUBO problem by repeating a conversion process that transforms the input problem into a mathematically equivalent problem, and solves the converted QUBO problem with an Ising machine.
  • however, the problem of searching for a decision tree that minimizes the estimation error cannot be converted into a QUBO problem by mathematically equivalent transformations alone.
  • the present invention has been made in view of the above problems, and its purpose is to provide a technique for increasing the accuracy of decision tree estimation.
  • the present invention provides an estimator learning apparatus for learning an estimator that searches for a branching condition of a decision tree for estimating an objective variable from explanatory variable data, wherein the estimator comprises: a QUBO conversion unit that converts the prediction error minimization problem in the condition search into a QUBO problem or a first problem equivalent to the QUBO problem; a QUBO calculation unit that calculates the first problem converted by the QUBO conversion unit; and a branch condition generation unit that generates the branch condition based on the calculation result of the QUBO calculation unit.
  • FIG. 1 is a functional block diagram of the estimator learning device according to the first embodiment.
  • FIG. 2 is a decision tree according to the first embodiment.
  • FIG. 3 is a functional block diagram of a QUBO problem conversion unit according to the first embodiment.
  • FIG. 4 is a diagram showing a data structure example of an explanatory variable DB according to the first embodiment.
  • FIG. 5 is a diagram showing a data structure example of an objective variable DB according to the first embodiment.
  • FIG. 6 is a diagram showing a data structure example of a condition DB according to the first embodiment.
  • FIG. 7 is a diagram showing a data structure example of a conditional explanatory variable DB according to the first embodiment.
  • FIG. 8 is a diagram showing a data structure example of a decision tree DB according to the first embodiment.
  • FIG. 9 is a diagram showing a data structure example of a learning parameter DB according to the first embodiment.
  • FIG. 10 is a processing flow of the estimator learning device according to the first embodiment.
  • a first embodiment of the present invention will be described using FIG. 1.
  • FIG. 1 is a functional block diagram of the estimator learning device according to the first embodiment.
  • the estimator learning device 100 of the present invention comprises an interface 10, a database (DB) 11, and an estimator 12.
  • the interface 10 includes an input unit 101 and an output unit 102 as an example of a "display unit". Data including explanatory variables and objective variables are input to the input unit 101.
  • the explanatory variables are stored in the explanatory variable DB 110 (FIG. 4), and the objective variables are stored in the objective variable DB 111 (FIG. 5).
  • the output unit 102 outputs data to the outside.
  • the estimator 12 includes a condition generation unit 103, a conditional explanatory variable generation unit 104, a QUBO problem conversion unit 105, a QUBO problem calculation unit 106, a branch condition generation unit 107, a condition determination unit 108, and an objective variable estimation unit 109.
  • FIG. 2 is a decision tree according to the first embodiment.
  • the condition generation unit 103 generates branch conditions.
  • a branching condition is used for branching a decision tree.
  • a decision tree is a machine learning algorithm that estimates objective variables based on given explanatory variables, as shown in FIG. 2. In the decision tree, branches are created sequentially using explanatory variables so that the prediction error becomes small.
  • the branching condition of the decision tree is a numerical condition using one explanatory variable, such as "temperature is higher than 30 degrees (temperature>30)".
  • the branching conditions of the present invention are not limited to such conditions, and may be any conditions that can return true or false from the explanatory variables of each sample. For example, "the sum of temperature and humidity is 100 or more" and "a person appears in the image" can be considered.
  • the condition generation unit 103 may create branch conditions manually (by the user) or automatically from the explanatory variables. For example, if an explanatory variable is a continuous quantity, conditions such as "temperature > the 1/5 quantile of temperature", "temperature > the 2/5 quantile of temperature", and "temperature > the 3/5 quantile of temperature" can be determined based on statistics of the explanatory variable. Parameters related to the statistics, such as the quantile granularity, may be selected by the user or determined automatically based on the sample size. If an explanatory variable is label data, conditions such as "the day of the week is Monday" or "the day of the week is not Monday" can be generated automatically.
  • conditions based on missing values, such as "temperature data is missing" or "the number of missing explanatory variables is 5 or more", can also be generated automatically. If an explanatory variable is missing and the condition is difficult to evaluate, the condition may, for example, be treated as not satisfied. The generated conditions are saved in the condition DB 112.
  • the conditioned explanatory variable generation unit 104 generates conditioned explanatory variables from the explanatory variables, and stores the generated conditioned explanatory variables in the conditioned explanatory variable DB 113, as sketched below.
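  • By way of illustration, a minimal sketch of the condition generation unit 103 and the conditioned explanatory variable generation unit 104 in Python follows (it assumes pandas-style tabular input; the function names, the fixed quantile grid, and the dict representation are illustrative, not the patent's API):

        import numpy as np
        import pandas as pd

        def generate_conditions(df: pd.DataFrame) -> dict:
            """Condition generation unit 103 (sketch): quantile-threshold
            conditions for continuous columns, equality conditions for
            label columns."""
            conditions = {}
            for col in df.columns:
                if pd.api.types.is_numeric_dtype(df[col]):
                    for q in (0.2, 0.4, 0.6):  # quantile granularity: assumed parameter
                        t = float(df[col].quantile(q))
                        conditions[f"{col} > {t:.2f}"] = df[col] > t
                else:
                    for v in df[col].dropna().unique():
                        conditions[f"{col} == {v}"] = df[col] == v
            return conditions

        def conditioned_matrix(conditions: dict) -> np.ndarray:
            """Conditioned explanatory variable generation unit 104 (sketch):
            X[i][j] = 1 if sample i satisfies condition j.  Comparisons
            against missing values evaluate to False, i.e. the condition
            is treated as not satisfied."""
            return np.column_stack([np.asarray(s, dtype=bool).astype(int)
                                    for s in conditions.values()])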
  • the QUBO problem conversion unit 105 converts a branch condition search problem that reduces the prediction error (prediction error minimization problem) into a QUBO (Quadratic Unconstrained Binary Optimization) problem as an example of the "first problem".
  • the QUBO problem conversion unit 105 may convert the prediction error minimization problem into a problem equivalent to the QUBO problem.
  • FIG. 3 is a functional block diagram of the QUBO problem conversion unit.
  • the QUBO problem conversion unit 105 includes an error function generation unit 301 and a QUBO problem generation unit 302.
  • first, the error function generation unit 301 will be explained.
  • the error function, which is an index of the prediction error, is the residual sum of squares, i.e., the sum of squared errors between the observed values and the predicted values. It is represented by Equation 1 below:

        J = \sum_{i \in S1} (y[i] - pred1)^2 + \sum_{i \in S0} (y[i] - pred0)^2        (Equation 1)

  • here J is the residual sum of squares, y[i] is the objective variable of sample i, S1 is the set of samples that satisfy the condition, S0 is the set of samples that do not satisfy the condition, pred1 is the predicted value for the samples that satisfy the condition, and pred0 is the predicted value for the samples that do not satisfy it.
  • the pred1 and pred0 that minimize J are the mean of y over the samples that satisfy the condition and the mean of y over the samples that do not, respectively. Therefore, the residual sum of squares J is represented by Equation 2 below:

        J = N(S1) Var(S1) + N(S0) Var(S0)        (Equation 2)

  • Var(S) represents the variance of the set S, and N(S) represents the number of elements of S.
  • that is, the residual sum of squares is the variance of each sample group divided by the condition, weighted by the number of samples in that group. Transforming Equation 2 gives Equation 3 below:

        J = \sum_{i \in S} y[i]^2 - (\sum_{i \in S1} y[i])^2 / N(S1) - (\sum_{i \in S0} y[i])^2 / N(S0)        (Equation 3)

  • since N(S1) and N(S0) appear in the denominators of Equation 3, it cannot be converted into a QUBO problem as it is.
  • the QUBO problem conversion unit 105 therefore adjusts the weight applied to the variance of each sample group so that the residual sum of squares becomes convertible into a QUBO problem. For example, weighting is performed not by the number of samples but by the square of the number of samples, as in Equation 4 below:

        H = N(S1)^2 Var(S1) + N(S0)^2 Var(S0)
          = N(S1) \sum_{i \in S1} y[i]^2 - (\sum_{i \in S1} y[i])^2 + N(S0) \sum_{i \in S0} y[i]^2 - (\sum_{i \in S0} y[i])^2        (Equation 4)

    however, as long as N(S1) and N(S0) can be eliminated from the denominators, the weight does not have to be the square of the number of samples; it may be, for example, the third power of the number of samples, the fourth power of the number of samples, or the square of the ratio of the numbers of samples.
  • the error function H differs from the residual sum of squares J only in this weighting, correlates strongly with J, and has a form that can be converted into a QUBO problem. Therefore, a branching condition that reduces the error function H is also a branching condition that reduces the residual sum of squares J.
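  • To make the relation between J and H concrete, the following minimal sketch evaluates Equation 2 and Equation 4 side by side for a candidate condition given as a 0/1 vector z (illustrative names, not the patent's code):

        import numpy as np

        def _weighted_var(s: np.ndarray, power: int) -> float:
            """N(S)**power * Var(S); contributes 0 for an empty sample group."""
            return 0.0 if s.size == 0 else (s.size ** power) * s.var()

        def residual_sum_of_squares(y: np.ndarray, z: np.ndarray) -> float:
            """Equation 2: J = N(S1)*Var(S1) + N(S0)*Var(S0)."""
            return _weighted_var(y[z == 1], 1) + _weighted_var(y[z == 0], 1)

        def surrogate_error(y: np.ndarray, z: np.ndarray) -> float:
            """Equation 4: H = N(S1)**2 * Var(S1) + N(S0)**2 * Var(S0).
            Squaring the sample-count weight removes N(S1) and N(S0) from
            the denominators, which is what makes the QUBO conversion
            possible."""
            return _weighted_var(y[z == 1], 2) + _weighted_var(y[z == 0], 2)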
  • next, the QUBO problem generation unit 302 will be explained.
  • the QUBO problem generation unit 302 determines the search conditions and the data to be input to the QUBO problem calculation unit 106.
  • as a search condition, for example, a condition (such as temperature > 20) corresponding to a column of the conditional explanatory variable DB (FIG. 7), which will be described later, can be considered.
  • next, the generated QUBO problem will be described.
  • the QUBO problem is expressed by an error function to be minimized, written in terms of QUBO variables taking the value 0 or 1, and one or more constraints that the QUBO variables must satisfy.
  • the error function is obtained by rewriting Equation 4 in terms of the QUBO variables. Writing z[i] = \sum_{j \in C} c[j] X[i][j] for whether sample i satisfies the selected condition, the error function is represented by Equation 6 below:

        H = (\sum_{i \in S} z[i]) (\sum_{i \in S} z[i] y[i]^2) - (\sum_{i \in S} z[i] y[i])^2
          + (\sum_{i \in S} (1 - z[i])) (\sum_{i \in S} (1 - z[i]) y[i]^2) - (\sum_{i \in S} (1 - z[i]) y[i])^2        (Equation 6)

    since z[i] is linear in the c[j], Equation 6 is quadratic in the binary variables and is therefore a QUBO objective.
  • here S is the set of all samples, X[i][j] is the conditioned explanatory variable of condition j for sample i, C is the set of conditions, and c[j] is a QUBO variable expressing whether condition j is used.
  • the conditions to be used must be narrowed down to one, which is represented by the constraint of Equation 7 below:

        \sum_{j \in C} c[j] = 1        (Equation 7)
  • the QUBO problem conversion unit 105 outputs the error function and constraints calculated as described above.
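  • One possible encoding of this output as an explicit QUBO matrix is sketched below: under the one-hot constraint of Equation 7, the energy of selecting condition j is that condition's error H_j (computed with surrogate_error from the sketch above), and the penalty A(\sum_j c[j] - 1)^2 is expanded into diagonal and off-diagonal terms. This is one valid encoding under the stated assumptions, not necessarily the exact matrix of Equation 6:

        import numpy as np

        def build_qubo(X: np.ndarray, y: np.ndarray, penalty: float) -> np.ndarray:
            """QUBO problem conversion (sketch).  Minimize c @ Q @ c over
            binary c.  Diagonal: H_j for condition j plus the -A term from
            expanding A*(sum_j c_j - 1)**2; off-diagonal: the +2A cross
            terms.  The constant +A is dropped as it does not change the
            argmin.  'penalty' must exceed every H_j."""
            n_cond = X.shape[1]
            Q = np.zeros((n_cond, n_cond))
            for j in range(n_cond):
                Q[j, j] = surrogate_error(y, X[:, j]) - penalty
                Q[j, j + 1:] = 2.0 * penalty
            return Q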
  • the QUBO problem calculation unit 106 calculates QUBO problems.
  • for the QUBO problem calculation unit 106, an Ising machine (also called an annealing machine) such as a digital annealer can be used.
  • the QUBO problem calculation unit 106 outputs the QUBO variables c as an example of the "calculation result".
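  • As a stand-in for the Ising machine, the converted problem can also be minimized by exhaustive enumeration, which is feasible only for small condition sets; a real deployment would submit Q to an annealing machine, whose vendor APIs are not reproduced here:

        import numpy as np

        def solve_qubo_brute(Q: np.ndarray) -> np.ndarray:
            """Exhaustive minimization of c @ Q @ c over binary vectors c
            (a stand-in for the QUBO problem calculation unit 106)."""
            n = Q.shape[0]
            best_c, best_e = None, np.inf
            for bits in range(1 << n):
                c = np.fromiter(((bits >> j) & 1 for j in range(n)),
                                dtype=float, count=n)
                e = float(c @ Q @ c)
                if e < best_e:
                    best_c, best_e = c, e
            return best_c

  • With the penalty chosen large enough, the returned c is one-hot, and np.argmax(c) gives the condition j that is passed on to the branch condition generation unit 107.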
  • the condition determination unit 108 determines whether to use the condition j output from the QUBO problem calculation unit 106 in the estimator 12. First, the condition determination unit 108 uses the output condition j to calculate how the samples are divided and the prediction error at that time. The condition determination unit 108 then stores this information in the decision tree DB 114. When the condition determination unit 108 determines to use condition j, the determination of whether to further divide each divided sample group may be repeated.
  • the objective variable estimation unit 109 estimates the objective variable from the explanatory variable data using the learned estimator 12.
  • the database (DB) 11 comprises an explanatory variable database (DB) 110, an objective variable DB 111, a condition DB 112, a conditioned explanatory variable DB 113, a decision tree DB 114, and a learning parameter DB 115.
  • the user inputs data including explanatory variables and objective variables at the input unit, and can obtain an estimator that estimates the objective variable, or the estimation result of that estimator for the explanatory variables of new data.
  • FIG. 4 is a diagram showing an example data structure of the explanatory variable DB according to the first embodiment.
  • FIG. 5 is a diagram showing an example data structure of the objective variable DB according to the first embodiment.
  • the case of learning an estimator for estimating the daily sales of juice at a certain store will be described as an example.
  • the explanatory variable DB 110 is a table that stores, as item values (column values), an ID 401 and, as examples of "explanatory variables" of each sample, a temperature 402, a humidity 403, a day of the week 404, and a photo 405 of the front of the store on the previous day.
  • the ID 401 is an identifier that identifies an explanatory variable.
  • the temperature 402 is the Celsius temperature (degrees) around a certain store on the day.
  • the humidity 403 is the humidity (%) around a certain store on the day.
  • the day of the week 404 is the day of the week at a certain store on the day.
  • a photo 405 of the front of the store on the previous day is an image of the front of a certain store on the previous day.
  • Each row of the explanatory variable DB 110 and objective variable DB 111 corresponds to a sample, and these two explanatory variable DB 110 and objective variable DB 111 are linked with IDs 401 and 501.
  • the IDs 401 and 501 may be not only numbers but also character strings. For example, for juice sales, the IDs 401 and 501 may be dates.
  • each ID 401 is associated with an explanatory variable for each sample.
  • an explanatory variable may be a continuous numerical value such as the temperature 402 or the humidity 403, class information such as the day of the week 404, or image information such as the photo 405 of the front of the store on the previous day; the format of the explanatory variables is not limited.
  • explanatory variables may also include speech, sentences, chemical formulas, and the like. Also, some of the explanatory variables may be missing.
  • the objective variable DB 111 is a table that stores an ID 501 as an item value (column value) and juice sales 502 as an example of the "objective variable" to be estimated.
  • ID 501 is an identifier that identifies the objective variable.
  • the juice sales 502 is the number of sales of juice on the day at a certain store. As an example, the juice sales 502 are “20 (bottles)”, “22 (bottles)”, and “33 (bottles)”.
  • an objective variable is stored in the objective variable DB 111 for each ID 501.
  • FIG. 6 is a diagram showing an example data structure of the condition DB according to the first embodiment.
  • the condition DB 112 is a table that stores condition IDs 601 as item values (column values) and conditions 602 as an example of "branch conditions".
  • a condition ID 601 is an identifier that identifies a branch condition.
  • a condition 602 is a branching condition in the decision tree for estimating the objective variable from the explanatory variables. As an example, the condition 602 is "Temperature>20 (degrees)", “Temperature>22 (degrees)", and "Day of the week is Sunday".
  • FIG. 7 is a diagram showing an example data structure of the conditional explanatory variable DB according to the first embodiment.
  • the conditional explanatory variable DB 113 is a table that has, as item values (column values), an ID 701, "temperature > 20 (condition 0)" 702, "temperature > 22 (condition 1)" 703, "day of the week is Sunday (condition 2)" 704, and "a person exists in the image (condition 3)" 705.
  • the ID 701 is an identifier that identifies a conditional explanatory variable.
  • "temperature > 20 (condition 0)" 702 is a branching condition that the temperature around the store on the day is higher than 20 degrees.
  • "temperature > 22 (condition 1)" 703 is a branching condition that the temperature around the store on the day is higher than 22 degrees.
  • "day of the week is Sunday (condition 2)" 704 is a branching condition that the day of the week at the store on the day is Sunday.
  • "a person exists in the image (condition 3)" 705 is a branching condition that a person exists in the image captured in front of the store on the previous day.
  • Each column in FIG. 7 indicates with 0 and 1 whether each sample satisfies the condition.
  • the stored value does not have to be "1, 0" as long as it indicates whether the condition is satisfied; for example, "True, False" may be used.
  • FIG. 8 is a diagram showing an example data structure of the decision tree DB according to the first embodiment.
  • the decision tree DB 114 shown in the upper part of FIG. 8 stores the characteristics of the decision trees that are being created.
  • the decision tree DB 114 is a table that stores, as item values (column values), a node ID 801, a parent node 802, a true/false value 803 of the parent node's condition, a condition 804, a predicted value 805 when true, and a predicted value 806 when false.
  • each condition 804 is managed as a node, and the node indicating the condition in use is identified by the ID 801 of the parent node, the true/false value 803 of the parent node's condition, the ID of the node's own condition, and the true/false value of the node's condition.
  • the predicted value is the average value of the objective variable for each divided sample.
  • as a condition for determining whether or not to use a division in the estimator, for example, the number of samples after division being equal to or less than a threshold can be considered. Alternatively, the decrease in the prediction error being small, or the depth of the decision tree exceeding a threshold, can be considered.
  • these thresholds are stored in the learning parameter DB 115.
  • a decision tree based on the data stored in the decision tree DB 114 is shown at the bottom of FIG. 8.
  • if the condition 804 of node ID 0, "temperature > 22 (degrees)", is true (YES), the tree proceeds to node ID 1 with the condition 804 "day of the week is Sunday". If the condition 804 of node ID 0, "temperature > 22 (degrees)", is false (NO), the predicted value 806 when false is "10 (bottles)". If the condition 804 of node ID 1, "day of the week is Sunday", is true (YES), the predicted value 805 when true is "120 (bottles)". If the condition 804 of node ID 1, "day of the week is Sunday", is false (NO), the predicted value 806 when false is "90 (bottles)".
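  • The node table of FIG. 8 can be evaluated directly. The sketch below reproduces the tree at the bottom of FIG. 8 with a hypothetical encoding in which each node is (condition, branch-if-true, branch-if-false) and ('leaf', value) marks a predicted value:

        def predict(tree: dict, node_id: int, sample: dict) -> float:
            """Objective variable estimation (sketch): walk the node table
            from the root until a leaf predicted value is reached."""
            condition, if_true, if_false = tree[node_id]
            nxt = if_true if condition(sample) else if_false
            if isinstance(nxt, tuple):  # ('leaf', predicted value)
                return nxt[1]
            return predict(tree, nxt, sample)

        # The decision tree of FIG. 8: node 0 tests temperature > 22,
        # node 1 tests whether the day of the week is Sunday.
        tree = {
            0: (lambda s: s["temperature"] > 22, 1, ("leaf", 10.0)),
            1: (lambda s: s["day"] == "Sunday", ("leaf", 120.0), ("leaf", 90.0)),
        }
        print(predict(tree, 0, {"temperature": 25, "day": "Sunday"}))  # -> 120.0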
  • FIG. 9 is a diagram showing an example data structure of a learning parameter DB according to the first embodiment.
  • the learning parameter DB 115 is a table that stores, as item values (column values), a minimum division parameter 901, a minimum prediction error decrease width 902, and a maximum decision tree depth 903.
  • as an example, the minimum division parameter 901 is "10", the minimum prediction error decrease width 902 is "0.01", and the maximum decision tree depth 903 is "5".
  • the parameters may be set by the user or may be fixed values, or multiple parameter settings may be tried.
  • FIG. 10 is a diagram showing the processing flow of the estimator learning device according to the first embodiment. The system configuration will be explained in order of the processing flow.
  • Data including an explanatory variable and an objective variable are input to the input unit 101 (S1).
  • the explanatory variables input to the input unit 101 are stored in the explanatory variable DB 110, and the objective variables are stored in the objective variable DB 111.
  • the condition generation unit 103 generates conditions used for branching of the decision tree (S2).
  • conditional explanatory variable generation unit 104 generates a conditional explanatory variable from the explanatory variables, and stores the generated conditional explanatory variable in the conditional explanatory variable DB 113 (S3).
  • the QUBO problem conversion unit 105 converts the branch condition search problem that reduces the prediction error into a QUBO problem (S4).
  • the QUBO problem calculation unit 106 calculates the QUBO problem converted by the QUBO problem conversion unit 105, and the branch condition generation unit 107 generates conditions used for branching (S5).
  • the condition determination unit 108 divides the data samples using the branch condition generated by the branch condition generation unit 107, determines whether the division is used in the estimator 12, and stores the determination result in the decision tree DB 114 (S6). If the determination result is true (S6: YES), the processing flow returns to S5 in order to further divide each divided sample group. If the determination result is false (there are no more sample groups to be divided) (S6: NO), the condition determination unit 108 proceeds to the next processing step S7.
  • the output unit 102 outputs the features of the decision tree stored in the decision tree DB 114. That is, the output unit 102 outputs parameters obtained by learning (S7).
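  • Putting S2 to S6 together, the learning loop can be sketched as a recursion that reuses build_qubo and solve_qubo_brute from the earlier sketches. The penalty scale is an assumption chosen to dominate every H_j, and the minimum prediction error decrease check (902) is omitted for brevity:

        import numpy as np

        def learn_tree(X, y, names, min_division=10, max_depth=5, depth=0):
            """Sketch of the S2-S6 loop: select a branch condition by QUBO
            minimization, divide the samples, and recurse on each group.
            Stopping rules mirror the learning parameter DB of FIG. 9."""
            if depth >= max_depth or len(y) < min_division:
                return ("leaf", float(y.mean()))
            penalty = float(len(y) * (y @ y)) + 1.0  # assumed bound above every H_j
            c = solve_qubo_brute(build_qubo(X, y, penalty))
            j = int(np.argmax(c))
            mask = X[:, j] == 1
            if mask.all() or not mask.any():  # condition fails to divide the samples
                return ("leaf", float(y.mean()))
            return (names[j],
                    learn_tree(X[mask], y[mask], names, min_division, max_depth, depth + 1),
                    learn_tree(X[~mask], y[~mask], names, min_division, max_depth, depth + 1))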
  • as described above, the estimator learning device trains the estimator 12 that searches for the branching condition of the decision tree for estimating the objective variable from the explanatory variable data.
  • the estimator 12 includes a QUBO problem conversion unit 105, a QUBO problem calculation unit 106, and a branch condition generation unit 107.
  • the QUBO problem transforming unit 105 transforms the prediction error minimization problem in the branch condition search into a QUBO problem.
  • the QUBO problem calculation unit 106 calculates the QUBO problem converted by the QUBO problem conversion unit.
  • the branch condition generation unit 107 generates a branching condition based on the calculation result of the QUBO problem calculation unit 106. As a result, the accuracy of estimating the branching condition of the decision tree can be improved.
  • FIG. 11 is a decision tree including conditions that can be expressed by a logical product according to the second embodiment.
  • FIG. 12 is a diagram for explaining a method of expressing a logical product condition according to the second embodiment.
  • Embodiment 2 discloses an example in which a QUBO problem conversion unit 1005, different from that of Embodiment 1, is applied, and in which the search covers not only single conditions but also conditions that can be expressed as a logical product (AND) of conditions.
  • a branch using a condition that can be expressed by a logical product is, as shown in FIG. 11, a branch whose condition, such as "temperature > 30 and Sunday", is satisfied when multiple conditions such as "temperature > 30" and "day of the week is Sunday" are all satisfied.
  • the conditions are not limited to two, and any number of conditions described in the condition DB 112 can be used.
  • a condition that can be expressed as a logical product of such conditions is called a logical product condition.
  • a logical product condition is expressed by a vector indicating whether or not each condition is used, as shown in FIG. 12.
  • for example, a vector in which the entries for "temperature > 30" and "humidity > 50" are 1 expresses the logical product condition "temperature > 30 and humidity > 50". The QUBO problem conversion unit therefore searches for such a vector.
  • the error function H for this search problem is represented by Equation 8 below.
  • KX is a matrix of QUBO variables that indicates, for each sample i, the number of unsatisfied conditions among the conditions constituting the logical product condition.
  • K is the maximum number of conditions that can constitute the logical product condition.
  • the QUBO problem conversion unit 1005 generates a QUBO problem expressed by the error function and three types of constraints as described above.
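  • Whether a sample satisfies a logical product condition follows directly from the use-vector: the condition holds exactly when the number of selected-but-unsatisfied constituent conditions is zero, which is the count that the auxiliary QUBO variables around Equation 8 encode. A minimal sketch:

        import numpy as np

        def conjunction_satisfied(x_row: np.ndarray, use: np.ndarray) -> bool:
            """True iff every condition selected by the use-vector is
            satisfied in this sample's row of conditioned explanatory
            variables, i.e. the unsatisfied count is zero."""
            unsatisfied = int(np.sum(use * (1 - x_row)))
            return unsatisfied == 0

        # Example: conditions = [temperature > 30, humidity > 50, day is Sunday]
        use = np.array([1, 1, 0])    # select "temperature > 30 and humidity > 50"
        x_row = np.array([1, 0, 1])  # sample: hot, not humid, Sunday
        print(conjunction_satisfied(x_row, use))  # -> False (humidity > 50 unsatisfied)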
  • the branch condition generation unit 107 sets the branch condition search range to conditions generated from the table-format data divided at each branch, or to conditions expressed by a logical product of conditions. This makes it possible to widen the search range of branch conditions.
  • the present invention is not limited to the above-described embodiments, and includes various modifications.
  • the above-described embodiments have been described in detail in order to explain the present invention in an easy-to-understand manner, and are not necessarily limited to those having all the described configurations.
  • it is possible to replace part of the configuration of one embodiment with the configuration of another embodiment and it is also possible to add the configuration of another embodiment to the configuration of one embodiment.
  • each of the above configurations may be partially or wholly configured by hardware, or may be configured to be realized by executing a program on a processor.
  • control lines and information lines indicate those considered necessary for explanation, and not all control lines and information lines are necessarily indicated on the product. In practice, it may be considered that almost all configurations are interconnected.
  • the QUBO problem conversion units 105 and 1005 may convert the error minimization problem, in which the value to be minimized is the sum of the errors of the sample groups of table-format data divided at each branch, into a QUBO problem by weighting the error of each sample group by the number of samples, a value proportional to the number of samples, or the output value of a function of the number of samples and a sample coefficient.
  • the branching condition generation unit 107 may create a new branching condition based on the decision tree searched for the branching condition. This makes it possible to create deep decision trees.
  • the branching condition generation unit 107 may create a plurality of decision trees and combine the created decision trees to create a new decision tree. This makes it possible to further improve the accuracy of estimating the branching condition of the decision tree.
  • An importance calculation unit that calculates the importance of the branch condition based on the calculation result of the QUBO problem calculation unit 106, and a display unit 102 that displays the importance calculated by the importance calculation unit may be provided. This allows the user to determine the branching condition while confirming the degree of importance.
  • a display unit 102 for displaying the importance of the conditions generated by the branch condition generation unit 107 may be provided. This allows the user to determine the branching condition while confirming the degree of importance.

Abstract

The present invention improves the accuracy of estimating a branch condition of a decision tree. The estimator learning device (100) according to the invention trains an estimator (12) that searches for a branch condition of a decision tree for estimating an objective variable from explanatory variable data. The estimator (12) comprises: a QUBO problem conversion unit (105) that converts a prediction error minimization problem in the branch condition search into a first problem that is a QUBO problem or equivalent to a QUBO problem; a QUBO problem calculation unit (106) that calculates the first problem converted by the QUBO problem conversion unit (105); and a branch condition generation unit (107) that generates the branch condition based on the calculation result of the QUBO problem calculation unit (106).
PCT/JP2022/048176 2022-02-03 2022-12-27 Estimator learning device WO2023149138A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022015734A JP2023113393A (ja) 2022-02-03 2022-02-03 Estimator learning device
JP2022-015734 2022-02-03

Publications (1)

Publication Number Publication Date
WO2023149138A1 (fr)

Family

ID=87552279

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/048176 WO2023149138A1 (fr) 2022-12-27 Estimator learning device

Country Status (2)

Country Link
JP (1) JP2023113393A (fr)
WO (1) WO2023149138A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10222370A (ja) * 1997-02-06 1998-08-21 Kokusai Denshin Denwa Co Ltd <Kdd> Decision tree generation system in a database
JP2004157814A (ja) * 2002-11-07 2004-06-03 Fuji Electric Holdings Co Ltd Decision tree generation method and model structure generation device
WO2019189249A1 (fr) * 2018-03-29 2019-10-03 日本電気株式会社 Learning device, learning method, and recording medium
US20190392332A1 (en) * 2018-06-25 2019-12-26 Tmaxsoft Co., Ltd Computer Program Stored in Computer Readable Medium and Database Server Transforming Decision Table Into Decision Tree
JP2020030699A (ja) * 2018-08-23 2020-02-27 株式会社リコー Learning device and learning method


Also Published As

Publication number Publication date
JP2023113393A (ja) 2023-08-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22925040

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE