US20230186152A1 - Iterative data-driven configuration of optimization methods and systems - Google Patents

Iterative data-driven configuration of optimization methods and systems

Info

Publication number
US20230186152A1
US20230186152A1 (application US17/674,410)
Authority
US
United States
Prior art keywords
optimization
processor
computer
features
machine learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/674,410
Other languages
English (en)
Inventor
Sebastien OUELLET
Masoud CHITSAZ
Jacob LAFRAMBOISE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kinaxis Inc
Original Assignee
Kinaxis Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kinaxis Inc
Priority to US17/674,410
Assigned to KINAXIS INC. Assignors: CHITSAZ, Masoud; LAFRAMBOISE, Jacob; OUELLET, Sebastien
Priority to PCT/CA2022/051806 (published as WO2023102667A1)
Publication of US20230186152A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • Optimization problems differ in the types of variables, constraints, and other related matters that determine the overall configuration of the entity to be optimized.
  • the conventional approach has been to apply one algorithm to this myriad of optimization problems, which often provides unsatisfactory results.
  • One approach to mitigate this issue is the creation of a portfolio of optimization algorithms. Each optimization algorithm in the portfolio is executed on a given optimization problem, and the best result is selected after executing the entire portfolio of optimization algorithms on the given optimization problem.
  • this approach can become complex, as each optimization algorithm can have multiple options that can be adjusted or selected, leading to multiple “versions” of each algorithm in the portfolio. It becomes expensive and time-consuming to try many optimization algorithms (along with their multiple associated options) in order to find which optimization algorithm solves the problem in the best manner.
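To make this combinatorial growth concrete, the sketch below (with made-up algorithm names and option values, not ones named in the disclosure) enumerates every “version” of a small two-algorithm portfolio; under the conventional approach, every version would have to be executed on each new problem:

```python
from itertools import product

# Hypothetical portfolio: each base algorithm has tunable options, and every
# combination of option values yields a distinct "version" to execute.
portfolio = {
    "simplex":          {"presolve": [True, False], "scaling": ["auto", "off"]},
    "branch_and_bound": {"heuristics": [True, False], "cuts": ["none", "mild", "aggressive"]},
}

versions = []
for algo, options in portfolio.items():
    names, values = zip(*options.items())
    for combo in product(*values):
        versions.append((algo, dict(zip(names, combo))))

# 2*2 + 2*3 = 10 versions for just two algorithms; running all of them on
# every new problem is the cost the disclosed method avoids.
print(len(versions))  # 10
```

Even this toy portfolio yields ten runs per problem; real portfolios with more algorithms and options grow multiplicatively.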
  • supply chain optimization problems are complex; the complexity depends on many features, such as the number of suppliers, the number of parts and products to be transported, the number of production facilities, among many other features.
  • Each optimization algorithm takes a different amount of time to execute on a given supply chain problem. For example, an optimization algorithm may take hours or days to run.
  • each optimization algorithm returns a different solution (that is, solutions with differing quality or accuracy) for the supply chain optimization problem.
  • the portfolio of optimization algorithms is executed on each complex supply chain, thereby increasing the amount of computational infrastructure required, in terms of data storage, CPU time, and so forth.
  • Training of machine learning models is based on features of the optimization problem. These features may include the number of variables, the number of constraints, structures, relationships between variables, and so on.
  • the machine learning model learns from previous optimization solutions and suggests the best options, so that a new solution is calculated as fast as desired, and with the best quality metrics as desired. This is important, since increasing the quality of the solution often takes a lot of processing time.
  • S&OP (supply and operations planning)
  • the disclosed systems and methods select a correct solution method for the right S&OP problem—that is, optimize supply chain planning for a family of products.
  • the selection mechanism can be trained based on results obtained by applying different methods to similar problems. In some embodiments, the training can be done offline so that the trained model will not incur any extra delay in returning the solution to the user.
  • the disclosed methods and systems allow for the flexibility of not only choosing different optimization algorithms, but also, different configurations within a given optimization algorithm.
  • the disclosed methods and systems also increase computer efficiency by cutting down on the CPU time needed to optimize a problem, since only one optimization algorithm is selected from an entire portfolio of algorithms, for execution on a complex optimization problem.
  • the selection of the optimization algorithm is based on the algorithm providing the best metrics.
  • the disclosed methods and systems require less computer storage. All in all, knowledge of which optimization algorithm returns the best solution (in a given time frame) is valuable in terms of saving computing power and user waiting time.
  • the result is a supply chain plan that moves resources and goods, and schedules manufacturing.
  • the disclosed methods and systems improve computer efficiency, CPU time and data storage.
  • computer efficiency is enhanced, in that the disclosed systems and methods provide an optimization solution in less time: namely one optimization algorithm is applied to an optimization problem in order to arrive at the best solution possible (in terms of a combination of run-time and quality metric), instead of applying all available algorithms to the given problem.
  • the “CPU time” is the total time that the computer spends optimizing a problem with a given optimization algorithm.
  • the disclosed systems and methods decrease CPU time since not all of the optimization algorithms are executed on the problem at hand.
  • there is improvement in data storage since one optimization algorithm is selected to apply on a given optimization problem, thereby reducing the number of optimized solutions kept in storage.
  • a computer-implemented method includes extracting, by a processor, a first set of features from a plurality of optimization problems; receiving, by the processor, respective characteristics of a plurality of optimization algorithms, the characteristics of each algorithm based on application of the optimization algorithm to each optimization problem of the plurality of optimization problems; training, by the processor, a plurality of machine learning models on a first portion of a dataset, the dataset including the first set of features and the respective characteristics; selecting a trained machine learning model based on a second portion of the dataset; extracting, by the processor, a second set of features related to a new optimization problem; and obtaining, by the processor, predicted performance characteristics for each optimization algorithm based on application of the selected trained machine learning model on the second set of features.
  • the performance characteristics may comprise a run-time and a performance metric.
  • each of the first set of features and the second set of features can be based on tabular data and graph structures generated from the tabular data.
  • the computer-implemented method may also include ranking, by the processor, each optimization algorithm according to the predicted performance characteristics.
  • a first-ranked optimization algorithm may be executed on the new optimization problem.
  • successively-ranked optimization algorithms can be executed iteratively until one or more conditions are satisfied.
  • the one or more conditions can be: obtaining an actual run-time and an actual performance metric that is acceptable; or attaining a run-time limit; or expecting no further improvement on the run-time and performance metric of the successively-ranked optimization algorithms.
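The iterative execution of successively-ranked algorithms under these stopping conditions can be sketched as follows (a minimal illustration; the quality target, time limit, and algorithm callables are hypothetical placeholders, not part of the disclosure):

```python
import time

def run_ranked_portfolio(ranked_algorithms, problem,
                         quality_target=0.95, time_limit_s=3600.0):
    """Run algorithms in predicted-rank order until a condition is met.

    ranked_algorithms: list of (predicted_quality, algorithm) pairs,
    best-ranked first; each algorithm maps a problem to
    (solution, actual_quality).
    """
    start = time.monotonic()
    best = None
    for i, (_predicted, algorithm) in enumerate(ranked_algorithms):
        solution, quality = algorithm(problem)
        if best is None or quality > best[1]:
            best = (solution, quality)
        # Condition 1: an acceptable actual performance metric was obtained.
        if quality >= quality_target:
            break
        # Condition 2: the run-time limit was attained.
        if time.monotonic() - start >= time_limit_s:
            break
        # Condition 3: no further improvement is expected from the
        # remaining, lower-ranked algorithms.
        if all(p <= best[1] for p, _ in ranked_algorithms[i + 1:]):
            break
    return best
```

In the best case the first-ranked algorithm already satisfies the quality target and only one algorithm is ever executed.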
  • In another aspect, a system includes a processor.
  • the system also includes a memory storing instructions that, when executed by the processor, configure the system to: extract, by a processor, a first set of features from a plurality of optimization problems; receive, by the processor, respective characteristics of a plurality of optimization algorithms, the characteristics of each algorithm based on application of the optimization algorithm to each optimization problem of the plurality of optimization problems; train, by the processor, a plurality of machine learning models on a first portion of a dataset, the dataset including the first set of features and the respective characteristics; select a trained machine learning model based on a second portion of the dataset; extract, by the processor, a second set of features related to a new optimization problem; and obtain, by the processor, predicted performance characteristics for each optimization algorithm based on application of the selected trained machine learning model on the second set of features.
  • the performance characteristics may comprise a run-time and a performance metric.
  • each of the first set of features and the second set of features can be based on tabular data and graph structures generated from the tabular data.
  • the system may also include instructions that further configure the system to rank, by the processor, each optimization algorithm according to the predicted performance characteristics.
  • a first-ranked optimization algorithm may be executed on the new optimization problem.
  • successively-ranked optimization algorithms can be executed iteratively until one or more conditions are satisfied.
  • the one or more conditions can be: obtaining an actual run-time and an actual performance metric that is acceptable; or attaining a run-time limit; or expecting no further improvement on the run-time and performance metric of the successively-ranked optimization algorithms.
  • a non-transitory computer-readable storage medium includes instructions that, when executed by a computer, cause the computer to: extract, by a processor, a first set of features from a plurality of optimization problems; receive, by the processor, respective characteristics of a plurality of optimization algorithms, the characteristics of each algorithm based on application of the optimization algorithm to each optimization problem of the plurality of optimization problems; train, by the processor, a plurality of machine learning models on a first portion of a dataset, the dataset including the first set of features and the respective characteristics; select a trained machine learning model based on a second portion of the dataset; extract, by the processor, a second set of features related to a new optimization problem; and obtain, by the processor, predicted performance characteristics for each optimization algorithm based on application of the selected trained machine learning model on the second set of features.
  • the performance characteristics may comprise a run-time and a performance metric.
  • each of the first set of features and the second set of features can be based on tabular data and graph structures generated from the tabular data.
  • the computer-readable storage medium may also include instructions that further configure the computer to rank, by the processor, each optimization algorithm according to the predicted performance metric and predicted run-time.
  • a first-ranked optimization algorithm may be executed on the new optimization problem.
  • successively-ranked optimization algorithms can be executed iteratively until one or more conditions are satisfied.
  • the one or more conditions can be: obtaining an actual run-time and an actual performance metric that is acceptable; or attaining a run-time limit; or expecting no further improvement on the run-time and performance metric of the successively-ranked optimization algorithms.
  • FIG. 1 illustrates a block diagram in accordance with one embodiment.
  • FIG. 2 illustrates a block diagram of the training phase block shown in FIG. 1 in accordance with one embodiment.
  • FIG. 3 illustrates an example of a graph in accordance with one embodiment.
  • FIG. 4 illustrates a block diagram of the compute features block shown in FIG. 1 in accordance with one embodiment.
  • FIG. 5 illustrates a block diagram of the machine learning output block shown in FIG. 1 in accordance with one embodiment.
  • FIG. 6 illustrates a block diagram of the predicted performance characteristics block shown in FIG. 1 in accordance with one embodiment.
  • FIG. 7 illustrates a block diagram of the performance optimization block shown in FIG. 1 in accordance with one embodiment.
  • FIG. 8 illustrates conditions in the decision block shown in FIG. 7 in accordance with one embodiment.
  • FIG. 9 illustrates a block diagram in accordance with one embodiment.
  • FIG. 10 illustrates a block diagram of the training phase block shown in FIG. 9 in accordance with one embodiment.
  • FIG. 11 illustrates a computer system in accordance with one embodiment.
  • FIG. 12 illustrates a block diagram in accordance with one embodiment.
  • Methods and systems disclosed herein can comprise: an optimization solving framework comprising a set of optimizing algorithms used for solving an optimization problem; data representing each optimization problem to solve; data representing the quality of the optimized solution provided by each optimization algorithm for each optimization problem; data representing the run-time required to obtain the optimized solutions provided by each optimization algorithm for each optimization problem; and a machine learning framework.
  • Each problem can be optimized by applying every optimization algorithm to the problem.
  • successive problems can be optimized through a pretrained machine learning model. For example, if there are five hundred similar problems to solve, the first one hundred can be solved by each optimization algorithm in a portfolio of optimization algorithms. The graphical and tabular features of each of the first one hundred optimization problems, along with the run-time and quality of the solutions provided by each optimization algorithm, can be used to train the machine learning model. The remaining four hundred similar problems can then be solved using the trained machine-learning model.
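The hundred-then-four-hundred workflow in this example can be sketched as follows (illustrative only; the feature extractor, portfolio callables, and model trainer are hypothetical placeholders standing in for the components described in the disclosure):

```python
def solve_family(problems, portfolio, extract_features, train_model, n_train=100):
    """Exhaustively solve the first n_train problems with every algorithm,
    train a model on the observed results, then let the trained model pick
    a single algorithm for each remaining problem."""
    records = []
    for problem in problems[:n_train]:
        feats = extract_features(problem)
        for name, algorithm in portfolio.items():
            run_time, quality = algorithm(problem)
            records.append((feats, name, run_time, quality))
    model = train_model(records)

    plans = []
    for problem in problems[n_train:]:
        feats = extract_features(problem)
        # Predict a score (e.g., combined quality/run-time) per algorithm,
        # then execute only the best-predicted one.
        best_name = max(portfolio, key=lambda n: model.predict(feats, n))
        plans.append(portfolio[best_name](problem))
    return plans
```

For five hundred similar problems, only the first hundred incur the full portfolio cost; the remaining four hundred each run a single predicted-best algorithm.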
  • the methods and systems can solve the problem stated above, as the problems can change over time.
  • the machine learning model can generalize which optimization algorithms provide the best run-time and solution quality, given the characteristics (or features) of an optimization problem. This reduces the processing time and data storage needed to find an appropriate optimization algorithm for a current set of problems, thereby providing insights to a user about what makes a problem difficult to solve, while improving the quality of the solutions found.
  • aspects of the present disclosure may be embodied as a system, method or computer program product. Accordingly, aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present disclosure may take the form of a computer program product embodied in one or more computer readable storage media having computer readable program code embodied thereon.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Python, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • These computer program instructions may also be stored in a computer readable storage medium that can direct a computer, other programmable data processing system, or other devices to function in a particular manner, such that the instructions stored in the computer readable storage medium produce an article of manufacture including instructions which implement the function/act specified in the schematic flowchart diagrams and/or schematic block diagrams block or blocks.
  • the computer program instructions may also be loaded onto a computer, other programmable data processing system, or other devices to cause a series of operational steps to be performed on the computer, other programmable system or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable system provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the schematic flowchart diagrams and/or schematic block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
  • a computer program (which may also be referred to or described as a software application, code, a program, a script, software, a module or a software module) can be written in any form of programming language. This includes compiled or interpreted languages, or declarative or procedural languages.
  • a computer program can be deployed in many forms, including as a module, a subroutine, a stand-alone program, a component, or other unit suitable for use in a computing environment.
  • a computer program can be deployed to be executed on one computer or can be deployed on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • a “software engine” or an “engine,” refers to a software implemented system that provides an output that is different from the input.
  • An engine can be an encoded block of functionality, such as a platform, a library, an object or a software development kit (“SDK”).
  • Each engine can be implemented on any type of computing device that includes one or more processors and computer readable media.
  • two or more of the engines may be implemented on the same computing device, or on different computing devices.
  • Non-limiting examples of a computing device include tablet computers, servers, laptop or desktop computers, music players, mobile phones, e-book readers, notebook computers, PDAs, smart phones, or other stationary or portable devices.
  • the processes and logic flows described herein can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by, and systems can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
  • the processes and logic flows can also be performed by, and systems can also be implemented as, a graphics processing unit (GPU).
  • Computers suitable for the execution of a computer program include, by way of example, general or special purpose microprocessors or both, or any other kind of central processing unit.
  • a central processing unit receives instructions and data from a read-only memory or a random access memory or both.
  • a computer can also include, or be operatively coupled to receive data from, or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. It should be noted that a computer does not require these devices.
  • a computer can be embedded in another device.
  • Non-limiting examples of the latter include a game console, a mobile telephone, a mobile audio player, a personal digital assistant (PDA), a video player, a Global Positioning System (GPS) receiver, or a portable storage device.
  • a non-limiting example of a storage device is a universal serial bus (USB) flash drive.
  • Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices; non-limiting examples include magneto optical disks; semiconductor memory devices (e.g., EPROM, EEPROM, and flash memory devices); CD ROM disks; magnetic disks (e.g., internal hard disks or removable disks); and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • embodiments of the subject matter described herein can be implemented on a computer having a display device for displaying information to the user and input devices by which the user can provide input to the computer (for example, a keyboard, a pointing device such as a mouse or a trackball, etc.).
  • Other kinds of devices can be used to provide for interaction with a user.
  • Feedback provided to the user can include sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback).
  • Input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can send web pages to a web browser on a user's client device in response to requests received from the web browser.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes: a front end component (e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described herein); or a middleware component (e.g., an application server); or a back end component (e.g. a data server); or any combination of one or more such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
  • Non-limiting examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”).
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • FIG. 1 illustrates a block diagram 100 in accordance with one embodiment.
  • the optimization algorithms may be versions of mixed-integer linear optimization.
  • each optimization algorithm takes a certain amount of run-time to execute. Furthermore, each optimization algorithm provides an optimized solution whose quality is measured by a corresponding quality metric. All of this information is stored in database 118 .
  • the term “optimization family” is used to include instances where it is not just one particular problem that is being optimized, but an entire family of related problems that is being optimized. As an example, with reference to supply chain management, an optimization family refers to a whole family of inter-dependent parts (in a supply chain) that have one or more relationships between each other.
  • a new optimization family (or “new problem”), shown at 106 , is to be optimized.
  • block diagram 100 illustrates the use of machine learning to predict how long each optimization algorithm will take to optimize the new problem, along with predicting the corresponding quality metric of each optimization algorithm. This approach greatly improves computer efficiency, CPU time and data storage, in that the laborious execution of each optimization algorithm on the new optimization problem is avoided.
  • Data associated with the new optimization family can be stored in database 118 .
  • such data can include the lead time of a part, which sites manufacture the part, which components are assembled into the part, and so on.
  • the new optimization family input may be used to compute features of the new optimization family at 108 . These features can be used in conjunction with a trained machine learning model to predict characteristics of each optimization algorithm (namely, predicted run-time and solution quality), as if it had been executed on the new optimization family.
  • the features computed at 108 can use the optimization family input (item 106 ) as input and data from the database 118 .
  • the optimization family input (item 106 ) may also be stored in the database 118 , for possible later use in further training of machine learning models.
  • a training phase 102 can provide trained machine learning models at 104 .
  • the machine learning models may belong to a common class, or type, of model, or can be a mixture of different types of machine learning models.
  • the machine learning models trained at training phase 102 can be any type of machine learning model.
  • Non-limiting examples of machine learning models include decision trees, neural networks and support vector machines. In some embodiments, a tree-based machine learning model is used.
  • the machine learning models can be trained using hyperparameter optimization. Optimal hyperparameter values can be found by making use of different samplers, such as Bayesian, random, evolutionary, and grid search algorithms.
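As one deliberately minimal illustration of the grid-search variant, the sketch below trains one model per hyperparameter combination and keeps the best-scoring one; Bayesian, random, or evolutionary samplers would replace the exhaustive enumeration (the training and scoring callables here are hypothetical placeholders):

```python
from itertools import product

def grid_search(train_fn, score_fn, grid):
    """Exhaustive hyperparameter search: train a model for every
    combination in `grid` and return the best-scoring parameters."""
    best_score, best_params = float("-inf"), None
    names = list(grid)
    for combo in product(*(grid[n] for n in names)):
        params = dict(zip(names, combo))
        model = train_fn(**params)      # fit with this configuration
        score = score_fn(model)         # e.g., validation accuracy
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

Randomized or Bayesian samplers trade the exhaustive `product()` loop for a smarter (or cheaper) sequence of candidate configurations.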
  • Selection of the best machine learning model may be based on three portions of the data: a first portion for training each of the machine learning models; a second portion for validating the machine learning models, in which one machine learning model is selected; and a third portion to further test the selected machine learning model.
  • a predicted output of each trained machine learning model is compared to the actual data in the validation portion.
  • the machine learning model that provides the most accurate prediction is selected for the testing phase, in which the performance of the selected model can be tested one more time.
  • the data can be partitioned as follows: 35% train, 35% validation, and 30% test. Other partitionings of the data are also possible.
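The 35/35/30 partitioning and validation-based model selection can be sketched as follows (the fractions and error function are illustrative, and shuffling before the split is omitted for brevity):

```python
def split_dataset(rows, train_frac=0.35, val_frac=0.35):
    """Partition the dataset into train / validation / test portions
    (defaults match the 35% / 35% / 30% example split)."""
    n = len(rows)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return rows[:n_train], rows[n_train:n_train + n_val], rows[n_train + n_val:]

def select_model(candidates, val_error):
    """Pick the candidate model whose predictions are most accurate on
    the validation portion (i.e., with the lowest validation error)."""
    return min(candidates, key=val_error)
```

The selected model would then be evaluated once more on the held-out test portion, as described above.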
  • the result of 104 is the selection of one trained machine learning model at 120 , which can then be used to predict the performance of the different optimization algorithms at 110 .
  • the selected trained machine learning model can predict the performance of each of the optimization algorithms, using the features that have been computed at 108 .
  • the predicted performance characteristics are listed at 112 , and may be ranked according to pre-determined criteria.
  • the performance characteristics can include the run-time for executing a given optimization algorithm, and metrics associated with one or more goals of the final optimization. As an example of the latter in the field of supply chain management, such metrics can include the timely availability of supplies, the cost of production, the overall revenue, and so on.
  • each metric can be weighted, with the total weighted sum providing an overall “quality” metric.
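A minimal sketch of the weighted overall “quality” metric; the metric names and weight values below are made up for illustration:

```python
def overall_quality(metrics, weights):
    """Combine per-goal metrics (e.g., supply availability, production
    cost, revenue) into one weighted-sum "quality" score."""
    assert set(metrics) == set(weights), "each metric needs a weight"
    return sum(weights[name] * value for name, value in metrics.items())

score = overall_quality(
    {"availability": 0.9, "cost": 0.7, "revenue": 0.8},
    {"availability": 0.5, "cost": 0.2, "revenue": 0.3},
)
print(round(score, 2))  # 0.83
```

The weights encode how much each optimization goal matters, so different users can rank the same algorithm outputs differently.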
  • the top-ranked optimization algorithm can then be selected, and executed to perform the optimization on the optimization family at block 114 , thereby providing an optimized solution at block 116 .
  • the actual solution is then stored in the database 118 , along with the characteristics associated with the selected optimization algorithm, for use in further machine learning training. In this manner, computer efficiency is increased, CPU time is decreased, and database storage is decreased by running only one optimization algorithm on the new optimization family.
  • FIG. 2 illustrates a block diagram of the training phase 102 shown in FIG. 1 , in accordance with one embodiment.
  • features can be computed from basic input tabular data, and graphs (or tree structures) that are generated from the tabular input. Generation of graphical relationships from tabular data can provide additional knowledge of the structural relationship between various entities, thereby enhancing the robustness of the machine learning training. For example, in supply chain management of bicycles, table records provide useful data such as the names of the manufacturing sites, the amount of labor available per day at a production line, and so on, while graphs can be generated based on information in the tables, such as the relationship between the various components needed to manufacture a bicycle.
  • Basic input used to calculate features can include tables, at block 208 .
  • these tables can include a Bill of Materials table, and other supply chain features. Relevant features may be extracted from the tables at block 212 .
  • the tables can also be used to generate, at block 210 , a graph structure for an optimization family based on relationships between entities in the tables.
  • Data may be naturally understood as a network or graph, where the relationships between the various data points matter for the problem at hand.
  • one component of a bicycle is a wheel, which in turn requires an ‘X’ number of bearings.
  • a graph of the data reveals that the bicycle requires an ‘X’ number of bearings.
  • the relationships between the various data points are illustrated through a graph. For example, a bicycle with three layers of dependent materials is easier to plan for than a bicycle with seven layers.
  • An example of a graph structure is shown in FIG. 3 .
  • Features of the graph structure are then computed using graph computations at block 214 .
  • the features extracted from the tables may be merged with the features computed at block 214 .
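  • As an illustrative, non-limiting sketch of blocks 210 through 216, the following Python fragment builds a graph structure from hypothetical Bill of Materials rows, computes graph features such as the number of dependency layers, and merges them with table-derived features. The table contents, feature names and helper functions are assumptions for illustration only, not the patented implementation.

```python
from collections import defaultdict

# Hypothetical Bill of Materials rows: (parent part, component, quantity per parent).
bom_rows = [
    ("bicycle", "wheel", 2),
    ("bicycle", "frame", 1),
    ("wheel", "bearing", 9),    # the 'X' bearings per wheel from the example
    ("wheel", "spoke", 32),
]

def build_graph(rows):
    """Generate an adjacency list from tabular BOM rows (cf. block 210)."""
    graph = defaultdict(list)
    for parent, component, _qty in rows:
        graph[parent].append(component)
    return graph

def graph_depth(graph, root):
    """Count the layers of dependent materials below `root` (cf. block 214)."""
    children = graph.get(root, [])
    if not children:
        return 0
    return 1 + max(graph_depth(graph, child) for child in children)

graph = build_graph(bom_rows)

# Graph-derived features merged with table-derived features (cf. block 216).
features = {
    "num_parts": len({p for p, _, _ in bom_rows} | {c for _, c, _ in bom_rows}),
    "bom_depth": graph_depth(graph, "bicycle"),       # graph-derived feature
    "total_quantity": sum(q for _, _, q in bom_rows), # table-derived feature
}
```

In this sketch, the bicycle has two layers of dependent materials (bicycle → wheel → bearing), illustrating the kind of structural feature that tabular data alone does not expose directly.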
  • optimization can be triggered for the optimization family for which the features are being computed.
  • During this optimization, the database 118 works in tandem with an optimization software server 204 to execute each optimization algorithm on the optimization family.
  • each optimization algorithm can also provide a set of characteristics associated with its execution. For example, characteristics can include the execution time of the optimization algorithm (on the given optimization family), along with different metrics that measure the quality of the optimized solution. As an example of the latter in the field of supply chain management, such metrics can include the timely availability of supplies, the cost of production, the overall revenue, and so on.
  • each metric can be weighted, with the total weighted sum providing an overall “quality” metric.
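  • The weighted overall “quality” metric described above can be sketched as follows; the metric names and weight values are illustrative assumptions.

```python
# Hypothetical solution metrics (higher is better) and user-chosen weights.
metrics = {"on_time_supply": 0.97, "production_cost": 0.88, "revenue": 0.92}
weights = {"on_time_supply": 0.5, "production_cost": 0.2, "revenue": 0.3}

def overall_quality(metrics, weights):
    """Weighted sum of the individual metrics, giving one overall quality score."""
    return sum(weights[name] * value for name, value in metrics.items())

quality = overall_quality(metrics, weights)  # 0.485 + 0.176 + 0.276
```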
  • the characteristics from block 206 can be used with the merged features from block 216 , to train machine learning models at block 218 in order to predict the characteristics of the optimization algorithms for new optimization families.
  • the input for the training can include features of each optimization family and a feature that identifies a particular optimization algorithm (for example, an optimization algorithm identification number).
  • the output labels can include the corresponding optimization algorithm characteristics.
  • the machine learning models can belong to a common class, or type, of model.
  • machine learning models from a gradient boosting library can be used, such as LGBM.
  • the machine learning models trained at training phase 102 can be tree-based machine learning models. Other types of models are also possible, such as neural networks and support vector machines.
  • the machine learning models can be trained using hyperparameter optimization. Optimal hyperparameter values can be found by making use of different search strategies, such as Bayesian, random, evolutionary and grid search algorithms. In some embodiments, three to seven distinct machine learning models can be used. For each distinct machine learning model, it is possible to have a set of parameters associated with the distinct model. As such, one distinct machine learning model may actually result in multiple machine learning models as different parameter values are chosen. For example, if a machine learning model has a parameter that can have binary values, then the machine learning model can be run as two associated machine learning models.
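  • As a non-limiting sketch of the training input/output layout described above (features of each optimization family plus an algorithm-identifier feature as input, and observed characteristics as labels), the fragment below assembles training rows. A model such as LGBM could then be fit on these rows; all field names here are assumptions for illustration.

```python
# Observed history: for each (optimization family, algorithm) pair, the
# family's features plus the measured characteristics of the executed algorithm.
history = [
    {"family_features": {"bom_depth": 3, "num_parts": 40}, "algo_id": 0,
     "runtime_s": 30.0, "quality": 98.0},
    {"family_features": {"bom_depth": 7, "num_parts": 95}, "algo_id": 1,
     "runtime_s": 10.0, "quality": 95.0},
]

def to_training_rows(history):
    """Inputs: family features plus an algorithm-identifier feature;
    labels: the corresponding algorithm characteristics."""
    inputs, labels = [], []
    for record in history:
        row = dict(record["family_features"])
        row["algo_id"] = record["algo_id"]  # optimization algorithm ID feature
        inputs.append(row)
        labels.append((record["runtime_s"], record["quality"]))
    return inputs, labels

X, y = to_training_rows(history)
```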
  • FIG. 3 illustrates an example of a graph 300 in accordance with one embodiment, that can be generated in block 210 of FIG. 2 .
  • the graph 300 illustrates relationships between different parts of a supply chain for the production of electronic bicycles.
  • Each entity (or part) is identified as a node 302 , while relationships between the entities are illustrated with links 304 .
  • the node shape key 306 describes the nature of the ordered entity, while the node colour key 308 reflects how often the order for each entity is on-time.
  • This example graph is generated from a table of data for an optimization family.
  • FIG. 4 illustrates a block diagram of the compute features block 108 shown in FIG. 1 in accordance with one embodiment.
  • the features of the new optimization family are computed in the same manner as in block 220 of FIG. 2 .
  • Basic input used to calculate features can include tables, at block 402 .
  • these tables can include a table of Bill of Materials, and other supply chain features. Relevant features may be extracted from the tables at block 406 .
  • the tables (at block 402 ) can also be used to generate, at block 404 , a graph structure for an optimization family based on relationships between entities in the tables.
  • Data may be naturally understood as a network/graph and the relationships between the various data points matter for the problem at hand.
  • one component of a bicycle is a wheel, which in turn requires an ‘X’ number of bearings.
  • a graph of the data reveals that the bicycle requires an ‘X’ number of bearings.
  • the relationships between the various data points can be illustrated through a graph. For example, a bicycle with three layers of dependent materials is easier to plan for than a bicycle with seven layers.
  • Features of the graph structure are then computed using graph computations at block 408 .
  • the features extracted from the tables may be merged with the features computed at block 408 .
  • the merged features may then be used by the trained machine learning model at 120 of FIG. 1 .
  • FIG. 5 illustrates a block diagram of the machine learning output block 110 shown in FIG. 1 in accordance with one embodiment.
  • the merged features of the new optimization family, which are computed at block 408 , can be used with the selected trained machine learning model 120 , to predict the performance (that is, quality metrics) and run-time for each optimization algorithm.
  • merged features of the new optimization family can be used with the trained machine learning model to run a performance model 504 , using a first optimization algorithm 502 , to provide a predicted performance (or quality metric) at 506 .
  • the merged features are used with the trained machine learning model to run a runtime model 508 , using a first optimization algorithm 502 , to provide a predicted processing time at 510 .
  • the selected machine learning model 120 provides the predicted quality and execution time of a first optimization algorithm, as if it were to be applied to the new optimization family.
  • This process is repeated as the merged features of the new optimization family (at block 408 ) are used with the trained machine learning model to run a performance model 514 , using a second optimization algorithm 512 , to provide a predicted performance (or quality metric) at 516 .
  • the merged features are used with the trained machine learning model to run a runtime model 518 , using the second optimization algorithm 512 , to provide a predicted processing time at 520 .
  • the selected machine learning model 120 provides the predicted quality and execution time of a second optimization algorithm, as if it were to be applied to the new optimization family.
  • the selected machine learning model 120 provides the predicted quality and run-time of each optimization algorithm, as if it were to be applied to the new optimization family.
  • FIG. 6 illustrates a block diagram of the predicted performance characteristics block 112 shown in FIG. 1 in accordance with one embodiment.
  • Characteristics of each predicted solution are provided in Table 610 .
  • the performance characteristics predicted for each optimization algorithm 604 are listed.
  • two characteristics are predicted for each: run-time 606 and predicted performance 608 (or quality).
  • such metrics can include the timely availability of supplies, the cost of production, the overall revenue, and so on.
  • each metric can be weighted, with the total weighted sum providing an overall “quality” metric.
  • optimization algorithm A results in a predicted run-time of 30 seconds and a quality metric of 98.
  • Optimization algorithm B results in a predicted run-time of 10 seconds and a quality metric of 95. That is, optimization algorithm A takes three times as long as optimization algorithm B to execute on the new optimization family. However, the quality of the optimized solution (as measured by a combination of weighted performance metrics) provided by optimization algorithm A is higher than that of optimization algorithm B.
  • optimization algorithm A provides better overall metrics than optimization algorithm B.
  • All of the optimization algorithms and their associated predicted characteristics can then be ranked in order of preference, according to priorities of time versus solution quality tradeoff, at block 602 .
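  • The ranking at block 602 might, for example, score each optimization algorithm by a weighted tradeoff between predicted quality and predicted run-time. The sketch below uses the predictions from Table 610; the scoring function and weight values are illustrative assumptions.

```python
# Predicted characteristics for each optimization algorithm, as in Table 610.
predictions = [
    {"algo": "A", "runtime_s": 30.0, "quality": 98.0},
    {"algo": "B", "runtime_s": 10.0, "quality": 95.0},
]

def rank_algorithms(predictions, quality_weight=1.0, time_weight=0.1):
    """Order algorithms by a quality-versus-time tradeoff score."""
    def score(p):
        return quality_weight * p["quality"] - time_weight * p["runtime_s"]
    return sorted(predictions, key=score, reverse=True)

ranked = rank_algorithms(predictions)  # A scores 95.0, B scores 94.0
```

With a larger `time_weight` (say 0.5), algorithm B would outrank algorithm A, reflecting a user whose priorities favor speed over solution quality.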
  • FIG. 7 illustrates a block diagram of the performance optimization block 114 shown in FIG. 1 in accordance with one embodiment.
  • optimization is triggered at block 704 .
  • the database 118 works in tandem with the optimization software server 204 to provide an optimized result of the new optimization family. This result is analyzed at decision block 702 . If the executed result meets one or more conditions to exit optimization, then optimization is complete, and a solution is provided at block 116 . However, if the conditions are not met, then optimization is triggered using the next-ranked optimization algorithm. The process is then repeated, until a satisfactory solution emerges at block 116 . Examples of conditions are discussed below.
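  • The loop through ranked optimization algorithms, with exit conditions of the kind found in decision block 702, might be sketched as below. The `run_algo` callable stands in for the optimization software server 204, and the specific conditions and threshold values are illustrative assumptions.

```python
def optimize_with_fallback(ranked_algos, run_algo, good_enough, time_budget_s):
    """Execute ranked algorithms in turn (cf. block 704) until a condition
    akin to decision block 702 is met; otherwise return the best so far."""
    elapsed = 0.0
    best = None
    for algo in ranked_algos:
        solution = run_algo(algo)          # stand-in for the optimization server
        elapsed += solution["runtime_s"]
        if best is None or solution["quality"] > best["quality"]:
            best = solution
        if good_enough(solution):          # condition: result is acceptable
            return solution
        if elapsed >= time_budget_s:       # condition: time limit exceeded
            return best
    return best                            # ranked list exhausted

# Hypothetical stand-in for executing an optimization algorithm.
def run_algo(algo):
    table = {"A": (30.0, 98.0), "B": (10.0, 95.0)}
    runtime_s, quality = table[algo]
    return {"algo": algo, "runtime_s": runtime_s, "quality": quality}

result = optimize_with_fallback(
    ["A", "B"], run_algo,
    good_enough=lambda s: s["quality"] >= 99.0,  # assumed acceptance threshold
    time_budget_s=90.0,
)
```

Here neither algorithm meets the (deliberately strict) acceptance threshold, so both are tried within the time budget and the best solution found is returned.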
  • FIG. 8 illustrates examples of conditions in the decision block 702 shown in FIG. 7 in accordance with one embodiment.
  • the selected machine learning model produces estimates of optimization algorithm characteristics, as applied to a new optimization family. It is possible that the estimates are quite far away from the actual characteristics, once the selected optimization algorithm is executed on the new optimization family.
  • One condition in decision block 702 can be to determine if the optimized result (obtained after executing the selected optimization algorithm) is good enough for the user. That is, a user can set an upper limit for the difference between predicted and actual characteristics. If the predicted characteristics are very inaccurate, then the next-ranked optimization algorithm can be executed to see if its actual characteristics are closer to its expected characteristics than those of the previous optimization algorithm. Once the characteristics are acceptable (that is, accurate within a pre-set threshold), then the accompanying optimization solution is accepted.
  • Another condition in decision block 702 can be to see if a time limit is exceeded for the optimization.
  • a top-ranked optimization algorithm has a predicted time of execution. However, the actual time of execution may exceed a certain run-time threshold, at which point the execution will be aborted and the next-ranked optimization algorithm is executed, until a solution with an acceptable run-time characteristic is reached.
  • Another example of setting a run-time limit in decision block 702 is as follows. Suppose an upper run-time threshold of 90 seconds per new optimization family is set, there are five optimization algorithms, and each is predicted to take 30 seconds to execute on a given problem. Suppose further that the top three-ranked optimization algorithms execute within the 90-second threshold, yet none yields a result that is good enough (see above). Then, the optimization is aborted (that is, the next-ranked optimization algorithms are not executed), and the best solution of the three is returned as the optimized solution.
  • Another condition in decision block 702 can be to determine if no further improvement is expected.
  • improvement is a measure of the difference in quality between a new solution and a previous solution. That is, the machine learning output may suggest that the extra time required to run the next-ranked optimization algorithm is not worth the expected improvement, based on a threshold.
  • the machine learning output of the top-ranked optimization algorithm indicates a run-time of 10 seconds and a quality metric value of ‘X’.
  • the machine learning output of the second-ranked optimization algorithm indicates a run-time of 70 seconds and a quality metric value of ‘0.9X’, suggesting that it is not worthwhile to use the second-ranked optimization algorithm.
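  • The “no further improvement expected” condition can be sketched as a gain-per-extra-second threshold, as below. The threshold value and the choice of X = 100 are illustrative assumptions.

```python
def worth_running_next(current, nxt, min_gain_per_second=0.05):
    """Decide whether the next-ranked algorithm's predicted quality gain
    justifies its extra predicted run-time (threshold is an assumption)."""
    extra_time = nxt["runtime_s"] - current["runtime_s"]
    gain = nxt["quality"] - current["quality"]
    if extra_time <= 0:
        return gain > 0  # faster and better: always worth running
    return gain / extra_time >= min_gain_per_second

# From the example above, taking X = 100 for illustration:
top_ranked = {"runtime_s": 10.0, "quality": 100.0}    # quality X
second_ranked = {"runtime_s": 70.0, "quality": 90.0}  # quality 0.9X
skip_second = not worth_running_next(top_ranked, second_ranked)
```

Since the second-ranked algorithm takes 60 extra seconds and is predicted to yield lower quality, the sketch concludes it should be skipped.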
  • the step of training, validating and testing a number of machine learning models can be eliminated by using only one machine learning model in the training phase ( 102 of FIG. 1 ). Such an alternative is illustrated in FIG. 9 and FIG. 10 .
  • FIG. 9 illustrates a block diagram 900 in accordance with one embodiment.
  • FIG. 9 is similar to FIG. 1 , except that only one machine learning model is trained at training phase 902 .
  • optimization families (or optimization problems) have already been solved, in that a number of optimization algorithms have been executed on each optimization family to find an optimal solution for each optimization family.
  • the optimization algorithms may be versions of mixed-integer linear optimization.
  • each optimization algorithm takes a certain amount of run-time to execute. Furthermore, each optimization algorithm provides an optimized solution whose quality is measured by a corresponding quality metric. All of this information is stored in database 118 .
  • the term “optimization family” is used to include instances where it is not just one particular problem that is being optimized, but an entire family of related problems that is being optimized. As an example, with reference to supply chain management, an optimization family refers to a whole family of inter-dependent parts (in a supply chain) that have one or more relationships between each other.
  • a new optimization family (or “new problem”), shown at 106 , is to be optimized.
  • block diagram 900 illustrates the use of machine learning to predict how long each optimization algorithm will take to optimize the new problem, along with predicting the corresponding quality metric of each optimization algorithm. This approach greatly improves computer efficiency, CPU time and data storage.
  • computer efficiency is enhanced, in that the disclosed systems and methods provide more in less time: namely one optimization algorithm is applied to an optimization problem in order to arrive at the best solution possible (in terms of a combination of run-time and quality metric), instead of applying all available algorithms to the given problem.
  • the “CPU time” is the total time that the computer spends optimizing a problem with an optimization algorithm.
  • the disclosed systems and methods decrease CPU time since not all of the optimization algorithms are executed on the problem at hand.
  • there is improvement in data storage since one optimization algorithm is selected to apply on a given optimization problem, thereby reducing the number of optimized solutions kept in storage.
  • Data associated with the new optimization family can be stored in database 118 .
  • such data can include the lead time of a part, which sites are manufacturing this part, what are the components assembled for this part, and so on.
  • the new optimization family input may be used to compute features of the new optimization family at 108 . These features can be used in conjunction with a trained machine learning model to predict characteristics of each optimization algorithm (namely, predicted run-time and solution quality), as if it had been executed on the new optimization family.
  • the features computed at 108 can use the optimization family input (item 106 ) as input and data from the database 118 .
  • the optimization family input (item 106 ) may also be stored in the database 118 , for possible later use in further training of machine learning models.
  • a training phase 902 can provide a trained machine learning model 904 .
  • the machine learning model trained at training phase 902 can be any type of machine learning model.
  • Non-limiting examples of machine learning models include decision trees, neural networks and support vector machines. In some embodiments, a tree-based machine learning model is used.
  • the machine learning model can be trained using hyperparameter optimization. Optimal hyperparameter values can be found by making use of different search strategies, such as Bayesian, random, evolutionary and grid search algorithms.
  • the trained machine learning model can predict the performance of each of the optimization algorithms, using the features that have been computed at 108 .
  • the predicted performance characteristics are listed at 112 , and may be ranked according to a pre-determined criterion.
  • the performance characteristics can include the run-time for executing a given optimization algorithm, and metrics associated with one or more goals of the final optimization. As an example of the latter in the field of supply chain management, such metrics can include the timely availability of supplies, the cost of production, the overall revenue, and so on.
  • each metric can be weighted, with the total weighted sum providing an overall “quality” metric.
  • the top-ranked optimization algorithm can then be selected, and executed to perform the optimization on the optimization family at block 114 , thereby providing an optimized solution at block 116 .
  • the actual solution is then stored in the database 118 , along with the characteristics associated with the selected optimization algorithm, for use in further machine learning training. In this manner, computer efficiency is increased, CPU time is decreased, and database storage is decreased by running only one optimization algorithm on the new optimization family.
  • FIG. 10 illustrates a block diagram of the training phase 902 shown in FIG. 9 , in accordance with one embodiment.
  • FIG. 10 is similar to FIG. 2 , except at box 1014 , in which only one machine learning model is trained.
  • features can be computed from basic input tabular data, and graphs (or tree structures) that are generated from the tabular input. Generation of graphical relationships from tabular data can provide additional knowledge of the structural relationship between various entities, thereby enhancing the robustness of the machine learning training. For example, in supply chain management of bicycles, table records provide useful data such as the names of the manufacturing sites, the amount of labor available per day at a production line, and so on, while graphs can be generated based on information in the tables, such as the relationship between the various components needed to manufacture a bicycle.
  • Basic input used to calculate features can include tables, at block 1004 .
  • these tables can include a table of Bill of Materials, and other supply chain features. Relevant features may be extracted from the tables at block 1008 .
  • the tables (at block 1004 ) can also be used to generate, at block 1006 , a graph structure for an optimization family based on relationships between entities in the tables.
  • Data may be naturally understood as a network/graph and the relationships between the various data points matter for the problem at hand.
  • one component of a bicycle is a wheel, which in turn requires an ‘X’ number of bearings.
  • a graph of the data reveals that the bicycle requires an ‘X’ number of bearings.
  • the relationships between the various data points can be illustrated through a graph. For example, a bicycle with three layers of dependent materials is easier to plan for than a bicycle with seven layers.
  • An example of a graph structure is shown in FIG. 3 .
  • Features of the graph structure are then computed using graph computations at block 1010 .
  • the features extracted from the tables may be merged with the features computed at block 1010 .
  • optimization can be triggered for the optimization family for which the features are being computed.
  • During this optimization, the database 118 works in tandem with an optimization software server 204 to execute each optimization algorithm on the optimization family.
  • each optimization algorithm can also provide a set of characteristics associated with its execution. For example, characteristics can include the execution time of the optimization algorithm (on the given optimization family), along with different metrics that measure the quality of the optimized solution. As an example of the latter in the field of supply chain management, such metrics can include the timely availability of supplies, the cost of production, the overall revenue, and so on.
  • each metric can be weighted, with the total weighted sum providing an overall “quality” metric.
  • the characteristics from block 1002 can be used with the merged features from block 1012 , to train a machine learning model at block 1014 in order to predict the characteristics of the optimization algorithms for new optimization families.
  • the input for the training can include features of each optimization family and a feature that identifies a particular optimization algorithm (for example, an optimization algorithm identification number).
  • the output can include the corresponding optimization algorithm characteristics.
  • the machine learning model can belong to a common class, or type, of model.
  • a machine learning model from a gradient boosting library can be used, such as LGBM.
  • the machine learning model trained at training phase 902 can be a tree-based machine learning model.
  • Other types of models are also possible, such as neural networks and support vector machines.
  • the machine learning model can be trained using hyperparameter optimization. Optimal hyperparameter values can be found by making use of different search strategies, such as Bayesian, random, evolutionary and grid search algorithms.
  • FIG. 11 illustrates a computer system 1100 in accordance with one embodiment.
  • One or more embodiments of the invention, or elements thereof, can be implemented in the form of a system including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps.
  • System server 1102 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system.
  • program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • system server 1102 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer system storage media including memory storage devices.
  • system server 1102 is shown in the form of a general-purpose computing device.
  • the components of system server 1102 may include, but are not limited to, one or more processors 1112 , a memory 1110 , a program 1116 and a disk 1114 , which may be coupled by a bus structure (not shown).
  • Program 1116 may comprise a set of program modules which can execute functions and/or methods of embodiments of the invention as described herein.
  • Computer system 1100 can also include additional features and/or functionality.
  • computer system 1100 can also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape.
  • additional storage is illustrated in FIG. 11 by memory 1110 and disk 1114 .
  • Storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Memory 1110 and disk 1114 are examples of non-transitory computer-readable storage media.
  • Non-transitory computer-readable media also includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory and/or other memory technology, Compact Disc Read-Only Memory (CD-ROM), digital versatile discs (DVD), and/or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and/or any other medium which can be used to store the desired information and which can be accessed by computer system 1100 . Any such non-transitory computer-readable storage media can be part of computer system 1100 .
  • Communication between system server 1102 , external devices 1106 and data storage 1108 via network 1104 can be over various network types.
  • Non-limiting example network types can include Fibre Channel, small computer system interface (SCSI), Bluetooth, Ethernet, Wi-fi, Infrared Data Association (IrDA), Local area networks (LAN), Wireless Local area networks (WLAN), wide area networks (WAN) such as the Internet, serial, and universal serial bus (USB).
  • communication between various components of system 200 may take place over hard-wired, cellular, Wi-Fi or Bluetooth networked components or the like.
  • although data storage 1108 is illustrated as separate from system server 1102 , data storage 1108 can also be integrated into system server 1102 , either as a separate component within system server 1102 , or as part of at least one of memory 1110 and disk 1114 .
  • Data storage 1108 may implement an “in-memory” database, in which volatile (e.g., non-disk-based) storage (e.g., Random Access Memory) is used both for cache memory and for storing the full database during operation, and persistent storage (e.g., one or more fixed disks) is used for offline persistency and maintenance of database snapshots.
  • volatile storage may be used as cache memory for storing recently-used data, while persistent storage stores the full database.
  • Data storage 1108 may store metadata regarding the structure, relationships and meaning of data. This information may include data defining the schema of database tables stored within the data.
  • a database table schema may specify the name of the database table, columns of the database table, the data type associated with each column, and other information associated with the database table.
  • Data storage 1108 may also or alternatively support multi-tenancy by providing multiple logical database systems which are programmatically isolated from one another. Moreover, the data may be indexed and/or selectively replicated in an index to allow fast searching and retrieval thereof.
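  • As an illustrative sketch only, an in-memory SQLite table could play the role of the solution store described above (for example, data storage 1108); the schema below, with its table and column names, is an assumption rather than the disclosed design.

```python
import sqlite3

# In-memory stand-in for the solution store (e.g., data storage 1108);
# the table and column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE optimization_runs (
        family_id TEXT,
        algo_id   INTEGER,
        runtime_s REAL,
        quality   REAL
    )
""")
conn.execute(
    "INSERT INTO optimization_runs VALUES (?, ?, ?, ?)",
    ("family-1", 0, 30.0, 98.0),
)
rows = conn.execute(
    "SELECT algo_id, quality FROM optimization_runs"
).fetchall()
```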
  • System server 1102 may also communicate with one or more external devices 1106 such as a keyboard, a pointing device, a display, etc.; one or more devices that enable a user to interact with system server 1102 ; and/or any devices that enable system server 1102 to communicate with one or more other computing devices.
  • one or more embodiments can make use of software running on a general purpose computer or workstation.
  • such an implementation might employ, for example, a processor 1112 , a memory 1110 , and one or more external devices 1106 such as a keyboard, a pointing device, or the like.
  • the term “processor” as used herein is intended to include any processing device, such as, for example, one that includes a CPU (central processing unit) and/or other forms of processing circuitry. Further, the term “processor” may refer to more than one individual processor.
  • memory is intended to include memory associated with a processor or CPU, such as, for example, RAM (random access memory), ROM (read only memory), a fixed memory device, a removable memory device (for example, diskette), a flash memory and the like.
  • input/output interface is intended to contemplate an interface to, for example, one or more mechanisms for inputting data to the processing unit (for example, mouse), and one or more mechanisms for providing results associated with the processing unit (for example, printer).
  • computer software including instructions or code for performing methods as described herein, may be stored in one or more of the associated memory devices (for example, ROM, fixed or removable memory) and, when ready to be utilized, loaded in part or in whole (for example, into RAM) and implemented by a CPU.
  • Such software could include, but is not limited to, firmware, resident software, microcode, and the like.
  • a data processing system suitable for storing and/or executing program code will include at least one processor 1112 coupled directly or indirectly to memory 1110 .
  • the memory elements can include local memory employed during actual implementation of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during implementation.
  • a “server” includes a physical data processing system (for example, system server 1102 as shown in FIG. 11 ) running a server program. It will be understood that such a physical server may or may not include a display and keyboard.
  • One or more embodiments can be at least partially implemented in the context of a cloud or virtual machine environment, although this is exemplary and non-limiting.
  • any of the methods described herein can include an additional step of providing a system comprising distinct software modules embodied on a computer readable storage medium; the modules can include, for example, any or all of the appropriate elements depicted in the block diagrams and/or described herein; by way of example and not limitation, any one, some or all of the modules/blocks and or sub-modules/sub-blocks described.
  • the method steps can then be carried out using the distinct software modules and/or sub-modules of the system, as described above, executing on one or more hardware processors such as 1112 .
  • a computer program product can include a computer-readable storage medium with code adapted to be implemented to carry out one or more method steps described herein, including the provision of the system with the distinct software modules.
  • FIG. 12 illustrates a system 1200 in accordance with one embodiment.
  • Basic hardware includes a data storage 1206 in communication with a machine learning server 1202 and an optimization software server 1216 via network 1204 .
  • each server can independently include, but is not limited to, one or more processors, a memory, a program and a disk, each of which may be coupled by a bus structure.
  • machine learning server 1202 may include, but is not limited to, one or more processors 1210 , a memory 1208 , program 1214 and disk 1212 that may be coupled by a bus structure (not shown).
  • Program 1214 may comprise a set of program modules which can execute functions and/or methods of embodiments of the invention as described herein.
  • optimization software server 1216 may include, but is not limited to, one or more processors 1220 , a memory 1218 , program 1224 and disk 1222 that may be coupled by a bus structure (not shown).
  • Program 1224 may comprise a set of program modules which can execute functions and/or methods of embodiments of the invention as described herein.
  • System 1200 can also include additional features and/or functionality.
  • system 1200 can also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape.
  • additional storage is illustrated in FIG. 12 , in machine learning server 1202 , by memory 1208 and disk 1212 ; and in optimization software server 1216 by memory 1218 and disk 1222 .
  • Storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Memory 1110 , memory 1218 , disk 1222 and disk 1114 are examples of non-transitory computer-readable storage media.
  • Non-transitory computer-readable media also includes, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory and/or other memory technology, Compact Disc Read-Only Memory (CD-ROM), digital versatile discs (DVD), and/or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, and/or any other medium which can be used to store the desired information and which can be accessed by system 1200 . Any such non-transitory computer-readable storage media can be part of system 1200 .
  • Non-limiting example network types can include Fibre Channel, small computer system interface (SCSI), Bluetooth, Ethernet, Wi-fi, Infrared Data Association (IrDA), Local area networks (LAN), Wireless Local area networks (WLAN), wide area networks (WAN) such as the Internet, serial, and universal serial bus (USB).
  • communication between various components of system 1200 may take place over hard-wired, cellular, Wi-Fi or Bluetooth networked components or the like.
  • one or more electronic devices of system 1200 may include cloud-based features, such as cloud-based memory storage. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with machine learning server 1202 and optimization software server 1216 , respectively.
  • although data storage 1206 is illustrated as separate from both machine learning server 1202 and optimization software server 1216 , data storage 1206 can also be integrated into machine learning server 1202 and/or optimization software server 1216 , either as a separate component within each of machine learning server 1202 and/or optimization software server 1216 , or as part of at least one of memory and disk in each server.
  • Data storage 1206 may implement an “in-memory” database, in which volatile (e.g., non-disk-based) storage (e.g., Random Access Memory) is used both for cache memory and for storing the full database during operation, and persistent storage (e.g., one or more fixed disks) is used for offline persistency and maintenance of database snapshots.
  • volatile storage may be used as cache memory for storing recently-used data, while persistent storage stores the full database.
  • Data storage 1206 may store metadata regarding the structure, relationships and meaning of data. This information may include data defining the schema of database tables stored within the data.
  • a database table schema may specify the name of the database table, columns of the database table, the data type associated with each column, and other information associated with the database table.
  • Data storage 1206 may also or alternatively support multi-tenancy by providing multiple logical database systems which are programmatically isolated from one another. Moreover, the data may be indexed and/or selectively replicated in an index to allow fast searching and retrieval thereof.
  • Each server may also communicate with one or more external device(s) 1226 such as a keyboard, a pointing device, a display, etc.; one or more devices that enable a user to interact respectively with machine learning server 1202 and optimization software server 1216 ; and/or any devices that enable either machine learning server 1202 or optimization software server 1216 to communicate with one or more other computing devices.
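The in-memory database behavior described for data storage 1206 (full database held in volatile memory, schema metadata per table, persistent snapshots for offline maintenance) can be sketched minimally as follows. This is a hypothetical illustration, not the patent's implementation: the class and method names (`InMemoryStore`, `create_table`, `snapshot`) and the JSON snapshot format are assumptions for the example.

```python
# Hypothetical sketch of an "in-memory" data store in the style of data
# storage 1206: the full database lives in RAM, schema metadata records each
# table's columns and data types, and snapshots are persisted to disk.
import json
import tempfile
from pathlib import Path


class InMemoryStore:
    def __init__(self, snapshot_dir):
        self.tables = {}   # full database kept in volatile memory
        self.schemas = {}  # metadata: column name -> data type, per table
        self.snapshot_dir = Path(snapshot_dir)

    def create_table(self, name, schema):
        """Register a table along with its schema metadata."""
        self.schemas[name] = schema
        self.tables[name] = []

    def insert(self, name, row):
        """Validate a row against the table's schema, then store it in memory."""
        for col, dtype in self.schemas[name].items():
            if not isinstance(row[col], dtype):
                raise TypeError(f"column {col!r} expects {dtype.__name__}")
        self.tables[name].append(row)

    def snapshot(self):
        """Persist the in-memory database to disk as a maintenance snapshot."""
        path = self.snapshot_dir / "snapshot.json"
        path.write_text(json.dumps(self.tables))
        return path


store = InMemoryStore(tempfile.mkdtemp())
store.create_table("features", {"name": str, "value": float})
store.insert("features", {"name": "num_constraints", "value": 120.0})
print(len(store.tables["features"]))  # 1
```

A production system would add indexing and multi-tenant isolation as the specification notes; the sketch only shows the volatile-storage-plus-snapshot split.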

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
US17/674,410 2021-12-09 2022-02-17 Iterative data-driven configuration of optimization methods and systems Pending US20230186152A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/674,410 US20230186152A1 (en) 2021-12-09 2022-02-17 Iterative data-driven configuration of optimization methods and systems
PCT/CA2022/051806 WO2023102667A1 (fr) 2021-12-09 Iterative data-driven configuration of optimization methods and systems

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163287684P 2021-12-09 2021-12-09
US17/674,410 US20230186152A1 (en) 2021-12-09 2022-02-17 Iterative data-driven configuration of optimization methods and systems

Publications (1)

Publication Number Publication Date
US20230186152A1 true US20230186152A1 (en) 2023-06-15

Family

ID=86694524

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/674,410 Pending US20230186152A1 (en) 2021-12-09 2022-02-17 Iterative data-driven configuration of optimization methods and systems

Country Status (2)

Country Link
US (1) US20230186152A1 (fr)
WO (1) WO2023102667A1 (fr)

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107391569B (zh) * 2017-06-16 2020-09-15 Alibaba Group Holding Limited Data type identification, model training, and risk identification method, apparatus and device
US20210110298A1 (en) * 2019-10-15 2021-04-15 Kinaxis Inc. Interactive machine learning
WO2021072556A1 (fr) * 2019-10-19 2021-04-22 Kinaxis Inc. Systèmes et procédés d'interprétabilité d'apprentissage machine
EP3920067B1 (fr) * 2020-06-01 2024-05-01 Tata Consultancy Services Limited Procédé et système de test d'un modèle d'apprentissage par machine et de recommandation de mesure préventive

Also Published As

Publication number Publication date
WO2023102667A1 (fr) 2023-06-15

Similar Documents

Publication Publication Date Title
US11194809B2 (en) Predicting performance of database queries
Cortez et al. Modern optimization with R
US8364613B1 (en) Hosting predictive models
US11474817B2 (en) Provenance-based reuse of software code
US20170017900A1 (en) System and method for feature generation over arbitrary objects
US9710755B2 (en) System and method for calculating search term probability
US10438133B2 (en) Spend data enrichment and classification
US20110282861A1 (en) Extracting higher-order knowledge from structured data
US20210304278A1 (en) System and method for prioritized product index searching
Ivanov et al. Big data benchmark compendium
Arteaga et al. xlogit: An open-source Python package for GPU-accelerated estimation of Mixed Logit models
US20220058558A1 (en) Accurate and transparent path prediction using process mining
US20210042297A1 (en) Automated feature generation for machine learning application
Paludo Licks et al. SmartIX: A database indexing agent based on reinforcement learning
US11757808B2 (en) Data processing for enterprise application chatbot
Jun et al. Cloud computing based solution to decision making
US10599649B2 (en) Real time query planner statistics with time based changing
Perri et al. Towards a learning-based performance modeling for accelerating deep neural networks
US9324036B1 (en) Framework for calculating grouped optimization algorithms within a distributed data store
Li et al. Research on the application of multimedia entropy method in data mining of retail business
Tran et al. New machine learning model based on the time factor for e-commerce recommendation systems
JP2013065084A (ja) Prediction method and prediction program
Qian et al. Parallel time series modeling-a case study of in-database big data analytics
US20230186152A1 (en) Iterative data-driven configuration of optimization methods and systems
US10769651B2 (en) Estimating prospect lifetime values

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

AS Assignment

Owner name: KINAXIS INC., CANADA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:OUELLET, SEBASTIEN;CHITSAZ, MASOUD;LAFRAMBOISE, JACOB;REEL/FRAME:059999/0803

Effective date: 20220407