US20170286839A1 - Selection of machine learning algorithms - Google Patents

Selection of machine learning algorithms

Info

Publication number
US20170286839A1
Authority
US
United States
Prior art keywords
algorithms
algorithm
benchmark
individual
predictions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/478,097
Inventor
Charles Parker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BigML Inc
Original Assignee
BigML Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BigML Inc filed Critical BigML Inc
Priority to US15/478,097
Assigned to BigML, Inc. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PARKER, CHARLES
Publication of US20170286839A1
Priority to US16/915,551 (US11586953B2)
Priority to US18/099,904 (US20230153654A1)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/04Inference or reasoning models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/22Indexing; Data structures therefor; Storage structures
    • G06F17/30312
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • G06N99/005
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/10Machine learning using kernel methods, e.g. support vector machines [SVM]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • the present disclosure pertains to data processing, and in particular systems and methods for identifying and selecting machine learning algorithms.
  • a panoply of possible ML algorithms may be used to generate these models, such as decision trees, support vector machines, Bayesian networks, and various combinations thereof. Choosing the best or even a “good enough” model to use for a particular application has largely been a matter of intuition, experience, and trial and error. This is partly a result of wide variability in the characteristics of input data sets.
  • the usual process of creating a machine learning model from a dataset includes training a first type of ML model that usually works well for a particular application, checking the performance of the first model (for example, on unseen-during-training holdout data), then trying a second ML model that usually works well to see if the second model performs better than the first model. This process may be repeated until a satisfactory level of performance of an ML model is achieved.
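  • As a minimal sketch of this manual workflow (the specific model types, the holdout split, and the 0.95 "satisfactory" threshold below are illustrative assumptions, not part of the disclosure), using scikit-learn:

```python
# Illustrative only: the usual "try one model type, check it on holdout data,
# then try another" loop described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(X, y, test_size=0.3, random_state=0)

# Model types "known to usually work well" for the application (assumed examples).
candidates = [DecisionTreeClassifier(random_state=0), SVC(), GaussianNB()]

best_model, best_score = None, 0.0
for model in candidates:
    model.fit(X_train, y_train)
    score = model.score(X_hold, y_hold)      # performance on unseen-during-training data
    if score > best_score:
        best_model, best_score = model, score
    if best_score >= 0.95:                   # stop once performance is "satisfactory"
        break

print(best_model, best_score)
```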
  • This somewhat haphazard approach to selecting ML models may cause overhead-related issues because there may be a relatively large number of algorithms to be tested, depending on the possible parameterizations of those algorithms. Therefore, attempting to execute each algorithm on a particular dataset may result in relatively large expenditures of time and/or computational resources.
  • FIG. 1 illustrates a flow diagram for selecting an optimum algorithm for a candidate dataset, in accordance with various embodiments
  • FIG. 2 illustrates an example environment in which embodiments of the present disclosure may be implemented
  • FIG. 3 illustrates an example for generating a user dataset, in accordance with various embodiments
  • FIG. 4 illustrates an example process for generating a candidate dataset, in accordance with various embodiments
  • FIG. 5 illustrates an example process for generating benchmark data, in accordance with various embodiments
  • FIG. 6 illustrates an example process for selecting an optimum algorithm for a candidate dataset, in accordance with various embodiments.
  • FIG. 7 illustrates another example process for selecting an optimum algorithm for a candidate dataset, in accordance with various embodiments.
  • Embodiments discussed herein provide systems and methods for selecting machine learning (ML) algorithms and/or ML models.
  • an ML algorithm is a computer program that learns from an experience (e.g., one or more datasets) with respect to some task and some performance measure.
  • An ML model may be any object created after an ML algorithm is trained with one or more training datasets. After training, an ML model may be used to make predictions on new datasets.
  • the various embodiments are discussed infra with regard to selecting ML algorithms. However, the embodiments discussed herein are also applicable to selecting models as well as selecting algorithms. Accordingly, the term “algorithm” as discussed herein may be interchangeable with the term “model” even though these terms refer to different concepts.
  • a database of historical results of application of one or more ML algorithms/models on previous datasets may be used for selecting a next candidate model for an input dataset.
  • the selection of a next candidate model may be based on a set of previously attempted parameters or criteria.
  • a sequence of candidates may be created by attempting actions that are likely to work well according to various parameters/criteria, and attempting actions that are uncorrelated to one another. In this way, new models that may be desirable may be discovered when the same or similar models were not previously desirable when applied to similar datasets.
  • Conventional techniques for dealing with ML model selection include “grid searches,” whereby essentially all possible algorithms are tried to the degree that this is possible.
  • Another conventional approach includes “random searches,” where random algorithms are tried for as much time as is allowed, and the best algorithm among those tried is returned.
  • Other conventional approaches focus on trying to find the best algorithm given only part of the information above.
  • a “meta-learning” approach may use historical data and statistical analysis of the current data to try to find a best algorithm given the data. While such an approach provides a good idea about the first model to try, a “second best” model suggested by such techniques will very likely be something conceptually close to the first model, not taking into account that this part of the algorithm space has already been explored to a degree.
  • Another related approach includes Bayesian parameter optimization.
  • various embodiments include using historical performance data (also referred to as “benchmarks”, “benchmark datasets”, and the like) to inform the search for a best or optimum ML algorithm and/or ML model.
  • a computing system may generate a set of algorithms that have already been tried on the current dataset based on a given dataset or a set of datasets, a variety of ML algorithms or models, and historical data indicating various performance metrics of the algorithms/models on various benchmark datasets.
  • the computing system may not generate the algorithms themselves; rather, the computing system may generate a list (or set) of the ML models/algorithms that have been applied to the dataset.
  • a historical database (also referred to as a “benchmark database” and the like) including the historical performance data may be created “off-line” before learning processes begin.
  • the historical performance data may include any data relating to performance of a particular ML algorithm/model.
  • the historical performance data may indicate how well an ML algorithm/model fits or predicts a certain dataset.
  • historical performance data may take into account required processor resources, memory utilization, input/output (I/O) operations, network resources, and the like.
  • the computing system may generate and evaluate multiple ML models/algorithms and may choose an optimum ML algorithm/model, which may be an ML algorithm/model that fits desired parameters/criteria (e.g., predicting a desired number of data points, etc.) better than other ML algorithms/models, executes or otherwise performs faster than other ML algorithms/models, and/or uses the least amount of resources compared to resources used by other ML algorithms/models.
  • Embodiments provide that a submodular function may be utilized by the computing system to select a next candidate algorithm/model to test against a candidate dataset.
  • a submodular function (also referred to as a “submodular set function” and the like) may be a set function whose value, informally, has the property that the difference in the incremental value of the function, that a single element makes when added to an input set, decreases as the size of the input set increases.
  • Submodular functions may have a natural diminishing returns property, which may make them suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences), electrical networks, and the like.
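  • Formally (a standard statement of the diminishing-returns property, not quoted from the disclosure): a set function f is submodular if, for all sets A ⊆ B and any element x ∉ B, f(A ∪ {x}) − f(A) ≥ f(B ∪ {x}) − f(B); that is, adding x to the smaller set A yields at least as much marginal gain as adding it to the larger set B.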
  • the computing system may implement the submodular function to select an ML algorithm/model from a set of m algorithms/models that worked best or was optimal for one or more datasets of a set of n datasets, where m and n are numbers.
  • the term “best algorithm/model”, “optimum model”, and the like may refer to an ML algorithm/model that fulfills the predetermined criteria/parameters and/or is more economical/cost-effective (in terms of speed, computer resource usage, and the like) than other tested algorithms/models.
  • one of the m algorithms/models may be the best/optimum algorithm/model for the largest number of datasets in the set of n datasets, which in various embodiments may be applied to each of the n datasets.
  • another algorithm/model of the set of m algorithms/models, which is the best for the greatest number of the remaining n datasets, may then be discovered. This process may be repeated until there are no datasets left in the set of n datasets, as sketched below.
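  • A minimal sketch of this greedy, coverage-style ordering follows; the best_for table mapping each benchmark dataset to the algorithm(s) that performed best on it is an illustrative assumption:

```python
# Greedy ordering: repeatedly pick the algorithm that is best for the most
# not-yet-covered benchmark datasets, until every dataset is covered.
best_for = {                          # illustrative benchmark results, not real data
    "dataset_1": {"random_forest"},
    "dataset_2": {"random_forest", "svm"},
    "dataset_3": {"svm"},
    "dataset_4": {"logistic_regression"},
}

remaining = set(best_for)             # datasets not yet covered by a chosen algorithm
ordering = []
while remaining:
    counts = {}
    for dataset in remaining:
        for algorithm in best_for[dataset]:
            counts[algorithm] = counts.get(algorithm, 0) + 1
    chosen = max(counts, key=counts.get)   # best for the most remaining datasets
    ordering.append(chosen)
    remaining = {d for d in remaining if chosen not in best_for[d]}

print(ordering)  # e.g. ['random_forest', 'svm', 'logistic_regression']
```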
  • FIG. 1 illustrates a flow diagram for practicing various example embodiments.
  • operations 105 - 130 may be performed before learning or training begins and operations 135 - 145 may be performed as part of a learning or training process.
  • operations 105 - 130 may be an example “off-line” process to build a historical database (also referred to herein as a “benchmark database”).
  • a set of m number of modeling algorithms 280 (also referred to as “algorithms 280 ” and the like) and a benchmark collection of n number of datasets 282 may be collected or otherwise made available.
  • the size of m and/or n may be quite large because performance or computing costs are not important at this stage of process 100 .
  • Operation 115 may include running/executing each possible algorithm 280 on each one of the benchmark datasets 282 to obtain predictions 285 .
  • data from a benchmark dataset 282 may be used as an input to an algorithm 280 , and the resulting output may be a prediction 285 .
  • operation 115 may include generating one or more ML models to be used to obtain the predictions 285 .
  • the act of computing predictions 285 on a selected one of the datasets 282 using a selected one of the algorithms 280 may be referred to as a “test” and the like. In some cases, fewer tests or selected ones could be run rather than testing all of the benchmark datasets 282 using all of the algorithms 280 .
  • the process 100 may include running/executing a first algorithm 280 (e.g., algorithm 280 - 1 ) on one or more of the benchmark datasets 282 until one or more criteria or conditions are satisfied, recording/storing various data items associated with the one or more criteria or conditions, and then repeating this process for each algorithm 280 in the set.
  • the number of algorithms 280 and the number of benchmark datasets 282 need not be equal.
  • at most n × m tests may be run.
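  • A compact sketch of this off-line benchmarking stage (the algorithms, the generated benchmark datasets, and the 50/50 holdout split are illustrative assumptions):

```python
# Run each candidate algorithm on each benchmark dataset (at most n x m tests),
# holding out part of each dataset so the resulting predictions can be evaluated.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

algorithms = {"decision_tree": DecisionTreeClassifier(random_state=0),
              "naive_bayes": GaussianNB()}
benchmark_datasets = {f"benchmark_{i}": make_classification(n_samples=200, random_state=i)
                      for i in range(3)}

predictions = {}                # (algorithm, dataset) -> (predicted labels, true labels)
for algo_name, algo in algorithms.items():
    for ds_name, (X, y) in benchmark_datasets.items():
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
        algo.fit(X_tr, y_tr)
        predictions[(algo_name, ds_name)] = (algo.predict(X_te), y_te)
```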
  • the resulting predictions 285 may then be evaluated in various ways to obtain results 287 .
  • predictions 285 may be compared to a holdout dataset.
  • a portion of the predictions 285 may be “withheld” as the holdout dataset, and one or more tests may be performed on the holdout dataset to determine how well the selected algorithm 280 forecasts or predicts the remaining portion of the predictions 285 (within a certain margin of error).
  • the algorithm 280 may be ranked or scored according to a number of data points in the remaining portion that were accurately forecasted/predicted by the algorithm 280 (within a certain margin of error).
  • the evaluation may include performing a two-fold cross-validation procedure on the predictions 285 , such as by randomly assigning data points in the predictions 285 to two separate data sets (e.g., set 1 and set 2) of equal size.
  • ML training may be performed on set 1, and testing may be performed on set 2, followed by ML training on set 2 and testing on set 1.
  • the algorithm 280 may be ranked or scored according to a number of data points in the set 1 that were accurately forecasted/predicted by set 2, and vice versa (within a certain margin of error). Other validation or evaluation procedures may be used.
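  • A sketch of such a two-fold procedure (the dataset and the model are illustrative stand-ins):

```python
# Two-fold cross-validation: split into two equal halves, train on one half and
# test on the other, then swap, and average the two accuracy scores.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
rng = np.random.default_rng(0)
idx = rng.permutation(len(X))
half = len(X) // 2
fold_1, fold_2 = idx[:half], idx[half:]

scores = []
for train_idx, test_idx in [(fold_1, fold_2), (fold_2, fold_1)]:
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))  # fraction predicted correctly

two_fold_score = sum(scores) / 2
```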
  • the evaluation results 287 may be stored in the benchmark database 210 (see FIG. 2 ) along with related data, including identifiers of the corresponding algorithms 280 and benchmark datasets 282 .
  • the benchmark database 210 may be the historical performance database since the benchmarks 287 stored therein are based on tested algorithms 280 .
  • the evaluation of predictions 285 may be performed at any convenient time and/or place where appropriate computing resources are made available. Such resources may be provisioned remotely or “in the cloud.”
  • the results 287 of the benchmark evaluation stored in the benchmark database 210 may be used to create a submodular function 290 .
  • the submodular function 290 may describe the likelihood that a best possible or optimum algorithm 280 has already been tried/tested given a current set of experiments.
  • the submodular function 290 may be optimized using known techniques.
  • Operation 135 may be the beginning of the learning or training process.
  • a first algorithm 280 (e.g., algorithm 280 - 1 ) may be selected and applied to the candidate dataset 275 .
  • the candidate dataset 275 may be a dataset for which an optimum algorithm 280 is to be discovered.
  • One goal of the example embodiments may include identifying the optimum algorithm 280 using as few tests/evaluations as possible in order to save time, effort, and computational and/or network resources.
  • Another goal of the example embodiments may include identifying a subset of algorithms 280 that is likely to include the best or optimum algorithm 280 for the candidate dataset 275 regardless of the data, its format or type, and/or other properties of the candidate dataset 275 .
  • the set or sequence of algorithms 280 that are applied against the candidate dataset 275 may be a relatively small subset of the whole set of algorithms 280 (e.g., the set of algorithms 280 identified at operation 105 ) used to generate benchmark data (e.g., the results obtained at operation 125 ).
  • Application of the first algorithm 280 - 1 may include using data of the candidate dataset 275 as an input to the first algorithm 280 - 1 .
  • the results 287 of applying the first algorithm 280 - 1 to the candidate dataset 275 may be evaluated, and at operation 145 , the tested (selected) algorithm 280 - 1 may be added to a list of algorithms 292 tested on the candidate dataset 275 (e.g., as algorithm i in FIG. 1 ).
  • Operation 135 may also include selecting a next algorithm (e.g., algorithm 280 - 2 ) according to the submodular function 290 , which was created at operation 125 based on the benchmark database 210 .
  • the next algorithm 280 - 2 may be tested on the candidate dataset 275 , results 287 of applying the algorithm 280 - 2 to the candidate dataset 275 may be evaluated, and the algorithm 280 - 2 may be added to the list of tested algorithms 292 at operation 145 . This procedure is repeated until all algorithms 280 (or a selected subset of the algorithms 280 ) are tested on the candidate dataset 275 , and/or until results that are deemed satisfactory are achieved, thereby populating the list of tested algorithms 292 with algorithms i-j. In this way, process 100 may be used to identify or select a candidate algorithm 280 whose performance is likely to be superior to that of the already-tested algorithms.
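  • A sketch of this on-line loop, using a simple coverage-style objective as a stand-in for the submodular function 290 (the best_on summary, the evaluate placeholder, and the 0.90 stopping score are illustrative assumptions):

```python
# Greedy on-line selection: pick the untried algorithm with the largest marginal
# gain of a coverage-style (submodular) objective, test it on the candidate
# dataset, add it to the tested list, and stop once results look satisfactory.
def coverage(tried, best_on):
    """Number of benchmark datasets whose best algorithm is already in `tried`."""
    return sum(1 for algos in best_on.values() if algos & tried)

def select_next(tried, candidates, best_on):
    untried = [a for a in candidates if a not in tried]
    return max(untried,
               key=lambda a: coverage(tried | {a}, best_on) - coverage(tried, best_on))

def evaluate(algorithm, dataset):
    # Placeholder for training `algorithm` on `dataset` and scoring it (e.g. by holdout).
    return {"random_forest": 0.91, "svm": 0.88, "naive_bayes": 0.80}[algorithm]

best_on = {"bench_1": {"random_forest"}, "bench_2": {"svm"}, "bench_3": {"random_forest"}}
candidates = ["random_forest", "svm", "naive_bayes"]
candidate_dataset = "user_candidate_dataset"     # stand-in for candidate dataset 275

tried, results = set(), {}
while len(tried) < len(candidates):
    algorithm = select_next(tried, candidates, best_on)
    results[algorithm] = evaluate(algorithm, candidate_dataset)
    tried.add(algorithm)                         # list of tested algorithms 292
    if results[algorithm] >= 0.90:               # results deemed satisfactory
        break

best = max(results, key=results.get)
```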
  • FIG. 2 illustrates an example environment 200 in which various embodiments of the present disclosure may be implemented.
  • a user system 205 may be coupled to a computing system 206 via a network 219 .
  • the user system 205 may include a processor system 205 A, a memory system 205 B, an input system 205 C, an output system 205 D, and a communications system 205 E.
  • the computing system 206 may include the processor system 207 (also referred to as a “server system 207 ” and the like), the network interface 208 , benchmark database 210 , benchmark dataset database 220 , modeling algorithms database 230 (also referred to as a “model algorithm database 230 ”, “algorithm database 230 ”, “modeling database 230 ”, and the like), and the data storage system 240 .
  • the memory system 205 B may include an operating system (OS), one or more databases (not shown), and one or more applications (not shown).
  • the processor system 205 A can include any suitable combination of one or more processors, such as one or more central processing units (CPUs) including single-core or multi-core processors, one or more graphics processing units (GPUs), one or more field-programmable gate arrays (FPGAs), or any other electronic circuitry capable of executing program code and/or software modules to perform arithmetic, logical, and/or input/output operations.
  • the memory system 205 B can include any suitable combination of one or more memory devices that may be embodied as any type of volatile or non-volatile memory or data storage.
  • Memory system 205 B may generally include volatile memory (e.g., random access memory (RAM), synchronous dynamic RAM (SDRAM) devices, double-data rate synchronous dynamic RAM (DDR SDRAM) device, flash memory, and the like), non-volatile memory (e.g., read only memory (ROM), solid state storage (SSS), non-volatile RAM (NVRAM), and the like), and/or other like storage media capable of storing and recording data.
  • the memory system 205 B may be configured to store an operating system (OS) and program code for one or more software components or application(s).
  • Instructions, program code and/or software components may be loaded into the memory system 205 B by one or more network elements (not shown) via communications system 205 E using wired or wireless communications interfaces.
  • the program code and/or software components may be loaded into the memory system 205 B during manufacture of the user system 205 , or loaded from a separate computer readable storage medium into the memory system 205 B using a drive mechanism (not shown), such as a memory card, memory stick, removable flash drive, removable sim card, a secure digital (SD) card, and/or other like computer readable storage medium (not shown).
  • the input system 205 C may include one or more interfaces, components or devices designed to enable interaction with the user system 205 .
  • the output system 205 D can include any suitable combination of output devices, such as one or more display devices, printers, or interfaces to networks.
  • the input system 205 C may include user interfaces and devices such as a physical keyboard or keypad, mice, trackballs, a touchpad, touchscreen, a speaker, a microphone, a fingerprint or handprint scanning device, etc.
  • the input system 205 C and/or the output system 205 D may include peripheral component interfaces, such as a non-volatile memory port, communications ports (e.g., a universal serial bus (USB) port, a FireWire port, a Serial Digital Interface (SDI) port, an IEEE 1284 port, etc.), an audio jack, a power supply interface, and the like.
  • the input system 205 C and/or the output system 205 D may include, memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the I/O operations.
  • the communications system 205 E may include circuitry for communicating with a wireless network or wired network. Communications system 205 E may be used to establish a link 216 (also referred to as “channel 216 ,” ‘networking layer tunnel 216 ,” “internet layer tunnel 216 ”, and the like) through which the user system 205 may communicate with the computing system 206 .
  • the communications system 205 E may enable the user system 205 to communicate with computing system 206 using Transmission Control Protocol and Internet Protocol (TCP/IP) and, at a higher network level, other common Internet protocols to communicate, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), etc.
  • the communications system 205 E may include one or more processors (e.g., baseband processors, etc.) that are dedicated to a particular wireless communication protocol (e.g., Wi-Fi and/or IEEE 802.11 protocols), a cellular communication protocol (e.g., Long Term Evolution (LTE) and the like), a wireless personal area network (WPAN) protocol (e.g., IEEE 802.15.4-802.15.5 protocols, Bluetooth or Bluetooth low energy (BLE), etc.), and/or a wired communication protocol (e.g., Ethernet, Fiber Distributed Data Interface (FDDI), Point-to-Point (PPP), etc.).
  • the communications system 205 E may also include hardware devices that enable communication with wireless/wired networks and/or other user systems 205 using modulated electromagnetic radiation through a solid or non-solid medium.
  • Such hardware devices may include switches, filters, amplifiers, antenna elements, and the like to facilitate the communications over the air or through a wire by generating or otherwise producing radio waves to transmit data to one or more other devices, and converting received signals into usable information, such as digital data, which may be provided to one or more other components of user system 205 .
  • the user system 205 using the communications system 205 E may establish link 216 with network interface 208 of the computing system 206 .
  • a user system 205 may obtain and upload raw data 215 to the computing system 206 via a link 216 for processing.
  • the processor system 205 A may implement an application (e.g., a locally stored application, a web application, a native application, and the like) to generate and send message 214 to the computing system 206 .
  • the message 214 may be a request to convert the raw data 215 into one or more user datasets and/or candidate datasets 275 , and in such embodiments, the message 214 may include the raw data 215 to be converted.
  • the message 214 may be a request to store the raw data 215 in one or more data elements, records, and/or fields in one or more database object(s) of user database 235 .
  • the message 214 may include one or more targets indicating fields or records to be searched in user database 235 for data to be converted into the user datasets and/or candidate datasets 275 .
  • the message 214 may also include one or more other options, conditions, filters, etc. (e.g., sort parameters, maximum result size, and the like) to be used to obtain data from user database 235 .
  • the message 214 may be an HTTP message, where the raw data 215 , credentials, and/or other pertinent information may be located in the header or body portion of the HTTP message.
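  • For illustration only, a client-side sketch of such an HTTP message using the requests library; the endpoint URL, the JSON field names, and the bearer-token credential are hypothetical, not part of the disclosure:

```python
# Hypothetical upload of raw data 215 for conversion into a candidate dataset.
import requests

raw_data = [{"age": 34, "income": 52000, "churned": False},
            {"age": 51, "income": 87000, "churned": True}]

response = requests.post(
    "https://example.com/api/datasets",                 # hypothetical endpoint
    json={"action": "convert", "raw_data": raw_data},   # raw data in the HTTP body
    headers={"Authorization": "Bearer <token>"},        # credentials in the header
    timeout=30,
)
response.raise_for_status()
```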
  • Other message types may be used to convey the message 214 , such as a Session Initiation Protocol (SIP) message, or any message used in the Internet protocols discussed previously.
  • the applications that enable communication with the computing system 206 may utilize any suitable query language to query, store, and obtain information in/from user database 235 , such as structured query language (SQL), object query language (OQL), object search language (OSL), and/or other like query languages.
  • these applications may provide a graphical user interface (GUI) that displays a visualization of the raw data 215 and/or data in user database 235 .
  • the GUI may include various graphical control elements, and may convert selections of the graphical control elements into suitable requests using the aforementioned query languages.
  • Such applications may be designed to run on a specific platform, such as when the user system 205 is implemented in a mobile device, such as a smartphone, tablet computer, and the like.
  • Such applications may also enable the user system 205 to provide authentication credentials (e.g., user identifier, password, personal identification number (PIN), biometric data, etc.) to the computing system 206 so that the computing system 206 may authenticate the identity of a user of the user system 205 .
  • Suitable implementations for the OS, databases, and applications, as well as the general functionality of the user system 205 are known or commercially available, and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
  • Network 219 may be any network that allows computers to exchange data.
  • Network 219 may include one or more network elements (not shown) capable of physically or logically connecting computers.
  • the network 219 may include any appropriate network, including an intranet, the Internet, a cellular network, a wireless network, a local area network (LAN), a wide area network (WAN), a personal or enterprise network, a point-to-point network, a star network, a token ring network, a hub network, or any other such network or combination thereof.
  • Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail. Communication over the network 219 may be enabled by wired or wireless connections, and combinations thereof.
  • the computing system 206 may include data storage system 240 , the processor system 207 , a network interface 208 , and the various databases 210 - 235 .
  • the processor system 207 may be implemented to include any suitable combination of one or more processors, such as one or more central processing units (CPUs) including single-core or multi-core processors, one or more graphics processing units (GPUs), one or more field-programmable gate arrays (FPGAs), or any other electronic circuitry capable of executing program code and/or software modules to perform arithmetic, logical, and/or input/output operations.
  • the processor system 207 may include one or more modules or engines that perform one or more functions as discussed herein, such as a source engine 241 , a dataset engine 243 , a model engine 245 , or a prediction engine 247 .
  • program code for the various engines/modules may be stored in the data storage system 240 and executed by the processor system 207 .
  • each of the engines/modules may be embodied as an FPGA or as some other dedicated processor circuitry.
  • the processor system 207 may be implemented as a server system 207 comprising one or more servers (each with its own processor(s), memory device(s), I/O interfaces, and network interfaces, and configured with suitable program code), where each server in the server system 207 carries out one or more functions as discussed herein.
  • Network interface 208 may be embodied as any type of communication circuit(s), device(s), hardware component(s) or collection thereof, capable of enabling communications between the computing system 206 and the user systems 205 via one or more communication networks (e.g., network 219 ).
  • network interface 208 may include one or more communication interfaces (e.g., ports) and one or more dedicated processors and/or FPGAs to communicate using one or more wired network communications protocols, such as Ethernet, token ring, Fiber Distributed Data Interface (FDDI), Point-to-Point Protocol (PPP), network sockets, and/or other like network communications protocols.
  • the communication interfaces may be configured to communicatively couple the computing system 206 to any number of other nodes, interconnect devices, networks (e.g., physical or logical networks), and/or external computer devices.
  • each communication interface may be associated with a network socket address (e.g., a combination of an IP address and port number) or other like address that allows other devices to connect to the computer system 206 .
  • the network interface 208 may also include one or more virtual network interfaces configured to operate with the one or more applications of the computer system 206 .
  • the network interface 208 may be implemented as a set of application servers (also referred to as “app servers”), where each app server is configured to communicate with one or more components of the computing system 206 , and to serve requests received from the user systems 205 .
  • the network interface 208 may implement a user interface and/or application programming interface (API) to allow the user systems 205 to interact with the computing system 206 .
  • the data storage system 240 may be computer-readable media having instructions stored thereon, which are executable by the processor system 207 .
  • Data storage system 240 may include program code for flow diagram 100 , program code for processes of flow diagram 100 , and processes 300 - 500 discussed with regard to FIGS. 3-5 (not shown by FIG. 2 ), and program code used for implementing the various functions of the computing system 206 , such as an operating system and one or more other applications.
  • In some embodiments, the databases 210 - 235 may be stored within the data storage system 240 , while in other embodiments, the databases 210 - 235 may be implemented in or by one or more separate/remote data storage systems.
  • the data storage system 240 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, optical drives, removable disk drives (e.g., optical discs, digital versatile disks (DVD), compact disks (CD), etc.), solid-state drives, and/or any of the other types of memory devices discussed with regard to memory system 205 B.
  • the various instructions, or portions thereof, may be loaded into the data storage system 240 from removable/separate storage media using a drive mechanism, from remote computing system via network interface 208 (e.g., over the Internet or network 219 using any of the aforementioned communication protocols).
  • the user system 205 may obtain the raw data 215 , compile or otherwise include the raw data 215 in a message 214 , and send the message 214 to the computing system 206 via a link 216 .
  • the raw data 215 may be converted into a candidate dataset 275 by the source server 241 and/or the dataset server 243 .
  • the source server 241 and/or the dataset server 243 may store the candidate dataset 275 in the user database 235 or some other data storage device for later retrieval by the model server 245 and/or the prediction server 247 .
  • the benchmark database 210 may be created as described above and may be available to one or more processors of the computing system 206 .
  • the benchmark database 210 may include a set of benchmarks 284 , which may have been created using the stored set of benchmark datasets 282 stored in benchmark dataset database 220 .
  • a set of modeling algorithms 280 (also referred to as a “set of algorithms 280 ” and the like) may be stored in the modeling database 230 , where the set of algorithms 280 includes m number of algorithms (where m is a number).
  • the m number of algorithms may include 1 to m algorithms 280 , where a first algorithm may be referred to as “algorithm 280 - 1 ”, a second algorithm may be referred to as “algorithm 280 - 2 ”, and so forth until an mth algorithm may be referred to as “algorithm 280 - m ” (see e.g., FIG. 1 ).
  • Each of the m number of algorithms 280 may be applied to the n number of benchmark datasets 282 stored in the benchmark dataset database 220 to produce the benchmarks 284 .
  • the benchmarks 284 (also referred to as “benchmark data 284 ” and the like) may provide a basis for determining a submodular function 290 .
  • the submodular function 290 may be used to choose a subset of the algorithms 280 to be applied against a candidate dataset 275 .
  • One benefit of using the submodular function 290 is that the submodular function 290 can suggest a set of “next” algorithms 280 that should be tried against the candidate dataset 275 based on a set of algorithms 280 that were already tested against the benchmark datasets 282 .
  • the set of “next” algorithms 280 indicated by the submodular function 290 will likely include an optimum algorithm 280 that is better than other algorithms 280 for making predictions for a given candidate dataset 275 .
  • the set of “next” algorithms 280 may be a relatively small subset of the stored algorithms 280 in the modeling database 230 , and thus, testing the algorithms 280 in the set of “next” algorithms 280 may be less time consuming and computationally intensive than using a randomized approach to choosing algorithms for a candidate dataset.
  • a user dataset 270 may be provided to the computing system 206 as noted previously (e.g., from raw data 215 and/or from user database 235 ), and the processor or server system 207 may carry out the operations discussed herein, such as: (1) obtaining, by the dataset server 243 , the user dataset 270 from the user database 235 ; (2) converting, by the dataset server 243 , the user dataset 270 into a candidate dataset 275 .
  • converting the user dataset 270 into a candidate dataset 275 may include normalizing data in the user dataset 270 , or performing other formatting operations on the user dataset 270 ; (3) selecting, by the model server 245 , a first algorithm 280 - 1 from modeling database 230 ; (4) computing or otherwise determining predictions 285 by applying the first algorithm 280 - 1 to the candidate dataset 275 at the model server 245 and/or the prediction server 247 ; (5) evaluating, by the prediction server 247 , the predictions 285 to obtain results 287 of applying the first algorithm 280 - 1 to the candidate dataset 275 ; (6) controlling storage, by the prediction server 247 , of the predictions 285 and the results 287 in the data storage system 240 (or in some other database or data storage device); (7) adding, by the model server 245 and/or the prediction server 247 , the selected first algorithm 280 - 1 to a list of algorithms 292 that are applied to the user dataset 270 .
  • the list of algorithms 292 may be stored in/by the data storage system 240 ; (8) creating/generating, by the model server 245 and/or the prediction server 247 , a submodular function 290 based on benchmark data stored in the benchmark database 210 , and controlling storage of the submodular function 290 in/by the data storage system 240 ; (9) applying, by the model server 245 and/or the prediction server 247 , the submodular function 290 to select a second algorithm 280 - 2 of the algorithms 280 from the modeling database 230 (not shown by FIG. 2 ).
  • the resulting (best) algorithm may be delivered or indicated to the user system 205 via the network 219 .
  • the best algorithm may be used to make predictions on the prediction server 247 , which may interact with the user system 205 .
  • The arrangement shown by FIG. 2 is merely illustrative, and in various other embodiments, some or all of the illustrated databases may be located elsewhere and accessible via the network 219 . In addition, some or all of the databases illustrated by FIG. 2 may be located at a remote system 250 (labeled “additional resources 250 ” in FIG. 2 ). Furthermore, some or all of the servers illustrated (dataset, model, etc.) may be implemented as software in one or more computers, again located in computing system 206 or elsewhere, such as provided by a cloud computing service and the like.
  • FIGS. 3-7 illustrate processes 300 - 700 , respectively, in accordance with various example embodiments.
  • the operations of processes 300 - 700 are described as being performed by entities discussed with regard to FIG. 2 .
  • process 300 is described as being performed by the source engine 241
  • process 400 is described as being performed by the dataset engine 243
  • process 500 is described as being performed by the model engine 245
  • processes 600 - 700 are described as being performed by the prediction engine 247 .
  • other computing devices may operate the processes 300 - 700 in a multitude of implementations, arrangements, and/or environments.
  • the computing system 206 may include program code (stored in data storage system 240 ), which when executed by the processor system 207 , causes the computing system 206 to perform the various operations of processes 300 - 700 .
  • the processes 300 - 700 may be performed by respective server systems as discussed previously. While particular examples and orders of operations are illustrated in FIGS. 3-7 , in various embodiments, these operations may be re-ordered, separated into additional operations, combined, or omitted altogether.
  • FIG. 3 illustrates a process 300 for generating a user dataset in accordance with various embodiments.
  • the source engine 241 may obtain raw data 215 from a user system 205 or various other sources.
  • the source engine 241 may convert the raw data 215 into a user dataset 270 .
  • the source engine 241 may store the user dataset 270 in the user database 235 , or may provide the user dataset 270 to the dataset engine 243 .
  • Process 300 may end or repeat as necessary after the source engine 241 performs operation 315 .
  • the raw data 215 may be extracted from one or more messages 214 at operation 305 .
  • the source engine 241 may normalize (e.g., index, partition, augment, canonicalize, etc.) the raw data 215 to convert the raw data 215 into the user dataset 270 .
  • the source engine 241 may obtain Extract-Load-Transform (ELT) data or Extract-Transform-Load (ETL) data, which may be raw data 215 extracted from various sources and normalized for analysis and other transformations.
  • the raw data 215 may be loaded into the user database 235 and/or some other data store (not shown by FIG. 2 ) and stored as key-value pairs, which may allow the data to be stored in a mostly native form without requiring substantial normalization or formatting.
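  • A toy sketch of this key-value approach, keeping each raw record in its native (JSON-serialized) form under a generated key; the key scheme and the in-memory store are illustrative assumptions:

```python
# Store raw records as key-value pairs with minimal normalization or formatting.
import json
import uuid

key_value_store = {}          # stand-in for the user database / other data store
raw_records = [{"sensor": "A1", "reading": 0.72},
               {"sensor": "B4", "reading": 0.31}]

for record in raw_records:
    key = f"raw:{uuid.uuid4()}"              # generated key; no schema imposed on value
    key_value_store[key] = json.dumps(record)
```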
  • Other methods for normalizing and/or storing the user dataset 270 may be used.
  • FIG. 4 illustrates a process 400 for generating a candidate dataset in accordance with various embodiments.
  • the dataset engine 243 may obtain the user dataset 270 .
  • the dataset engine 243 may convert the user dataset 270 into a candidate dataset 275 .
  • the dataset engine 243 may store the candidate dataset 275 , or provide the candidate dataset 275 to the model engine 245 and/or the prediction engine 247 .
  • Process 400 may end or repeat as necessary after the dataset engine 243 performs operation 330 .
  • the dataset engine 243 may normalize (e.g., index, partition, augment, canonicalize, etc.) the user dataset 270 to convert the user dataset 270 into the candidate dataset 275 .
  • This normalization procedure may be the same or different than the normalization procedure discussed with regard to FIG. 3 .
  • the dataset engine 243 may store the candidate dataset 275 in the user database 235 and/or some other data store (not shown by FIG. 2 ) for later retrieval by the model engine 245 or the prediction engine 247 .
  • FIG. 5 illustrates a process 500 for generating benchmark data 284 (also referred to as “historical performance data” and the like), in accordance with various embodiments.
  • the model engine 245 may identify a set of algorithms, and at operation 510 , the model engine 245 may identify a set of datasets.
  • the model engine 245 may compute predictions 285 by applying individual algorithms 280 of the set of algorithms to individual datasets 282 of the set of datasets.
  • the model engine 245 may use data of an individual dataset 282 as an input to an individual algorithm 280 .
  • the output of the individual algorithm 280 may be the predictions 285 .
  • the model engine 245 may generate one or more models to be used for obtaining the predictions 285 , where the datasets may be input to the one or more models and a resulting output may be the predictions 285 .
  • the model engine 245 may evaluate the predictions 285 to obtain results 287 .
  • the results 287 may be results of performing a holdout procedure, a cross-validation procedure, or some other like ML testing procedure that is used to assess the strength and/or utility of a predictive algorithm or model.
  • the results 287 may include other metrics or metadata pertaining to performance of the applied algorithm 280 , such as speed (e.g., time from initiating execution to obtaining an output) of one or more tasks, computing resource usage, etc.
  • the results 287 may be stored in a benchmark database 210 .
  • the model engine 245 may determine if there are any remaining algorithms of the set of algorithms that need to be evaluated. If at operation 530 the model engine 245 determines that there is an individual algorithm of the set of algorithms that needs to be evaluated, the model engine 245 may proceed back to operation 515 to compute predictions using the next individual algorithm. If at operation 530 the model engine 245 determines that there are no individual algorithms of the set of algorithms that need to be evaluated, the model engine 245 may proceed to operation 535 to end or proceed to process 600 (shown and described with regard to FIG. 6 ) to evaluate algorithms to be used for a candidate dataset 275 .
  • FIG. 6 illustrates a process 600 for selecting an optimum algorithm 280 for a candidate dataset 275 , in accordance with various embodiments.
  • the prediction engine 247 may obtain the candidate dataset 275 .
  • the prediction engine 247 may identify an initial algorithm 280 - 1 and apply the initial algorithm 280 - 1 to the candidate dataset 275 to obtain predictions 285 - 1 (not shown by FIG. 2 ) for the initial algorithm 280 - 1 .
  • operations 605 and 610 may be performed by the model engine 245 , and in such embodiments, the predictions 285 - 1 may be passed to the prediction engine 247 or may be stored for later retrieval by the prediction engine 247 .
  • the prediction engine 247 may evaluate the predictions 285 - 1 to obtain results 287 - 1 (not shown by FIG. 2 ) of the initial algorithm 280 - 1 .
  • the prediction engine 247 may store the results 287 - 1 and add the initial algorithm 280 - 1 to the algorithms list 292 .
  • the prediction engine 247 may generate a submodular function 290 based on benchmark datasets 282 stored in the benchmark dataset database 220 .
  • the prediction engine 247 may identify a next algorithm 280 - 2 and may apply the next algorithm 280 - 2 to the candidate dataset 275 to obtain predictions 285 - 2 of the next algorithm 280 - 2 .
  • the prediction engine 247 may evaluate the predictions 285 - 2 to obtain results 287 - 2 of the next algorithm 280 - 2 , and at operation 540 , the prediction engine 247 may store the results 287 - 2 and add the next algorithm 280 - 2 to the algorithms list 292 .
  • the prediction engine 247 may determine whether there are any remaining algorithms 280 to be evaluated against the candidate dataset 275 . Whether there are any remaining algorithms 280 to be evaluated may be based on the size of the subset of algorithms 280 to be evaluated. In some embodiments, the number of algorithms 280 to be evaluated may be a user input, which may be based on the amount of time and/or computational resources that the user is willing and/or able to devote to the evaluation process. In other embodiments, the size of the subset of algorithms 280 to be evaluated may be a predetermined number, based on the type of data in the candidate dataset 275 , or based on any other criteria.
  • an evaluation of an algorithm 280 may take an entire day to complete, and in such scenarios, limiting the number of evaluations can save a considerable amount of time and resources while still providing the user with confidence that the best/optimum algorithm 280 has been discovered. Even where a user has unlimited time and resources, in cases where large amounts of data and/or hundreds of potential algorithms 280 that could be used, the size of the subset of algorithms 280 to be evaluated could be used to speed up the evaluation process.
  • If there are remaining algorithms 280 to be evaluated, the prediction engine 247 may proceed back to operation 630 to identify a next algorithm 280 - 3 using the submodular function 290 . If the prediction engine 247 determines that there are no remaining algorithms 280 to be evaluated against the candidate dataset 275 , the prediction engine 247 may proceed to report the outcome of the process 600 .
  • the prediction engine 247 may report, to the user system 205 , the best (optimal) algorithm 280 selected based on the results 287 - 1 to 287 - n , the algorithms list 292 , and/or the results 287 - 1 to 287 - n themselves.
  • the process 600 may end or repeat as necessary.
  • FIG. 7 illustrates a process 700 for selecting an optimum algorithm 280 for a candidate dataset 275 , in accordance with various other embodiments.
  • the prediction engine 247 may access historical performance data, such as benchmark data 284 stored in benchmark database 210 .
  • the prediction engine 247 may obtain an input dataset, such as candidate dataset 275 .
  • the prediction engine 247 may select a first algorithm 280 - 1 of a set of algorithms 280 .
  • the prediction engine 247 may apply the first algorithm 280 - 1 to the input dataset to create a first model of the input dataset.
  • the prediction engine 247 may evaluate and store results 287 - 1 of applying the first algorithm 280 - 1 to the input dataset.
  • the prediction engine 247 may add the first algorithm 280 - 1 to the algorithms list 292 .
  • the prediction engine 247 may generate a submodular function 290 based on the first results 287 - 1 and the historical performance data 284 .
  • the prediction engine 247 may process, in turn, each of the second algorithm 280 - 2 through the mth algorithm 280 - m of the set of algorithms until a termination condition is met.
  • the prediction engine 247 may select, using the submodular function 290 , a next algorithm 280 - x (where x is any number between 2 and m) based on the historical performance data 284 and the algorithms list 292 .
  • the prediction engine 247 may apply the next algorithm 280 to the input dataset to create a next model of the input dataset.
  • the prediction engine 247 may evaluate and store results 287 - x of applying the next algorithm 280 - x to the input dataset.
  • the prediction engine 247 may add the next algorithm 280 - x to the algorithms list 292 .
  • the prediction engine 247 may iterate back to opening loop operation 740 to process a next algorithm 280 , if any, or until a termination condition is met.
  • the termination condition may be, for example, when the likelihood of finding a better-performing algorithm reaches some negligibly small value within some margin of error.
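  • One way such a termination test could look, estimating from the historical benchmarks the probability that the best algorithm has not yet been tried and stopping once it falls below a small epsilon (the estimator and the threshold are illustrative assumptions):

```python
# Stop once the estimated chance that an untried algorithm is still best is negligible.
def prob_better_untried(tried, best_on):
    """Fraction of benchmark datasets whose best algorithm is NOT yet in `tried`."""
    uncovered = sum(1 for algos in best_on.values() if not (algos & tried))
    return uncovered / len(best_on)

best_on = {"bench_1": {"random_forest"}, "bench_2": {"svm"},
           "bench_3": {"random_forest"}, "bench_4": {"gradient_boosting"}}

tried = {"random_forest", "svm"}
EPSILON = 0.05
terminate = prob_better_untried(tried, best_on) <= EPSILON   # 0.25 > 0.05: keep going
```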
  • the best (optimum) algorithm 280 may be an algorithm in the list of algorithms 292 that comes closer to fulfilling a predetermined criterion than the other algorithms in the algorithms list 292 .
  • the predetermined criterion may be a threshold regression value, detection of a threshold number of anomalies, detection of a threshold number of classes in a multi-class classification scheme, and the like.
  • the prediction engine 247 may end the process 700 or repeat process 700 as necessary.

Abstract

Systems and methods of selecting machine learning models/algorithms for a candidate dataset are disclosed. A computer system may access historical data of a set of algorithms applied to a set of benchmark datasets; select a first algorithm of the set of algorithms; apply the first algorithm to an input dataset to create a model of the input dataset; evaluate and store results of the applying; and add the first algorithm to a set of tried algorithms. The computer system may select a next algorithm of the algorithm set via submodular optimization based on the historical data and the set of tried algorithms; apply the next algorithm to the input dataset; capture a next result based on the applying; add the next result to update the set of tried algorithms; and repeat the submodular optimization. The procedure may continue until a termination condition is reached.

Description

    RELATED APPLICATIONS
  • The present application claims priority under 35 U.S.C. §119 to U.S. Provisional Application No. 62/318,672 filed on Apr. 5, 2016, which is hereby incorporated by reference in its entirety.
  • COPYRIGHT NOTICE
  • © 2017 BigML, Inc. A portion of the present disclosure may contain material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the present disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
  • TECHNICAL FIELD
  • The present disclosure pertains to data processing, and in particular systems and methods for identifying and selecting machine learning algorithms.
  • BACKGROUND
  • A variety of machine learning (ML) models may be used for various tasks, such as categorization and prediction. A panoply of possible ML algorithms may be used to generate these models, such as decision trees, support vector machines, Bayesian networks, and various combinations thereof. Choosing the best or even a “good enough” model to use for a particular application has largely been a matter of intuition, experience, and trial and error. This is partly a result of wide variability in the characteristics of input data sets.
  • The usual process of creating a machine learning model from a dataset includes training a first type of ML model that usually works well for a particular application, checking the performance of the first model (for example, on unseen-during-training holdout data), then trying a second ML model that usually works well to see if the second model performs better than the first model. This process may be repeated until a satisfactory level of performance of an ML model is achieved.
  • This somewhat haphazard approach to selecting ML models may cause overhead-related issues because there may be a relatively large number of algorithms to be tested, depending on the possible parameterizations of those algorithms. Therefore, attempting to execute each algorithm on a particular dataset may result in relatively large expenditures of time and/or computational resources.
  • BRIEF DRAWINGS DESCRIPTION
  • FIG. 1 illustrates a flow diagram for selecting an optimum algorithm for a candidate dataset, in accordance with various embodiments;
  • FIG. 2 illustrates an example environment in which embodiments of the present disclosure may be implemented;
  • FIG. 3 illustrates an example for generating a user dataset, in accordance with various embodiments;
  • FIG. 4 illustrates an example process for generating a candidate dataset, in accordance with various embodiments;
  • FIG. 5 illustrates an example process for generating benchmark data, in accordance with various embodiments;
  • FIG. 6 illustrates an example process for selecting an optimum algorithm for a candidate dataset, in accordance with various embodiments; and
  • FIG. 7 illustrates another example process for selecting an optimum algorithm for a candidate dataset, in accordance with various embodiments.
  • DETAILED DESCRIPTION
  • Embodiments discussed herein provide systems and methods for selecting machine learning (ML) algorithms and/or ML models. Generally, an ML algorithm is a computer program that learns from an experience (e.g., one or more datasets) with respect to some task and some performance measure. An ML model may be any object created after an ML algorithm is trained with one or more training datasets. After training, an ML model may be used to make predictions on new datasets. The various embodiments are discussed infra with regard to selecting ML algorithms. However, the embodiments discussed herein are also applicable to selecting models as well as selecting algorithms. Accordingly, the term “algorithm” as discussed herein may be interchangeable with the term “model” even though these terms refer to different concepts.
  • In embodiments, a database of historical results of application of one or more ML algorithms/models on previous datasets may be used to select a next candidate model for an input dataset. The selection of a next candidate model may be based on a set of previously attempted parameters or criteria. In embodiments, a sequence of candidates may be created by attempting actions that are likely to work well according to various parameters/criteria, and attempting actions that are uncorrelated to one another. In this way, new models that may be desirable may be discovered even when the same or similar models were not previously desirable when applied to similar datasets.
  • Conventional techniques for dealing with ML model selection include “grid searches,” whereby essentially all possible algorithms are tried to the degree that this is possible. Another conventional approach includes “random searches” where random algorithms are tried for as much time as is allowed, and a best algorithm tried among these is returned. Other conventional approaches focus on trying to find the best algorithm given only part of the information above. For example, a “meta-learning” approach may use historical data and statistical analysis of the current data to try to find a best algorithm given the data. While such an approach provides a good idea about the first model to try, a “second best” model suggested by such techniques will very likely be something conceptually close to the first model, not taking into account that this area of the algorithm space has already been explored to a degree. Another related approach includes Bayesian parameter optimization. While this technique focuses on selecting a sequence of candidates to evaluate, this technique does so by finding the “best parts” of the algorithm space by experiment, then finding subareas of the space that work better. However, Bayesian parameter optimization typically does not use historical data to determine other parts of the algorithm space that are uncorrelated and may also work well. Using the aforementioned approaches may result in relatively large expenditures of time and/or computational resources.
  • In contrast to conventional approaches, various embodiments include using historical performance data (also referred to as “benchmarks”, “benchmark datasets”, and the like) to inform the search for a best or optimum ML algorithm and/or ML model. In various embodiments, a computing system may generate a set of algorithms that have already been tried on the current dataset based on a given dataset or a set of datasets, a variety of ML algorithms or models, and historical data indicating various performance metrics of the algorithms/models on various benchmark datasets. In embodiments, the computing system may not generate the algorithms themselves; rather, the computing system may generate a list (or set) of the ML models/algorithms that have been applied to the dataset.
  • In embodiments, a historical database (also referred to as a “benchmark database” and the like) including the historical performance data may be created “off-line” before learning processes begin. In embodiments the historical performance data may include any data relating to performance of a particular ML algorithm/model. In embodiments, the historical performance data may indicate how well an ML algorithm/model fits or predicts a certain dataset. In some embodiments, historical performance data may take into account required processor resources, memory utilization, input/output (I/O) operations, network resources, and the like. In embodiments, the computing system may generate and evaluate multiple ML models/algorithms and may choose an optimum ML algorithm/model, which may be an ML algorithm/model that fits desired parameters/criteria (e.g., predicting a desired number of data points, etc.) better than other ML algorithms/models, executes or otherwise performs faster than other ML algorithms/models, and/or uses the least amount of resources compared to resources used by other ML algorithms/models.
  • Embodiments provide that a submodular function may be utilized by the computing system to select a next candidate algorithm/model to test against a candidate dataset. A submodular function (also referred to as a “submodular set function” and the like) may be a set function whose value, informally, has the property that the difference in the incremental value of the function, that a single element makes when added to an input set, decreases as the size of the input set increases. Submodular functions may have a natural diminishing returns property, which may make them suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences), electrical networks, and the like.
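  • As a purely illustrative sketch (the algorithm names, dataset identifiers, and coverage map below are hypothetical and not drawn from the present disclosure), a simple coverage function over benchmark datasets exhibits the diminishing-returns property described above: the marginal value of adding a given algorithm shrinks as the already-selected set grows.

```python
# Coverage-style set function f(S): number of benchmark datasets "covered"
# (i.e., well served) by at least one algorithm in the selected set S.
def coverage(selected, covered_by):
    covered = set()
    for algo in selected:
        covered |= covered_by.get(algo, set())
    return len(covered)

# Hypothetical map: algorithm -> datasets on which it performs well.
covered_by = {
    "tree":  {"d1", "d2", "d3"},
    "svm":   {"d2", "d3", "d4"},
    "bayes": {"d5"},
}

# The marginal gain of adding "svm" shrinks as the base set grows:
gain_small = coverage({"svm"}, covered_by) - coverage(set(), covered_by)             # 3
gain_large = coverage({"tree", "svm"}, covered_by) - coverage({"tree"}, covered_by)  # 1
assert gain_large <= gain_small  # diminishing returns, i.e., submodularity
```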
  • In various embodiments, the computing system may implement the submodular function to select an ML algorithm/model from a set of m algorithms/models that worked best or was optimal for one or more datasets of a set of n datasets, where m and n are numbers. As used herein the term “best algorithm/model”, “optimum model”, and the like may refer to an ML algorithm/model that fulfills the predetermined criteria/parameters and/or is more economical/cost-effective (in terms of speed, computer resource usage, and the like) than other tested algorithms/models. In various scenarios, one of the m algorithms/models may be the best/optimum algorithm/model for the largest number of datasets in the set of n datasets, which in various embodiments may be applied to each of the n datasets. For all datasets for which the selected algorithm/model is not the best/optimum, another algorithm/model of the set of m algorithms/models, which is the best for the greatest number of the remaining n datasets, may be discovered. This process may be repeated until there are no datasets left in the set of n datasets.
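  • The following is a minimal Python sketch of the repeated “best for the most remaining datasets” selection just described; the mapping from benchmark datasets to their winning algorithms is an assumed, hypothetical input, and ties are broken arbitrarily.

```python
# Greedily order algorithms: pick the algorithm that is best/optimum for the
# largest number of remaining benchmark datasets, drop those datasets, repeat.
def greedy_order(best_algorithm_for):
    remaining = set(best_algorithm_for)   # dataset ids not yet accounted for
    order = []
    while remaining:
        # Count, for each algorithm, how many remaining datasets it wins on.
        wins = {}
        for ds in remaining:
            algo = best_algorithm_for[ds]
            wins[algo] = wins.get(algo, 0) + 1
        best = max(wins, key=wins.get)    # algorithm covering the most datasets
        order.append(best)
        remaining = {ds for ds in remaining if best_algorithm_for[ds] != best}
    return order

# Example with hypothetical benchmark outcomes:
print(greedy_order({"d1": "tree", "d2": "tree", "d3": "svm", "d4": "bayes"}))
# -> e.g. ['tree', 'svm', 'bayes'] (ties broken arbitrarily)
```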
  • Referring now to the figures. FIG. 1 illustrates a flow diagram for practicing various example embodiments. In process 100, operations 105-130 may be performed before learning or training begins and operations 135-145 may be performed as part of a learning or training process. Additionally, operations 105-130 may be an example “off-line” process to build a historical database (also referred to herein as a “benchmark database”). At operation 105, a set of m number of modeling algorithms 280 (also referred to as “algorithms 280” and the like) may be collected, and at operation 110, a benchmark collection of n number of datasets 282 may be collected or otherwise made available. The size of m and/or n may be quite large because performance or computing costs are not important at this stage of process 100.
  • Operation 115 may include running/executing each possible algorithm 280 on each one of the benchmark datasets 282 to obtain predictions 285. For example, data from a benchmark dataset 282 may be used as an input to an algorithm 280, and the resulting output may be a prediction 285. In some embodiments, operation 115 may include generating one or more ML models to be used to obtain the predictions 285. The act of computing predictions 285 on a selected one of the datasets 282 using a selected one of the algorithms 280 may be referred to as a “test” and the like. In some cases, fewer tests or selected ones could be run rather than testing all of the benchmark datasets 282 using all of the algorithms 280. In some embodiments, the process 100 may include running/executing a first algorithm 280 (e.g., algorithm 280-1) on one or more of the benchmark datasets 282 until one or more criteria or conditions are satisfied, recording/storing various data items associated with the one or more criteria or conditions, and then repeating this process for each algorithm 280 in the set. In embodiments where m=n, at most n² tests may be run. The number of algorithms 280 and the number of benchmark datasets 282 need not be equal. In some embodiments, for m algorithms 280 and n benchmark datasets 282, at most m×n tests may be run.
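  • A minimal sketch of this benchmarking step is shown below, assuming scikit-learn-style estimators; the particular estimators, dataset layout, and function names are illustrative assumptions rather than anything mandated by the disclosure.

```python
# Run each of the m candidate algorithms on each of the n benchmark datasets
# and collect predictions keyed by (algorithm, dataset), i.e., at most m x n tests.
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

algorithms = {"tree": DecisionTreeClassifier(), "bayes": GaussianNB()}  # m algorithms

def run_tests(algorithms, benchmark_datasets):
    """benchmark_datasets: {name: (X, y)}. Returns predictions per (algo, dataset)."""
    predictions = {}
    for algo_name, estimator in algorithms.items():
        for ds_name, (X, y) in benchmark_datasets.items():
            model = clone(estimator).fit(X, y)        # train on the benchmark data
            predictions[(algo_name, ds_name)] = model.predict(X)
    return predictions
```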
  • At operation 120, the resulting predictions 285 may then be evaluated in various ways to obtain results 287. As one example, predictions 285 may be compared to a holdout dataset. In this example, a portion of the predictions 285 may be “withheld” as the holdout dataset, and one or more tests may be performed on the holdout dataset to determine how well the selected algorithm 280 forecasts or predicts the remaining portion of the predictions 285 (within a certain margin of error). Furthermore, the algorithm 280 may be ranked or scored according to a number of data points in the remaining portion that were accurately forecasted/predicted by the algorithm 280 (within a certain margin of error). In another example, the evaluation may include performing a two-fold cross-validation procedure on the predictions 285, such as by randomly assigning data points in the predictions 285 to two separate data sets (e.g., set 1 and set 2) having an equal size. ML training may be performed on set 1, and testing may be performed on set 2, followed by ML training on set 2 and testing on set 1. The algorithm 280 may be ranked or scored according to a number of data points in set 1 that were accurately forecasted/predicted by the model trained on set 2, and vice versa (within a certain margin of error). Other validation or evaluation procedures may be used.
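  • As one hedged illustration of the two-fold cross-validation just described (the estimator and toy data are placeholders, and the split logic is one common realization rather than the disclosure's prescribed implementation), the data may be randomly halved, trained and tested in both directions, and the two scores averaged:

```python
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def two_fold_score(estimator, X, y, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))                  # random assignment to two sets
    half = len(y) // 2
    set1, set2 = idx[:half], idx[half:]
    scores = []
    for train, test in ((set1, set2), (set2, set1)):
        model = clone(estimator).fit(X[train], y[train])
        scores.append(model.score(X[test], y[test]))   # accuracy on the other fold
    return sum(scores) / 2.0

# Example usage with a toy dataset:
X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(two_fold_score(DecisionTreeClassifier(), X, y))
```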
  • At operation 125, the evaluation results 287 (also referred to as “benchmarks”) may be stored in the benchmark database 210 (see FIG. 2) along with related data, including identifiers of the corresponding algorithms 280 and benchmark datasets 282. The benchmark database 210 may be the historical performance database since the benchmarks 287 stored therein are based on tested algorithms 280. The evaluation of predictions 285 may be performed at any convenient time and/or place where appropriate computing resources are made available. Such resources may be provisioned remotely or “in the cloud.”
  • At operation 130, the results 287 of the benchmark evaluation stored in the benchmark database 210 may be used to create a submodular function 290. The submodular function 290 may describe the likelihood that a best possible or optimum algorithm 280 has already been tried/tested given a current set of experiments. In embodiments, the submodular function 290 may be optimized using known techniques.
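  • One way to derive such a submodular function 290 from the stored benchmark results, offered here only as an illustrative assumption and not the only construction contemplated, is a coverage-style function: the fraction of benchmark datasets whose winning algorithm already appears in the set of tried algorithms, which rises with each addition but with diminishing returns.

```python
# Build f(S) from benchmark results {(algorithm, dataset): score}.
def make_submodular_function(benchmark_results):
    datasets = {ds for (_, ds) in benchmark_results}
    # Winner per benchmark dataset (highest recorded score).
    winner = {
        ds: max((a for (a, d) in benchmark_results if d == ds),
                key=lambda a: benchmark_results[(a, ds)])
        for ds in datasets
    }
    def f(tried_algorithms):
        # Fraction of benchmark datasets whose winner is already in the tried set;
        # this is a coverage function, hence monotone submodular.
        hits = sum(1 for ds in datasets if winner[ds] in tried_algorithms)
        return hits / len(datasets) if datasets else 0.0
    return f
```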
  • Operation 135 may be the beginning of the learning or training process. Given an input or candidate dataset 275, at operation 135 a first algorithm 280 (e.g., algorithm 280-1) may be selected, and at operation 140 the first algorithm 280-1 may be applied to the candidate dataset 275. The candidate dataset 275 may be a dataset for which an optimum algorithm 280 is to be discovered. One goal of the example embodiments may include identifying the optimum algorithm 280 using as few tests/evaluations as possible in order to save time, effort, and computational and/or network resources. Another goal of the example embodiments may include identifying a subset of algorithms 280 that is likely to include the best or optimum algorithm 280 for the candidate dataset 275 regardless of the data format or type, and/or other properties of the candidate dataset 275. Thus, in some embodiments, the set or sequence of algorithms 280 that are applied against the candidate dataset 275 (e.g., as identified by the submodular function 290 at operation 130) may be a relatively small subset of the whole set of algorithms 280 (e.g., the set of algorithms 280 identified at operation 105) used to generate benchmark data (e.g., the results obtained at operation 125). Application of the first algorithm 280-1 may include using data of the candidate dataset 275 as an input to the first algorithm 280-1. The results 287 of applying the first algorithm 280-1 to the candidate dataset 275 may be evaluated, and at operation 145, the tested (selected) algorithm 280-1 may be added to a list of algorithms 292 tested on the candidate dataset 275 (e.g., as algorithm i in FIG. 1).
  • Operation 135 may also include selecting a next algorithm (e.g., algorithm 280-2) according to the submodular function 290, which was created at operation 130 based on the benchmark database 210. The next algorithm 280-2 may be tested on the candidate dataset 275, results 287 of applying the algorithm 280-2 to the candidate dataset 275 may be evaluated, and the algorithm 280-2 may be added to the list of tested algorithms 292 at operation 145. This procedure is repeated until all algorithms 280 (or a selected subset of the algorithms 280) are tested on the candidate dataset 275, and/or until results that are deemed satisfactory are achieved, thereby populating the list of tested algorithms 292 with algorithms i-j. In this way, process 100 may be used to identify or select a candidate algorithm 280 whose performance is likely to be superior to the set of already-tested algorithms.
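  • A hedged sketch of this selection loop is given below; the evaluate_on_candidate callable and the target_score threshold are hypothetical stand-ins for whatever evaluation and stopping criteria a given embodiment uses, and f is a submodular function such as the coverage-style example above.

```python
# Greedily pick the untried algorithm with the largest marginal gain under f,
# test it on the candidate dataset, and stop once results are deemed satisfactory.
def select_algorithms(f, all_algorithms, evaluate_on_candidate, target_score=0.95):
    tried, results = [], {}
    while len(tried) < len(all_algorithms):
        untried = [a for a in all_algorithms if a not in tried]
        # Marginal gain f(tried U {a}) - f(tried) for each untried algorithm.
        next_algo = max(untried, key=lambda a: f(set(tried) | {a}) - f(set(tried)))
        results[next_algo] = evaluate_on_candidate(next_algo)  # test on candidate dataset
        tried.append(next_algo)                                # list of tested algorithms
        if results[next_algo] >= target_score:                 # results deemed satisfactory
            break
    return tried, results
```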
  • FIG. 2 illustrates an example environment 200 in which various embodiments of may be implemented. In FIG. 2, a user system 205 may be coupled to a computing system 206 via a network 219. As shown, the user system 205 may include a processor system 205A, a memory system 205B, an input system 205C, an output system 205D, and a communications system 205E. The computing system 206 may include the processor system 207 (also referred to as a “server system 207” and the like), the network interface 208, benchmark database 210, benchmark dataset database 220, modeling algorithms database 230 (also referred to as a “model algorithm database 230”, “algorithm database 230”, “modeling database 230”, and the like), and the data storage system 240.
  • Referring to the user system 205, the memory system 205B may include an operating system (OS), one or more databases (not shown), and one or more applications (not shown). The processor system 205A can include any suitable combination of one or more processors, such as one or more central processing units (CPUs) including single-core or multi-core processors, one or more graphics processing units (GPUs), one or more field-programmable gate arrays (FPGAs), or any other electronic circuitry capable of executing program code and/or software modules to perform arithmetic, logical, and/or input/output operations.
  • The memory system 205B can include any suitable combination of one or more memory devices that may be embodied as any type of volatile or non-volatile memory or data storage. Memory system 205B may generally include volatile memory (e.g., random access memory (RAM), synchronous dynamic RAM (SDRAM) devices, double-data rate synchronous dynamic RAM (DDR SDRAM) devices, and the like), non-volatile memory (e.g., read only memory (ROM), flash memory, solid state storage (SSS), non-volatile RAM (NVRAM), and the like), and/or other like storage media capable of storing and recording data. The memory system 205B may be configured to store an operating system (OS) and program code for one or more software components or application(s). Instructions, program code and/or software components may be loaded into the memory system 205B by one or more network elements (not shown) via communications system 205E using wired or wireless communications interfaces. In some embodiments, the program code and/or software components may be loaded into the memory system 205B during manufacture of the user system 205, or loaded from a separate computer readable storage medium into the memory system 205B using a drive mechanism (not shown), such as a memory card, memory stick, removable flash drive, removable SIM card, a secure digital (SD) card, and/or other like computer readable storage medium (not shown).
  • The input system 205C may include one or more interfaces, components or devices designed to enable interaction with the user system 205. The output system 205D can include any suitable combination of output devices, such as one or more display devices, printers, or interfaces to networks. The input system 205C may include user interfaces and devices such as a physical keyboard or keypad, mice, trackballs, a touchpad, touchscreen, a speaker, a microphone, a fingerprint or handprint scanning device, etc. In embodiments, the input system 205C and/or the output system 205D may include peripheral component interfaces, such as a non-volatile memory port, communications ports (e.g., universal serial bus (USB) port, FireWire port, Serial Digital Interface (SDI) port, IEEE 1284 port, etc.), an audio jack, a power supply interface, and the like. In some embodiments, the input system 205C and/or the output system 205D may include memory controller hubs, input/output control hubs, firmware devices, communication links (i.e., point-to-point links, bus links, wires, cables, light guides, printed circuit board traces, etc.) and/or other components and subsystems to facilitate the I/O operations.
  • The communications system 205E may include circuitry for communicating with a wireless network or wired network. Communications system 205E may be used to establish a link 216 (also referred to as “channel 216”, “networking layer tunnel 216”, “internet layer tunnel 216”, and the like) through which the user system 205 may communicate with the computing system 206. The communications system 205E may enable the user system 205 to communicate with computing system 206 using Transmission Control Protocol and Internet Protocol (TCP/IP) and, at a higher network level, other common Internet protocols to communicate, such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP), etc. To do so, the communications system 205E may include one or more processors (e.g., baseband processors, etc.) that are dedicated to a particular wireless communication protocol (e.g., Wi-Fi and/or IEEE 802.11 protocols), a cellular communication protocol (e.g., Long Term Evolution (LTE) and the like), a wireless personal area network (WPAN) protocol (e.g., IEEE 802.15.4-802.15.5 protocols, Bluetooth or Bluetooth low energy (BLE), etc.), and/or a wired communication protocol (e.g., Ethernet, Fiber Distributed Data Interface (FDDI), Point-to-Point (PPP), etc.).
  • The communications system 205E may also include hardware devices that enable communication with wireless/wired networks and/or other user systems using modulated electromagnetic radiation through a solid or non-solid medium. Such hardware devices may include switches, filters, amplifiers, antenna elements, and the like to facilitate the communications over the air or through a wire by generating or otherwise producing radio waves to transmit data to one or more other devices, and converting received signals into usable information, such as digital data, which may be provided to one or more other components of user system 205. To communicate (e.g., transmit/receive) with the computing system 206, the user system 205 using the communications system 205E may establish link 216 with network interface 208 of the computing system 206.
  • During operation, a user system 205 may obtain and upload raw data 215 to the computing system 206 via a link 216 for processing. To this end, the processor system 205A may implement an application (e.g., a locally stored application, a web application, a native application, and the like) to generate and send message 214 to the computing system 206. The message 214 may be a request to convert the raw data 215 into one or more user datasets and/or candidate datasets 275, and in such embodiments, the message 214 may include the raw data 215 to be converted. In some embodiments, the message 214 may be a request to store the raw data 215 in one or more data elements, records, and/or fields in one or more database object(s) of user database 235. In some embodiments, the message 214 may include one or more targets indicating fields or records to be searched in user database 235 for data to be converted into the user datasets and/or candidate datasets 275. In such embodiments, the message 214 may also include one or more other options, conditions, filters, etc. (e.g., sort parameters, maximum result size, and the like) to be used to obtain data from user database 235. In embodiments, the message 214 may be an HTTP message, where the raw data 215, credentials, and/or other pertinent information may be located in the header or body portion of the HTTP message. Other message types may be used to convey the message 214, such as a Session Initiation Protocol (SIP) message, or any message used in the Internet protocols discussed previously.
  • The applications that enable communication with the computing system 206 may utilize any suitable query language to query, store, and obtain information in/from user database 235, such as structured query language (SQL), object query language (OQL), object search language (OSL), and/or other like query languages. In some embodiments, these applications may provide a graphical user interface (GUI) that displays a visualization of the raw data 215 and/or data in user database 235. The GUI may include various graphical control elements, and may convert selections of the graphical control elements into suitable requests using the aforementioned query languages. Such applications may be designed to run on a specific platform, such as when the user system 205 is implemented in a mobile device, such as a smartphone, tablet computer, and the like. Furthermore, such applications may also enable the user system 205 to provide authentication credentials (e.g., user identifier, password, personal identification number (PIN), biometric data, etc.) to the computing system 206 so that the computing system 206 may authenticate the identity of a user of the user system 205. Suitable implementations for the OS, databases, and applications, as well as the general functionality of the user system 205 are known or commercially available, and are readily implemented by persons having ordinary skill in the art, particularly in light of the disclosure herein.
  • Network 219 may be any network that allows computers to exchange data. Network 219 may include one or more network elements (not shown) capable of physically or logically connecting computers. The network 219 may include any appropriate network, including an intranet, the Internet, a cellular network, a wireless network, a local area network (LAN), a wide area network (WAN), a personal or enterprise network, point-to-point network, star network, token ring network, hub network, or any other such network or combination thereof. Components used for such a system can depend at least in part upon the type of network and/or environment selected. Protocols and components for communicating via such a network are well known and will not be discussed herein in detail. Communication over the network 219 may be enabled by wired or wireless connections, and combinations thereof.
  • Referring to the computing system 206, the computing system 206 may include data storage system 240, the processor system 207, a network interface 208, and the various databases 210-235. The processor system 207 may be implemented to include any suitable combination of one or more processors, such as one or more central processing units (CPUs) including single-core or multi-core processors, one or more graphics processing units (GPUs), one or more field-programmable gate arrays (FPGAs), or any other electronic circuitry capable of executing program code and/or software modules to perform arithmetic, logical, and/or input/output operations. In various embodiments, the processor system 207 may include one or more modules or engines that perform one or more functions as discussed herein, such as a source engine 241, a dataset engine 243, a model engine 245, or a prediction engine 247. In some embodiments, program code for the various engines/modules may be stored in the data storage system 240 and executed by the processor system 207. In other embodiments, each of the engines/modules may be embodied as an FPGA or as some other dedicated processor circuitry. In other embodiments, the processor system 207 may be implemented as a server system 207 comprising multiple servers (each with its own processor(s), memory device(s), I/O interfaces, and network interfaces, and configured with suitable program code), where each server in the server system 207 carries out one or more functions as discussed herein.
  • Network interface 208 may be embodied as any type of communication circuit(s), device(s), hardware component(s) or collection thereof, capable of enabling communications between the computing system 206 and the user systems 205 via one or more communication networks (e.g., network 219). To this end, network interface 208 may include one or more communication interfaces (e.g., ports) and one or more dedicated processors and/or FPGAs to communicate using one or more wired network communications protocols, such as Ethernet, token ring, Fiber Distributed Data Interface (FDDI), Point-to-Point Protocol (PPP), network sockets, and/or other like network communications protocols. The communication interfaces may be configured to communicatively couple the computing system 206 to any number of other nodes, interconnect devices, networks (e.g., physical or logical networks), and/or external computer devices. In this regard, each communication interface may be associated with a network socket address (e.g., a combination of an IP address and port number) or other like address that allows other devices to connect to the computing system 206. The network interface 208 may also include one or more virtual network interfaces configured to operate with the one or more applications of the computing system 206. In some embodiments, the network interface 208 may be implemented as a set of application servers (also referred to as “app servers”), where each app server is configured to communicate with one or more components of the computing system 206, and to serve requests received from the user systems 205. In addition, the network interface 208 may implement a user interface and/or application programming interface (API) to allow the user systems 205 to interact with the computing system 206.
  • The data storage system 240 may be computer-readable media having instructions stored thereon, which are executable by the processor system 207. Data storage system 240 may include program code for the processes of flow diagram 100 and processes 300-700 discussed with regard to FIGS. 3-7 (not shown by FIG. 2), and program code used for implementing the various functions of the computing system 206, such as an operating system and one or more other applications. In some embodiments, the databases 210-235 may be stored within the data storage system 240, while in other embodiments, the databases 210-235 may be implemented in or by one or more separate/remote data storage systems. When the instructions are executed by the processor system 207, the computing system 206 may carry out the various functions of the system 206 and perform the various example embodiments described herein, such as the processes of flow diagram 100 and processes 300-700 discussed with regard to FIGS. 3-7. The data storage system 240 may be embodied as any type of device or devices configured for short-term or long-term storage of data such as, for example, memory devices and circuits, memory cards, hard disk drives, optical drives, removable disk drives (e.g., optical discs, digital versatile disks (DVD), compact disks (CD), etc.), solid-state drives, and/or any of the other types of memory devices discussed with regard to memory system 205B. The various instructions, or portions thereof, may be loaded into the data storage system 240 from removable/separate storage media using a drive mechanism, or from a remote computing system via the network interface 208 (e.g., over the Internet or network 219 using any of the aforementioned communication protocols).
  • As discussed previously, the user system 205 may obtain the raw data 215, compile or otherwise include the raw data 215 in a message 214, and send the message 214 to the computing system 206 via a link 216. The raw data 215 may be converted into a candidate dataset 275 by the source server 241 and/or the dataset server 243. In such embodiments, the source server 241 and/or the dataset server 243 may store the candidate dataset 275 in the user database 235 or some other data storage device for later retrieval by the model server 245 and/or the prediction server 247. In embodiments, the benchmark database 210 may be created as described above and may be available to the processor system 207 of the computing system 206. The benchmark database 210 may include a set of benchmarks 284, which may have been created using the set of benchmark datasets 282 stored in the benchmark dataset database 220. A set of modeling algorithms 280 (also referred to as a “set of algorithms 280” and the like) may be stored in the modeling database 230, where the set of algorithms 280 includes m number of algorithms (where m is a number). The m number of algorithms may include 1 to m algorithms 280, where a first algorithm may be referred to as “algorithm 280-1”, a second algorithm may be referred to as “algorithm 280-2”, and so forth until an mth algorithm may be referred to as “algorithm 280-m” (see e.g., FIG. 1). Each of the m number of algorithms 280 may be applied to the n number of benchmark datasets 282 stored in the benchmark dataset database 220 to produce the benchmarks 284. The benchmarks 284 (also referred to as “benchmark data 284” and the like) may provide a basis for determining a submodular function 290. The submodular function 290 may be used to choose a subset of the algorithms 280 to be applied against a candidate dataset 275. One benefit of using the submodular function 290 is that the submodular function 290 can suggest a set of “next” algorithms 280 that should be tried against the candidate dataset 275 based on a set of algorithms 280 that were already tested against the benchmark datasets 282. In addition, the set of “next” algorithms 280 indicated by the submodular function 290 will likely include an optimum algorithm 280 that is better than other algorithms 280 for making predictions for a given candidate dataset 275. In addition, the set of “next” algorithms 280 may be a relatively small subset of the stored algorithms 280 in the modeling database 230, and thus, testing the algorithms 280 in the set of “next” algorithms 280 may be less time consuming and computationally intensive than using a randomized approach to choosing algorithms for a candidate dataset.
  • During operation, a user dataset 270 may be provided to the computing system 206 as noted previously (e.g., from raw data 215 and/or from user database 235), and the processor or server system 207 may carry out the operations discussed herein, such as: (1) obtaining, by the dataset server 243, the user dataset 270 from the user database 235; (2) converting, by the dataset server 243, the user dataset 270 into a candidate dataset 275. In embodiments, converting the user dataset 270 into a candidate dataset 275 may include normalizing data in the user dataset 270, or performing other formatting operations on the user dataset 270; (3) selecting, by the model server 245, a first algorithm 280-1 from modeling database 230; (4) computing or otherwise determining, by the model server 245 and/or the prediction server 247, predictions 285 by applying the first algorithm 280-1 to the candidate dataset 275; (5) evaluating, by the prediction server 247, the predictions 285 to obtain results 287 of applying the first algorithm 280-1 to the candidate dataset 275; (6) controlling storage, by the prediction server 247, of the predictions 285 and the results 287 in the data storage system 240 (or in some other database or data storage device); (7) adding, by the model server 245 and/or the prediction server 247, the selected first algorithm 280-1 to a list of algorithms 292 that are applied to the user dataset 270. The list of algorithms 292 may be stored in/by the data storage system 240; (8) creating/generating, by the model server 245 and/or the prediction server 247, a submodular function 290 based on benchmark data stored in the benchmark database 210, and controlling storage of the submodular function 290 in/by the data storage system 240; (9) applying, by the model server 245 and/or the prediction server 247, the submodular function 290 to select a second algorithm 280-2 of the algorithms 280 from the modeling database 230 (not shown by FIG. 2); (10) applying, by the model server 245 and/or the prediction server 247, the second algorithm 280-2 to the same candidate dataset 275 to compute or otherwise determine predictions 285 for the second algorithm 280-2; (11) evaluating, by the prediction server 247, the predictions 285 to obtain results 287 of applying the second algorithm 280-2 to the candidate dataset 275; (12) adding the selected algorithm 280-2 to the list of algorithms 292 tried on the user dataset 270; and (13) repeating operations (9) through (12) until results that are deemed satisfactory are achieved.
  • The resulting (best) algorithm may be delivered or indicated to the user system 205 via the network 219. The best algorithm may be used to make predictions on the prediction server 247, which may interact with the user system 205.
  • The arrangement shown by FIG. 2 is merely illustrative, and in various other embodiments, some or all of the illustrated databases may be located elsewhere and accessible via the network 219. In addition, some or all of the databases illustrated by FIG. 2 may be located at a remote system 250 (labeled “additional resources 250” in FIG. 2). Furthermore, some or all of the servers illustrated (dataset, model, etc.) may be implemented as software in one or more computers, again located in computing system 206 or elsewhere, such as provided by a cloud computing service and the like.
  • FIGS. 3-7 illustrate processes 300-700, respectively, in accordance with various example embodiments. For illustrative purposes, the operations of processes 300-700 are described as being performed by entities discussed with regard to FIG. 2. In particular, process 300 is described as being performed by the source engine 241, process 400 is described as being performed by the dataset engine 243, process 500 is described as being performed by the model engine 245, and processes 600-700 are described as being performed by the prediction engine 247. However, it should be noted that other computing devices may operate the processes 300-700 in a multitude of implementations, arrangements, and/or environments. In embodiments, the computing system 206 may include program code (stored in data storage system 240), which when executed by the processor system 207, causes the computing system 206 to perform the various operations of processes 300-700. In other embodiments, the processes 300-700 may be performed by respective server systems as discussed previously. While particular examples and orders of operations are illustrated in FIGS. 3-7, in various embodiments, these operations may be re-ordered, separated into additional operations, combined, or omitted altogether.
  • FIG. 3 illustrates a process 300 for generating a user dataset in accordance with various embodiments. Referring to FIG. 3, at operation 305 the source engine 241 may obtain raw data 215 from a user system 205 or various other sources. At operation 310, the source engine 241 may convert the raw data 215 into a user dataset 270. At operation 315, the source engine 241 may store the user dataset 270 in the user database 235, or may provide the user dataset 270 to the dataset engine 243. Process 300 may end or repeat as necessary after the source engine 241 performs operation 315.
  • In embodiments, the raw data 215 may be extracted from one or more messages 214 at operation 305. In some embodiments, the source engine 241 may normalize (e.g., index, partition, augment, canonicalize, etc.) the raw data 215 to convert the raw data 215 into the user dataset 270. Additionally or alternatively, at operations 305 and 310 the source engine 241 may obtain Extract-Load-Transform (ELT) data or Extract-Transform-Load (ETL) data, which may be raw data 215 extracted from various sources and normalized for analysis and other transformations. In some embodiments, at operation 315 the raw data 215 may be loaded into the user database 235 and/or some other data store (not shown by FIG. 2) and stored as key-value pairs, which may allow the data to be stored in a mostly native form without requiring substantial normalization or formatting. Other methods for normalizing and/or storing the user dataset 270 may be used.
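  • For illustration only (the key layout, source identifier, and JSON encoding below are assumptions rather than requirements of the disclosure), raw records might be kept in a mostly native form as key-value pairs like so:

```python
import json

def to_key_value_pairs(raw_records, source_id="upload-001"):
    """raw_records: list of dicts parsed from the raw data. Returns (key, value) pairs."""
    pairs = []
    for i, record in enumerate(raw_records):
        key = f"{source_id}:{i}"                 # e.g. "upload-001:0"
        pairs.append((key, json.dumps(record)))  # value stays close to its native form
    return pairs

# Example usage:
print(to_key_value_pairs([{"age": 34, "plan": "basic"}]))
```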
  • FIG. 4 illustrates a process 400 for generating a candidate dataset in accordance with various embodiments. Referring to FIG. 4, at operation 320 the dataset engine 243 may obtain the user dataset 270. At operation 325, the dataset engine 243 may convert the user dataset 270 into a candidate dataset 275. At operation 330, the dataset engine 243 may store the candidate dataset 275, or provide the candidate dataset 275 to the model engine 245 and/or the prediction engine 247. Process 400 may end or repeat as necessary after the dataset engine 243 performs operation 330.
  • In some embodiments, at operation 325 the dataset engine 243 may normalize (e.g., index, partition, augment, canonicalize, etc.) the user dataset 270 to convert the user dataset 270 into the candidate dataset 275. This normalization procedure may be the same or different than the normalization procedure discussed with regard to FIG. 3. In embodiments, the dataset engine 243 may store the candidate dataset 275 in the user database 235 and/or some other data store (not shown by FIG. 2) for later retrieval by the model engine 245 or the prediction engine 247.
  • FIG. 5 illustrates a process 500 for generating benchmark data 284 (also referred to as “historical performance data” and the like), in accordance with various embodiments. Referring to FIG. 5, at operation 505, the model engine 245 may identify a set of algorithms, and at operation 510, the model engine 245 may identify a set of datasets. At operation 515, the model engine 245 may compute predictions 285 by applying individual algorithms 280 of the set of algorithms to individual datasets 282 of the set of datasets. In embodiments, the model engine 245 may use data of an individual dataset 282 as an input to an individual algorithm 280. The output of the individual algorithm 280 may be the predictions 285. In some embodiments, the model engine 245 may generate one or more models to be used for obtaining the predictions 285, where the datasets may be input to the one or more models and a resulting output may be the predictions 285.
  • At operation 520, the model engine 245 may evaluate the predictions 285 to obtain results 287. In embodiments, the results 287 may be results of performing a holdout procedure, a cross-validation procedure, or some other like ML testing procedure that is used to assess the strength and/or utility of a predictive algorithm or model. In other embodiments, the results 287 may include other metrics or metadata pertaining to performance of the applied algorithm 280, such as speed (e.g., time from initiating execution to obtaining an output) of one or more tasks, computing resource usage, etc. At operation 525, the results 287 may be stored in a benchmark database 210.
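  • As an illustrative sketch (the metric names and the timing approach are assumptions; resource-usage fields would be recorded in the same way if collected), results 287 may be stored together with simple performance metadata such as elapsed execution time:

```python
import time

def evaluate_with_metrics(score_fn, algo_name, dataset_name):
    """score_fn: callable returning a quality score for this (algorithm, dataset) pair."""
    start = time.perf_counter()
    score = score_fn()
    elapsed = time.perf_counter() - start
    return {
        "algorithm": algo_name,       # identifier stored alongside the result
        "dataset": dataset_name,
        "score": score,               # e.g. holdout or cross-validation score
        "elapsed_seconds": elapsed,   # speed metric mentioned in the text
    }

# Example usage with a stand-in scoring callable:
print(evaluate_with_metrics(lambda: 0.87, "tree", "benchmark-d1"))
```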
  • At operation 530, the model engine 245 may determine if there are any remaining algorithms of the set of algorithms that need to be evaluated. If at operation 530 the model engine 245 determines that there is an individual algorithm of the set of algorithms that needs to be evaluated, the model engine 245 may proceed back to operation 515 to compute predictions using the next individual algorithm. If at operation 530 the model engine 245 determines that there are no individual algorithms of the set of algorithms that need to be evaluated, the model engine 245 may proceed to operation 535 to end or proceed to process 600 (shown and described with regard to FIG. 6) to evaluate algorithms to be used for a candidate dataset 275.
  • FIG. 6 illustrates a process 600 for selecting an optimum algorithm 280 for a candidate dataset 275, in accordance with various embodiments. Referring to FIG. 6, at operation 605, the prediction engine 247 may obtain the candidate dataset 275. At operation 610, the prediction engine 247 may identify an initial algorithm 280-1 and apply the initial algorithm 280-1 to the candidate dataset 275 to obtain predictions 285-1 (not shown by FIG. 2) for the initial algorithm 280-1. In various embodiments, operations 605 and 610 may be performed by the model engine 245, and in such embodiments, the predictions 285-1 may be passed to the prediction engine 247 or may be stored for later retrieval by the prediction engine 247.
  • At operation 615, the prediction engine 247 may evaluate the predictions 285-1 to obtain results 287-1 (not shown by FIG. 2) of the initial algorithm 280-1. At operation 620, the prediction engine 247 may store the results 287-1 and add the initial algorithm 280-1 to the algorithms list 292. At operation 625, the prediction engine 247 may generate a submodular function 290 based on benchmark datasets 282 stored in the benchmark dataset database 220. At operation 630, the prediction engine 247 may identify a next algorithm 280-2 and may apply the next algorithm 280-2 to the candidate dataset 275 to obtain predictions 285-2 of the next algorithm 280-2. At operation 635, the prediction engine 247 may evaluate the predictions 285-2 to obtain results 287-2 of the next algorithm 280-2, and at operation 640, the prediction engine 247 may store the results 287-2 and add the next algorithm 280-2 to the algorithms list 292.
  • At operation 645, the prediction engine 247 may determine whether there are any remaining algorithms 280 to be evaluated against the candidate dataset 275. Whether there are any remaining algorithms 280 to be evaluated may be based on the size of the subset of algorithms 280 to be evaluated. In some embodiments, the number of algorithms 280 to be evaluated may be a user input, which may be based on the amount of time and/or computational resources that the user is willing and/or able to devote to the evaluation process. In other embodiments, the size of the subset of algorithms 280 to be evaluated may be a predetermined number, based on the type of data in the candidate dataset 275, or based on any other criteria. For example, in some scenarios, an evaluation of an algorithm 280 may take an entire day to complete, and in such scenarios, limiting the number of evaluations can save a considerable amount of time and resources while still providing the user with confidence that the best/optimum algorithm 280 has been discovered. Even where a user has unlimited time and resources, in cases where large amounts of data and/or hundreds of potential algorithms 280 that could be used, the size of the subset of algorithms 280 to be evaluated could be used to speed up the evaluation process.
  • If at operation 645 the prediction engine 247 determines that there are remaining algorithms 280 to be evaluated against the candidate dataset 275, the prediction engine 247 may proceed back to operation 630 to identify a next algorithm 280-3 using the submodular function 290. If at operation 645 the prediction engine 247 determines that there are no remaining algorithms 280 to be evaluated against the candidate dataset 275, the prediction engine 247 may proceed to operation 650 to report the outcome of the process 600.
  • At operation 650, the prediction engine 247 may report, to the user system 205, the best (optimal) algorithm 280 based on the results 287-1 to 287-n, and/or the algorithms list 292 together with the results 287-1 to 287-n. After performance of operation 650, the process 600 may end or repeat as necessary.
  • FIG. 7 illustrates a process 700 for selecting an optimum algorithm 280 for a candidate dataset 275, in accordance with various other embodiments. Referring to FIG. 7, at operation 705 the prediction engine 247 may access historical performance data, such as benchmark data 284 stored in benchmark database 210. At operation 710, the prediction engine 247 may obtain an input dataset, such as candidate dataset 275. At operation 715, the prediction engine 247 may select a first algorithm 280-1 of a set of algorithms 280. At operation 720, the prediction engine 247 may apply the first algorithm 280-1 to the input dataset to create a first model of the input dataset. At operation 725, the prediction engine 247 may evaluate and store results 287-1 of applying the first algorithm 280-1 to the input dataset. At operation 730, the prediction engine 247 may add the first algorithm 280-1 to the algorithms list 292. At operation 735, the prediction engine 247 may generate a submodular function 290 based on the first results 287-1 and the historical performance data 284.
  • At opening loop operation 740, the prediction engine 247 may process, in turn, each of the second algorithm 280-2 through the mth algorithm 280-m of the set of algorithms until a termination condition is met. At operation 745, the prediction engine 247 may select, using the submodular function 290, a next algorithm 280-x (where x is any number between 2 and m) based on the historical performance data 284 and the algorithms list 292. At operation 750, the prediction engine 247 may apply the next algorithm 280-x to the input dataset to create a next model of the input dataset. At operation 755, the prediction engine 247 may evaluate and store results 287-x of applying the next algorithm 280-x to the input dataset. At operation 760, the prediction engine 247 may add the next algorithm 280-x to the algorithms list 292. At closing loop operation 765, the prediction engine 247 may iterate back to opening loop operation 740 to process a next algorithm 280, if any, or until a termination condition is met. The termination condition may be, for example, when the likelihood of finding a better-performing algorithm reaches some negligibly small value within some margin of error. Once all algorithms 280 of the set of algorithms have been processed and/or when the termination condition is met, the prediction engine 247 may proceed to operation 770 to report a best (optimum) algorithm 280 of the set of algorithms, or report the algorithms list 292 with the results 287-1 to 287-m. The best (optimum) algorithm 280 may be an algorithm in the list of algorithms 292 that is closer to fulfilling a predetermined criterion than other algorithms in the algorithms list 292. In embodiments, the predetermined criterion may be a threshold regression value, detection of a threshold number of anomalies, detecting a threshold number of classes in a multi-class classification scheme, and the like. After performance of operation 770, the prediction engine 247 may end the process 700 or repeat process 700 as necessary.
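  • One possible termination test, offered only as an illustrative assumption since the disclosure does not prescribe a specific formula, reuses a coverage-style submodular function f to estimate whether an untried algorithm is still likely to outperform everything tried so far:

```python
# Stop once the estimated chance that a better untried algorithm remains is negligible.
def should_terminate(f, tried_algorithms, epsilon=0.05):
    """f: submodular function in [0, 1]; epsilon: negligibly small likelihood threshold."""
    likelihood_better_remains = 1.0 - f(set(tried_algorithms))
    return likelihood_better_remains <= epsilon
```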
  • Persons of ordinary skill in the art will recognize that the present disclosure is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present disclosure includes both combinations and sub-combinations of the various features described hereinabove as well as modifications and variations that would occur to such skilled persons upon reading the foregoing description without departing from the underlying principles. Only the following claims, however, define the scope of the present disclosure.

Claims (20)

1. A computer-readable medium including instructions, which when executed by one or more processors of a computing system, cause the computing system to:
identify a set of algorithms;
identify a set of benchmark datasets;
generate a set of predictions by application of individual algorithms of the set of algorithms to individual benchmark datasets of the set of benchmark datasets;
evaluate the set of predictions to obtain results;
control storage of the results in a benchmark database;
generate a submodular function based on the stored results;
apply the individual algorithms to a candidate dataset;
identify an optimum algorithm, wherein the optimum algorithm is an individual algorithm of the set of algorithms that is closer to fulfilling a predetermined criterion than other algorithms; and
control transmission of a report indicating the optimum algorithm.
2. The computer-readable medium of claim 1, wherein the computing system, in response to execution of the instructions, is to:
add the individual algorithms to a list of algorithms after application of the individual algorithms to the candidate dataset, and
wherein, to identify the optimum algorithm, the computing system, in response to execution of the instructions, is to identify the optimum algorithm from among the individual algorithms in the list of algorithms based on the predetermined criterion.
3. The computer-readable medium of claim 2, wherein the computing system, in response to execution of the instructions, is to:
evaluate another set of predictions to obtain other results, wherein the other set of predictions is obtained from the application of the individual algorithms to the candidate dataset; and
control storage of the other results in the benchmark database, and
wherein, to identify the optimum algorithm from among the individual algorithms, the computing system, in response to execution of the instructions, is to identify the optimum algorithm from among the individual algorithms in the list of algorithms further based on the stored other results.
4. The computer-readable medium of claim 1, wherein:
to identify the set of benchmark datasets, the computing system, in response to execution of the instructions, is to obtain the set of benchmark datasets from a benchmark dataset database, and
to identify the set of algorithms, the computing system, in response to execution of the instructions, is to obtain the set of algorithms from a modeling algorithms database.
5. The computer-readable medium of claim 1, wherein, to generate the set of predictions, the computing system, in response to execution of the instructions, is to:
control execution of the individual algorithms using, as an input, data of the individual benchmark datasets, and
wherein the set of predictions comprises an output of the execution of the individual algorithms with the data of the individual benchmark datasets as the input.
6. The computer-readable medium of claim 1, wherein, to evaluate the set of predictions, the computing system, in response to execution of the instructions, is to:
control performance of a holdout procedure using the set of predictions; or
control performance of a two-fold cross-validation procedure using the set of predictions.
7. The computer-readable medium of claim 1, wherein to control storage of the results, the computing system, in response to execution of the instructions, is to:
control storage, in the benchmark database, of identifiers of the individual algorithms and identifiers of the benchmark datasets in association with the individual algorithms.
8. The computer-readable medium of claim 1, wherein, to apply the individual algorithms to the candidate dataset, the computing system, in response to execution of the instructions, is to:
control execution of the individual algorithms using, as an input, data of the candidate dataset; and
control storage of an output of the execution of the individual algorithms with the data of the candidate dataset as the input.
9. A computing system comprising:
one or more processors to implement a prediction engine, the prediction engine to:
obtain a candidate dataset from a user database;
identify a first algorithm from a set of algorithms;
apply the first algorithm to the candidate dataset to obtain first predictions;
evaluate the first predictions to obtain first results;
store the first results in a database;
add the first algorithm to an algorithms list;
generate a submodular function based on the stored benchmark data;
for each algorithm in the set of algorithms,
select a next algorithm from the set of algorithms based on the submodular function,
apply the next algorithm to the candidate dataset to obtain next predictions,
evaluate the next predictions to obtain next results, and
add the next algorithm to the algorithms list;
identify an optimum algorithm from the algorithms list, wherein the optimum algorithm is an individual algorithm of the algorithms list that is closer to fulfilling a predetermined criterion than other algorithms of the algorithms list; and
generate a report indicating the optimum algorithm; and
a network interface to transmit the report to a user system.
10. The computing system of claim 9, further comprising:
one or more processors to implement a modeling engine to:
obtain another set of algorithms from a modeling algorithms database,
obtain a set of benchmark datasets from the benchmark dataset database,
apply individual algorithms of the set of algorithms to individual benchmark datasets of the set of benchmark datasets to obtain benchmark predictions for the individual algorithms,
evaluate the benchmark predictions to obtain benchmark results for the individual algorithms, and
store, as the benchmark data, the benchmark results in a benchmark database,
wherein the prediction engine is to obtain the set of algorithms from the benchmark database based on the benchmark results.
11. The computing system of claim 10, wherein, to apply the individual algorithms to the individual benchmark datasets, the modeling engine is to:
execute the individual algorithms using data of the individual benchmark datasets as an input to the individual algorithms, and wherein the benchmark predictions comprise an output of the execution of the individual algorithms.
12. The computing system of claim 10, wherein, to evaluate the benchmark predictions, the modeling engine is to:
perform a holdout procedure using the benchmark predictions; or
perform a two-fold cross-validation procedure using the benchmark predictions.
13. The computing system of claim 10, wherein, to store the benchmark results, the modeling engine is to:
store, in the benchmark database, identifiers of the individual algorithms and identifiers of the benchmark datasets in association with the individual algorithms and the benchmark results.
14. The computing system of claim 9, wherein, to apply the first algorithm to the candidate dataset, the prediction engine is to:
execute the first algorithm using candidate data of the candidate dataset as an input to the first algorithm, and wherein the first predictions comprise an output of the execution of the first algorithm.
15. The computing system of claim 9, wherein, to apply the next algorithm to the candidate dataset, the prediction engine is to:
execute the next algorithm using candidate data of the candidate dataset as an input to the next algorithm, and wherein the next predictions comprise an output of the execution of the next algorithm.
16. The computing system of claim 9, wherein, to evaluate the first predictions or to evaluate the next predictions, the prediction engine is to:
perform a holdout procedure using the first predictions or the next predictions; or
perform a two-fold cross-validation procedure using the first predictions or the next predictions.
17. The computing system of claim 10, further comprising:
one or more processors to implement a source engine to obtain, from the user system, raw data or an indication of a location from which the raw data is to be obtained, convert the raw data into a user dataset, and store the user dataset in the user database; and
one or more processors to implement a modeling server to obtain a set of algorithms from a modeling algorithms database, obtain a set of benchmark datasets from the benchmark dataset database, apply individual algorithms of the set of algorithms to individual benchmark datasets of the set of benchmark datasets to obtain benchmark predictions for the individual algorithms, evaluate the benchmark predictions to obtain benchmark results for the individual algorithms, and store the benchmark results in a benchmark database.
18. A computing system comprising:
a prediction server to:
obtain an input dataset;
select a first algorithm from a set of algorithms;
apply the first algorithm to the input dataset to create a first model of the input dataset;
evaluate and control storage of results of applying the first algorithm to the input dataset;
add the first algorithm into a set of tried algorithms;
for each algorithm in the set of algorithms other than the first algorithm and until a termination condition is met:
select, via submodular optimization, a next algorithm of the set of algorithms to apply to the input dataset based on accessed historical performance data in a benchmark database and the set of tried algorithms,
evaluate and control storage of results of applying the next algorithm to the input dataset,
add the next algorithm to the set of tried algorithms;
identify an optimum algorithm from the set of tried algorithms, wherein the optimum algorithm is an individual algorithm of the set of tried algorithms that is closer to fulfilling a predetermined criterion than other algorithms of the set of tried algorithms;
generate a report indicating the optimum algorithm; and
an application server to implement a user interface, the application server to obtain the report and transmit the report to a user system.
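Illustrative only: a hedged sketch of the outer loop of claim 18. A first algorithm is tried, then a selection policy (for example, the greedy submodular step sketched after claim 20) repeatedly proposes the next algorithm given the set already tried, until a termination condition is met; the best-performing tried algorithm is then reported. All names are hypothetical.

```python
def select_algorithm(algorithms, input_dataset, evaluate, select_next, max_trials=5):
    """algorithms: {algo_id: estimator factory}; evaluate(factory, dataset) -> score;
    select_next(remaining_ids, tried_ids) -> algo_id."""
    tried, results = [], {}

    first = next(iter(algorithms))            # select a first algorithm
    results[first] = evaluate(algorithms[first], input_dataset)
    tried.append(first)                       # add it to the set of tried algorithms

    while len(tried) < min(max_trials, len(algorithms)):   # termination condition
        remaining = [a for a in algorithms if a not in tried]
        nxt = select_next(remaining, tried)   # e.g. a submodular-optimization step
        results[nxt] = evaluate(algorithms[nxt], input_dataset)
        tried.append(nxt)

    optimum = max(results, key=results.get)   # closest to the predetermined criterion
    return optimum, results
```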
19. The computing system of claim 18, further comprising:
a modeling server to obtain a set of algorithms from a modeling algorithms database, obtain a set of benchmark datasets from the benchmark dataset database, apply individual algorithms of the set of algorithms to individual benchmark datasets of the set of benchmark datasets to obtain benchmark predictions for the individual algorithms, evaluate the benchmark predictions to obtain benchmark results for the individual algorithms, and store the benchmark results in a benchmark database.
20. The computing system of claim 19, wherein the prediction server is further to generate a submodular function based on the stored results of applying the first algorithm and to select the next algorithm using the submodular function.
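Illustrative only: claim 20 ties selection to a submodular function built from stored results. One commonly used submodular form is a facility-location (weighted max-coverage) function over historical per-dataset scores: the value of a set of algorithms is the sum, over benchmark datasets, of the best score any algorithm in the set achieved. The sketch below assumes such a score dictionary and could supply the select_next policy used in the loop after claim 18; whether the claimed method uses this particular function is an assumption.

```python
def coverage(algo_set, scores, dataset_ids):
    """f(S) = sum over datasets of the best historical score of any algorithm in S.
    This set function is monotone and submodular (facility-location form)."""
    return sum(max((scores.get((a, d), 0.0) for a in algo_set), default=0.0)
               for d in dataset_ids)

def make_select_next(scores, dataset_ids):
    """Return a greedy policy: pick the algorithm with the largest marginal gain
    f(tried + {a}) - f(tried). scores: {(algo_id, dataset_id): score}."""
    def select_next(remaining_ids, tried_ids):
        base = coverage(tried_ids, scores, dataset_ids)
        return max(remaining_ids,
                   key=lambda a: coverage(list(tried_ids) + [a], scores, dataset_ids) - base)
    return select_next
```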
US15/478,097 2016-04-05 2017-04-03 Selection of machine learning algorithms Abandoned US20170286839A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US15/478,097 US20170286839A1 (en) 2016-04-05 2017-04-03 Selection of machine learning algorithms
US16/915,551 US11586953B2 (en) 2016-04-05 2020-06-29 Selection of machine learning algorithms
US18/099,904 US20230153654A1 (en) 2016-04-05 2023-01-20 Selection of machine learning algorithms

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662318672P 2016-04-05 2016-04-05
US15/478,097 US20170286839A1 (en) 2016-04-05 2017-04-03 Selection of machine learning algorithms

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/915,551 Continuation US11586953B2 (en) 2016-04-05 2020-06-29 Selection of machine learning algorithms

Publications (1)

Publication Number Publication Date
US20170286839A1 true US20170286839A1 (en) 2017-10-05

Family

ID=59961632

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/478,097 Abandoned US20170286839A1 (en) 2016-04-05 2017-04-03 Selection of machine learning algorithms
US16/915,551 Active 2037-07-11 US11586953B2 (en) 2016-04-05 2020-06-29 Selection of machine learning algorithms
US18/099,904 Pending US20230153654A1 (en) 2016-04-05 2023-01-20 Selection of machine learning algorithms

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/915,551 Active 2037-07-11 US11586953B2 (en) 2016-04-05 2020-06-29 Selection of machine learning algorithms
US18/099,904 Pending US20230153654A1 (en) 2016-04-05 2023-01-20 Selection of machine learning algorithms

Country Status (5)

Country Link
US (3) US20170286839A1 (en)
EP (1) EP3440554A4 (en)
AU (1) AU2017246545A1 (en)
BR (1) BR112018070581A2 (en)
WO (1) WO2017176742A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170286839A1 (en) * 2016-04-05 2017-10-05 BigML, Inc. Selection of machine learning algorithms
US11847771B2 (en) * 2020-05-01 2023-12-19 Samsung Electronics Co., Ltd. Systems and methods for quantitative evaluation of optical map quality and for data augmentation automation

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7512626B2 (en) * 2005-07-05 2009-03-31 International Business Machines Corporation System and method for selecting a data mining modeling algorithm for data mining applications
US7801836B2 (en) * 2006-09-27 2010-09-21 Infosys Technologies Ltd. Automated predictive data mining model selection using a genetic algorithm
US9330127B2 (en) * 2007-01-04 2016-05-03 Health Care Productivity, Inc. Methods and systems for automatic selection of classification and regression trees
US8255346B2 (en) * 2009-11-11 2012-08-28 International Business Machines Corporation Methods and systems for variable group selection and temporal causal modeling
US8473431B1 (en) * 2010-05-14 2013-06-25 Google Inc. Predictive analytic modeling platform
US8533222B2 (en) * 2011-01-26 2013-09-10 Google Inc. Updateable predictive analytical modeling
US9070047B2 (en) * 2011-12-27 2015-06-30 Microsoft Technology Licensing, Llc Decision tree fields to map dataset content to a set of parameters
US9536207B2 (en) * 2013-11-13 2017-01-03 Linkedin Corporation Using data obtained from an online social network to optimize subscription offers
US9911088B2 (en) * 2014-05-01 2018-03-06 Microsoft Technology Licensing, Llc Optimizing task recommendations in context-aware mobile crowdsourcing
US10068186B2 (en) * 2015-03-20 2018-09-04 Sap Se Model vector generation for machine learning algorithms
US10572819B2 (en) * 2015-07-29 2020-02-25 International Business Machines Corporation Automated intelligent data navigation and prediction tool
US20170193392A1 (en) * 2015-12-31 2017-07-06 Linkedin Corporation Automated machine learning tool
US20170286839A1 (en) * 2016-04-05 2017-10-05 BigML, Inc. Selection of machine learning algorithms

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190102693A1 (en) * 2017-09-29 2019-04-04 Facebook, Inc. Optimizing parameters for machine learning models
CN108009643A (en) * 2017-12-15 2018-05-08 清华大学 A kind of machine learning algorithm automatic selecting method and system
US20190325307A1 (en) * 2018-04-20 2019-10-24 EMC IP Holding Company LLC Estimation of resources utilized by deep learning applications
JP2019204392A (en) * 2018-05-25 2019-11-28 ギリア株式会社 Learning data discrimination device and learning data discrimination program
JP7037180B2 (en) 2018-05-25 2022-03-16 ギリア株式会社 Learning data discriminator and learning data discriminator
US20190370218A1 (en) * 2018-06-01 2019-12-05 Cisco Technology, Inc. On-premise machine learning model selection in a network assurance service
US11681947B2 (en) 2018-08-02 2023-06-20 Samsung Electronics Co., Ltd Method and apparatus for selecting model of machine learning based on meta-learning
US11574233B2 (en) 2018-08-30 2023-02-07 International Business Machines Corporation Suggestion and completion of deep learning models from a catalog
US11501191B2 (en) 2018-09-21 2022-11-15 International Business Machines Corporation Recommending machine learning models and source codes for input datasets
US11373063B2 (en) 2018-12-10 2022-06-28 International Business Machines Corporation System and method for staged ensemble classification
US11687795B2 (en) 2019-02-19 2023-06-27 International Business Machines Corporation Machine learning engineering through hybrid knowledge representation
US20220224990A1 (en) * 2019-05-15 2022-07-14 Nippon Telegraph And Telephone Corporation Control apparatus, control method, and program
US20210334699A1 (en) * 2020-04-24 2021-10-28 Ul Llc Using machine learning to virtualize product tests
CN111611085A (en) * 2020-05-28 2020-09-01 中国科学院自动化研究所 Man-machine hybrid enhanced intelligent system, method and device based on cloud edge collaboration
WO2022218633A1 (en) * 2021-04-13 2022-10-20 British Telecommunications Public Limited Company Algorithm selection for processor-controlled device
US20220343349A1 (en) * 2021-04-21 2022-10-27 Sensormatic Electronics, LLC Techniques for preventing shrink based on hybrid machine learning system
US20220343234A1 (en) * 2021-04-22 2022-10-27 Sensormatic Electronics, LLC Method for hybrid machine learning for shrink prevention system
KR102583120B1 (en) * 2023-05-24 2023-09-26 주식회사 노타 Apparatus and method for providing benchmark prediction result of artificial intelligence based model

Also Published As

Publication number Publication date
BR112018070581A2 (en) 2019-02-05
AU2017246545A1 (en) 2018-11-22
US20230153654A1 (en) 2023-05-18
EP3440554A4 (en) 2020-05-27
US20210019639A1 (en) 2021-01-21
EP3440554A1 (en) 2019-02-13
WO2017176742A1 (en) 2017-10-12
US11586953B2 (en) 2023-02-21

Similar Documents

Publication Publication Date Title
US11586953B2 (en) Selection of machine learning algorithms
US10459954B1 (en) Dataset connector and crawler to identify data lineage and segment data
JP2021532488A (en) Determining the suitability of machine learning models for datasets
US10387430B2 (en) Geometry-directed active question selection for question answering systems
Wang et al. Semi-supervised multiple feature analysis for action recognition
US20120158623A1 (en) Visualizing machine learning accuracy
US11429863B2 (en) Computer-readable recording medium having stored therein learning program, learning method, and learning apparatus
JP6979661B2 (en) Systems for shallow circuits as quantum classifiers, computer implementation methods and computer programs
US20170300573A1 (en) Webpage data analysis method and device
US20190273789A1 (en) Establishing and utilizing behavioral data thresholds for deep learning and other models to identify users across digital space
CN110162606B (en) Session agent learning model service selection for resolving client service requests
US11569981B1 (en) Blockchain network based on machine learning-based proof of work
US11388196B2 (en) System and method for analyzing relationships between clusters of electronic devices to counter cyberattacks
US20220036246A1 (en) Automated machine learning pipeline generation
US11586838B2 (en) End-to-end fuzzy entity matching
CN103218555A (en) Logging-in method and device for application program
JP5673473B2 (en) Distributed computer system and method for controlling distributed computer system
US20230067285A1 (en) Linkage data generator
WO2020140624A1 (en) Method for extracting data from log, and related device
US20230069079A1 (en) Statistical K-means Clustering
CN110716761A (en) Automatic and self-optimizing determination of execution parameters of software applications on an information processing platform
US11068779B2 (en) Statistical modeling techniques based neural network models for generating intelligence reports
WO2022160752A1 (en) Operations research optimization method and apparatus, and computing device
US20220351062A1 (en) Generating quantum service definitions from executing quantum services
US20210271966A1 (en) Transfer learning across automated machine learning systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: BIGML, INC., OREGON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:PARKER, CHARLES;REEL/FRAME:041835/0324

Effective date: 20170328

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION