WO2022109507A1 - Predicting sport outcomes using match state transitions - Google Patents

Predicting sport outcomes using match state transitions

Info

Publication number
WO2022109507A1
WO2022109507A1 (PCT/US2021/071582)
Authority
WO
WIPO (PCT)
Prior art keywords
probability
future
state
transition
model
Prior art date
Application number
PCT/US2021/071582
Other languages
English (en)
Inventor
Kyle ENGEL
Chris FLYNN
Henry SORSKY
Original Assignee
SimpleBet, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SimpleBet, Inc. filed Critical SimpleBet, Inc.
Priority to AU2021383910A1
Priority to CA3195283A1
Publication of WO2022109507A1

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/18 Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 Computing arrangements based on specific mathematical models
    • G06N 7/01 Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • The presently disclosed embodiments generally relate to predictive modeling and machine learning and, more particularly, to a system, method, and model structure for using machine learning to predict future sport outcomes based on match state transitions.
  • Stochastic models are probabilistic models that have wide applications in sciences, signal processing, information theory, and finance.
  • Markov chains, which are a specific type of stochastic model, are used to model discrete- or continuous-time processes in which a process transitions between states.
  • A key property of Markov chains is that they are "memoryless": only the present state is relevant in predicting future states. In other words, the states and transitions leading up to the present state do not matter.
  • Bayesian networks are probabilistic models that similarly have wide applications in sciences and finance. Generally, Bayesian networks are used to model probability distributions and conditional dependencies between variables.
  • The presently disclosed embodiments comprise a machine learning prediction system based on a Bayesian network model structure that works similarly in concept to Markov chain models but overcomes their undesirable "memoryless" property.
  • This system can be used to produce accurate estimates of future sport outcomes an arbitrary number of steps into the future. For example, in a Major League Baseball (MLB) game, while Batter 1 is batting, we may wish to predict the probability that Batter 4 hits a single. That probability depends on the results of Batters 1, 2, and 3, and it also depends on how the score, outs, and runners on base change as a result of those at-bats. Generally, the presently disclosed embodiments can be used to generate accurate probability distributions arbitrarily many steps into the future in systems that have measurable and statistically dependent state spaces.
  • MLB: Major League Baseball
  • A system for predicting future outcomes in a sporting match of a sport of interest based on match state transitions comprises: a transition machine learning model trained on historical data from past matches in the sport of interest; a state updater trained on historical data from past matches in the sport of interest; a final outcome machine learning model trained on historical data from past matches in the sport of interest; and a total probability predictor. The system executes the following steps: inputting an initial match state S0 of the sporting match into the transition machine learning model; generating, using the transition machine learning model, predicted probability distributions on a plurality of transition outcomes PT0 - PTi, where i is an integer; inputting the plurality of transition outcomes PTi into the state updater; generating, using the state updater, a plurality of predicted probability distributions on future states S1 - Si, where i is an integer, conditioned on each possible transition outcome PTi; inputting the plurality of predicted probability distributions on future states Si into the final outcome machine learning model; generating, using the final outcome machine learning model, predicted probability distributions on a desired final outcome PF; and combining, using the total probability predictor, the conditional probability distributions into a single probability distribution on the desired final outcome.
  • FIG. 1 is a schematic diagram of a computer-implemented system comprising machine learning models used to predict future sport outcomes according to an embodiment.
  • FIG. 2 is a schematic high-level Bayesian network diagram summarizing the operations of the system of FIG. 1 for an MLB at-bat match state predictor.
  • FIG. 3 is a schematic process flow diagram showing a more detailed view of the Bayesian network of FIG. 2.
  • FIG. 4 is a schematic diagram of a probabilistic graphical model of a Bayesian network for predicting future at-bat results according to an embodiment.
  • FIG. 5 is a schematic diagram of a probabilistic graphical model of a Bayesian network for predicting future at-bat results according to another embodiment.
  • The disclosed system has four main components: a Transition Machine Learning (ML) Model, a State Updater, a Final Outcome ML Model, and a Total Probability Predictor.
  • The State Updater and Total Probability Predictor components provide structure around the Transition ML Model and the Final Outcome ML Model.
  • The Transition ML Model and the Final Outcome ML Model are trained on historical statistical data from past matches in the sport of interest. All of these components are implemented as software running on general-purpose computing devices, as will be understood by those having skill in the art.
  • The process functions in two stages.
  • The first stage functions as follows, with continuing reference to FIG. 1.
  • The starting point is an initial match state, S0.
  • First, use the Transition ML Model 102 to generate predicted probability distributions on transition outcomes, PT0 - PTi, where i is an integer.
  • Then use the State Updater 104 to generate predicted probability distributions on future states, S1, S2, . . . Si, where i is an integer, conditioned on each possible transition outcome, PTi. Repeat this process of using the Transition ML Model 102 to generate probability distributions on transition outcomes and using the State Updater 104 to generate probability distributions on future states, for an arbitrary number of states into the future.
  • The new "match states" may loop back and be fed into the Transition ML Model 102.
  • The Final Outcome ML Model 106 then generates predicted probability distributions PF on the desired final outcome.
  • In this example, the final outcome PF is the probability that the batter three at-bats into the future will score a run.
  • Here N = 3 (three steps into the future), and the outputs of the State Updater 104 will be fed back to the Transition ML Model 102 in two loops before the Final Outcome ML Model 106 can generate the final outcome PF for this outcome of interest.
  • The second stage uses the Total Probability Predictor 108 to fold all conditional probabilities into a single probability distribution. This probability distribution represents the probabilities of the desired outcomes occurring the desired number of steps into the future. It is the Total Probability Predictor 108 that uses the intermediate probability distributions (Si, PTi, and PF) to parameterize a Bayesian network 110.
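The dataflow through the four components can be sketched in Python. Every distribution below is a hypothetical placeholder standing in for a trained model's output, and the state and outcome labels are invented for illustration:

```python
# Sketch of the four components of FIG. 1. The probabilities are hypothetical
# placeholders for trained-model outputs (the placeholders ignore their state
# argument; real models would condition on the full match state).

def transition_model(state):
    """Transition ML Model 102: P(transition outcome | match state)."""
    return {"hit": 0.21, "out": 0.56, "other": 0.23}

def state_updater(state, outcome):
    """State Updater 104: P(next match state | current state, outcome)."""
    if outcome == "hit":
        return {"runner_on_first": 0.7, "runner_on_second": 0.3}
    return {"bases_empty": 1.0}

def final_outcome_model(state):
    """Final Outcome ML Model 106: P(final outcome | future match state)."""
    return {"hit": 0.25, "out": 0.55, "other": 0.20}

def total_probability(initial_state):
    """Total Probability Predictor 108: fold conditionals into one distribution."""
    totals = {}
    for outcome, p_t in transition_model(initial_state).items():
        for next_state, p_s in state_updater(initial_state, outcome).items():
            for final, p_f in final_outcome_model(next_state).items():
                totals[final] = totals.get(final, 0.0) + p_t * p_s * p_f
    return totals

dist = total_probability("initial")
```

The nested loops are the one-step version of the Bayesian network 110: each leaf probability is the product of the conditionals along its path, and the Total Probability Predictor sums leaves that share a final outcome.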
  • All of the components 102, 104, and 106 of the model of FIG. 1 are trained on actual results from prior real-life games in the sport of interest.
  • The Transition ML Model 102 is a machine learning model that predicts probability distributions at the same resolution at which the modelled system takes steps or progresses. For example, treating an MLB game as the system, the system may progress at the pitch level or at the at-bat level. Accordingly, the Transition ML Model 102 in this system can predict pitch results or at-bat results.
  • The resulting probability distributions may have classes "ball", "strike", and "in play" for pitch-level Transition ML Model 102 predictions, or "hit", "out", and "other" for at-bat-level Transition ML Model 102 predictions.
  • The choice of resolution for the Transition ML Model 102 depends on the desired interpretation of the system's output.
  • In some embodiments, the Transition ML Model 102 may operate at the at-bat level.
  • In other embodiments, the Transition ML Model 102 may operate at the pitch level.
  • The Transition ML Model 102 is very similar in concept to the Final Outcome ML Model 106 discussed hereinbelow (and in some cases they can be identical). They may be trained on the same historical data (batter statistics, pitcher statistics, matchups, outs, balls, strikes, runners on base, etc., to name just a few non-limiting examples in the MLB case). The difference is that the Transition ML Model 102 predicts probability distributions on the outcome that the system transitions on, whereas the Final Outcome ML Model 106 predicts probability distributions for the desired final outcome. For baseball, the system may transition on plate appearances, for example, so the Transition ML Model 102 may predict probabilities of singles, doubles, walks, outs, etc. When the desired final outcome is something like "plate appearance result for the batter on deck", the Final Outcome ML Model 106 may be identical to the Transition ML Model 102, since the system in this example transitions on the same outcome desired for the final outcome.
  • State Updater 104
  • The outputs of the Transition ML Model 102 are provided as inputs to the State Updater 104.
  • The State Updater 104 can update match states in several ways. Three such embodiments are disclosed herein, although those skilled in the art will recognize in view of the present disclosure that additional methods may also be used. The first two embodiments apply to match state variables that depend on the Transition ML Model 102; the final embodiment does not.
  • In the first embodiment, the State Updater 104 can enumerate all possible initial match state variables and Transition ML Model 102 outcomes and use empirical data (e.g., probability distributions based on actual results from prior real-life games in the sport of interest) to extrapolate the expected future match states.
  • In this embodiment, the output probabilities are empirical probabilities (rather than predicted probabilities) taken from past base rates; they are not the output of a machine learning model. For example, in an MLB game, a simple initial match state may be "runner on first base, no runner on second base, no runner on third base", and a Transition ML Model 102 outcome may be "single".
  • The State Updater would take this information and extrapolate probability distributions on future states if the batter hits a single, which may look something like this: 70% "runner on first base, runner on second base, no runner on third base", 25% "runner on first base, no runner on second base, runner on third base", etc.
  • In other words, the most likely following match states are "runner on first and second base" or "runner on first and third base". Note that these probability distributions on future match states are computed for each possible Transition ML Model 102 outcome.
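The enumeration embodiment can be sketched as a lookup table keyed by (match state, transition outcome). The 70%/25% split comes from the example above; the 5% remainder and the base-occupancy encoding are assumptions for illustration:

```python
# Enumeration-based State Updater (first embodiment): a table of empirical
# next-state distributions keyed by (current bases occupied, transition outcome).
# The 70%/25% values follow the example in the text; the 5% remainder
# (e.g., runner scores or is thrown out) is a hypothetical filler.

EMPIRICAL_NEXT_STATES = {
    (frozenset({"1B"}), "single"): {
        frozenset({"1B", "2B"}): 0.70,  # runner advances one base
        frozenset({"1B", "3B"}): 0.25,  # runner takes two bases
        frozenset({"1B"}): 0.05,        # hypothetical remainder
    },
}

def update_state(state, outcome):
    """Return the empirical distribution over following match states."""
    return EMPIRICAL_NEXT_STATES[(state, outcome)]

dist = update_state(frozenset({"1B"}), "single")
```

Because every entry is a historical base rate, building this table is a counting exercise over past games rather than a model-fitting step.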
  • The enumeration method may be best used when match state variables are discrete and do not take on many unique values, since the "state space" can become enormous when the state variables take on many unique values.
  • In the second embodiment, the State Updater 104 itself can use a meta-ML model to predict future match states.
  • This method involves building a machine learning model based on past match states (general historical data from other games) and Transition ML Model 102 outcomes to predict future match states. For example, if a pitcher's total pitch count is 20 and he walks the batter, the meta-ML model of State Updater 104 may predict something like this for the pitcher's total pitch count for the next batter: 10% 24, 15% 25, 25% 26, 35% 27, etc.
  • The meta-ML model method may be best used with match state variables that take on many unique values.
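As a sketch of the meta-ML interface only, the following substitutes a simple frequency model fit on invented historical records; an actual embodiment would train a machine learning model rather than count base rates:

```python
# Stand-in for the meta-ML State Updater: a frequency model fit on toy
# "historical" records of (pitch count before, transition outcome, count after).
# The records are invented; a real embodiment would fit an ML model instead.
from collections import Counter, defaultdict

history = [
    (20, "walk", 26), (20, "walk", 27), (20, "walk", 27),
    (20, "walk", 24), (20, "single", 22), (20, "single", 23),
]

counts = defaultdict(Counter)
for before, outcome, after in history:
    counts[(before, outcome)][after] += 1

def predict_next_pitch_count(before, outcome):
    """P(pitcher's total pitch count for the next batter | current count, outcome)."""
    c = counts[(before, outcome)]
    total = sum(c.values())
    return {k: v / total for k, v in c.items()}

dist = predict_next_pitch_count(20, "walk")
```

The interface is what matters here: (current state variables, transition outcome) in, a probability distribution over the next state variable out.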
  • In the third embodiment, the State Updater 104 can simply retrieve pre-computed information for the following match state. This method is suitable when the following match state does not depend on the current state and is known at the time of the current match state.
  • For example, the on-deck batter's batting average may be information required for the future match state.
  • The on-deck batter's batting average doesn't depend on the result of the current at-bat, so the State Updater 104 can simply retrieve it to construct a future match state.
  • The State Updater 104 may employ combinations of these approaches, with or without additional approaches, to construct a future match state.
  • The Final Outcome ML Model 106 is a machine learning model that predicts probability distributions for a desired final outcome PF.
  • The Final Outcome ML Model 106 may be identical to the Transition ML Model 102 if the system steps at the same outcome as the desired final outcome.
  • For example, the Transition ML Model 102 may predict at-bat results (because the system progresses at the at-bat level), such as Hit, Out, or Other, and the Final Outcome ML Model 106 may also predict at-bat results (because we may want the at-bat probability distributions a number of steps into the future).
  • Alternatively, the Final Outcome ML Model 106 may be a separate model from the Transition ML Model 102. If it is desired to know probabilities of future outcomes for an outcome different from the outcome at which the match state transitions, then the Transition ML Model 102 and the Final Outcome ML Model 106 are different. For example, the Transition ML Model 102 may predict pitch results (because the system progresses at the pitch level), and the Final Outcome ML Model 106 may predict at-bat results (because we may want the at-bat probability distributions a number of steps into the future). The Final Outcome ML Model 106 may extrapolate from future pitch-level states provided by the State Updater 104 to predict future at-bat results.
  • In another example, the match states still transition at the at-bat level, so the Transition ML Model 102 still predicts probabilities of at-bat results (hit, out, other), while the Final Outcome ML Model 106 predicts probabilities of at-bat pitch counts (1, 2, 3, 4, etc.).
  • Referring now to FIG. 2, there is shown a schematic high-level Bayesian network diagram summarizing the operation of the Transition ML Model 102, the State Updater 104, and the Final Outcome ML Model 106 for an MLB at-bat match state predictor.
  • Each circle is a random variable.
  • FIG. 3 schematically illustrates a more detailed view of the Bayesian network of FIG. 2 after the system 100 has produced its outputs.
  • At the first level, the random variable is the current at-bat batter transition outcome (i.e., hit, out, or other).
  • The output of the Transition ML Model 102 is a realization of the probability distribution of the transition outcome for the current at-bat batter. As can be seen in FIG. 3, the Transition ML Model 102 has determined that the probability of a hit is 21%, the probability of an out is 56%, and the probability of another outcome is 23%. These probabilities come from the Transition ML Model 102 and apply to the current batter (who is at bat in the initial match state).
  • At the next level, the random variable is the match state when the on-deck batter is at bat (i.e., a time in the future from the current match state).
  • Using the possible transition outcomes generated by the Transition ML Model 102 (hit, out, or other), the State Updater 104 outputs a realization of this random variable: the probability of each possible future match state. Future match states depend on the outcome of the current batter, thus the State Updater 104 takes as input the Transition ML Model 102 output.
  • The possible future match states (along with their probabilities of occurring) when the on-deck batter takes the plate are shown at 304 in FIG. 3.
  • The State Updater 104 may output possible future match states any desired number of steps into the future by feeding one predicted match state back to the Transition ML Model 102 (as shown in FIG. 1) and obtaining from the Transition ML Model 102 the probability distribution of possible transitions from that predicted future match state. These transition outcomes may then be used by the State Updater 104 to predict the probabilities of possible future match states one additional step into the future. This process can be repeated as many times as desired to predict the probability of any future match state as many steps into the future as desired.
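The feedback loop just described can be sketched as repeated alternation between a transition model and a state updater; the two-state system and its probabilities below are hypothetical:

```python
# Propagating a match-state distribution N steps into the future by looping
# State Updater output back into the Transition ML Model (as in FIG. 1).
# The two-state system ("A", "B") and its probabilities are invented.

def transition_model(state):                      # stand-in for Model 102
    if state == "A":
        return {"advance": 0.6, "hold": 0.4}
    return {"advance": 0.3, "hold": 0.7}

def state_updater(state, outcome):                # stand-in for Updater 104
    return {"B": 1.0} if outcome == "advance" else {state: 1.0}

def propagate(initial_state, n_steps):
    """Return P(match state) after n_steps transition/update iterations."""
    dist = {initial_state: 1.0}
    for _ in range(n_steps):
        nxt = {}
        for state, p_state in dist.items():
            for outcome, p_t in transition_model(state).items():
                for new_state, p_s in state_updater(state, outcome).items():
                    nxt[new_state] = nxt.get(new_state, 0.0) + p_state * p_t * p_s
        dist = nxt
    return dist

three_ahead = propagate("A", 3)
```

Each pass through the loop is one "step" of the system; keeping a full distribution over states (rather than a single sampled state) is what preserves the conditional structure the Total Probability Predictor later folds together.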
  • At the final level, the random variable is the on-deck batter final outcome (i.e., the outcome of the on-deck batter's future at-bat).
  • Using each possible future match state generated by the State Updater 104 as input, the Final Outcome ML Model 106 outputs a realization of this random variable: the probability of hit, out, or other for the on-deck batter's future at-bat.
  • The final outcome of the on-deck batter's future at-bat depends on the future match state that exists when the on-deck batter takes the plate, thus the Final Outcome ML Model 106 takes as input the State Updater 104 output.
  • The possible future outcomes when the on-deck batter completes his/her at-bat are shown at 306 in FIG. 3. Any of these possible future outcomes 306 may be selected as the desired final outcome PF that is output by the Final Outcome ML Model 106.
  • The Total Probability Predictor 108 can compute the desired probability distributions for future outcomes a number of steps into the future in several ways; those skilled in the art will recognize in view of the present disclosure that additional methods may also be used.
  • Each of the probability distributions created by the Transition ML Model 102 (PTi), the State Updater 104 (Si), and the Final Outcome ML Model 106 (PF) is used as an input to the Total Probability Predictor 108.
  • Together, these distributions parameterize a Bayesian network 110 that captures the conditional probabilities of each possible outcome.
  • The Total Probability Predictor 108 is then used to compute the final desired probability distributions.
  • Each probability shown in FIG. 3 is an input to the Total Probability Predictor 108.
  • The probabilities of each possible pathway to arriving at a hit (there are 9) are summed by the Total Probability Predictor 108 to get the "total probability". The Final Outcome ML Model 106 has determined the last layer of probabilities for each possible final outcome, but those probabilities are conditional on the previous states and transition outcomes.
  • The Total Probability Predictor 108 takes all of those conditional probabilities and outputs unconditional probabilities. It answers the question "What is the probability that the on-deck hitter gets a hit?"
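The conversion from conditional to unconditional probabilities is the law of total probability: sum, over every pathway, the product of the probabilities along that pathway. With three transition outcomes and three future states (all numbers below hypothetical, not taken from the figure), the nine pathways to a hit sum as follows:

```python
# Law of total probability over the nine conditional pathways of FIG. 3:
# P(on-deck hit) = sum over transition outcomes t and future states s of
#     P(t) * P(s | t) * P(hit | s).
# All probability values are illustrative stand-ins for model outputs.

p_transition = {"hit": 0.21, "out": 0.56, "other": 0.23}       # Model 102
p_state = {                                                     # Updater 104
    "hit":   {"s1": 0.70, "s2": 0.25, "s3": 0.05},
    "out":   {"s1": 0.10, "s2": 0.60, "s3": 0.30},
    "other": {"s1": 0.20, "s2": 0.50, "s3": 0.30},
}
p_hit_given_state = {"s1": 0.28, "s2": 0.24, "s3": 0.20}        # Model 106

p_hit = sum(
    p_transition[t] * p_state[t][s] * p_hit_given_state[s]
    for t in p_transition
    for s in p_state[t]
)  # nine products, one per pathway
```

The same sum repeated for "out" and "other" yields the full unconditional distribution over the on-deck batter's at-bat result.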
  • In one embodiment, the Total Probability Predictor 108 may perform an exact calculation. This method is suitable when the Transition ML Model 102 and the Final Outcome ML Model 106 do not contain many classes in their respective outcome variables and when the number of steps into the future is small, for example.
  • The exact calculation can be computed using matrix multiplication and linear algebra by representing the Bayesian network 110 with tensor data structures, or using recursive calculations by representing the Bayesian network 110 with tree data structures, to name just two non-limiting examples.
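A sketch of the tensor representation, assuming NumPy is available: the one-step dynamics (Transition ML Model folded together with the State Updater) become a state-transition matrix M, the Final Outcome ML Model becomes a matrix F, and the exact answer is a chain of matrix products. All numbers are hypothetical:

```python
# Exact calculation via linear algebra. M[i, j] = P(next state j | state i)
# folds the Transition ML Model and State Updater into one matrix;
# F[i, k] = P(final outcome k | state i). All values are invented.
import numpy as np

M = np.array([[0.8, 0.2],
              [0.4, 0.6]])           # one-step state-transition matrix
F = np.array([[0.30, 0.70],
              [0.15, 0.85]])         # columns: P(hit), P(no hit)

s0 = np.array([1.0, 0.0])            # initial match state is state 0

n_steps = 3
final_dist = s0 @ np.linalg.matrix_power(M, n_steps) @ F
```

Repeated squaring inside `matrix_power` keeps the step count cheap, but the matrices themselves grow with the square of the state-space size, which is exactly the memory pressure the text notes for the exact method.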
  • The exact calculation may consume large amounts of computer memory, such as random access memory (RAM).
  • In another embodiment, the Total Probability Predictor 108 may perform an approximate calculation. This method is suitable when the Transition ML Model 102 and/or the Final Outcome ML Model 106 do contain many classes in their respective outcome variables, and/or when the number of steps into the future is large, for example.
  • The approximate calculation can be computed using Monte Carlo methods, for example. The approximate calculation may take longer to compute, but it does not require the same computer memory resources as the exact calculation.
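A minimal Monte Carlo sketch of the approximate calculation, using the same kind of toy distributions (all values hypothetical): sample complete pathways through the network and count how often the final outcome of interest occurs:

```python
# Approximate calculation via Monte Carlo: sample full pathways through the
# network instead of enumerating them. All distributions are invented.
import random

random.seed(0)

def sample(dist):
    """Draw one outcome from a {value: probability} distribution."""
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value  # guard against floating-point round-off

p_transition = {"hit": 0.21, "out": 0.56, "other": 0.23}
p_state = {"hit":   {"s1": 0.7, "s2": 0.3},
           "out":   {"s1": 0.2, "s2": 0.8},
           "other": {"s1": 0.5, "s2": 0.5}}
p_hit_given_state = {"s1": 0.3, "s2": 0.2}

n = 100_000
hits = 0
for _ in range(n):
    t = sample(p_transition)              # transition outcome
    s = sample(p_state[t])                # future match state
    if random.random() < p_hit_given_state[s]:
        hits += 1                         # final outcome of interest

estimate = hits / n
```

Memory use stays constant regardless of the number of classes or steps, at the cost of sampling error that shrinks only as the square root of the number of samples.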
  • In a first example, the desired final outcome is the pitch count of an at-bat for the current batter, the on-deck batter, and the in-the-hole batter.
  • Say that pitch counts can be "1", "2", or "3+".
  • The Final Outcome ML Model 106 predicts probability distributions on pitch counts.
  • The Transition ML Model 102 predicts probability distributions on at-bat results. Say that at-bat results can be "hit", "out", or "other".
  • Define the match state variables as the number of outs, runner on 1st base, runner on 2nd base, and runner on 3rd base. For simplicity, we symbolize the match state as a vector. For example, take (1, 1, 0, 0) to mean one out, a runner on 1st base, and no runners on 2nd or 3rd base.
  • The probabilities produced by the Transition ML Model 102 depend on the batters and pitchers. For example, better batters would have higher probabilities of hits.
  • These probabilities are determined by a machine learning model trained on historical data.
  • The State Updater 104 does not need to know the probabilities of a single, double, etc.; it just needs to know what the possible outcomes are. The State Updater 104 knows that if a single happens, then the possible future states are A, B, C with associated probabilities; if a double happens, then the possible future states are X, Y, Z with associated probabilities; and so on.
  • Likewise, a (0, 0, 0, 1) match state (no outs, runner on 3rd base only) would produce its own distribution over following match states.
  • Bayesian networks are a powerful inference tool in which a set of variables is represented as nodes; an edge represents a dependence between two variables, and the lack of an edge represents conditional independence between them.
  • In this example, the conditional probabilities are hard-coded for illustrative purposes.
  • This Example 2 uses the pomegranate package, a Python package that implements probabilistic graphical models. We will use pomegranate to build a graphical model and perform inference on it.
  • Our final goal is to compute the conditional probability distribution at each level of future at-bats. For instance, we would like to know the probability of a single on at-bat 2, conditioned on all previous at-bat outcomes.
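Because pomegranate's API has changed substantially across versions, the sketch below performs the same inference in plain Python instead: hard-coded conditional probability tables (as the example describes), a conditional query on at-bat 2 given at-bat 1, and the marginal obtained by summing over at-bat 1. All probability values are invented:

```python
# Conditional distribution at each level of future at-bats, computed directly
# from hard-coded conditional probability tables (stand-ins for the tables a
# pomegranate graphical model would hold).

p_ab1 = {"single": 0.2, "out": 0.7, "other": 0.1}

# P(at-bat 2 outcome | at-bat 1 outcome) -- illustrative values.
p_ab2_given_ab1 = {
    "single": {"single": 0.25, "out": 0.65, "other": 0.10},
    "out":    {"single": 0.18, "out": 0.72, "other": 0.10},
    "other":  {"single": 0.20, "out": 0.70, "other": 0.10},
}

def conditional(ab1_outcome):
    """P(at-bat 2 outcome | observed at-bat 1 outcome): a table lookup."""
    return p_ab2_given_ab1[ab1_outcome]

def marginal_ab2():
    """Unconditional P(at-bat 2 outcome), marginalizing over at-bat 1."""
    out = {}
    for o1, p1 in p_ab1.items():
        for o2, p2 in p_ab2_given_ab1[o1].items():
            out[o2] = out.get(o2, 0.0) + p1 * p2
    return out

m = marginal_ab2()
```

The conditional query ("probability of a single on at-bat 2, conditioned on all previous at-bat outcomes") is a direct table lookup once the network's tables are in hand; the marginal is the same law-of-total-probability sum the Total Probability Predictor performs.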
  • The tree diagram shown in FIG. 4 isn't actually a proper probabilistic graphical model: each node should represent the full distribution at its layer, not be broken down by each possible realization from that distribution.


Abstract

A system and method are disclosed that use machine learning prediction to estimate future sport outcomes by mathematically representing current and future match states as a Bayesian network parameterized by underlying machine learning models.
PCT/US2021/071582 2020-11-20 2021-09-24 Predicting sport outcomes using match state transitions WO2022109507A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU2021383910A AU2021383910A1 (en) 2020-11-20 2021-09-24 Predicting sport outcomes using match state transitions
CA3195283A CA3195283A1 (fr) 2020-11-20 2021-09-24 Predicting sport outcomes using match state transitions

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202063116573P 2020-11-20 2020-11-20
US63/116,573 2020-11-20
US17/184,132 US20220164702A1 (en) 2020-11-20 2021-02-24 System, method, and model structure for using machine learning to predict future sport outcomes based on match state transitions
US17/184,132 2021-02-24

Publications (1)

Publication Number Publication Date
WO2022109507A1 true WO2022109507A1 (fr) 2022-05-27

Family

ID=81657770

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/071582 WO2022109507A1 (fr) 2020-11-20 2021-09-24 Predicting sport outcomes using match state transitions

Country Status (4)

Country Link
US (1) US20220164702A1 (fr)
AU (1) AU2021383910A1 (fr)
CA (1) CA3195283A1 (fr)
WO (1) WO2022109507A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7296007B1 (en) * 2004-07-06 2007-11-13 Ailive, Inc. Real time context learning by software agents
US20130304622A1 (en) * 2010-01-19 2013-11-14 Ronald L. Johannes Methods and systems for computing trading strategies for use in portfolio management and computing associated probability distributions for use in option pricing
US10309812B1 (en) * 2013-05-24 2019-06-04 University Of Wyoming System and method of using the same
US20190228290A1 (en) * 2018-01-21 2019-07-25 Stats Llc Method and System for Interactive, Interpretable, and Improved Match and Player Performance Predictions in Team Sports
US20200027444A1 (en) * 2018-07-20 2020-01-23 Google Llc Speech recognition with sequence-to-sequence models


Also Published As

Publication number Publication date
CA3195283A1 (fr) 2022-05-27
AU2021383910A1 (en) 2023-05-18
US20220164702A1 (en) 2022-05-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21895845

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3195283

Country of ref document: CA

ENP Entry into the national phase

Ref document number: 2021383910

Country of ref document: AU

Date of ref document: 20210924

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21895845

Country of ref document: EP

Kind code of ref document: A1