WO2019086867A1 - A computer implemented determination method and system - Google Patents

A computer implemented determination method and system

Info

Publication number
WO2019086867A1
Authority
WO
WIPO (PCT)
Prior art keywords
model
user
probabilistic graphical
probability
graphical model
Prior art date
Application number
PCT/GB2018/053154
Other languages
French (fr)
Inventor
Laura Helen DOUGLAS
Iliyan Radev ZAROV
Konstantinos GOURGOULIAS
Christopher Lucas
Christopher Robert HART
Adam Philip BAKER
Maneesh Sahani
Iurii PEROV
Saurabh JOHRI
Pavel MYSHKOV
Robert WALECKI
Original Assignee
Babylon Partners Limited
Priority date
Filing date
Publication date
Priority claimed from GB1718003.5A external-priority patent/GB2567900A/en
Application filed by Babylon Partners Limited filed Critical Babylon Partners Limited
Priority to CN201880071038.4A priority Critical patent/CN111602150A/en
Priority to EP18815276.3A priority patent/EP3704639A1/en
Priority to US16/325,681 priority patent/US20210358624A1/en
Priority to US16/277,975 priority patent/US11328215B2/en
Priority to US16/277,956 priority patent/US20190251461A1/en
Priority to US16/277,970 priority patent/US11348022B2/en
Publication of WO2019086867A1 publication Critical patent/WO2019086867A1/en


Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 - Complex mathematical operations
    • G06F 17/16 - Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06N 20/20 - Ensemble learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 5/00 - Computing arrangements using knowledge-based models
    • G06N 5/04 - Inference or reasoning models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 7/00 - Computing arrangements based on specific mathematical models
    • G06N 7/01 - Probabilistic graphical models, e.g. probabilistic networks
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B 45/00 - ICT specially adapted for bioinformatics-related data visualisation, e.g. displaying of maps or networks
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16B - BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
    • G16B 5/00 - ICT specially adapted for modelling or simulations in systems biology, e.g. gene-regulatory networks, protein interaction networks or metabolic networks
    • G16B 5/20 - Probabilistic models
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for calculating health indices; for individual health risk assessment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 50/00 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H 50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders

Definitions

  • Embodiments of the present invention relate to the field of computer implemented determination methods and systems.
  • Graphical models provide a natural framework for expressing the probabilistic relationships between random variables in numerous fields across the natural sciences.
  • Bayesian networks, a directed form of graphical model, have been used extensively in medicine to capture causal relationships between entities such as risk factors, diseases and symptoms, and to facilitate medical decision-making tasks such as disease diagnosis.
  • Key to decision-making is the process of performing probabilistic inference to update one's prior beliefs about the likelihood of a set of diseases, based on the observation of new evidence.
  • Figure 1 is an overview of a system in accordance with an embodiment
  • Figure 2 is a schematic diagram of a simple graphical model
  • Figure 3 is a flow diagram showing the training of a discriminative model to use the system of figure 1;
  • Figure 4 is a flow diagram showing the use of the trained model with the inference engine of figure 1;
  • Figure 5 is a basic schematic of a processing system with a GPU
  • Figure 6 is a schematic of an overview of a system in accordance with an embodiment
  • Figure 7 is a flow diagram showing the training of a discriminative model to use the system of figure 6;
  • Figure 8 is a flow diagram showing the use of the trained model with the inference engine of figure 6;
  • Figure 9(a) is a schematic of a graphical model and Figure 9(b) the corresponding UM architecture. The nodes of the graph in (a) are categorised by their depth inside the network, and the weights of the UM neural network in (b) are shared for nodes of the same category;
  • Figure 10 shows the performance of the above system on three different graphical models.
  • Figure 10(a) shows results from a synthetic graph with 96 nodes
  • Figure 10(b) shows results from a synthetic graph with 768 nodes
  • Figure 10(c) shows results from a medical PGM.
  • Inference was applied through importance sampling with and without the support of a trained UM, and it was evaluated in terms of Pearson Correlation Coefficient (PCC), Mean Absolute Error (MAE) and Effective Sample Size (ESS); and
  • Figure 11 shows the embeddings filtered for two sets of symptoms and risk factors, where each scatter point corresponds to a set of evidence.
  • Figure 11(a) shows results for diabetes embeddings and Figure 11(b) shows results for smoking and obesity embeddings.
  • The displayed embedding vectors correspond to the first two components. It can be seen that they separate unrelated medical concepts quite well and show an overlap for concepts which are closely related.
  • a method for providing a computer implemented medical diagnosis comprising: receiving an input from a user comprising at least one symptom; providing the at least one symptom as an input to a medical model; using the medical model to determine the probability of the user having a disease stored in the medical model from the provided input; and outputting the probability of the user having one or more diseases, wherein said medical model comprises a probabilistic graphical model containing the probability distributions and the relationships between symptoms and diseases, and an inference engine configured to perform Bayesian inference on said probabilistic graphical model using a discriminative model, wherein the discriminative model has been pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples generated from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about their symptoms, and wherein determining the probability that the user has a disease comprises deriving estimates of the probabilities that the user has that disease from the discriminative model.
  • Medical diagnosis systems require significant computing resources such as processor capacity.
  • the disclosed systems and methods solve this technical problem with a technical solution, namely by conducting approximate statistical inference on a PGM with the help of a discriminative model (e.g. a neural net) to provide an estimate of the posterior probabilities.
  • the discriminative model is trained such that it is robust to the user providing incomplete information about their symptoms. This allows the system to produce answers using the new approximate inference with accuracy comparable to exact or existing approximate inference techniques, but in a fraction of the time and with a reduction in the processing required.
  • the inference engine may be configured to perform importance sampling over conditional marginals. However, other methods may be used, such as variational inference or other Monte Carlo methods.
  • the discriminative model can be a neural network.
  • In some embodiments the neural network is a single neural network; in other embodiments the neural network is as described in Further Embodiment A.
  • the neural net can approximate the outputs of the probabilistic graphical model and hence later in this document, it is termed a Universal Marginaliser (UM).
  • the probabilistic graphical model is a noisy-OR model.
  • performing probabilistic inference is computationally expensive, and in medicine, where large-scale Bayesian networks are required to make clinically robust diagnoses, it is not feasible to apply exact inference techniques. Instead, approximate, sampling-based algorithms are used which provide theoretical guarantees (under the central limit theorem) regarding convergence to the true posterior. In the context of medical diagnosis, this amounts to arriving at the true disease differential, based on the evidence and the underlying model.
  • the task of inference is to sample from an independent 'proposal' distribution which, ideally, is as close as possible to the target.
  • the standard approach when applying Bayesian networks for medical decision-making, is to use the model prior as the proposal distribution.
  • this is often not ideal, particularly in cases where an unusual combination of symptoms is generated by a rare disease.
  • a large number of samples is often required to reduce the variance in the estimate of the true posterior; this poses a significant practical constraint to the use of sampling algorithms for inference.
  • Figure 1 is a schematic of a method in accordance with an embodiment.
  • A patient 101 inputs their symptoms in step S103 via interface 105.
  • The patient may also input their risk factors, for example, whether they are a smoker, their weight, etc. The interface may be adapted to ask the patient 101 specific questions. Alternatively, the patient may simply enter free text.
  • the patient's risk factors may be derived from the patient's records held in a database (not shown). Therefore, once the patient has identified themselves, data about the patient could be accessed via the system.
  • follow-up questions may be asked by the interface 105. How this is achieved will be explained later. First, it will be assumed that the patient provides all possible information (evidence) to the system at the start of the process. This will be used to explain the basic procedure. A variation on the procedure will then be explained in which the patient gives only partial information and the system, once the first analysis is complete, requests further information.
  • the evidence will be taken to be the presence or absence of all known symptoms and risk factors. Symptoms and risk factors for which the patient has been unable to provide a response will be assumed to be unknown.
  • In step S107, this evidence is passed to the inference engine 109.
  • Inference engine 109 performs Bayesian inference on PGM 120.
  • PGM 120 will be described in more detail with reference to figure 2 after the discussion of figure 1.
  • the inference engine 109 performs approximate inference.
  • the inference engine 109 is configured to perform Importance Sampling. Importance sampling is described with reference to equation 3 below.
  • When performing approximate inference, the inference engine 109 requires an approximation of the probability distributions within the PGM to act as proposals for the sampling.
  • the evidence is passed to what would be termed a universal marginaliser (UM) 113.
  • the UM will be described in more detail with reference to both figures 3 and 4.
  • the UM is a neural network that has been trained to approximate the outputs of the PGM 120.
  • the UM is a model that can approximate the behaviour of the entire PGM 120.
  • the UM is a single neural net
  • the model is a neural network which consists of several sub-networks, such that the whole architecture is a form of auto-encoder-like model but with multiple branches.
  • the UM as will be described with reference to figure 3 is trained to be robust to the patient giving incomplete answers. This is achieved via the masking procedure for training the UM that will be described with reference to figure 3.
  • In step S115 the UM returns probabilities to be used as proposals by the inference engine 109.
  • the inference engine 109 then performs importance sampling using the proposals from the UM as estimates and the PGM 120.
  • the inference engine 109 calculates the "likelihood" (conditional marginal probability) P(Disease_i | Evidence)
  • the inference engine can also determine:
  • In step S117, it can transmit back information concerning the "likelihood" of a disease, given the evidence supplied by the patient 101, to the interface 105.
  • the interface 105 can then supply this information back to the patient in step S119.
  • the system determines whether further information is required from the patient 101.
  • the inference engine 109 determines:
  • the analysis to determine whether a further question should be asked, and what that question should be, is based purely on the output of the UM 113, which provides an estimate of the probabilities.
  • Alternatively, the probabilities derived directly from the PGM, via importance sampling using the UM, can be used to make this decision.
  • Figure 2 is a depiction of a graphical model of the type used in the system of figure 1.
  • the graphical model provides a natural framework for expressing probabilistic relationships between random variables, to facilitate causal modelling and decision making.
  • D stands for diagnosis
  • S for symptom
  • RF for risk factor
  • the model is used in the field of diagnosis.
  • In the first layer, there are three nodes S1, S2 and S3;
  • in the second layer, three nodes D1, D2 and D3;
  • in the third layer, three nodes RF1, RF2 and RF3.
  • each arrow indicates a dependency.
  • D1 depends on RF1 and RF2
  • D2 depends on RF2, RF3 and D1. Further relationships are possible.
  • each node is only dependent on a node or nodes from a different layer. However, nodes may be dependent on other nodes within the same layer.
  • the graphical model of figure 1 is a Bayesian Network.
  • the network represents a set of random variables and their conditional dependencies via a directed acyclic graph.
  • Given full (or partial) evidence over symptoms S1, S2 and S3 and risk factors RF1, RF2 and RF3, the network of figure 2 can be used to represent the probabilities of various diseases D1, D2 and D3.
  • the BN allows probabilistic inference to update one's beliefs about the likelihood of a set of events, based on observed evidence.
  • performing inference on large- scale graphical models is computationally expensive.
  • approximate inference techniques are used, such as variational inference or Monte Carlo methods.
  • P(RF, D, S) = P(RF) P(D|RF) P(S|D).
  • For simplicity, for this explanation it will be assumed that all nodes are binary.
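As a concrete illustration, ancestral sampling from such a three-layer binary model P(RF, D, S) = P(RF) P(D|RF) P(S|D) can be sketched as follows. This is only a minimal sketch: the logistic CPDs and all names (`W_d`, `sample_ancestral`, etc.) are illustrative stand-ins, not the embodiment's actual noisy-OR parameterisation.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy CPDs for the layered model P(RF, D, S) = P(RF) P(D|RF) P(S|D):
# each child's activation probability is a logistic function of its
# parents' states (purely illustrative parameterisation)
W_d = rng.normal(size=(3, 3))
W_s = rng.normal(size=(3, 3))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sample_ancestral(prior_rf):
    """One joint binary sample, drawn parent layer first (ancestral order)."""
    rf = (rng.random(3) < prior_rf).astype(int)          # roots: P(RF)
    d = (rng.random(3) < sigmoid(W_d @ rf - 1.0)).astype(int)  # P(D|RF)
    s = (rng.random(3) < sigmoid(W_s @ d - 1.0)).astype(int)   # P(S|D)
    return np.concatenate([rf, d, s])

sample = sample_ancestral(np.array([0.3, 0.1, 0.2]))  # binary vector, length 9
```

Each call yields one combined vector (RF, D, S), i.e. one training example for the discriminative model.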
  • a discriminative model UM (e.g. a feedforward neural network) is trained by sampling from the generative model.
  • Each sample, i.e. the combined vector (RF, D, S), becomes a training example.
  • values are "obscured" for each element of the sample vector with some probability.
  • the probability can be different depending on the value of that vector element; if that is the case, the cross-entropy loss should be weighted appropriately by the probability.
  • the output contains exactly the sample without any obscuring.
  • Each element of the sample vector is a separate independent output node.
  • the loss function is the cross-entropy for each output node.
  • the discriminative model is expected to learn exactly the conditional marginal P(node | partially_obscured(RF, D, S)), where node can be any risk factor, disease or symptom.
  • This conditional marginal approximation can then be used to sample from the joint distribution by iterating node by node, with fewer and fewer risk factors and symptoms obscured.
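The node-by-node sampling just described can be sketched as follows, with a trivial stub standing in for the trained discriminative model (all names are illustrative):

```python
import numpy as np

MASK = -1  # sentinel for an obscured node

def sample_joint(cond_marginal, n_nodes, evidence, rng):
    """Sample a full instantiation node by node: each unobserved node is
    drawn from its approximate conditional marginal and then unmasked, so
    later nodes are conditioned on progressively fewer obscured values."""
    x = np.full(n_nodes, MASK)
    for i, v in evidence.items():
        x[i] = v
    for i in range(n_nodes):
        if x[i] == MASK:
            p = cond_marginal(x)[i]       # marginal given all unmasked values
            x[i] = int(rng.random() < p)  # node is unmasked from here on
    return x

# toy stand-in for the trained model: constant 0.5 marginals for every node
rng = np.random.default_rng(1)
um_stub = lambda x: np.full(x.size, 0.5)
sample = sample_joint(um_stub, 6, {0: 1}, rng)
```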
  • the task of inference is to sample from an independent 'proposal' distribution, which ideally, is as close to the target distribution as possible.
  • inference is performed by considering the set of random variables X = {X_1, ..., X_n}.
  • A BN is a combination of a directed acyclic graph (DAG), with the X_i as nodes, and a joint distribution P of the X_i.
  • The distribution P factorizes according to the structure of the DAG: P(X_1, ..., X_n) = ∏_i P(X_i | pa(X_i)), where pa(X_i) are the parents of X_i.
  • Equation (2) could be computed exactly.
  • exact inference becomes intractable in large BNs as computational costs grow exponentially with effective clique size, in the worst case becoming an NP-hard problem
  • the strategy is to estimate P(X_i | X_O) with an importance sampling estimator, provided there is an appropriate proposal distribution Q to sample from.
  • a discriminative model UM(·) (a feedforward neural network, or a neural network with an architecture related to an autoencoder but with multiple branches) is trained to approximate any possible posterior marginal distribution for any binary BN.
  • n is the total number of nodes
  • X_O are the observations.
  • Y is a vector of conditional marginal probabilities for every node in the BN, whether observed or not (if node X_i is observed, the marginal posterior distribution for it will be trivial, i.e. a point mass on the observed value).
  • the training process for the above described UM involves generating samples from the underlying BN, in each sample masking some of the nodes, and then training with the aim to learn a distribution over this data. This process is explained through the rest of the section and illustrated in Figure 3.
  • Such a model can be trained off-line by generating samples from the original BN (PGM 120 of figure 1) via ancestral sampling in step S201.
  • unbiased samples are generated from the probabilistic graphical model (PGM) using ancestral sampling
  • Each sample is a binary vector which will be the values for the classifier to learn to predict.
  • For the purpose of prediction, some nodes in the sample are then hidden, or "masked", in step S203.
  • This masking is either deterministic (in the sense of always masking certain nodes) or probabilistic over nodes.
  • each node is probabilistically masked (in an unbiased way) for each sample, by choosing a masking probability p for that sample and then masking each node in the sample with probability p.
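A minimal sketch of this unbiased masking scheme, assuming samples are rows of a binary matrix and -1 marks an obscured node (the function name and sentinel are illustrative):

```python
import numpy as np

def mask_samples(samples, rng, mask_value=-1):
    """Unbiased probabilistic masking: for each sample draw a masking
    probability p ~ U[0, 1], then hide each node with probability p."""
    masked = samples.copy()
    p = rng.random(samples.shape[0])               # one p per sample
    hide = rng.random(samples.shape) < p[:, None]  # per-node coin flips
    masked[hide] = mask_value
    return masked

rng = np.random.default_rng(0)
batch = rng.integers(0, 2, size=(4, 8))  # 4 binary samples over 8 nodes
masked = mask_samples(batch, rng)
```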
  • the masking process is as described in Further Embodiment A.
  • the nodes which are masked (or unobserved when it comes to inference time) are represented consistently in the input tensor in step S205. Different representations of obscured nodes will be described later; for now, they will be represented as a '*'.
  • the neural network is then trained using a cross entropy loss function in step S207 in a multi-label classification setting to predict the state of all observed and unobserved nodes.
  • While a cross-entropy loss function is used, any reasonable (i.e. twice-differentiable) loss function could be used.
  • the output of the neural net can be mapped to posterior probability estimates.
  • When the cross-entropy loss is used, the output from the neural net is exactly the predicted probability distribution.
  • the loss function is split for different sub-sets of nodes for more efficient learning as described in Further Embodiment A.
  • the trained neural network can then be used to obtain the desired probability estimates by directly taking the output of the sigmoid layer. This result could be used as a posterior estimate.
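The training loop can be sketched with a deliberately tiny stand-in network: one linear layer plus sigmoid, trained with cross-entropy to predict every node's state from the masked input (multi-label setting). The real UM is a deeper network; only the shape of the step (sigmoid outputs, cross-entropy gradient) is meant to carry over, and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_nodes, lr = 8, 0.5
W = rng.normal(0.0, 0.1, (n_nodes, n_nodes))
b = np.zeros(n_nodes)

def train_step(x_masked, y_true):
    """One gradient step of multi-label cross-entropy training."""
    global W, b
    y_hat = sigmoid(W @ x_masked + b)  # predicted per-node probabilities
    grad = y_hat - y_true              # d(cross-entropy)/d(logits) for sigmoid
    W -= lr * np.outer(grad, x_masked)
    b -= lr * grad
    return y_hat

y = rng.integers(0, 2, n_nodes).astype(float)    # one unmasked sample
x = np.where(rng.random(n_nodes) < 0.3, 0.5, y)  # masked copy (0.5 = '*')
for _ in range(200):
    train_step(x, y)
out = train_step(x, y)  # sigmoid outputs read off directly as estimates
```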
  • the UM is combined with importance sampling to improve the accuracy of UM and the speed of importance sampling.
  • a discriminative model is now produced which, given any set of observations o, will approximate all the posterior marginals in step S209.
  • the training of the discriminative model can be performed, as is common practice, in batches; for each batch, new samples can be drawn from the model, masked and fed to the discriminative model training algorithm; all sampling, masking and training can be performed on Graphics Processing Units, again as is common practice.
  • This trained neural net becomes the UM 113 of figure 1 and is used to produce the predictions sent to the inference engine 109 in step S115.
  • Importance Sampling in the inference engine is augmented by using the predicted posteriors from the UM as the proposals.
  • Using the UM+IS hybrid it is possible to improve the accuracy of results for a given number of samples and ultimately speed up inference, while still maintaining the unbiased guarantees of Importance Sampling, in the limit.
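A self-normalised importance-sampling step using per-node Bernoulli proposals (the role the UM's predicted posteriors play) might look like the following sketch. The factorised toy target and all names are illustrative; the real system scores samples against the PGM.

```python
import numpy as np

def importance_sample(target_logpdf, proposal_p, n, rng):
    """Self-normalised importance sampling over binary vectors.
    `proposal_p` holds per-node Bernoulli proposals (e.g. UM outputs);
    weights w = P(x)/Q(x) keep the estimator asymptotically unbiased."""
    d = proposal_p.size
    xs = (rng.random((n, d)) < proposal_p).astype(int)
    log_q = (xs * np.log(proposal_p) + (1 - xs) * np.log1p(-proposal_p)).sum(1)
    log_w = np.array([target_logpdf(x) for x in xs]) - log_q
    w = np.exp(log_w - log_w.max())   # stabilise before normalising
    return xs, w / w.sum()

# toy target: independent Bernoulli(0.8) on 3 nodes
rng = np.random.default_rng(0)
p_t = np.array([0.8, 0.8, 0.8])
logpdf = lambda x: float((x * np.log(p_t) + (1 - x) * np.log1p(-p_t)).sum())
xs, w = importance_sample(logpdf, np.array([0.5, 0.5, 0.5]), 5000, rng)
est = (w[:, None] * xs).sum(0)  # weighted marginal estimates
```

The closer the proposals are to the target (as a trained UM's would be), the lower the weight variance and the fewer samples are needed.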
  • step S301 the input is received and passed to the UM (NN).
  • the NN input is then provided to the NN (which is the UM) in step S303.
  • In step S311 we receive a sample from the approximate joint.
  • each node will be conditioned on nodes topologically before it.
  • the training process may therefore be optimised by using a "sequential masking" process in the training process as in Figure 3, where first we randomly select a node X_i up to which we will not mask anything, and then, as previously, mask some nodes starting from node X_{i+1} (where the nodes to be masked are selected randomly, as explained before). This provides a more optimal way of generating training data.
  • an embodiment might involve a hybrid approach as shown in Algorithm 2 below. There, an embodiment might include calculating the conditional marginal probabilities only once, given the evidence, and then constructing a proposal for each node X_t as a mixture of those conditional marginals (with weight β) and the conditional prior distribution of the node (with weight (1 - β)).
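The per-node mixture proposal can be sketched as follows (the weight β and the input values are illustrative):

```python
import numpy as np

def mixture_proposal(um_marginal, prior, beta=0.8):
    """Per-node proposal: a mixture of the UM's conditional marginal
    (weight beta) and the node's conditional prior (weight 1 - beta)."""
    return beta * um_marginal + (1.0 - beta) * prior

q = mixture_proposal(np.array([0.9, 0.2]), np.array([0.5, 0.5]), beta=0.8)
# q = 0.8*[0.9, 0.2] + 0.2*[0.5, 0.5] = [0.82, 0.26]
```

Keeping some mass on the prior guards against the UM assigning near-zero proposal probability to states the model itself can still produce.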
  • FIG. 5 shows a layout of a system in accordance with a further embodiment of the invention.
  • the system 401 comprises a processor 403; the processor comprises Central Processing Units (CPUs) 405 and Graphics Processing Units (GPUs) 407 that operate under the control of the host.
  • GPUs 407 offer a simplified instruction set that is well suited to a number of numerical applications. Due to the simplified instruction set they are not suitable for general-purpose computing in the same way that CPUs are; however, thanks to these simplifications a GPU 407 can offer a much larger number of processing cores. This makes GPUs 407 ideally suited to applications where computations can be parallelised.
  • the noisy-Or model for the conditional prior probabilities in the PGM is used (see for example Koller & Friedman 2009, Probabilistic Graphical Models: Principles and Techniques The MIT Press).
  • the procedure is modified to improve the numerical stability and to parallelise the computation of the conditional priors.
  • the conditional probability f(x_k | Pa(x_k)) can then be expressed as λ * S, where S is the samples tensor and λ the tensor of noisy-OR parameters.
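Assuming noisy-OR CPDs, the vectorised computation of conditional priors over a batch of K samples might be sketched as follows (function and variable names are illustrative; on a GPU the same element-wise operations run over much larger batches):

```python
import numpy as np

def noisy_or_batch(S_parents, lam, leak=0.0):
    """Vectorised noisy-OR over a batch: S_parents is a (K, n_parents)
    binary array of sampled parent states, lam the per-parent activation
    parameters. Each active parent j independently fails to trigger the
    child with probability (1 - lam[j]); `leak` is background activation.
    Returns P(child = 1 | parents) for each of the K samples."""
    fail = np.prod((1.0 - lam) ** S_parents, axis=1)
    return 1.0 - (1.0 - leak) * fail

S = np.array([[1, 0, 1],   # parents 0 and 2 active
              [0, 0, 0]])  # no parent active: leak only
p = noisy_or_batch(S, lam=np.array([0.7, 0.5, 0.4]), leak=0.01)
# sample 1: 1 - 0.99 * 0.3 * 0.6 = 0.8218 ; sample 2: 0.01
```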
  • a sample is a full instantiation of the network, that is all nodes in the network will be assigned a state. Nodes that are in the evidence set E will be set to their observed state, whereas nodes not in the evidence will be randomly sampled according to their conditional probability given their parents' state.
  • u_i ← x<Pa(X_i)> // get the sampled state of the parents of X_i
  • w ← w * P(X_i | u_i) // multiply the weight by the probability of the evidence state given the node's parents
  • I is an indicator function which is equal to 1 if the sampled state y of sample m is the same as the target y. For binary nodes, this simply means summing all weights where y is true and dividing by the total sum of weights.
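The weighted estimator described above can be sketched as (names illustrative):

```python
import numpy as np

def posterior_marginals(samples, weights):
    """P(node = 1 | evidence) ≈ sum of the weights of samples where the
    node was sampled true, divided by the total weight (self-normalised
    indicator estimator)."""
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * samples).sum(axis=0) / w.sum()

samples = np.array([[1, 0],
                    [1, 1],
                    [0, 1]])
weights = [0.5, 0.25, 0.25]
m = posterior_marginals(samples, weights)  # [0.75, 0.5]
```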
  • u_i ← x<Pa(X_i)> // get the sampled state of the parents of X_i
  • w ← w * P(X_i | u_i) // multiply the weight by the probability of the evidence state given the node's parents
  • u_i ← x<Pa(X_i)> // get the sampled state of the parents of X_i in each sample in the batch; the dimension is K x 1
  • the topologically sorted list of network nodes is split into multiple potential 'layers' via a grid search over three parameters based on the size of the tensors created by each layer, namely:
  • proposal probabilities q were kept within a maximum precision range, which improved the sampling efficiency of the importance sampler in some cases by requiring fewer samples to arrive at a target accuracy. This bound was set to 0.001.
  • the noisy-OR model allows a child node representing a symptom to be binary (e.g. {absent, present}).
  • the noisy-MAX model, however, allows nodes to have one of a variety of states. For a symptom node it therefore becomes possible to encode the severity of the symptom, for example, by any number of particular states (e.g. {absent, mild, strong, severe}).
  • each node-parent connection is described by a single probability parameter (lambda)
  • the noisy-MAX algorithm requires multiple parameters describing the variety of multiple states in which the node can exist.
  • noisy-MAX nodes are therefore also implemented on GPUs in our embodiment by adding an additional dimension to the lambda probability matrix, and producing categorical samples according to the values in this dimension (i.e. sampling from a number of possible states, as opposed to simply true/false).
  • the UM network was trained using cross-entropy loss. Specifically, ReLU non-linearities were used, dropout of 0.5 was applied before each hidden layer, and the Adam optimizer was used.
  • 32-bit continuous representation: represent false as 0, true as 1, and unobserved values by a point somewhere between 0 and 1, analogous to the probability of the input being true. Three values were used for unobserved: 0, 0.5 and the prior of the node.
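This input encoding can be sketched as follows (the function name and the handling of the 'prior' option are illustrative):

```python
import numpy as np

def encode_input(values, observed, unobserved_repr, prior=None):
    """Continuous input encoding: false -> 0.0, true -> 1.0, and each
    unobserved node -> a fixed value (0, 0.5) or that node's prior."""
    x = values.astype(float)
    if unobserved_repr == "prior":
        fill = prior                                   # per-node priors
    else:
        fill = np.full(values.size, float(unobserved_repr))
    return np.where(observed, x, fill)

vals = np.array([1, 0, 1])
obs = np.array([True, False, True])
x1 = encode_input(vals, obs, 0.5)                                  # [1.0, 0.5, 1.0]
x2 = encode_input(vals, obs, "prior", prior=np.array([0.1, 0.2, 0.3]))  # [1.0, 0.2, 1.0]
```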
  • the second metric is the max error: this takes the maximum probability error across all nodes in the predictions and then averages these maxima over data points.
  • a grid search was run over network size and unobserved-value representation, and the results are reported using the two metrics
  • the above system could also be used for any determination process where there are a plurality of interlinked factors which are evidenced by observations and a determination of a probable cause is required.
  • the above method can be used in financial systems.
  • the output of the discriminative model is described above as an aid in conducting approximate inference; in some cases, the estimate produced by the discriminative model may be used on its own.
  • the embeddings of such discriminative models (e.g. neural networks)
  • Figure 6 shows the Universal Marginaliser (UM):
  • the UM performs scalable and efficient inference on graphical models. This figure shows one pass through the network. First, (1) a sample is drawn from the PGM, (2) values are then masked and (3) the masked set is passed through the UM, which then, (4) computes the marginal posteriors.
  • the embodiment described herein shows how combining samples drawn from the graphical model with an appropriate masking function allows the training of a single neural network to approximate any of the corresponding conditional marginal distributions, and thus amortise the cost of inference. It is also shown that the graph embeddings can be applied to tasks such as clustering, classification and interpretation of relationships between the nodes. Finally, the method is benchmarked on a large graph (>1000 nodes), showing that UM-IS outperforms sampling-based methods by a large margin while being computationally efficient.
  • the Universal Marginaliser Importance Sampler (UM-IS), an amortised inference-based method for graph representation and efficient computation of asymptotically exact marginals, is used.
  • the UM still relies on Importance Sampling (IS).
  • IS Importance Sampling
  • a guiding framework based on amortised inference is used that significantly improves the performance of the sampling algorithm rather than computing marginals from scratch every time the inference algorithm is run. This speed-up allows the application of the inference scheme on large PGMs for interactive applications with minimum errors.
  • the neural network can be used to calculate a vectorised representation of the evidence nodes. This representation can then be used for various machine learning tasks such as node clustering and classification.
  • the model has the flexibility of a deep neural network to perform amortised inference.
  • the neural network is trained purely on samples from the model prior and it benefits from the asymptotic guarantees of importance sampling.
  • representation of the provided evidence for tasks like classification and clustering or interpretation of node relationships.
  • the Universal Marginaliser is a feed-forward neural network, used to perform fast, single-pass approximate inference on general PGMs at any scale.
  • the UM can be used together with importance sampling as the proposal distribution, to obtain asymptotically exact results when estimating marginals of interest.
  • This hybrid model will be referred to as the Universal Marginaliser Importance Sampler (UM-IS).
  • DAG Directed Acyclic Graph
  • the conditional distribution of a random variable X_i given its parents pa(X_i) is denoted as P(X_i | pa(X_i)).
  • the random variables can be divided into two disjoint sets: X_O ⊂ X, the set of observed variables within the BN, and X_U ⊂ X \ X_O, the set of the unobserved variables.
  • x_O is defined as the encoding of the instantiation that specifies which variables are observed, and what their values are.
  • This NN is used as a function approximator; hence, it can approximate any posterior marginal distribution given an arbitrary set of evidence X_O.
  • this discriminative model is termed the Universal Marginaliser (UM).
  • UM Universal Marginaliser
  • the marginalisation operation in a Bayesian Network is considered as a function f : B^N → [0, 1]^N
  • existence of a neural network which can approximate this function is a direct consequence of the Universal Function Approximation Theorem (UFAT).
  • UFAT Universal Function Approximation Theorem
  • The flow chart with each step of the training algorithm is depicted in Fig. 7. For simplicity, it will be assumed that the training data (samples from the PGM) is pre-computed, and only one epoch is used to train the UM.
  • steps 1-4 are applied for each of the mini-batches separately rather than on a full training set all at once. This improves memory efficiency during training and ensures that the network receives a large variety of evidence combinations, accounting for low probability regions in P.
  • the steps are given as follows:
  • the PGM described here only contains binary variables X_i, and each sample s_i ∈ S is a binary vector. In the next steps, these vectors will be partially masked as input and the UM will be trained to reconstruct the complete unmasked vectors as output.
  • each sample Si is partially masked.
  • the network will then receive as input a binary vector where a subset of the nodes initially observed were hidden, or masked.
  • This masking can be deterministic, i.e., always masking specific nodes, or probabilistic.
  • a different masking distribution is used for every iteration during the optimization process. This is achieved in two steps. First, two random numbers i, j ~ U[0, N] are sampled from a uniform distribution, where N is the number of nodes in the graph. Next, masking is performed by hiding i randomly selected nodes in the positive state and j randomly selected nodes in the negative state.
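The two-step masking just described could be sketched as follows (a hypothetical implementation; the value -1 is an arbitrary encoding choice for a masked node):

```python
import numpy as np

def mask_sample(sample, rng):
    """Mask a binary sample: draw i, j ~ U[0, N], then hide i randomly
    chosen positive nodes and j randomly chosen negative nodes."""
    n = len(sample)
    masked = sample.astype(np.int8)  # astype returns a copy
    i, j = rng.integers(0, n + 1, size=2)
    pos = np.flatnonzero(sample == 1)
    neg = np.flatnonzero(sample == 0)
    if len(pos):
        masked[rng.choice(pos, size=min(i, len(pos)), replace=False)] = -1
    if len(neg):
        masked[rng.choice(neg, size=min(j, len(neg)), replace=False)] = -1
    return masked
```

Because i and j are redrawn per sample, the network sees everything from fully observed to fully masked inputs over training.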
  • the NN was trained by minimising the multi-label binary cross entropy of the sigmoid output layer and the unmasked samples s_i.
  • Posterior marginals: the desired posterior marginals are approximated by the output of the last NN layer. These values can be used as a first estimate of the marginal posteriors (UM approach); however, combined with importance sampling, these approximated values can be further refined (UM-IS approach).
  • the UM is a discriminative model which, given a set of observations X 0 , will approximate all the posterior marginals. While useful on its own, the estimated marginals are not guaranteed to be unbiased. To obtain a guarantee of asymptotic unbiasedness while making use of the speed of the approximate solution, the estimated marginals are used for proposals in importance sampling.
  • a naive approach is to sample each X_i ∈ X_U independently from UM(x_O)_i, where UM(x_O)_i is the i-th element of the vector UM(x_O).
  • the product of the (approximate) posterior marginals may be very different to the true posterior joint, even if the marginal approximations are good.
  • the universality of the UM makes the following scheme possible, which will be termed the Sequential Universal Marginaliser Importance Sampling (SUM-IS).
  • SUM-IS Sequential Universal Marginaliser Importance Sampling
  • a single proposal x_s is sampled sequentially as follows. First, a new partially observed state x_{S∪O} is introduced and initialised to x_O. Then, [x_s]_1 ~ UM(x_O)_1 is sampled and the previous state x_{S∪O} is updated such that X_1 is now observed with this value. This process is repeated, at each step sampling [x_s]_i ~ UM(x_{S∪O})_i, with x_{S∪O} updated to include each new sampled value.
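This sequential proposal could be sketched as follows, assuming a hypothetical `um` callable mapping a partially observed state (1 = true, 0 = false, -1 = unobserved) to per-node marginals:

```python
import numpy as np

def sum_is_proposal(um, x_obs, rng):
    """Draw one SUM-IS proposal sample sequentially.

    um: hypothetical callable returning per-node marginals for a
        partially observed state.
    x_obs: evidence vector with -1 marking unobserved nodes.
    Returns the full sample and the product of sampled proposal
    probabilities (used later to form importance weights)."""
    state = x_obs.copy()
    q_prob = 1.0
    for i in range(len(state)):
        if state[i] != -1:
            continue                    # evidence node: leave as observed
        p_i = um(state)[i]              # approximate conditional marginal
        x_i = int(rng.random() < p_i)   # sample node i
        q_prob *= p_i if x_i else (1.0 - p_i)
        state[i] = x_i                  # condition subsequent steps on it
    return state, q_prob
```

Each new value is folded back into the state, so later nodes are sampled conditionally on earlier ones, as in the description above.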
  • the conditional marginal can be approximated for a node i given the current sampled state x_s and evidence x_O to obtain the optimal proposal Q*, i.e. Q*_i = P(X_i | x_s, x_O) ≈ UM(x_{S∪O})_i.
  • the full sample x_s is drawn from an implicit encoding of the approximate posterior joint distribution given by the UM. This is because the product of sampled probabilities from Equation A.3 is expected to yield low-variance importance weights when used as a proposal distribution.
  • the architecture of the UM is shown in Fig. 9. It has a denoising auto-encoder structure with multiple branches - one branch for each node of the graph.
  • the cross entropy loss for different nodes depends strongly on the number of parents and the node's depth in the graph.
  • the weights of all fully connected layers that correspond to a specific type of node are shared.
  • the types are defined by the depth in the graph (type 1 nodes have no parents, type 2 nodes have only type 1 nodes as parents etc.).
  • the architecture of the best performing model on the large medical graph has three types of nodes and the embedding layer has 2048 hidden states.
  • the quality of approximate conditional marginals was measured using a test set of posterior marginals computed for 200 sets of evidence via ancestral sampling with 300 million samples.
  • the test evidence set for the medical graph was generated by experts from real data.
  • the test evidence set for the synthetic graphs was sampled from a uniform distribution. Standard importance sampling was used, which corresponds to the likelihood weighting algorithm for discrete Bayesian networks; 8 GPUs were used over the course of 5 days to compute precise marginal posteriors for all test sets.
  • two metrics are used: the Mean Absolute Error (MAE), given by the absolute difference of the true and predicted node posteriors, and the Pearson Correlation Coefficient (PCC) of the true and predicted marginal vectors. Note that negative correlations were not observed and therefore both measures are bounded between 0 and 1.
  • the Effective Sample Size (ESS) statistic was used for the comparison with standard importance sampling. This statistic measures the efficiency of the different proposal distributions used during sampling. As there was no access to the normalising constant of the posterior distribution, the ESS is defined as ESS = (Σ_i w_i)² / Σ_i w_i², where the weights w_i are defined in Step 8 of Algorithm 1A.
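Under that definition, the ESS for self-normalised importance sampling can be computed as:

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = (sum_i w_i)^2 / sum_i w_i^2; equals n for uniform weights
    and approaches 1 when a single weight dominates."""
    w = np.asarray(weights, dtype=float)
    return float(w.sum() ** 2 / (w ** 2).sum())
```

A proposal close to the posterior yields near-uniform weights and hence a high ESS.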
  • a one hot encoding was considered for the unobserved and observed nodes. This representation only requires two binary values per node. One value represents if the node is observed and positive ([0,1]) and the other value represents whether this node is observed and negative ([1,0]). If the node is unobserved or masked, then both values are set to zero ([0,0]).
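This two-bit encoding could be sketched as (hypothetical helper; `None` marks an unobserved or masked node):

```python
import numpy as np

def encode_two_bit(values):
    """Two binary values per node: [1,0] = observed negative,
    [0,1] = observed positive, [0,0] = unobserved/masked."""
    out = np.zeros((len(values), 2), dtype=np.float32)
    for i, v in enumerate(values):
        if v is True:
            out[i, 1] = 1.0
        elif v is False:
            out[i, 0] = 1.0
    return out.ravel()
```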
  • This simple UM (UM-IS-Basic) has one single hidden layer that is shared for all nodes of the PGM. It can be seen that the MAE and PCC still improved over standard IS.
  • UM-IS with multiple fully connected layers per group of nodes significantly outperforms the basic UM by a large margin. There are two reasons for this. First, the model capacity of the UM is higher, which allows it to learn more complex structures from the data. Secondly, the losses in the UM are spread across all groups of nodes and the gradient update steps are optimised with the right order of magnitude for each group. This prevents the model from overfitting to the states of a specific type of node with a significantly higher loss.
  • the graph embeddings are extracted as the 2048-dimensional activations of the inner layer of the UM (see Fig. 9). They are a low-dimensional vectorised representation of the evidence set in which the graph's structure is preserved. That means that the distance between nodes that are tightly connected in the PGM should be smaller than the distance between nodes that are independent.
  • the first two principal components of the embeddings from different evidence sets which are known to be related are plotted.
  • the evidence set from the medical PGM is used, with different diseases, risk factors and symptoms as nodes.
  • FIG. 11(a) shows that the embeddings of sets with active Type-1 and Type-2 diabetes are collocated.
  • although the two diseases have different underlying causes and connections in the graphical model (i.e. pancreatic beta-cell atrophy and insulin resistance respectively), they share similar symptoms and complications (e.g. cardiovascular diseases, neuropathy, increased risk of infections etc.).
  • a similar clustering can be seen in Fig. 11(b) for two cardiovascular risk factors, smoking and obesity, interestingly collocated with a sign seen in patients suffering from a severe heart condition (i.e. unstable angina, or acute coronary syndrome): chest pain at rest.
  • a severe heart condition, i.e. unstable angina or acute coronary syndrome
  • the mapping from the evidence set to the embeddings was optimised with a large number of generated samples (3 * 10 ⁄ 1) during the UM learning phase. Therefore, these representations can be used to build more robust machine learning methods for classification and clustering, rather than using the raw evidence set to the PGM.
  • Table 1A Classification performances using two different feature sets. Each classifier is trained on one of two feature sets: "dense", the dense embedding as features, and "input", the top layer (UM input) as features. The target (output) is always the disease layer.
  • the above embodiment discusses a Universal Marginaliser based on a neural network which can approximate all conditional marginal distributions of a PGM. It is shown that a UM can be used via a chain decomposition of the BN to approximate the joint posterior distribution, and thus the optimal proposal distribution for importance sampling. While this process is computationally intensive, a first-order approximation can be used requiring only a single evaluation of a UM per evidence set. The UM is evaluated on multiple datasets and also on a large medical PGM, demonstrating that it significantly improves the efficiency of importance sampling. The UM was trained offline using a large amount of generated training samples and, for this reason, the model learned an effective representation for amortising the cost of inference.
  • Importance Sampling (IS) is used to provide the posterior marginal estimates P(X_U | X_O). To do so, samples x_s are drawn from a proposal distribution Q(X_U | X_O).
  • the proposal distribution must be defined such that both sampling from and evaluation can be performed efficiently.
  • for sampling from the posterior marginal, a BN can be considered with Bernoulli nodes and of arbitrary size and shape.
  • consider two specific nodes, X_i and X_j, such that X_j is caused only and always by X_i:
  • the weights should be approximately 1 if Q is close to P.
  • There are four combinations of X_i and X_j.
  • a method of using the embeddings of the above discriminative model as a vectorised representation of the provided evidence for classification.


Abstract

A method for providing a computer implemented medical diagnosis, the method comprising: receiving an input from a user comprising at least one symptom of the user; providing the at least one symptom as an input to a medical model comprising: a probabilistic graphical model comprising probability distributions and relationships between symptoms and diseases; an inference engine configured to perform Bayesian inference on said probabilistic graphical model; and a discriminative model pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about their symptoms; deriving estimates of the probability of the user having a disease from the discriminative model; inputting the estimates to the inference engine; performing approximate inference on the probabilistic graphical model to obtain a prediction of the probability that the user has that disease; and outputting the probability of the user having the disease for display by a display device.

Description

A Computer Implemented Determination Method and System
FIELD
Embodiments of the present invention relate to the field of computer implemented determination methods and systems.
BACKGROUND
Graphical models provide a natural framework for expressing the probabilistic relationships between random variables in numerous fields across the natural sciences. Bayesian networks, a directed form of graphical model, have been used extensively in medicine, to capture causal relationships between entities such as risk-factors, diseases and symptoms, and to facilitate medical decision-making tasks such as disease diagnosis. Key to decision-making is the process of performing probabilistic inference to update one's prior beliefs about the likelihood of a set of diseases, based on the observation of new evidence.
BRIEF DESCRIPTION OF FIGURES
Figure 1 is an overview of a system in accordance with an embodiment;
Figure 2 is a schematic diagram of a simple graphical model;
Figure 3 is a flow diagram showing the training of a discriminative model to use the system of figure 1;
Figure 4 is a flow diagram showing the use of the trained model with the inference engine of figure 1;
Figure 5 is a basic schematic processing system with a GPU;
Figure 6 is a schematic of an overview of a system in accordance with an embodiment;
Figure 7 is a flow diagram showing the training of a discriminative model to use the system of figure 6;
Figure 8 is a flow diagram showing the use of the trained model with the inference engine of figure 6;
Figure 9(a) is a schematic of a graphical Model and Figure 9(b) the corresponding UM architecture. The nodes of (a) the graph are categorized by their depth inside the network and the weights of (b) the UM neural network are shared for nodes of the same category;
Figure 10 shows the performance of the above system on three different graphical models. Figure 10(a) shows results from a synthetic graph with 96 nodes, Figure 10(b) shows results from a synthetic graph with 768 nodes and figure 10(c) shows results from a medical PGM. Inference was applied through importance sampling with and without the support of a trained UM and it was evaluated in terms of Pearson Correlation Coefficient (PCC), Mean Absolute Error (MAE) and Effective Sampling Size (ESS); and
Figure 11 shows the embeddings filtered for two sets of symptoms and risk factors, where each scatter point corresponds to a set of evidence. Figure 11(a) shows results for diabetes embeddings and figure 11(b) shows results for smoking and obesity embeddings. The displayed embedding vectors correspond to the first two principal components. It can be seen that they separate unrelated medical concepts quite well and show an overlap for concepts which are closely related.
DETAILED DESCRIPTION
In an embodiment, a method for providing a computer implemented medical diagnosis is provided, the method comprising: receiving an input from a user comprising at least one symptom; providing at least one symptom as an input to a medical model; using the medical model to determine the probability of the user having a disease stored in the medical model from the provided input; and outputting the probability of the user having one or more diseases, wherein said medical model comprises a probabilistic graphical model containing the probability distributions and the relationships between symptoms and diseases, an inference engine configured to perform Bayesian inference on said probabilistic graphical model using a discriminative model, wherein the discriminative model has been pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples generated from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about their symptoms, and wherein determining the probability that the user has a disease comprises deriving estimates of the probabilities that the user has that disease from the discriminative model, inputting these estimates to the inference engine and performing approximate inference on the probabilistic graphical model to obtain a prediction of the probability that the user has that disease.
Medical diagnosis systems require significant computing resources such as processor capacity. The disclosed systems and methods solve this technical problem with a technical solution, namely by conducting approximate statistical inference on a PGM with help of a discriminative model (e.g. a neural net) to provide an estimate of the posterior probabilities. The discriminative model is trained such that it is robust to the user providing incomplete information about their symptoms. The above therefore allows the system to produce answers using such new approximate inference with the accuracy comparable to using exact or already existing approximate inference techniques, but in a fraction of the time and with a reduction in the processing required.
The inference engine may be configured to perform importance sampling over conditional marginal. However, other methods may be used such as Variational Inference, other Monte Carlo methods, etc.
The discriminative model can be a neural network. In some embodiments, the neural network is a single neural network; in other embodiments the neural network is as described in Further Embodiment A. The neural net can approximate the outputs of the probabilistic graphical model and hence, later in this document, it is termed a Universal Marginaliser (UM). In an embodiment, the probabilistic graphical model is a noisy-OR model. Performing probabilistic inference is computationally expensive, and in medicine, where large-scale Bayesian networks are required to make clinically robust diagnoses, it is not feasible to apply exact inference techniques. Instead, approximate, sampling-based algorithms are used which provide theoretical guarantees (under the central limit theorem) regarding convergence to the true posterior. In the context of medical diagnosis, this amounts to arriving at the true disease differential, based on the evidence and the underlying model.
As the true 'target' (posterior) distribution is unknown ahead of time, the task of inference is to sample from an independent 'proposal' distribution, which, ideally, is as close as possible to the target. The standard approach when applying Bayesian networks for medical decision-making is to use the model prior as the proposal distribution. However, this is often not ideal, particularly in cases where an unusual combination of symptoms is generated by a rare disease. In these and similar cases, a large number of samples is often required to reduce the variance in the estimate of the true posterior; this poses a significant practical constraint to the use of sampling algorithms for inference.
As a consequence, in all but the simplest of symptom presentations, it is often difficult to match the diagnostic speed of human doctors. This is because, for cognitive tasks, humans operate in the setting of amortized inference i.e. 'they have to solve many similar inference problems, and can thus offload part of the computational work to shared pre-computation and adaptation over time'. As discussed above, where the proposal is close to posterior, fewer samples need to be drawn, and therefore inference will be more rapid.
Figure 1 is a schematic of a method in accordance with an embodiment. A patient 101 inputs their symptoms in step S103 via interface 105. The patient may also input their risk factors, for example, whether they are a smoker, their weight etc. The interface may be adapted to ask the patient 101 specific questions. Alternatively, the patient may simply enter free text. The patient's risk factors may be derived from the patient's records held in a database (not shown). Therefore, once the patient has identified themselves, data about the patient can be accessed via the system. In further embodiments, follow-up questions may be asked by the interface 105. How this is achieved will be explained later. First, it will be assumed that the patient provides all possible information (evidence) to the system at the start of the process. This will be used to explain the basic procedure. However, a variation on the procedure will then be explained in which the patient gives only partial information and the system, once the first analysis is complete, requests further information.
The evidence will be taken to be the presence or absence of all known symptoms and risk factors. For symptoms and risk factors where the patient has been unable to provide a response, these will be assumed to be unknown.
Next, this evidence is passed in step S107 to the inference engine 109. Inference engine 109 performs Bayesian inference on PGM 120. PGM 120 will be described in more detail with reference to figure 2 after the discussion of figure 1.
Due to the size of the PGM 120, it is not possible to perform exact inference using inference engine 109 in a realistic timescale. Therefore, the inference engine 109 performs approximate inference. In an embodiment, the inference engine 109 is configured to perform Importance Sampling. Importance sampling is described with reference to equation 3 below.
When performing approximate inference, the inference engine 109 requires an approximation of the probability distributions within the PGM to act as proposals for the sampling. In step S111, the evidence is passed to what would be termed a universal marginaliser (UM) 113. The UM will be described in more detail with reference to both figures 3 and 4. In summary, the UM is a neural network that has been trained to approximate the outputs of the PGM 120.
The training of the UM will be described in detail with reference to figure 3. However, the UM is a model that can approximate the behaviour of the entire PGM 120. In one embodiment the UM is a single neural net; in another embodiment, the model is a neural network which consists of several sub-networks, such that the whole architecture is a form of auto-encoder-like model but with multiple branches. Hence "universal" is used in the name of the UM. Further, the UM, as will be described with reference to figure 3, is trained to be robust to the patient giving incomplete answers. This is achieved via the masking procedure for training the UM that will be described with reference to figure 3.
In step S115, the UM returns probabilities to be used as proposals to the inference engine 109. The inference engine 109 then performs importance sampling using the proposals from the UM as estimates and the PGM 120.
The inference engine 109 calculates "likelihood" (conditional marginal probability) P(Disease_i| Evidence) for all diseases.
In addition the inference engine can also determine:
P(Symptom_i | Evidence),
P(Risk factor i | Evidence)
From this, it can transmit back information in step S117 concerning the "likelihood" of a disease given the evidence supplied by the patient 101 to the interface 105. The interface 105 can then supply this information back to the patient in step S119.
The above high-level explanation of the system presumes that the patient provides all possible evidence concerning their symptoms and that the system has access to all possible risk factors for which the patient can give a definitive answer. However, in many situations, the patient will only give a fraction of this information as a first input into the system. For example, if the patient has a stomach ache, the patient is likely to indicate that they have a stomach ache, but probably little further information without prompting.
In a further embodiment, the system determines whether further information is required from the patient 101. As explained above, the inference engine 109 determines:
P(Disease_i| Evidence) for all diseases
P(Symptom_i | Evidence),
P(Risk factor i | Evidence)
It is possible, using a value of information analysis (VoI), to determine from the above likelihoods whether asking a further question would improve the probability of diagnosis. For example, if the initial output of the system shows nine diseases each having a 10% likelihood based on the evidence, then asking a further question will allow a more precise and useful diagnosis to be made. In an embodiment, the next questions to be asked are determined on the basis of which questions reduce the entropy of the system most effectively.
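Entropy-driven question selection could be sketched as follows (all names hypothetical; the updated disease marginals per candidate answer are assumed to come from the UM or the inference engine, and the marginals are treated as independent for the entropy computation):

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of independent binary disease marginals."""
    h = 0.0
    for p in probs:
        for q in (p, 1.0 - p):
            if q > 0.0:
                h -= q * math.log2(q)
    return h

def pick_next_question(candidates, posteriors_given):
    """Choose the question whose expected answer most reduces entropy.

    posteriors_given: hypothetical callable returning, for a question,
    a list of (answer_probability, updated_disease_marginals) pairs."""
    def expected_entropy(q):
        return sum(p * entropy(m) for p, m in posteriors_given(q))
    return min(candidates, key=expected_entropy)
```

A question whose answers would push the disease marginals towards 0 or 1 has low expected entropy and is asked first.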
In one embodiment, the analysis to determine whether a further question should be asked, and what that question should be, is based purely on the output of the UM 113, which provides an estimate of the probabilities. However, in a further embodiment, the probabilities derived directly from the PGM, via importance sampling using the UM, are used to make this decision.
Once the user supplies further information, this is passed back to the inference engine 109 to update the evidence and produce updated probabilities.
Figure 2 is a depiction of a graphical model of the type used in the system of figure 1.
The graphical model provides a natural framework for expressing probabilistic relationships between random variables, to facilitate causal modelling and decision making. In the model of figure 2, when applied to diagnosis, D stands for disease, S for symptom and RF for risk factor. There are three layers: risk factors, diseases and symptoms. Risk factors (with some probability) influence other risk factors and diseases; diseases (again, with some probability) cause other diseases and symptoms. There are prior probabilities and conditional marginals that describe the "strength" (probability) of the connections. Noisy-OR and noisy-MAX modelling assumptions are used for now.
In this simplified specific example, the model is used in the field of diagnosis. In the first layer, there are three nodes S1, S2 and S3; in the second layer there are three nodes D1, D2 and D3; and in the third layer, there are three nodes RF1, RF2 and RF3. In the graphical model of figure 2, each arrow indicates a dependency. For example, D1 depends on RF1 and RF2. D2 depends on RF2, RF3 and D1. Further relationships are possible. In the graphical model shown, each node is only dependent on a node or nodes from a different layer. However, nodes may be dependent on other nodes within the same layer.
In an embodiment, the graphical model of figure 2 is a Bayesian Network. In this Bayesian Network, the network represents a set of random variables and their conditional dependencies via a directed acyclic graph. Thus, in the network of figure 2, given full (or partial) evidence over symptoms S1, S2 and S3 and risk factors RF1, RF2 and RF3, the network can be used to represent the probabilities of various diseases D1, D2 and D3.
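A toy three-layer noisy-OR network of this shape could be sketched as follows (all probabilities are illustrative placeholders, not taken from any real model):

```python
import numpy as np

# Hypothetical three-layer noisy-OR network: risk factors -> diseases
# -> symptoms, mirroring the RF/D/S layers described above.
PRIORS = {"RF1": 0.3, "RF2": 0.1, "RF3": 0.2}
PARENTS = {"D1": {"RF1": 0.6, "RF2": 0.4},   # activation prob per parent
           "D2": {"RF2": 0.5, "RF3": 0.7, "D1": 0.2},
           "S1": {"D1": 0.8},
           "S2": {"D1": 0.3, "D2": 0.6},
           "S3": {"D2": 0.9}}
LEAK = 0.01  # noisy-OR leak probability

def sample_prior(rng):
    """Ancestral sampling: a node is true with noisy-OR probability
    1 - (1 - leak) * prod(1 - lambda_p) over its active parents."""
    state = {rf: rng.random() < p for rf, p in PRIORS.items()}
    for node, parents in PARENTS.items():  # insertion order is topological
        p_off = (1.0 - LEAK) * np.prod(
            [1.0 - lam for par, lam in parents.items() if state[par]])
        state[node] = rng.random() < 1.0 - p_off
    return state
```

Repeated calls to `sample_prior` generate the kind of (RF, D, S) samples later used to train the UM.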
The BN allows probabilistic inference to update one's beliefs about the likelihood of a set of events, based on observed evidence. However, performing inference on large-scale graphical models is computationally expensive. To reduce the computational task, approximate inference techniques are used, such as variational inference or Monte Carlo methods.
In summary
i. Figure 2 represents a generative model P(RF, D, S) = P(RF) P(D | RF) P(S | D).
ii. For simplicity in this explanation, it will be assumed that all nodes are binary.
iii. A discriminative model UM (e.g. a feedforward neural network) is trained by sampling from the generative model.
iv. Each sample (i.e. a combined vector (RF, D, S)) becomes a training example.
1. For the input of one training example, values are "obscured" for each element of the sample vector with some probability. The probability can be different depending on the value of that vector element; if that is the case, the cross-entropy loss should be weighted appropriately by the probability.
The output contains exactly the sample without any obscuring. Each element of the sample vector is a separate independent output node.
The loss function is the cross-entropy for each output node.
Since the cross-entropy is used, the discriminative model is expected to learn exactly the conditional marginal P(node | partially_obscured(RF, D, S)), where node can be any risk factor, disease or symptom.
This allows the use of the trained discriminative model to either directly approximate the posterior or use that approximation as a proposal for any inference method (e.g. as a proposal for Monte Carlo methods or as a starting point for variational inference, etc).
This conditional marginal approximation can then be used to sample from the joint distribution by iterating node by node with fewer and fewer risk factors and symptoms obscured.
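As a concrete (toy) sketch of steps i-iv above, ancestral-style sampling from a stand-in generative model followed by probabilistic obscuring could look as follows; the toy model, the dependence structure and the use of -1 as the obscured marker are illustrative assumptions, not part of the original disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_generative(n_samples, n_nodes=6):
    # Toy stand-in for ancestral sampling from P(RF, D, S):
    # each binary node depends loosely on the previous one.
    samples = np.zeros((n_samples, n_nodes), dtype=np.int8)
    for i in range(n_nodes):
        parent = samples[:, i - 1] if i > 0 else 0
        p = 0.2 + 0.5 * parent
        samples[:, i] = rng.random(n_samples) < p
    return samples

def mask_samples(samples, mask_value=-1):
    # Obscure each element independently with a per-sample probability p,
    # as described above; -1 stands in for the obscured ('*') marker.
    p = rng.random((samples.shape[0], 1))
    mask = rng.random(samples.shape) < p
    inputs = np.where(mask, mask_value, samples)
    return inputs, samples  # (masked input, unmasked target)

X, y = mask_samples(sample_generative(4))
```

Each row of X is a partially obscured training input; the matching row of y is the complete sample the model learns to reconstruct.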
In the method taught herein, it is possible to provide theoretical guarantees (convergence to the true posterior) regarding the outcomes from inference. This is of particular use when the system is applied to decision making of a sensitive nature e.g. medicine or finance.
In an embodiment, since the true posterior (target distribution) is unknown, the task of inference is to sample from an independent 'proposal' distribution, which, ideally, is as close to the target distribution as possible.
When performing inference on Bayesian networks, a prior is often used as the proposal distribution. However, in cases where the BN is used to model rare events, a large number of samples are required to reduce the variance in an estimate of the posterior.
In an embodiment, inference is performed by considering the set of random variables, X = {X_1, ..., X_N}. A BN is a combination of a directed acyclic graph (DAG), with the X_i as nodes, and a joint distribution P of the X_i. The distribution P factorizes according to the structure of the DAG,

P(x_1, ..., x_N) = Π_i P(x_i | Pa(x_i)) = P(x_1) Π_{i=2..N} P(x_i | x_1, ..., x_{i-1}),    (1)

where P(x_i | Pa(x_i)) is the conditional distribution of X_i given its parents, Pa(X_i). The second equality holds as long as X_1, X_2, ..., X_N are in topological order.
Now, a set of observed nodes is considered, X_O ⊂ X, and their observed values x_O. To conduct Bayesian inference when provided with a set of unobserved variables, say X_U ⊆ X \ X_O, the posterior marginal is computed:

P(X_U | X_O = x_O) = P(X_U, X_O = x_O) / P(X_O = x_O).    (2)
In the optimal scenario, Equation (2) could be computed exactly. However, as noted above, exact inference becomes intractable in large BNs as computational costs grow exponentially with effective clique size, in the worst case becoming an NP-hard problem.
In an embodiment, importance sampling is used. Here, a function f is considered for which its expectation, E_P[f], is to be estimated under some probability distribution P. It is often the case that P can be evaluated up to a normalizing constant, but sampling from it is costly. In Importance Sampling, the expectation E_P[f] is estimated by introducing a distribution Q, known as the proposal distribution, which can both be sampled and evaluated. This gives:
E_P[f] = E_Q[f(x) P(x)/Q(x)] ≈ (1/n) Σ_{i=1..n} f(x_i) w_i,    (3)

where x_i ~ Q and where w_i = P(x_i)/Q(x_i) are the importance sampling weights. If P can only be evaluated up to a constant, the weights need to be normalized by their sum.
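The self-normalized importance sampling estimator described above can be sketched as follows; the particular target (a standard normal known only up to its constant) and proposal (a wider normal) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def importance_estimate(f, p_unnorm, q_sample, q_pdf, n=100_000):
    # E_P[f] ~ sum_i w_i f(x_i) / sum_i w_i, with x_i ~ Q and
    # w_i = P(x_i)/Q(x_i); self-normalisation handles an unnormalised P.
    x = q_sample(n)
    w = p_unnorm(x) / q_pdf(x)
    return np.sum(w * f(x)) / np.sum(w)

# Example: P is N(0, 1) known only up to a constant, Q is N(0, 2).
p_unnorm = lambda x: np.exp(-0.5 * x**2)           # missing 1/sqrt(2*pi)
q_sample = lambda n: rng.normal(0.0, 2.0, size=n)
q_pdf = lambda x: np.exp(-0.5 * (x / 2.0)**2) / (2.0 * np.sqrt(2 * np.pi))

mean_est = importance_estimate(lambda x: x, p_unnorm, q_sample, q_pdf)
second_moment = importance_estimate(lambda x: x**2, p_unnorm, q_sample, q_pdf)
```

Because the proposal is wider than the target, the weights stay bounded and the estimates converge; a proposal far from the target would instead produce a few dominant weights, which is exactly the failure mode discussed below for likelihood weighting.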
In the case of inference on a BN, the strategy is to estimate P(X_U | X_O = x_O) with an importance sampling estimator if there is an appropriate Q to sample from. In the case of using Importance Sampling to sample from the posterior, the weight also contains the likelihood P(X_O = x_O | x). Also, while the equalities in (3) hold for any appropriate Q, this is true only in the limit as n → ∞, and it is not the case that all importance sampling estimators have the same performance in terms of variance or, equivalently, time till convergence.
For example, in likelihood weighting, if Q = P(X_U) (the prior) is very far from P(X_U | X_O = x_O), then, with only a small number of weights dominating the estimate, the variance of the estimator can be potentially huge and the estimation will be poor unless a very large number of samples is generated.
For this reason, Q = P(X_U | X_O) would be the optimal proposal, and thus it would be helpful to obtain an estimate of this distribution to reduce the variance in importance sampling.
In an embodiment, a discriminative model UM(·) (a feedforward neural network, with an architecture related to an autoencoder but with multiple branches) is trained to approximate any possible posterior marginal distribution for any binary BN:

Y = UM(X_O) ≈ [P(X_1 | X_O), P(X_2 | X_O), ..., P(X_n | X_O)]^T,    (4)

where n is the total number of nodes and X_O are the observations. Y is a vector of conditional marginal probabilities for every node in the BN, whether observed or not (if node X_i is observed, the marginal posterior distribution for it will be trivial, i.e. P(X_i | X_O) = 1 or P(X_i | X_O) = 0).
To approximate any possible posterior marginal distribution, i.e., given any possible set of evidence X_O, only one model is needed. For this reason, the discriminative model is described as a Universal Marginalizer (UM). The existence of such a network is a direct consequence of the universal function approximation theorem (UFAT): marginalization in a BN can be considered as a function and, by UFAT, any measurable function can be approximated arbitrarily well by a neural network. Therefore, such a UM can be used as a proposal for the distribution.
The training process for the above described UM involves generating samples from the underlying BN, in each sample masking some of the nodes, and then training with the aim to learn a distribution over this data. This process is explained through the rest of the section and illustrated in Figure 3.
Such a model can be trained off-line by generating samples from the original BN (PGM 120 of figure 1) via ancestral sampling in step S201. In an embodiment, unbiased samples are generated from the probabilistic graphical model (PGM) using ancestral sampling. Each sample is a binary vector whose values the classifier will learn to predict.
In an embodiment, for the purpose of prediction, some nodes in the sample are then hidden, or "masked", in step S203. This masking is either deterministic (in the sense of always masking certain nodes) or probabilistic over nodes. In an embodiment, each node is probabilistically masked (in an unbiased way), for each sample, by choosing a masking probability p ∈ [0, 1] and then masking all data in that sample with probability p. However, in a further embodiment, the masking process is as described in Further Embodiment A.
The nodes which are masked (or unobserved when it comes to inference time) are represented consistently in the input tensor in step S205. Different representations of obscured nodes will be described later; for now, they will be represented as '*'.
The neural network is then trained using a cross entropy loss function in step S207 in a multi-label classification setting to predict the state of all observed and unobserved nodes. Any reasonable loss function (i.e., a twice-differentiable norm) could be used. In a further embodiment, the output of the neural net can be mapped to posterior probability estimates. However, when the cross entropy loss is used, the output from the neural net is exactly the predicted probability distribution. In a further embodiment, the loss function is split for different sub-sets of nodes for more efficient learning, as described in Further Embodiment A.
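The masked training step can be sketched minimally with a single sigmoid layer trained under the cross-entropy loss; the toy data (independent Bernoulli(0.3) nodes standing in for PGM samples), the 2-bit (observed-flag, value) encoding of '*', and all hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4  # toy node count

def make_batch(b):
    x = (rng.random((b, N)) < 0.3).astype(float)   # "samples from the PGM"
    observed = rng.random((b, N)) < 0.5            # mask each node w.p. 0.5
    inp = np.concatenate([observed, observed * x], axis=1)  # 2-bit encoding
    return inp, x

W = np.zeros((2 * N, N))
b0 = np.zeros(N)
for _ in range(3000):
    inp, t = make_batch(128)
    y = 1.0 / (1.0 + np.exp(-(inp @ W + b0)))      # sigmoid outputs = predicted marginals
    g = (y - t) / len(t)                           # grad of mean cross-entropy w.r.t. logits
    W -= 0.5 * inp.T @ g
    b0 -= 0.5 * g.sum(axis=0)

# With every node masked, the prediction approaches the prior P(X_i = 1) of ~0.3,
# illustrating that cross-entropy training recovers the conditional marginals.
pred_all_masked = 1.0 / (1.0 + np.exp(-b0))
```

The key point mirrored from the text: minimising the cross-entropy drives each output node toward P(node | partially obscured input), here simply the prior when nothing is observed.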
The trained neural network can then be used to obtain the desired probability estimates by directly taking the output of the sigmoid layer. This result could be used as a posterior estimate. However, in a further embodiment as described below the UM is combined with importance sampling to improve the accuracy of UM and the speed of importance sampling.
Thus a discriminative model is now produced which, given any set of observations X_O, will approximate all the posterior marginals in step S209. Note that the training of a discriminative model can be performed, as often practised, in batches; for each batch, new samples from the model can be drawn, masked and fed to the discriminative model training algorithm; all sampling, masking and training can be performed on Graphics Processing Units, again as often practised.
This trained neural net becomes the UM 113 of figure 1 and is used to produce the predictions sent to the inference engine 109 in step S115. In the embodiment described with reference to figure 1, Importance Sampling in the inference engine is augmented by using the predicted posteriors from the UM as the proposals. Using the UM+IS hybrid, it is possible to improve the accuracy of results for a given number of samples and ultimately speed up inference, while still maintaining the unbiased guarantees of Importance Sampling in the limit.
In the above discussion of Importance Sampling, we saw that the optimal proposal distribution Q for the whole network is the posterior itself, P(X_U | X_O = x_O), and thus for each node the optimal proposal distribution is Q_opt = P(X_i | X_O ∪ X_S), where X_O are the evidence nodes and X_S the already sampled nodes before sampling X_i.
As it is now possible, using the above UM, to approximate the conditional marginal for all nodes and for all evidences, the sampled nodes can be incorporated into the evidence to get an approximation for the posterior and use it as the proposal. For node i specifically, this optimal Q* is:
Q_i = UM(X_S ∪ X_O)_i ≈ P(X_i | X_S ∪ X_O).    (5)
The process for sampling from these approximately optimal proposals is illustrated in Algorithm 1 below and in Figure 4, where the part within the box is repeated for each node in the BN in topological order.
In step S301, the input is received and passed to the UM (NN). The NN input is then provided to the NN (which is the UM) in step S303. The UM calculates, in step S305, the output q that it provides in step S307. This is then provided to the inference engine in step S309 to sample node X_i from the PGM. Then, that node value is injected as an observation into the evidence, and the process is repeated for the next node (hence i := i + 1). In step S311, we receive a sample from the approximate joint.
[Algorithm 1: each node is sampled in topological order from the UM proposal, with sampled values added to the evidence for subsequent nodes.]
That is, following the requirement that parents are sampled before their children and adding any previously sampled nodes into the evidence for the next one, we are ultimately sampling from the approximation of the joint distribution. This can be seen by observing the product of the probabilities we are sampling from: the proposal Q, constructed in such a way, becomes the posterior

Q = Π_{i=1..N} Q_i ≈ P(X_1 | X_O) Π_{i=2..N} P(X_i | X_1, ..., X_{i-1}, X_O) = P(X_1, ..., X_N | X_O).    (9)
This procedure requires that nodes are sampled sequentially, using the UM to provide a conditional probability estimate at each step. This can affect computation time, depending on the parallelization scheme used for sampling. However, parallelization efficiency can be recovered by increasing the number of samples, or batch size, for all steps.
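The sequential procedure can be sketched as follows; the two-node toy network and the stand-in `toy_um` (which here happens to return exact conditionals) are illustrative assumptions replacing the trained UM:

```python
import numpy as np

rng = np.random.default_rng(3)

def sequential_sample(um, n_nodes, evidence):
    # Sample nodes in topological order, injecting each sampled value
    # into the evidence before querying the conditional for the next node.
    x = dict(evidence)          # node index -> value
    q_prob = 1.0                # probability of the drawn sample under Q
    for i in range(n_nodes):    # assumes indices are in topological order
        if i in x:
            continue
        q_i = um(x)[i]          # approx P(X_i = 1 | evidence and sampled nodes)
        x[i] = int(rng.random() < q_i)
        q_prob *= q_i if x[i] else (1.0 - q_i)
    return x, q_prob

# Stand-in UM for a chain X0 -> X1 with P(X0=1)=0.5, P(X1=1|X0)=0.2+0.5*X0.
def toy_um(x):
    p0 = x.get(0, 0.5)
    return {0: p0, 1: 0.2 + 0.5 * p0}

sample, q = sequential_sample(toy_um, 2, {})
draws = [sequential_sample(toy_um, 2, {})[0] for _ in range(20000)]
x1_marginal = np.mean([d[1] for d in draws])   # exact value is 0.45
```

With no evidence, repeated draws reproduce the joint: the empirical marginal of X1 converges to 0.5*0.2 + 0.5*0.7 = 0.45.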
In Importance Sampling, each node will be conditioned on nodes topologically before it. The training process may therefore be optimized by using a "sequential masking" process as in Figure 3, where firstly we randomly select a node X_i up to which nothing is masked, and then, as previously, mask some nodes starting from node X_{i+1} (where the nodes to be masked are selected randomly, as explained before). This provides a more optimal way of generating training data.
Another embodiment might involve a hybrid approach as shown in Algorithm 2 below. There, an embodiment might include calculating the conditional marginal probabilities only once, given the evidence, and then constructing a proposal for each node X_i as a mixture of those conditional marginals (with weight β) and the conditional prior distribution of the node (with weight (1 − β)).
Algorithm 2: Hybrid UM-IS
1: Order the nodes topologically X_1, ..., X_N, where N is the total number of nodes.
2: for m = 1, ..., M (where M is the total number of samples) do
3:   x ← ∅
4:   for i = 1, ..., N do
5:     sample node X_i from Q_i = β UM(x_O)_i + (1 − β) P(X_i = x_i | Pa(X_i))
6:     add x_i to x
7:   end for
8:   w_m = Π_i P_i / Q_i, where P_i is the likelihood P_i = P(X_i = x_i | Pa(X_i)) and Q_i is as above.
9: end for
While this hybrid approach might be easier and potentially less computationally expensive, in cases when P(X_i | X_S ∪ X_O) is far from P(X_i | X_O), this will be just a first-order approximation; hence the variance will be higher, and more samples are generally needed to obtain a reliable estimate.
The intuition for approximating P(X_i | X_O ∪ X_S) by linearly combining P(X_i | X_O) and P(X_i | Pa(X_i)) is that P(X_i | X_O) will take into account the effect of the evidence on node i, and P(X_i | Pa(X_i)) will take into account the effect of X_S, namely the parents. Note that β could also be allowed to be a function of the currently sampled state and the evidence; for example, if all the evidence is contained in the parents, then β = 0 is optimal.
Figure 5 shows a layout of a system in accordance with a further embodiment of the invention. The system 401 comprises a processor 403; the processor comprises Central Processing Units (CPUs) 405 and Graphical Processing Units (GPUs) 407 that operate under the control of the host. GPUs 407 offer a simplified instruction set that is well suited to a number of numerical applications. Due to the simplified instruction set, they are not suitable for general-purpose computing in the same way that CPUs are; however, thanks to these simplifications, a GPU 407 can offer a much larger number of processing cores. This makes the GPU 407 ideally suited to applications where computations can be parallelised.
To achieve these performance gains, algorithms usually need to be modified to express this parallelism in a form that is easy to implement on GPUs. This can be done either via low-level custom GPU instructions (e.g., implementing the algorithm in terms of low-level CUDA code) or, alternatively, algorithms can be expressed more generally in terms of common vectorised operations, such as scatter, gather and reduce on tensors, as well as higher-level numerical routines such as matrix transpose, multiplication, etc.
To express vectorised operations and to make use of higher level tensor frameworks with GPU support, it is possible to use products such as TensorFlow, PyTorch, etc. Once calculations are expressed in vectorised form, in an embodiment, it is possible to make use of the large number of processing cores in modern GPUs by generating a large batch of random numbers, for the importance sampling procedure. The GPU uses data acquired from the PGM 409.
The above approach using a GPU is used to determine the posterior marginal probabilities P(D_i | evidence), P(S_i | evidence) and P(RF_i | evidence).
In an embodiment, the Noisy-Or model for the conditional prior probabilities in the PGM is used (see, for example, Koller & Friedman 2009, Probabilistic Graphical Models: Principles and Techniques, The MIT Press). In an embodiment, the procedure is modified to improve the numerical stability and to parallelise the computation of the conditional priors.
To improve numerical accuracy, in an embodiment, most calculations are performed in the log domain. From basic properties of the log function, multiplication becomes addition. So, in the example of the Noisy-Or model, instead of calculating probabilities as multiplications of lambdas, the sum of logs is computed:

P(X_i = F | Pa(X_i) = [x_1, ..., x_k]) = λ_0 Π_k λ_k^{x_k} = exp(log(λ_0) + Σ_k x_k log(λ_k)), where x_k ∈ {0, 1}.

To further improve performance, the above is then expressed as a tensor operation. Here, a lambda matrix Λ is constructed where Λ_jk is equal to the log lambda value of node j with parent k, with Λ_jk = 0 if node k is not a parent of node j. P(X_j | Pa(X_j)) can then be expressed via Σ_k Λ_jk * S_k, where S is the samples tensor and * denotes element-wise multiplication.
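The log-domain noisy-OR computation as a tensor operation can be sketched as follows; the tiny two-node network and its parameter values are illustrative assumptions:

```python
import numpy as np

def noisy_or_false_prob(log_lam0, Lam, S):
    # P(X_j = False | parents) = lambda_0j * prod_k lambda_jk^{x_k}
    #                          = exp(log lambda_0j + sum_k Lam[j, k] * x_k),
    # computed for all nodes and all samples in one matrix product.
    return np.exp(log_lam0 + Lam @ S)

# Tiny worked example: node 1 has node 0 as its only parent.
lam0, lam = 0.9, 0.2                      # leak term and parent link strength
Lam = np.array([[0.0, 0.0],
                [np.log(lam), 0.0]])      # Lam[j, k] = log lambda, 0 if not a parent
log_lam0 = np.array([[0.0], [np.log(lam0)]])
S = np.array([[0, 1],                     # parent off in sample 0, on in sample 1
              [0, 0]])
P_false = noisy_or_false_prob(log_lam0, Lam, S)
# Child row: 0.9 when the parent is off, 0.9 * 0.2 = 0.18 when it is on.
```

Because the whole batch of samples S is multiplied at once, this is exactly the kind of vectorised operation that maps well onto GPU tensor frameworks.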
To illustrate this, firstly, the likelihood weighting method is shown. The following procedure generates one sample using likelihood weighting. A sample is a full instantiation of the network, that is all nodes in the network will be assigned a state. Nodes that are in the evidence set E will be set to their observed state, whereas nodes not in the evidence will be randomly sampled according to their conditional probability given their parents' state.
Procedure Generate-LW-Sample (
    B,      // Bayesian network over X
    E = e,  // Evidence
)
// conditional probabilities below are calculated using the Noisy-Or model
Let X_1, ..., X_n be a topological ordering of X
w = 1
for i = 1, ..., n
    u_i = x<Pa(X_i)>                   // get the sampled state of the parents of X_i
    if X_i not in E then               // if X_i not in evidence then sample
        x_i = sample from P(X_i | u_i)
    else
        x_i = e<X_i>                   // evidence state of X_i in e
        w = w * P(X_i | u_i)           // multiply weight by probability of the evidence state given the node's parents
return (x_1, ..., x_n), w
It is then possible to estimate a probability query y by generating M samples (calling the procedure above M times) and then calculating the estimate as:

P(y | e) ≈ Σ_{m=1..M} w_m I(y_m = y) / Σ_{m=1..M} w_m,

where I is an indicator function which is equal to 1 if the sampled state y_m of sample m is the same as the target y. For binary nodes, this simply means that all weights where y is true are summed and divided by the total sum of weights.
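The weighted query estimate for a binary node can be sketched directly; the sample values and weights below are arbitrary illustrative numbers:

```python
import numpy as np

def estimate_query(samples_y, weights, target=1):
    # P(y | e) ~ (weight of samples agreeing with the target) / (total weight)
    samples_y = np.asarray(samples_y)
    weights = np.asarray(weights, dtype=float)
    return np.sum(weights * (samples_y == target)) / np.sum(weights)

p = estimate_query([1, 0, 1, 1], [0.5, 1.0, 0.25, 0.25])
# (0.5 + 0.25 + 0.25) / (0.5 + 1.0 + 0.25 + 0.25) = 1.0 / 2.0 = 0.5
```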
This procedure can then be extended for importance sampling as follows:
Procedure Generate-IS-Sample (
    B,      // Bayesian network over X
    E = e,  // Evidence
    Q,      // proposal probability distribution
)
// conditional probabilities below are calculated using the Noisy-Or model
Let X_1, ..., X_n be a topological ordering of X
w = 1
for i = 1, ..., n
    u_i = x<Pa(X_i)>                   // get the sampled state of the parents of X_i
    if X_i not in E then               // if X_i not in evidence then sample
        p = P(X_i | u_i)
        q = Q(X_i | u_i, E)
        x_i = sample from q
        w = w * (p / q)
    else
        x_i = e<X_i>                   // evidence state of X_i in e
        w = w * P(X_i | u_i)           // multiply weight by probability of the evidence state given the node's parents
return (x_1, ..., x_n), w
The main difference here is that instead of sampling directly from P it is now possible to sample from Q and correct the weight by the ratio p/q. This can be parallelized on a GPU by simply generating a batch of multiple samples at a time:
Procedure IS-Generate-Batch-of-Samples (
    B,      // Bayesian network over X
    E = e,  // Evidence
    Q,      // proposal probability distribution
    K       // batch size
)
// conditional probabilities below are calculated using the Noisy-Or model
Let X_1, ..., X_n be a topological ordering of X
w = [1, ..., 1]                        // vector of weights of size K×1
for i = 1, ..., n
    u_i = x<Pa(X_i)>                   // get the sampled state of the parents of X_i in each sample in the batch, dimension K×1
    if X_i not in E then               // if X_i not in evidence then sample
        p = P(X_i | u_i)
        q = Q(X_i | u_i, E)
        x_i = sample K times from q    // dimension K×1
        w = w * (p / q)                // multiply each element of w by p/q
    else
        x_i = e<X_i>                   // evidence state of X_i in e, K×1 dimension
        w = w * P(X_i | u_i)           // multiply weight by probability of the evidence state given the node's parents, K×1 dimension
return [(x_1, ..., x_n)_1, ..., (x_1, ..., x_n)_K], w   // returns K samples and K weights
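The batched procedure can be sketched in numpy, which stands in here for a GPU tensor framework: all K samples for a node are drawn at once and the weights are updated element-wise. The tiny two-node network (X0 → X1) and the flat proposal are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

def is_generate_batch(evidence, q0=0.5, K=200_000):
    # Node X0, not in evidence: sample all K values from the proposal q0
    # at once and weight element-wise by p/q.
    x0 = (rng.random(K) < q0).astype(int)
    p0 = np.where(x0 == 1, 0.3, 0.7)               # prior P(X0)
    w = p0 / np.where(x0 == 1, q0, 1 - q0)
    # Node X1, observed: weight by the likelihood P(X1 = e | X0).
    p1 = np.where(x0 == 1, 0.8, 0.1)               # P(X1 = 1 | X0)
    w *= np.where(evidence[1] == 1, p1, 1 - p1)
    return x0, w

x0, w = is_generate_batch({1: 1})
posterior_x0 = np.sum(w * x0) / np.sum(w)
# Exact: P(X0=1 | X1=1) = 0.3*0.8 / (0.3*0.8 + 0.7*0.1) = 24/31 ~ 0.774
```

The per-sample loop of Generate-IS-Sample disappears entirely; only the loop over nodes remains, which is what makes the method parallelise well on GPUs.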
It is possible to make significant improvements to the efficiency of the tensor representation by optimising the number and size of the tensors used to represent the network - essentially representing the network by a number of 'layers' upon which independent sampling/calculations can be performed. Working with smaller tensors increases the speed of computation for the numerous tensor manipulations used throughout the inference process, but at the expense of increasing the number of those manipulations required due to the increased number of tensors representing the same network.
To optimise the decomposition of the network into layers, the topologically sorted list of network nodes is split into multiple potential 'layers' via a grid search over three parameters based on the size of the tensors created by each layer, namely:
• the minimum tensor size
• the maximum tensor size
• the total 'waste' incurred due to the sequentially increasing tensor size
The resultant layers at each grid point are tested according to the metric:

M = (10 * number_of_layers) + total_waste

where the total waste is calculated as the penalty incurred by the incremental increase in an individual layer's tensor size. The group of layers with the lowest M is chosen as the optimum representation.
To improve sampling efficiency, in an embodiment, the current estimate of the posterior being calculated was mixed into the Importance Sampling proposal distribution Q. It was found that this helps with convergence. To calculate the current estimate, the same probability query formula as above was used with the samples generated so far:

q'(X_i | u_i, E) = (1 - a) * q(X_i | u_i, E) + a * current_estimate of P(X_i | E)
In a further embodiment, proposal probabilities q were kept within a maximum precision range, which improved the sampling efficiency of the importance sampler in some cases by requiring fewer samples to arrive at a target accuracy.
Clipping is performed as follows: clipped q = min(max(q, ε), 1-ε)
Here ε was set to 0.001.
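The two proposal adjustments above, mixing in the running posterior estimate and clipping to a precision range, can be sketched together; the function name and defaults are illustrative:

```python
def adjust_proposal(q, current_estimate, alpha=0.1, eps=0.001):
    # Mix the running posterior estimate into the proposal (weight alpha),
    # then clip the result into [eps, 1 - eps] for numerical stability.
    q_mixed = (1 - alpha) * q + alpha * current_estimate
    return min(max(q_mixed, eps), 1 - eps)

mixed = adjust_proposal(0.5, 0.7)     # 0.9*0.5 + 0.1*0.7 = 0.52
clipped_low = adjust_proposal(0.0, 0.0)   # clipped up to eps = 0.001
```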
In a further embodiment, for importance sampling, q was constrained (i.e. by redefining the proposal q := clipping(q)) to be not too different from p in order to reduce the variance of the weights. One simple heuristic that was found useful was to ensure that q is such that γ^(-1) ≤ p/q ≤ γ holds, with a typical value for γ of 2.
In a yet further embodiment, for some nodes an extension to the noisy-OR model was employed which is particularly useful for medical diagnosis, namely the noisy-MAX model. For a binary disease node, the noisy-OR model allows a child node representing a symptom to be binary (e.g. absent/present). The noisy-MAX model, however, allows nodes to take one of a variety of states. Therefore, for a symptom node it becomes possible to encode the severity of the symptom, for example, by any number of particular states (e.g. absent/mild/strong/severe). Whereas in the noisy-OR model each node-parent connection is described by a single probability parameter (lambda), the noisy-MAX algorithm requires multiple parameters describing the multiple states in which the node can exist.
Noisy-MAX nodes are therefore also implemented on GPUs in this embodiment by adding an additional dimension to the lambda matrix of probability values, and producing categorical samples according to the values in this dimension (i.e. sampling from a number of possible states, as opposed to simply true/false).
A demonstration of the above is presented in Further Embodiment A. In addition to demonstrate the above, the following experiments were performed:
The UM network was trained using cross-entropy loss. Specifically, the ReLU non-linearity was used, dropout of 0.5 was applied before each hidden layer, and the Adam optimizer was used.
As noted above, there are many options for ways to represent the unobserved nodes on the input layer of the neural network when training the UM as explained with reference to figure 3.
Three representations were trialled:
1. 32-bit Continuous Representation: Represent false as 0, true as 1 and unobserved values by a point somewhere between 0 and 1, analogous to the probability of the input being true. Three values were used for unobserved: 0, 0.5 and the prior of the node.
2. 2-bit Representation: Here, one bit was used to represent whether the node is observed and another to represent whether it is true: {(Observed, True), (Observed, False), (Unobserved, False)} = {(1, 1), (1, 0), (0, 0)}, which is equivalent to {True, False, Unobserved} = {1, 2, 3} in terms of information.
3. 33-bit Representation (1 bit + Continuous): A further option is to combine the two approaches, allowing one bit to represent whether the node is observed and another variable, a float between 0 and 1, to represent the probability of it being true.
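The three trialled encodings can be sketched in one helper; the function name and interface are illustrative, not from the original:

```python
import numpy as np

def encode(x, observed, scheme, prior=None):
    # x: node values; observed: which nodes are evidence; prior: optional
    # per-node priors used as the fill value for unobserved nodes.
    x = np.asarray(x, dtype=float)
    observed = np.asarray(observed, dtype=bool)
    fill = np.asarray(prior, float) if prior is not None else np.full_like(x, 0.5)
    if scheme == "continuous":   # 32-bit: unobserved -> 0, 0.5 or the prior
        return np.where(observed, x, fill)
    if scheme == "2bit":         # (observed?, true?) bit pairs
        return np.stack([observed.astype(float), np.where(observed, x, 0.0)])
    if scheme == "33bit":        # observed flag + continuous probability
        return np.stack([observed.astype(float), np.where(observed, x, fill)])
    raise ValueError(scheme)

cont = encode([1, 0, 1], [True, False, True], "continuous")
two_bit = encode([1, 0, 1], [True, False, True], "2bit")
```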
To measure the quality of the conditional marginals in themselves, a test set of evidences was used. For each evidence, "ground truth" posterior marginals were calculated via likelihood sampling with 300 million samples. Then two main metrics were used to assess the quality of the distributions. The first is the mean absolute error, which calculates the absolute error between the true node posterior and the predicted node posterior, averaged over the test set of evidences.
The second metric is the max error; this looks for the maximum probability error across all nodes in the predictions and then averages these over data points. A grid search was run on network size and unobserved representation, and the results are reported using the two metrics in Table 1.
Table 1: avg error / max error for 20,000 iterations (table not reproduced).
It can be seen that the largest one layer network performs the best. The difference between the two representations is not large, but the results suggest that providing the priors may help improve performance.
Next, experiments were performed to assess the use of UM posterior estimates as proposals.
To do this comparison, the error to the test set over time was evaluated as the number of samples increases. This was done for a standard likelihood weighting and also Importance Sampling with the UM marginals as a proposal distribution. Again both average absolute and max errors over all the case cards were measured.
Firstly, the approximate joint distribution as described above was used with an empirically very good β of 0.1. With β = 0.1, results equivalent to those from 750k likelihood-weighting samples were obtained with 250k samples, so this is already a 3× speed-up.
Although the above has been described in relation to medical data, the above system could also be used for any determination process where there are a plurality of interlinked factors which are evidenced by observations and a determination of a probable cause is required. For example, the above method can be used in financial systems. Also, although the above has used the output of the discriminative model as an aid in conducting approximate inference, in some cases, the estimate produced by the discriminative model may be used on its own. Also, the embeddings of such discriminative models (e.g. neural networks) can serve as a vectorised representation of the provided evidence for tasks like classification and clustering or interpretation of node relationships, as described in further embodiment A.
Further embodiment A
A framework for the embodiments described herein is shown in figure 6, which shows the Universal Marginaliser (UM): the UM performs scalable and efficient inference on graphical models. This figure shows one pass through the network. First, (1) a sample is drawn from the PGM, (2) values are then masked, (3) the masked set is passed through the UM, which then (4) computes the marginal posteriors.
As described above, probabilistic graphical models are powerful tools that allow formalisation of knowledge about the world and reasoning about its inherent uncertainty. There exist a considerable number of methods for performing inference in probabilistic graphical models; however, they can be computationally costly due to significant time and/or storage requirements, or they lack theoretical guarantees of convergence and accuracy when applied to large-scale graphical models. To this end, the above described Universal Marginaliser Importance Sampler (UM-IS) is implemented: a hybrid inference scheme that combines the flexibility of a deep neural network trained on samples from the model and inherits the asymptotic guarantees of importance sampling. The embodiment described herein shows how combining samples drawn from the graphical model with an appropriate masking function allows training of a single neural network to approximate any of the corresponding conditional marginal distributions, and thus amortise the cost of inference. It is also shown that the graph embeddings can be applied to tasks such as clustering, classification and interpretation of relationships between the nodes. Finally, the method is benchmarked on a large graph (>1000 nodes), showing that UM-IS outperforms sampling-based methods by a large margin while being computationally efficient.
In this embodiment, the Universal Marginaliser Importance Sampler (UM-IS), an amortised inference-based method for graph representation and efficient computation of asymptotically exact marginals, is used. In order to compute the marginals, the UM still relies on Importance Sampling (IS). A guiding framework based on amortised inference is used that significantly improves the performance of the sampling algorithm, rather than computing marginals from scratch every time the inference algorithm is run. This speed-up allows the application of the inference scheme on large PGMs for interactive applications with minimal errors. Furthermore, the neural network can be used to calculate a vectorised representation of the evidence nodes. This representation can then be used for various machine learning tasks such as node clustering and classification.
The main contributions of this embodiment are as follows:
• UM-IS is used as a novel algorithm for amortised inference-based importance sampling. The model has the flexibility of a deep neural network to perform amortised inference. The neural network is trained purely on samples from the model prior, and it benefits from the asymptotic guarantees of importance sampling.
• The efficiency of importance sampling is significantly improved, which makes the proposed method applicable for interactive applications that rely on large PGMs.
• It will be shown on a variety of toy networks and on a medical knowledge graph (>1000 nodes) that the proposed UM-IS outperforms sampling-based and deep learning-based methods by a large margin, while being computationally efficient.
• It will be shown that the network embeddings can serve as a vectorised representation of the provided evidence for tasks like classification and clustering or interpretation of node relationships.
As described above, the Universal Marginaliser (UM) is a feed-forward neural network, used to perform fast, single-pass approximate inference on general PGMs at any scale. The UM can be used together with importance sampling as the proposal distribution, to obtain asymptotically exact results when estimating marginals of interest. This hybrid model will be referred to as the Universal Marginaliser Importance Sampler (UM-IS).
As described above, a Bayesian Network (BN) encodes a distribution P over the random variables X = {X_1, ..., X_N} through a Directed Acyclic Graph (DAG); the random variables are the graph nodes and the edges dictate the conditional independence relationships between random variables. Specifically, the conditional distribution of a random variable X_i given its parents Pa(X_i) is denoted P(X_i | Pa(X_i)).
The random variables can be divided into two disjoint sets: X_O ⊂ X, the set of observed variables within the BN, and X_U ⊆ X \ X_O, the set of unobserved variables.
In this embodiment, a Neural Network (NN) is implemented as an approximation to the marginal posterior distributions P(X_i | X_O = x_O) for each variable X_i ∈ X given an instantiation x_O of any set of observations. x_O is defined as the encoding of the instantiation that specifies which variables are observed, and what their values are. For a set of binary variables X_i with i ∈ {1, ..., N}, the desired network maps the N-dimensional binary vector x_O to a vector in [0, 1]^N representing the probabilities p_i := P(X_i = 1 | X_O = x_O):

UM(x_O) ≈ [p_1, ..., p_N].    (A.1)
This NN is used as a function approximator; hence, it can approximate any posterior marginal distribution given an arbitrary set of evidence X_O. For this reason, this discriminative model is termed the Universal Marginaliser (UM). If the marginalisation operation in a Bayesian Network is considered as a function f : B^N → [0, 1]^N, then the existence of a neural network which can approximate this function is a direct consequence of the Universal Function Approximation Theorem (UFAT). It states that, under mild assumptions of smoothness, any continuous function can be approximated to an arbitrary precision by a neural network with a finite, but sufficiently large, number of hidden units. Once the weights of the NN are optimised, the activations of those hidden units can be computed for any new set of evidence. They are a compressed vectorised representation of the evidence set and can be used for tasks such as node clustering or classification.
Next, each step of the UM's training algorithm for a given PGM will be described. This model is typically a multi-output NN with one output per node in the PGM (i.e. per variable X_i). Once trained, this model can handle any type of input evidence instantiation and produce approximate posterior marginals P(X_i = 1 | X_O = x_O).
The flow chart with each step of the training algorithm is depicted in Fig. 7. For simplicity, it will be assumed that the training data (samples from the PGM) is pre-computed, and only one epoch is used to train the UM.
In practice, the following steps 1-4 are applied to each of the mini-batches separately rather than to the full training set all at once. This improves memory efficiency during training and ensures that the network receives a large variety of evidence combinations, accounting for low probability regions in P. The steps are given as follows:
1. S501 Acquiring samples from the PGM. The UM is trained offline by generating unbiased samples (i.e., complete assignments of all variables) from the PGM using ancestral sampling.
The PGM described here only contains binary variables X_i, and each sample S_i ∈ B^N is a binary vector. In the next steps, these vectors will be partially masked as input and the UM will be trained to reconstruct the complete unmasked vectors as output.
2. S503 Masking. In order for the network to approximate the marginal posteriors at test time, and to be able to do so for any input evidence, each sample S_i is partially masked. The network will then receive as input a binary vector where a subset of the nodes initially observed have been hidden, or masked. This masking can be deterministic, i.e., always masking specific nodes, or probabilistic. Here a different masking distribution is used for every iteration during the optimisation process. This is achieved in two steps. First, two random numbers i, j ~ U[0, N] are sampled from a uniform distribution, where N is the number of nodes in the graph. Next, masking is performed by hiding i randomly selected nodes in the positive state and j randomly selected nodes in the negative state. In this way, the ratio between the positive and negative evidence and the total number of masked nodes differs with every iteration. A network with a large enough capacity will eventually learn to capture all these possible representations. There is some analogy here to dropout in the input layer, and so this approach could work well as a regulariser, independently of this problem. However, standard dropout is not suitable for this problem because of its constant dropout probability for all nodes.
3. S505 Encoding the masked elements. Masked elements in the input vectors S_i artificially reproduce queries with unobserved variables, and so their encoding must be consistent with the one used at test time. The encodings are detailed below.
4. S507 Training with Cross Entropy Loss. The NN was trained by minimising the multi-label binary cross entropy between the sigmoid output layer and the unmasked samples S_i.
5. S509 Outputs: Posterior marginals. The desired posterior marginals are approximated by the output of the last NN-layer. These values can be used as a first estimate of the marginal posteriors (UM approach); however, combined with importance sampling, these approximated values can be further refined (UM-IS approach).
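The masking of step 2 and the loss of step 4 can be sketched in plain Python; the helper names and the use of `None` to mark hidden nodes are illustrative assumptions, not the embodiment's implementation:

```python
import math
import random

def mask_sample(sample, rng=None):
    """Step S503 sketch: draw two counts i, j uniformly, then hide i randomly
    chosen positive-state nodes and j randomly chosen negative-state nodes,
    so the evidence ratio differs at every training iteration."""
    rng = rng or random.Random(0)
    n = len(sample)
    i = rng.randint(0, n)  # number of positive-state nodes to mask
    j = rng.randint(0, n)  # number of negative-state nodes to mask
    pos = [k for k, v in enumerate(sample) if v == 1]
    neg = [k for k, v in enumerate(sample) if v == 0]
    hidden = set(rng.sample(pos, min(i, len(pos))) +
                 rng.sample(neg, min(j, len(neg))))
    # None marks an unobserved node; it is encoded separately in step S505
    return [None if k in hidden else v for k, v in enumerate(sample)]

def multilabel_bce(preds, targets, eps=1e-7):
    """Step S507 sketch: multi-label binary cross entropy between the
    sigmoid outputs and the original unmasked sample."""
    total = 0.0
    for p, t in zip(preds, targets):
        p = min(max(p, eps), 1.0 - eps)  # clip for numerical stability
        total -= t * math.log(p) + (1.0 - t) * math.log(1.0 - p)
    return total / len(preds)
```

In a real training loop the masked, encoded vectors would be the network input and the unmasked vectors the regression target.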
The UM is a discriminative model which, given a set of observations X_O, will approximate all the posterior marginals. While useful on its own, the estimated marginals are not guaranteed to be unbiased. To obtain a guarantee of asymptotic unbiasedness while making use of the speed of the approximate solution, the estimated marginals are used as proposals in importance sampling. A naive approach is to sample each X_i ∈ X_U independently from UM(x_O)_i, where UM(x_O)_i is the i-th element of the vector UM(x_O). However, the product of the (approximate) posterior marginals may be very different from the true posterior joint, even if the marginal approximations are good.
The universality of the UM makes the following scheme possible, which will be termed Sequential Universal Marginaliser Importance Sampling (SUM-IS). A single proposal x_s is sampled sequentially as follows. First, a new partially observed state x_{s∪O} is introduced and initialised to x_O. Then, [x_s]_1 ~ UM(x_O)_1 is sampled, and the state x_{s∪O} is updated such that X_1 is now observed with this value. This process is repeated, at each step sampling [x_s]_i ~ UM(x_{s∪O})_i and updating x_{s∪O} to include the new sampled value. Thus, the conditional marginal can be approximated for a node i given the current sampled state x_s and evidence x_O to get the optimal proposal Q* as follows:
Q_i* = P(X_i | {X_1, . . . , X_{i-1}} ∪ X_O) ≈ UM(x_{s∪O})_i.    (A.2)
Thus, the full sample xs is drawn from an implicit encoding of the approximate posterior joint distribution given by the UM. This is because the product of sampled probabilities from Equation A.3 is expected to yield low variance importance weights when used as a proposal distribution.
Q = UM(x_O)_1 ∏_{i=2}^{N} UM(x_{s∪O})_i    (A.3)

  ≈ P(X_1 | X_O) ∏_{i=2}^{N} P(X_i | X_1, . . . , X_{i-1}, X_O).    (A.4)
The process for sampling from these proposals is illustrated in Algorithm 1A and in Fig. 8. The nodes are sampled sequentially using the UM to provide a conditional probability estimate at each step. This requirement can affect computation time, depending on the parallelisation scheme used for sampling. In our experiments, we observed that some parallelisation efficiency can be recovered by increasing the number of samples per batch.
Algorithm 1A: Sequential Universal Marginaliser importance sampling

1: Order the nodes topologically X_1, . . . , X_N, where N is the total number of nodes.
2: for j in [1, . . . , M] (where M is the total number of samples) do
3:     x_s <- {}
4:     for i in [1, . . . , N] do
5:         Q(X_i) <- UM(x_{s∪O})_i
6:         sample node x_i from Q(X_i); add x_i to x_s
7:     end for
8:     w_j <- P(x_s) / Q(x_s)
9: end for
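A plain-Python sketch of one SUM-IS proposal draw along these lines; the `um` and `joint_p` interfaces are assumptions for illustration, not the embodiment's implementation:

```python
import random

def sum_is_sample(um, x_o, n_nodes, joint_p, rng=None):
    """Draw one sequential proposal sample and its importance weight.

    Assumed interfaces: um(state) returns a list of approximate conditional
    marginals for all nodes given a partial state (a dict node -> 0/1);
    joint_p(x) evaluates the unnormalised target P(x, x_o)."""
    rng = rng or random.Random(0)
    x_s = dict(x_o)  # start from the evidence
    q = 1.0          # accumulated proposal probability Q(x_s)
    for i in range(n_nodes):
        if i in x_s:
            continue                     # evidence nodes stay fixed
        p_i = um(x_s)[i]                 # Q(X_i) from the UM
        v = 1 if rng.random() < p_i else 0
        q *= p_i if v == 1 else 1.0 - p_i
        x_s[i] = v                       # condition later nodes on the draw
    w = joint_p(x_s) / q                 # importance weight (step 8)
    return x_s, w
```

With a trivial UM that always returns 0.5 and a flat target, each sample has proposal probability 0.25 over its two free nodes, so the weight is exactly 4.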
The architecture of the UM is shown in Fig. 9. It has a denoising auto-encoder structure with multiple branches, one branch for each node of the graph. In the experiments, it was noticed that the cross entropy loss for different nodes depends strongly on the number of parents and the depth in the graph. To simplify the network and reduce the number of parameters, the weights of all fully connected layers that correspond to a specific type of node are shared. The types are defined by depth in the graph (type 1 nodes have no parents, type 2 nodes have only type 1 nodes as parents, etc.). The architecture of the best performing model on the large medical graph has three types of nodes, and the embedding layer has 2048 hidden states.
In experiments, the best performing UM in terms of Mean Absolute Error (MAE) on the test set was chosen for the subsequent experiments. ReLU non-linearities were used, dropout was applied on the last hidden layer, and the Adam optimisation method was used with a batch size of 2000 samples per batch for parameter learning. Batch normalisation was also used between the fully connected layers. To train the model on the large medical graphical model, a stream of 3 × 10^ samples was used in total, which took approximately 6 days on a single GPU.
Experiments were performed on a large (>1000 nodes) proprietary Bayesian Network for medical diagnosis representing the relationships between risk factors, diseases and symptoms. An illustration of the model structure is given in Fig. 10(c).
Different NN architectures were tried with a grid search over the hyperparameters: the number of hidden layers, the number of states per hidden layer, the learning rate, and the strength of regularisation through dropout.
The quality of the approximate conditional marginals was measured using a test set of posterior marginals computed for 200 sets of evidence via ancestral sampling with 300 million samples. The test evidence set for the medical graph was generated by experts from real data. The test evidence set for the synthetic graphs was sampled from a uniform distribution. Standard importance sampling, which corresponds to the likelihood weighting algorithm for discrete Bayesian networks, was run with 8 GPUs over the course of 5 days to compute precise marginal posteriors for all test sets.
Two main metrics are considered: the Mean Absolute Error (MAE), given by the absolute difference of the true and predicted node posteriors, and the Pearson Correlation Coefficient (PCC) of the true and predicted marginal vectors. Note that we did not observe negative correlations, and therefore both measures are bounded between 0 and 1. The Effective Sample Size (ESS) statistic was used for the comparison with standard importance sampling. This statistic measures the efficiency of the different proposal distributions used during sampling. Because in this case there was no access to the normalising constant of the posterior distribution, the ESS is defined as

ESS = (Σ_{i=1}^{n} w_i)^2 / Σ_{i=1}^{n} w_i^2,

where the weights w_i are defined in Step 8 of Algorithm 1A.
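As a minimal sketch, this self-normalised ESS, (Σ_i w_i)^2 / Σ_i w_i^2, can be computed directly from the importance weights:

```python
def effective_sample_size(weights):
    """Self-normalised ESS, usable when the normalising constant of the
    posterior is unknown: (sum of weights)^2 / (sum of squared weights)."""
    s = sum(weights)
    return s * s / sum(w * w for w in weights)
```

Equal weights give an ESS equal to the number of samples, while one dominant weight collapses the ESS towards 1, which is exactly the degeneracy a good proposal avoids.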
A one hot encoding was considered for the unobserved and observed nodes. This representation only requires two binary values per node. One value represents if the node is observed and positive ([0,1]) and the other value represents whether this node is observed and negative ([1,0]). If the node is unobserved or masked, then both values are set to zero ([0,0]).
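A plain-Python sketch of this two-bit encoding (the function names are illustrative, not from the embodiment):

```python
def encode_node(value):
    """Two-bit encoding described above: observed positive -> [0, 1],
    observed negative -> [1, 0], unobserved or masked -> [0, 0]."""
    if value is None:
        return [0, 0]
    return [0, 1] if value == 1 else [1, 0]

def encode_evidence(nodes):
    # flatten to the 2N-dimensional binary input vector of the network
    return [bit for v in nodes for bit in encode_node(v)]
```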
In this section, first the results of different architectures for the UM will be discussed, then the performance of importance sampling will be compared with different proposal functions. Finally, the efficiency of the algorithm will be discussed. A hyperparameter grid search was used over the different network architectures and data representations. The algorithmic performance was not greatly affected by the different types of data representation. It is hypothesised that this is because neural networks are flexible models capable of handling different types of inputs efficiently by capturing the representations within the hidden layers. In contrast, the network architecture of the UM strongly depends on the structure of the PGM. For this reason, a specific UM needs to be trained for each PGM. This task can be computationally expensive, but once the UM is trained, it can be used to compute the approximate marginals in a single forward pass on any new and even unseen set of evidence.
In order to evaluate the performance of the sampling algorithms, the change in PCC and MAE on the test sets was monitored with respect to the total number of samples. It was noticed across all experiments that a faster increase in the PCC is observed when the UM predictions are used as proposals for importance sampling. This effect becomes more pronounced as the size of the graphical model increases. Fig. 10 indicates that standard IS (blue line) reaches a PCC close to 1 and an MAE close to 0 on the small network with 96 nodes. In this case of very small graphs, both algorithms converge quickly to the exact solution. However, UM-IS (orange line) still outperforms IS and converges faster, as seen in Fig. 10(a). For the synthetic graph with 798 nodes, standard IS reaches an MAE of 0.012 with 10^ samples, whereas the UM-IS error is 3 times lower (0.004) for the same number of samples. The same conclusions can also be drawn for PCC. Most interestingly, on the large medical PGM (Fig. 10(c)), UM-IS with 10^ samples exhibits better performance than standard IS with 10^ samples in terms of MAE and PCC. In other words, the time (and computational cost) of the inference algorithm is significantly reduced, by a factor of ten or more. This improvement is expected to be even stronger on much larger graphical models. The results of a simple UM architecture as a baseline are also included. This simple UM (UM-IS-Basic) has a single hidden layer that is shared across all nodes of the PGM. It can be seen that the MAE and PCC still improve over standard IS. However, UM-IS with multiple fully connected layers per group of nodes outperforms the basic UM by a large margin. There are two reasons for this. First, the model capacity of the UM is higher, which allows it to learn more complex structures from the data.
Secondly, the losses in the UM are spread across all groups of nodes, and the gradient update steps are optimised with the right order of magnitude for each group. This prevents the model from overfitting to the states of a specific type of node with a significantly higher loss.
Extracting meaningful representations from the evidence set is an additional interesting feature of the UM. In this section, qualitative results for this application are demonstrated. The graph embeddings are extracted as the 2048-dimensional activations of the inner layer of the UM (see Fig. 9). They are a low-dimensional vectorised representation of the evidence set in which the graph structure is preserved. That means that the distance between nodes that are tightly connected in the PGM should be smaller than the distance between nodes that are independent. In order to visualise this feature, the first two principal components of the embeddings from different evidence sets which are known to be related are plotted. The evidence set from the medical PGM is used, with different diseases, risk factors and symptoms as nodes. Fig. 11(a) shows that the embeddings of sets with active Type-1 and Type-2 diabetes are collocated. Although the two diseases have different underlying causes and connections in the graphical model (i.e., pancreatic beta-cell atrophy and insulin resistance respectively), they share similar symptoms and complications (e.g., cardiovascular diseases, neuropathy, increased risk of infections, etc.). A similar clustering can be seen in Fig. 11(b) for two cardiovascular risk factors, smoking and obesity, interestingly collocated with a sign seen in patients suffering from a severe heart condition (i.e., unstable angina, or acute coronary syndrome): chest pain at rest.
To further assess the quality of the UM embeddings, experiments are performed for node classification with different features and two different classifiers. More precisely, an SVM and a Ridge regression model with thresholded binary output were trained for multitask disease detection. These models were trained to detect the 14 most frequent diseases from (a) the set of evidence or (b) the embedding of that set. A 5-fold standard cross validation was used with a grid search over the hyperparameters of both models and the number of PCA components for data preprocessing. Table 1A shows the experimental results for the two types of features. As expected, the models that were trained on the UM embeddings reach a significantly higher performance across all evaluation measures. This is mainly because the embeddings of the evidence set are effectively compressed and structured, and also preserve the information from the graph structure. Note that the mapping from the evidence set to the embeddings was optimised with a large number of generated samples (3 × 10^ 1) during the UM learning phase. Therefore, these representations can be used to build more robust machine learning methods for classification and clustering, rather than using the raw evidence set of the PGM.
Table 1A: Classification performance using two different feature sets. Each classifier is trained on either dense (the dense embedding as features) or input (the top layer, i.e. the UM input, as features). The target (output) is always the disease layer.
           Linear SVC                     Ridge
           dense         input           dense         input
F1         0.67 ± 0.01   0.07 ± 0.00     0.66 ± 0.04   0.17 ± 0.01
Precision  0.84 ± 0.03   0.20 ± 0.04     0.81 ± 0.06   0.22 ± 0.04
Recall     0.58 ± 0.02   0.05 ± 0.00     0.59 ± 0.04   0.16 ± 0.01
Accuracy   0.69 ± 0.01   0.31 ± 0.01     0.63 ± 0.02   0.27 ± 0.01
The above embodiment discusses a Universal Marginaliser based on a neural network which can approximate all conditional marginal distributions of a PGM. It is shown that a UM can be used, via a chain decomposition of the BN, to approximate the joint posterior distribution, and thus the optimal proposal distribution for importance sampling. While this process is computationally intensive, a first-order approximation can be used requiring only a single evaluation of a UM per evidence set. The UM is evaluated on multiple datasets and also on a large medical PGM, demonstrating that the UM significantly improves the efficiency of importance sampling. The UM was trained offline using a large amount of generated training samples, and for this reason the model learned an effective representation for amortising the cost of inference. This speed-up makes the UM (in combination with importance sampling) applicable to interactive applications that require high performance on very large PGMs. Furthermore, the use of the UM embeddings was explored, and it has been shown that they can be used for tasks such as classification, clustering and interpretability of node relations. These UM embeddings make it possible to build more robust machine learning applications that rely on large generative models.
Next, for completeness, an overview of importance sampling and how it is used for computing the marginals of a PGM given a set of evidence will be described.
In BN inference, Importance Sampling (IS) is used to provide the posterior marginal estimates P(X_U | X_O). To do so, samples x_U are drawn from a distribution Q(X_U | X_O), known as the proposal distribution. The proposal distribution must be defined such that both sampling from it and evaluating it can be performed efficiently.
Provided that P(X_U, X_O) can be evaluated, and that this distribution is such that X_U contains the Markov boundary of X_O along with all its ancestors, IS states that posterior estimates can be formed as shown in Equation B.1 below:

P(X_U = x_U | x_O) ≈ Σ_{i=1}^{n} w_i 1_{x_U}(x_i) / Σ_{i=1}^{n} w_i,    (B.1)

where x_i ~ Q and w_i = P(x_i, x_O) / Q(x_i, x_O) are the importance sampling weights and 1_{x_U}(x) is an indicator function for x_U.
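A minimal self-normalised estimator in this spirit, with illustrative names (`matches` plays the role of the indicator function):

```python
def is_estimate(samples, weights, matches):
    """Self-normalised importance-sampling estimate: the total weight of
    samples matching the query state, divided by the total weight."""
    num = sum(w for x, w in zip(samples, weights) if matches(x))
    return num / sum(weights)
```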
The simplest proposal distribution is the prior, P(X_U). However, as the prior and the posterior may be very different (especially in large networks), this is often an inefficient approach. An alternative is to use an estimate of the posterior distribution as a proposal. In this work, we argue that the UM learns an optimal proposal distribution.
In an embodiment, for sampling from the posterior marginal, a BN can be considered with Bernoulli nodes and of arbitrary size and shape. Consider two specific nodes, X_i and X_j, such that X_j is caused only and always by X_i:

P(X_j = 1 | X_i = 1) = 1,
P(X_j = 1 | X_i = 0) = 0.

Given evidence E, it can be assumed that P(X_j | E) = P(X_i | E).
It will now be illustrated that using the posterior distribution P (X\E) as a proposal will not necessarily yield the best result.
Say we have been given evidence E, and the true conditional probability is P(X_i | E) = 0.001, and therefore also P(X_j | E) = 0.001. It would naively be expected that P(X | E) is the optimal proposal distribution. However, the problems can be illustrated by sampling with Q = P(X | E) as the proposal.
Each node k ∈ {1, . . . , N} will have a weight w_k = P(X_k) / Q(X_k), and the total weight of the sample will be

w = ∏_{k=1}^{N} w_k.
The weights should be approximately 1 if Q is close to P. However, consider w_j. There are four combinations of X_i and X_j. The combination X_i = 1, X_j = 1 will only be sampled, in expectation, once every million samples; however, when the weight is determined, w_j will be w_j = P(X_j = 1) / Q(X_j = 1) = 1 / 0.001 = 1000. This is not a problem in the limit; however, if it happens, for example, in the first 1000 samples, then it will outweigh all other samples so far. As soon as there is a network with many nodes whose conditional probabilities are much greater than their marginal proposals, this becomes almost inevitable. A further consequence of these high weights is that, since the entire sample is weighted by the same weight, every node probability will be affected by this high variance.
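The arithmetic of this example can be checked with a tiny helper; the function name and interface are illustrative only:

```python
def sample_weight(x_i, x_j, q_j=0.001):
    """Weight contribution of the deterministic child X_j when the proposal
    uses Q(X_j = 1) = q_j but the model gives P(X_j = 1 | X_i = 1) = 1 and
    P(X_j = 1 | X_i = 0) = 0, as in the example above."""
    p_j1 = 1.0 if x_i == 1 else 0.0          # P(X_j = 1 | X_i)
    p = p_j1 if x_j == 1 else 1.0 - p_j1     # model probability of the draw
    q = q_j if x_j == 1 else 1.0 - q_j       # proposal probability of the draw
    return p / q

# The rare event X_i = 1, X_j = 1 carries weight 1 / 0.001 = 1000, which
# swamps the ~1 weights of ordinary samples, exactly as described.
```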
Further embodiments are set out below:
A method of using the embeddings of the above discriminative model as a vectorised representation of the provided evidence for classification.
A method of using the embeddings of the above discriminative model as a vectorised representation of the provided evidence for clustering.
A method of using the embeddings of the above discriminative model as a vectorised representation of the provided evidence for interpretation of node relationships.
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed the novel methods and systems described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of methods and systems described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms of modifications as would fall within the scope and spirit of the inventions.

Claims

CLAIMS:
1. A method for providing a computer implemented medical diagnosis, the method comprising: receiving an input from a user comprising at least one symptom of the user; providing the at least one symptom as an input to a medical model comprising: a probabilistic graphical model comprising probability distributions and relationships between symptoms and diseases; an inference engine configured to perform Bayesian inference on said probabilistic graphical model; and a discriminative model pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about their symptoms; deriving estimates, from the discriminative model, of the probability of the user having a disease; inputting the estimates to the inference engine; performing approximate inference on the probabilistic graphical model to obtain a prediction of the probability that the user has that disease; and outputting the probability of the user having the disease for display by a display device.
2. A method according to claim 1, wherein the inference engine is adapted to perform importance sampling over conditional marginals.
3. A method according to any preceding claim, wherein the discriminative model is a neural network.
4. A method according to claim 3, wherein the neural network is a neural network that can approximate the outputs of the probabilistic graphical model.
5. A method according to claim 3, wherein the neural network is a neural network that can approximate the outputs of the probabilistic graphical model.
6. A method according to claim 3, wherein the neural network is a single neural network that can approximate the outputs of the probabilistic graphical model.
7. A method according to any preceding claim, wherein the probabilistic graphical model is a noisy-OR model.
8. A method according to any preceding claim, wherein determining the probability that the user has one or more diseases further comprises determining whether further information from the user would improve the diagnosis and requesting further information.
9. A method according to any preceding claim, wherein the medical model receives information concerning the symptoms of the user and risk factors of the user.
10. A method of training a discriminative model to approximate the output of a probabilistic graphical model, comprising:
receiving by the discriminative model samples from said probabilistic graphical model; and
training the discriminative model using said samples,
wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user failing to input at least one symptom.
11. A method according to claim 10, wherein the masking is based on a uniform distribution.
12. A method for providing a computer implemented medical diagnosis, the method comprising: receiving an input from a user comprising at least one symptom; providing the at least one symptom as an input to a medical model comprising: a probabilistic graphical model comprising probability distributions and relationships between symptoms and diseases; and an inference engine configured to perform Bayesian inference via statistical sampling on said probabilistic graphical model, the inference engine comprising a graphical processing unit and said statistical sampling being performed using parallel processing, performing Bayesian inference via statistical sampling on said probabilistic graphical model with said inference engine to obtain a prediction of the probability that the user has a disease; and outputting the probability of the user having the disease for display by a display device.
13. A method according to claim 12, wherein the medical model further comprises a discriminative model, wherein the discriminative model has been pre-trained to approximate the probabilistic graphical model, and wherein predicting the probability that the user has a disease comprises deriving estimates of the probability that the user has that disease from the discriminative model, inputting these estimates to the inference engine and performing approximate inference using parallel processing on the probabilistic graphical model to obtain a prediction of the probability that the user has that disease.
14. A method according to either of claims 12 or 13, wherein the statistical sampling is implemented via tensor operations.
15. A method according to any of claims 12 to 14, wherein the statistical sampling is importance sampling comprising an importance sampling proposal distribution.
16. A method according to claim 15, wherein the current estimate of the posterior probability that a user has a disease given the symptoms provided by the user, is mixed with the importance sampling proposal distribution.
17. A method according to any of claims 15 or 16, wherein probabilities are clipped.
18. A carrier medium comprising computer readable code configured to cause a computer to perform any of the preceding methods.
19. A system for providing a computer implemented medical diagnosis, the system comprising: a user interface for receiving an input from a user comprising at least one symptom of the user; a processor, said processor being configured to: provide the at least one symptom as an input to a medical model comprising: a probabilistic graphical model comprising probability distributions and relationships between symptoms and diseases; an inference engine configured to perform Bayesian inference on said probabilistic graphical model; and a discriminative model pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about their symptoms; derive estimates, from the discriminative model, of the probability of the user having a disease; input the estimates to the inference engine; and perform approximate inference on the probabilistic graphical model to obtain a prediction of the probability that the user has that disease, the system further comprising a display device adapted to display the probability of the user having the disease.
20. A system for training a discriminative model to approximate the output of a probabilistic graphical model, comprising a processor, said processor comprising a probabilistic graphical model and a discriminative model, wherein the processor is adapted to: receive by the discriminative model samples from said probabilistic graphical model; and train the discriminative model using said samples,
wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user failing to input at least one symptom.
21. A system for providing a computer implemented medical diagnosis, the method comprising: receiving an input from a user comprising at least one symptom; providing the at least one symptom as an input to a medical model comprising: a probabilistic graphical model comprising probability distributions and relationships between symptoms and diseases; and an inference engine configured to perform Bayesian inference via statistical sampling on said probabilistic graphical model, the inference engine comprising a graphical processing unit and said statistical sampling being performed using parallel processing, performing approximate inference on the probabilistic graphical model with said inference engine to obtain a prediction of the probability that the user has a disease; and outputting the probability of the user having the disease for display by a display device.
22. A method for providing a computer implemented medical diagnosis, the method comprising: receiving an input from a user comprising at least one symptom of the user; providing the at least one symptom as an input to a medical model comprising: a probabilistic graphical model comprising probability distributions and relationships between symptoms and diseases; and a discriminative model pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about their symptoms; deriving estimates, from the discriminative model, of the probability of the user having a disease; and outputting the probability of the user having the disease for display by a display device.
23. A method for providing a computer implemented determination process for determining a probable cause from a plurality of causes, the method comprising: receiving an input from a user comprising an observation; providing the at least one observation as an input to a determination model comprising: a probabilistic graphical model comprising probability distributions and relationships between observations and causes; an inference engine configured to perform Bayesian inference on said probabilistic graphical model; and a discriminative model pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about the observations; deriving estimates, from the discriminative model, of the probability of the most probable cause of the observations; inputting the estimates to the inference engine; performing approximate inference on the probabilistic graphical model to obtain a prediction of the most probable cause based on the observations; and outputting the probability of the most probable cause for the inputted observations for display by a display device.
24. A method for providing a computer implemented determination process for determining a probable cause from a plurality of causes, the method comprising: receiving an input from a user comprising an observation; providing the at least one observation as an input to a determination model comprising: a probabilistic graphical model comprising probability distributions and relationships between observations and causes; and a discriminative model pre-trained to approximate the probabilistic graphical model, the discriminative model being trained using samples from said probabilistic graphical model, wherein some of the data of the samples has been masked to allow the discriminative model to produce data which is robust to the user providing incomplete information about the observations; deriving estimates, from the discriminative model, of the probability of the most probable cause of the observations; and outputting the probability of the most probable cause for the inputted observations for display by a display device.
PCT/GB2018/053154 2017-10-31 2018-10-31 A computer implemented determination method and system WO2019086867A1 (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201880071038.4A CN111602150A (en) 2017-10-31 2018-10-31 Computer-implemented determination method and system
EP18815276.3A EP3704639A1 (en) 2017-10-31 2018-10-31 A computer implemented determination method and system
US16/325,681 US20210358624A1 (en) 2017-10-31 2018-10-31 A computer implemented determination method and system
US16/277,975 US11328215B2 (en) 2017-10-31 2019-02-15 Computer implemented determination method and system
US16/277,956 US20190251461A1 (en) 2017-10-31 2019-02-15 Computer implemented determination method and system
US16/277,970 US11348022B2 (en) 2017-10-31 2019-02-15 Computer implemented determination method and system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
GB1718003.5 2017-10-31
GB1718003.5A GB2567900A (en) 2017-10-31 2017-10-31 A computer implemented determination method and system
GB1815800.6 2018-09-27
GBGB1815800.6A GB201815800D0 (en) 2017-10-31 2018-09-27 A computer implemented determination method and system

Related Child Applications (4)

Application Number Title Priority Date Filing Date
US16/325,681 A-371-Of-International US20210358624A1 (en) 2017-10-31 2018-10-31 A computer implemented determination method and system
US16/277,975 Continuation US11328215B2 (en) 2017-10-31 2019-02-15 Computer implemented determination method and system
US16/277,970 Continuation US11348022B2 (en) 2017-10-31 2019-02-15 Computer implemented determination method and system
US16/277,956 Continuation US20190251461A1 (en) 2017-10-31 2019-02-15 Computer implemented determination method and system

Publications (1)

Publication Number Publication Date
WO2019086867A1 true WO2019086867A1 (en) 2019-05-09

Family

ID=67540190

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2018/053154 WO2019086867A1 (en) 2017-10-31 2018-10-31 A computer implemented determination method and system

Country Status (1)

Country Link
WO (1) WO2019086867A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110232678A (en) * 2019-05-27 2019-09-13 腾讯科技(深圳)有限公司 Image uncertainty prediction method, device, equipment and storage medium
CN110276442A (en) * 2019-05-24 2019-09-24 西安电子科技大学 Searching method and device of neural network architecture
CN110570013A (en) * 2019-08-06 2019-12-13 山东省科学院海洋仪器仪表研究所 Single-station online wave period data prediction diagnosis method
CN111326251A (en) * 2020-02-13 2020-06-23 北京百度网讯科技有限公司 Method and device for outputting inquiry questions and electronic equipment
CN111754118A (en) * 2020-06-24 2020-10-09 重庆电子工程职业学院 Intelligent menu optimization system based on self-adaptive learning
CN113707331A (en) * 2021-07-30 2021-11-26 电子科技大学 Traditional Chinese medicine syndrome differentiation data generation method and system
CN115004649A (en) * 2020-01-21 2022-09-02 株式会社Ntt都科摩 Communication system based on neural network model and configuration method thereof

Non-Patent Citations (10)

* Cited by examiner, † Cited by third party
Title
"Uncertainty Proceedings 1994", 1 January 1994, ELSEVIER, ISBN: 978-1-55860-332-5, article MALCOLM PRADHAN ET AL: "Knowledge Engineering for Large Belief Networks", pages: 484 - 490, XP055548802, DOI: 10.1016/B978-1-55860-332-5.50066-3 *
BAIFANG ZHANG ET AL: "Protein secondary structure prediction using machine learning", THE 2005 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), vol. 1, 1 January 2005 (2005-01-01), pages 532, XP055549370, ISSN: 2161-4393, DOI: 10.1109/IJCNN.2005.1555887 *
DACHENG TAO ET AL: "Bayesian tensor analysis", THE 2008 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 1 June 2008 (2008-06-01), pages 1402 - 1409, XP055549421, ISSN: 2161-4393, DOI: 10.1109/IJCNN.2008.4633981 *
DANIEL JIWOONG IM ET AL: "Denoising Criterion for Variational Auto-Encoding Framework", 19 November 2015 (2015-11-19), XP055551067, Retrieved from the Internet <URL:https://arxiv.org/pdf/1511.06406.pdf> *
HOOGERHEIDE ET AL: "On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: An application of flexible sampling methods using neural networks", JOURNAL OF ECONOMETRICS, ELSEVIER SCIENCE, AMSTERDAM, NL, vol. 139, no. 1, 16 May 2007 (2007-05-16), pages 154 - 180, XP022081163, ISSN: 0304-4076, DOI: 10.1016/J.JECONOM.2006.06.009 *
KOLLER; FRIEDMAN: "Probabilistic Graphical Models: Principles and Techniques", 2009, THE MIT PRESS
LAURA DOUGLAS ET AL: "A Universal Marginalizer for Amortized Inference in Generative Models", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 2 November 2017 (2017-11-02), XP080833793 *
PASCAL VINCENT ET AL: "Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion", JOURNAL OF MACHINE LEARNING RESEARCH, MIT PRESS, CAMBRIDGE, MA, US, vol. 11, 1 December 2010 (2010-12-01), pages 3371 - 3408, XP058336476, ISSN: 1532-4435 *
QUAID MORRIS: "Recognition Networks for Approximate Inference in BN20 Networks", PROCEEDINGS OF THE SEVENTEENTH CONFERENCE ON UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2 August 2001 (2001-08-02), San Francisco, CA, USA, pages 370 - 377, XP055547941, ISBN: 978-1-55860-800-9, Retrieved from the Internet <URL:http://delivery.acm.org/10.1145/2080000/2074068/p370-morris.pdf?ip=145.64.134.241&id=2074068&acc=ACTIVE%20SERVICE&key=E80E9EB78FFDF9DF.4D4702B0C3E38B35.4D4702B0C3E38B35.4D4702B0C3E38B35&__acm__=1548668121_7ce685c0431b8f22049de734610dacfa> [retrieved on 20190128] *
ROBERT WALECKI ET AL: "Universal Marginalizer for Amortised Inference and Embedding of Generative Models", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 12 November 2018 (2018-11-12), XP080943409 *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110276442A (en) * 2019-05-24 2019-09-24 西安电子科技大学 Searching method and device of neural network architecture
CN110276442B (en) * 2019-05-24 2022-05-17 西安电子科技大学 Searching method and device of neural network architecture
CN110232678A (en) * 2019-05-27 2019-09-13 腾讯科技(深圳)有限公司 Image uncertainty prediction method, device, equipment and storage medium
CN110570013A (en) * 2019-08-06 2019-12-13 山东省科学院海洋仪器仪表研究所 Single-station online wave period data prediction diagnosis method
CN115004649A (en) * 2020-01-21 2022-09-02 株式会社Ntt都科摩 Communication system based on neural network model and configuration method thereof
CN111326251A (en) * 2020-02-13 2020-06-23 北京百度网讯科技有限公司 Method and device for outputting inquiry questions and electronic equipment
CN111326251B (en) * 2020-02-13 2023-08-29 北京百度网讯科技有限公司 Question output method and device and electronic equipment
CN111754118A (en) * 2020-06-24 2020-10-09 重庆电子工程职业学院 Intelligent menu optimization system based on self-adaptive learning
CN111754118B (en) * 2020-06-24 2023-08-04 重庆电子工程职业学院 Intelligent menu optimization system based on self-adaptive learning
CN113707331A (en) * 2021-07-30 2021-11-26 电子科技大学 Traditional Chinese medicine syndrome differentiation data generation method and system
CN113707331B (en) * 2021-07-30 2023-04-07 电子科技大学 Traditional Chinese medicine syndrome differentiation data generation method and system

Similar Documents

Publication Publication Date Title
US11328215B2 (en) Computer implemented determination method and system
WO2019086867A1 (en) A computer implemented determination method and system
Huang et al. Augmented normalizing flows: Bridging the gap between generative flows and latent variable models
Ritchie et al. Deep amortized inference for probabilistic programs
Lall et al. The MIDAS touch: accurate and scalable missing-data imputation with deep learning
Urban et al. Deep learning: A primer for psychologists.
Wang et al. Natural-parameter networks: A class of probabilistic neural networks
Binder et al. Adaptive probabilistic networks with hidden variables
Shen et al. Disentangled generative causal representation learning
Díaz Muñoz et al. Super learner based conditional density estimation with application to marginal structural models
Douglas et al. A universal marginalizer for amortized inference in generative models
Miladinović et al. Disentangled state space representations
El-Laham et al. Policy gradient importance sampling for Bayesian inference
Rahman et al. Towards Modular Learning of Deep Causal Generative Models
Scannell et al. Function-space Parameterization of Neural Networks for Sequential Learning
Glass et al. Structured regression on multiscale networks
Nareklishvili et al. Deep Ensemble Transformers for Dimensionality Reduction
Sacher et al. Hamiltonian Monte Carlo for regression with high-dimensional categorical data
Belbahri et al. A twin neural model for uplift
Wang et al. On the global convergence of the high-order power method for rank-one tensor approximation
Lall et al. Applying the midas touch: how to handle missing values in large and complex data
Wu et al. Counterfactual Generative Modeling with Variational Causal Inference
Lu Advances in Sequential Decision Making Problems with Causal and Low-Rank Structures
Lyle Generalization through the lens of learning dynamics
Marco et al. Imputation of Missing Data Using Masked Denoising Autoencoder with L2-Norm Regularization in Software Effort Estimation.

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18815276

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018815276

Country of ref document: EP

Effective date: 20200602