GB2513105A - Signal processing systems - Google Patents

Publication number
GB2513105A
GB2513105A
Authority
GB
United Kingdom
Prior art keywords
vector
signal processor
category
data
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB1304795.6A
Other versions
GB201304795D0 (en)
Inventor
Julien Robert Michel Cornebise
Danilo Jimenez Rezende
Daniel Pieter Wierstra
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
DeepMind Technologies Ltd
Original Assignee
DeepMind Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by DeepMind Technologies Ltd filed Critical DeepMind Technologies Ltd
Priority to GB1304795.6A (GB2513105A)
Publication of GB201304795D0
Priority to US13/925,637 (US9342781B2)
Priority to PCT/GB2014/050695 (WO2014140541A2)
Priority to CN201480016209.5A (CN105144203B)
Priority to EP14715977.6A (EP2973241B1)
Publication of GB2513105A
Legal status: Withdrawn

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

A conditional generative model for generating sample data or outputs similar to those that have been observed, by conditionally using a stochastic selection categorisation probability, i.e. choosing according to the probability that the generated output falls into a specific category, then using a deterministic neural network to output the nearest previously seen example to the input. The neural network may be a multilayer perceptron and may form a modified Helmholtz machine. This use of a neural network artificial intelligence classifier is termed signal processing using compressed mixture and chained compressed mixture approaches. The neural networks may be chained together so that they operate sequentially. Potential applications include: simulating imagination, for example outputting example imagined data based on an input category; completion of partial signals, e.g. images in character recognition; and classifying objects among categories.

Description

Signal Processing Systems
FIELD OF THE INVENTION
This invention generally relates to electronic hardware, software, and related methods for signal processing, in particular signal processing systems which generate data dependent on, and representative of, previously learnt example data.
BACKGROUND TO THE INVENTION
We will describe, in the main, signal processors which employ neural networks and other techniques to generate output data examples which match those previously learnt. For example the signal processor may be trained with many different examples of handwritten digits from zero to nine and may then be employed to randomly generate a new example from one of the learnt categories. Thus an output may be generated from a set of learnt distributions (of the training examples) and, in general, the categorisation of the training examples may also be learnt. We will also describe techniques which use an external input to select the category of output example generated, not by precisely specifying the category but instead by providing data which defines a 'context' for the training examples. The signal processor is trained using examples and their context, and afterwards context data can be used to bias the generation of output examples.
Signal processors of this general type have a range of applications. For example they can be used for prediction, with or without context, and thus have applications in many types of image and audio signal processing, as well as in control applications, for example predicting the position of a robot arm, as well as in other applications, for example evolutionary search techniques for, say, drug discovery. Embodiments of the signal processor/system may process data including, but not limited to: audio, video, image, game, sensor, actuator, control (including motor control), biological, physical, chemical, spatial, text, search, and other data.
It is known to use a Boltzmann machine to provide a so-called generative model as described, for example, in Salakhutdinov and Hinton, "Deep Boltzmann Machines", in Proceedings of the International Conference on Artificial Intelligence and Statistics, volume 5, pages 448-455, 2009 (http://www.cs.utoronto.ca/~rsalakhu/papers/dbm.pdf).
However deep Boltzmann machines require a great deal of processing power to implement.
A Helmholtz machine can also be employed to provide a generative model but, whilst such machines have some interesting features, in practice they learn very slowly and the output examples they generate are poor.
We will describe improved signal processors and related architectures which address both these problems.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is therefore provided a signal processor, the signal processor comprising: a probability vector generation system, wherein said probability vector generation system has an input to receive a category vector for a category of output example and an output to provide a probability vector for said category of output example, wherein said output example comprises a set of data points, and wherein said probability vector defines a probability of each of said set of data points for said category of output example; a memory storing a plurality of said category vectors, one for each of a plurality of said categories of output example; and a stochastic selector to select a said stored category of output example for presentation of the corresponding category vector to said probability vector generation system; wherein said signal processor is configured to output data for an output example corresponding to said selected stored category.
In embodiments the relationship between the category vector and the probability vector, and also the stored category vectors themselves, have been learnt by training the signal processor using a set of examples, as described further below. The training system may comprise part of the signal processor or the variable parameters in the signal processor may be trained when an instance of the signal processor is created, and afterwards the signal processor may operate independently of a training module/system. In some preferred implementations the probability vector generation system operates to translate between a probability vector defining the probability of a set of output data points (output vector) and a compressed representation of the probability vector as a category vector. Thus in embodiments a category vector is a compressed representation of a probability vector.
The data compression may be implemented using a neural network, in particular a deterministic (rather than stochastic) neural network. Employing a shared mapping system of this type reduces the number of parameters to be learnt by the signal processor since, in effect, the weights of the neural network are common to all the compressed representations (categories). Furthermore employing a deterministic neural network counterintuitively facilitates learning by facilitating a deterministic, closed-form calculation of the weights during training of the signal processor.
The skilled person will appreciate that, in this context, the reference to deterministic is to the weight calculation and does not preclude, for example, use of the 'dropout' technique to reduce the risk of complex co-adaptation where there are potentially many degrees of freedom (Hinton et al, 'Improving Neural Networks by Preventing Co-adaptation of Feature Detectors', arXiv:1207.0580v1, 3 July 2012). More generally in this specification, where reference is made to a 'deterministic' neural network this should be taken to include, for example, a neural network which employs dropout or similar techniques to reduce overfitting during training.
As previously mentioned, preferably the probability vector generation system comprises a deterministic neural network, for example a non-linear multilayer perceptron. Here, by non-linear, it is meant that one or more layers of neurons in the network have a non-linear transfer function so that the network is not constrained to fit just linear data. The skilled person will recognise that, in principle, the mapping need not be performed by a neural network but may be performed by any deterministic function, for example a large polynomial, splines or the like, but in practice such techniques are undesirable because of the exponential growth in the number of parameters needed as the length of the input/output vectors increases.
Some preferred implementations of the signal processor include a context vector input to receive a context vector which defines a relative likelihood of each of the plurality of categories (for training examples and/or output examples). This may then provide an input to the stochastic selector so that the selection of a category of output example is dependent on the context (vector). Then the context vector, or data derived from the context vector, may be provided as a further input to the probability vector generation system, in embodiments as an additional vector input to the deterministic neural network. Thus an input layer of this network may have a first set of nodes to receive a category vector output from the memory storing these vectors, and a second set of nodes to receive the context vector.
In some preferred embodiments a length of the context vector may be different to the number of categories and a mapping unit is included to translate from one to the other.
This mapping unit preferably comprises a second neural network, preferably a deterministic neural network, preferably non-linear (including a non-linear function applied to the signals from at least one layer of nodes). In embodiments this mapping unit comprises a second multilayer perceptron. The stochastic selector may then select a category according to a set of probabilities defined by a modified context vector of length K (the number of categories). In such a system, if there is no external context then the context vector, or the modified context vector of length K output from the mapping unit, may be defined to be constant (that is, setting the categories to be equally likely).
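The context-mapping step just described can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation: the single linear layer standing in for the mapping network, and all names and sizes, are our own assumptions; a softmax turns the K scores into category probabilities for the stochastic selector, and a constant (uniform) vector covers the no-external-context case.

```python
import math
import random

random.seed(1)
K, CTX_DIM = 5, 3   # number of categories, context vector length (arbitrary)

# Stand-in for the deterministic mapping network A: one linear layer
# from the context vector to K scores (a real system would use an MLP
# with at least one hidden layer, as noted in the text).
A = [[random.gauss(0, 1) for _ in range(CTX_DIM)] for _ in range(K)]

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def category_probabilities(context):
    scores = [sum(w * c for w, c in zip(row, context)) for row in A]
    return softmax(scores)

def select_category(context):
    # Stochastic selector: draw one of the K categories according to
    # the probabilities defined by the (modified) context vector.
    probs = category_probabilities(context)
    r, acc = random.random(), 0.0
    for k, p in enumerate(probs):
        acc += p
        if r < acc:
            return k
    return K - 1

# With no external context the probabilities may simply be held constant.
uniform = [1.0 / K] * K
```

The selector itself is just inverse-CDF sampling over the K category probabilities; any categorical sampler would serve equally well.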
Where the context vector is not constant the context vector mapping neural network should have at least one hidden layer; similarly, in embodiments the neural network in the probability vector generation system also preferably has at least one hidden layer although, depending upon the complexity of the data, two or more hidden layers may be preferable for this neural network. Providing a context vector input for the signal processor enables output examples from a learnt context to be provided. Although, typically, an output example may comprise a large number of data points (for example it may be an image), and the context vector will often be much shorter (for example of order 1-100 values), this is not essential. Thus in other implementations the context vector may be large, for example an image, and the output example small, for example defining a classification or category of the image. In this case the probability vector generation system may not be needed to provide data compression between the probability vector and category vector, in which case the probability vector generation system may, in effect, provide an identity operation (straight-through connection). Data compression may then effectively be provided by the context vector mapping unit (A).
One particularly advantageous extension to the above described signal processor is to connect a sequence of the signal processors in a chain such that each successive signal processor receives a context vector from at least a previous signal processor in the chain, in embodiments from all the previous signal processors in the chain. More particularly, the context vector input to a signal processor in the chain may comprise data identifying the selection of the stochastic selector in the previous signal processor of the chain. In some sense this corresponds to a 'belief' the previous signal processor has regarding the output example to generate, because what is provided is an example selected based on the likelihoods (distributions) it has learnt from the training examples. The selection of the stochastic selector may be provided to the next signal processor from various stages following the selection. Thus the information may be provided as a probability vector or as a category vector or, potentially, as a stochastic selection (sample) with data values chosen according to probabilities defined by the probability vector. It is preferable, however, to use the 'compressed' category vector level data as this reduces the number of parameters which the subsequent signal processor must learn and, in effect, leverages the compression mapping (MLP - multilayer perceptron - weights) learnt by the previous signal processor.
Thus it will also be appreciated that the output data from a signal processor for an output example may either comprise a category vector, or a probability vector (defining likelihood values for data points of the output example) which, if desired, may be employed for generating an output example. Additionally or alternatively the output may comprise an output example per se, with data point values selected stochastically according to corresponding probabilities defined by the probability vector.
Similarly the output from the chain of signal processors may either comprise a probability vector from the end processor of the chain and/or an output stochastic selector may be provided to generate an output example according to probabilities defined by this probability vector.
The skilled person will recognise that in a chain of signal processors the first signal processor in the chain may or may not have a context vector input, depending upon whether it is desired to make the signal processor chain dependent on an external context vector input.
The number of categories available in a signal processor is a design choice. In part this choice may be made dependent on a priori knowledge of the data - how many categories, very roughly, might be expected to be present. For example with learnt handwritten digits 10 different categories would be expected, for digits 0-9. In general, however, it is advantageous to provide for a very large number of categories and, in effect, allow the training of the signal processor to determine how many categories are needed. In theory there is a risk of overfitting with such an approach (in effect the signal processor may simply memorise the training examples). In practice, however, this is not necessarily a problem and, were it to arise, it could be addressed by, for example, dropout or imposing a sparse representation (on one or both neural networks), or in other ways, for example by detecting overfitting and adjusting (reducing) the number of free parameters. Thus it is generally desirable to make provision for a large number of categories.
In one approach a large number of categories may be implemented on a single signal processor, but with more than a few thousand categories this becomes computationally expensive. Counterintuitively, it is much more computationally efficient to implement a relatively small number of categories on each processor of a chain of processors: with this approach the effective number of categories grows exponentially with the number of processors in the chain (the number of levels), whilst the computational cost of sampling from the structure grows linearly with the number of processors (levels), and the computational cost of training the chain grows sub-linearly with the number of levels. For example with, say, 20 categories and four levels there are effectively 20^4 = 160,000 categories. There is not complete equivalence with this same number of categories implemented on a single processor, but there is very little decrease in flexibility for a huge saving in computational cost. By way of illustration consider an example with two categories on each processor: the first processor splits the data domain into two (in general divided by some complex surface), the second processor then splits each of these categories within the data domain into two, the third processor splits each of the domains created by the first and second processors into two, and so forth. In effect the context vector received by a processor labels which of the available regions generated by previous processors the current processor is to split; the category vector inherited from the previous processor provides this information in compressed form (it represents, for example, a compressed form of the image it has chosen). One processor receives the category vector which, for say an image, defines a compressed image which the previous processor believes should be the output example, and this is combined with a belief of the present processor regarding the output example image, the present processor adding detail.
This process continues down the chain with sequential refinement of the output example.
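The scaling argument above can be checked with a little arithmetic. The figures below simply reproduce the examples from the text (20 categories over four levels, and repeated two-way splits); the function names are our own.

```python
# With c categories per processor and n processors (levels) in the chain,
# the effective number of categories is c**n, while the cost of drawing a
# sample grows only linearly: one selection and one network pass per level.
def effective_categories(categories_per_level, levels):
    return categories_per_level ** levels

def sampling_passes(levels):
    return levels   # one pass through each processor in the chain

assert effective_categories(20, 4) == 160_000  # the 20-category, 4-level example
assert effective_categories(2, 3) == 8         # three successive 2-way splits
assert sampling_passes(4) == 4                 # cost is linear, not exponential
```

The contrast is the point of the design: a single processor offering 160,000 categories directly would pay the full cost of all 160,000, whereas the chain pays for only 20 x 4 of them per sample.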
In a related aspect, therefore, the invention provides a signal processing system for generating output examples from categories of a plurality of categories, wherein a distribution of training examples across said plurality of categories has been learnt by said signal processing system, the signal processing system comprising: a chain of signal processors, wherein each signal processor of the chain has learnt a distribution of said training examples across a limited number of categories less than said plurality of categories; wherein at least each said signal processor after a first said signal processor in the chain has a context input and is configured to generate an output example from said learnt distribution conditional on said context input; wherein each successive signal processor in said chain receives the output example from the preceding processor in the chain as said context input; wherein a first said signal processor in said chain is configured to stochastically select a said output example according to its learnt distribution; and wherein a last said signal processor in said chain is configured to provide one or both of an output example and a probability distribution for stochastically selecting a said output example.
In a further related aspect there is provided a method of signal processing to generate data for an output example from a plurality of learnt categories of training examples, the method comprising: storing a plurality of category vectors each defining a learnt category of training example; stochastically selecting a stored said category vector; generating a probability vector, dependent upon said selected category vector; and outputting data for said output example, wherein said output example comprises a set of data points each having a probability defined by a respective component of said probability vector.
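The four method steps above (store category vectors, stochastically select one, generate a probability vector from it, output data points drawn with those probabilities) can be sketched as follows. This is a minimal illustrative sketch under our own assumptions, not the patented implementation: the probability vector generation system is reduced to a single sigmoid layer, the stored vectors are random stand-ins for learnt ones, and all names and sizes are invented for the example.

```python
import math
import random

random.seed(0)

K, CAT_DIM, DATA_DIM = 4, 2, 6   # categories, category-vector length, data points

# Memory: one stored category vector per category (learnt in a real system;
# random stand-ins here).
category_vectors = [[random.gauss(0, 1) for _ in range(CAT_DIM)] for _ in range(K)]

# Stand-in for the deterministic network mapping a (compressed) category
# vector to per-data-point probabilities.
weights = [[random.gauss(0, 1) for _ in range(CAT_DIM)] for _ in range(DATA_DIM)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def probability_vector(cat_vec):
    # Each component is the probability of the corresponding data point.
    return [sigmoid(sum(w * c for w, c in zip(row, cat_vec))) for row in weights]

def sample_output_example():
    k = random.randrange(K)                # stochastic selector (uniform here)
    probs = probability_vector(category_vectors[k])
    # Draw each binary data point independently with its own probability.
    return [1 if random.random() < p else 0 for p in probs]

print(sample_output_example())   # a binary output example of DATA_DIM points
```

Note the division of labour the text describes: the only learnt state is the small table of category vectors plus the shared network weights, and the category vector acts as a compressed code that the deterministic map expands into a full probability vector.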
As previously described, in some preferred embodiments of the method selection of a stored category vector is dependent upon category likelihoods defined by a context vector, in particular one provided by a preceding signal processor in a chain of signal processors.
In embodiments the stored category vectors and probability vectors, more particularly the probability vector generation system, comprise, that is are defined by, a learnt representation of real-world data. More generally the (output example) data may comprise one or more of: image data, sound data, signal data, sensor data, actuator data, spatial data, text data, game data and the like; embodiments of the signal processor may be employed to generate, predict, classify or otherwise process such data.
A signal processor/method as described above may be implemented in hardware, for example as electronic circuitry, or in software, for example as code running on a digital signal processor (DSP) or on a general purpose computer system, or in a combination of the two. As the skilled person will appreciate, the signal processing we describe may be distributed between a plurality of coupled components in communication with one another. Processor control code and/or data (for example learnt parameter data) to implement embodiments of the invention may be provided on a physical (non-transitory) data carrier such as a disc, programmed memory, for example non-volatile memory such as Flash, or in firmware. Code and/or data to implement embodiments of the invention may comprise source, object or executable code in a conventional programming language (interpreted or compiled) such as C, or code for a hardware description language such as Verilog.
The invention also provides a method of training a signal processor, signal processing system, or method, in particular as previously described, the method comprising: presenting training examples to the signal processor, system or method, wherein a said training example comprises a set of data points corresponding to data points of a said output example; computing from a said training example a set of responsibility values, one for each said category, wherein a said responsibility value comprises a probability of the training example belonging to the category, each category having a respective stored category vector; computing a gradient vector for a set of parameters of the signal processor, system or method from said set of responsibility values, wherein said set of parameters includes said stored category vectors and defines a shared mapping between said stored category vectors and a corresponding set of said probability vectors defining probability values for a said set of data points of a said output example; and updating said set of parameters using said computed gradient vector.
Embodiments of this training procedure are efficient in part because the category vectors represent a compressed version of the data, say image, space represented by the probability vectors. Because of this and, in embodiments, because the neural network between the category and probability vectors provides a shared parameterisation for the training examples, learning is relatively quick and computationally efficient. In effect the category vectors provide a reduced dimensionality codebook for the examples, say images.
Broadly speaking a responsibility value defines the likelihood of a category given an example set of data points; in embodiments this is computed from the probability of the set of data points given a category (preferably normalised by summing over all the available categories). In preferred embodiments the responsibility value is also conditional on the context vector, so that parameters are learnt based on a combination of training examples and their context vector data. In embodiments the learnt set of parameters comprises the category vectors stored in memory (one per category) and the weights of the two neural networks, for the context and category vectors respectively (MLPs A and B later). The skilled person will appreciate that the aforementioned probability of an example set of data points given a category and context vector is a probability of the example given a category vector, context vector, and weights of the neural network connecting these to a probability vector, that is B(m_c). The skilled person will further appreciate that the calculation of this probability will depend upon the implementation of the neural network and also on the type of data. For example, for binary data a binomial distribution applies and, if b_i is the probability of bit i, then:

p(x) = prod_i b_i^{x_i} (1 - b_i)^{1 - x_i}, where x_i is in {0, 1}

Ideally the gradient vector would be computed over the entire set of training examples, but in practice one example or a 'minibatch' of a few examples is sufficient to provide a noisy but usable approximation to what the gradient would be if integrated over the full set of training examples. When updating the parameters the gradient vector is multiplied by a step size (eta). In theory different step sizes may be employed with different parameters, and eta may be a diagonal or full matrix, but in practice this does not appear necessary.
Since there may be many thousands of parameters (the parameters include the weights of the neural networks) it is convenient to choose the step size as a constant small number, say 0.001 (although, again, in theory the step size could be reduced towards 0 as training progresses, for example as a function of iteration number). In practice it is useful to choose a step size as large as practicable without the training procedure failing. Broadly speaking, averaging the gradient vector over a minibatch corresponds to a change in step size.
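The responsibility computation for binary data can be sketched as follows. The Bernoulli (binomial per-bit) likelihood matches the formula above; the uniform prior, the toy bit-probability table, and all names are illustrative assumptions, and the log-domain normalisation is a standard numerical-stability device rather than anything prescribed by the text.

```python
import math

# Responsibility of category k for example x:
#   r_k = p(x | k) p(k) / sum_j p(x | j) p(j)
# with p(x | k) = prod_i b_ki^{x_i} (1 - b_ki)^{1 - x_i} for binary data.

def log_likelihood(x, b):
    # Log of the per-bit Bernoulli likelihood for one category.
    return sum(math.log(p if xi else 1.0 - p) for xi, p in zip(x, b))

def responsibilities(x, bit_probs, prior=None):
    K = len(bit_probs)
    prior = prior or [1.0 / K] * K               # uniform prior by default
    logs = [log_likelihood(x, b) + math.log(p)
            for b, p in zip(bit_probs, prior)]
    m = max(logs)                                # normalise in the log domain
    unnorm = [math.exp(l - m) for l in logs]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two toy categories over three bits; category 0 favours the pattern [1, 1, 0].
bit_probs = [[0.9, 0.9, 0.1], [0.1, 0.1, 0.9]]
r = responsibilities([1, 1, 0], bit_probs)
print(r)   # category 0 takes almost all the responsibility
```

In a full training step these responsibilities would weight the gradient contributions of each category's parameters, with the noisy minibatch gradient then scaled by the step size as described above.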
Merely to give a feel for the numbers which may be involved, the output neural network (B) may have of order 10 input-side nodes (category vector and context) and of order 1000 nodes in each of two hidden layers and an output 'visible' layer. The input-side neural network (A) may have of order 100-1000 input layer (context vector) nodes, of order 1000 hidden layer nodes, and a number of output nodes equal to the number of categories, depending on the implementation say 10-10000. In some implementations the context vector may have length one, that is it may comprise a single, scalar value.
As previously mentioned, a category vector may be relatively compressed, for example having a length of order 1-100.
The above described training procedure can straightforwardly be extended to a chain of processors since, in effect, each processor may be trained independent of the others except that it inherits a sample stored category vector from one (or all) previous signal processors in the chain. This sample is made stochastically, with the probability of selecting a category vector dependent on a corresponding responsibility value. Thus in this manner responsibilities are inherited from one processor in the chain to another although, in embodiments, a computed gradient vector is not inherited or shared between signal processors of the chain. In a modification of the procedure a gradient vector may be computed for a context vector of a processor of the chain and this may then be shared, more particularly accumulated, from one processor in the chain to a subsequent processor in the chain.
The previously described signal processors/systems may be considered as an architecture in which a stochastic node or nodes (the stochastic selection of 1 of K categories) is followed by a deterministic neural network (B), which is then followed by a stochastic output stage (stochastic selection of a set of data points according to a probability vector). This concept may be extended and generalised to provide a neural network architecture in which a (large) deterministic neural network is sandwiched or interleaved between layers of stochastic nodes. Such an approach can address previous difficulties with slow/poor training of deep stochastic neural networks.
Thus in a further aspect the invention provides a neural network architecture, the architecture comprising: a first, input layer of stochastic nodes; a second, output layer of stochastic nodes; and a deterministic neural network connected between said input and output layer nodes.
Embodiments of this structure may be employed to propagate signals (features) both up and down through the layers of the deep neural network. Thus the structure is able to implement a (modified) Helmholtz machine which addresses the defects in conventional Helmholtz machines - defects which have stalled research in this field for a decade or more - providing both extremely fast, and also accurate, sampling.
Broadly speaking the deterministic neural network (which may optionally be sparse and/or employ dropout) learns an efficient representation of features from amongst training examples from which the stochastic neural network nodes can then select. For example the deterministic neural network may learn to distinguish between a man and a woman and thus, implicitly, the stochastic nodes are forbidden from selecting both a man and woman simultaneously, which is desirable real-world behaviour. By contrast without the deterministic intermediate structure a complicated set of interrelationships between features of say male and female faces would need to be learnt.
In embodiments the deterministic neural network includes one, two or more hidden layers, and in preferred implementations is non-linear as previously described.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will now be further described, by way of example only, with reference to the accompanying figures in which:
Figures 1a to 1c show, respectively, a flow diagram of a signal processing method/system according to an embodiment of the invention, a neural network architecture according to an embodiment of the invention, and an example deterministic neural network for the method/system/architecture of Figures 1a and 1b;
Figure 2 shows a signal processing system comprising a chain of signal processors, according to an embodiment of the invention;
Figures 3a and 3b show, respectively, a selection of examples from a set of training data, and a plot of values of K=100 category vectors or embeddings, each having a dimension dim = 2 and comprising two real, continuous values, illustrating a compressed representation of the dataset from which the examples of Figure 3a are drawn, the 2D coordinates of each point representing an image;
Figure 4 shows output examples generated by successive signal processors in a chain of signal processors according to an embodiment of the invention;
Figure 5 shows a block diagram illustrating a signal processor structure/architecture according to an embodiment of the invention;
Figure 6 shows an example of a computer system programmed to implement the signal processors/processing chain of Figures 1 and 2; and
Figure 7 shows output examples generated by a chain of signal processors according to an embodiment of the invention, illustrating image completion.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Broadly speaking we will describe signal processing techniques which we term Compressed Mixtures and, for their powerful extension, Chained Compressed Mixtures. We will describe the structures and procedures implementing these techniques, and the algorithms by which the signal processors can be trained on real observed samples (learning) so that they can generate new, similar, samples (sampling). One advantage of Compressed Mixtures is that they can "imagine" very fast yet very accurately. Further, in embodiments all the required computations are available in closed form, allowing efficient learning.
As used herein, a generative model (GM) is computational machinery that learns to imagine. Its main purpose is to generate samples similar to those that it has observed.
More precisely, a GM is trained by observing a sequence of samples from an unknown real-world probability distribution, and generates new samples from an approximation to this probability distribution. For example, the GM may be shown images from the NORB (New York University Object Recognition Benchmark) dataset which contains around 200,000 images of 50 different objects, and may then learn to generate new example images which look like the objects. As used herein, a Conditional Generative Model (CGM) additionally learns to generate new samples conditionally on a given context, that is, some data which accompanies the observation. Each observation can have its own context. Different contexts correspond to different distributions of the observations, and a CGM learns this and, given any specific future context, will generate new samples corresponding to the distribution associated with this context.
For example, the context may specify the conditions under which an image was captured.
In general learning and sampling can be intertwined: sampling from the model can be done at any time; and it is possible to refine the model by learning from additional observations without needing to restart from scratch. Conversely, it is also possible to keep sampling without seeing any new observations: New observations are not necessary to generate new samples, thus allowing an extensively trained GM to be used in practical settings where its generative abilities are needed. Thus, for example, a signal processor implementing a generative model may be trained during some initial "calibration" stage, storing the learnt parameters in non-volatile memory, and may then be used as a self-contained module without its training system.
Because the Compressed Mixtures we describe can operate efficiently on very high-dimensional observation spaces, they are suited to many application domains.
The samples, observed or generated, can for example be large static objects such as images, or entire time series of lower-dimensional objects where the high dimensionality stems from the number of time steps actually represented in a single observed series. The example training and output data values can be categorical (selected from a discrete number of categories), binary, discrete (e.g. 0-255), or even continuous. Some example applications are described later.
In broad terms we will begin by describing our "CMix" architecture and components, then how to sample from, and train this architecture. We then discuss some of its limitations and how these can be addressed by chaining multiple CMix processors, and describe how the chain of processors is trained.
Compressed Mixture processor (CMix)

A Compressed Mixture (CMix) signal processor defines a distribution, or a conditional distribution where context data is employed, over the sample space.
1. Notation

We consider an arbitrary d_v-dimensional sample space Ω^{d_v}. The space Ω is problem-specific, for example Ω = {0, 1} for black and white images, Ω = {0, ..., 255} for grey-scale images, or more generally Ω = R (i.e. Ω is the set of real numbers).
We denote by x ~ p the sampling of a realization of a random variable x from a probability distribution p(·). With a slight abuse of notation we do not distinguish between a random variable and its realization, nor between a distribution and its density.
Composition of functions is noted f ∘ g(x) := f(g(x)).
Vectors and matrices are represented in boldface, e.g. x. The i-th component of vector x is denoted by the subscript x_i, while the i-th row of a matrix m is noted m_i. The vector of the components 1, 2, ..., i-1 of x is noted x_{<i}. Superscript indexing x^i serves to denote a sequence of vectors.
2. Architecture

Referring to Figure 1a, this shows the architecture of a Compressed Mixture processor according to an embodiment of the invention. Square boxes represent deterministic transformations, while ellipsoids represent sampling a random variable from a given distribution. The dashed boxes are expanded versions of the corresponding square box, illustrating an MLP (multilayer perceptron) in each.
In some preferred embodiments at the top of a CMix is a vector c ∈ R^{d_c} (that is, a vector of dimension (length) d_c), which we refer to as the context vector, input vector or conditioning vector. This represents the exogenous information on which we want to condition the sample. It can be ignored (for example taken as constant) for a non-conditional generative model.
The density of the generative distribution for any data-point x in the visible space Ω^{d_v}, conditional on a context vector c, is:

p(x|c) = Σ_{k=1...K} p(x|k,c) p(k|c),   (1)

where p(x|k,c) is a probability distribution on the visible space Ω^{d_v}, and p(k|c) is a categorical distribution over the index of classes {1, ..., K}. Namely, the probability of occurrence of any class index k between 1 and K is defined as:

p(k|c) = Cat(k | σ ∘ A(c)),   (2)
= σ_k ∘ A(c),   (3)

where σ_k(x) = exp(x_k) / Σ_i exp(x_i) is the k-th component of the classical softmax function and A is a multilayer perceptron (MLP) with the appropriate input and output dimensions for d_c and K. There is no specific constraint on its number of hidden layers and hidden units.
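By way of illustration, equations (2)-(3) can be sketched numerically; here a random matrix stands in for the trained MLP A, and all dimensions are illustrative assumptions rather than values from the embodiment:

```python
import numpy as np

def softmax(x):
    # sigma_k(x) = exp(x_k) / sum_i exp(x_i), computed stably
    z = np.exp(x - x.max())
    return z / z.sum()

rng = np.random.default_rng(0)
d_c, K = 3, 5
A = rng.normal(size=(K, d_c))   # random linear stand-in for the trained MLP A
c = rng.normal(size=d_c)        # context vector

p = softmax(A @ c)              # p(k|c) = Cat(k | sigma o A(c))
k = rng.choice(K, p=p)          # sample a category index k ~ Cat(p)
assert np.isclose(p.sum(), 1.0) and 0 <= k < K
```

The softmax guarantees that the K outputs form a valid categorical distribution regardless of the raw values produced by A.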
Similarly, the distribution p(x|k,c) on the visible space is such that its sufficient statistic is the output of a second MLP B:

p(x|k,c) = p(x | B(c, m_k))   (4)

where m_k is the k-th row of a matrix m ∈ R^{K×d_m}. We refer to the rows of the matrix m as embeddings.
The CMix model thus defines a mixture model of the "visible" distributions p(x|k,c), where the parameters of the components are shared through the MLPs A and B. In embodiments the MLP B defines a non-linear compression between the dimension (length) d_v of the output example vector and the dimension (length) d_m of the category vector or embedding stored in matrix m. Referring to Figure 1a, in block 102 MLP A converts the context vector of dimension d_c to a vector of dimension K, where K is the number of categories of example employed by the signal processor. As previously mentioned, K may be chosen based on the application, or simply to be large or, in a chain as described later, to provide a large number of categories for the chain. MLP A preferably has at least one hidden layer; preferably this has a larger number of nodes than d_c, and preferably also a larger number of nodes than K. In block 104 the "~" indicates that a k is sampled to choose a category: there are K categories and k takes a value indicating one of these. In embodiments, therefore, k may be represented by a vector of length K in which every component has a value of 0 except for component k, which may have a value of 1. All components may have an equal probability of being selected. Alternatively the context vector c, via A and σ, may define, for each component of the vector, the probability of that component having a value of 1 (the probabilities being normalised so that the sum of these probabilities is equal to 1).
Block 106 represents a memory storing matrix m. This may be implemented as a table comprising K rows of data, each row storing d_m values, one for each component of a category vector or "embedding". One of these rows, and hence a category vector or embedding m_k, is selected by the value of k. The category vector or embedding in effect represents a compressed representation of output example data from the processor.
In block 108 MLP B receives the category vector or embedding as an input together with, where used, the context vector c (or data derived from this). MLP B translates this input into a probability vector (p in equation (4) above; p_v in Figure 1a, the "v" denoting the "visible" nodes of B), which has a dimension (length) d_v equal to the number of data points in the desired output example. For example, for an image with 4096 pixels, d_v would be 4096.
The output of MLP B defines a probability (of a value, say 1 or 0) for each data point x_i of an output example. This probability vector may be used as the output from the signal processor as it effectively provides a representation of the output example (for example, for a processor trained on the NORB dataset, as described later, p_v effectively provides a greyscale image of one of the 50 model objects). However in embodiments an output example is generated, as indicated in block 110, by stochastically sampling this probability distribution, i.e. values for data points are chosen according to the probability defined for each by p_v. By contrast, in the chained processors described later, the compressed representation m_k is used as the output example data from the processor.
The dimension (length) d_m of the category vector or embedding m_k is chosen according to the desired or sustainable degree of compression of the training/output data. Thus d_m may be chosen with some knowledge of the degree of compression which is potentially applicable to the dataset used with the processor and/or by routine experiment. In embodiments a high degree of compression is employed between d_v and d_m; for example with images, compression by two or three orders of magnitude or more may be employed. However it is not essential for any compression to be employed; for example for a processor in a classification application the context vector may have the dimension of the image and the number of output data points/nodes may be low (one with, say, a continuous value, or a few, to classify the input into a few classes). In general, however, a significantly compressed representation is desirable.
The MLP B preferably has at least one hidden layer, and works best with two or more hidden layers. Preferably the number of nodes in its hidden layer(s) is at least equal to d_v.

Extended architecture

The architecture of Figure 1a may be considered as employing a pair of stochastic layers (indicated by ellipses in Figure 1a), the output stage 110 and the stochastic selection stage 104, although selection stage 104 effectively comprises just a single stochastic node which may take one of K values.
This architecture may be extended to the more general architecture of Figure 1b. Thus Figure 1b shows a neural network 130 comprising a first stochastic layer 132, for example an input layer, a second stochastic layer 136, for example an output layer, and a deterministic neural network, D, 134 connected between the stochastic layers.
In Figure 1b the connections shown between nodes are illustrative only; the connections between nodes of different layers may be global or local. In the stochastic layers the node values may be drawn randomly from/in accordance with a probability distribution which may be defined, for example, by a weight matrix multiplying a non-linear function such as a sigmoid operating on an input vector.
Broadly speaking, the deterministic neural network D learns to map features of the training data to patterns with the correct frequencies. Consider, for example, a simple version of D with 4 binary output nodes, which can therefore represent 16 patterns: if, say, a particular pattern should appear 1/4 of the time, the structure of Figure 1b will learn to map 1/4 of the training data features to that pattern. It will be appreciated that if D is made sufficiently large then any mapping is possible. The structure will allocate the correct mapping for the correct frequencies of patterns.
Advantageously this structure may be employed to implement a Helmholtz machine-type training procedure, but other training procedures may also be employed. The deterministic nature of D simplifies training (in effect back-propagation may be employed to train D), avoiding the problems that occur with stochastic nodes in a Helmholtz machine, which result in a noisy gradient vector and thus very slow or stalled learning.
Preferably D is large and/or deep, that is, it preferably has a large number of nodes in its one or more hidden layers, and/or two, three or more hidden layers. This provides greater representational power for the distribution(s), twisting and expanding these to a larger representational space. It may be constrained to be sparse (only a relatively small percentage of neurons activated by any particular feature, for example less than 20%, 15%, 10% of the neurons having greater than a threshold activation) and/or employ dropout. In effect, D acts as a feature learner for the training data and the stochastic layers operate on these learnt features.
Multilayer Perceptron (MLP)

An example deterministic neural network which may be used in the architectures of Figures 1a and 1b, for A, B and/or D, is the multilayer perceptron (MLP). The skilled person will be aware of such devices (and further details can be found in, for example, C.M. Bishop, "Neural networks for pattern recognition", Oxford University Press, 1995) but for completeness we will outline an example structure.
A multilayer perceptron (MLP) is a deterministic function with a specific parametric structure alternating linear and non-linear operations, making it a universal function approximator: it can approximate any real-valued multivariate deterministic function f: R^{d_c} → R^{d_v}, as long as it has been trained with enough pairs (c, f(c)).
Figure 1c shows an example architecture of an MLP. This MLP has an input layer containing 2 units plus a bias unit, two hidden layers containing 4 and 6 units plus a bias unit each, and an output layer containing 4 units, with no bias needed at the output. (In principle a bias unit enables the representation of the constant term in an affine map, but in practice the bias units are optional, particularly in larger neural networks with many nodes in a layer). In Figure 1c arrows represent linear combinations, i.e. multiplications by a given weight and summation of all incoming arrows. Circles represent scalar units. Units labelled tanh operate a non-linear transformation of their input; units labelled 1 are constant bias units. The vector c = (c_1, c_2) is the input, while the output is collected into the vector A(c) = (A_1(c), ..., A_4(c)).

More formally, an MLP is a composition of linear and non-linear operations on spaces of arbitrary dimensions, each such space being usually named a layer, and each component of each space being named a unit. An MLP A from R^{d_c} to R^{d_v} will therefore have one input layer with d_c units, one output layer with d_v units, and an arbitrary number n_H of intermediate hidden layers of dimensions d_{H,1}, ..., d_{H,n_H}. Its precise form is the following composition of linear functions H^k and non-linear functions σ^k:

A(c) := H^{n_H+1} ∘ σ^{n_H} ∘ H^{n_H} ∘ σ^{n_H-1} ∘ H^{n_H-1} ∘ ... ∘ σ^1 ∘ H^1(c).
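The composition above can be sketched as a plain forward pass; the layer sizes follow the Figure 1c example (2 inputs, hidden layers of 4 and 6 units, 4 outputs), with random placeholder weights and each bias folded into a separate vector:

```python
import numpy as np

def mlp_forward(c, weights):
    # Each (W, b) pair is an affine map H^k; tanh is applied after every
    # layer except the last, so the output can take any value in R^{d_v}.
    x = c
    for i, (W, b) in enumerate(weights):
        x = W @ x + b                    # affine transformation H^k
        if i < len(weights) - 1:
            x = np.tanh(x)               # activation sigma^k
    return x

rng = np.random.default_rng(1)
dims = [2, 4, 6, 4]                      # layer sizes as in Figure 1c
weights = [(rng.normal(size=(m, n)), rng.normal(size=m))
           for n, m in zip(dims[:-1], dims[1:])]
out = mlp_forward(np.array([0.5, -1.0]), weights)
assert out.shape == (4,)
```

Note that the hidden activations are bounded by tanh while the final affine layer leaves the output unconstrained, matching the description above.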
The functions H^k, for any k in {1, ..., n_H+1}, are affine transformations from R^{d_{H,k-1}} to R^{d_{H,k}}, where d_{H,0} := d_c and d_{H,n_H+1} := d_v. More precisely, with a slight abuse of notation, we identify the function H^k with a d_{H,k} × (d_{H,k-1} + 1) matrix and define, for any x in R^{d_{H,k-1}}, H^k(x) := H^k · (x, 1).
The components of the matrices H^1, ..., H^{n_H+1} are the weights of the MLP, and are the free parameters that are trained by gradient methods to approximate the function of interest.
The functions σ^k are non-linear functions from R^{d_{H,k}} to R^{d_{H,k}}, called activation functions or "squashing functions" since some common choices map to [0,1]^{d_{H,k}}. They are typically chosen as component-wise applications of the hyperbolic tangent tanh or of the logistic sigmoid 1/(1 + exp(-x)). This activation function is not applied to the output of the last hidden layer, to allow the output of the neural network to take any value in R^{d_v}. In practice, the number of hidden layers n_H, their numbers of units and the activation functions σ^k may be chosen by trial and error and practical considerations.
Training an MLP to approximate a function f amounts to choosing the adequate weights, i.e. the components of the matrices H^1, ..., H^{n_H+1}. This is typically achieved by solving the minimization problem

arg min_A Σ_{(x, f(x))} E(A(x), f(x)),

where the sum is over a training dataset of known pairs (x, f(x)), and E(A(x), f(x)) is an error function that measures the divergence between A(x) and the known outputs f(x). This error function is, for example, a least-squares error or a logarithmic loss function. The optimization algorithm employed to solve this minimization is usually one of many variants of gradient descent, evaluating the partial derivatives

∂E(A(x), f(x)) / ∂H^k_{i,j}
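To illustrate, a minimal sketch of this training loop with hand-written back-propagation, a least-squares error, and a single hidden layer of tanh units; the target function and all sizes here are illustrative assumptions, not values from the embodiment:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.array([x[0] * x[1]])          # target function to approximate
X = rng.uniform(-1, 1, size=(200, 2))          # training inputs

# One hidden layer of 8 tanh units: A(x) = W2 @ tanh(W1 @ x + b1) + b2
W1, b1 = rng.normal(0, 0.5, (8, 2)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (1, 8)), np.zeros(1)

def mse():
    preds = np.array([(W2 @ np.tanh(W1 @ x + b1) + b2)[0] for x in X])
    return np.mean((preds - np.array([f(x)[0] for x in X])) ** 2)

mse_before, lr = mse(), 0.02
for epoch in range(200):
    for x in X:
        h = np.tanh(W1 @ x + b1)
        err = (W2 @ h + b2) - f(x)             # dE/dy for E = 0.5*||A(x)-f(x)||^2
        # back-propagation: chain rule through the affine and tanh layers
        gW2, gb2 = np.outer(err, h), err
        dh = (W2.T @ err) * (1 - h ** 2)       # tanh'(u) = 1 - tanh(u)^2
        gW1, gb1 = np.outer(dh, x), dh
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
mse_after = mse()
assert mse_after < mse_before                   # training reduced the error
```

Each update moves the weights against the partial derivatives of the error, exactly the back-propagation of the error described above.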
by cautious application of the chain-rule of derivation. Such evaluation of the derivatives is referred to as back-propagation of the error.
3. Sampling

Referring again to the Compressed Mixture (CMix) signal processor of Figure 1a, we now describe a procedure for producing samples from this processor. This is a straightforward application of Equations (1), (3) and (4), and the sampling procedure is detailed in Algorithm 1, below, in terms of samples from a categorical distribution and from the p distribution:

Algorithm 1 - Generating a sample from a compressed mixture
function GENERATESAMPLE(c)
    p ← σ ∘ A(c)
    k ~ Cat(p)
    x ~ p(· | B(c, m_k))
    return x, m_k
end function

Here "k ~ Cat(p)" denotes choosing a k from a set of K numbers according to probabilities p, as previously described. It will be appreciated that in this sampling procedure c and m are known (from previous training).
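A minimal numerical sketch of Algorithm 1 for Bernoulli visible units follows; the matrices A and B below are random linear stand-ins for the trained MLPs, and every dimension is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
d_c, d_m, d_v, K = 3, 2, 8, 5
A = rng.normal(size=(K, d_c))            # stand-in for MLP A
B = rng.normal(size=(d_v, d_c + d_m))    # stand-in for MLP B
m = rng.normal(size=(K, d_m))            # embedding matrix, one row per category

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def generate_sample(c):
    p = softmax(A @ c)                            # p(k|c), equations (2)-(3)
    k = rng.choice(K, p=p)                        # k ~ Cat(p)
    p_v = sigmoid(B @ np.concatenate([c, m[k]]))  # sufficient statistic of p(x|k,c)
    x = (rng.random(d_v) < p_v).astype(int)       # x ~ Bernoulli(p_v), block 110
    return x, m[k]

x, e = generate_sample(rng.normal(size=d_c))
assert x.shape == (d_v,)
```

As in Algorithm 1, the function returns both the visible sample x and the embedding row m_k that generated it, the latter being what a chained processor would pass downstream.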
For future convenience (i.e. use in a CMix chain), the procedure GENERATESAMPLE returns both a visible sample from the CMix model and the row in the embedding space which served to generate it, but in general the algorithm may return one or more of x, m_k and p(· | B(c, m_k)) (the final sampling step x ~ p(· | B(c, m_k)) is optional). Optionally c may be constant, in which case the output represents the learnt values without context. As previously mentioned, p(· | B(c, m_k)) may be discrete, continuous, bounded etc.; in general it may be any distribution whose sufficient statistics are those of an MLP, i.e. any distribution representable by the output of an MLP.
4. Learning

The CMix processor may be trained by learning the optimal value of its parameters using an online EM (expectation-maximisation) algorithm, which takes a straightforward form for this processor.
Here θ is a vector of all parameters in the CMix, i.e. the weights in the MLP A, the matrix m and the weights in the MLP B. It will be appreciated that there may be many thousands of such parameters.
For any given data sample x, the first step of the EM procedure is to compute the gradient G^θ(x,c) of log p(x|c) with respect to the parameters θ:

G^θ(x,c) = ∇_θ log p(x|c)   (5)
= ∇_θ log Σ_k p(x,k|c)   (6)
= E[∇_θ log p(x,k|c) | x]   (7)
= Σ_k p(k|x,c) ∇_θ [log p(x|k,c) + log p(k|c)].   (8)

Equality (7) is an application of the Fisher identity (see e.g. O. Cappé, T. Rydén, and E. Moulines, "Inference in hidden Markov models", Springer, 2005, proposition 10.1.6, p. 353). (The notation in (8) with x on both sides of "|" denotes fixing x to its value on the right hand side and integrating.)
The posterior mixture weights p(k|x,c) are referred to as the responsibilities of each component of the mixture, and are the posterior distribution of the latent categorical index conditionally on the observation x and the context c:

p(k|x,c) = p(x|k,c) p(k|c) / Σ_j p(x|j,c) p(j|c).   (9)

The second step of the EM algorithm then proceeds to maximizing log p(x), hence the name M-step. We simply increment the parameters θ in the direction given by G^θ.
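Equation (9) can be sketched directly; computing the responsibilities in log space (an implementation choice, not mandated by the embodiment) avoids numerical underflow when the per-category likelihoods are small:

```python
import numpy as np

def compute_responsibilities(log_px_given_k, log_pk):
    # r_k = p(x|k,c) p(k|c) / sum_j p(x|j,c) p(j|c), evaluated in log space
    log_r = log_px_given_k + log_pk
    log_r -= log_r.max()            # stabilise before exponentiating
    r = np.exp(log_r)
    return r / r.sum()              # normalisation over the K categories

# hypothetical per-category log-likelihoods and log-priors for K = 4
r = compute_responsibilities(np.array([-5.0, -2.0, -9.0, -2.5]),
                             np.log(np.full(4, 0.25)))
assert np.isclose(r.sum(), 1.0)
```

The resulting vector sums to one, with the largest responsibility on the category that best explains the observation.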
Algorithm 2 describes this procedure, with the optional improvement that it accumulates the gradient over a minibatch of several randomly sampled observations before proceeding to the M-step. A typical minibatch size may be of order 1-10 examples.
Algorithm 2 - Learning in the Compressed Mixture model
1: function TRAINCMIX()
2:     while ISTRAINING() do
3:         ZEROGRADPARAMETERS()
4:         {x^i, c^i}_{i=1...N} ← GETTRAININGDATA()
5:         for i = 1 ... N do
6:             ACCUMULATEGRADIENTS(x^i, c^i)
7:         end for
8:         MSTEP()
9:     end while
10: end function
11: function ZEROGRADPARAMETERS()
12:     G^θ ← 0
13: end function
14: function MSTEP()
15:     θ ← θ + η G^θ
16: end function
17: function ACCUMULATEGRADIENTS(x, c)
18:     r ← COMPUTERESPONSIBILITIES(x, c)
19:     G^θ ← G^θ + Σ_k r_k ∇_θ [log p(x|k,c) + log p(k|c)]
20:     G^c ← Σ_k r_k ∇_c [log p(x|k,c) + log p(k|c)]
21:     return G^c
22: end function
23: function COMPUTERESPONSIBILITIES(x, c)
24:     for k = 1 ... K do
25:         r_k ← p(x|k,c) p(k|c)
26:     end for
27:     s ← Σ_{k=1...K} r_k
28:     return r / s
29: end function
30: function SAMPLEEMBEDDINGFROMPOSTERIOR(x, c)
31:     r ← COMPUTERESPONSIBILITIES(x, c)
32:     k ~ Cat(r)
33:     return m_k
34: end function

The algorithmic complexity of a single training step of the CMix model scales as O(K), i.e. linearly with the number of categories K. The procedure ACCUMULATEGRADIENTS at line 17 of Algorithm 2 can (optionally) return the gradient G^c of log p(x|c) with respect to the context c (to allow propagation of the gradient through a CMix chain).
The gradient G^θ is not explicitly shown as being returned because, in embodiments, this is a global parameter of the algorithm. The vector of parameters θ may be initialised with random values; the parameter η represents the learning rate of the algorithm, and may be chosen by experiment. The responsibility vector r is preferably returned normalised (that is, divided by the sum s as shown). The function SAMPLEEMBEDDINGFROMPOSTERIOR is included in Algorithm 2 although it is not part of the training procedure, because it is used later when chaining CMix processors.
In line 19, the calculation of G^θ employs a calculation of p(x|k,c) and of p(k|c).
The term p(k|c) may be determined from equation (3) (knowing the weights of A, and c). The term p(x|k,c) may be determined using equation (4), where x is known (a training example), c is known, and m and the weights of B are known (these are the parameters being optimised). The particular form of the calculation of p(x|k,c) depends on the type of distribution of x, for example whether it is Bernoulli (x_i is 0 or 1), Binomial (x_i is in the range 0 to 255, say), or Gaussian. An equation for p(x|k,c) may be determined analytically, for example by hand (in equation (4) we know the inputs to B and the probability is linear on the output of B), but in practice this always takes a simple form, linear (or for a Gaussian, polynomial) on x, and a function of a logarithm of B. Some examples are given below:

Bernoulli case: x_i ∈ {0, 1}
log p(x|k,c) = Σ_i [x_i log q(B_i(m_k,c)) + (1 - x_i) log(1 - q(B_i(m_k,c)))]
q(v) = 1 / (1 + exp(-v))

Binomial case: x_i ∈ {0, ..., N}
log p(x|k,c) = Σ_i [x_i log q(B_i(m_k,c)) + (N - x_i) log(1 - q(B_i(m_k,c)))]

Compressed Mixture processor chain (CMixChain)

In practice the above described Compressed Mixtures are limited in the number of distinct samples they can generate by the processing cost. The number K of mixture components in the top layer is arbitrarily chosen, and it could theoretically be very large without any impact on the number of operations (algorithmic cost) required for sampling, which is constant, O(1). However, the number of operations of a single learning step grows as O(K), i.e. linearly with the number of categories, making very large numbers of categories impractical.
We now describe techniques employing chained compressed mixture processors, which alleviate this problem by using the combinatorial explosion of successive compressed mixtures: the first level in the chain provides its sampled category as part of the context of the second level, which in turn passes this and its own sampled category as part of the context of the third level, and so on up to an arbitrary L levels.
In practice a small number of levels has proven remarkably powerful. The cost of sampling grows as O(L) with L being very moderate, while the learning cost grows as O(L² × K), i.e. sub-linearly with the number of actual categories. Thus by chaining CMix processors a large increase in the number of actual categories that can be sampled from can be obtained, while keeping a scalable training cost, using approximation in the EM algorithm by inheriting sampled categories as described.
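As a quick illustration of this trade-off, with an assumed chain of L = 4 levels of K = 10 categories each (these numbers match the Figure 4 example but are otherwise arbitrary):

```python
# A chain of L levels with K categories each can address K**L distinct
# category paths, while its learning cost scales only like L**2 * K.
L, K = 4, 10
effective_categories = K ** L        # distinct category combinations: 10,000
chain_learning_cost = L ** 2 * K     # O(L^2 K) scaling of the chained EM step
flat_learning_cost = K ** L          # O(K^L) scaling of an unchained CMix
assert effective_categories == 10_000
assert chain_learning_cost < flat_learning_cost
```

Even at this small scale the chained cost (160 units) is two orders of magnitude below the unchained cost (10,000 units), and the gap widens exponentially with L.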
Figure 2 shows the architecture of a signal processing system 140 comprising a chain of compressed mixture signal processors 100a, 100b, 100c. Apart (optionally) from the last, each compressed mixture signal processor has an output 107a, 107b to provide its chosen category vector or embedding m_k to the next processor in the chain; this is concatenated into the context vector provided to the next stage. In Figure 2 the solid arrows indicate the information flow for sampling (red, S) and during learning (blue, L).
The calculation of responsibility p(k|x,c) from p(x|k,c) (arrow L1) and p(k|c) (arrow L2) is shown as a multiplication but will in general also involve a summation (for normalisation, as shown in line 27 of Algorithm 2). Although for convenience of representation the same internal dimensions (vector lengths) are suggested for the chained signal processors, there is no obligation for these to be the same for each CMix processor; for example the processors could have different category vector/embedding dimensions d_m, and/or the sizes of the neural networks A and/or B could grow with progression down the chain. As indicated by the dashed line accompanying the sampling information flow in the final stage 100c, the output from the chain may be a stochastically selected output example or a probability vector defining probabilities of data points for such an output example.
1. Architecture

Continuing to refer to Figure 2, a CMix processor chain is constructed from a sequence of L CMix processors indexed by l = 1...L, referred to as the levels in the chain. A key feature of the chain of processors is that each successive CMix in the chain is conditioned on the samples of the preceding CMix processors, as illustrated, yielding a sequential refinement of the generated samples.
Note that A^l, m^l, and B^l denote the components of the CMix of level l, and k = (k_1, ..., k_L) is the concatenation of all indices k_l ∈ {1, ..., K_l} in the CMix of all levels in the chain, with a total of L levels. Each level can have a different number K_l of categories. In embodiments the parameters A^l, m^l, and B^l belong to signal processor l in the chain, and each signal processor has its own parameters (matrix memory (m) and MLP weights).
We can write the joint distribution of any such vector, regardless of the architecture, as the product of sequential conditionals:

p(k|c) := Π_{l=1...L} p(k_l | k_{<l}, c),   (10)

where p(k_l | k_{<l}, c) are conditional categorical distributions defined similarly to (3), and <l denotes the previous signal processors in the chain.
We define a CMixChain conditional distribution over Ω^{d_v} as

p(x|c) = Σ_k p(x, k|c),   (11)

where

p(x, k|c) = p(x|k, c) p(k|c)   (12)
= p(x|k, c) Π_{l=1...L} p(k_l | k_{<l}, c)   (13)

and the distributions p(x|k,c) and p(k_l|k_{<l},c) are parametrized as

p(x|k, c) = p(x | B^L(c^{L-1}, m^L_{k_L})),   (14)
p(k_l | k_{<l}, c) = p(k_l | σ ∘ A^l(c^{l-1})),   (15)
= Cat(k_l | σ ∘ A^l(c^{l-1})),   (16)

where

c^0 := c,   (17)
c^l := (c, m^1_{k_1}, ..., m^l_{k_l}) for all l ≥ 1   (18)

is the concatenation of the original context c and the embeddings chosen in the successive categories up to and including level l. That is, at each level l the l-th CMix receives as input the concatenated sampled memories (c, m^1_{k_1}, ..., m^{l-1}_{k_{l-1}}) of all preceding CMix in the chain together with the global context c. Preferably the size and/or number of hidden layers in the neural nets A^l and B^l is increased as the level depth l increases, to accommodate the growing size of the input.
2. Sampling

As illustrated in Figure 2, sampling from the CMixChain is performed by sequentially sampling and concatenating the embeddings (category vectors) m from the successive CMix levels until reaching the last level, which is then entirely sampled through. This procedure is detailed in Algorithm 3, below, which uses the GENERATESAMPLE procedure from Algorithm 1, for a single CMix processor, to return a value x dependent on context c. In Algorithm 3 the returned embedding is denoted e.
Algorithm 3 - Sampling from the CMixChain model
1: function GENERATESAMPLE(c)
2:     for l = 1 ... L do
3:         x, e ← CMix_l.GENERATESAMPLE(c)
4:         c ← (c, e)
5:     end for
6:     return x
7: end function

Figure 3a shows observed samples from the NORB dataset used to train a CMixChain.
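The chaining in Algorithm 3 can be sketched as follows; each per-level "CMix" here is a deliberately toy stand-in (a random embedding row and a context-dependent Bernoulli output), intended only to show how each level's chosen embedding is concatenated onto the context passed to the next level:

```python
import numpy as np

def chain_generate_sample(c, levels, rng):
    # Algorithm 3: each level returns (sample, chosen embedding); the
    # embedding is concatenated onto the context fed to the next level,
    # and only the last level's visible sample is returned.
    for level in levels:
        x, e = level(c, rng)
        c = np.concatenate([c, e])
    return x

def make_level(m, d_v):
    # Toy level: pick a random embedding row m_k, then emit a Bernoulli
    # sample whose mean depends on the running context and the embedding.
    def level(c, rng):
        e = m[rng.integers(len(m))]                          # sampled m_k
        p_v = 1 / (1 + np.exp(-(c.sum() + e.sum()) * np.ones(d_v)))
        return (rng.random(d_v) < p_v).astype(int), e
    return level

rng = np.random.default_rng(3)
m1, m2 = rng.normal(size=(4, 2)), rng.normal(size=(4, 2))    # 2-level chain
x = chain_generate_sample(np.zeros(3), [make_level(m1, 6), make_level(m2, 6)], rng)
assert x.shape == (6,)
```

Note how the context vector grows by d_m components at every level, matching the definition of c^l in equation (18).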
To aid in understanding the operation of an example single CMix processor, Figure 3b shows a set of compressed representations of images from the dataset of Figure 3a, for a CMix processor with K = 100 categories, each with a category vector or embedding of dimension d_m = 2; that is, each image of the training dataset is compressed such that it is represented by just two continuous real values. Figure 3b plots points identified by these two continuous values, using these values as x- and y-coordinates, and plotting 100 points, one for each category.
Figure 4 shows output examples generated by successive signal processors at levels l = 1, 2, 3, 4 in a CMixChain of signal processors comprising 10 categories per level and 4 levels. These can be compared with the examples of Figure 3a, and illustrate the successive details typically added by each level in the chain. Figure 4 is to aid understanding of the operation of the chain; in an embodiment of the signal processing chain only the output from the final processor of the chain would be provided externally.
3. Learning

Learning in the CMixChain model employs an approximation of the previously described EM procedure. Algorithm 4, below, details this training procedure; the underlying mathematical basis is described later. The complexity of this approximated algorithm scales as O(LK(d_c + d_v) + L²(K + d_m)) instead of the O(K^L(d_c + d_v + d_m)) which would be the cost of the exact algorithm, i.e. the computation cost scales as L²K rather than as K^L.
Algorithm 4 - Learning in the CMixChain model
1: function TRAINCMIXCHAIN()
2:     while ISTRAINING() do
3:         ZEROGRADPARAMETERS()
4:         {x^i, c^i}_{i=1...N} ← GETTRAININGDATA()
5:         for i = 1 ... N do
6:             ACCUMULATEGRADIENTS(x^i, c^i)
7:         end for
8:         MSTEP()
9:     end while
10: end function
11: function ZEROGRADPARAMETERS()
12:     for l = 1 ... L do
13:         CMix_l.ZEROGRADPARAMETERS()
14:     end for
15: end function
16: function MSTEP()
17:     for l = 1 ... L do
18:         CMix_l.MSTEP()
19:     end for
20: end function
21: function ACCUMULATEGRADIENTS(x, c)
22:     for l = 1 ... L do
23:         CMix_l.ACCUMULATEGRADIENTS(x, c)
24:         e ← CMix_l.SAMPLEEMBEDDINGFROMPOSTERIOR(x, c)
25:         c ← (c, e)
26:     end for
27: end function

Algorithm 4 uses the functions defined in Algorithm 2 for a single CMix processor; G^θ does not appear explicitly because it is part of the global parameters of a single CMix, although as previously mentioned, each CMix has its own set of parameters θ^l and gradient vector G^{θ^l} which, in embodiments, is not inherited from one CMix signal processor to the next. Thus in embodiments each CMix processor is trained independently and just the context vector c is inherited.
In Algorithm 4 the ACCUMULATEGRADIENTS function's return value G^c (the gradient with respect to the context c) is not required for the chain of signal processors, and "memory" from the previous level is provided by the inherited context. In principle, however, G^c could be used to inherit more information from one level to the next.
Learning formalism

To facilitate understanding we will now outline a mathematical justification for the training procedure of Algorithm 4 for a chain of CMix processors.
For simplicity, we derive here the computations in the case where the number of categories K_l = K is constant across all layers. Extension to the general case is straightforward.
The algorithmic complexity of a single training step of the CMixChain model scales quadratically with L as

O(LK(d_c + d_v) + L²(K + d_m)),   (24)

whereas that of a single CMix unchained with an equivalent number K^L of total categories would scale exponentially with L as

O(K^L(d_c + d_v + d_m)).   (25)

We recall that the gradient of the log-likelihood of a datapoint x associated to a generative model p(x,k) with latent variables k can always be expressed as an expectation under the posterior over the latent variables:

∇ log p(x) = ∇ log Σ_k p(x, k)   (26)
= E_{p(k|x)}[∇ log p(x, k)],   (27)

that is, computing the gradient ∇ log p(x) requires knowledge of the posterior distribution p(k|x). In the following we introduce a variational approximation q(k|x,c) to the posterior p(k|x,c) and a way of training its internal parameters in order to achieve the desired scaling properties (see, for example, M.J. Beal, "Variational algorithms for approximate Bayesian inference", PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2003; and M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul, "An introduction to variational methods for graphical models", Machine Learning, 37:183-233, 1999).
The variational framework replaces the problem of maximizing the data log-likelihood

θ* = argmax_θ log p_θ(x)  (28)

= argmax_θ log Σ_k p_θ(x,k),  (29)

with a nested minimization

(θ*, Q*) = argmin_{θ,Q} E_Q[log Q(k | x) − log p_θ(x,k)]  (30)

=: argmin_{θ,Q} F,  (31)

where Q is a distribution over the latent variables k, referred to as a variational distribution, and F is defined as the variational free energy.
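A toy numerical illustration of this bound, with made-up joint probabilities: the free energy F(Q) upper-bounds −log p(x), with equality exactly when Q is the true posterior.

```python
import math

# Sketch of Eqs. (28)-(31) on a toy discrete model: F(Q) = E_Q[log Q(k|x)
# - log p(x,k)] is minimized, over distributions Q, by the true posterior
# p(k|x), at which point F = -log p(x). The joint values are illustrative.

p_joint = [0.10, 0.25, 0.05]            # p(x, k) for k = 0, 1, 2 at a fixed x
px = sum(p_joint)
posterior = [p / px for p in p_joint]

def free_energy(q):
    return sum(q[k] * (math.log(q[k]) - math.log(p_joint[k]))
               for k in range(len(q)) if q[k] > 0)

F_post = free_energy(posterior)          # attains the bound -log p(x)
F_other = free_energy([1/3, 1/3, 1/3])   # any other Q gives a strictly larger F
```

This is why the nested minimization over (θ, Q) is equivalent to maximum likelihood: the inner minimization over Q tightens the bound, and the outer step on θ then increases log p(x).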
For the model defined in Equation (13), we start by defining a variational distribution Q(k | x,c) in a factorized form

Q(k | x,c) := ∏_{l=1}^{L} Q_l(k_l | x, c, k_{<l}).  (32)

The free energy associated to this variational posterior is given by

F = E_Q[log Q(k | x,c) − log p(x,k | c)].  (33)

Minimizing F with respect to Q_l under the constraint that it is a probability distribution yields the closed-form solution

Q_l*(k_l | x, k_{<l}, c) ∝ p(k_l | k_{<l}, c) exp{ E_{Q_{>l}}[ log p(x | k,c) | k_{≤l} ] },  (34)

where ∝ denotes equality up to a normalization constant. This result can be written as

Q_l*(k_l | x, k_{<l}, c) = p(k_l | k_{<l}, c) f_l(x | c, k_{≤l}) / Σ_{j=1}^{K} p(k_l = j | k_{<l}, c) f_l(x | c, k_{<l}, k_l = j),  (35)

where the quantity

f_l(x | c, k_{≤l}) := exp E_{Q_{>l}}[ log p(x | k,c) | k_{≤l} ]  (36)

can be seen as an unnormalized distribution over the visible x conditionally on the context c and on the chosen categories k_{≤l} from the variational distribution up to the l-th level.
The variational posterior distribution Q* obtained in this manner would correspond exactly to the true posterior p(k | x,c). We can as well identify the correspondence between the solution (35) and more common forward-backward algorithms by noting that the factor f_l in the numerator of Equation (35) is computed by a backward recursion, while the factor p(k_l | k_{<l}, c) is a forward simulation of Equation (10), as indicated below:

Q_l*(k_l | x, k_{<l}, c) ∝ p(k_l | k_{<l}, c) [forward] · exp E_{Q_{>l}}[ log p(x | k,c) | k_{≤l} ] [backward].  (37)

The next step of our derivation is to approximate the expectation

E_{Q_{>l}}[ log p(x | k,c) | k_{≤l} ].  (38)

Since this expectation depends only on Q_{l'} for l' > l, we could obtain the exact Q* by solving recursively the equations for Q_{l'}*, starting at the last level l = L and going back to the first level l = 1. This exact solution is not tractable, having an algorithmic complexity O(K^L). Moreover, the EM algorithm is modifying the parameters of p(x | k,c) so that f_l(x | c, k_{≤l}) defined in Equation (36) approaches the empirical distribution. This learning goal is the same as for the EM iterations for a single CMix model. Therefore, we can adopt the following approximation:

E_{Q_{>l}}[ log p(x | c,k) | k_{≤l} ] ≈ log p^l(x | k_{≤l}, c),  (39)

where p^l(x | k_{≤l}, c) is the observation model of the l-th CMix model in the chain.
Replacing this approximation into Equation (35) yields

Q*(k_l | x, k_{≤l}, c) = p(k_l | k_{<l}, c) p^l(x | k_{≤l}, c) / Σ_{j=1}^{K} p(k_l = j | k_{<l}, c) p^l(x | k_{<l}, k_l = j, c).  (40)

The approximated solution (40) for Q* has the same form as the posterior distribution of a single CMix model given in Equation (9). It therefore allows us to re-use the distribution p^l, as well as the machinery to learn it, inside each CMix in the CMixChain in a modular manner.
The full variational distribution Q* thus becomes

Q*(k | x,c) = ∏_{l=1}^{L} p(k_l | x, k_{<l}, c),  (41)

where p(k_l | x, k_{<l}, c) is computed internally by the l-th CMix model given the currently observed data sample x and the input from all the preceding CMixes in the chain concatenated with the global context c. The maximization in Equation (31) with respect to the remaining parameters not belonging to Q is performed by gradient ascent of Equation (27), where each parameter update may be computed using a single sample from Q*. The resulting procedure is detailed in Algorithm 4.
Example implementation
Figure 5 shows a schematic block diagram of the structure of electronic hardware/software modules to implement a CMix signal processor 100 as previously described.
Thus the context data c is provided to a context vector mapping unit 112, implemented by MLP A, as well as to MLP B of a probability vector generation system 118; these correspond to blocks 102 and 108 of Figure 1a and implement corresponding functions.
The context data is also provided to a training module 122.
The mapping unit A provides a K-wide output to a stochastic category selector 114, which has a function corresponding to block 104 of Figure 1a, and this in turn provides category selection data, for example in the form of an index or address, to category (embedding) vector memory 116 storing matrix m.
Memory 116 provides a d_m-wide output to probability vector generation system 118, having a function corresponding to block 108 of Figure 1a and implemented by MLP B. System 118 also receives the context vector c, and provides a d_v-wide probability vector output to an optional stochastic output selector 120, which samples from the distribution defined by the probability vector P_v to provide a sampled output example x (corresponding to block 101 of Figure 1a).
Training module 122 receives training data on input 124, and optionally context data c, and implements the training procedure of Algorithm 2 to update the weights of MLP A (parameters θ_A), the weights of MLP B (parameters θ_B), and the category vectors or embeddings stored in memory 116 (the matrix m). Training module 122 need not be part of the signal processor 100: for example the parameters θ could be trained by an external system which is afterwards removed, or the signal processor may be merely programmed with predetermined values, for example by storing these into permanent memory such as read-only memory, non-volatile RAM such as Flash™, or on a disk.
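The sampling path through units 112-120 might be sketched as follows in Python, with each "MLP" reduced to a single random linear layer for brevity; all sizes, weights and the seed are placeholders invented for this sketch, not values from the patent.

```python
import math
import random

# Illustrative sketch only of the CMix sampling path of Figure 5:
# context -> MLP A -> stochastic category selection -> embedding lookup
# -> MLP B -> probability vector -> stochastic output sample.
random.seed(0)
K, d_c, d_m, d_v = 4, 3, 5, 6        # categories, context, embedding, visible dims

def rand_matrix(rows, cols):
    return [[random.gauss(0.0, 0.5) for _ in range(cols)] for _ in range(rows)]

W_A = rand_matrix(K, d_c)            # unit 112, "MLP A": context -> category logits
M = rand_matrix(K, d_m)              # unit 116: category (embedding) vector memory
W_B = rand_matrix(d_v, d_c + d_m)    # unit 118, "MLP B": (context, embedding) -> logits

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def sigmoid(b):
    return 1.0 / (1.0 + math.exp(-b))

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def sample_cmix(c):
    cat_probs = softmax(matvec(W_A, c))                 # category probability vector
    k = random.choices(range(K), weights=cat_probs)[0]  # unit 114: stochastic selector
    p_v = [sigmoid(b) for b in matvec(W_B, c + M[k])]   # probability vector P_v
    x = [1 if random.random() < p else 0 for p in p_v]  # unit 120: Bernoulli sample
    return k, p_v, x

k, p_v, x = sample_cmix([1.0, 0.0, 0.5])
```

Note how the category vector M[k] is a compressed representation: MLP B decompresses it, jointly with the context, into a full probability vector over the d_v data points.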
The skilled person will appreciate that the structure of Figure 5 may be implemented in electronic hardware/circuitry. For example it may be defined in a hardware definition language and compiled into hardware, or it may be implemented in an ASIC or FPGA.
Alternatively some or all of the illustrated blocks may be implemented using a program-controlled signal processor, which may form part of the hardware, for example by including a processor block on an ASIC/FPGA. Alternatively the structure of Figure 5 may be implemented by software modules running on a digital signal processor (DSP) or on, say, a graphics processing unit (GPU). Still further alternatively the structure of Figure 5 may be implemented on a general purpose computer system, or across a plurality of coupled computing systems, for example implementing a high performance computing system.
Figure 6 shows a general purpose computer system 150 programmed to implement a chain of CMix signal processors as illustrated in Figure 2. Thus the computer system comprises a CMix server 152 including a processor and working memory. Server 152 also includes non-volatile memory 154 storing processor control code to implement a plurality of CMix signal processors 100 of the type shown in Figures 1a and 5, as well as code to sample from the CMix chain to provide an output, and code to implement the training procedure of Algorithms 2 and 4. Server 152 is also coupled to non-volatile storage 156 which stores weights for neural networks A and B and the embeddings of matrix m. The code/data in memory 154 and storage 156 may be provided on a removable storage medium, illustratively shown as disk 158.
The CMix server 152 is provided with input data, optionally with associated context data. The input data may be of any type, including but not limited to one or more of: game/search/multimedia data, real-world/sensor data, and external signal data.
Applications also include time-series data, training on a temporal series of examples, albeit the examples may be treated as effectively independent rather than as a time-series succession per se. Such time series data may be of any type including the aforementioned types, as well as time-series image (video) data, audio data, weather and other physical/chemical and/or biological data, financial data, and so forth. The neural network server 152 similarly provides corresponding output data, based on the training examples it has learnt and optionally context data provided to the server.
A user, and/or robot/machine, and/or other computer system(s)/CMix processor(s)/chain(s) may interact with the neural network server 152 to provide input data and/or receive output data via network 160, which may include the Internet. By way of illustration, a user terminal 162, robot/machine 164 and link to other network(s)/computer system(s) 166 are shown in Figure 6.
Example applications
The CMix signal processors we describe can be employed in a wide range of domains, and provide good representative power combined with rapidity of sampling. We describe below some example applications in which these features are advantageous; these are merely illustrative and non-exhaustive. A CMix processor may be trained in a supervised or unsupervised manner.
1. "Imagining" elements from a category
Straightforward sampling from the learned conditional distribution p(x | c) can be used to simulate imagination: when trained on labelled data, learning with the label as context c and the object as sample x, sampling from p(x | c) outputs example data for, or "imagines", an object from a given category.
2. Classifying objects amongst categories
Conversely, training a CMix processor (or chain) with the label as observation x and the object as the context c, then sampling from p(x | c), turns the CMix processor into a classifier, predicting the category of an unlabelled object.
For example in a supervised training process for recognising the digits 0-9 it may be known that a particular image corresponds to, say, a "2" and the CMix processor may be trained with the image as the context and x denoting the recognised digit. In such a case x may be a scalar variable with a range of values denoting different digits, or it may be, say, a vector of length 10 with binary-valued components.
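The second label encoding mentioned above, a length-10 binary vector, can be illustrated with a small hypothetical helper; the name one_hot is ours, not the patent's.

```python
# Hypothetical helper: represent a digit 0-9 as a length-10 binary vector
# with a single 1 at the digit's position, as described in the text above.
def one_hot(digit, n=10):
    v = [0] * n
    v[digit] = 1
    return v

label = one_hot(2)   # the digit "2" from the example above
```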
3. Completion
In another example application a CMix processor or chain can observe part of an image, and be asked to "imagine" or complete the full image which best matches the partial image provided. Thus Figure 7 shows examples of image completion by a CMixChain. This illustrates, on the left, 4 test samples (never seen by the model before) to be completed. The yellow region (to the right of the dashed line) indicates the pixels that have been occluded from the model. On the right is shown the CMixChain's "best guess" for the occluded pixels; it can be seen that the result is representative of the occluded portion of the input to be completed. This technique can also be applied to completion of time series for missing data, smoothing, and the like.
In more detail, completion involves sampling from a conditional distribution other than p(x | c).
Instead, for a given context c, we observe only a part x_v of the object (e.g. half of the pixels in the image) while the rest of the image, x_h, is hidden from view. Here v and h are two disjoint sets of indices such that x = (x_v, x_h). The only requirement is that the hidden part x_h and the visible part x_v are independent of each other conditionally on the context and the category; that is, the distribution p(x | k,c) can be factorized as

p(x | k,c) = p(x_v | k,c) p(x_h | k,c).  (19)

Such a factorization is typically the case in image generation without lateral connection in the bottom layer of the neural network B. For example the distribution of Equation (4) factorizes as a product over the pixels, where each pixel follows for example a Bernoulli or a Binomial distribution whose parameter is the output of the MLP B:

p(x | B(c, m_k)) = ∏_i p(x_i | B_i(c, m_k)).  (20)

By way of illustration two examples are given:

Black and white image: if we are modelling a vector x of binary data x_i ∈ {0,1}, for example pixels in a black and white image, we map each output unit B_i of the MLP B to the interval [0, 1] by applying the sigmoid function

g(b) := 1 / (1 + exp(−b)),  (21)

and use the result as the parameter q of a Bernoulli distribution with density

Ber(x | q) := q^x (1 − q)^{1−x}  (22)

used to model the corresponding pixel x_i in the image. This leads to the full equation

p(x | B(c, m_k)) = ∏_i g(B_i(c, m_k))^{x_i} (1 − g(B_i(c, m_k)))^{1−x_i}.  (23)

Grayscale image: in another example we model a vector x of values between 0 and some value N, for example grayscale images for which N = 255. We then use the same sigmoid transformation of the output units, and use the result as the parameter of a Binomial distribution with second parameter N,

Bin(x | q, N) := (N choose x) q^x (1 − q)^{N−x},  (24)

leading to the full equation

p(x | B(c, m_k)) = ∏_i Bin(x_i | g(B_i(c, m_k)), N).  (25)

This image completion problem can be written as sampling from the distribution p(x_h | x_v, c) using the learned generative model p(x | c). For a single CMix model computing p(x_h | x_v, c) is straightforward.
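The per-pixel observation models of Equations (20)-(25) can be sketched as follows; the logits standing in for the MLP B outputs, and the pixel values, are illustrative.

```python
import math

# Sketch of the factorized observation models of Eqs. (20)-(25): each output
# unit B_i of MLP B is squashed by a sigmoid g and used as the parameter of a
# per-pixel Bernoulli (black and white) or Binomial (grayscale, N = 255)
# distribution. Log-space products avoid underflow for many pixels.

def g(b):
    # sigmoid of Eq. (21)
    return 1.0 / (1.0 + math.exp(-b))

def log_bernoulli_image(x, b):
    # Eq. (23): product over pixels of q^x (1 - q)^(1 - x), in log space
    return sum(x_i * math.log(g(b_i)) + (1 - x_i) * math.log(1 - g(b_i))
               for x_i, b_i in zip(x, b))

def log_binomial_image(x, b, N=255):
    # Eq. (25): per-pixel Binomial(N, q) with q = g(b_i)
    out = 0.0
    for x_i, b_i in zip(x, b):
        q = g(b_i)
        out += (math.log(math.comb(N, x_i))
                + x_i * math.log(q) + (N - x_i) * math.log(1 - q))
    return out

b = [2.0, -1.0, 0.3]     # illustrative MLP B outputs for a 3-pixel "image"
x_bw = [1, 0, 1]         # black and white pixels
x_gray = [200, 30, 128]  # grayscale pixels in 0..255
```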
From Equation (1) we get

p(x_h | x_v, c) = Σ_k p(x_v, x_h | k, c) p(k | c) / p(x_v | c)  (26)

= Σ_k p(x_h | k, c) p(k | x_v, c),  (27)

where we have used the fact that the different pixels of the image are independent of each other given k, and p(k | x_v, c) is given by

p(k | x_v, c) = p(x_v | k, c) p(k | c) / Σ_j p(x_v | j, c) p(j | c).  (28)

The marginal observation likelihood p(x_v | k, c) may be computed by simply ignoring the factors corresponding to the unobserved pixels in Equation (4). A procedure for sampling from the distribution p(x_h | x_v, c) for a single CMix signal processor is detailed in Algorithm 5, below:

Algorithm 5 Completing missing data with CMix
1: function GENERATECOMPLETION(x_v, c)
2: for k ← 1, K do
3: l_k ← p(x_v | B(c, m_k))
4: end for
5: for k ← 1, K do
6: r_k ← p(k | x_v, c)
7: end for
8: k ← Cat(r)
9: x_h ← sample of p(x_h | k, c) via B(c, m_k)
10: x ← (x_v, x_h)
11: return x, m_k
12: end function

For a CMixChain signal processing system, the procedure is similar (although approximated). In this case, Algorithm 5 may be applied successively from the first level l = 1 to the last level l = L, as detailed below in Algorithm 6:

Algorithm 6 Completing missing data with CMixChain
1: function GENERATECOMPLETIONCHAIN(x_v, c)
2: for l ← 1, L do
3: (x, m) ← CMix_l.GENERATECOMPLETION(x_v, c)
4: c ← (c, m)
5: end for
6: return x
7: end function

4. Classification via learning joint distribution
An alternative form of classifier/imagining system may be implemented by training a CMix processor or chain on examples which include, together in the same example, an example and its label. Thus, for example, an image of an object may include text with the name of the object; or a label may be concatenated with a training example vector.
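A minimal sketch of the single-CMix completion procedure around Equations (26)-(28) and Algorithm 5 follows; the per-category pixel probabilities and the prior stand in for B(c, m_k) and p(k | c) and are invented for illustration.

```python
import random

# Sketch of Algorithm 5: score each category on the visible pixels only
# (ignoring unobserved factors, per Eq. (28)), sample a category from those
# responsibilities, then sample the hidden pixels from that category's
# per-pixel Bernoulli parameters.
random.seed(1)
K = 2
pix = [[0.9, 0.8, 0.1, 0.2],             # p(x_i = 1 | k, c) for category k = 0
       [0.1, 0.2, 0.9, 0.8]]             # ... and for category k = 1
prior = [0.5, 0.5]                       # p(k | c)
x_v = {0: 1, 1: 1}                       # observed pixels: index -> value
hidden = [2, 3]                          # occluded pixel indices

def lik_visible(k):
    # marginal p(x_v | k, c): product over observed pixels only
    out = 1.0
    for i, v in x_v.items():
        out *= pix[k][i] if v == 1 else 1.0 - pix[k][i]
    return out

weights = [prior[k] * lik_visible(k) for k in range(K)]
post = [w / sum(weights) for w in weights]            # p(k | x_v, c), Eq. (28)
k = random.choices(range(K), weights=post)[0]         # k <- Cat(r)
x_h = {i: int(random.random() < pix[k][i]) for i in hidden}
completed = [x_v.get(i, x_h.get(i)) for i in range(4)]
```

Here the visible pixels match category 0, so the posterior concentrates on it and the "imagined" hidden pixels are drawn from that category's model.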
The CMix processor/chain may then be trained on the joint examples and labels, thus learning to recreate the missing part of the example, imagining an object or providing a label classifying an object, in both cases thereby completing an input.
More generally, therefore, when learning labelled data, the CMix processor/chain may be used to process a concatenation of the label and the object as the observation, and learn their joint distribution. Using the completion Algorithms described above then allows both imagination and classification in a unified manner.
Usefully, learning this joint distribution also allows for semi-supervised learning, i.e. learning from datasets where only certain objects are labelled and many others are not.
This facilitates access to very rich sources of training data.
No doubt many other effective alternatives will occur to the skilled person. It will be understood that the invention is not limited to the described embodiments and encompasses modifications apparent to those skilled in the art lying within the spirit and scope of the claims appended hereto.

Claims (39)

  1. CLAIMS: 1. A signal processor, the signal processor comprising: a probability vector generation system, wherein said probability vector generation system has an input to receive a category vector for a category of output example and an output to provide a probability vector for said category of output example, wherein said output example comprises a set of data points, and wherein said probability vector defines a probability of each of said set of data points for said category of output example; a memory storing a plurality of said category vectors, one for each of a plurality of said categories of output example; and a stochastic selector to select a said stored category of output example for presentation of the corresponding category vector to said probability vector generation system; wherein said signal processor is configured to output data for an output example corresponding to said selected stored category.
  2. 2. A signal processor as claimed in claim 1 further comprising a context vector input to receive a context vector, wherein said context vector defines a relative likelihood of each of said plurality of said categories; wherein said context vector input is coupled to said stochastic selector such that said selection of said category of output example is dependent on said context vector.
  3. 3. A signal processor as claimed in claim 2 wherein said probability vector generation system is coupled to said context vector input, and wherein said probability vector is dependent upon said context vector.
  4. 4. A signal processor as claimed in claim 2 or 3 further comprising a mapping unit coupled between said context vector input and said stochastic selector, wherein said context vector has length d_c, wherein K is a number of said categories, and wherein said mapping unit is configured to map said context vector of length d_c to a category probability vector of length K, and wherein said stochastic selector is configured to select said stored category of output example dependent on said category probability vector.
  5. 5. A signal processor as claimed in claim 4 wherein said mapping unit comprises a deterministic neural network.
  6. 6. A signal processor as claimed in any preceding claim wherein said category vector comprises a compressed representation of said probability vector and wherein said probability vector generation system comprises a deterministic neural network.
  7. 7. A signal processor as claimed in claim 6 when dependent on claim 2, wherein said deterministic neural network has a first input comprising said category vector and a second input dependent on said context vector, and an output to provide said probability vector.
  8. 8. A signal processing system comprising a chain of signal processors, a first signal processor as claimed in any preceding claim and each subsequent signal processor as recited in any one of claims 2 to 7 when dependent on claim 2, wherein said output data from one said signal processor provides at least part of said context vector for a next said signal processor in the chain.
  9. 9. A signal processor system as claimed in claim 8 wherein, for each successive said signal processor after said first, said output data from the preceding signal processors in the chain combines to provide said context vector input for the successive signal processor.
  10. 10. A signal processor system as claimed in claim 8 or 9 wherein said output data from a said signal processor comprises said category vector selected by the stochastic selector of the signal processor.
  11. 11. A signal processor system as claimed in claim 8, 9 or 10 wherein said output data from a last said signal processor of the chain comprises said probability vector.
  12. 12. A signal processor system as claimed in claim 11 further comprising an output stochastic selector having an input coupled to receive said probability vector from said last signal processor of said chain and configured to generate and output a said output example comprising a said set of data points having values selected with probabilities defined by said probability vector.
  13. 13. A signal processor or system as recited in any one of claims 6 to 13 when dependent on claim 5, further comprising a training module coupled to said memory, to said probability vector generation system, to said context vector input, and to said context vector mapping unit, and having a training data input to receive training examples, wherein said training module is configured to compute a responsibility value for each said category, dependent on a said training example presented at said training data input and a context vector presented at said context vector input, and to adjust said stored category vectors, weights of said neural network of said probability vector generation system and weights of said neural network of said mapping unit responsive to said computed responsibility values.
  14. 14. A signal processor or system as recited in any preceding claim wherein said stored category vectors, and said probability vectors from said probability vector generation system which depend on said category vectors, comprise a learnt representation of real-world data; and/or embodied as a data generation, classification, completion or search system.
  15. 15. A signal processing system for generating output examples from categories of a plurality of categories, wherein a distribution of training examples across said plurality of categories has been learnt by said signal processing system, the signal processing system comprising: a chain of signal processors, wherein each signal processor of the chain has learnt a distribution of said training examples across a limited number of categories less than said plurality of categories; wherein at least each said signal processor after a first said signal processor in the chain has a context input and is configured to generate an output example from said learnt distribution conditional on said context input; wherein each successive signal processor in said chain receives the output example from the preceding processor in the chain as said context input; wherein a first said signal processor in said chain is configured to stochastically select a said output example according to its learnt distribution; and wherein a last said signal processor in said chain is configured to provide one or both of an output example and a probability distribution for stochastically selecting a said output example.
  16. 16. A signal processing system as claimed in claim 15 wherein each successive signal processor in said chain receives the output examples from all the preceding processors in the chain.
  17. 17. A signal processing system as claimed in claim 15 or 16 wherein said plurality of categories of said chain is defined by a multiplication product of said limited number of categories of each signal processor of said chain.
  18. 18. A signal processing system as claimed in claim 15, 16 or 17 wherein each said signal processor comprises a data compression system to represent said output example in a compressed data format, and wherein each successive signal processor in said chain receives said output example from the preceding processor in said compressed data format.
  19. 19. A computer system programmed to implement the signal processor or system of any one of claims 1 to 18.
  20. 20. Electronic hardware configured to implement the signal processor or system of any one of claims 1 to 18.
  21. 21. A non-transitory data carrier carrying processor control code to implement the signal processor or system of any one of claims 1 to 18.
  22. 22. A method of signal processing to generate data for an output example from a plurality of learnt categories of training examples, the method comprising: storing a plurality of category vectors each defining a learnt category of training example; stochastically selecting a stored said category vector; generating a probability vector, dependent upon said selected category vector; and outputting data for said output example, wherein said output example comprises a set of data points each having a probability defined by a respective component of said probability vector.
  23. 23. A method as claimed in claim 22 further comprising providing said selected category vector to a probability vector generation system to generate said probability vector, wherein said probability vector generation system comprises a data decompression system; and decompressing said selected category vector using said probability vector generation system to generate said probability vector.
  24. 24. A method as claimed in claim 23 further comprising inputting a context vector, wherein said context vector defines a relative likelihood of each of said plurality of said categories; and wherein said selecting of said stored category vector is dependent upon said likelihood of said categories defined by said context vector.
  25. 25. A method as claimed in claim 24 wherein said context vector has length d_c, and wherein K is a number of said categories, where d_c is different to K, the method further comprising mapping said context vector of length d_c to a category probability vector of length K.
  26. 26. A method as claimed in claim 25 wherein said mapping is performed by a first neural network and said decompressing is performed by a second neural network.
  27. 27. A method as claimed in claim 26 further comprising inputting said context vector into said second neural network.
  28. 28. A method as claimed in claim 24, 25, 26 or 27 further comprising chaining said signal processing, wherein said chaining comprises repeating said selecting of said stored category vector in a succession of signal processing stages, the method further comprising using said selected category vector from one of said signal processing stages in the context vector input to the next said signal processing stage.
  29. 29. A method of training a signal processor, signal processing system, or method as recited in any preceding claim, the method comprising: presenting training examples to the signal processor system or method, wherein a said training example comprises a set of data points corresponding to data points of a said output example; computing from a said training example a set of responsibility values, one for each said category, wherein a said responsibility value comprises a probability of the training example belonging to the category, each category having a respective stored category vector; computing a gradient vector for a set of parameters of the signal processor, system or method from said set of responsibility values, wherein said set of parameters includes said stored category vectors and defines a shared mapping between said stored category vectors and a corresponding set of said probability vectors defining probability values for a said set of data points of a said output example; and updating said set of parameters using said computed gradient vector.
  30. 30. A method as claimed in claim 29 when dependent on claim 2, 15 or 24, further comprising presenting a said context vector with said training example, and wherein a said responsibility value is further conditional on said context vector.
  31. 31. A method as claimed in claim 30 when dependent on claim 8, 15 or 28 used for training a chained signal processor/stage, the method further comprising: stochastically selecting a said stored category vector dependent on said set of responsibility values in one said chained signal processor/stage, and using said stochastically selected category vector as at least part of said context vector presented to a next said chained signal processor/stage.
  32. 32. A non-transitory data carrier carrying processor control code to implement the method of any one of claims 22 to 31.
  33. 33. A signal processor to generate data for an output example from a plurality of learnt categories of training examples, the signal processor comprising: a memory for storing a plurality of category vectors each defining a learnt category of training example; a system to stochastically select a stored said category vector; a system to generate a probability vector dependent on said selected category vector; and an output to provide an output example which comprises a set of data points each having a probability defined by a respective component of said probability vector.
  34. 34. A signal processor, method or system as claimed in any preceding claim wherein said data comprises one or more of: image data, sound data, signal data, sensor data, actuator data, spatial data, and text data.
  35. 35. A neural network architecture, the architecture comprising: a first, input layer of stochastic nodes; a second, output layer of stochastic nodes; and a deterministic neural network connected between said input and output layer nodes.
  36. 36. A neural network architecture as claimed in claim 35 wherein a said stochastic node has one or more inputs and one or more outputs, and wherein an output value from a said output has a value dependent on a probability distribution determined by input values on said one or more inputs.
  37. 37. A neural network architecture as claimed in claim 35 or 36 wherein said deterministic second network comprises a multilayer perceptron having at least one layer with a greater number of nodes than a number of stochastic nodes in said input layer.
  38. 38. A modified Helmholtz machine comprising the neural network architecture of claim 35, 36 or 37.
  39. 39. The architecture/machine of any one of claims 35 to 38 embodied in: software on a non-transitory data carrier, a programmed computer system, programmed memory, or electronic hardware/circuitry.
GB1304795.6A 2013-03-15 2013-03-15 Signal processing systems Withdrawn GB2513105A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
GB1304795.6A GB2513105A (en) 2013-03-15 2013-03-15 Signal processing systems
US13/925,637 US9342781B2 (en) 2013-03-15 2013-06-24 Signal processing systems
PCT/GB2014/050695 WO2014140541A2 (en) 2013-03-15 2014-03-10 Signal processing systems
CN201480016209.5A CN105144203B (en) 2013-03-15 2014-03-10 Signal processing system
EP14715977.6A EP2973241B1 (en) 2013-03-15 2014-03-10 Signal processing systems

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1304795.6A GB2513105A (en) 2013-03-15 2013-03-15 Signal processing systems

Publications (2)

Publication Number Publication Date
GB201304795D0 GB201304795D0 (en) 2013-05-01
GB2513105A true GB2513105A (en) 2014-10-22

Family

ID=48226490

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1304795.6A Withdrawn GB2513105A (en) 2013-03-15 2013-03-15 Signal processing systems

Country Status (5)

Country Link
US (1) US9342781B2 (en)
EP (1) EP2973241B1 (en)
CN (1) CN105144203B (en)
GB (1) GB2513105A (en)
WO (1) WO2014140541A2 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102366783B1 (en) * 2014-07-07 2022-02-24 광주과학기술원 Neuromorphic system operating method therefor
WO2017122785A1 (en) * 2016-01-15 2017-07-20 Preferred Networks, Inc. Systems and methods for multimodal generative machine learning
US9802599B2 (en) * 2016-03-08 2017-10-31 Ford Global Technologies, Llc Vehicle lane placement
WO2017177446A1 (en) * 2016-04-15 2017-10-19 北京中科寒武纪科技有限公司 Discrete data representation-supporting apparatus and method for back-training of artificial neural network
WO2017185248A1 (en) * 2016-04-27 2017-11-02 北京中科寒武纪科技有限公司 Apparatus and method for performing auto-learning operation of artificial neural network
EP3459017B1 (en) * 2016-05-20 2023-11-01 Deepmind Technologies Limited Progressive neural networks
US9779355B1 (en) 2016-09-15 2017-10-03 International Business Machines Corporation Back propagation gates and storage capacitor for neural networks
WO2018085697A1 (en) * 2016-11-04 2018-05-11 Google Llc Training neural networks using a variational information bottleneck
EP3566182A1 (en) * 2017-02-06 2019-11-13 Deepmind Technologies Limited Memory augmented generative temporal models
CN107291690B (en) * 2017-05-26 2020-10-27 Beijing Sogou Technology Development Co., Ltd. Method and device for adding punctuation
KR102410820B1 (en) * 2017-08-14 2022-06-20 Samsung Electronics Co., Ltd. Method and apparatus for recognition based on a neural network and for training the neural network
KR102387305B1 (en) 2017-11-17 2022-04-29 Samsung Electronics Co., Ltd. Method and device for learning multimodal data
WO2019098644A1 (en) * 2017-11-17 2019-05-23 Samsung Electronics Co., Ltd. Multimodal data learning method and device
CN110110853B (en) * 2018-02-01 2021-07-30 Xilinx Electronic Technology (Beijing) Co., Ltd. Deep neural network compression method and apparatus, and computer-readable medium
CN108388446A (en) * 2018-02-05 2018-08-10 Shanghai Cambricon Information Technology Co., Ltd. Computing module and method
JP6601644B1 (en) * 2018-08-03 2019-11-06 Linne Co., Ltd. Image information display device
JP7063230B2 (en) * 2018-10-25 2022-05-09 Toyota Motor Corporation Communication device and control program for communication device
EP3857324B1 (en) * 2018-10-29 2022-09-14 Siemens Aktiengesellschaft Dynamically refining markers in an autonomous world model
US20200293894A1 (en) * 2019-03-12 2020-09-17 Samsung Electronics Co., Ltd. Multiple-input multiple-output (mimo) detector selection using neural network
CN111127179B (en) * 2019-12-12 2023-08-29 Enyike (Beijing) Data Technology Co., Ltd. Information pushing method and device, computer equipment, and storage medium
US11823060B2 (en) * 2020-04-29 2023-11-21 HCL America, Inc. Method and system for performing deterministic data processing through artificial intelligence
US20210374524A1 (en) * 2020-05-31 2021-12-02 Salesforce.Com, Inc. Systems and Methods for Out-of-Distribution Detection
US11868428B2 (en) * 2020-07-21 2024-01-09 Samsung Electronics Co., Ltd. Apparatus and method with compressed neural network computation
EP3975038A1 (en) * 2020-09-29 2022-03-30 Robert Bosch GmbH An image generation model based on log-likelihood
CN112348158B (en) * 2020-11-04 2024-02-13 重庆大学 Industrial equipment state evaluation method based on multi-parameter deep distribution learning
US20230073226A1 (en) * 2021-09-09 2023-03-09 Yahoo Assets Llc System and method for bounding means of discrete-valued distributions

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1638042A2 (en) * 2005-09-14 2006-03-22 Neal E. Solomon Mobile hybrid software router
US20130325770A1 (en) * 2012-06-05 2013-12-05 Sap Ag Probabilistic language model in contextual network

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5093899A (en) 1988-09-17 1992-03-03 Sony Corporation Neural network with normalized learning constant for high-speed stable learning
US7424270B2 (en) * 2002-09-25 2008-09-09 Qualcomm Incorporated Feedback decoding techniques in a wireless communications system
US8762358B2 (en) * 2006-04-19 2014-06-24 Google Inc. Query language determination using query terms and interface language
US20100169328A1 (en) * 2008-12-31 2010-07-01 Strands, Inc. Systems and methods for making recommendations using model-based collaborative filtering with user communities and items collections
US9031844B2 (en) 2010-09-21 2015-05-12 Microsoft Technology Licensing, Llc Full-sequence training of deep structures for speech recognition
US9287713B2 (en) * 2011-08-04 2016-03-15 Siemens Aktiengesellschaft Topology identification in distribution network with limited measurements
US10453479B2 (en) * 2011-09-23 2019-10-22 Lessac Technologies, Inc. Methods for aligning expressive speech utterances with text and systems therefor

Also Published As

Publication number Publication date
WO2014140541A2 (en) 2014-09-18
US9342781B2 (en) 2016-05-17
CN105144203B (en) 2018-09-07
GB201304795D0 (en) 2013-05-01
EP2973241A2 (en) 2016-01-20
WO2014140541A3 (en) 2015-03-19
EP2973241B1 (en) 2020-10-21
CN105144203A (en) 2015-12-09
US20140279777A1 (en) 2014-09-18

Similar Documents

Publication Publication Date Title
EP2973241B1 (en) Signal processing systems
Li et al. Deep learning for case-based reasoning through prototypes: A neural network that explains its predictions
Titsias et al. Spike and slab variational inference for multi-task and multiple kernel learning
Ristovski et al. Continuous conditional random fields for efficient regression in large fully connected graphs
Simeone Machine learning for engineers
Merchan et al. On the sufficiency of pairwise interactions in maximum entropy models of networks
Leibfried et al. A tutorial on sparse Gaussian processes and variational inference
Alom et al. Object recognition using cellular simultaneous recurrent networks and convolutional neural network
Srinivasan et al. Learning and inference in Hilbert space with quantum graphical models
Daube et al. Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity
Shrivastava et al. Multiple kernel-based dictionary learning for weakly supervised classification
Lange et al. Non-Euclidean principal component analysis by Hebbian learning
Amiridi et al. Low-rank characteristic tensor density estimation part II: Compression and latent density estimation
US11157793B2 (en) Method and system for query training
Valls et al. Supervised data transformation and dimensionality reduction with a 3-layer multi-layer perceptron for classification problems
Millea Explorations in echo state networks
Shalova et al. Deep Representation Learning for Dynamical Systems Modeling
Riou-Durand et al. Noise contrastive estimation: asymptotics, comparison with MC-MLE
Swaney et al. Efficient skin segmentation via neural networks: HP-ELM and BD-SOM
Górriz et al. Optimizing blind source separation with guided genetic algorithms
Koyuncu et al. Variational mixture of hypergenerators for learning distributions over functions
Schofield et al. A Genetic Programming Encoder for Increasing Autoencoder Interpretability
Daube et al. Deep neural network explains human visual categorisation using similar functional features
Hollósi et al. Training capsule networks with various parameters
Górriz et al. Hybridizing genetic algorithms with ICA in higher dimension

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)