WO2016134183A1 - Systems and methods for neural language modeling - Google Patents
Systems and methods for neural language modeling
- Publication number
- WO2016134183A1 (PCT/US2016/018536)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- linguistic
- embedding
- partition
- focus term
- neural
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/098—Distributed learning, e.g. federated learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/09—Supervised learning
Definitions
- NLP: Natural Language Processing
- Some NLP systems may encounter difficulty due to the complexity and sparsity of information in natural language.
- Neural network language models (NNLMs) may overcome performance limitations of traditional systems.
- an NNLM may learn distributed representations for words, and may embed a vocabulary into a smaller dimensional linear space that models a probability function for word sequences, expressed in terms of these representations.
- NNLMs may generate word embeddings by training a symbol prediction task over a moving local-context window.
- the ordered set of weights associated with each word becomes that word's dense vector embedding.
- the result is a vector space model that encodes semantic and syntactic relationships.
- an NNLM can predict a word given its surrounding context. These distributed representations encode shades of meaning across their dimensions, allowing two words to have multiple, real-valued relationships encoded in a single representation. This feature flows from the distributional hypothesis: words that appear in similar contexts have similar meaning. Words that appear in similar contexts will experience similar training examples and training outcomes, and will converge to similar weights.
- word embeddings based on word analogies can allow vector operations between words that mirror their semantic and syntactic relationships.
- a LM is provided that can take into account word morphology and shape.
- an NNLM is provided that can be trained by multiple systems working in parallel.
- an NNLM is provided that can be used to process analogy queries.
- word embeddings learned via neural language models can share resources between multiple languages.
- FIG. 1 depicts a neural node in accordance with one embodiment.
- FIG. 2 depicts a neural network in accordance with one embodiment.
- FIG. 3 depicts a neural network language model without partitioning, using a continuous bag of words architecture.
- FIG.4 depicts a windowed partitioned neural network language model in accordance with one embodiment.
- FIG. 5 depicts a directional partitioned neural network language model in accordance with one embodiment.
- FIG. 6 depicts the relative accuracy of each partition in a PE model as judged by row-relative word analogy scores.
- FIG. 7 is a computing architecture diagram of a computing system capable of implementing aspects of the present disclosure in accordance with one or more embodiments.
- Ranges may be expressed herein as from “about” or “approximately” or “substantially” one particular value and/or to "about” or “approximately” or “substantially” another particular value. When such a range is expressed, other exemplary embodiments include from the one particular value and/or to the other particular value.
- a neural network can comprise a plurality of layers of neural nodes (i.e., "neurons").
- a neural network can comprise an input layer, a hidden layer, and an output layer.
- the neural networks in accordance with aspects of the present disclosure may be computer-implemented.
- the plurality of layers and nodes may reside in executable program modules (e.g., program modules 714 in FIG. 7) or other software constructs, or in dedicated programmed hardware components.
- the layers and nodes, and other functional components described herein in accordance with various embodiments for performing aspects of neural language modeling may be stored in memory devices (e.g., memory 704 or mass storage 712) and executable by processors (e.g. processing unit 702) of one or more computers, such as the computer 700 shown in FIG. 7.
- the analysis, data processing and other functions associated with operating the layers and nodes and performing the various neural language modeling functions described herein may be caused by the execution of instructions by one or more such processors.
- Training functions such as model training processes as described herein, may be performed in conjunction with interactions of one or more users with one or more computers, such as the computer 700 of FIG. 7, and may be operated and configured such that trainable models can be improved based on the interaction of the users with training data and prior models, and can be implemented on various data in accordance with machine learning that may be supervised and/or autonomous.
- FIG. 1 depicts a neural node 100 in accordance with an embodiment.
- each node 100 can have a plurality of inputs 110, each having a weight 120, and a single output 130.
- the output of the $k$-th node is calculated as $y_k = \varphi\left(\sum_{j=1}^{m} w_{kj} x_j\right)$, the weighted sum of its inputs transformed by a transfer function.
- transfer function $\varphi$ can be a sigmoid function, represented by $\varphi(v) = \frac{1}{1 + e^{-v}}$.
- each input can accept a value between zero and one, although other values can be used.
- the inputs can be represented by input vector $x = [x_1 \; x_2 \; \ldots \; x_m]$.
- FIG. 2 depicts a generalized neural network in accordance with an embodiment.
- This network comprises an input layer 210 with three nodes, a hidden layer 220 with four nodes, and an output layer 230 with three nodes.
- the nodes in the input layer 210 output a value, but do not themselves calculate a value.
- the output may be a number between zero and one.
- Each node in the hidden layer 220 receives outputs from each node of the input layer 210 and computes $y_k = \varphi\left(\sum_{i=1}^{m} w_{ki} x_i\right)$.
- each node in the output layer 230 applies the same equation to the outputs of the hidden layer, which corresponds to the output of the network.
- This depiction of a neural network is intended to assist with understanding neural networks and does not limit the disclosed technology. That is, a neural network may consist of hundreds, thousands, millions, or more nodes in each of the input, hidden, and output layers. Further, neural networks may have a single hidden layer (as depicted), or may have multiple hidden layers.
- a neural network in accordance with some embodiments can be used for analyzing an ordered list of linguistic units.
- a linguistic unit as defined herein can refer to a phrase, word, letter, or other character or characters used in language.
- the neural network is configured to take the ordered list of linguistic units, with a linguistic unit omitted, and predict the omitted linguistic unit. This omitted linguistic unit is referred to as a "focus term."
- for example, FIG. 3 depicts a neural network analyzing the phrase "SEE SPOT RUN."
- the input nodes for "SEE” and “RUN” are activated, to predict the focus term “SPOT.”
- the neural network then appropriately predicts that the missing word is "SPOT" by returning a value of 100% (or the maximum output value) at the "SPOT" output node.
- the neural network 200 has an input layer 210 of nodes that encodes this input using "one-hot" encoding. That is, one input node exists for each linguistic unit in a dictionary of linguistic units that could be used.
- the dictionary need not be a comprehensive dictionary, such as a complete English dictionary, but can be a subset of characters, words, or phrases used in language. For example, if the ordered list of linguistic units is a word in the English language, the dictionary may be the list of letters A-Z, all capital (A-Z) and lowercase (a-z) letters plus punctuation marks, or some subset thereof. If the ordered list of linguistic units is an English phrase, the dictionary may include all, or some, words in the English language.
- dictionaries may further include compound terms that comprise more than one word, such as "San Francisco", "hot dog", etc.
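For illustration, the following is a minimal sketch of the one-hot encoding described above; the small dictionary is a hypothetical example rather than one taken from the disclosure.

```python
import numpy as np

# hypothetical dictionary of linguistic units (here, words and compound terms)
dictionary = ["SEE", "SPOT", "RUN", "FAST", "hot dog", "San Francisco"]
index = {unit: i for i, unit in enumerate(dictionary)}

def one_hot(unit):
    """One input node exists for each unit in the dictionary;
    only the node for the given unit is set to 1."""
    v = np.zeros(len(dictionary))
    v[index[unit]] = 1.0
    return v

print(one_hot("RUN"))   # [0. 0. 1. 0. 0. 0.]
```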
- the neural network has a layer of hidden nodes 220.
- this layer of hidden nodes 220 may be divided into partitions.
- when a layer of hidden nodes is divided into partitions, the network is referred to as a partitioned embedding neural network (PENN).
- in some embodiments, each partition relates to a position, or window, in the phrase (one word before the focus term, one word after the focus term, etc.). This can be referred to as windowed embedding.
- the network is shown here analyzing the phrase "SEE SPOT RUN," where the focus term is "SPOT."
- each partition may relate to a direction in the phrase (all words before the focus term, all words following the focus term, etc.). This can be referred to as directional embedding.
- the approaches can be combined in numerous permutations, such as an embodiment having a partition for the linguistic unit two windows before the focus term, and a partition for all linguistic units following the focus term, and other permutations.
- FIG. 5 depicts a directional partitioned embodiment having one partition for all words prior to the focus term (P > 0), and one partition for all words following the focus term (P < 0).
- the neural network is analyzing the phrase "SEE SPOT RUN FAST" where the focus term is "SPOT." As shown, the word "SEE" is input to the P > 0 partition, and the words "FAST" and "RUN" are input to the P < 0 partition. The neural network then returns the focus term "SPOT."
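The sketch below illustrates how context words could be routed to partitions under the windowed and directional strategies just described; the function names and data structures are illustrative assumptions, not part of the disclosure.

```python
def windowed_partitions(tokens, focus_index, window):
    """Map each window position (-1 = one word before the focus term,
    +1 = one word after, etc.) to the context word routed to that partition."""
    parts = {}
    for offset in range(-window, window + 1):
        pos = focus_index + offset
        if offset != 0 and 0 <= pos < len(tokens):
            parts[offset] = tokens[pos]
    return parts

def directional_partitions(tokens, focus_index):
    """Two partitions: all words before the focus term and all words after it."""
    return {"before": tokens[:focus_index], "after": tokens[focus_index + 1:]}

tokens = ["SEE", "SPOT", "RUN", "FAST"]          # focus term "SPOT" at index 1
print(windowed_partitions(tokens, 1, window=1))  # {-1: 'SEE', 1: 'RUN'}
print(directional_partitions(tokens, 1))         # {'before': ['SEE'], 'after': ['RUN', 'FAST']}
```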
- when partitions are used, in some embodiments, a separate set of input nodes is used for each partition.
- the set of input nodes may be modeled as specialized nodes capable of outputting a separate output to each partition. Both are mathematically equivalent.
- the present description refers to sets of input nodes for each separate partition, but in each instance where separate input nodes are used for each partition, a single set of input nodes feeding forward different values to different partitions could be used.
- each node in each partition of the hidden layer will have a set of weights associated with it, represented by a vector x.
- vector x will have as many elements as input nodes associated with the partition.
- some hidden layer nodes in a given partition may feed forward from fewer than all input nodes associated with that partition. This can be modeled either as a vector x having fewer elements than the number of input nodes associated with that partition, or a vector of equivalent length with the omitted term weights set to zero.
- Each hidden node may further have a bias term: a weight that is either not multiplied by any input, or multiplied by a constant input of 1, and added to the result.
- the weights for all hidden nodes in a partition of the hidden layer can be represented by a matrix formed by concatenating the vectors for each node in the hidden layer of nodes.
- This matrix can be referred to as a "synapse" matrix. Because it is the first set of connections from the input layer, it is referred to as synapse 0 (syn0). Further, this matrix can be referred to as an "embedding matrix."
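A minimal sketch of the per-partition syn0 ("embedding") matrix described above. It assumes one-hot inputs and a linear lookup for the hidden layer, as in word2vec-style models, so propagating a word through a partition reduces to selecting one row of that partition's matrix; all dimensions and values are illustrative.

```python
import numpy as np

vocab = {"SEE": 0, "SPOT": 1, "RUN": 2, "FAST": 3}
hidden_size = 8                                   # nodes per partition (illustrative)
rng = np.random.default_rng(0)

# one syn0 (embedding) matrix per partition, e.g. window positions -1 and +1
syn0 = {offset: rng.normal(size=(len(vocab), hidden_size)) for offset in (-1, 1)}

def partition_output(offset, word):
    # with a one-hot input, a partition's hidden activation is simply the row
    # of its embedding matrix corresponding to the input word
    return syn0[offset][vocab[word]]

hidden = np.concatenate([partition_output(-1, "SEE"), partition_output(1, "RUN")])
print(hidden.shape)   # (16,) -- concatenated partition outputs fed to the next layer
```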
- in some embodiments, there is only a single hidden layer, although in other embodiments there may be two or more hidden layers. In embodiments having multiple hidden layers, a layer may have the same number of partitions as the previous layer, more, or fewer. In some embodiments, an additional hidden layer may not have partitions at all.
- the last hidden layer is followed by an output layer.
- the output layer can have one output neuron associated with each linguistic unit in a dictionary of linguistic units.
- in some embodiments, there are fewer sets of output nodes than partitions in the previous hidden layer, in which case a set of output nodes takes as inputs the outputs of more than one partition in the last hidden layer of nodes.
- in some embodiments, there is a single set of output nodes, each node receiving as input the output of all nodes in the previous layer.
- a neural network of the present disclosure can be trained using a continuous list of words (CLOW) training style.
- the output layer is configured as a single set (or single partition) of nodes receiving input from each node in the previous hidden layer.
- the neural network is then trained on a linguistic corpus, which is one or more sequences of a plurality of linguistic units.
- One or more training examples are run, where for each training example, the words surrounding the focus term are input into their respective input nodes corresponding to the appropriate partitions. For example, if the linguistic corpus includes the phrase "SEE SPOT RUN FAST, " and "SPOT" is the focus term, the input nodes associated with the position one-ahead of the focus term would be activated according to the term "SEE", and the position one-behind the focus term would be activated according to the term "RUN".
- Those inputs are propagated through the neural network to produce an output.
- the output of the neural network, which correlates with the percent chance that each output word is the focus term, is then compared to a preferred output where the focus term ("SPOT") is 100% (or a corresponding maximum output value) and the output for all other linguistic units is 0% (or a corresponding minimum output value).
- the actual output is compared to the preferred output, and then back-propagated through the neural network to update the synapse matrices according to one or more known back-propagation algorithms. This process can be repeated on additional sequences in the linguistic corpus, until the network is sufficiently accurate.
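A condensed sketch of a single CLOW training example as described above, assuming a softmax output layer, a linear lookup hidden layer, and plain gradient-descent updates; it illustrates the forward and backward flow rather than the exact update rule of the disclosure, and all sizes and values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"SEE": 0, "SPOT": 1, "RUN": 2, "FAST": 3}
V, H = len(vocab), 8
offsets = (-1, 1)                                              # windowed partitions
syn0 = {o: rng.normal(0, 0.1, size=(V, H)) for o in offsets}   # per-partition embeddings
syn1 = rng.normal(0, 0.1, size=(V, H * len(offsets)))          # single output partition

def train_example(context, focus, lr=0.05):
    # forward pass: concatenate each partition's row for its context word
    hidden = np.concatenate([syn0[o][vocab[w]] for o, w in context.items()])
    scores = syn1 @ hidden
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                           # predicted distribution over the vocabulary
    target = np.zeros(V)
    target[vocab[focus]] = 1.0                     # preferred output: focus term at 100%
    err = probs - target                           # error to back-propagate
    grad_hidden = syn1.T @ err
    syn1[:, :] -= lr * np.outer(err, hidden)       # update output weights
    for i, (o, w) in enumerate(context.items()):   # update each partition's embedding row
        syn0[o][vocab[w]] -= lr * grad_hidden[i * H:(i + 1) * H]

train_example({-1: "SEE", 1: "RUN"}, focus="SPOT")
```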
- the directional CLOW implementation with very small window sizes can be used to achieve acceptable performance.
- Directional CLOW is able to achieve a parity score using a window size of 1, contrasted with word2vec using a window size of 10, when all other parameters are equal.
- this optimization can reduce the number of training examples and overall training time by a factor of 10.
- a neural network of the present disclosure can be trained using a "skip-gram" training style.
- the skip-gram training style optimizes an objective function of the form $\arg\max_{\theta} \sum_{w \in \text{corpus}} \sum_{j \in \text{window},\, j \neq 0} \log p\left(w \mid c_j\right)$
- $c_j$ is the location specific representation (partition $j$) for the word at window position $j$ relative to the focus word $w$.
- a network is configured having separate output partitions for each hidden layer partition.
- the neural network would consist of two neural networks, one modeling the probability that the focus term is a given value based on the word in the position one-ahead of the focus term, and one modeling the probability that the focus term is a given value based on the word in the position one after the focus term.
- if the linguistic corpus includes the phrase "SEE SPOT RUN FAST," and "SPOT" is the focus term, one partition will receive as input "SEE", and one partition will receive as input "RUN." The corresponding output nodes will then generate an output.
- This actual output is compared with a preferred output where the focus term ("SPOT") is 100% (or a corresponding maximum output value) and the output for all other linguistic units is 0% (or a corresponding minimum output value).
- the actual output is compared to the preferred output, and then back-propagated through the neural network to update the synapse matrices according to any back-propagation algorithm, as is known in the art of neural networks. This process can be repeated on additional sequences in the linguistic corpus, until the network is sufficiently accurate.
- the outputs of the various output partitions can be added together, or otherwise combined to produce a final probability of the focus term.
- the skip-gram training style can be used to train a neural network in parallel. That is, each separate partition of hidden nodes and partition of output nodes can be trained independently of the other partitions of hidden nodes and output nodes. Thus, those embodiments could be trained using multiple threads on a single computer processor, or on multiple computers operating in parallel.
- each classifier partition and its associated embedding partitions can be trained in full parallel and reach the same state as if they were not distributed.
- the training can be accomplished by multiple computers with no inter-communication at all.
- the windowed embedding configuration can train all window positions in full parallel and concatenate embeddings and classifiers at the end of training. Given machine $j$, an objective function of the following form is optimized: $\arg\max_{\theta_j} \sum_{w \in \text{corpus}} \log p\left(w \mid c_j\right)$
- $c_j$ is the location specific representation (partition $j$) for the word at window position $j$ relative to the focus word $w$.
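The sketch below illustrates this full-parallel arrangement: each window-position partition (embedding plus classifier) is trained with no communication with the others, here on separate processes, and the embeddings are concatenated at the end of training. The toy corpus, softmax classifier, and multiprocessing setup are illustrative assumptions, not details from the disclosure.

```python
import numpy as np
from multiprocessing import Pool

corpus = ["SEE SPOT RUN FAST".split()] * 100      # toy corpus (illustrative)
vocab = {"SEE": 0, "SPOT": 1, "RUN": 2, "FAST": 3}
V, H = len(vocab), 8

def train_partition(offset, epochs=5, lr=0.05):
    """Train one partition j in isolation: predict the focus word from the
    single context word at window position `offset`."""
    rng = np.random.default_rng(abs(offset))
    emb = rng.normal(0, 0.1, size=(V, H))         # embedding partition
    clf = rng.normal(0, 0.1, size=(V, H))         # classifier partition
    for _ in range(epochs):
        for sent in corpus:
            for t, focus in enumerate(sent):
                pos = t + offset
                if not (0 <= pos < len(sent)):
                    continue
                h = emb[vocab[sent[pos]]].copy()
                p = np.exp(clf @ h)
                p /= p.sum()
                p[vocab[focus]] -= 1.0            # softmax gradient w.r.t. the scores
                emb[vocab[sent[pos]]] -= lr * (clf.T @ p)
                clf -= lr * np.outer(p, h)
    return offset, emb

if __name__ == "__main__":
    with Pool() as pool:                          # one worker per partition / machine
        results = dict(pool.starmap(train_partition, [(-1,), (1,)]))
    combined = np.concatenate([results[o] for o in (-1, 1)], axis=1)
    print(combined.shape)                         # (4, 16): concatenated word embeddings
```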
- syntactic tasks can be performed by a process called dense interpolation embedding (DIEM).
- DIEM generates characteristic vectors for larger, compound linguistic units (such as words) from the embeddings of smaller linguistic units (such as letters).
- word or phrase structure correlates more with syntax than with semantic meaning.
- a characteristic vector for a larger linguistic unit can then be calculated by interpolating the embeddings for the smaller linguistic units over the larger linguistic unit.
- the final embedding size can be selected as a multiple M of the character embedding dimensionality C, such that the final embedding contains M buckets of length C.
- every character's embedding is summed into each bucket according to a weighted average. This weighted average is determined by the variable d, which compares the squared percent distance between the character's position i in the word of length l and the bucket's position m in the sequence of M buckets. This is computed such that the far-left character is at an approximate distance of 0% from the far-left bucket, and the far-right character is at an approximate distance of 100% from the far-right bucket.
- a dense interpolation embedding method can be accomplished more efficiently by caching a set of transformation matrices, which hold precomputed values of d for words of varying length. These matrices can be used to transform variable-length concatenated character vectors into fixed-length word embeddings via vector-matrix multiplication.
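A minimal sketch of the dense interpolation step described above. The exact weighting formula is not reproduced in this text, so the sketch assumes a weight of (1 - |i/l - m/M|)^2 for character position i and bucket m, which matches the description that the far-left character is roughly 0% distant from the far-left bucket and roughly 100% from the far-right one; the character embeddings are hypothetical.

```python
import numpy as np

def diem_embedding(word, char_vectors, M):
    """Interpolate character embeddings of length C into M buckets,
    producing a fixed-length (M * C) word embedding."""
    C = len(next(iter(char_vectors.values())))
    l = len(word)
    buckets = np.zeros((M, C))
    for m in range(M):
        for i, ch in enumerate(word):
            d = 1.0 - abs(i / l - m / M)          # positional similarity in [0, 1] (assumed form)
            buckets[m] += (d ** 2) * char_vectors[ch]
    return buckets.reshape(M * C)

# hypothetical character embeddings of dimensionality C = 4
rng = np.random.default_rng(0)
chars = {c: rng.normal(size=4) for c in "abcdefghijklmnopqrstuvwxyz"}
print(diem_embedding("running", chars, M=5).shape)   # (20,)
```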
- Syntactic vectors in accordance with some embodiments also provide significant scaling and generalization advantages over semantic vectors. For example, new syntactic vectors may be easily generated for words never before seen, giving loss-less generalization to any word from initial character training, assuming only that the word is made up of characters that have been seen. Syntactic embeddings can be generated in a fully distributed fashion and only require a small vector concatenation and vector-matrix multiplication per word. Further, in some embodiments, character vectors (typically length 32) and transformation matrices (at most 20 or so of them) can be stored very efficiently relative to the semantic vocabularies, which can be several million vectors of dimensionality 1000 or more. In some embodiments, DIEM optimally performs using 6+ orders of magnitude less storage space, and 5+ orders of magnitude fewer training examples than word-level semantic embeddings.
- the PE framework models the probability of a word occurring given the words surrounding it. Changing the partitioning strategy modifies this probability distribution by capturing (or ignoring) various synergies. Modeling every permutation of window position around a focus word approaches the full distribution, or the full conditional probability of the focus word occurring given the position and value of each and all of the words in the context surrounding it.
- Modeling every distribution allows every potential synergy to be captured in the combined embedding.
- the number of permutations possible in this partitioning strategy can easily exceed several hundred given a window size of 10 or more with each trained model containing billions of parameters.
- similar word partitioning strategies yield similar word embeddings and by extension, similar word-analogy quality in each sub-task.
- some embodiments can approximate the varying perspectives generated using the full co-occurrence distribution with fewer trained models by training sufficiently different models and concatenating them.
- Such a method uses different probability distributions to generate word embeddings modeling different perspectives on each word in the vocabulary.
- Character-derived vectors also embed a unique perspective by capturing information from the characters of a word instead of the contexts in which a word occurs. Because the embedding represents syntactic relationships between words from a unique perspective, the concatenation method in accordance with embodiments described above may be generalized to include the DIEM vectors.
- a related aspect of neural networks in accordance with some embodiments is that the embedding matrix syn0 can be used for analogy tasks.
- while each row may be viewed as a set of input weights within the network, each row also represents a specific word's characteristic "embedding" in the neural network.
- a word's embedding can be a vector corresponding to all or a subset of the synapse weights for a single partition, or all or a subset of all partitions, or variations thereof.
- characteristic word embeddings can be used to perform analogy tasks.
- the word embeddings learned via neural language models can share resources between multiple languages. By pre-training over both languages, semantic relationships between languages can be encoded into the embeddings.
- simple vector arithmetic with stop words between English and Spanish can approximate machine translation, for example: vector("film") - vector("is") + vector("es") ≈ vector("pelicula").
- This vector operation subtracts an English stop word ("is”) from an English noun and adds the equivalent Spanish stop word ("es") to the embedding.
- the equation returns the Spanish word for film ("pelicula").
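A sketch of the stop-word arithmetic described above, assuming that embeddings pre-trained over both languages are available in a single word-to-vector mapping; the `embeddings` dictionary and the brute-force nearest-neighbour search are illustrative.

```python
import numpy as np

def nearest(vec, embeddings, exclude=()):
    """Return the word whose embedding has the highest cosine similarity to `vec`,
    skipping the words used to form the query."""
    best, best_sim = None, -np.inf
    for word, v in embeddings.items():
        if word in exclude:
            continue
        sim = np.dot(vec, v) / (np.linalg.norm(vec) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

def translate_by_analogy(noun, src_stop, tgt_stop, embeddings):
    # e.g. vector("film") - vector("is") + vector("es") is expected to land near "pelicula"
    query = embeddings[noun] - embeddings[src_stop] + embeddings[tgt_stop]
    return nearest(query, embeddings, exclude={noun, src_stop, tgt_stop})

# embeddings = {...}  # hypothetical cross-language word vectors
# print(translate_by_analogy("film", "is", "es", embeddings))   # expected: "pelicula"
```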
- this property of the embeddings enables a sentiment model trained in one language to predict on another.
- a sentiment model trained in one language will predict better on another language where there is a shared vocabulary between the languages.
- the neural language models can construct a hidden layer using neurons sampled according to vocabulary frequencies. In this way, the hidden layer construction for each language will be similar despite having completely different input layer representations. The consistency in the hidden layer can enable the alignment of semantic concepts in vector space. This alignment can enable sentiment prediction across languages.
- the present inventors conducted experiments on word-analogy tasks made up of a variety of word similarity tasks. Specifically, the Google Analogy Dataset was used, which contains 19,544 questions, split into semantic and syntactic sections. Both sections are further divided into subcategories based on analogy type. Each analogy task is phrased as a question "A is to B as C is to ?”.
- FIG. 6 displays the relative accuracy of each partition in a PENN model as judged by row-relative word-analogy scores.
- Table A shows the performance of the default CBOW implementation of word2vec relative to the directional configuration. Most prominently, PENN outperforms the word2vec implementation using only a window size of 1, whereas word2vec was parameterized with the default of 10. Furthermore, it can be seen that increasing the dimensionality of baseline CBOW word2vec past 500 achieves suboptimal performance. Thus, a fair comparison of two models should be between optimal (as opposed to equal) parameterization for each model. This is especially important given that PENN models are modeling a much richer probability distribution, given that order is being preserved. Thus, optimal parameter settings often require larger dimensionality.
- because each embedding is attempting to model a very complex (hundreds of thousands of words) probability distribution, the partition size in each partition must remain high enough to model this distribution.
- modeling large windows for semantic embeddings is optimal when using either the directional embedding model, which has a fixed partition size of 2, or a large global vector size.
- the directional model with optimal parameters has slightly less quality than the windowed model with optimal parameters due to the vector averaging occurring in each window pane.
- Table B documents the change in syntactic analogy query quality as a result of the interpolated DIEM vectors.
- each analogy query was first performed by running the query on CLOW and DIEM independently, and selecting the top thousand CLOW cosine similarities.
- the present inventors summed the squared cosine similarity of each of these top thousand with each associated cosine similarity returned by the DIEM, and resorted. This was found to be an efficient estimation of concatenation that did not reduce quality.
- Final results show a lift in quality and size over previous models with a 40% syntactic lift over the best published result.
- Within PE models there exists a speed vs. performance tradeoff between SG-DIEM and CLOW-DIEM.
- CLOW is the slower model and achieves a higher score.
- a 160 billion parameter network was also trained overnight on 3 multi-core CPUs; however, it yielded 20,000-dimensional vectors for each word and subsequently overfit the training data. This is because a dataset of 8 billion tokens with a negative sampling parameter of 10 has 80 billion training examples. Having more parameters than training examples overfits a dataset, whereas 40 billion parameters performs at parity with the current state of the art, as pictured in Table E.
- for syntactic analogy queries, the top 100 words from the semantic step were selected and used to create syntactic embeddings.
- the analogy query was then performed using the syntactic embeddings such that syntactic embedding based cosine distance scores were generated. These scores were summed element-wise with semantic scores in the original top 100 words. The top word was then selected as the final analogy answer. This step was skipped for analogies coming from semantic categories. Syntactic scores were normalized by raising them to the power of 10.
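A sketch of the two-step analogy answer described above, combining semantic (word-level) and syntactic (DIEM) cosine scores; the embedding dictionaries are hypothetical, and the element-wise sum with the power-of-10 normalization follows the description in the text.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy_answer(query_sem, query_syn, sem_vocab, syn_vocab, top_n=100):
    # step 1: rank all words by semantic cosine similarity and keep the top N
    sem_scores = {w: cosine(query_sem, v) for w, v in sem_vocab.items()}
    shortlist = sorted(sem_scores, key=sem_scores.get, reverse=True)[:top_n]

    # step 2: score the shortlist syntactically, normalize, and sum element-wise
    combined = {}
    for w in shortlist:
        syn = cosine(query_syn, syn_vocab[w]) ** 10   # normalization by raising to the power of 10
        combined[w] = sem_scores[w] + syn
    return max(combined, key=combined.get)            # top word is the final analogy answer
```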
- the present inventors used the Google Translate API to curate a Spanish movie review corpus.
- Systems such as Google's translation API have been evaluated by research in the NLP literature and are generally accepted as a mechanism with which to curate labeled data sets in new languages [1][2][3].
- an API request was made to the Google Translate API to translate the entire text of the review to Spanish.
- the resulting paragraph-level translations maintained the same polarity label as well as ID in order to maintain one-to-one comparisons between the Spanish and English corpora.
- the present inventors used word2vec's skip-gram architecture to learn 100 dimensional feature representations for words in the corpus by using hierarchical softmax and negative sampling (with 10 samples) for the focus word representations. Pre-processing consists of lower-casing, separating words from punctuation, and removing HTML tags. After learning word representations, stop-words are filtered out using NLTK's [4] stop word corpus and review-level representations are created by averaging the features for each remaining word in the review.
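A minimal sketch of this configuration, assuming the gensim library as the word2vec implementation (the text names word2vec itself, not a particular library) and NLTK's stop word corpus; `raw_reviews` is a hypothetical list of review strings.

```python
import re
import numpy as np
from gensim.models import Word2Vec
from nltk.corpus import stopwords   # requires: nltk.download('stopwords')

def preprocess(review):
    review = re.sub(r"<[^>]+>", " ", review)           # remove HTML tags
    review = re.sub(r"([.,!?;:])", r" \1 ", review)    # separate words from punctuation
    return review.lower().split()                      # lower-case and tokenize

reviews = [preprocess(r) for r in raw_reviews]

# skip-gram, 100 dimensions, hierarchical softmax and negative sampling (10 samples)
model = Word2Vec(reviews, vector_size=100, sg=1, hs=1, negative=10, min_count=1)

stops = set(stopwords.words("english"))

def review_vector(tokens):
    # review-level representation: average of the remaining word features
    vecs = [model.wv[w] for w in tokens if w not in stops and w in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(100)

features = np.vstack([review_vector(r) for r in reviews])
```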
- the present inventors used Paragraph Vector's distributed bag-of-words (dbow) configuration to learn 100 dimensional feature representations for reviews by using both hierarchical softmax and negative sampling (with 10 samples) for the focus word representations.
- Pre-processing consists of lower-casing, separating words from punctuation, and removing HTML tags. Stop words are preserved in the reviews.
- the present inventors used PE's skip-gram architecture to learn 100 dimensional feature representations for words in the corpus by using negative sampling (with 10 samples) and no hierarchical softmax for the focus representations. Partitions from window positions -9 to -4 were used. The pre-processing and averaging steps are identical to the process described in the word2vec configuration section. [0084] Model ensembling was performed using an augmented implementation of the recent work by [5]. For each corpus (English and Spanish) a validation word model and a full word model were trained. The validation word models (word2vec and PE) are trained on 20,000 reviews from the training sets (10,000 positive, 10,000 negative) and 50,000 reviews in the development sets.
- the remaining 5,000 reviews in the training sets are used as validation sets.
- the full word models are trained on the entire training sets and development sets.
- Paragraph Vector representations are learned using unsupervised pre-training over the entire corpus (training, development, and testing) using the implementation provided with [5].
- Single language models pre-train English and Spanish word models independently from one another.
- the English and Spanish feature representations are learned in separate unsupervised models. Their evaluations act as a baseline with which to interpret the cross-language models in later experiments. Review-level representations are used as features in a support vector machine (SVM). A separate classifier was used for each language.
- the unsupervised models for English and Spanish never interact. Spanish models were evaluated against the Spanish test set and English models against the English test set. The resulting scores for these models are provided in Table H.
- Cross-language models pre-train using both English and Spanish to learn representations.
- the English and Spanish feature representations are learned in the same unsupervised model.
- Review-level representations are used as features in an SVM.
- the classifier is trained on English examples. No training is performed over the Spanish corpus.
- the features learned in the cross-language pretraining allow feature resources to be shared between both Spanish and English.
- the resulting models are evaluated on the Spanish corpus. The scores for these models are provided in Table I.
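A sketch of the cross-language evaluation described above, assuming scikit-learn's SVM and pre-computed review-level feature matrices from the shared (cross-language) pre-training step; the variable names are hypothetical placeholders.

```python
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

# the classifier is trained on English review representations only
clf = LinearSVC()
clf.fit(english_train_X, english_train_y)

# ...and evaluated on the Spanish corpus, with no Spanish training at all
spanish_pred = clf.predict(spanish_test_X)
print("Spanish test accuracy:", accuracy_score(spanish_test_y, spanish_pred))
```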
- the present inventors' experiments demonstrate the efficacy of neural language modeling for cross-language sentiment classification.
- Neural language models encode meaningful relationships between words by mapping them to vector space, and this property of their embeddings helps explain why the cross-language models can effectively predict on languages on which they have not trained.
- the present inventors employ three different techniques for unsupervised pre-training and show that each technique is capable of encoding semantic relationships across languages.
- the present inventors trained baseline, single language models with which to compare the cross-language predictions (Table H). The highest baseline score for Spanish is 87.31% (word2vec) and the ensemble score is 86.63%. Using the baseline scores for comparison, the present inventors evaluate cross-language models on the Spanish corpus.
- aspects of the present disclosure may be implemented in numerous other fields, including but not limited to healthcare, for instance in real-time clinical monitoring and historical reporting on patient populations. Some embodiments described herein may be applied in clinical decision support and other tasks that require summaries of unstructured text.
- aspects of the present disclosure can be used for modeling the likelihood of a patient developing an infection, including full body systemic infection. More generally speaking, aspects of the present disclosure may be used for predicting whether or not a patient will develop a particular health condition.
- electronic health records (EHRs) for various patients may be used with aspects of the present disclosure as described above in order to learn representations of the positions of nodes and to use embeddings to predict whether a patient would develop an infection.
- the locations of particular clusters in vector space can be representative of such likelihoods of developing certain conditions.
- FIG. 7 is a computing architecture diagram of a computing system capable of implementing aspects of the present disclosure in accordance with one or more embodiments.
- the computing system includes a computer 700 that can be configured to perform one or more functions associated with the present disclosed technology.
- the computer 700 includes a processing unit 702, a system memory 704, and a system bus 706 that couples the memory 704 to the processing unit 702.
- the computer 700 further includes a mass storage device 712 for storing program modules 714.
- the program modules 714 can include computer-executable modules for performing the one or more functions associated with FIGS. 1-6.
- the mass storage device 712 further includes a data store 716.
- the mass storage device 712 is connected to the processing unit 702 through a mass storage controller (not shown) connected to the bus 706.
- the mass storage device 712 and its associated computer-storage media provide non-volatile storage for the computer 700.
- computer-readable storage media can be any available computer-readable storage medium that can be accessed and read by the computer 700.
- computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-storage instructions, data structures, program modules, or other data.
- computer-readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks ("DVD"), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 700.
- Computer-readable storage media as described herein does not include transitory signals.
- the computer 700 can operate in a networked environment using logical connections to remote computers through a network 718.
- the computer 700 can connect to the network 718 through a network interface unit 710 connected to the bus 706. It should be appreciated that the network interface unit 710 can also be utilized to connect to other types of networks and remote computer systems.
- the computer 700 can also include an input/output controller 708 for receiving and processing input from a number of input devices. Input devices may include, but are not limited to, keyboards, mice, stylus, touchscreens, microphones, audio capturing devices, or image/video capturing devices. An end user may utilize such input devices to interact with a user interface such as a graphical user interface for managing various functions performed by the computer 700.
- the bus 706 can enable the processing unit 702 to read code and/or data to/from the mass storage device 712 or other computer-storage media.
- the computer-readable storage media can represent apparatus in the form of storage elements that are implemented using any suitable technology, including but not limited to semiconductors, magnetic materials, optics, or the like.
- the program modules 714 include software instructions that, when loaded into the processing unit 702 and executed, cause the computer 700 to provide functions for neural language modeling in accordance with aspects of the present disclosure described herein with reference to exemplary embodiments.
- the program modules 714 can also provide various tools or techniques by which the computer 700 can participate within the overall systems or operating environments using the components, flows, and data structures discussed throughout this description.
- the program modules 714 can, when loaded into the processing unit 702 and executed, transform the processing unit 702 and the overall computer 700 from a general- purpose computing system into a special-purpose computing system.
- the processing unit 702 can be constructed from any number of transistors or other discrete circuit elements, which can individually or collectively assume any number of states. More specifically, the processing unit 702 can operate as a finite-state machine, in response to executable instructions contained within the program modules 714. These computer-executable instructions can transform the processing unit 702 by specifying how the processing unit 702 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the processing unit 702 .
- Encoding the program modules 714 can also transform the physical structure of the computer-storage media.
- the specific transformation of physical structure can depend on various factors, in different implementations of this description. Examples of such factors can include, but are not limited to: the technology used to implement the computer-storage media, whether the computer storage media are characterized as primary or secondary storage, and the like.
- the program modules 714 can transform the physical state of the semiconductor memory, when the software is encoded therein.
- the program modules 714 can transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory.
- the computer-readable storage media can be implemented using magnetic or optical technology.
- the program modules 714 can transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations can include altering the magnetic characteristics of particular locations within given magnetic media. These transformations can also include altering the physical features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. It should be appreciated that other transformations of physical media are possible without departing from the scope of the present disclosure.
Abstract
In some aspects, the present disclosure relates to neural language modeling. In one embodiment, a computer-implemented neural network includes a plurality of neural nodes, where each of the neural nodes has a plurality of input weights corresponding to a vector of real numbers. The neural network also includes an input neural node corresponding to a linguistic unit selected from an ordered list of a plurality of linguistic units, and an embedding layer with a plurality of embedding node partitions. Each embedding node partition includes one or more neural nodes. Each of the embedding node partitions corresponds to a position in the ordered list relative to a focus term, is configured to receive an input from an input node, and is configured to generate an output. The neural network also includes a classifier layer with a plurality of neural nodes, each configured to receive the embedding outputs from the embedding layer, and configured to generate an output corresponding to a probability that a particular linguistic unit is the focus term.
Description
SYSTEMS AND METHODS FOR NEURAL LANGUAGE MODELING
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This PCT International application claims priority to and the benefit of United States Provisional Patent Application No. 62/118,200 entitled "Systems and Methods for Neural Language Modeling", which was filed on February 19, 2015. This PCT International application also claims priority to and the benefit of U.S. Provisional Patent Application No. 62/128,915 entitled "Systems and Methods for Neural Language Modeling", which was filed on March 5, 2015. The entire contents and substance of the above-mentioned applications are hereby incorporated by reference in their entireties as if fully set forth herein.
[0002] Some references, which may include various patents, patent applications, and publications, are cited in a reference list and discussed in the disclosure provided herein. The citation and/or discussion of such references is provided merely to clarify the description of the present disclosure and is not an admission that any such reference is "prior art" to any aspects of the present disclosure described herein. All references cited and discussed in this specification are incorporated herein by reference in their entireties and to the same extent as if each reference was individually incorporated by reference.
BACKGROUND
[0003] Natural Language Processing (NLP) systems seek to automate the extraction of useful information from sequences of symbols in human language. Some NLP systems may encounter difficulty due to the complexity and sparsity of information in natural language. Neural network language models (NNLMs) may overcome performance limitations of traditional systems. An NNLM may learn distributed representations for words, and may embed a vocabulary into a smaller dimensional linear space that models a probability function for word sequences, expressed in terms of these representations.
[0004] NNLMs may generate word embeddings by training a symbol prediction task over a moving local-context window. The ordered set of weights associated with each word becomes that word's dense vector embedding. The result is a vector space model that encodes semantic and syntactic relationships. An NNLM can predict a word given its surrounding context. These
distributed representations encode shades of meaning across their dimensions, allowing for two words to have multiple, real-valued relationships encoded in a single representation. This feature flows from the distributional hypothesis: words that appear in similar contexts have similar meaning. Words that appear in similar contexts will experience similar training examples, training outcomes, and converge to similar weights.
[0005] Once calculated, word embeddings based on word analogies can allow vector operations between words that mirror their semantic and syntactic relationships. The analogy "king is to queen as man is to woman" can be encoded in vector space by the equation king - queen = man - woman.
[0006] Conventional LMs may not account for morphology and word shape. However, information about word structure in word representations can be valuable for part of speech analysis, word similarity, and information extraction. It is with respect to these and other considerations that aspects of the present disclosure are presented herein.
SUMMARY
[0007] Some aspects of the present disclosure relate to a computer-implemented neural network architecture that explicitly encodes order in a sequence of symbols and uses this architecture to embed both word-level and character-level representations. In some embodiments, a LM is provided that can take into account word morphology and shape. In some embodiments, an NNLM is provided that can be trained by multiple systems working in parallel. In some embodiments, an NNLM is provided that can be used to process analogy queries. In some embodiments, word embeddings learned via neural language models can share resources between multiple languages.
BRIEF DESCRIPTION OF THE FIGURES
[0008] Reference will now be made to the accompanying drawings, which are not necessarily drawn to scale.
[0009] FIG. 1 depicts a neural node in accordance with one embodiment.
[0010] FIG. 2 depicts a neural network in accordance with one embodiment.
[0011] FIG. 3 depicts a neural network language model without partitioning, using a continuous bag of words architecture.
[0012] FIG.4 depicts a windowed partitioned neural network language model in accordance with one embodiment.
[0013] FIG. 5 depicts a directional partitioned neural network language model in accordance with one embodiment.
[0014] FIG. 6 depicts the relative accuracy of each partition in a PE model as judged by row-relative word analogy scores.
[0015] FIG. 7 is a computing architecture diagram of a computing system capable of implementing aspects of the present disclosure in accordance with one or more embodiments.
DETAILED DESCRIPTION
[0016] Although example embodiments of the present disclosure are explained in detail, it is to be understood that other embodiments are contemplated. Accordingly, it is not intended that the disclosure is limited in its scope to the details of construction and arrangement of components set forth in the following description or illustrated in the drawings. The disclosure is capable of other embodiments and of being practiced or carried out in various ways.
[0017] It should also be noted that, as used in the specification and the appended claims, the singular forms "a," "an" and "the" include plural references unless the context clearly dictates otherwise. References to a composition containing "a" constituent are intended to include other constituents in addition to the one named.
[0018] Also, in describing the example embodiments, terminology will be resorted to for the sake of clarity. It is intended that each term contemplates its broadest meaning as understood by those skilled in the art and includes all technical equivalents which operate in a similar manner to accomplish a similar purpose.
[0019] Ranges may be expressed herein as from "about" or "approximately" or "substantially" one particular value and/or to "about" or "approximately" or "substantially" another particular
value. When such a range is expressed, other exemplary embodiments include from the one particular value and/or to the other particular value.
[0020] Herein, the use of terms such as "having," "has," "including," or "includes" are open-ended and are intended to have the same meaning as terms such as "comprising" or "comprises" and not preclude the presence of other structure, material, or acts. Similarly, though the use of terms such as "can" or "may" are intended to be open-ended and to reflect that structure, material, or acts are not necessary, the failure to use such terms is not intended to reflect that structure, material, or acts are essential. To the extent that structure, material, or acts are presently considered to be essential, they are identified as such.
[0021] It is also to be understood that the mention of one or more method steps does not preclude the presence of additional method steps or intervening method steps between those steps expressly identified. Moreover, although the term "step" may be used herein to connote different aspects of methods employed, the term should not be interpreted as implying any particular order among or between various steps herein disclosed unless and except when the order of individual steps is explicitly required.
[0022] The components described hereinafter as making up various elements of the present disclosure are intended to be illustrative and not restrictive. Many suitable components that would perform the same or similar functions as the components described herein are intended to be embraced within the scope of the present disclosure. Such other components not described herein can include, but are not limited to, for example, similar components that are developed after development of the presently disclosed subject matter.
[0023] To facilitate an understanding of the principles and features of the present disclosure, various illustrative embodiments are explained below. In particular, the presently disclosed subject matter is described in the context of LM. The present disclosure, however, is not so limited, and can be applicable in other contexts. For example and not limitation, some embodiments of the present disclosure may improve other sequence recognition techniques and the like. These embodiments are contemplated within the scope of the present disclosure.
Accordingly, when the present disclosure is described in the context of NNLM, it will be understood that other embodiments can take the place of those referred to.
[0024] A neural network can comprise a plurality of layers of neural nodes (i.e., "neurons"). In some embodiments, a neural network can comprise an input layer, a hidden layer, and an output layer. For ease of explanation, the basic functioning of a neural node in accordance with an embodiment will be explained below. However, as would be understood by a person having ordinary skill in the art, other types of neurons could be used.
[0025] The neural networks in accordance with aspects of the present disclosure may be computer-implemented. The plurality of layers and nodes may reside in executable program modules (e.g., program modules 714 in FIG. 7) or other software constructs, or in dedicated programmed hardware components. The layers and nodes, and other functional components described herein in accordance with various embodiments for performing aspects of neural language modeling may be stored in memory devices (e.g., memory 704 or mass storage 712) and executable by processors (e.g. processing unit 702) of one or more computers, such as the computer 700 shown in FIG. 7. The analysis, data processing and other functions associated with operating the layers and nodes and performing the various neural language modeling functions described herein, may be caused by the execution of instructions by one or more such processors. Training functions, such as model training processes as described herein, may be performed in conjunction with interactions of one or more users with one or more computers, such as the computer 700 of FIG. 7, and may be operated and configured such that trainable models can be improved based on the interaction of the users with training data and prior models, and can be implemented on various data in accordance with machine learning that may be supervised and/or autonomous.
[0026] FIG. 1 depicts a neural node 100 in accordance with an embodiment. In some embodiments, each node 100 can have a plurality of inputs 110 each having a weight 120, and a single output 130. The output of a node is calculated as:

y_k = φ(∑_{j=1}^{m} w_{kj} x_j)

where the output y_k of the k-th node is calculated as the sum of each input x_j of the node multiplied by a corresponding weight w_{kj} for that input. The result of the summation is then transformed by a transfer function. In some embodiments, the transfer function φ can be a sigmoid function, represented by:

φ(v) = 1 / (1 + e^{-v})
[0027] In some embodiments, each input can accept a value between zero and one, although other values can be used. Collectively, the inputs can be represented by the input vector x:

x = [x_1 x_2 x_3 ... x_m]
[0028] Each input also has a weight associated with it, with the collective set of input weights comprising the vector:

w_k = [w_{k1} w_{k2} w_{k3} ... w_{km}]
[0029] FIG. 2 depicts a generalized neural network in accordance with an embodiment. This network comprises an input layer 210 with three nodes, a hidden layer 220 with four nodes, and an output layer 230 with three nodes. The nodes in the input layer 210 output a value, but do not themselves calculate a value. In an embodiment, the output may be a number between zero and one. Each node in the hidden layer 220 receives outputs from each node of the input layer 210, and outputs the result of the equation y_k = φ(∑_{j=1}^{m} w_{kj} x_j) set forth above. The output of the hidden layer 220 is then fed forward to the output layer 230. Each node in the output layer calculates its output as the result of the same equation, which corresponds to the output of the network. This depiction of a neural network is intended to assist with understanding neural networks and does not limit the disclosed technology. That is, a neural network may consist of hundreds, thousands, millions, or more nodes in each of the input, hidden, and output layers. Further, neural networks may have a single hidden layer (as depicted), or may have multiple hidden layers.
[0030] The weights for a layer of nodes can further be represented by the matrix:

w = [w_{0,0} w_{0,1} ... w_{0,j} ; w_{1,0} w_{1,1} ... w_{1,j} ; ... ; w_{k,0} w_{k,1} ... w_{k,j}]

thus, the output for the group of nodes can be calculated as:

y = φ(w x)
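For illustration only, the feed-forward computation described above can be sketched in Python with NumPy; the 3-4-3 layer sizes mirror FIG. 2, and the weight values, seed, and names are illustrative assumptions rather than part of the disclosure:

import numpy as np

def sigmoid(v):
    # Transfer function phi applied element-wise
    return 1.0 / (1.0 + np.exp(-v))

# Illustrative 3-4-3 network as in FIG. 2, with arbitrarily chosen weights
rng = np.random.default_rng(0)
syn0 = rng.uniform(-1, 1, size=(4, 3))   # hidden-layer weights (one row per hidden node)
syn1 = rng.uniform(-1, 1, size=(3, 4))   # output-layer weights (one row per output node)

x = np.array([0.2, 0.7, 0.1])            # values emitted by the three input nodes

hidden = sigmoid(syn0 @ x)               # y_k = phi(sum_j w_kj * x_j) at each hidden node
output = sigmoid(syn1 @ hidden)          # same computation at the output layer
print(output)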
[0031] A neural network in accordance with some embodiments can be used for analyzing an ordered list of linguistic units. A linguistic unit as defined herein can refer to a phrase, word, letter, or other character or characters used in language. In some embodiments, the neural network is configured to take the ordered list of linguistic units, with a linguistic unit omitted, and predict the omitted linguistic unit. This omitted linguistic unit is referred to as a "focus term." For example, FIG. 3 depicts a neural network analyzing the phrase "SEE SPOT RUN." The input nodes for "SEE" and "RUN" are activated to predict the focus term "SPOT." The neural network then appropriately predicts that the missing word is "SPOT" by returning 100% at the "SPOT" output node.
[0032] In some embodiments, the neural network 200 has an input layer 210 of nodes that encodes this input using "one-hot" encoding. That is, one input node exists for each linguistic unit in a dictionary of linguistic units that could be used. The dictionary need not be a comprehensive dictionary, such as a complete English dictionary, but can exist as a subset of characters, words, or phrases used in language. For example, if the ordered list of linguistic units is a word in the English language, the dictionary may be the list of letters A-Z; all capital letters (A-Z), lowercase letters (a-z), and punctuation marks; or some subset thereof. If the ordered list of linguistic units is an English phrase, the dictionary may include all, or some, words in the English language. For example, if the system is to be trained on a specific book or corpus of text, English words not used in the book or corpus may be omitted. In some embodiments, dictionaries may further include compound terms that include more than one word, such as "San Francisco", "hot dog", etc.
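A minimal sketch of such a one-hot encoding, assuming a small hypothetical dictionary of linguistic units, is as follows (the dictionary contents and names are illustrative only):

import numpy as np

dictionary = ["SEE", "SPOT", "RUN", "FAST"]            # illustrative dictionary of linguistic units
index = {unit: i for i, unit in enumerate(dictionary)}

def one_hot(unit):
    # One input node per dictionary entry; only the node for `unit` is active
    v = np.zeros(len(dictionary))
    v[index[unit]] = 1.0
    return v

print(one_hot("RUN"))   # [0. 0. 1. 0.]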
[0033] In some embodiments, the neural network has a layer of hidden nodes 220. In some embodiments, this layer of hidden nodes 220 may be divided into partitions. Where a layer of hidden nodes is divided into partitions, it is referred to as a partitioned embedding neural network (PENN). In some embodiments, each partition relates to a position, or window, in a phrase (one word before the focus term, one word after the focus term, etc.). This can be referred to as windowed embedding. FIG. 4 depicts a windowed partitioned embodiment having two partitions, one for the word immediately preceding the focus term (p=+1) and one for the word immediately following the focus term (p=-1). The network is shown here analyzing the phrase "SEE SPOT RUN," where the focus term is "SPOT." Here, three hidden nodes are used for the p=+1 partition, and three hidden nodes are used for the p=-1 partition. Again, as a result of inputting "SEE" to the p=+1 partition, and "RUN" to the p=-1 partition, the network predicts that the focus term is "SPOT."
[0034] In some embodiments, each partition may relate to a direction in the phrase (all words before the focus term, all words following the focus term, etc.). This can be referred to as directional embedding. As would be recognized by one having skill in the art, the approaches can be combined in numerous permutations, such as an embodiment having a partition for the linguistic unit two windows before the focus term, and a partition for all linguistic units following the focus term, and other permutations. FIG. 5 depicts a directional partitioned embodiment having one partition for all words prior to the focus term (P > 0), and one partition for all words following the focus term (P < 0). Here, the neural network is analyzing the phrase "SEE SPOT RUN FAST" where the focus term is "SPOT." As shown, the word "SEE" is input to the P > 0 partition, and the words "FAST" and "RUN" are input to the P < 0 partition. The neural network then returns the focus term "SPOT."
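The routing of context words to embedding partitions under the windowed and directional approaches can be illustrated with the following sketch for the phrase "SEE SPOT RUN FAST" with focus term "SPOT"; the function and variable names are illustrative, not taken from the disclosure:

# Sketch of routing context words to embedding partitions
tokens = ["SEE", "SPOT", "RUN", "FAST"]
focus = 1   # index of the focus term "SPOT"

def windowed_partitions(tokens, focus, window=1):
    # One partition per relative window position (p=+1 is the word immediately preceding the focus term)
    parts = {}
    for offset in range(1, window + 1):
        if focus - offset >= 0:
            parts[f"p=+{offset}"] = [tokens[focus - offset]]
        if focus + offset < len(tokens):
            parts[f"p=-{offset}"] = [tokens[focus + offset]]
    return parts

def directional_partitions(tokens, focus):
    # One partition for all words before the focus term, one for all words after it
    return {"P>0": tokens[:focus], "P<0": tokens[focus + 1:]}

print(windowed_partitions(tokens, focus))     # {'p=+1': ['SEE'], 'p=-1': ['RUN']}
print(directional_partitions(tokens, focus))  # {'P>0': ['SEE'], 'P<0': ['RUN', 'FAST']}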
[0035] Where partitions are used, in some embodiments, a separate set of input nodes is used for each partition. Alternatively, the set of input nodes may be modeled as specialized nodes capable of outputting a separate output to each partition. Both are mathematically equivalent. For simplicity of understanding, the present description refers to sets of input nodes for each separate partition, but in each instance where separate input nodes are used for each partition, a single set of input nodes feeding forward different values to different partitions could be used.
[0036] As discussed above, each node in each partition of the hidden layer will have a set of weights associated with it, represented by a weight vector. In some embodiments, this weight vector will have as many elements as there are input nodes associated with the partition. In some embodiments, some hidden layer nodes in a given partition may feed forward from fewer than all input nodes associated with that partition. This can be modeled either as a weight vector having fewer elements than the number of input nodes associated with that partition, or as a vector of equivalent length with the omitted weights set to zero. Each hidden node may further have a bias term, which is a weight that is not multiplied by any input value (or, equivalently, is multiplied by a constant input of 1) and is added to the result. The weights for all hidden nodes in a partition of the hidden layer can be represented by a matrix formed by concatenating the weight vectors for each node in the hidden layer of nodes. This matrix can be referred to as a "synapse" matrix. Because it is the first set of weights following the input layer, it is referred to as synapse 0 (syn0). Further, this matrix can be referred to as an "embedding matrix."

syn0 = [w_1 ... w_k] = [w_{0,0} w_{0,1} ... w_{0,j} ; w_{1,0} w_{1,1} ... w_{1,j} ; ... ; w_{k,0} w_{k,1} ... w_{k,j}]
[0037] In some embodiments, there is only a single hidden layer, although in some embodiments there may be two or more hidden layers. In embodiments having multiple hidden layers, there may be the same number of partitions as the previous layer, more, or fewer. In some embodiments, an additional hidden layer may not have partitions at all.
[0038] In some embodiments, the last hidden layer is followed by an output layer. In some embodiments, the output layer can have one output neuron associated with each linguistic unit in a dictionary of linguistic units. In some embodiments, there are fewer sets of output nodes than partitions in the previous hidden layer, in which case a set of output nodes takes as inputs the outputs of more than one partition in the last hidden layer of nodes. In some embodiments, there is a single set of output nodes, each node receiving as input the output of all nodes in the previous layer. In some embodiments, there is a separate set of output nodes associated with each partition in the last hidden layer of nodes.
CLOW
[0039] In some embodiments, a neural network of the present disclosure can be trained using a continuous list of words (CLOW) training style. The CLOW training style under the PE framework optimizes the following objective function:

argmax_θ ∏_{(w,C)∈D} ∏_{-c≤j≤c, j≠0} p(w | c'_j; θ) ∏_{(w,C)∈D'} ∏_{-c≤j≤c, j≠0} p(w = 0 | c'_j; θ)

[0040] where c'_j is the location-specific representation (partition j) for the word at window position j relative to the focus word w.
[0041] In this training style, the output layer is configured as a single set (or single partition) of nodes receiving input from each node in the previous hidden layer. The neural network is then trained on a linguistic corpus, which is one or more sequences of a plurality of linguistic units. One or more training examples are run, where for each training example, the words surrounding the focus term are input into their respective input nodes corresponding to the appropriate partitions. For example, if the linguistic corpus includes the phrase "SEE SPOT RUN FAST, " and "SPOT" is the focus term, the input nodes associated with the position one-ahead of the focus term would be activated according to the term "SEE", and the position one-behind the focus term would be activated according to the term "RUN". Those inputs are propagated through the neural network to produce an output. The output of the neural network, which correlates with the percent chance that the output word is the focus term, is then compared to a preferred output where the focus term ("SPOT") is 100% (or a corresponding maximum output value) and the output for all other linguistic units is 0% (or a corresponding minimum output value). The actual output is compared to the preferred output, and then back-propagated through the neural network to update the synapse matrices according to one or more known back-propagation algorithms. This process can be repeated on additional sequences in the linguistic corpus, until the network is sufficiently accurate.
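A single CLOW training example can be sketched as follows, assuming for illustration a softmax output layer and a cross-entropy error signal standing in for the back-propagation step described above; the vocabulary, dimensions, seed, and learning rate are illustrative:

import numpy as np

vocab = ["SEE", "SPOT", "RUN", "FAST"]
V, dim, lr = len(vocab), 3, 0.1
idx = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(1)
emb_prev = rng.normal(0, 0.1, (V, dim))   # embedding partition for the word preceding the focus term
emb_next = rng.normal(0, 0.1, (V, dim))   # embedding partition for the word following the focus term
syn1 = rng.normal(0, 0.1, (V, 2 * dim))   # single classifier over the concatenated partitions

prev_w, next_w, focus_w = idx["SEE"], idx["RUN"], idx["SPOT"]
for _ in range(100):
    h = np.concatenate([emb_prev[prev_w], emb_next[next_w]])   # hidden activation
    scores = syn1 @ h
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                                        # softmax over the vocabulary
    target = np.zeros(V)
    target[focus_w] = 1.0                                       # preferred output: 100% at "SPOT"
    err = probs - target                                        # cross-entropy gradient wrt scores
    grad_h = syn1.T @ err
    syn1 -= lr * np.outer(err, h)                               # update the classifier weights
    emb_prev[prev_w] -= lr * grad_h[:dim]                       # update each embedding partition
    emb_next[next_w] -= lr * grad_h[dim:]

print(probs[focus_w])   # probability assigned to the focus term rises over repeated updates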
[0042] In some embodiments, the directional CLOW implementation with very small window sizes (pictured with a window size of 1) can be used to achieve acceptable performance. Directional CLOW is able to achieve a parity score using a window size of 1, contrasted with word2vec using a window size of 10 when all other parameters are equal. In some embodiments, this optimization can reduce the number of training examples and overall training time by a factor of 10.
SKIP-GRAM
[0043] In some embodiments, a neural network of the present disclosure can be trained using a "skip-gram" training style. The skip-gram training style optimizes the following objective function:

argmax_θ ∏_{(w,C)∈D} ∏_{-c≤j≤c, j≠0} p(c'_j | w; θ) ∏_{(w,C)∈D'} ∏_{-c≤j≤c, j≠0} p(c'_j = 0 | w; θ)

where c'_j is the location-specific representation (partition j) for the word at window position j relative to the focus word w.
[0044] In this training style, a network is configured having separate output partitions for each hidden layer partition. Thus, in the example listed above, the neural network would consist of two neural networks, one modeling the probability that the focus term is a given value based on the word in the position one-ahead of the focus term, and one modeling the probability that the focus term is a given value based on the word in the position one after the focus term. In other words, if the linguistic corpus includes the phrase "SEE SPOT RUN FAST," and "SPOT" is the focus term, one partition will receive as input "SEE", and one partition will receive as input "RUN." The corresponding output nodes will then generate an output. This actual output is compared with a preferred output where the focus term ("SPOT") is 100% (or a corresponding maximum output value) and the output for all other linguistic units is 0% (or a corresponding minimum output value). The actual output is compared to the preferred output, and then back-propagated through the neural network to update the synapse matrices according to any back-propagation algorithm, as is known in the art of neural networks. This process can be repeated on additional sequences in the linguistic corpus, until the network is sufficiently accurate. After training, to arrive at a final result, the outputs of the various output partitions can be added together, or otherwise combined to produce a final probability of the focus term.
[0045] In some embodiments, the skip-gram training style can be used to train a neural network in parallel. That is, each separate partition of hidden nodes and its corresponding partition of output nodes can be trained independently of the other partitions of hidden nodes and output nodes. Thus, those embodiments could be trained using multiple threads on a single computer processor, or on multiple computers operating in parallel.
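The independence of the partitions can be illustrated with the following simplified sketch, in which each embedding partition and its classifier partition are trained in isolation on the context word observed at that window position and the resulting embedding partitions are concatenated afterwards; the softmax output, sizes, and names are illustrative assumptions:

import numpy as np

vocab = ["SEE", "SPOT", "RUN", "FAST"]
V, dim, lr = len(vocab), 3, 0.1
idx = {w: i for i, w in enumerate(vocab)}
rng = np.random.default_rng(2)

def train_partition(pairs, epochs=200):
    # pairs: (context word observed at this partition's window position, focus term)
    emb = rng.normal(0, 0.1, (V, dim))    # embedding partition (slice of syn0) for this position
    clf = rng.normal(0, 0.1, (V, dim))    # classifier partition (slice of syn1) for this position
    for _ in range(epochs):
        for context, focus in pairs:
            h = emb[idx[context]]
            scores = clf @ h
            probs = np.exp(scores - scores.max())
            probs /= probs.sum()
            err = probs.copy()
            err[idx[focus]] -= 1.0                 # error against a 100%-at-focus target
            grad_h = clf.T @ err
            clf -= lr * np.outer(err, h)
            emb[idx[context]] -= lr * grad_h
    return emb, clf

# Each partition trained in isolation on "SEE SPOT RUN" (focus term "SPOT"), then concatenated
emb_prev, clf_prev = train_partition([("SEE", "SPOT")])   # word immediately preceding the focus term
emb_next, clf_next = train_partition([("RUN", "SPOT")])   # word immediately following the focus term
word_vectors = np.concatenate([emb_prev, emb_next], axis=1)
print(word_vectors.shape)   # (4, 6): each word's concatenated location-specific embeddings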
[0046] In some embodiments, when skip-gram is used to model ordered sets of words under the PENN framework, each classifier partition and its associated embedding partition can be trained in full parallel and reach the same state as if they were not distributed. In some embodiments, the training can be accomplished by multiple computers with no inter-communication at all. In some embodiments, the windowed embedding configuration can train all window positions in full parallel and concatenate the embeddings and classifiers at the end of training. Given machine j, the following objective function is optimized:

argmax_{θ_j} ∏_{(w,C)∈D} p(c'_j | w; θ_j) ∏_{(w,C)∈D'} p(c'_j = 0 | w; θ_j)

[0047] where c'_j is the location-specific representation (partition j) for the word at window position j relative to the focus word w.
[0048] Concatenation of the weight matrices syn0 and syn1 then incorporates the sum over j back into the PENN skip-gram objective function during the forward propagation process, yielding training results identical to those of a network trained in a single-threaded, single-model PENN skip-gram fashion. This training style achieves parity training results with current state-of-the-art methods while training in parallel over as many separate machines as there are window positions.
DIEM
[0049] In some embodiments, syntactic tasks can be performed by a process called dense interpolation embedding (DIEM). In this technique, characteristic vectors for larger, compound linguistic units (such as words) can be calculated from embeddings generated on smaller linguistic units (such as letters). Because word or phrase structure correlates more with syntax than with semantic meaning, such a technique performs better on syntactic analogy tasks. To perform this method, characteristic vectors are first calculated for the smaller linguistic units using a neural network in accordance with an embodiment. A characteristic vector for a larger linguistic unit can then be calculated by interpolating the embeddings for the smaller linguistic units over the larger linguistic unit. For example, where the smaller linguistic unit is a letter, and the larger linguistic unit is a word, characteristic vectors for words can be generated according to the following pseudocode:
INPUT: word length l, list of character embeddings char_i (i.e., the word), multiple M, character dimensionality C, bucket vectors v_m.
for i = 0 to l - 1 do
    s = M * i / l
    for m = 0 to M - 1 do
        d = pow(1 - (abs(s - m)) / M, 2)
        v_m = v_m + d * char_i
    end for
end for
[0050] The final embedding size can be selected as a multiple M of the character embedding dimensionality C, such that the final embedding contains M buckets of length C. Within each bucket of the final embedding, every character's embedding is summed according to a weighted average. This weighted average is determined by the variable d above, which is computed from the percent distance between the character's position i in the word of length l and the bucket's position m in the sequence of M buckets. This is computed such that the far left character is of approximate distance 0% from the far left bucket, and the far right character is of approximate distance 100% from the far right bucket. One minus this distance, squared, is then used to weight each respective character's embedding into each respective bucket. For example, the percentage distance across a word of length 4 is 0%, 33%, 66%, and 100% (from left to right across the word). The percentage distance across a sequence of buckets of length 3 is 0%, 50%, and 100%. Thus, the distance between character 1 and bucket 3 is 100% (0% versus 100%). One minus this distance is 0%, and squaring it yields a weight of 0%, so character 1 contributes nothing to bucket 3. The opposite is true for character 4 and bucket 3, as they both lie at the far right position in their respective sequences.
[0051] For each character in a word, its index i is first scaled linearly with the size of the final "syntactic" embedding such that s = M * i / l. Then, for each length-C position m (out of M positions/buckets) in the final word embedding v_m, a squared distance weight is calculated relative to the scaled index such that d = pow(1 - (abs(s - m)) / M, 2). The character vector for the character at position i in the word is then scaled by d and added element-wise into position m of the vector v.
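A runnable restatement of the dense interpolation computation above is sketched below; the character embeddings are random placeholders standing in for embeddings learned as described herein, and the sizes are illustrative:

import numpy as np

def diem_embedding(char_vectors, M):
    # char_vectors: list of length-C character embeddings for one word, in order
    l = len(char_vectors)
    C = len(char_vectors[0])
    v = np.zeros((M, C))                      # M buckets, each of length C
    for i in range(l):
        s = M * i / l                         # character index scaled to the bucket range
        for m in range(M):
            d = (1 - abs(s - m) / M) ** 2     # interpolation weight for bucket m
            v[m] += d * char_vectors[i]
    return v.reshape(M * C)                   # final "syntactic" embedding of length M * C

rng = np.random.default_rng(3)
char_embeddings = {ch: rng.normal(size=4) for ch in "abcdefghijklmnopqrstuvwxyz"}   # C = 4
word = "running"
print(diem_embedding([char_embeddings[ch] for ch in word], M=3).shape)   # (12,)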
[0052] In some embodiments, a dense interpolation embedding method can be accomplished more efficiently by caching a set of transformation matrices, which are cached values of d_{i,m} for words of varying size. These matrices can be used to transform variable-length concatenated character vectors into fixed-length word embeddings via vector-matrix multiplication.
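Such cached transformation matrices can be sketched as follows; the names and sizes are illustrative, and the result is equivalent to the interpolation loop above:

import numpy as np

def diem_transform(l, M):
    # T[i, m] = (1 - |M*i/l - m| / M)^2 for character position i and bucket m
    T = np.zeros((l, M))
    for i in range(l):
        s = M * i / l
        for m in range(M):
            T[i, m] = (1 - abs(s - m) / M) ** 2
    return T

transform_cache = {l: diem_transform(l, M=3) for l in range(1, 21)}   # one matrix per word length

def diem_fast(char_matrix, M=3):
    # char_matrix: shape (l, C), one row per character embedding in the word
    l, C = char_matrix.shape
    T = transform_cache[l]
    return (T.T @ char_matrix).reshape(M * C)   # buckets = T^T * chars, then flatten

rng = np.random.default_rng(3)
chars = rng.normal(size=(7, 4))                  # a 7-character word with C = 4
print(diem_fast(chars).shape)                    # (12,)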
[0053] Syntactic vectors in accordance with some embodiments also provide significant scaling and generalization advantages over semantic vectors. For example, new syntactic vectors may be easily generated for words never before seen, giving lossless generalization to any word from initial character training, assuming only that the word is made up of characters that have been seen. Syntactic embeddings can be generated in a fully distributed fashion and only require a small vector concatenation and vector-matrix multiplication per word. Further, in some embodiments, character vectors (typically length 32) and transformation matrices (at most 20 or so of them) can be stored very efficiently relative to the semantic vocabularies, which can be several million vectors of dimensionality 1000 or more. In some embodiments, DIEM performs optimally while using 6+ orders of magnitude less storage space and 5+ orders of magnitude fewer training examples than word-level semantic embeddings.
[0054] The PE framework models the probability of a word occurring given the words surrounding it. Changing the partitioning strategy modifies this probability distribution by capturing (or ignoring) various synergies. Modeling every permutation of window position around a focus word approaches the full distribution, or the full conditional probability of the focus word occurring given the position and value of each and all of the words in the context surrounding it.
[0055] Modeling every distribution allows every potential synergy to be captured in the combined embedding. The number of permutations possible in this partitioning strategy can easily exceed several hundred given a window size of 10 or more with each trained model containing billions of parameters. However, similar word partitioning strategies yield similar
word embeddings and by extension, similar word-analogy quality in each sub-task. Thus, some embodiments can approximate the varying perspectives generated using the full co-occurrence distribution with fewer trained models by training sufficiently different models and concatenating them. Such a method uses different probability distributions to generate word embeddings modeling different perspectives on each word in the vocabulary. Character-derived vectors also embed a unique perspective by capturing information from the characters of a word instead of the contexts in which a word occurs. Because the embedding represents syntactic relationships between words from a unique perspective, the concatenation method in accordance with embodiments described above may be generalized to include the DIEM vectors.
Analogy Tasks
[0056] A related aspect of neural networks in accordance with some embodiments is that the embedding matrix syn0 can be used for analogy tasks. In the same way that each row may represent all the inputs to a specific neuron, each row represents a specific word's characteristic "embedding" in the neural network. A word's embedding can be a vector corresponding to all or a subset of the synapse weights for a single partition, or all or a subset of all partitions, or variations thereof.
[0057] These characteristic word embeddings can be used to perform analogy tasks. For example, the analogy "king is to queen as man is to woman" can be encoded as king - queen = man - woman. That is, each word can be represented as a characteristic vector of embeddings, and each relationship as a difference between the two characteristic vectors. Using the king/queen example, a neural network in accordance with an embodiment can be used to derive the answer "woman" to the question "king is to queen as man is to what?" (king - queen = man - ?). The problem can be solved by rearrangement of operations to man - king + queen = ?. This results in a solution vector that can then be converted to an answer word by comparing the solution vector to the embedding vector for each word in the dictionary using cosine similarity. That is:

similarity = cos(θ) = (solution · word) / (||solution|| ||word||)

Searching the characteristic vectors for the words in the dictionary and finding the word with the smallest cosine distance (and thus the highest degree of similarity) will usually result in finding the answer "woman." Depending on the structure of the network and the degree of training, other results, such as "girl" or "female," may occur; however, "woman" is almost always among the most similar words.
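The analogy query and cosine-similarity search can be sketched as follows; the embedding values are random placeholders standing in for rows of a trained syn0 matrix, so with trained embeddings the query would return "woman":

import numpy as np

rng = np.random.default_rng(4)
embeddings = {w: rng.normal(size=50) for w in ["king", "queen", "man", "woman", "girl", "dog"]}

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    # Solve "a is to b as c is to ?" via the solution vector c - a + b
    solution = embeddings[c] - embeddings[a] + embeddings[b]
    candidates = [(cosine(solution, vec), w) for w, vec in embeddings.items() if w not in (a, b, c)]
    return max(candidates)[1]   # word with the highest cosine similarity to the solution vector

print(analogy("king", "queen", "man"))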
[0058] The example given above is a semantic task; however, syntactic analogies can also be performed. For example, the question "running is to run as pruning is to ?" may similarly be answered by an embodiment of the disclosure. Among the top results there will be "prune," which completes the analogy syntactically even though its semantic meaning is very different.
[0059] In some embodiments, the word embeddings learned via neural language models can share resources between multiple languages. By pre-training over both languages, semantic relationships between languages can be encoded into the embeddings. In some embodiments, for example, simple vector arithmetic of stop words between English and Spanish can approximate machine translation:
("film") - ("is") + ("es") = ("pelicula")
[0060] This vector operation subtracts an English stop word ("is") from an English noun and adds the equivalent Spanish stop word ("es") to the embedding. The equation returns the Spanish word for film ("pelicula"). This suggests that the neural language pre-training maps words into vector space in such a way that language becomes another dimension that the distributed representations model. In some embodiments, this property of the embeddings enables a sentiment model trained in one language to predict on another. In some embodiments, a sentiment model trained in one language will predict on another better where there is a shared vocabulary between languages. In some embodiments, during pre-training, the neural language models can construct a hidden layer using neurons sampled according to vocabulary frequencies. In this way, the hidden layer construction for each language will be similar despite having completely different input layer representations. The consistency in the hidden layer can enable
the alignment of semantic concepts in vector space. This alignment can enable sentiment prediction across languages.
Implementations and Results
[0061] The following describes implementations of various aspects of the disclosure and corresponding results. Some experimental data are presented herein for purposes of illustration and should not be construed as limiting the scope of the present disclosure in any way or excluding any alternative or additional embodiments.
[0062] The present inventors conducted experiments on word-analogy tasks made up of a variety of word similarity tasks. Specifically, the Google Analogy Dataset was used, which contains 19,544 questions, split into semantic and syntactic sections. Both sections are further divided into subcategories based on analogy type. Each analogy task is phrased as a question "A is to B as C is to ?".
[0063] All training was performed over the dataset available from the Google word2vec website (https://code.google.com/p/word2vec), using the packaged word-analogy evaluation script. The dataset contains approximately 8 billion words collected from English News Crawl, 1-Billion-Word Benchmark, UMBC Webbase, and English Wikipedia. The dataset used leverages the default data-phrase2.txt normalization in all training, which includes both single tokens and phrases. Unless otherwise specified, all parameters for training and evaluating are identical to the default parameters specified in the default word2vec big model.
[0064] FIG. 6 displays the relative accuracy of each partition in a PENN model as judged by row-relative word-analogy scores. Other experiments indicated that the pattern present in the heat-map is consistent across parameter tunings. There is a clear quality difference between window positions that predict forward (left side of the figure) and window positions that predict backward (right side of the figure). "Currency" achieves most of its predictive power in short range predictions, whereas "capital-common countries" is a much smoother gradient over the window.
[0065] These patterns support the intuition that different window positions play different roles in different tasks.
TABLE A
Configuration Style        W2V      D        W & D
Training Style             CBOW     CLOW     CLOW
Word Vector Size           500      500      500
Partition Size             500      250      250
Window Size                10       10       1
capital-common             89.72    92.29    94.86
capital-world              92.11    92.46    90.96
currency                   14.63    19.95    12.37
city-in-state              78.76    72.48    69.56
family                     82.81    86.76    85.18
SEMANTIC TOTAL             81.02    80.19    78.07
adjective-to-adverb        37.70    35.08    35.08
opposite                   36.21    40.15    37.93
comparative                86.71    87.31    93.39
superlative                80.12    82.00    87.25
present-participle         77.27    80.78    83.05
nationality-adjective      90.43    90.18    88.49
past-tense                 72.37    73.40    75.90
plural                     80.18    81.83    74.55
plural-verbs               58.51    63.68    78.97
SYNTACTIC TOTAL            72.04    73.45    74.59
COMBINED TOTAL             76.08    76.49    76.16
[0066] Table A shows the performance of the default CBOW implementation of word2vec relative to the directional configuration. Most prominently, PENN outperforms the word2vec implementation using only a window size of 1, whereas word2vec was parameterized with the default of 10. Furthermore, it can be seen that increasing dimensionality of baseline CBOW word2vec past 500 achieves suboptimal performance. Thus, a fair comparison of two models should be between optimal (as opposed to equal) parameterization for each model. This is especially important given that PENN models are modeling a much richer probability distribution, given that order is being preserved. Thus, optimal parameter settings often require larger dimensionality. Additionally in this table, it can be seen that, unlike the original CBOW
word2vec, bigger window size is not always better. Larger windows tend to create slightly more semantic embeddings, whereas smaller window sizes tend to create slightly more syntactic embeddings. This follows the intuition that syntax plays a large role in grammar, which is dictated by rules about which words make sense to occur immediately next to each other. Words that are +5 words apart cluster based on subject matter and semantics as opposed to grammar. With respect to window size and overall quality, because partitions slice up the global vector for a word, increasing the window size decreases the size of each partition in the window if the global vector size remains constant. Since each embedding is attempting to model a very complex (hundreds of thousands of words) probability distribution, the partition size in each partition must remain high enough to model this distribution. Thus, modeling large windows for semantic embeddings is optimal when using either the directional embedding model, which has a fixed partition size of 2, or a large global vector size. The directional model with optimal parameters has slightly less quality than the windowed model with optimal parameters due to the vector averaging occurring in each window pane.
TABLE B
[0067] Table B documents the change in syntactic analogy query quality as a result of the interpolated DIEM vectors. For the DIEM experiment, each analogy query was first performed by running the query on CLOW and DIEM independently, and selecting the top thousand CLOW cosine similarities. The present inventors summed the squared cosine similarity of each
of these top thousand with each associated cosine similarity returned by the DIEM, and resorted. This was found to be an efficient estimation of concatenation that did not reduce quality.
[0068] Final results show a lift in quality and size over previous models, with a 40% syntactic lift over other results. Within PE models, there exists a speed versus performance tradeoff between SG-DIEM and CLOW-DIEM. Unlike word2vec, where SG was both slower and higher quality, CLOW is the slower model and achieves a higher score. In this case, the present inventors achieve a 20x level of parallelism in SG-DIEM relative to CLOW, with each model training partitions of 250 dimensions (250 * 20 = 5000 final dimensionality). A 160 billion parameter network was also trained overnight on 3 multi-core CPUs; however, it yielded 20,000-dimensional vectors for each word and subsequently overfit the training data. This is because a dataset of 8 billion tokens with a negative sampling parameter of 10 has 80 billion training examples. Having more parameters than training examples overfits a dataset, whereas a 40-billion-parameter network performs at parity with the current state of the art.
[0069] FIG. 6 displays the relative accuracy of each partition in a PENN model as judged by row-relative word-analogy scores. Other experiments indicated that the pattern present in the heat-map is consistent across parameter tunings. There is a clear quality difference between window positions that predict forward (left side of the figure) and window positions that predict backward (right side of the figure). "Currency" achieves most of its predictive power in short range predictions, whereas "capital-common countries" is a much smoother gradient over the window. These patterns support the intuition that different window positions play different roles in different tasks.
TABLE C
Semantic Architecture      CBOW     CLOW     DIEM
Semantic Vector Dim.       500      500      500
SEMANTIC TOTAL             81.02    80.19    80.19
adjective-to-adverb        37.70    35.08    94.55
opposite                   36.25    40.15    74.60
comparative                86.71    87.31    92.49
superlative                80.12    82.00    87.61
present-participle         77.27    80.78    93.27
nationality-adjective      90.43    90.18    71.04
past-tense                 72.37    73.40    47.56
plural                     80.18    81.83    93.69
plural-verbs               58.51    63.68    95.97
SYNTACTIC TOTAL            72.04    73.45    81.53
COMBINED SCORE             76.08    76.49    80.93
TABLE D

Configuration Style        W2V      D        W & D
Training Style             CBOW     CLOW     CLOW
Word Vector Size           500      500      500
Partition Size             500      250      250
Window Size                10       10       1
capital-common             89.72    92.29    94.86
capital-world              92.11    92.46    90.96
currency                   14.63    19.95    12.37
city-in-state              78.76    72.48    69.56
family                     82.81    86.76    85.18
SEMANTIC TOTAL             81.02    80.19    78.07
adjective-to-adverb        37.70    35.08    35.08
opposite                   36.21    40.15    37.93
comparative                86.71    87.31    93.39
superlative                80.12    82.00    87.25
present-participle         77.27    80.78    83.05
nationality-adjective      90.43    90.18    88.49
past-tense                 72.37    73.40    75.90
plural                     80.18    81.83    74.55
plural-verbs               58.51    63.68    78.97
SYNTACTIC TOTAL            72.04    73.45    74.59
COMBINED TOTAL             76.08    76.49    76.16
[0070] Tables C and D show the performance of the default CBOW implementation of word2vec relative to the directional configuration. Most prominently, PENN outperforms the word2vec implementation using only a window size of 1, whereas word2vec was parameterized with the default of 10. Furthermore, it can be seen that increasing dimensionality of baseline CBOW word2vec past 500 achieves suboptimal performance. Thus, a fair comparison of two models should be between optimal (as opposed to equal) parameterization for each model. This is especially important given that PENN models are modeling a much richer probability distribution, given that order is being preserved. Thus, optimal parameter settings often require larger dimensionality. Additionally in this table, it can be seen that, unlike the original CBOW word2vec, bigger window size is not always better. Larger windows tend to create slightly more semantic embeddings, whereas smaller window sizes tend to create slightly more syntactic embeddings. This follows the intuition that syntax plays a large role in grammar, which is dictated by rules about which words make sense to occur immediately next to each other. Words that are +5 words apart cluster based on subject matter and semantics as opposed to grammar. With respect to window size and overall quality, because partitions slice up the global vector for a word, increasing the window size decreases the size of each partition in the window if the global vector size remains constant. Since each embedding is attempting to model a very complex (hundreds of thousands of words) probability distribution, the partition size in each partition must remain high enough to model this distribution. Thus, modeling large windows for semantic embeddings is optimal when using either the directional embedding model, which has a fixed partition size of 2, or a large global vector size. The directional model with optimal parameters has slightly less quality than the windowed model with optimal parameters due to the vector averaging occurring in each window pane.
[0071] Table B documents the change in syntactic analogy query quality as a result of the interpolated DIEM vectors. For the DIEM experiment, each analogy query was first performed by running the query on CLOW and DIEM independently, and selecting the top thousand CLOW cosine similarities. The present inventors summed the squared cosine similarity of each of these top thousand with each associated cosine similarity returned by the DIEM, and resorted. This was found to be an efficient estimation of concatenation that did not reduce quality. Final
results show a lift in quality and size over previous models, with a 40% syntactic lift over the best published result. Within PE models, there exists a speed vs. performance tradeoff between SG-DIEM and CLOW-DIEM. Unlike word2vec, where SG was both slower and higher quality, CLOW is the slower model and achieves a higher score. In this case, the present inventors achieved a 20x level of parallelism in SG-DIEM relative to CLOW, with each model training partitions of 250 dimensions (250 * 20 = 5000 final dimensionality). A 160 billion parameter network was also trained overnight on 3 multi-core CPUs; however, it yielded 20,000-dimensional vectors for each word and subsequently overfit the training data. This is because a dataset of 8 billion tokens with a negative sampling parameter of 10 has 80 billion training examples. Having more parameters than training examples overfits a dataset, whereas a 40-billion-parameter network performs at parity with the current state of the art, as pictured in Table E.
TABLE E
[0072] The present inventors conducted further experiments to approximate the full conditional distribution of word embeddings. These experiments were likewise conducted on word-analogy tasks in the Google Analogy Dataset. This data set contains 19,544 questions asking "a is to b as c is to ?" and is split into 14 subcategories, 5 semantic and 9 syntactic.
[0073] The training occurs over the 'data-phrase2.txt' dataset available from the Google word2vec website using the packaged word-analogy evaluation script for querying each individual model. Because querying each semantic model requires as much as 128 GB of RAM, a normalized concatenation approximation was used similar to that in the experiments described above. Each analogy was queried against each semantic model in parallel, and the top 1000 results were saved. Each word's score was summed across the various models to create a global
score for each word. The word with the maximum value was selected as the ensemble's prediction. In the present inventors' experiments, each score in every semantic model (PE model) was raised to the power of 0.1 before being summed. This was found to normalize the models to similar confidence ranges.
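The normalization and summation of per-model scores can be sketched as follows; the score values and model count are illustrative placeholders:

from collections import defaultdict

model_scores = [
    {"woman": 0.81, "girl": 0.74, "female": 0.70},   # top results from one semantic (PE) model
    {"woman": 0.62, "lady": 0.60, "girl": 0.55},     # top results from another semantic (PE) model
]

global_score = defaultdict(float)
for scores in model_scores:
    for word, score in scores.items():
        global_score[word] += score ** 0.1            # raise to the power 0.1, then sum across models

prediction = max(global_score, key=global_score.get)  # word with the maximum global score
print(prediction)   # "woman"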
[0074] For syntactic analogy queries, the top 100 words from the semantic step were selected and used to create syntactic embeddings. The analogy query was then performed using the syntactic embeddings such that syntactic-embedding-based cosine distance scores were generated. These scores were summed element-wise with the semantic scores of the original top 100 words. The top word was then selected as the final analogy answer. This step was skipped for analogies coming from semantic categories. Syntactic scores were normalized by raising them to the power of 10.
[0075] The experiment described below approximates the full conditional distribution by concatenating embeddings from a variety of configurations. These configurations are shown in Table F below:
TABLE F
[0076] The results of these experiments are shown in Table G:
TABLE G
[0077] Note that adding additional models to the ensemble increased the overall score even when the model being added had a lower score than the ensemble already achieved. At a granular level, this is a result of the uniqueness of the model being added. The word-analogy misses that the added model produces are significantly different from the misses that the ensemble produces, such that their combination achieves a higher score than either did apart.
[0078] Final results show a lift in quality over previous models with a 59.2% error reduction in syntactic scores and a 40.2% error reduction overall relative to word2vec. Training semantic word embeddings based on varying probability distributions, normalizing, and concatenating is beneficial for word-analogy tasks, achieving state-of-the-art performance on both semantic and syntactic categories by a significant margin. These results are based on the intuition that different distributions model different perspectives on the same word, the aggregate of which is more expressive than each embedding individually.
[0079] Experiments were further performed using embodiments to perform neural language modeling across different languages, all performed with the IMDB movie review corpus. The
corpus consists of 100,000 movie reviews that are labeled for sentiment polarity (positive / negative). The reviews are split into training, testing and development sections. The training and testing sets have 25,000 reviews each with 12,500 positive and 12,500 negative reviews. The remaining 50,000 documents comprise the development set.
[0080] The present inventors used the Google Translate API to curate a Spanish movie review corpus. Systems such as Google's translation API have been evaluated by research in the NLP literature and are generally accepted as a mechanism with which to curate labeled data sets in new languages [1][2][3]. For each review in the original English IMDB corpus, an API request was made to the Google Translate API to translate the entire text of the review to Spanish. The resulting paragraph-level translations maintained the same polarity label as well as ID in order to maintain one-to-one comparisons between the Spanish and English corpora.
[0081] The present inventors used word2vec's skip-gram architecture to learn 100 dimensional feature representations for words in the corpus by using hierarchical softmax and negative sampling (with 10 samples) for the focus word representations. Pre-processing consists of lower-casing, separating words from punctuation, and removing HTML tags. After learning word representations, stop-words are filtered out using NLTK's [4] stop word corpus and review-level representations are created by averaging the features for each remaining word in the review.
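The review-level averaging step can be sketched as follows; the word vectors are random placeholders for the learned 100-dimensional representations, and the sketch assumes NLTK's stop word corpus has already been downloaded:

import numpy as np
from nltk.corpus import stopwords

rng = np.random.default_rng(5)
word_vectors = {w: rng.normal(size=100) for w in ["this", "movie", "was", "wonderful"]}
stop = set(stopwords.words("english"))

def review_vector(tokens):
    # Filter out stop words, then average the remaining word features
    kept = [word_vectors[t] for t in tokens if t in word_vectors and t not in stop]
    return np.mean(kept, axis=0)

print(review_vector(["this", "movie", "was", "wonderful"]).shape)   # (100,)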
[0082] The present inventors used Paragraph Vector's distributed bag-of-words (dbow) configuration to learn 100 dimensional feature representations for reviews by using both hierarchical softmax and negative sampling (with 10 samples) for the focus word representations. Pre-processing consists of lower-casing, separating words from punctuation, and removing HTML tags. Stop words are preserved in the reviews.
[0083] The present inventors used PENN's skip-gram architecture to learn 100-dimensional feature representations for words in the corpus by using negative sampling (with 10 samples) and no hierarchical softmax for the focus representations. Partitions from window positions -9 to -4 were used. The pre-processing and averaging steps are identical to the process described in the word2vec configuration section.
[0084] Model ensembling was performed using an augmented implementation of the recent work by [5]. For each corpus (English and Spanish), a validation word model and a full word model were trained. The validation word models (word2vec and PENN) are trained on 20,000 reviews from the training sets (10,000 positive, 10,000 negative) and 50,000 reviews in the development sets. The remaining 5,000 reviews in the training sets are used as validation sets. The full word models are trained on the entire training sets and development sets. Paragraph Vector representations are learned using unsupervised pre-training over the entire corpus (training, development, and testing) using the implementation provided with [5].
[0085] Single language models pre-train English and Spanish word models independently from one another. The English and Spanish feature representations are learned in separate unsupervised models. Their evaluations act as a baseline with which to interpret the cross-language models in later experiments. Review-level representations are used as features in a support vector machine (SVM). A separate classifier was used for each language. The unsupervised models for English and Spanish never interact. Spanish models were evaluated against the Spanish test set and English models against the English test set. The resulting scores for these models are provided in Table H.
TABLE H
Pre-Training    Training    Target Language    Model               Percent Accuracy
English         English     English            Word2Vec            88.42
English         English     English            Paragraph Vector    88.55
English         English     English            PENN                87.90
English         English     English            Ensemble            89.22
Spanish         Spanish     Spanish            Word2Vec            87.31
Spanish         Spanish     Spanish            Paragraph Vector    85.30
Spanish         Spanish     Spanish            PENN                82.25
Spanish         Spanish     Spanish            Ensemble            86.63
[0086] Cross-language models pre-train using both English and Spanish to learn representations. The English and Spanish feature representations are learned in the same unsupervised model.
Review-level representations are used as features in an SVM. The classifier is trained on English examples. No training is performed over the Spanish corpus. The features learned in the cross-language pretraining allow feature resources to be shared between both Spanish and English. The resulting models are evaluated on the Spanish corpus. The scores for these models are provided in Table I.
TABLE I
Pre-Training         Training    Target Language    Model               Percent Accuracy
English + Spanish    English     English            Word2Vec            77.54
English + Spanish    English     English            Paragraph Vector    86.02
English + Spanish    English     English            PENN                78.33
English + Spanish    English     English            Ensemble            85.11
English + Spanish    English     Spanish            Word2Vec            80.08
English + Spanish    English     Spanish            Paragraph Vector    78.86
English + Spanish    English     Spanish            PENN                77.54
[0087] Bilingual feature models ensemble single language models with cross-language models. The present inventors leverage ID mappings between the English and Spanish corpora to ensemble each combination of pre-training and training routines mentioned in the previous section. Furthermore, n-gram models (no pre-training) were incorporated that are trained in English. The first n-gram model is an Averaged Perceptron with high-affinity n-gram features (chosen through Pearson's correlation as proposed in [6]). The second n-gram model is a Naive Bayes SVM as proposed in [7]. Maintaining IDs between the languages allows for obtaining multiple predictions for each review: some models predict on the English version of the review, and other models predict on the Spanish translation of the review, creating an ensembled review sentiment score. This ensembling exceeds the state-of-the-art for IMDB polarity classification. These scores are provided in Table J below:
TABLE J
Pre-Training    Training    Target Language    Model                        Percent Accuracy
English         English     English            Word2Vec                     88.42
English         English     English            Paragraph Vector             88.55
English         English     English            PENN                         87.90
English         English     English            Ensemble                     89.22
Spanish         Spanish     Spanish            Word2Vec                     87.31
Spanish         Spanish     Spanish            Paragraph Vector             85.30
Spanish         Spanish     Spanish            PENN                         82.25
None            English     English            NBSVM-Tri                    91.87
None            English     English            Averaged Perceptron          87.75
                                               Full Ensemble                94.20
                                               Previous State-of-the-Art    92.57
[0088] The present inventors' experiments demonstrate the efficacy of neural language modeling for cross-language sentiment classification. Neural language models encode meaningful relationships between words by mapping them to vector space, and this property of their embeddings helps explain why the cross-language models can effectively predict on languages on which they have not trained. The present inventors employ three different techniques for unsupervised pre-training and show that each technique is capable of encoding semantic relationships across languages. The present inventors trained baseline, single language models with which to compare the cross-language predictions (Table H). The highest baseline score for Spanish is 87.31% (word2vec) and the ensemble score is 86.63%. Using the baseline scores for comparison, the present inventors evaluate cross-language models on the Spanish corpus. Without training on Spanish, the models are able to classify polarity with 80% accuracy, and when these same models are ensembled together they classify polarity with an accuracy of 85.51% (Table I). Thus, the present inventors achieve a margin of 2% between the baseline, single language models and cross-language models. The overall accuracy of the cross-language models increases by 5% when they are ensembled. This behavior suggests that pretraining for
each language captures slightly different patterns and structure that ultimately help the classifier accurately predict labels for those examples near the decision boundary.
[0089] Further to the examples described above for implementing various aspects of the present disclosure across various fields of application, aspects of the present disclosure may be implemented in numerous other fields, including but not limited to healthcare, for instance in real-time clinical monitoring and historical reporting on patient populations. Some embodiments described herein may be applied in clinical decision support and other tasks that require summaries of unstructured text. In one example use case, aspects of the present disclosure can be used for modeling the likelihood of a patient developing an infection, including full body systemic infection. More generally speaking, aspects of the present disclosure may be used for predicting whether or not a patient will develop a particular health condition. Electronic health records (EHR) for various patients may be used with aspects of the present disclosure as described above in order to learn representations of the positions of nodes and to use embeddings to predict whether a patient would develop an infection. The locations of particular clusters in vector space can be representative of such likelihoods of developing certain conditions.
[0090] FIG. 7 is a computing architecture diagram of a computing system capable of implementing aspects of the present disclosure in accordance with one or more embodiments. The computing system includes a computer 700 that can be configured to perform one or more functions associated with the present disclosed technology. The computer 700 includes a processing unit 702, a system memory 704, and a system bus 706 that couples the memory 704 to the processing unit 702. The computer 700 further includes a mass storage device 712 for storing program modules 714. The program modules 714 can include computer-executable modules for performing the one or more functions associated with FIGS. 1-6. The mass storage device 712 further includes a data store 716. The mass storage device 712 is connected to the processing unit 702 through a mass storage controller (not shown) connected to the bus 706. The mass storage device 712 and its associated computer-storage media provide non-volatile storage for the computer 700. Although the description of computer-readable storage media contained herein refers to a mass storage device, such as a hard disk or CD-ROM drive, it should be
appreciated by those skilled in the art that computer-readable storage media can be any available computer-readable storage medium that can be accessed and read by the computer 700.
[0091] By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-storage instructions, data structures, program modules, or other data. For example, computer-readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, digital versatile disks ("DVD"), HD-DVD, BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 700. Computer-readable storage media as described herein does not include transitory signals.
[0092] According to various embodiments, the computer 700 can operate in a networked environment using logical connections to remote computers through a network 718. The computer 700 can connect to the network 718 through a network interface unit 710 connected to the bus 706. It should be appreciated that the network interface unit 710 can also be utilized to connect to other types of networks and remote computer systems. The computer 700 can also include an input/output controller 708 for receiving and processing input from a number of input devices. Input devices may include, but are not limited to, keyboards, mice, stylus, touchscreens, microphones, audio capturing devices, or image/video capturing devices. An end user may utilize such input devices to interact with a user interface such as a graphical user interface for managing various functions performed by the computer 700. The bus 706 can enable the processing unit 702 to read code and/or data to/from the mass storage device 712 or other computer-storage media. The computer-readable storage media can represent apparatus in the form of storage elements that are implemented using any suitable technology, including but not limited to semiconductors, magnetic materials, optics, or the like. The program modules 714 include software instructions that, when loaded into the processing unit 702 and executed, cause
the computer 700 to provide functions for neural language modeling in accordance with aspects of the present disclosure described herein with reference to exemplary embodiments.
[0093] The program modules 714 can also provide various tools or techniques by which the computer 700 can participate within the overall systems or operating environments using the components, flows, and data structures discussed throughout this description. In general, the program modules 714 can, when loaded into the processing unit 702 and executed, transform the processing unit 702 and the overall computer 700 from a general- purpose computing system into a special-purpose computing system. The processing unit 702 can be constructed from any number of transistors or other discrete circuit elements, which can individually or collectively assume any number of states. More specifically, the processing unit 702 can operate as a finite-state machine, in response to executable instructions contained within the program modules 714. These computer-executable instructions can transform the processing unit 702 by specifying how the processing unit 702 transitions between states, thereby transforming the transistors or other discrete hardware elements constituting the processing unit 702 .
[0094] Encoding the program modules 714 can also transform the physical structure of the computer-storage media. The specific transformation of physical structure can depend on various factors, in different implementations of this description. Examples of such factors can include, but are not limited to: the technology used to implement the computer-storage media, whether the computer storage media are characterized as primary or secondary storage, and the like. For example, if the computer-storage media are implemented as semiconductor-based memory, the program module 714 can transform the physical state of the semiconductor memory, when the software is encoded therein. For example, the program modules 714 can transform the state of transistors, capacitors, or other discrete circuit elements constituting the semiconductor memory.
[0095] As another example, the computer-readable storage media can be implemented using magnetic or optical technology. In such implementations, the program modules 714 can transform the physical state of magnetic or optical media, when the software is encoded therein. These transformations can include altering the magnetic characteristics of particular locations within given magnetic media. These transformations can also include altering the physical
features or characteristics of particular locations within given optical media, to change the optical characteristics of those locations. It should be appreciated that other transformations of physical media are possible without departing from the scope of the present disclosure.
[0096] While the present disclosure has been described in connection with a plurality of exemplary aspects, as illustrated in the various figures and discussed above, it is understood that other similar aspects can be used, or modifications and additions can be made to the described aspects, for performing the same function of the present disclosure without deviating therefrom. For example, in various aspects of the disclosure, methods and compositions were described according to aspects of the presently disclosed subject matter. However, other methods or compositions equivalent to the described aspects are also contemplated by the teachings herein. Therefore, the present disclosure should not be limited to any single aspect, but rather construed in breadth and scope in accordance with the appended claims.
Claims
1. A computer-implemented neural network, comprising:
a plurality of neural nodes, each of the neural nodes having a plurality of input weights corresponding to a vector of real numbers;
an input neural node corresponding to a linguistic unit selected from an ordered list of a plurality of linguistic units;
an embedding layer comprising a plurality of embedding node partitions, each embedding node partition comprising one or more neural nodes, wherein each of the embedding node partitions corresponds to a position in the ordered list relative to a focus term, is configured to receive an input from an input node, and is configured to generate an output; and
a classifier layer comprising a plurality of neural nodes, each neural node in the classifier layer configured to receive the embedding outputs from the embedding layer, and configured to generate an output corresponding to a probability that a particular linguistic unit is the focus term.
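For illustration only, the arrangement recited in claim 1 can be pictured with a small Python/NumPy sketch: a partitioned embedding layer whose partition is selected by a context unit's position relative to the focus term, feeding a classifier layer that scores candidate focus terms. The vocabulary size, embedding width, window size, sigmoid output, and variable names below are assumptions introduced here, not features recited in the claims.

```python
import numpy as np

VOCAB, DIM, WINDOW = 10_000, 128, 2        # illustrative sizes, not from the claims
PARTS = 2 * WINDOW                          # one embedding partition per relative position
PART_DIM = DIM // PARTS

rng = np.random.default_rng(0)
embeddings = rng.normal(0, 0.01, (VOCAB, PARTS, PART_DIM))   # embedding layer
classifier = rng.normal(0, 0.01, (VOCAB, PARTS, PART_DIM))   # classifier layer

def focus_probability(context_ids, candidate_id):
    """Probability-like score that `candidate_id` is the focus term of the ordered context."""
    # Each context unit contributes only the partition matching its position relative to the focus.
    hidden = np.concatenate([embeddings[w, p] for p, w in enumerate(context_ids)])
    weights = classifier[candidate_id].reshape(-1)
    return 1.0 / (1.0 + np.exp(-hidden @ weights))            # sigmoid output node
```

In this reading, the per-position partitioning means the same context word is represented by a different slice of its embedding depending on where it sits relative to the focus term.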
2. The computer-implemented neural network of claim 1, wherein the linguistic unit is a character.
3. The computer-implemented neural network of claim 1, wherein the linguistic unit is a word.
4. The computer-implemented neural network of claim 1, wherein the positions relative to a focus term of the embedding node partitions are window positions relative to the focus term.
5. The computer-implemented neural network of claim 1, wherein the positions relative to a focus term of the embedding node partitions are directions relative to the focus term.
6. The computer-implemented neural network of claim 1, wherein the neural network is trained by performing functions that comprise:
removing the focus term from the ordered list of linguistic units;
selecting a partition from each remaining linguistic unit's embeddings based on that linguistic unit's position relative to the focus term;
concatenating the partitions;
propagating the partitions through the classifier layer; and
updating weights for one or more neural nodes in the classifier layer and embedding layer such that the probability of the presence of the focus term is approximately 100% and the probabilities of other randomly sampled linguistic units are approximately 0%.
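Continuing the illustrative sketch above, the training functions of claim 6 might be realized roughly as follows. The learning rate, the number of negative samples, and the use of a sigmoid with negative sampling are assumptions for illustration, not part of the claim.

```python
def train_step(context_ids, focus_id, lr=0.025, negatives=5):
    # The focus term has been removed from the ordered list; `context_ids` are the remaining
    # linguistic units, ordered by their position relative to the focus term.
    hidden = np.concatenate([embeddings[w, p] for p, w in enumerate(context_ids)])
    grad_hidden = np.zeros_like(hidden)

    # Push the focus term toward probability ~1 and randomly sampled units toward ~0.
    targets = [(focus_id, 1.0)] + [(int(rng.integers(VOCAB)), 0.0) for _ in range(negatives)]
    for word_id, label in targets:
        w = classifier[word_id].reshape(-1)
        pred = 1.0 / (1.0 + np.exp(-hidden @ w))
        err = lr * (label - pred)
        grad_hidden += err * w
        classifier[word_id] += (err * hidden).reshape(PARTS, PART_DIM)

    # Propagate the accumulated error back only into each context unit's selected partition.
    for p, word_id in enumerate(context_ids):
        embeddings[word_id, p] += grad_hidden[p * PART_DIM:(p + 1) * PART_DIM]
```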
7. The computer-implemented neural network of claim 1, wherein the neural network is trained by performing functions that comprise:
training a first partition of the embedding node partitions by:
removing the focus term from the ordered list of linguistic units;
selecting a partition from each remaining linguistic unit's embeddings based on that respective linguistic unit's position relative to the focus term; and
updating weights for each neural node in the classifier layer and embedding layer such that the probability of the presence of the focus term is about 100% and the probabilities of other randomly sampled linguistic units are about 0%.
8. The computer-implemented neural network of claim 7, wherein the neural network is trained by performing functions that further comprise:
training a second partition by:
removing the focus term from the ordered list of linguistic units;
selecting a partition from each remaining linguistic unit's embedding based on that linguistic unit's position relative to the focus term; and
updating the weights so that the modeled probability of the presence of the focus term is about 100% and the probabilities of other randomly sampled linguistic units are about 0%,
wherein the steps of training a first partition and training a second partition are performed in parallel.
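Claims 7 and 8 recite training individual embedding partitions, with the first and second partitions trained in parallel. Reusing the arrays from the first sketch, one hedged way to picture this is shown below; the use of threads and the single-positive update rule are illustrative assumptions only.

```python
from concurrent.futures import ThreadPoolExecutor

def train_partition(p, context_ids, focus_id, lr=0.025):
    # Update only partition p: the p-th context unit's embedding slice and the focus term's
    # matching classifier slice, pushing the focus term's probability toward 1.
    word_id = context_ids[p]
    h = embeddings[word_id, p].copy()
    w = classifier[focus_id, p].copy()
    pred = 1.0 / (1.0 + np.exp(-h @ w))
    err = lr * (1.0 - pred)
    classifier[focus_id, p] += err * h
    embeddings[word_id, p] += err * w

def train_partitions_in_parallel(context_ids, focus_id):
    # Each partition touches a disjoint slice of the matrices, so partitions can be updated concurrently.
    with ThreadPoolExecutor(max_workers=PARTS) as pool:
        list(pool.map(lambda p: train_partition(p, context_ids, focus_id), range(PARTS)))
```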
9. The computer-implemented neural network of claim 1, wherein particular linguistic units are selected from the ordered list of the plurality of linguistic units and multiple linguistic domains associated with the selected particular linguistic units are modeled into a common vector space, wherein each of the multiple linguistic domains corresponds to a different language.
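Claim 9 models linguistic units from multiple languages into a common vector space. A hedged, minimal way to picture this, reusing the embedding table from the first sketch, is to index one shared table with language-tagged keys; the tagging scheme and the example words are assumptions, not part of the claim.

```python
shared_vocab = {"en:dog": 0, "es:perro": 1, "en:cat": 2, "es:gato": 3}   # hypothetical mapping

def shared_embedding(language, unit):
    # Units from both languages index the same `embeddings` array, so their vectors live in one space.
    return embeddings[shared_vocab[f"{language}:{unit}"]]
```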
10. A computer-implemented neural network, comprising:
a computer-implemented data structure including:
an input matrix syn0 of N rows and M columns, wherein each row corresponds to a linguistic unit and all columns are partitioned into P partitions, wherein each partition is a vector of an equal number of numerical values, and
an output matrix syn1 of M rows and N columns, wherein each column corresponds to a linguistic unit and all rows are partitioned into P partitions of equal number and size to the partitions in syn0, wherein each partition is a vector of an equal number of numerical values,
wherein the neural network is trained to perform functions that include predicting a linguistic unit given its surrounding context linguistic units within a sentence, wherein each context linguistic unit contributes the partition of its corresponding row in syn0 that corresponds to its position relative to the focus term, and that partition is multiplied by the corresponding partition of the focus term's column in syn1.
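The matrix layout recited in claim 10 can be pictured with explicit shapes. In the sketch below, the values of N, M, and P are illustrative; the names syn0 and syn1 follow the claim, but the zero initialization and the dot-product scoring of a single partition are assumptions.

```python
import numpy as np

N, M, P = 10_000, 128, 4          # illustrative: N linguistic units, M columns, P partitions
PART = M // P

syn0 = np.zeros((N, M))           # input matrix: one row per linguistic unit, partitioned by position
syn1 = np.zeros((M, N))           # output matrix: one column per linguistic unit, same partitioning

def partition_score(context_id, focus_id, p):
    """Multiply a context unit's p-th syn0 partition by the focus term's p-th syn1 partition."""
    row_part = syn0[context_id, p * PART:(p + 1) * PART]
    col_part = syn1[p * PART:(p + 1) * PART, focus_id]
    return row_part @ col_part
```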
11. The computer-implemented neural network of claim 10, wherein each selected partition in syn0 and syn1 is updated to make correct predictions true and incorrect predictions false.
12. The computer-implemented neural network of claim 10, wherein the neural network is trained by performing functions that comprise:
removing the focus term from the ordered list of linguistic units;
selecting a partition from each remaining linguistic unit's embedding based on that linguistic unit's position relative to the focus term;
concatenating the partitions;
propagating the partitions through syn1; and
updating the weights for each neural node in the classifier layer and embedding layer such that the probability of the presence of the focus term is approximately 100% and the probabilities of the other randomly sampled linguistic units are approximately 0%.
13. The computer-implemented neural network of claim 10, wherein each partition is trained by performing functions that comprise:
removing the focus term from the ordered list of linguistic units;
selecting a partition from each remaining linguistic unit's embedding based on that linguistic unit's position relative to the focus term; and
updating the weights for each neural node in syn1 and syn0 such that the probability of the presence of the focus term approaches 100% and the probabilities of the other randomly sampled linguistic units are about 0%.
14. The computer-implemented neural network of claim 10, wherein, in accordance with an interpolation of the syn0 vectors, the linguistic units are characters according to the following procedure:
interpolating character embeddings in a bucketed fashion, such that the final word embedding is comprised of a predetermined number of buckets; and
weighting each character, via summing, into each final word embedding based on proximity to the bucket position given the respective character position in the word, such that the distance is calculated according to a squared percent distance across the word,
wherein character embeddings do not overlap across the bucketed partitions in the final word embedding.
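Claim 14 builds word embeddings from character embeddings in a bucketed fashion. The sketch below is one possible reading only: the number of buckets, the source of the character vectors, and the exact proximity weighting (here one minus a squared percent distance to the bucket centre) are assumptions, not the claimed formula.

```python
import numpy as np

CHAR_DIM, BUCKETS = 32, 4                      # illustrative sizes
char_vectors = {}                              # hypothetical lookup: character -> (CHAR_DIM,) array

def bucketed_word_embedding(word):
    buckets = np.zeros((BUCKETS, CHAR_DIM))
    n = len(word)
    for i, ch in enumerate(word):
        pos = i / max(n - 1, 1)                          # percent position across the word
        b = min(int(pos * BUCKETS), BUCKETS - 1)         # each character feeds exactly one bucket
        centre = (b + 0.5) / BUCKETS
        weight = 1.0 - (pos - centre) ** 2               # squared-percent-distance proximity weight
        buckets[b] += weight * char_vectors.get(ch, np.zeros(CHAR_DIM))
    return buckets.reshape(-1)                           # final word embedding made of BUCKETS partitions
```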
15. A system having one or more processors configured to implement:
a plurality of neural nodes, each of the neural nodes having a plurality of input weights corresponding to a vector of real numbers;
an input neural node corresponding to a linguistic unit selected from an ordered list of a plurality of linguistic units;
an embedding layer comprising a plurality of embedding node partitions, each embedding node partition comprising one or more neural nodes, wherein each of the embedding node partitions corresponds to a position in the ordered list relative to a focus term, is configured to receive an input from an input node, and is configured to generate an output; and
a classifier layer comprising a plurality of neural nodes, each neural node in the classifier layer configured to receive the embedding outputs from the embedding layer, and configured to generate an output corresponding to a probability that a particular linguistic unit is the focus term.
16. The system of claim 15, wherein the linguistic unit is a character or a word.
17. The system of claim 15, wherein the positions relative to a focus term of the embedding node partitions are window positions relative to the focus term or directions relative to the focus term.
18. The system of claim 15, wherein the plurality of neural nodes, input neural node, embedding layer, and classifier layer form a neural network, the neural network trained by performing functions that comprise:
training a first partition of the embedding node partitions by:
removing the focus term from the ordered list of linguistic units;
selecting a partition from each remaining linguistic unit's embeddings based on that respective linguistic unit's position relative to the focus term; and
updating weights for each neural node in the classifier layer and embedding layer such that the probability of the presence of the focus term is about 100% and the probabilities of other randomly sampled linguistic units are about 0%.
19. The system of claim 18, wherein the neural network is trained by performing functions that further comprise:
training a second partition by:
removing the focus term from the ordered list of linguistic units;
selecting a partition from each remaining linguistic unit's embedding based on that linguistic unit's position relative to the focus term; and
updating the weights so that the modeled probability of the presence of the focus term is about 100% and the probabilities of other randomly sampled linguistic units are about 0%,
wherein the steps of training a first partition and training a second partition are performed in parallel.
20. The system of claim 15, wherein particular linguistic units are selected from the ordered list of the plurality of linguistic units and multiple linguistic domains associated with the selected particular linguistic units are modeled into a common vector space, wherein each of the multiple linguistic domains corresponds to a different language.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP16753087.2A EP3259688B1 (en) | 2015-02-19 | 2016-02-18 | Systems and methods for neural language modeling |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562118200P | 2015-02-19 | 2015-02-19 | |
US62/118,200 | 2015-02-19 | ||
US201562128915P | 2015-03-05 | 2015-03-05 | |
US62/128,915 | 2015-03-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016134183A1 true WO2016134183A1 (en) | 2016-08-25 |
Family
ID=56689176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2016/018536 WO2016134183A1 (en) | 2015-02-19 | 2016-02-18 | Systems and methods for neural language modeling |
Country Status (3)
Country | Link |
---|---|
US (1) | US10339440B2 (en) |
EP (1) | EP3259688B1 (en) |
WO (1) | WO2016134183A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107330513A (en) * | 2017-06-28 | 2017-11-07 | 深圳爱拼信息科技有限公司 | It is a kind of to extract the method that node semantics are implied in depth belief network |
US10915711B2 (en) | 2018-12-09 | 2021-02-09 | International Business Machines Corporation | Numerical representation in natural language processing techniques |
US11151182B2 (en) * | 2017-07-24 | 2021-10-19 | Huawei Technologies Co., Ltd. | Classification model training method and apparatus |
US11531859B2 (en) | 2017-08-08 | 2022-12-20 | Samsung Electronics Co., Ltd. | System and method for hashed compressed weighting matrix in neural networks |
Families Citing this family (184)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8677377B2 (en) | 2005-09-08 | 2014-03-18 | Apple Inc. | Method and apparatus for building an intelligent automated assistant |
US9318108B2 (en) | 2010-01-18 | 2016-04-19 | Apple Inc. | Intelligent automated assistant |
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
US10002189B2 (en) | 2007-12-20 | 2018-06-19 | Apple Inc. | Method and apparatus for searching using an active ontology |
US9330720B2 (en) | 2008-01-03 | 2016-05-03 | Apple Inc. | Methods and apparatus for altering audio output signals |
US20100030549A1 (en) | 2008-07-31 | 2010-02-04 | Lee Michael M | Mobile device having human language translation capability with positional feedback |
US8676904B2 (en) | 2008-10-02 | 2014-03-18 | Apple Inc. | Electronic devices with voice command and contextual data processing capabilities |
US10255566B2 (en) | 2011-06-03 | 2019-04-09 | Apple Inc. | Generating and processing task items that represent tasks to perform |
US10276170B2 (en) | 2010-01-18 | 2019-04-30 | Apple Inc. | Intelligent automated assistant |
US8682667B2 (en) | 2010-02-25 | 2014-03-25 | Apple Inc. | User profiling for selecting user specific voice input processing information |
US9262612B2 (en) | 2011-03-21 | 2016-02-16 | Apple Inc. | Device access using voice authentication |
US10057736B2 (en) | 2011-06-03 | 2018-08-21 | Apple Inc. | Active transport based notifications |
US10134385B2 (en) | 2012-03-02 | 2018-11-20 | Apple Inc. | Systems and methods for name pronunciation |
US10417037B2 (en) | 2012-05-15 | 2019-09-17 | Apple Inc. | Systems and methods for integrating third party services with a digital assistant |
DE112014000709B4 (en) | 2013-02-07 | 2021-12-30 | Apple Inc. | METHOD AND DEVICE FOR OPERATING A VOICE TRIGGER FOR A DIGITAL ASSISTANT |
US10652394B2 (en) | 2013-03-14 | 2020-05-12 | Apple Inc. | System and method for processing voicemail |
US10748529B1 (en) | 2013-03-15 | 2020-08-18 | Apple Inc. | Voice activated device for use with a voice-based digital assistant |
WO2014197335A1 (en) | 2013-06-08 | 2014-12-11 | Apple Inc. | Interpreting and acting upon commands that involve sharing information with remote devices |
US10176167B2 (en) | 2013-06-09 | 2019-01-08 | Apple Inc. | System and method for inferring user intent from speech inputs |
EP3937002A1 (en) | 2013-06-09 | 2022-01-12 | Apple Inc. | Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant |
DE112014003653B4 (en) | 2013-08-06 | 2024-04-18 | Apple Inc. | Automatically activate intelligent responses based on activities from remote devices |
US10296160B2 (en) | 2013-12-06 | 2019-05-21 | Apple Inc. | Method for extracting salient dialog usage from live data |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9633004B2 (en) | 2014-05-30 | 2017-04-25 | Apple Inc. | Better resolution when referencing to concepts |
TWI566107B (en) | 2014-05-30 | 2017-01-11 | 蘋果公司 | Method for processing a multi-part voice command, non-transitory computer readable storage medium and electronic device |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
US9430463B2 (en) | 2014-05-30 | 2016-08-30 | Apple Inc. | Exemplar-based natural language processing |
US9338493B2 (en) | 2014-06-30 | 2016-05-10 | Apple Inc. | Intelligent automated assistant for TV user interactions |
US9818400B2 (en) | 2014-09-11 | 2017-11-14 | Apple Inc. | Method and apparatus for discovering trending terms in speech requests |
US9668121B2 (en) | 2014-09-30 | 2017-05-30 | Apple Inc. | Social reminders |
US10127911B2 (en) | 2014-09-30 | 2018-11-13 | Apple Inc. | Speaker identification and unsupervised speaker adaptation techniques |
US10074360B2 (en) | 2014-09-30 | 2018-09-11 | Apple Inc. | Providing an indication of the suitability of speech recognition |
US10152299B2 (en) | 2015-03-06 | 2018-12-11 | Apple Inc. | Reducing response latency of intelligent automated assistants |
US9886953B2 (en) | 2015-03-08 | 2018-02-06 | Apple Inc. | Virtual assistant activation |
US9721566B2 (en) | 2015-03-08 | 2017-08-01 | Apple Inc. | Competing devices responding to voice triggers |
US10460227B2 (en) | 2015-05-15 | 2019-10-29 | Apple Inc. | Virtual assistant in a communication session |
US10200824B2 (en) | 2015-05-27 | 2019-02-05 | Apple Inc. | Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device |
US10083688B2 (en) | 2015-05-27 | 2018-09-25 | Apple Inc. | Device voice control for selecting a displayed affordance |
US9552547B2 (en) * | 2015-05-29 | 2017-01-24 | Sas Institute Inc. | Normalizing electronic communications using a neural-network normalizer and a neural-network flagger |
US20160350644A1 (en) | 2015-05-29 | 2016-12-01 | Sas Institute Inc. | Visualizing results of electronic sentiment analysis |
US9595002B2 (en) | 2015-05-29 | 2017-03-14 | Sas Institute Inc. | Normalizing electronic communications using a vector having a repeating substring as input for a neural network |
US9578173B2 (en) | 2015-06-05 | 2017-02-21 | Apple Inc. | Virtual assistant aided communication with 3rd party service in a communication session |
US11025565B2 (en) | 2015-06-07 | 2021-06-01 | Apple Inc. | Personalized prediction of responses for instant messaging |
US20160378747A1 (en) | 2015-06-29 | 2016-12-29 | Apple Inc. | Virtual assistant for media playback |
US10671428B2 (en) | 2015-09-08 | 2020-06-02 | Apple Inc. | Distributed personal assistant |
US10747498B2 (en) | 2015-09-08 | 2020-08-18 | Apple Inc. | Zero latency digital assistant |
US10740384B2 (en) | 2015-09-08 | 2020-08-11 | Apple Inc. | Intelligent automated assistant for media search and playback |
US10331312B2 (en) | 2015-09-08 | 2019-06-25 | Apple Inc. | Intelligent automated assistant in a media environment |
KR102437689B1 (en) * | 2015-09-16 | 2022-08-30 | 삼성전자주식회사 | Voice recognition sever and control method thereof |
US11587559B2 (en) | 2015-09-30 | 2023-02-21 | Apple Inc. | Intelligent device identification |
US10691473B2 (en) | 2015-11-06 | 2020-06-23 | Apple Inc. | Intelligent automated assistant in a messaging environment |
US10956666B2 (en) | 2015-11-09 | 2021-03-23 | Apple Inc. | Unconventional virtual assistant interactions |
US10049668B2 (en) | 2015-12-02 | 2018-08-14 | Apple Inc. | Applying neural network language models to weighted finite state transducers for automatic speech recognition |
US10223066B2 (en) | 2015-12-23 | 2019-03-05 | Apple Inc. | Proactive assistance based on dialog communication between devices |
JP6400037B2 (en) * | 2016-03-17 | 2018-10-03 | ヤフー株式会社 | Determination apparatus and determination method |
US11227589B2 (en) | 2016-06-06 | 2022-01-18 | Apple Inc. | Intelligent list reading |
US10049663B2 (en) | 2016-06-08 | 2018-08-14 | Apple, Inc. | Intelligent automated assistant for media exploration |
US10586535B2 (en) | 2016-06-10 | 2020-03-10 | Apple Inc. | Intelligent digital assistant in a multi-tasking environment |
DK201670540A1 (en) | 2016-06-11 | 2018-01-08 | Apple Inc | Application integration with a digital assistant |
DK179049B1 (en) * | 2016-06-11 | 2017-09-18 | Apple Inc | Data driven natural language event detection and classification |
DK179415B1 (en) | 2016-06-11 | 2018-06-14 | Apple Inc | Intelligent device arbitration and control |
WO2017217661A1 (en) * | 2016-06-15 | 2017-12-21 | 울산대학교 산학협력단 | Word sense embedding apparatus and method using lexical semantic network, and homograph discrimination apparatus and method using lexical semantic network and word embedding |
US10740374B2 (en) | 2016-06-30 | 2020-08-11 | International Business Machines Corporation | Log-aided automatic query expansion based on model mapping |
WO2018022821A1 (en) * | 2016-07-29 | 2018-02-01 | Arizona Board Of Regents On Behalf Of Arizona State University | Memory compression in a deep neural network |
CN107785016A (en) * | 2016-08-31 | 2018-03-09 | 株式会社东芝 | Train the method and apparatus and audio recognition method and device of neural network aiding model |
US10474753B2 (en) | 2016-09-07 | 2019-11-12 | Apple Inc. | Language identification using recurrent neural networks |
US10043516B2 (en) | 2016-09-23 | 2018-08-07 | Apple Inc. | Intelligent automated assistant |
US10664722B1 (en) * | 2016-10-05 | 2020-05-26 | Digimarc Corporation | Image processing arrangements |
US10839284B2 (en) * | 2016-11-03 | 2020-11-17 | Salesforce.Com, Inc. | Joint many-task neural network model for multiple natural language processing (NLP) tasks |
GB201619724D0 (en) * | 2016-11-22 | 2017-01-04 | Microsoft Technology Licensing Llc | Trained data input system |
GB201620232D0 (en) * | 2016-11-29 | 2017-01-11 | Microsoft Technology Licensing Llc | Data input system with online learning |
US11281993B2 (en) | 2016-12-05 | 2022-03-22 | Apple Inc. | Model and ensemble compression for metric learning |
DE102016125162B4 (en) | 2016-12-05 | 2018-07-26 | Ernst-Moritz-Arndt-Universität Greifswald | Method and device for the automatic processing of texts |
US11068658B2 (en) * | 2016-12-07 | 2021-07-20 | Disney Enterprises, Inc. | Dynamic word embeddings |
US11204787B2 (en) | 2017-01-09 | 2021-12-21 | Apple Inc. | Application integration with a digital assistant |
CN108287858B (en) * | 2017-03-02 | 2021-08-10 | 腾讯科技(深圳)有限公司 | Semantic extraction method and device for natural language |
JP6705763B2 (en) * | 2017-03-16 | 2020-06-03 | ヤフー株式会社 | Generation device, generation method, and generation program |
US11640617B2 (en) * | 2017-03-21 | 2023-05-02 | Adobe Inc. | Metric forecasting employing a similarity determination in a digital medium environment |
US10755174B2 (en) * | 2017-04-11 | 2020-08-25 | Sap Se | Unsupervised neural attention model for aspect extraction |
CN110402445B (en) * | 2017-04-20 | 2023-07-11 | 谷歌有限责任公司 | Method and system for browsing sequence data using recurrent neural network |
US10963501B1 (en) * | 2017-04-29 | 2021-03-30 | Veritas Technologies Llc | Systems and methods for generating a topic tree for digital information |
US10417266B2 (en) | 2017-05-09 | 2019-09-17 | Apple Inc. | Context-aware ranking of intelligent response suggestions |
DK201770383A1 (en) | 2017-05-09 | 2018-12-14 | Apple Inc. | User interface for correcting recognition errors |
US10726832B2 (en) | 2017-05-11 | 2020-07-28 | Apple Inc. | Maintaining privacy of personal information |
US10395654B2 (en) | 2017-05-11 | 2019-08-27 | Apple Inc. | Text normalization based on a data-driven learning network |
DK180048B1 (en) | 2017-05-11 | 2020-02-04 | Apple Inc. | MAINTAINING THE DATA PROTECTION OF PERSONAL INFORMATION |
US11301477B2 (en) | 2017-05-12 | 2022-04-12 | Apple Inc. | Feedback analysis of a digital assistant |
DK201770428A1 (en) | 2017-05-12 | 2019-02-18 | Apple Inc. | Low-latency intelligent automated assistant |
DK179745B1 (en) | 2017-05-12 | 2019-05-01 | Apple Inc. | SYNCHRONIZATION AND TASK DELEGATION OF A DIGITAL ASSISTANT |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
DK201770411A1 (en) | 2017-05-15 | 2018-12-20 | Apple Inc. | Multi-modal interfaces |
US10403278B2 (en) | 2017-05-16 | 2019-09-03 | Apple Inc. | Methods and systems for phonetic matching in digital assistant services |
US20180336892A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Detecting a trigger of a digital assistant |
US10311144B2 (en) | 2017-05-16 | 2019-06-04 | Apple Inc. | Emoji word sense disambiguation |
US20180336275A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Intelligent automated assistant for media exploration |
DK179560B1 (en) | 2017-05-16 | 2019-02-18 | Apple Inc. | Far-field extension for digital assistant services |
US10447635B2 (en) | 2017-05-17 | 2019-10-15 | Slice Technologies, Inc. | Filtering electronic messages |
US11005864B2 (en) | 2017-05-19 | 2021-05-11 | Salesforce.Com, Inc. | Feature-agnostic behavior profile based anomaly detection |
US10817650B2 (en) | 2017-05-19 | 2020-10-27 | Salesforce.Com, Inc. | Natural language processing using context specific word vectors |
US10810472B2 (en) | 2017-05-26 | 2020-10-20 | Oracle International Corporation | Techniques for sentiment analysis of data using a convolutional neural network and a co-occurrence network |
US10958422B2 (en) * | 2017-06-01 | 2021-03-23 | Cotiviti, Inc. | Methods for disseminating reasoning supporting insights without disclosing uniquely identifiable data, and systems for the same |
US10657328B2 (en) | 2017-06-02 | 2020-05-19 | Apple Inc. | Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling |
WO2019000051A1 (en) * | 2017-06-30 | 2019-01-03 | Xref (Au) Pty Ltd | Data analysis method and learning system |
CN107436942A (en) * | 2017-07-28 | 2017-12-05 | 广州市香港科大霍英东研究院 | Word embedding grammar, system, terminal device and storage medium based on social media |
US20190066843A1 (en) * | 2017-08-22 | 2019-02-28 | Koninklijke Philips N.V. | Collapsing clinical event data into meaningful states of patient care |
WO2019053898A1 (en) * | 2017-09-15 | 2019-03-21 | Nec Corporation | Pattern recognition apparatus, pattern recognition method, and storage medium |
US10445429B2 (en) | 2017-09-21 | 2019-10-15 | Apple Inc. | Natural language understanding using vocabularies with compressed serialized tries |
US10755051B2 (en) | 2017-09-29 | 2020-08-25 | Apple Inc. | Rule-based natural language processing |
US10915707B2 (en) * | 2017-10-20 | 2021-02-09 | MachineVantage, Inc. | Word replaceability through word vectors |
WO2019083519A1 (en) | 2017-10-25 | 2019-05-02 | Google Llc | Natural language processing with an n-gram machine |
US11335460B2 (en) * | 2017-11-09 | 2022-05-17 | International Business Machines Corporation | Neural network based selection of representative patients |
US11030997B2 (en) * | 2017-11-22 | 2021-06-08 | Baidu Usa Llc | Slim embedding layers for recurrent neural language models |
US10636424B2 (en) | 2017-11-30 | 2020-04-28 | Apple Inc. | Multi-turn canned dialog |
US10733982B2 (en) | 2018-01-08 | 2020-08-04 | Apple Inc. | Multi-directional dialog |
CN108287820B (en) * | 2018-01-12 | 2021-06-11 | 鼎富智能科技有限公司 | Text representation generation method and device |
US10891943B2 (en) * | 2018-01-18 | 2021-01-12 | Citrix Systems, Inc. | Intelligent short text information retrieve based on deep learning |
US11803883B2 (en) | 2018-01-29 | 2023-10-31 | Nielsen Consumer Llc | Quality assurance for labeled training data |
US10733375B2 (en) | 2018-01-31 | 2020-08-04 | Apple Inc. | Knowledge-based framework for improving natural language understanding |
US10789959B2 (en) | 2018-03-02 | 2020-09-29 | Apple Inc. | Training speaker recognition models for digital assistants |
US11334790B1 (en) * | 2018-03-02 | 2022-05-17 | Supplypike, Llc | System and method for recurrent neural networks for forecasting of consumer goods' sales and inventory |
JP6973192B2 (en) * | 2018-03-08 | 2021-11-24 | 日本電信電話株式会社 | Devices, methods and programs that utilize the language model |
US10592604B2 (en) | 2018-03-12 | 2020-03-17 | Apple Inc. | Inverse text normalization for automatic speech recognition |
US10885082B2 (en) | 2018-03-22 | 2021-01-05 | International Business Machines Corporation | Implicit relation induction via purposeful overfitting of a word embedding model on a subset of a document corpus |
US10818288B2 (en) | 2018-03-26 | 2020-10-27 | Apple Inc. | Natural assistant interaction |
US10909331B2 (en) | 2018-03-30 | 2021-02-02 | Apple Inc. | Implicit identification of translation payload with neural machine translation |
US10928918B2 (en) | 2018-05-07 | 2021-02-23 | Apple Inc. | Raise to speak |
US11145294B2 (en) | 2018-05-07 | 2021-10-12 | Apple Inc. | Intelligent automated assistant for delivering content from user experiences |
US10984780B2 (en) * | 2018-05-21 | 2021-04-20 | Apple Inc. | Global semantic word embeddings using bi-directional recurrent neural networks |
US10678830B2 (en) * | 2018-05-31 | 2020-06-09 | Fmr Llc | Automated computer text classification and routing using artificial intelligence transfer learning |
DK179822B1 (en) | 2018-06-01 | 2019-07-12 | Apple Inc. | Voice interaction at a primary device to access call functionality of a companion device |
US11386266B2 (en) | 2018-06-01 | 2022-07-12 | Apple Inc. | Text correction |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
US10892996B2 (en) | 2018-06-01 | 2021-01-12 | Apple Inc. | Variable latency device coordination |
DK201870355A1 (en) | 2018-06-01 | 2019-12-16 | Apple Inc. | Virtual assistant operation in multi-device environments |
US11076039B2 (en) | 2018-06-03 | 2021-07-27 | Apple Inc. | Accelerated task performance |
US10395169B1 (en) * | 2018-07-06 | 2019-08-27 | Global Elmeast Inc. | Self learning neural knowledge artifactory for autonomous decision making |
US11610107B2 (en) | 2018-07-06 | 2023-03-21 | Global Elmeast Inc. | Methodology to automatically incorporate feedback to enable self learning in neural learning artifactories |
US10311058B1 (en) | 2018-07-06 | 2019-06-04 | Global Elmeast Inc. | Techniques for processing neural queries |
US11449676B2 (en) | 2018-09-14 | 2022-09-20 | Jpmorgan Chase Bank, N.A. | Systems and methods for automated document graphing |
WO2020056199A1 (en) * | 2018-09-14 | 2020-03-19 | Jpmorgan Chase Bank, N.A. | Systems and methods for automated document graphing |
US10812449B1 (en) * | 2018-09-19 | 2020-10-20 | Verisign | Method for generating a domain name using a learned information-rich latent space |
US11010561B2 (en) | 2018-09-27 | 2021-05-18 | Apple Inc. | Sentiment prediction from textual data |
US10839159B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Named entity normalization in a spoken dialog system |
US11170166B2 (en) | 2018-09-28 | 2021-11-09 | Apple Inc. | Neural typographical error modeling via generative adversarial networks |
US11462215B2 (en) | 2018-09-28 | 2022-10-04 | Apple Inc. | Multi-modal inputs for voice commands |
US11475898B2 (en) | 2018-10-26 | 2022-10-18 | Apple Inc. | Low-latency multi-speaker speech recognition |
CN110033091B (en) | 2018-12-13 | 2020-09-01 | 阿里巴巴集团控股有限公司 | Model-based prediction method and device |
US11638059B2 (en) | 2019-01-04 | 2023-04-25 | Apple Inc. | Content playback on multiple devices |
CN109740164B (en) * | 2019-01-09 | 2023-08-15 | 国网浙江省电力有限公司舟山供电公司 | Electric power defect grade identification method based on depth semantic matching |
CN109858031B (en) * | 2019-02-14 | 2023-05-23 | 北京小米智能科技有限公司 | Neural network model training and context prediction method and device |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
US11113469B2 (en) * | 2019-03-27 | 2021-09-07 | International Business Machines Corporation | Natural language processing matrices |
US11144735B2 (en) * | 2019-04-09 | 2021-10-12 | International Business Machines Corporation | Semantic concept scorer based on an ensemble of language translation models for question answer system |
US11307752B2 (en) | 2019-05-06 | 2022-04-19 | Apple Inc. | User configurable task triggers |
DK201970509A1 (en) | 2019-05-06 | 2021-01-15 | Apple Inc | Spoken notifications |
US11475884B2 (en) | 2019-05-06 | 2022-10-18 | Apple Inc. | Reducing digital assistant latency when a language is incorrectly determined |
US11423908B2 (en) | 2019-05-06 | 2022-08-23 | Apple Inc. | Interpreting spoken requests |
CN113811870A (en) * | 2019-05-15 | 2021-12-17 | 北京嘀嘀无限科技发展有限公司 | System and method for generating abstract text excerpts |
US11140099B2 (en) | 2019-05-21 | 2021-10-05 | Apple Inc. | Providing message response suggestions |
US11386276B2 (en) * | 2019-05-24 | 2022-07-12 | International Business Machines Corporation | Method and system for language and domain acceleration with embedding alignment |
DK180129B1 (en) | 2019-05-31 | 2020-06-02 | Apple Inc. | User activity shortcut suggestions |
US11496600B2 (en) | 2019-05-31 | 2022-11-08 | Apple Inc. | Remote execution of machine-learned models |
US11289073B2 (en) | 2019-05-31 | 2022-03-29 | Apple Inc. | Device text to speech |
DK201970510A1 (en) | 2019-05-31 | 2021-02-11 | Apple Inc | Voice identification in digital assistant systems |
US11360641B2 (en) | 2019-06-01 | 2022-06-14 | Apple Inc. | Increasing the relevance of new available information |
US11468890B2 (en) | 2019-06-01 | 2022-10-11 | Apple Inc. | Methods and user interfaces for voice-based control of electronic devices |
US11915701B2 (en) | 2019-06-05 | 2024-02-27 | Refinitiv Us Organization Llc | Automatic summarization of financial earnings call transcripts |
WO2020245754A1 (en) * | 2019-06-05 | 2020-12-10 | Financial & Risk Organisation Limited | Machine-learning natural language processing classifier |
CN110196979B (en) * | 2019-06-05 | 2023-07-25 | 深圳市思迪信息技术股份有限公司 | Intent recognition method and device based on distributed system |
US10685286B1 (en) * | 2019-07-30 | 2020-06-16 | SparkCognition, Inc. | Automated neural network generation using fitness estimation |
US11488406B2 (en) | 2019-09-25 | 2022-11-01 | Apple Inc. | Text detection using global geometry estimators |
US11408746B2 (en) | 2019-12-04 | 2022-08-09 | Toyota Connected North America, Inc. | Systems and methods for generating attributes-based recommendations |
US11875116B2 (en) * | 2019-12-20 | 2024-01-16 | Intuit Inc. | Machine learning models with improved semantic awareness |
CN111625276B (en) * | 2020-05-09 | 2023-04-21 | 山东师范大学 | Code abstract generation method and system based on semantic and grammar information fusion |
US11061543B1 (en) | 2020-05-11 | 2021-07-13 | Apple Inc. | Providing relevant data items based on context |
US11038934B1 (en) | 2020-05-11 | 2021-06-15 | Apple Inc. | Digital assistant hardware abstraction |
US11657332B2 (en) * | 2020-06-12 | 2023-05-23 | Baidu Usa Llc | Method for AI model transferring with layer randomization |
CN111813934B (en) * | 2020-06-22 | 2024-04-30 | 贵州大学 | Multi-source text topic model clustering method based on DMA model and feature division |
US11490204B2 (en) | 2020-07-20 | 2022-11-01 | Apple Inc. | Multi-device audio adjustment coordination |
US11438683B2 (en) | 2020-07-21 | 2022-09-06 | Apple Inc. | User identification using headphones |
US20220284485A1 (en) * | 2021-03-02 | 2022-09-08 | International Business Machines Corporation | Stratified social review recommendation |
CN113032559B (en) * | 2021-03-15 | 2023-04-28 | 新疆大学 | Language model fine tuning method for low-resource adhesive language text classification |
US20230008868A1 (en) * | 2021-07-08 | 2023-01-12 | Nippon Telegraph And Telephone Corporation | User authentication device, user authentication method, and user authentication computer program |
US20240054282A1 (en) * | 2022-08-15 | 2024-02-15 | International Business Machines Corporation | Elucidated natural language artifact recombination with contextual awareness |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080133508A1 (en) * | 1999-07-02 | 2008-06-05 | Telstra Corporation Limited | Search System |
US20110258229A1 (en) * | 2010-04-15 | 2011-10-20 | Microsoft Corporation | Mining Multilingual Topics |
US20130159229A1 (en) * | 2011-12-14 | 2013-06-20 | International Business Machines Corporation | Multi-modal neural network for universal, online learning |
US20130232097A1 (en) * | 2012-03-02 | 2013-09-05 | California Institute Of Technology | Continuous-weight neural networks |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8165870B2 (en) | 2005-02-10 | 2012-04-24 | Microsoft Corporation | Classification filter for processing data for creating a language model |
US8219406B2 (en) | 2007-03-15 | 2012-07-10 | Microsoft Corporation | Speech-centric multimodal user interface design in mobile technology |
US8473430B2 (en) | 2010-01-29 | 2013-06-25 | Microsoft Corporation | Deep-structured conditional random fields for sequential labeling and classification |
US8972253B2 (en) | 2010-09-15 | 2015-03-03 | Microsoft Technology Licensing, Llc | Deep belief network for large vocabulary continuous speech recognition |
US9031844B2 (en) | 2010-09-21 | 2015-05-12 | Microsoft Technology Licensing, Llc | Full-sequence training of deep structures for speech recognition |
US8489529B2 (en) | 2011-03-31 | 2013-07-16 | Microsoft Corporation | Deep convex network with joint use of nonlinear random projection, Restricted Boltzmann Machine and batch-based parallelizable optimization |
US8918352B2 (en) | 2011-05-23 | 2014-12-23 | Microsoft Corporation | Learning processes for single hidden layer neural networks with linear output units |
US10078620B2 (en) | 2011-05-27 | 2018-09-18 | New York University | Runtime reconfigurable dataflow processor with multi-port memory access module |
US8831358B1 (en) | 2011-11-21 | 2014-09-09 | Google Inc. | Evaluating image similarity |
US9235799B2 (en) | 2011-11-26 | 2016-01-12 | Microsoft Technology Licensing, Llc | Discriminative pretraining of deep neural networks |
US8700552B2 (en) | 2011-11-28 | 2014-04-15 | Microsoft Corporation | Exploiting sparseness in training deep neural networks |
US9165243B2 (en) | 2012-02-15 | 2015-10-20 | Microsoft Technology Licensing, Llc | Tensor deep stacked neural network |
US9704068B2 (en) | 2012-06-22 | 2017-07-11 | Google Inc. | System and method for labelling aerial images |
US9292787B2 (en) | 2012-08-29 | 2016-03-22 | Microsoft Technology Licensing, Llc | Computer-implemented deep tensor neural network |
WO2014039732A2 (en) | 2012-09-05 | 2014-03-13 | Element, Inc. | System and method for biometric authentication in connection with camera-equipped devices |
US9477925B2 (en) | 2012-11-20 | 2016-10-25 | Microsoft Technology Licensing, Llc | Deep neural networks training for speech and pattern recognition |
US9251437B2 (en) | 2012-12-24 | 2016-02-02 | Google Inc. | System and method for generating training cases for image classification |
US9406017B2 (en) | 2012-12-24 | 2016-08-02 | Google Inc. | System and method for addressing overfitting in a neural network |
US9811775B2 (en) | 2012-12-24 | 2017-11-07 | Google Inc. | Parallelizing neural networks during training |
US9037464B1 (en) | 2013-01-15 | 2015-05-19 | Google Inc. | Computing numeric representations of words in a high-dimensional space |
US9519858B2 (en) | 2013-02-10 | 2016-12-13 | Microsoft Technology Licensing, Llc | Feature-augmented neural networks and applications of same |
US20140249799A1 (en) | 2013-03-04 | 2014-09-04 | Microsoft Corporation | Relational similarity measurement |
US9177550B2 (en) | 2013-03-06 | 2015-11-03 | Microsoft Technology Licensing, Llc | Conservatively adapting a deep neural network in a recognition system |
US9454958B2 (en) | 2013-03-07 | 2016-09-27 | Microsoft Technology Licensing, Llc | Exploiting heterogeneous data in deep neural network-based speech recognition systems |
US9842585B2 (en) | 2013-03-11 | 2017-12-12 | Microsoft Technology Licensing, Llc | Multilingual deep neural network |
US9099083B2 (en) | 2013-03-13 | 2015-08-04 | Microsoft Technology Licensing, Llc | Kernel deep convex networks and end-to-end learning |
US9141906B2 (en) * | 2013-03-13 | 2015-09-22 | Google Inc. | Scoring concept terms using a deep network |
US20150379571A1 (en) * | 2014-06-30 | 2015-12-31 | Yahoo! Inc. | Systems and methods for search retargeting using directed distributed query word representations |
2016
- 2016-02-18 US US15/047,532 patent/US10339440B2/en active Active
- 2016-02-18 WO PCT/US2016/018536 patent/WO2016134183A1/en active Application Filing
- 2016-02-18 EP EP16753087.2A patent/EP3259688B1/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080133508A1 (en) * | 1999-07-02 | 2008-06-05 | Telstra Corporation Limited | Search System |
US20110258229A1 (en) * | 2010-04-15 | 2011-10-20 | Microsoft Corporation | Mining Multilingual Topics |
US20130159229A1 (en) * | 2011-12-14 | 2013-06-20 | International Business Machines Corporation | Multi-modal neural network for universal, online learning |
US20130159231A1 (en) * | 2011-12-14 | 2013-06-20 | International Business Machines Corporation | Multi-modal neural network for universal, online learning |
US20130232097A1 (en) * | 2012-03-02 | 2013-09-05 | California Institute Of Technology | Continuous-weight neural networks |
Non-Patent Citations (3)
Title |
---|
BENGIO ET AL.: "A Neural Probabilistic Language Model.", JOURNAL OF MACHINE LEARNING RESEARCH., 2003, XP058112313, Retrieved from the Internet <URL:http://machinelearning.wustl.edu/mlpapers/paperfiles/BengioDVJ03.pdf> * |
See also references of EP3259688A4 * |
SOCHER.: "Recursive Deep Learning for Natural Language Processing and Computer Vision.", DISS. STANFORD UNIVERSITY., August 2014 (2014-08-01), XP055502734, Retrieved from the Internet <URL:http://nlp.stanford.edu/~socherr/thesis.pdf> *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107330513A (en) * | 2017-06-28 | 2017-11-07 | 深圳爱拼信息科技有限公司 | It is a kind of to extract the method that node semantics are implied in depth belief network |
CN107330513B (en) * | 2017-06-28 | 2020-07-31 | 深圳爱拼信息科技有限公司 | Method for extracting hidden node semantics in deep belief network |
US11151182B2 (en) * | 2017-07-24 | 2021-10-19 | Huawei Technologies Co., Ltd. | Classification model training method and apparatus |
US11531859B2 (en) | 2017-08-08 | 2022-12-20 | Samsung Electronics Co., Ltd. | System and method for hashed compressed weighting matrix in neural networks |
US10915711B2 (en) | 2018-12-09 | 2021-02-09 | International Business Machines Corporation | Numerical representation in natural language processing techniques |
Also Published As
Publication number | Publication date |
---|---|
US10339440B2 (en) | 2019-07-02 |
EP3259688A4 (en) | 2018-12-12 |
EP3259688B1 (en) | 2024-09-25 |
US20160247061A1 (en) | 2016-08-25 |
EP3259688A1 (en) | 2017-12-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10339440B2 (en) | Systems and methods for neural language modeling | |
Lee et al. | Learning dense representations of phrases at scale | |
Dhingra et al. | Embedding text in hyperbolic spaces | |
Trask et al. | Modeling order in neural word embeddings at scale | |
Minaee et al. | Automatic question-answering using a deep similarity neural network | |
Alhawarat et al. | A superior Arabic text categorization deep model (SATCDM) | |
Zhao et al. | A protein-protein interaction extraction approach based on deep neural network | |
Gu et al. | Language modeling with sparse product of sememe experts | |
Popov | Neural network models for word sense disambiguation: an overview | |
Luo et al. | Research on Text Sentiment Analysis Based on Neural Network and Ensemble Learning. | |
Alsafari et al. | Semi-supervised self-training of hate and offensive speech from social media | |
Qiu et al. | Chinese Microblog Sentiment Detection Based on CNN‐BiGRU and Multihead Attention Mechanism | |
David et al. | Comparison of word embeddings in text classification based on RNN and CNN | |
Salloum et al. | Analysis and classification of customer reviews in arabic using machine learning and deep learning | |
Pradhan et al. | A multichannel embedding and arithmetic optimized stacked Bi-GRU model with semantic attention to detect emotion over text data | |
Chatterjee et al. | Incremental real-time learning framework for sentiment classification: Indian general election 2019, a case study | |
Wang et al. | Classification-based RNN machine translation using GRUs | |
Yadav et al. | Feature assisted bi-directional LSTM model for protein-protein interaction identification from biomedical texts | |
Lv et al. | Extract, attend, predict: Aspect-based sentiment analysis with deep self-attention network | |
Elfaik | Deep contextualized embeddings for sentiment analysis of Arabic book's reviews | |
Wu et al. | Text window denoising autoencoder: building deep architecture for Chinese word segmentation | |
Kapočiūtė-Dzikienė et al. | Comparison of deep learning approaches for Lithuanian sentiment analysis | |
He et al. | Distant supervised relation extraction via long short term memory networks with sentence embedding | |
Sasikala | Effective Deep Neural Network Method based Sentimental Analysis for Social Media Health Care Information | |
Najm et al. | Text Classification Accuracy Enhancement Using Deep Neural Networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16753087; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
REEP | Request for entry into the european phase | Ref document number: 2016753087; Country of ref document: EP |