WO2018094295A1 - Adaptive attention model for image captioning
- Publication number: WO2018094295A1 (PCT/US2017/062434)
- Authority: WIPO (PCT)
- Prior art keywords: image, decoder, sentinel, attention, visual
Classifications
- G06N3/044 — Computing arrangements based on biological models; Neural networks; Recurrent networks, e.g. Hopfield networks
- G06N3/045 — Computing arrangements based on biological models; Neural networks; Combinations of networks
Definitions
- the technology disclosed relates to artificial intelligence type computers and digital data processing systems and corresponding data processing methods and products for emulation of intelligence (i.e., knowledge based systems, reasoning systems, and knowledge acquisition systems); and including systems for reasoning with uncertainty (e.g., fuzzy logic systems), adaptive systems, machine learning systems, and artificial neural networks.
- the technology disclosed generally relates to a novel visual attention-based encoder-decoder image captioning model.
- One aspect of the technology disclosed relates to a novel spatial attention model for extracting spatial image features during image captioning.
- the spatial attention model uses current hidden state information of a decoder long short-term memory (LSTM) to guide attention, rather than using a previous hidden state or a previously emitted word.
- Another aspect of the technology disclosed relates to a novel adaptive attention model for image captioning that mixes visual information from a convolutional neural network (CNN) and linguistic information from an LSTM.
- the adaptive attention model automatically decides how heavily to rely on the image, as opposed to the linguistic model, to emit the next caption word.
- Yet another aspect of the technology disclosed relates to adding a new auxiliary sentinel gate to an LSTM architecture and producing a sentinel LSTM (Sn-LSTM).
- the sentinel gate produces a visual sentinel at each timestep, which is an additional representation, derived from the LSTM's memory, of long and short term visual and linguistic information.
- Image captioning is drawing increasing interest in computer vision and machine learning. Basically, it requires machines to automatically describe the content of an image using a natural language sentence. While this task seems obvious for human-beings, it is complicated for machines since it requires the language model to capture various semantic features within an image, such as objects' motions and actions. Another challenge for image captioning, especially for generative models, is that the generated output should be human-like natural sentences.
- Deep neural networks have been successfully applied to many areas, including speech and vision.
- Recurrent neural networks (RNNs) are widely used for such sequence modeling tasks, but a vanilla RNN has difficulty learning long-term dependencies.
- A long short-term memory (LSTM) neural network is an extension of an RNN that solves this problem.
- In an LSTM, a memory cell has a linear dependence between its current activity and its past activity.
- A forget gate is used to modulate the information flow between the past and the current activities.
- LSTMs also have input and output gates to modulate their input and output.
- LSTMs have been configured to condition their output on auxiliary inputs, in addition to the current input and the previous hidden state.
- LSTMs incorporate external visual information provided by image features to influence linguistic choices at different stages.
- In image caption generators, LSTMs take as input not only the most recently emitted caption word and the previous hidden state, but also regional features of the image being captioned (usually derived from the activation values of a hidden layer in a convolutional neural network (CNN)).
- The auxiliary input carries auxiliary information, which can be visual or textual. It can be generated externally by another LSTM, or derived externally from a hidden state of another LSTM. It can also be provided by an external source such as a CNN, a multilayer perceptron, an attention network, or another LSTM.
- the auxiliary information can be fed to the LSTM just once at the initial timestep or fed successively at each timestep.
- FIG. 1 illustrates an encoder that processes an image through a convolutional neural network (abbreviated CNN) and produces image features for regions of the image.
- FIG.2A shows an attention leading decoder that uses previous hidden state information to guide attention and generate an image caption (prior art).
- FIG.2B shows the disclosed attention lagging decoder which uses current hidden state information to guide attention and generate an image caption.
- FIG.3A depicts a global image feature generator that generates a global image feature for an image by combining image features produced by the CNN encoder of FIG. 1.
- FIG.3B is a word embedder that vectorizes words in a high-dimensional embedding space.
- FIG.3C is an input preparer that prepares and provides input to a decoder.
- FIG.4 depicts one implementation of modules of an attender that is part of the spatial attention model disclosed in FIG. 6.
- FIG. 5 shows one implementation of modules of an emitter that is used in various aspects of the technology disclosed.
- Emitter comprises a feed-forward neural network (also referred to herein as multilayer perceptron (MLP)), a vocabulary softmax (also referred to herein as vocabulary probability mass producer), and a word embedder (also referred to herein as embedder).
- FIG. 6 illustrates the disclosed spatial attention model for image captioning rolled across multiple timesteps.
- the attention lagging decoder of FIG.2B is embodied in and implemented by the spatial attention model.
- FIG. 7 depicts one implementation of image captioning using spatial attention applied by the spatial attention model of FIG. 6.
- FIG. 8 illustrates one implementation of the disclosed sentinel LSTM (Sn-LSTM) that comprises an auxiliary sentinel gate which produces a sentinel state.
- FIG. 9 shows one implementation of modules of a recurrent neural network (abbreviated RNN) that implements the Sn-LSTM of FIG. 8.
- FIG. 10 depicts the disclosed adaptive attention model for image captioning that automatically decides how heavily to rely on visual information, as opposed to linguistic information, to emit a next caption word.
- the sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
- FIG. 11 depicts one implementation of modules of an adaptive attender that is part of the adaptive attention model disclosed in FIG. 12.
- the adaptive attender comprises a spatial attender, an extractor, a sentinel gate mass determiner, a sentinel gate mass softmax, and a mixer (also referred to herein as an adaptive context vector producer or an adaptive context producer).
- the spatial attender in turn comprises an adaptive comparator, an adaptive attender softmax, and an adaptive convex combination accumulator.
- FIG. 12 shows the disclosed adaptive attention model for image captioning rolled across multiple timesteps.
- the sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
- FIG. 13 illustrates one implementation of image captioning using adaptive attention applied by the adaptive attention model of FIG. 12.
- FIG. 14 is one implementation of the disclosed visually hermetic decoder that processes purely linguistic information and produces captions for an image.
- FIG. 15 shows a spatial attention model that uses the visually hermetic decoder of
- FIG. 14 for image captioning.
- the spatial attention model is rolled across multiple timesteps.
- FIG. 16 illustrates one example of image captioning using the technology disclosed.
- FIG. 17 shows visualization of some example image captions and image/spatial attention maps generated using the technology disclosed.
- FIG. 18 depicts visualization of some example image captions, word-wise visual grounding probabilities, and corresponding image/spatial attention maps generated using the technology disclosed.
- FIG. 19 illustrates visualization of some other example image captions, word-wise visual grounding probabilities, and corresponding image spatial attention maps generated using the technology disclosed.
- FIG. 20 is an example rank-probability plot that illustrates performance of the technology disclosed on the COCO (common objects in context) dataset.
- FIG. 21 is another example rank-probability plot that illustrates performance of the technology disclosed on the Flickr30k dataset.
- FIG.22 is an example graph that shows localization accuracy of the technology disclosed on the COCO dataset.
- the blue colored bars show localization accuracy of the spatial attention model and the red colored bars show localization accuracy of the adaptive attention model.
- FIG. 23 is a table that shows performance of the technology disclosed on the Flickr30k and COCO datasets based on various natural language processing metrics, including BLEU (bilingual evaluation understudy), METEOR (metric for evaluation of translation with explicit ordering), CIDEr (consensus-based image description evaluation), ROUGE-L (recall-oriented understudy for gisting evaluation-longest common subsequence), and SPICE (semantic propositional image caption evaluation).
- FIG.25 is a simplified block diagram of a computer system that can be used to implement the technology disclosed.
- Attention-based visual neural encoder-decoder models use a convolutional neural network (CNN) to encode an input image into feature vectors and a long short-term memory network (LSTM) to decode the feature vectors into a sequence of words.
- CNN convolutional neural network
- LSTM long short-term memory network
- The LSTM relies on an attention mechanism that produces a spatial map that highlights image regions relevant for generating words. Attention-based models leverage either previous hidden state information of the LSTM or previously emitted caption word(s) as input to the attention mechanism.
- Each conditional probability is modeled as: p(y_t | y_1, …, y_{t−1}, I) = f(h_t, c_t), where f is a nonlinear function that outputs the probability of y_t, c_t is the visual context vector at time t extracted from image I, and h_t is the current hidden state of the RNN at time t.
- the technology disclosed uses a long short-term memory network (LSTM) as the RNN.
- LSTMs are gated variants of a vanilla RNN and have proven better at capturing long-term dependencies than vanilla RNNs.
- Context vector c_t is an important factor in the neural encoder-decoder framework because it provides visual evidence for caption generation.
- Different ways of modeling the context vector fall into two categories: vanilla encoder-decoder and attention-based encoder-decoder frameworks.
- In the vanilla framework, context vector c is only dependent on a convolutional neural network (CNN) that serves as the encoder.
- The input image I is fed into the CNN, which extracts the last fully connected layer as a global image feature.
- Across generated words, the context vector c remains constant and does not depend on the hidden state of the decoder.
- In the attention-based framework, context vector c_t is dependent on both the encoder and the decoder.
- At each timestep, the decoder attends to specific regions of the image and determines context vector c_t using the spatial image features from a convolutional layer of a CNN. Attention models can significantly improve the performance of image captioning.
- our model uses the current hidden state information of the decoder LSTM to guide attention, instead of using the previous hidden state or a previously emitted word.
- Our model supplies the LSTM with a time-invariant global image representation, instead of a progression by timestep of attention-variant image representations.
- the attention mechanism of our model uses current instead of prior hidden state information to guide attention, which requires a different structure and different processing steps.
- the current hidden state information is used to guide attention to image regions and generate, in a timestep, an attention-variant image representation.
- the current hidden state information is computed at each timestep by the decoder LSTM, using a current input and previous hidden state information. Information from the LSTM, the current hidden state, is fed to the attention mechanism, instead of output of the attention mechanism being fed to the LSTM.
- the current input combines word(s) previously emitted with a time-invariant global image representation, which is determined from the encoder CNN's image features.
- The first current input word fed to the decoder LSTM is a special start (<start>) token.
- the global image representation can be fed to the LSTM once, in a first timestep, or repeatedly at successive timesteps.
- The spatial attention model determines context vector c_t, which is defined as: c_t = g(V, h_t), where g is the attention function that is embodied in and implemented by the attender of FIG. 4, V = [v_1, …, v_k] are the image features, and h_t is the current hidden state of the LSTM decoder at time t, shown in FIG. 2B.
- Each image feature v_i is a d-dimensional representation corresponding to a part or region of the image produced by the CNN encoder.
- Given the image features V and the current hidden state h_t, the disclosed spatial attention model feeds them through a comparator (FIG. 4) followed by an attender softmax (FIG. 4) to generate the attention distribution over the k regions of the image: z_t = w_h^T tanh(W_v V + (W_g h_t) 1^T) and α_t = softmax(z_t), where W_v, W_g, and w_h are learned parameters and 1 is a vector of ones.
- The comparator comprises a single layer neural network and a nonlinearity layer to determine z_t.
- Based on the attention distribution α_t, the context vector c_t is obtained by a convex combination accumulator as: c_t = Σ_{i=1}^{k} α_{ti} v_{ti}.
- the attender comprises the comparator, the attender softmax (also referred to herein as attention probability mass producer), and the convex combination accumulator (also referred to herein as context vector producer or context producer).
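- The following is a minimal NumPy sketch of the attender described above (comparator, attender softmax, and convex combination accumulator); the parameter names W_v, W_g, w_h and their shapes are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def spatial_attention(V, h_t, W_v, W_g, w_h):
    """Attender sketch: comparator + attender softmax + convex combination accumulator.

    V   : (k, d) image features v_1..v_k from the CNN encoder
    h_t : (d,)   current hidden state of the decoder LSTM
    W_v : (a, d), W_g : (a, d), w_h : (a,)  -- learned parameters (shapes assumed)
    Returns the context vector c_t (d,) and the attention probability masses alpha_t (k,).
    """
    # Comparator: single-layer network with tanh nonlinearity yields one
    # unnormalized attention value z_t[i] per image region.
    z_t = np.tanh(V @ W_v.T + h_t @ W_g.T) @ w_h   # (k,)
    # Attender softmax: attention distribution over the k regions.
    e = np.exp(z_t - z_t.max())
    alpha_t = e / e.sum()                          # (k,)
    # Convex combination accumulator: context vector as a weighted sum of regions.
    c_t = alpha_t @ V                              # (d,)
    return c_t, alpha_t

# Toy usage with random features (k = 49 regions, d = a = 512).
k, d, a = 49, 512, 512
rng = np.random.default_rng(0)
c_t, alpha_t = spatial_attention(
    rng.normal(size=(k, d)), rng.normal(size=d),
    0.01 * rng.normal(size=(a, d)), 0.01 * rng.normal(size=(a, d)),
    0.01 * rng.normal(size=a))
assert np.isclose(alpha_t.sum(), 1.0)
```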
- the encoder CNN is a pretrained ResNet.
- The image features V = [v_1, …, v_k], v_i ∈ R^d, are spatial feature outputs of the last convolutional layer of the ResNet and have a dimension of 2048 × 7 × 7.
- a global image feature generator produces a global image feature, as discussed below.
- FIG. 2B shows the disclosed attention lagging decoder, which uses current hidden state information h_t to guide attention and generate an image caption.
- The attention lagging decoder uses current hidden state information h_t to analyze where to look in the image, i.e., for generating the context vector c_t.
- The decoder then combines both sources of information h_t and c_t to predict the next word.
- The generated context vector c_t embodies the residual visual information of current hidden state h_t, which diminishes the uncertainty or complements the informativeness of the current hidden state for next word prediction. Since the decoder is recurrent, LSTM-based, and operates sequentially, the current hidden state h_t embodies the previous hidden state h_{t−1} and the current input x_t, which form the current visual and linguistic context.
- The attention lagging decoder attends to the image using this current visual and linguistic context rather than stale, prior context (FIG. 2A).
- The image is attended after the current visual and linguistic context is determined by the decoder, i.e., the attention lags the decoder. This produces more accurate image captions.
- FIG.3A depicts a global image feature generator that generates a global image feature for an image by combining image features produced by the CNN encoder of FIG. 1.
- The global image feature generator first produces a preliminary global image feature as follows: a^g = (1/k) Σ_{i=1}^{k} a_i, where a^g is the preliminary global image feature determined by averaging the image features a_1, …, a_k produced by the CNN encoder.
- The global image feature generator then uses a single layer perceptron with rectifier activation function to transform the image feature vectors into new vectors with dimension d: v_i = ReLU(W_a a_i) and v^g = ReLU(W_b a^g), where W_a and W_b are the weight parameters and v^g is the global image feature.
- Global image feature v^g is time-invariant because it is not sequentially or recurrently produced, but instead determined from non-recurrent, convolved image features.
- Transformation of the image features is embodied in and implemented by the image feature rectifier of the global image feature generator, according to one implementation. Transformation of the preliminary global image feature is embodied in and implemented by the global image feature rectifier of the global image feature generator, according to one implementation.
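- A minimal sketch of the global image feature generator under the equations above; the ReLU ("rectifier") projections and the parameter shapes of W_a and W_b are assumptions made for illustration.

```python
import numpy as np

def global_image_feature(A, W_a, W_b):
    """Global image feature generator sketch.

    A   : (k, m) regional image features a_1..a_k from the CNN encoder
          (e.g. 49 x 2048 for the ResNet features mentioned above)
    W_a : (d, m) projects each regional feature to dimension d
    W_b : (d, m) projects the averaged (preliminary global) feature to dimension d
    Returns projected regional features V (k, d) and the global feature v_g (d,).
    """
    a_g = A.mean(axis=0)               # preliminary global feature: average over regions
    V = np.maximum(A @ W_a.T, 0.0)     # single layer perceptron with rectifier (ReLU)
    v_g = np.maximum(W_b @ a_g, 0.0)   # time-invariant global image feature
    return V, v_g
```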
- FIG.3B is a word embedder that vectorizes words in a high-dimensional embedding space.
- The technology disclosed uses the word embedder to generate word embeddings of vocabulary words predicted by the decoder. w_t denotes the word embedding of a vocabulary word predicted by the decoder at time t.
- w_{t−1} denotes the word embedding of a vocabulary word predicted by the decoder at time t − 1.
- The word embedder generates word embeddings w_{t−1} of dimensionality d using an embedding matrix.
- The word embedder first transforms a word into a one-hot encoding and then converts it into a continuous representation using the embedding matrix.
- The word embedder can initialize word embeddings using pretrained word embedding models like GloVe and word2vec and obtain a fixed word embedding of each word in the vocabulary.
- The word embedder can also generate character embeddings and/or phrase embeddings.
- FIG. 3C is an input preparer that prepares and provides input to a decoder. At each timestep, the input preparer concatenates the word embedding vector of the most recently emitted caption word (predicted by the decoder) with the global image feature vector v^g to form the decoder input.
- the input preparer is also referred to herein as concatenator.
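- A minimal sketch of the word embedder and input preparer (concatenator) working together; the embedding matrix E and the concatenation order are illustrative assumptions.

```python
import numpy as np

def prepare_decoder_input(word_id, E, v_g):
    """Input preparer (concatenator) sketch.

    word_id : index of the most recently emitted caption word (or the <start> token)
    E       : (vocab_size, d_w) embedding matrix (e.g. initialized from GloVe/word2vec)
    v_g     : (d,) time-invariant global image feature
    Returns x_t, the concatenation of the word embedding and the global image feature.
    """
    w_prev = E[word_id]                   # word embedder: lookup of the one-hot index
    return np.concatenate([w_prev, v_g])  # decoder input x_t at timestep t
```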
- a long short-term memory is a cell in a neural network that is repeatedly exercised in timesteps to produce sequential outputs from sequential inputs.
- the output is often referred to as a hidden state, which should not be confused with the cell's memory.
- Inputs are a hidden state and memory from a prior timestep and a current input.
- the cell has an input activation function, memory, and gates.
- The input activation function maps the input into a range, such as −1 to 1 for a tanh activation function.
- the gates determine weights applied to updating the memory and generating a hidden state output result from the memory.
- the gates are a forget gate, an input gate, and an output gate.
- the forget gate attenuates the memory.
- the input gate mixes activated inputs with the attenuated memory.
- the output gate controls hidden state output from the memory.
- The hidden state output can directly label an input, or it can be processed by another component to emit a word or other label or generate a probability distribution over possible outputs.
- An auxiliary input can be added to the LSTM that introduces a different kind of information than the current input, in a sense orthogonal to current input. Adding such a different kind of auxiliary input can lead to overfitting and other training artifacts.
- the technology disclosed adds a new gate to the LSTM cell architecture that produces a second sentinel state output from the memory, in addition to the hidden state output. This sentinel state output is used to control mixing between different neural network processing models in a post-LSTM component.
- A visual sentinel, for instance, controls mixing between analysis of visual features from a CNN and of word sequences from a predictive language model.
- the new gate that produces the sentinel state output is called "auxiliary sentinel gate”.
- the auxiliary input contributes to both accumulated auxiliary information in the LSTM memory and to the sentinel output.
- the sentinel state output encodes parts of the accumulated auxiliary information that are most useful for next output prediction.
- the sentinel gate conditions current input, including the previous hidden state and the auxiliary information, and combines the conditioned input with the updated memory, to produce the sentinel state output.
- An LSTM that includes the auxiliary sentinel gate is referred to herein as a "sentinel LSTM (Sn-LSTM)".
- Prior to being accumulated in the Sn-LSTM, the auxiliary information is often subjected to a "tanh" (hyperbolic tangent) function that produces output in the range of −1 and 1 (e.g., a tanh function following the fully-connected layer of a CNN).
- The auxiliary sentinel gate gates the pointwise tanh of the Sn-LSTM's memory cell.
- tanh is selected as the non-linearity function applied to the Sn-LSTM's memory cell because it matches the form of the stored auxiliary information.
- FIG. 8 illustrates one implementation of the disclosed sentinel LSTM (Sn-LSTM) that comprises an auxiliary sentinel gate which produces a sentinel state or visual sentinel.
- The Sn-LSTM receives inputs at each of a plurality of timesteps. The inputs include at least an input for a current timestep x_t, a hidden state from a previous timestep h_{t−1}, and an auxiliary input for the current timestep a_t.
- The Sn-LSTM can run on at least one of the numerous parallel processors.
- The auxiliary input a_t is not separately provided in some cases, but instead encoded as auxiliary information in the previous hidden state and/or the input x_t.
- The auxiliary input a_t can be visual input comprising image data and the input can be a text embedding of a most recently emitted word and/or character.
- The auxiliary input a_t can be a text encoding from another long short-term memory network (abbreviated LSTM) of an input document and the input can be a text embedding of a most recently emitted word and/or character.
- The auxiliary input a_t can be a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most recently emitted word and/or character.
- The auxiliary input can be a prediction derived from a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most recently emitted word and/or character.
- The auxiliary input a_t can be an output of a convolutional neural network (abbreviated CNN).
- The auxiliary input a_t can be an output of an attention network.
- the Sn-LSTM generates outputs at each of the plurality of timesteps by processing the inputs through a plurality of gates.
- the gates include at least an input gate, a forget gate, an output gate, and an auxiliary sentinel gate. Each of the gates can run on at least one of the numerous parallel processors.
- The input gate controls how much of the current input x_t and the previous hidden state h_{t−1} will enter the current memory cell state and is represented as: i_t = σ(W_{xi} x_t + W_{hi} h_{t−1} + b_i).
- The forget gate operates on the current memory cell state m_t and the previous memory cell state m_{t−1} and decides whether to erase (set to zero) or keep individual components of the memory cell; it is represented as: f_t = σ(W_{xf} x_t + W_{hf} h_{t−1} + b_f).
- The output gate scales the output from the memory cell and is represented as: o_t = σ(W_{xo} x_t + W_{ho} h_{t−1} + b_o).
- The Sn-LSTM can also include an activation gate (also referred to as cell update gate or input transformation gate) that transforms the current input x_t and previous hidden state h_{t−1} to be taken into account into the current memory cell state; it is represented as: g_t = tanh(W_{xg} x_t + W_{hg} h_{t−1} + b_g).
- The Sn-LSTM can also include a current hidden state producer that outputs the current hidden state h_t scaled by a tanh (squashed) transformation of the current memory cell state m_t; it is represented as: h_t = o_t ⊙ tanh(m_t).
- A memory cell updater updates the memory cell of the Sn-LSTM from the previous memory cell state m_{t−1} to the current memory cell state m_t as follows: m_t = f_t ⊙ m_{t−1} + i_t ⊙ g_t.
- the auxiliary sentinel gate produces a sentinel state or visual sentinel which is a latent representation of what the Sn-LSTM decoder already knows.
- The Sn-LSTM decoder's memory stores both long and short term visual and linguistic information.
- the adaptive attention model learns to extract a new component from the Sn-LSTM that the model can fall back on when it chooses to not attend to the image. This new component is called the visual sentinel.
- the gate that decides whether to attend to the image or to the visual sentinel is the auxiliary sentinel gate.
- the visual and linguistic contextual information is stored in the Sn-LSTM decoder's memory cell.
- The auxiliary sentinel gate is represented as: aux_t = σ(W_x x_t + W_h h_{t−1}), and the sentinel state (visual sentinel) is produced as: s_t = aux_t ⊙ tanh(m_t), where W_x and W_h are weight parameters that are learned, x_t is the input to the Sn-LSTM at timestep t, aux_t is the auxiliary sentinel gate applied to the current memory cell state m_t, and σ is the logistic sigmoid activation.
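- A compact NumPy sketch of one Sn-LSTM timestep under the gate equations above; the concatenated-input parameterization, bias terms, and weight shapes are assumptions made for illustration, not the patented implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sn_lstm_step(x_t, h_prev, m_prev, P):
    """One timestep of the sentinel LSTM (Sn-LSTM).

    x_t    : current input (word embedding, optionally concatenated with image data)
    h_prev : (n,) previous hidden state;  m_prev : (n,) previous memory cell state
    P      : dict of learned weights; each W_* maps the concatenated [x_t, h_prev]
             to the cell size n, each b_* is (n,) (shapes assumed for this sketch)
    Returns the current hidden state h_t, memory m_t, and sentinel state s_t.
    """
    z = np.concatenate([x_t, h_prev])
    i_t = sigmoid(P["W_i"] @ z + P["b_i"])        # input gate
    f_t = sigmoid(P["W_f"] @ z + P["b_f"])        # forget gate
    o_t = sigmoid(P["W_o"] @ z + P["b_o"])        # output gate
    g_t = np.tanh(P["W_g"] @ z + P["b_g"])        # activation (cell update) gate
    m_t = f_t * m_prev + i_t * g_t                # memory cell updater
    h_t = o_t * np.tanh(m_t)                      # current hidden state producer
    aux_t = sigmoid(P["W_aux"] @ z + P["b_aux"])  # auxiliary sentinel gate
    s_t = aux_t * np.tanh(m_t)                    # sentinel state / visual sentinel
    return h_t, m_t, s_t
```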
- the Sn-LSTM can be used as a decoder that receives auxiliary information from another encoder LSTM.
- the encoder LSTM can process an input document to produce a document encoding.
- the document encoding or an alternative representation of the document encoding can be fed to the Sn-LSTM as auxiliary information.
- Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the document encoding (or its alternative representation) are most important at a current timestep, considering a previously generated summary word and a previous hidden state.
- the important parts of the document encoding (or its alternative representation) can then be encoded into the sentinel state.
- the sentinel state can be used to generate the next summary word.
- the Sn-LSTM can be used as a decoder that receives auxiliary information from another encoder LSTM.
- the encoder LSTM can process an input question to produce a question encoding.
- the question encoding or an alternative representation of the question encoding can be fed to the Sn-LSTM as auxiliary information.
- Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the question encoding (or its alternative representation) are most important at a current timestep, considering a previously generated answer word and a previous hidden state. The important parts of the question encoding (or its alternative representation) can then be encoded into the sentinel state.
- the sentinel state can be used to generate the next answer word.
- the Sn-LSTM can be used as a decoder that receives auxiliary information from another encoder LSTM.
- the encoder LSTM can process a source language sequence to produce a source encoding.
- The source encoding or an alternative representation of the source encoding can be fed to the Sn-LSTM as auxiliary information.
- Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the source encoding (or its alternative representation) are most important at a current timestep, considering a previously generated translated word and a previous hidden state.
- the important parts of the source encoding (or its alternative representation) can then be encoded into the sentinel state.
- the sentinel state can be used to generate the next translated word.
- the Sn-LSTM can be used as a decoder that receives auxiliary information from an encoder comprising a CNN and an LSTM.
- the encoder can process video frames of a video to produce a video encoding.
- the video encoding or an alternative representation of the video encoding can be fed to the Sn-LSTM as auxiliary information.
- Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the video encoding (or its alternative representation) are most important at a current timestep, considering a previously generated caption word and a previous hidden state.
- the important parts of the video encoding (or its alternative representation) can then be encoded into the sentinel state.
- the sentinel state can be used to generate the next caption word.
- the Sn-LSTM can be used as a decoder that receives auxiliary information from an encoder CNN.
- the encoder can process an input image to produce an image encoding.
- the image encoding or an alternative representation of the image encoding can be fed to the Sn-LSTM as auxiliary information.
- The Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the image encoding (or its alternative representation) are most important at a current timestep, considering a previously generated caption word and a previous hidden state.
- the important parts of the image encoding (or its alternative representation) can then be encoded into the sentinel state.
- the sentinel state can be used to generate the next caption word.
- a long short-term memory (LSTM) decoder can be extended to generate image captions by attending to regions or features of a target image and conditioning word predictions on the attended image features.
- attending to the image is only half of the story; knowing when to look is the other half. That is, not all caption words correspond to visual signals; some words, such as stop words and linguistically correlated words, can be better inferred from textual context.
- Existing attention-based visual neural encoder-decoder models force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as "the" and "of".
- FIG. 10 depicts the disclosed adaptive attention model for image captioning that automatically decides how heavily to rely on visual information, as opposed to linguistic information, to emit a next caption word.
- the sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
- The sentinel gate produces a so-called visual sentinel/sentinel state s_t at each timestep, which is an additional representation, derived from the Sn-LSTM's memory, of long and short term visual and linguistic information.
- The visual sentinel s_t encodes information that can be relied on by the linguistic model without reference to the visual information from the CNN.
- The visual sentinel s_t is used, in combination with the current hidden state from the Sn-LSTM, to generate a sentinel gate mass/gate probability mass β_t that controls mixing of image and linguistic context.
- FIG. 14 is one implementation of the disclosed visually hermetic decoder that processes purely linguistic information and produces captions for an image.
- FIG. 15 shows a spatial attention model that uses the visually hermetic decoder of FIG. 14 for image captioning. In FIG. 15, the spatial attention model is rolled across multiple timesteps.
- A visually hermetic decoder can be used that processes purely linguistic information, which is not mixed with image data during image captioning. This alternative visually hermetic decoder does not receive the global image representation as input. That is, the current input to the visually hermetic decoder is just its most recently emitted caption word, beginning with the <start> token as the initial input.
- A visually hermetic decoder can be implemented as an LSTM, a gated recurrent unit (GRU), or a quasi-recurrent neural network (QRNN). Words, with this alternative decoder, are still emitted after application of the attention mechanism.
- the technology disclosed also provides a system and method of evaluating performance of an image captioning model.
- The technology disclosed generates a spatial attention map of attention values for mixing image region vectors of an image using a convolutional neural network (abbreviated CNN) encoder and a long short-term memory (LSTM) decoder and produces a caption word output based on the spatial attention map.
- the technology disclosed segments regions of the image above a threshold attention value into a segmentation map.
- the technology disclosed projects a bounding box over the image that covers a largest connected image component in the segmentation map.
- The technology disclosed determines an intersection over union (abbreviated IOU) of the projected bounding box and a ground truth bounding box.
- the technology disclosed determines a localization accuracy of the spatial attention map based on the calculated IOU.
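- A hedged sketch of this weakly-supervised localization evaluation; the relative attention threshold, the use of scipy.ndimage to find the largest connected component, and the box conventions are assumptions, not details taken from the source.

```python
import numpy as np
from scipy import ndimage  # assumed here only to find the largest connected component

def localization_accuracy_iou(attention_map, gt_box, rel_threshold=0.5):
    """Sketch of the weakly-supervised localization evaluation.

    attention_map : (H, W) spatial attention values for one caption word,
                    upsampled to image resolution
    gt_box        : ground-truth box (x0, y0, x1, y1)
    rel_threshold : attention cutoff relative to the map maximum (illustrative choice)
    Returns the intersection over union (IOU) of the projected and ground-truth boxes.
    """
    seg = attention_map >= rel_threshold * attention_map.max()    # segmentation map
    labels, n = ndimage.label(seg)
    if n == 0:
        return 0.0
    sizes = ndimage.sum(seg, labels, range(1, n + 1))
    ys, xs = np.where(labels == (np.argmax(sizes) + 1))           # largest component
    box = (xs.min(), ys.min(), xs.max(), ys.max())                # projected bounding box
    ix0, iy0 = max(box[0], gt_box[0]), max(box[1], gt_box[1])
    ix1, iy1 = min(box[2], gt_box[2]), min(box[3], gt_box[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box) + area(gt_box) - inter
    return inter / union if union > 0 else 0.0
```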
- the technology disclosed presents a system.
- the system includes numerous parallel processors coupled to memory.
- the memory is loaded with computer instructions to generate a natural language caption for an image.
- The instructions, when executed on the parallel processors, implement the following actions.
- the encoder can be a convolutional neural network (abbreviated CNN).
- the decoder can be a long short-term memory network (abbreviated LSTM).
- the feed-forward neural network can be a multilayer perceptron (abbreviated MLP).
- This system implementation and other systems disclosed optionally include one or more of the following features.
- System can also include features described in connection with methods disclosed. In the interest of conciseness, alternative combinations of system features are not individually enumerated. Features applicable to systems, methods, and articles of manufacture are not repeated for each statutory class set of base features. The reader will understand how features identified in this section can readily be combined with base features in other statutory classes.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the current hidden state of the decoder can be determined based on a current input to the decoder and a previous hidden state of the decoder.
- the image context vector can be a dynamic vector that determines at each timestep an amount of spatial attention allocated to each image region, conditioned on the current hidden state of the decoder.
- the system can use weakly-supervised localization to evaluate the allocated spatial attention.
- the attention values for the image feature vectors can be determined by processing the image feature vectors and the current hidden state of the decoder through a single layer neural network.
- the system can cause the feed-forward neural network to emit the next caption word at each timestep.
- the feed-forward neural network can produce an output based on the image context vector and the current hidden state of the decoder and use the output to determine a normalized distribution of vocabulary probability masses over words in a vocabulary that represent a respective likelihood that a vocabulary word is the next caption word.
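- A minimal sketch of such a feed-forward network followed by a vocabulary softmax; concatenating the image context with the hidden state and the weight shapes are illustrative assumptions.

```python
import numpy as np

def emit_word_distribution(c_t, h_t, W_p, W_out):
    """Emitter sketch: feed-forward neural network (MLP) plus vocabulary softmax.

    c_t, h_t : image context vector and current decoder hidden state, both (d,)
    W_p      : (d, 2d) hidden layer of the MLP (concatenation and shape are assumptions)
    W_out    : (vocab_size, d) projection to vocabulary logits (shape assumed)
    Returns the normalized distribution of vocabulary probability masses.
    """
    hidden = np.tanh(W_p @ np.concatenate([c_t, h_t]))  # MLP over context + hidden state
    logits = W_out @ hidden
    e = np.exp(logits - logits.max())
    return e / e.sum()   # likelihood of each vocabulary word being the next caption word
```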
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- the technology disclosed presents a system.
- the system includes numerous parallel processors coupled to memory.
- the memory is loaded with computer instructions to generate a natural language caption for an image.
- The instructions, when executed on the parallel processors, implement the following actions.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the current hidden state information can be determined based on a current input to the decoder and previous hidden state information.
- the system can use weakly-supervised localization to evaluate the attention map.
- the encoder can be a convolutional neural network (abbreviated CNN) and the image feature vectors can be produced by a last convolutional layer of the CNN.
- The attention lagging decoder can be a long short-term memory network (abbreviated LSTM).
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- the technology disclosed presents a system.
- the system includes numerous parallel processors coupled to memory.
- the memory is loaded with computer instructions to generate a natural language caption for an image.
- The instructions, when executed on the parallel processors, implement the following actions.
- the encoder can be a convolutional neural network (abbreviated CNN).
- Processing words through a decoder by beginning at an initial timestep with a start-of-caption token <start> and continuing in successive timesteps using a most recently emitted caption word as input to the decoder.
- The decoder can be a long short-term memory network (abbreviated LSTM).
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- The system does not supply the global image feature vector to the decoder and processes words through the decoder by beginning at the initial timestep with the start-of-caption token <start> and continuing in successive timesteps using the most recently emitted caption word as input to the decoder.
- the system does not supply the image feature vectors to the decoder, in some implementations.
- the technology disclosed presents a system for machine generation of a natural language caption for an image.
- the system runs on numerous parallel processors.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the system comprises an attention lagging decoder.
- the attention lagging decoder can run on at least one of the numerous parallel processors.
- The attention lagging decoder uses at least current hidden state information to generate an attention map for image feature vectors produced by an encoder from an image.
- the encoder can be a convolutional neural network (abbreviated CNN) and the image feature vectors can be produced by a last convolutional layer of the CNN.
- the attention lagging decoder can be a long short-term memory network (abbreviated LSTM).
- the attention lagging decoder causes generation of an output caption word based on a weighted sum of the image feature vectors, with the weights determined from the attention map.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- FIG. 6 illustrates the disclosed spatial attention model for image captioning rolled across multiple timesteps.
- the attention lagging decoder of FIG.2B is embodied in and implemented by the spatial attention model.
- the technology disclosed presents an image-to- language captioning system that implements the spatial attention model of FIG. 6 for machine generation of a natural language caption for an image.
- the system runs on numerous parallel processors.
- the system comprises an encoder (FIG. 1) for processing an image through a convolutional neural network (abbreviated CNN) and producing image features for regions of the image.
- the encoder can run on at least one of the numerous parallel processors.
- the system comprises a global image feature generator (FIG.3A) for generating a global image feature for the image by combining the image features.
- the global image feature generator can run on at least one of the numerous parallel processors.
- The system comprises an input preparer (FIG. 3C) for providing input to a decoder as a combination of a start-of-caption token <start> and the global image feature at an initial decoder timestep and a combination of a most recently emitted caption word w_{t−1} and the global image feature at successive decoder timesteps.
- the input preparer can run on at least one of the numerous parallel processors.
- the system comprises the decoder (FIG. 2B) for processing the input through a long short-term memory network (abbreviated LSTM) to generate a current decoder hidden state at each decoder timestep.
- the decoder can run on at least one of the numerous parallel processors.
- the system comprises an attender (FIG.4) for accumulating, at each decoder timestep, an image context as a convex combination of the image features scaled by attention probability masses determined using the current decoder hidden state.
- the attender can run on at least one of the numerous parallel processors.
- FIG.4 depicts one implementation of modules of the attender that is part of the spatial attention model disclosed in FIG. 6.
- the attender comprises the comparator, the attender softmax (also referred to herein as attention probability mass producer), and the convex combination accumulator (also referred to herein as context vector producer or context producer).
- the system comprises a feed-forward neural network (also referred to herein as multilayer perceptron (MLP)) (FIG. 5) for processing the image context and the current decoder hidden state to emit a next caption word at each decoder timestep.
- the feed-forward neural network can run on at least one of the numerous parallel processors.
- The system comprises a controller (FIG. 25) for iterating the input preparer, the decoder, the attender, and the feed-forward neural network to generate the natural language caption for the image until the next caption word emitted is an end-of-caption token <end>.
- the controller can run on at least one of the numerous parallel processors.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the attender softmax can run on at least one of the numerous parallel processors.
- the comparator can run on at least one of the numerous parallel processors.
- The attention values z_t are determined by processing the current decoder hidden state h_t and the image features through a single layer neural network.
- the decoder can further comprise at least an input gate, a forget gate, and an output gate for determining at each decoder timestep the current decoder hidden state based on a current decoder input and a previous decoder hidden state.
- the input gate, the forget gate, and the output gate can each run on at least one of the numerous parallel processors.
- the attender can further comprise a convex combination accumulator (FIG.4) for producing the image context to identify an amount of spatial attention allocated to each image region at each decoder timestep, conditioned on the current decoder hidden state.
- the convex combination accumulator can run on at least one of the numerous parallel processors.
- the system can further comprise a localizer (FIG. 25) for evaluating the allocated spatial attention based on weakly-supervising localization.
- the localizer can run on at least one of the numerous parallel processors.
- the system can further comprise the feed-forward neural network (FIG. 5) for producing at each decoder timestep an output based on the image context and the current decoder hidden state.
- the system can further comprise a vocabulary softmax (FIG. 5) for determining at each decoder timestep a normalized distribution of vocabulary probability masses over words in a vocabulary using the output.
- the vocabulary softmax can run on at least one of the numerous parallel processors.
- the vocabulary probability masses can identify respective likelihood that a vocabulary word is the next caption word.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- FIG. 7 depicts one implementation of image captioning using spatial attention applied by the spatial attention model of FIG. 6.
- the technology disclosed presents a method that performs the image captioning of FIG. 7 for machine generation of a natural language caption for an image.
- the method can be a computer- implemented method.
- the method can be a neural network-based method.
- the encoder can be a convolutional neural network (abbreviated CNN), as shown in FIG. 1.
- The method includes processing words through a decoder (FIGs. 2B and 6) by beginning at an initial timestep with a start-of-caption token <start> and the global image feature vector v^g and continuing in successive timesteps using a most recently emitted caption word w_{t−1} and the global image feature vector v^g as input to the decoder.
- the decoder can be a long short-term memory network (abbreviated LSTM), as shown in FIGs. 2B and 6.
- α_t denotes an attention map that comprises the attention probability masses of the image feature vectors V = [v_1, …, v_k], v_i ∈ R^d.
- The method includes submitting the image context vector c_t and the current hidden state of the decoder h_t to a feed-forward neural network and causing the feed-forward neural network to emit a next caption word w_t.
- the feed-forward neural network can be a multilayer perceptron (abbreviated MLP).
- The method includes repeating the processing of words through the decoder, the using, the applying, and the submitting until the caption word emitted is an end-of-caption token <end>.
- the iterations are performed by a controller, shown in FIG. 25.
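- The following sketch wires the illustrative helpers from the earlier sketches (global_image_feature, prepare_decoder_input, sn_lstm_step, spatial_attention, emit_word_distribution) into the captioning loop of FIG. 7, iterating until the end-of-caption token. It uses the Sn-LSTM sketch as the decoder step with its sentinel output unused, and greedy argmax decoding; all parameter names and shapes are assumptions, so it is a reading aid rather than the patented method.

```python
import numpy as np

def caption_image(A, params, E, start_id, end_id, max_len=20):
    """Spatial-attention captioning loop sketch (see helper sketches above)."""
    V, v_g = global_image_feature(A, params["W_a"], params["W_b"])
    h = np.zeros_like(v_g)                                  # initial hidden state
    m = np.zeros_like(v_g)                                  # initial memory cell state
    word, caption = start_id, []
    for _ in range(max_len):
        x_t = prepare_decoder_input(word, E, v_g)           # <start> or w_{t-1}, plus v_g
        h, m, _ = sn_lstm_step(x_t, h, m, params["lstm"])   # current hidden state h_t
        c_t, _ = spatial_attention(V, h, params["W_v"], params["W_g"], params["w_h"])
        p = emit_word_distribution(c_t, h, params["W_p"], params["W_out"])
        word = int(np.argmax(p))                            # greedy choice of next word
        if word == end_id:                                  # stop at end-of-caption token
            break
        caption.append(word)
    return caption
```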
- implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above.
- implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
- the technology disclosed presents a method of machine generation of a natural language caption for an image.
- the method can be a computer- implemented method.
- the method can be a neural network-based method.
- implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above.
- implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
- the technology disclosed presents a method of machine generation of a natural language caption for an image.
- This method uses a visually hermetic LSTM.
- the method can be a computer-implemented method.
- the method can be a neural network-based method.
- the encoder can be a convolutional neural network (abbreviated CNN).
- The method includes processing words through a decoder by beginning at an initial timestep with a start-of-caption token <start> and continuing in successive timesteps using a most recently emitted caption word w_{t−1} as input to the decoder.
- the decoder can be a visually hermetic long short-term memory network (abbreviated LSTM), shown in FIGs. 14 and 15.
- the method includes not supplying the image context vector c t to the decoder.
- The method includes submitting the image context vector c_t and the current hidden state of the decoder h_t to a feed-forward neural network and causing the feed-forward neural network to emit a caption word.
- The method includes repeating the processing of words through the decoder, the using, the not supplying, and the submitting until the caption word emitted is an end-of-caption token.
- implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above.
- implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
- FIG. 12 shows the disclosed adaptive attention model for image captioning rolled across multiple timesteps.
- the sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
- FIG. 13 illustrates one
- the technology disclosed presents a system that performs the image captioning of FIGs. 12 and 13.
- the system includes numerous parallel processors coupled to memory.
- the memory is loaded with computer instructions to automatically caption an image.
- The instructions, when executed on the parallel processors, implement the following actions.
- Mixing results of an image encoder (FIG. 1) and a language decoder (FIG. 8) to emit a sequence of caption words for an input image I.
- The mixing is governed by a gate probability mass/sentinel gate mass β_t determined from a visual sentinel vector s_t of the language decoder and a current hidden state vector h_t of the language decoder.
- The image encoder can be a convolutional neural network (abbreviated CNN).
- the language decoder can be a sentinel long short-term memory network (abbreviated Sn-LSTM), as shown in FIGs. 8 and 9.
- The language decoder can be a sentinel bi-directional long short-term memory network (abbreviated Sn-Bi-LSTM).
- the language decoder can be a sentinel gated recurrent unit network (abbreviated Sn-GRU).
- the language decoder can be a sentinel quasi-recurrent neural network (abbreviated Sn-QRNN).
- Determining the results of the image encoder by processing the image through the image encoder to produce image feature vectors V = [v_1, …, v_k], v_i ∈ R^d for regions of the image and computing a global image feature vector v^g from the image feature vectors.
- Determining the results of the language decoder by processing words through the language decoder includes: (1) beginning at an initial timestep with a start-of-caption token <start> and the global image feature vector v^g, (2) continuing in successive timesteps using a most recently emitted caption word and the global image feature vector v^g as input to the language decoder, and (3) at each timestep, generating a visual sentinel vector s_t that combines the most recently emitted caption word w_{t−1}, the global image feature vector v^g, a previous hidden state vector of the language decoder h_{t−1}, and memory contents of the language decoder.
- At each timestep, using at least a current hidden state vector of the language decoder to determine unnormalized attention values for the image feature vectors V = [v_1, …, v_k], v_i ∈ R^d and an unnormalized gate value for the visual sentinel vector s_t.
- The generation of the image context vector c_t is embodied in and implemented by the spatial attender of the adaptive attender, shown in FIGs. 11 and 13. Determining an adaptive context vector ĉ_t as a mix of the image context vector c_t and the visual sentinel vector s_t according to the gate probability mass/sentinel gate mass β_t.
- The adaptive context vector ĉ_t is embodied in and implemented by the mixer of the adaptive attender, shown in FIGs. 11 and 13.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- The adaptive context vector ĉ_t at timestep t can be determined as: ĉ_t = β_t s_t + (1 − β_t) c_t, where ĉ_t denotes the adaptive context vector, c_t denotes the image context vector, s_t denotes the visual sentinel vector, and β_t denotes the gate probability mass/sentinel gate mass.
- The visual sentinel vector s_t can encode visual sentinel information that includes visual context determined from the global image feature vector v^g and textual context determined from previously emitted caption words.
- The gate probability mass/sentinel gate mass β_t being unity can result in the adaptive context vector ĉ_t being equal to the visual sentinel vector s_t.
- In that case, the next caption word w_t is emitted only in dependence upon the visual sentinel information.
- The image context vector c_t can encode spatial image information conditioned on the current hidden state vector h_t of the language decoder.
- The gate probability mass/sentinel gate mass β_t being zero can result in the adaptive context vector ĉ_t being equal to the image context vector c_t.
- In that case, the next caption word w_t is emitted only in dependence upon the spatial image information.
- The gate probability mass/sentinel gate mass β_t can be a scalar value between unity and zero that enhances when the next caption word w_t is a visual word and diminishes when the next caption word w_t is a non-visual word or linguistically correlated to the previously emitted caption word w_{t−1}.
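- A minimal NumPy sketch of the adaptive attender's mixing step under the formula above: the unnormalized attention values and the sentinel gate value are concatenated, exponentially normalized, and used to form β_t and ĉ_t. The parameter names W_v, W_g, W_s, w_h and their shapes are assumptions made for illustration.

```python
import numpy as np

def adaptive_attention(V, h_t, s_t, W_v, W_g, W_s, w_h):
    """Adaptive attender sketch: spatial attention extended with the visual sentinel.

    V        : (k, d) image features
    h_t, s_t : (d,) current decoder hidden state and visual sentinel
    W_v, W_g, W_s : (a, d) and w_h : (a,) -- learned parameters (shapes assumed)
    Returns the adaptive context vector c_hat_t and the sentinel gate mass beta_t.
    """
    # Unnormalized attention values for the k image regions (spatial attender).
    z_t = np.tanh(V @ W_v.T + h_t @ W_g.T) @ w_h            # (k,)
    # Unnormalized gate value for the visual sentinel.
    z_s = np.tanh(W_s @ s_t + W_g @ h_t) @ w_h              # scalar
    # Concatenate and exponentially normalize: k attention masses + 1 gate mass.
    logits = np.append(z_t, z_s)
    e = np.exp(logits - logits.max())
    masses = e / e.sum()
    alpha_t, beta_t = masses[:-1], masses[-1]
    c_t = alpha_t @ V                                        # image context vector
    c_hat_t = beta_t * s_t + (1.0 - beta_t) * c_t            # adaptive context vector
    return c_hat_t, beta_t
```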
- the system can further comprise a trainer (FIG. 25), which in turn further comprises a preventer (FIG. 25).
- the preventer prevents, during training, backpropagation of gradients from the language decoder to the image encoder when the next caption word is a non-visual word or linguistically correlated to the previously emitted caption word.
- the trainer and the preventer can each run on at least one of the numerous parallel processors.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- the technology disclosed presents a method of automatic image captioning.
- the method can be a computer-implemented method.
- the method can be a neural network-based method.
- The method includes mixing results of an image encoder (FIG. 1) and a language decoder (FIGs. 8 and 9) to emit a sequence of caption words for an input image I.
- the mixing is embodied in and implemented by the mixer of the adaptive attender of FIG. 11.
- the mixing is governed by a gate probability mass (also referred to herein as the sentinel gate mass) determined from a visual sentinel vector of the language decoder and a current hidden state vector of the language decoder.
- the image encoder can be a convolutional neural network (abbreviated CNN).
- the language decoder can be a sentinel long short-term memory network (abbreviated Sn-LSTM).
- the language decoder can be a sentinel bi-directional long short-term memory network (abbreviated Sn-Bi-LSTM).
- the language decoder can be a sentinel gated recurrent unit network (abbreviated Sn-GRU).
- the language decoder can be a sentinel quasi- recurrent neural network (abbreviated Sn-QRNN).
- the method includes determining the results of the image encoder by processing the image through the image encoder to produce image feature vectors for regions of the image and computing a global image feature vector from the image feature vectors.
- The method includes determining the results of the language decoder by processing words through the language decoder. This includes: (1) beginning at an initial timestep with a start-of-caption token <start> and the global image feature vector, (2) continuing in successive timesteps using a most recently emitted caption word w_{t−1} and the global image feature vector as input to the language decoder, and (3) at each timestep, generating a visual sentinel vector that combines the most recently emitted caption word w_{t−1}, the global image feature vector, a previous hidden state vector of the language decoder, and memory contents of the language decoder.
- the method includes, at each timestep, using at least a current hidden state vector of the language decoder to determine unnormalized attention values for the image feature vectors and an unnormalized gate value for the visual sentinel vector.
- the method includes concatenating the unnormalized attention values and the unnormalized gate value and exponentially normalizing the concatenated attention and gate values to produce a vector of attention probability masses and the gate probability mass/sentinel gate mass.
- the method includes applying the attention probability masses to the image feature vectors to accumulate in an image context vector c t a weighted sum of the image feature vectors.
- the method includes determining an adaptive context vector ct as a mix of the image context vector and the visual sentinel vector St according to the gate probability mass/sentinel gate mass ⁇ .
- the method includes submitting the adaptive context vector ct and the current hidden state of the language decoder hi to a feed-forward neural network (MLLP) and causing the feed-forward neural network to emit a next caption word Wt .
- the method includes repeating the processing of words through the language decoder, the using, the concatenating, the applying, the determining, and the submitting until the next caption word emitted is an end-of-caption token <end>.
- the iterations are performed by a controller, shown in FIG. 25.
- implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above.
- implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
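- The following is a minimal, illustrative sketch of the method described above, written in Python with numpy. It is not the disclosure's implementation: the CNN encoder and the Sn-LSTM recurrence are reduced to stubs, and all names, sizes, and the simplified dot-product attention scoring are assumptions chosen only to make the control flow of the sentinel mixing and word emission concrete.

```python
import numpy as np

# Illustrative sizes (assumptions, not taken from the disclosure).
k, d, vocab_size = 49, 512, 10000   # image regions, hidden size, vocabulary size
rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def encode_image(image):
    """Stand-in for the CNN encoder: k regional image feature vectors."""
    return rng.standard_normal((k, d))

def decoder_step(x_t, v_g, h_prev, m_prev):
    """Stand-in for one Sn-LSTM step: returns the new hidden state, the new
    memory cell, and the visual sentinel derived from the memory contents."""
    h_t = np.tanh(x_t + v_g + h_prev)            # placeholder recurrence
    m_t = 0.5 * m_prev + 0.5 * h_t               # placeholder memory update
    aux = 1.0 / (1.0 + np.exp(-(x_t + h_prev)))  # auxiliary sentinel gate
    s_t = aux * np.tanh(m_t)                     # visual sentinel vector
    return h_t, m_t, s_t

def caption(image, embed, start_id, end_id, W_p, max_len=20):
    V = encode_image(image)                      # image feature vectors v_1..v_k
    v_g = V.mean(axis=0)                         # global image feature vector
    h, m = np.zeros(d), np.zeros(d)
    word, words = start_id, []
    for _ in range(max_len):
        h, m, s = decoder_step(embed[word], v_g, h, m)
        z = V @ h                                # unnormalized attention values (simplified scoring)
        g = s @ h                                # unnormalized gate value (simplified scoring)
        masses = softmax(np.concatenate([z, [g]]))
        alpha, beta = masses[:k], masses[k]      # attention masses, sentinel gate mass
        c = alpha @ V                            # image context vector
        c_hat = beta * s + (1.0 - beta) * c      # adaptive context vector
        p = softmax(W_p @ (c_hat + h))           # vocabulary softmax over next words
        word = int(np.argmax(p))                 # greedy choice of the next caption word
        if word == end_id:
            break
        words.append(word)
    return words

# Tiny usage example with random stand-ins for the embeddings and weights.
embed = rng.standard_normal((vocab_size, d))
W_p = rng.standard_normal((vocab_size, d))
print(caption(image=None, embed=embed, start_id=0, end_id=1, W_p=W_p))
```

- Greedy decoding is used in this sketch only for simplicity; beam search or sampling could equally consume the same per-timestep probabilities.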
- the technology disclosed presents an automated image captioning system. The system runs on numerous parallel processors.
- the system comprises a convolutional neural network (abbreviated CNN) encoder (FIG .11).
- the CNN encoder can run on at least one of the numerous parallel processors.
- the CNN encoder processes an input image through one or more convolutional layers to generate image features by image regions that represent the image.
- the system comprises a sentinel long short-term memory network (abbreviated Sn-LSTM) decoder (FIG. 8).
- Sn-LSTM decoder can run on at least one of the numerous parallel processors.
- the Sn-LSTM decoder processes a previously emitted caption word combined with the image features to emit a sequence of caption words over successive timesteps.
- the system comprises an adaptive attender (FIG .11).
- the adaptive attender can run on at least one of the numerous parallel processors.
- the adaptive attender spatially attends to the image features and produces an image context conditioned on a current hidden state of the Sn-LSTM decoder.
- the adaptive attender extracts, from the Sn-LSTM decoder, a visual sentinel that includes visual context determined from previously processed image features and textual context determined from previously emitted caption words.
- the adaptive attender mixes the image context c_t and the visual sentinel s_t for next caption word w_t emittance.
- the mixing is governed by a sentinel gate mass β_t determined from the visual sentinel s_t and the current hidden state h_t of the Sn-LSTM decoder (this mixing relationship is written out as an equation following this system description).
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the adaptive attender enhances attention directed to the image context when a next caption word is a visual word, as shown in FIGs. 16, 18, and 19.
- the adaptive attender enhances attention directed to the visual sentinel when a next caption word is a non-visual word or linguistically correlated to the previously emitted caption word, as shown in FIGs. 16, 18, and 19.
- the system can further comprise a trainer, which in turn further comprises a preventer.
- the preventer prevents, during training, backpropagation of gradients from the Sn-LSTM decoder to the CNN encoder when a next caption word is a non-visual word or linguistically correlated to the previously emitted caption word.
- the trainer and the preventer can each run on at least one of the numerous parallel processors.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
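- Written out as an equation (using the symbols above; this restates the mixing already described rather than adding anything new), the adaptive context is a convex combination of the visual sentinel and the image context:

```latex
\hat{c}_t = \beta_t \, s_t + (1 - \beta_t)\, c_t, \qquad \beta_t \in [0, 1]
```

- A value of β_t near 1 means the next word is emitted mostly from the decoder's memory (the visual sentinel), while a value near 0 means it is emitted mostly from the spatially attended image context.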
- the technology disclosed presents an automated image captioning system.
- the system runs on numerous parallel processors.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the system comprises an image encoder (FIG. 1).
- the image encoder can run on at least one of the numerous parallel processors.
- the image encoder processes an input image through a convolutional neural network (abbreviated CNN) to generate an image representation.
- the system comprises a language decoder (FIG. 8).
- the language decoder can run on at least one of the numerous parallel processors.
- the language decoder processes a previously emitted caption word combined with the image representation through a recurrent neural network (abbreviated RNN) to emit a sequence of caption words.
- the system comprises an adaptive attender (FIG. 11).
- the adaptive attender can run on at least one of the numerous parallel processors.
- the adaptive attender enhances attention directed to the image representation when a next caption word is a visual word.
- the adaptive attender enhances attention directed to memory contents of the language decoder when the next caption word is a non-visual word or linguistically correlated to the previously emitted caption word.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- the technology disclosed presents an automated image captioning system.
- the system runs on numerous parallel processors.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the system comprises an image encoder (FIG. 1).
- the image encoder can run on at least one of the numerous parallel processors.
- the image encoder processes an input image through a convolutional neural network (abbreviated CNN) to generate an image representation.
- the system comprises a language decoder (FIG. 8).
- the language decoder can run on at least one of the numerous parallel processors.
- the language decoder processes a previously emitted caption word combined with the image representation through a recurrent neural network (abbreviated RNN) to emit a sequence of caption words.
- the system comprises a sentinel gate mass/gate probability mass β_t.
- the sentinel gate mass can run on at least one of the numerous parallel processors.
- the sentinel gate mass controls accumulation of the image representation and memory contents of the language decoder for next caption word emittance.
- the sentinel gate mass is determined from a visual sentinel of the language decoder and a current hidden state of the language decoder.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- the technology disclosed presents a system that automates a task.
- the system runs on numerous parallel processors.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- the system comprises an encoder.
- the encoder can run on at least one of the numerous parallel processors.
- the encoder processes an input through at least one neural network to generate an encoded representation.
- the system comprises a decoder.
- the decoder can run on at least one of the numerous parallel processors.
- the decoder processes a previously emitted output combined with the encoded representation through at least one neural network to emit a sequence of outputs.
- the system comprises an adaptive attender.
- the adaptive attender can run on at least one of the numerous parallel processors.
- the adaptive attender uses a sentinel gate mass to mix the encoded representation and memory contents of the decoder for emitting a next output.
- the sentinel gate mass is determined from the memory contents of the decoder and a current hidden state of the decoder.
- the sentinel gate mass can run on at least one of the numerous parallel processors.
- when the task is text summarization, the system comprises a first recurrent neural network (abbreviated RNN) as the encoder that processes an input document to generate a document encoding and a second RNN as the decoder that uses the document encoding to emit a sequence of summary words.
- when the task is question answering, the system comprises a first RNN as the encoder that processes an input question to generate a question encoding and a second RNN as the decoder that uses the question encoding to emit a sequence of answer words.
- when the task is machine translation, the system comprises a first RNN as the encoder that processes a source language sequence to generate a source encoding and a second RNN as the decoder that uses the source encoding to emit a target language sequence of translated words.
- when the task is video captioning, the system comprises a combination of a convolutional neural network (abbreviated CNN) and a first RNN as the encoder that processes video frames to generate a video encoding and a second RNN as the decoder that uses the video encoding to emit a sequence of caption words.
- when the task is image captioning, the system comprises a CNN as the encoder that processes an input image to generate an image encoding and an RNN as the decoder that uses the image encoding to emit a sequence of caption words (a task-agnostic sketch of this adaptive mixing follows this description).
- the system can determine an alternative representation of the input from the encoded representation. The system can then use the alternative representation, instead of the encoded representation, for processing by the decoder and mixing by the adaptive attender.
- the alternative representation can be a weighted summary of the encoded representation conditioned on the current hidden state of the decoder.
- the alternative representation can be an averaged summary of the encoded representation.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
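- The task-agnostic mixing shared by the summarization, question answering, machine translation, video captioning, and image captioning instantiations above can be sketched as follows. This is an illustrative numpy sketch, not the disclosure's implementation; the dot-product scoring and all names and sizes are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_mix(encoded, h_t, sentinel):
    """Mix an encoded representation (n items x d) with the decoder's
    memory-derived sentinel, governed by a sentinel gate mass."""
    scores = encoded @ h_t                       # relevance of each encoded item
    gate_score = sentinel @ h_t                  # relevance of the decoder memory
    masses = softmax(np.concatenate([scores, [gate_score]]))
    beta = masses[-1]                            # sentinel gate mass
    context = masses[:-1] @ encoded              # weighted summary of the encoding
    return beta * sentinel + (1.0 - beta) * context

# Example: a "document encoding" of 5 items mixed at one decoder timestep.
rng = np.random.default_rng(1)
enc = rng.standard_normal((5, 16))
h = rng.standard_normal(16)
s = rng.standard_normal(16)
print(adaptive_mix(enc, h, s).shape)             # (16,)
```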
- the technology disclosed presents a system for machine generation of a natural language caption for an input image I.
- the system runs on numerous parallel processors.
- the system can be a computer-implemented system.
- the system can be a neural network-based system.
- FIG. 10 depicts the disclosed adaptive attention model for image captioning that automatically decides how heavily to rely on visual information, as opposed to linguistic information, to emit a next caption word.
- the sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
- FIG. 11 depicts one implementation of modules of an adaptive attender that is part of the adaptive attention model disclosed in FIG. 12.
- the adaptive attender comprises a spatial attender, an extractor, a sentinel gate mass determiner, a sentinel gate mass softmax, and a mixer (also referred to herein as an adaptive context vector producer or an adaptive context producer).
- the spatial attender in turn comprises an adaptive comparator, an adaptive attender softmax, and an adaptive convex combination accumulator.
- the system comprises a convolutional neural network (abbreviated CNN) encoder (FIG. 11) for processing the input image I to produce the image features.
- the CNN encoder can run on at least one of the numerous parallel processors.
- the system comprises a sentinel long short-term memory network (abbreviated Sn-LSTM) decoder (FIG. 8) for processing a previously emitted caption word w_(t-1) combined with the image features to produce a current hidden state h_t of the Sn-LSTM decoder at each decoder timestep.
- Sn-LSTM decoder can run on at least one of the numerous parallel processors.
- the system comprises an adaptive attender, shown in FIG. 11.
- the adaptive attender can run on at least one of the numerous parallel processors.
- the adaptive attender further comprises a spatial attender (FIGs. 11 and 13) for spatially attending to the image features at each decoder timestep to produce an image context c_t conditioned on the current hidden state h_t of the Sn-LSTM decoder.
- the adaptive attender further comprises an extractor (FIGs. 11 and 13) for extracting, from the Sn-LSTM decoder, a visual sentinel s_t at each decoder timestep.
- the visual sentinel s_t includes visual context determined from previously processed image features and textual context determined from previously emitted caption words.
- the adaptive attender further comprises a mixer (FIGs. 11 and 13) for mixing the image context c_t and the visual sentinel s_t to produce an adaptive context ĉ_t at each decoder timestep.
- the mixing is governed by a sentinel gate mass β_t determined from the visual sentinel s_t and the current hidden state h_t of the Sn-LSTM decoder.
- the spatial attender, the extractor, and the mixer can each run on at least one of the numerous parallel processors.
- the system comprises an emitter (FIGs. 5 and 13) for generating the natural language caption for the input image I based on the adaptive contexts ĉ_t produced over successive decoder timesteps by the mixer.
- the emitter can run on at least one of the numerous parallel processors.
- the Sn-LSTM decoder can further comprise an auxiliary sentinel gate (FIG. 8) for producing the visual sentinel s_t at each decoder timestep.
- the auxiliary sentinel gate can run on at least one of the numerous parallel processors.
- the adaptive attender can further comprise a sentinel gate mass softmax (FIGs. 11 and 13) for exponentially normalizing the attention values of the image features and the gate value of the visual sentinel to produce an adaptive sequence of attention probability masses and the sentinel gate mass β_t at each decoder timestep.
- the sentinel gate mass softmax can run on at least one of the numerous parallel processors.
- the adaptive sequence can be determined as softmax([z_t ; w_h^T tanh(W_s s_t + W_g h_t)]), where z_t is the vector of attention values for the image features, [ ; ] denotes concatenation, W_s and w_h are learnt weight parameters, and W_g can be the same weight parameter as in equation (6). The result is the attention distribution over both the image features and the visual sentinel.
- the last element of the adaptive sequence is the sentinel gate mass β_t.
- the probability over a vocabulary of possible words at time t can be determined by the vocabulary softmax of the emitter (FIG. 5) as p_t = softmax(W_p (ĉ_t + h_t)), where W_p is the weight parameter that is learnt (a worked sketch of these attention and vocabulary computations follows this system description).
- the adaptive attender can further comprise a sentinel gate mass determiner (FIGs. 11 and 13) for producing at each decoder timestep the sentinel gate mass β_t as a result of interaction between the current decoder hidden state h_t and the visual sentinel s_t.
- the sentinel gate mass determiner can run on at least one of the numerous parallel processors.
- the spatial attender can further comprise an adaptive comparator (FIGs. 11 and 13) for producing at each decoder timestep the attention values as a result of interaction between the current decoder hidden state h_t and the image features.
- adaptive comparator can run on at least one of the numerous parallel processors.
- the attention values for the image features and the gate value for the visual sentinel are determined by processing the current decoder hidden state together with the image features and the visual sentinel, respectively.
- the spatial attender can further comprise an adaptive attender softmax (FIGs. 11 and 13) for exponentially normalizing the attention values for the image features to produce the attention probability masses at each decoder timestep.
- the adaptive attender softmax can run on at least one of the numerous parallel processors.
- the spatial attender can further comprise an adaptive convex combination accumulator (also referred to herein as mixer or adaptive context producer or adaptive context vector producer) (FIGs. 11 and 13) for accumulating, at each decoder timestep, the image context as a convex combination of the image features scaled by attention probability masses determined using the current decoder hidden state.
- the adaptive convex combination accumulator can run on at least one of the numerous parallel processors.
- the system can further comprise a trainer (FIG. 25).
- the trainer in turn further comprises a preventer for preventing backpropagation of gradients from the Sn-LSTM decoder to the CNN encoder when a next caption word is a non-visual word or linguistically correlated to a previously emitted caption word.
- the trainer and the preventer can each run on at least one of the numerous parallel processors.
- the adaptive attender further comprises the sentinel gate mass/gate probability mass β_t for enhancing attention directed to the image context when a next caption word is a visual word.
- the adaptive attender further comprises the sentinel gate mass/gate probability mass β_t for enhancing attention directed to the visual sentinel when a next caption word is a non-visual word or linguistically correlated to the previously emitted caption word.
- the sentinel gate mass can run on at least one of the numerous parallel processors.
- Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
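- The attention computations attributed above to the adaptive comparator, adaptive attender softmax, adaptive convex combination accumulator, sentinel gate mass determiner, sentinel gate mass softmax, mixer, and emitter can be sketched in numpy as follows. The weight names and shapes (W_v, W_g, W_s, w_h, W_p) follow the common single-layer formulation of this attention mechanism and are assumptions for illustration, not the disclosure's exact parameterization.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_attention_step(V, h_t, s_t, W_v, W_g, W_s, w_h, W_p):
    """One decoder timestep of the adaptive attender.
    V   : (k, d) image features        h_t : (d,) current decoder hidden state
    s_t : (d,) visual sentinel         W_* : learned weights (illustrative shapes)
    Returns next-word probabilities, the adaptive context, and the gate mass."""
    # Adaptive comparator: unnormalized attention values for the k image features.
    z = np.tanh(V @ W_v.T + h_t @ W_g.T) @ w_h            # (k,)
    # Sentinel gate mass determiner: unnormalized gate value for the sentinel.
    g = np.tanh(s_t @ W_s.T + h_t @ W_g.T) @ w_h          # scalar
    # Sentinel gate mass softmax over the adaptive sequence [z; g].
    masses = softmax(np.concatenate([z, [g]]))
    alpha, beta = masses[:-1], masses[-1]
    # Adaptive convex combination accumulator + mixer.
    c = alpha @ V                                         # image context
    c_hat = beta * s_t + (1.0 - beta) * c                 # adaptive context
    # Emitter's vocabulary softmax.
    p = softmax(W_p @ (c_hat + h_t))
    return p, c_hat, beta

# Tiny usage example with illustrative sizes.
rng = np.random.default_rng(0)
k, d, att, vocab = 4, 8, 6, 12
V   = rng.standard_normal((k, d))
h_t = rng.standard_normal(d)
s_t = rng.standard_normal(d)
W_v = rng.standard_normal((att, d)); W_g = rng.standard_normal((att, d))
W_s = rng.standard_normal((att, d)); w_h = rng.standard_normal(att)
W_p = rng.standard_normal((vocab, d))
p, c_hat, beta = adaptive_attention_step(V, h_t, s_t, W_v, W_g, W_s, w_h, W_p)
print(p.sum(), c_hat.shape, float(beta))
```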
- the technology disclosed presents a recurrent neural network system (abbreviated RNN).
- the RNN runs on numerous parallel processors.
- the RNN can be a computer-implemented system.
- the RNN comprises a sentinel long short-term memory network (abbreviated Sn-LSTM) that receives inputs at each of a plurality of timesteps.
- the inputs include at least an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep.
- the Sn-LSTM can run on at least one of the numerous parallel processors.
- the RNN generates outputs at each of the plurality of timesteps by processing the inputs through gates of the Sn-LSTM.
- the gates include at least an input gate, a forget gate, an output gate, and an auxiliary sentinel gate. Each of the gates can run on at least one of the numerous parallel processors.
- the RNN stores in a memory cell of the Sn-LSTM auxiliary information accumulated over time from (1) processing of the inputs by the input gate, the forget gate, and the output gate and (2) updating of the memory cell with gate outputs produced by the input gate, the forget gate, and the output gate.
- the memory cell can be maintained and persisted in a database (FIG 9).
- the auxiliary sentinel gate modulates the stored auxiliary information from the memory cell for next prediction.
- the modulation is conditioned on the input for the current timestep, the hidden state from the previous timestep, and the auxiliary input for the current timestep.
- the auxiliary input can be visual input comprising image data and the input can be a text embedding of a most recently emitted word and/or character.
- the auxiliary input can be a text encoding from another long short-term memory network (abbreviated LSTM) of an input document and the input can be a text embedding of a most recently emitted word and/or character.
- the auxiliary input can be a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most recently emitted word and/or character.
- the auxiliary input can be a prediction derived from a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most recently emitted word and/or character.
- the auxiliary input can be an output of a convolutional neural network (abbreviated CNN).
- the auxiliary input can be an output of an attention network.
- the prediction can be a classification label embedding.
- the Sn-LSTM can be further configured to receive multiple auxiliary inputs at a timestep, with at least one auxiliary input comprising concatenated vectors.
- the auxiliary input can be received only at an initial timestep.
- the auxiliary sentinel gate can produce a sentinel state at each timestep as an indicator of the modulated auxiliary information.
- the outputs can comprise at least a hidden state for the current timestep and a sentinel state for the current timestep.
- the RNN can be further configured to use at least the hidden state for the current timestep and the sentinel state for the current timestep for making the next prediction.
- the inputs can further include a bias input and a previous state of the memory cell.
- the Sn-LSTM can further include an input activation function.
- the auxiliary sentinel gate can gate a pointwise hyperbolic tangent (abbreviated tanh) of the memory cell.
- the auxiliary sentinel gate at the current timestep t can be defined as aux_t = σ(W_x x_t + W_h h_(t-1)), where x_t is the input for the current timestep, h_(t-1) is the hidden state from the previous timestep, W_x and W_h are learnt weight parameters, aux_t is the auxiliary sentinel gate applied on the memory cell, and σ denotes logistic sigmoid activation.
- the sentinel state/visual sentinel at the current timestep t is defined as s_t = aux_t ⊙ tanh(m_t), where s_t is the sentinel state, aux_t is the auxiliary sentinel gate applied on the memory cell m_t, ⊙ represents element-wise product, and tanh denotes hyperbolic tangent activation.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
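- A minimal numpy sketch of one Sn-LSTM timestep, consistent with the gate and sentinel definitions above, is shown below. The concatenated weight layout, the inclusion of the auxiliary input in every gate, and all names and sizes are illustrative assumptions rather than the disclosure's exact parameterization.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sn_lstm_step(x_t, aux_t, h_prev, m_prev, W, U, A, b):
    """One Sn-LSTM timestep with an auxiliary sentinel gate.
    x_t: current input, aux_t: auxiliary input (e.g., visual features),
    h_prev/m_prev: previous hidden state and memory cell,
    W/U/A/b: input, recurrent, auxiliary weights and bias for all five gates."""
    pre = x_t @ W + h_prev @ U + aux_t @ A + b   # pre-activations, shape (5*d,)
    i, f, o, g, a = np.split(pre, 5)             # input, forget, output, candidate, sentinel
    i, f, o, a = sigmoid(i), sigmoid(f), sigmoid(o), sigmoid(a)
    m_t = f * m_prev + i * np.tanh(g)            # memory cell update
    h_t = o * np.tanh(m_t)                       # hidden state for the current timestep
    s_t = a * np.tanh(m_t)                       # sentinel state / visual sentinel
    return h_t, m_t, s_t

# Usage with illustrative sizes.
rng = np.random.default_rng(0)
d = 8
x, aux = rng.standard_normal(d), rng.standard_normal(d)
h, m = np.zeros(d), np.zeros(d)
W, U, A = (rng.standard_normal((d, 5 * d)) for _ in range(3))
b = np.zeros(5 * d)
h, m, s = sn_lstm_step(x, aux, h, m, W, U, A, b)
print(h.shape, m.shape, s.shape)
```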
- the technology disclosed presents a sentinel long short-term memory network (abbreviated Sn-LSTM) that processes auxiliary input combined with input and previous hidden state.
- Sn-LSTM runs on numerous parallel processors.
- the Sn-LSTM can be a computer-implemented system.
- the Sn-LSTM comprises an auxiliary sentinel gate that applies on a memory cell of the Sn-LSTM and modulates use of auxiliary information during next prediction.
- the auxiliary information is accumulated over time in the memory cell at least from the processing of the auxiliary input combined with the input and the previous hidden state.
- the auxiliary sentinel gate can run on at least one of the numerous parallel processors.
- the memory cell can be maintained and persisted in a database (FIG 9).
- the auxiliary sentinel gate can produce a sentinel state at each timestep as an indicator of the modulated auxiliary information, conditioned on an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep.
- the auxiliary sentinel gate can gate a pointwise hyperbolic tangent (abbreviated tanh) of the memory cell.
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
- the technology disclosed presents a method of extending a long short-term memory network (abbreviated LSTM).
- the method can be a computer-implemented method.
- the method can be a neural network-based method.
- the method includes extending a long short-term memory network (abbreviated LSTM) to include an auxiliary sentinel gate.
- the auxiliary sentinel gate applies on a memory cell of the LSTM and modulates use of auxiliary information during next prediction.
- the auxiliary information is accumulated over time in the memory cell at least from the processing of auxiliary input combined with current input and previous hidden state.
- the auxiliary sentinel gate can produce a sentinel state at each timestep as an indicator of the modulated auxiliary information, conditioned on an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep.
- the auxiliary sentinel gate can gate a pointwise hyperbolic tangent (abbreviated tanh) of the memory cell.
- implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above.
- implementations may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
- the technology disclosed presents a recurrent neural network system (abbreviated RNN) for machine generation of a natural language caption for an image.
- the RNN runs on numerous parallel processors.
- the RNN can be a computer- implemented system.
- FIG. 9 shows one implementation of modules of a recurrent neural network (abbreviated RNN) that implements the Sn-LSTM of FIG. 8.
- the RNN comprises an input provider (FIG. 9) for providing a plurality of inputs to a sentinel long short-term memory network (abbreviated Sn-LSTM) over successive timesteps.
- the inputs include at least an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep.
- the input provider can run on at least one of the numerous parallel processors.
- the RNN comprises a gate processor (FIG. 9) for processing the inputs through each gate in a plurality of gates of the Sn-LSTM.
- the gates include at least an input gate (FIGs. 8 and 9), a forget gate (FIGs. 8 and 9), an output gate (FIGs. 8 and 9), and an auxiliary sentinel gate (FIGs. 8 and 9).
- the gate processor can run on at least one of the numerous parallel processors.
- Each of the gates can run on at least one of the numerous parallel processors.
- the RNN comprises a memory cell (FIG. 9) of the Sn-LSTM for storing auxiliary information accumulated over time from processing of the inputs by the gate processor.
- the memory cell can be maintained and persisted in a database (FIG 9).
- the RNN comprises a memory cell updater (FIG. 9) for updating the memory cell with gate outputs produced by the input gate (FIGs. 8 and 9), the forget gate (FIGs. 8 and 9), and the output gate (FIGs. 8 and 9).
- the memory cell updater can run on at least one of the numerous parallel processors.
- the RNN comprises the auxiliary sentinel gate (FIGs. 8 and 9) for modulating the stored auxiliary information from the memory cell to produce a sentinel state at each timestep.
- the modulation is conditioned on the input for the current timestep, the hidden state from the previous timestep, and the auxiliary input for the current timestep.
- the RNN comprises an emitter (FIG. 5) for generating the natural language caption for the image based on the sentinel states produced over successive timesteps by the auxiliary sentinel gate.
- the emitter can run on at least one of the numerous parallel processors.
- the auxiliary sentinel gate can further comprise an auxiliary nonlinearity layer (FIG. 9) for squashing results of processing the inputs within a predetermined range.
- the auxiliary nonlinearity layer can run on at least one of the numerous parallel processors.
- the Sn-LSTM can further comprise a memory nonlinearity layer (FIG. 9) for applying a nonlinearity to contents of the memory cell.
- the memory nonlinearity layer can run on at least one of the numerous parallel processors.
- the Sn-LSTM can further comprise a sentinel state producer (FIG. 9) for combining the squashed results from the auxiliary sentinel gate with the nonlinearized contents of the memory cell to produce the sentinel state.
- the sentinel state producer can run on at least one of the numerous parallel processors.
- the input provider (FIG. 9) can provide the auxiliary input that is visual input comprising image data and the input is a text embedding of a most recently emitted word and/or character.
- the input provider (FIG. 9) can provide the auxiliary input that is a text encoding from another long short-term memory network (abbreviated LSTM) of an input document and the input is a text embedding of a most recently emitted word and/or character.
- the input provider (FIG. 9) can provide the auxiliary input that is a hidden state from another LSTM that encodes sequential data and the input is a text embedding of a most recently emitted word and/or character.
- the input provider (FIG. 9) can provide the auxiliary input that is a prediction derived from a hidden state from another LSTM that encodes sequential data and the input is a text embedding of a most recently emitted word and/or character.
- the input provider (FIG. 9) can provide the auxiliary input that is an output of a convolutional neural network (abbreviated CNN).
- the input provider (FIG.9) can provide the auxiliary input that is an output of an attention network.
- the input provider can further provide multiple auxiliary inputs to the Sn- LSTM at a timestep, with at least one auxiliary input further comprising concatenated features.
- the Sn-LSTM can further comprise an activation gate (FIG. 9).
- implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
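- To make the module decomposition above concrete, the following numpy sketch names one small function per module (input provider, gate processor, memory cell updater, auxiliary nonlinearity layer, memory nonlinearity layer, sentinel state producer) and wires one timestep together. The bodies are stubs chosen only to show how data flows between the modules; every name and size is an illustrative assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One illustrative function per module described above.

def input_provider(x_t, h_prev, aux_t):
    return np.concatenate([x_t, h_prev, aux_t])

def gate_processor(inputs, W, b):
    """Produces the input, forget, and output gates, the raw auxiliary
    sentinel gate, and a candidate update from the provided inputs."""
    i, f, o, a_raw, g = np.split(W @ inputs + b, 5)
    return sigmoid(i), sigmoid(f), sigmoid(o), a_raw, g

def memory_cell_updater(m_prev, i, f, g):
    return f * m_prev + i * np.tanh(g)           # updated memory cell

def auxiliary_nonlinearity_layer(a_raw):
    return sigmoid(a_raw)                        # squashes the sentinel gate to (0, 1)

def memory_nonlinearity_layer(m_t):
    return np.tanh(m_t)                          # nonlinearized memory contents

def sentinel_state_producer(a_raw, m_t):
    return auxiliary_nonlinearity_layer(a_raw) * memory_nonlinearity_layer(m_t)

# One timestep wired together with illustrative sizes.
rng = np.random.default_rng(0)
d = 8
x, h, aux, m = (rng.standard_normal(d) for _ in range(4))
W, b = rng.standard_normal((5 * d, 3 * d)), np.zeros(5 * d)
i, f, o, a_raw, g = gate_processor(input_provider(x, h, aux), W, b)
m = memory_cell_updater(m, i, f, g)
h = o * memory_nonlinearity_layer(m)             # new hidden state
s = sentinel_state_producer(a_raw, m)            # sentinel state
print(h.shape, s.shape)
```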
- a visual sentinel vector can represent, identify, and/or embody a visual sentinel.
- a sentinel state vector can represent, identify, and/or embody a sentinel state.
- This application uses the phrases "sentinel gate" and "auxiliary sentinel gate" interchangeably.
- a hidden state vector can represent, identify, and/or embody a hidden state.
- a hidden state vector can represent, identify, and/or embody hidden state information.
- An input vector can represent, identify, and/or embody an input.
- An input vector can represent, identify, and/or embody a current input.
- a memory cell vector can represent, identify, and/or embody a memory cell state.
- a memory cell state vector can represent, identify, and/or embody a memory cell state.
- An image feature vector can represent, identify, and/or embody an image feature.
- An image feature vector can represent, identify, and/or embody a spatial image feature.
- a global image feature vector can represent, identify, and/or embody a global image feature.
- This application uses the phrases "word embedding" and "word embedding vector" interchangeably.
- a word embedding vector can represent, identify, and or embody a word embedding.
- An image context vector can represent, identify, and/or embody an image context.
- A context vector can represent, identify, and/or embody an image context.
- An adaptive image context vector can represent, identify, and/or embody an adaptive image context.
- An adaptive context vector can represent, identify, and/or embody an adaptive image context.
- This application uses the phrases “gate probability mass” and “sentinel gate mass” interchangeably.
- FIG. 17 illustrates some example captions and spatial attention maps for specific words in the captions. It can be seen that our model learns alignments that correspond with human intuition. Even in the examples in which incorrect captions were generated, the model looked at reasonable regions in the image.
- FIG. 18 shows visualization of some example image captions, word-wise visual grounding probabilities, and corresponding image spatial attention maps generated by our model.
- the model successfully learns how heavily to attend to the image and adapts the attention accordingly. For example, for non-visual words such as "of" and "a", the model attends less to the image. For visual words like "red", "rose", "doughnuts", "woman", and "snowboard", our model assigns high visual grounding probabilities (over 0.9). Note that the same word can be assigned different visual grounding probabilities when generated in different contexts. For example, the word "a" typically has a high visual grounding probability at the beginning of a sentence, since without any language context, the model needs the visual information to determine plurality (or not). On the other hand, the visual grounding probability of "a" in the phrase "on a table" is much lower, since it is unlikely for something to be on more than one table.
- FIG. 19 presents similar results as shown in FIG. 18 on another set of example image captions, word-wise visual grounding probabilities, and corresponding image/spatial attention maps generated using the technology disclosed.
- FIGs. 20 and 21 are example rank-probability plots that illustrate performance of our model on the COCO (common objects in context) and Flickr30k datasets respectively. It can be seen that our model attends to the image more when generating object words like "dishes", "people", "cat", "boat"; attribute words like "giant", "metal", "yellow"; and number words like "three". When the word is non-visual, our model learns to not attend to the image, such as for "the", "of", "to", etc. For more abstract words such as "crossing", "during", etc., our model attends less than for the visual words and more than for the non-visual words. The model does not rely on any syntactic features or external knowledge. It discovers these trends automatically through learning.
- FIG. 22 is an example graph that shows localization accuracy over the generated caption for top 45 most frequent COCO object categories.
- the blue colored bars show localization accuracy of the spatial attention model and the red colored bars show localization accuracy of the adaptive attention model.
- FIG. 22 shows that both models perform well on categories such as "cat", "bed", "bus", and "truck". On smaller objects, such as "sink", localization accuracy is lower for both models.
- FIG. 23 is a table that shows performance of the technology disclosed on the Flickr30k and COCO datasets based on various natural language processing metrics, including BLEU (bilingual evaluation understudy), METEOR (metric for evaluation of translation with explicit ordering), CIDEr (consensus-based image description evaluation), ROUGE-L (recall-oriented understudy for gisting evaluation-longest common subsequence), and SPICE (semantic propositional image caption evaluation).
- the table in FIG. 23 shows that our adaptive attention model significantly outperforms our spatial attention model.
- the CIDEr score of our adaptive attention model is 0.531 versus 0.493 for the spatial attention model on the Flickr30k dataset.
- the CIDEr scores of the adaptive attention model and the spatial attention model on the COCO dataset are 1.085 and 1.029 respectively.
- FIG. 25 is a simplified block diagram of a computer system that can be used to implement the technology disclosed.
- Computer system includes at least one central processing unit (CPU) that communicates with a number of peripheral devices via bus subsystem.
- peripheral devices can include a storage subsystem including, for example, memory devices and a file storage subsystem, user interface input devices, user interface output devices, and a network interface subsystem. The input and output devices allow user interaction with computer system.
- Network interface subsystem provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems.
- At least the spatial attention model, the controller, the localizer (FIG.25), the trainer (which comprises the preventer), the adaptive attention model, and the sentinel LSTM (Sn-LSTM) are communicably linked to the storage subsystem and to the user interface input devices.
- User interface input devices can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices.
- pointing devices such as a mouse, trackball, touchpad, or graphics tablet
- audio input devices such as voice recognition systems and microphones
- input device is intended to include all possible types of devices and ways to input information into computer system.
- User interface output devices can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices.
- the display subsystem can include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image.
- the display subsystem can also provide a non-visual display such as audio output devices.
- output device is intended to include all possible types of devices and ways to output information from computer system to the user or to another machine or computer system.
- Storage subsystem stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by deep learning processors.
- Deep learning processors can be graphics processing units (GPUs) or field-programmable gate arrays (FPGAs). Deep learning processors can be hosted by a deep learning cloud platform such as Google Cloud PlatformTM, XilinxTM, and CirrascaleTM.
- Examples of deep learning processors include Google's Tensor Processing Unit (TPU)TM, rackmount solutions like GX4 Rackmount SeriesTM, GX8 Rackmount SeriesTM, NVIDIA DGX-1TM, Microsoft's Stratix V FPGATM, Graphcore's Intelligent Processor Unit (IPU)TM, Qualcomm's Zeroth PlatformTM with Snapdragon processorsTM, NVIDIA's VoltaTM, NVIDIA's DRIVE PXTM, NVIDIA's JETSON TX1/TX2 MODULETM, Intel's NirvanaTM, Movidius VPUTM, Fujitsu DPITM, ARM's
- Memory subsystem used in the storage subsystem can include a number of memories including a main random access memory (RAM) for storage of instructions and data during program execution and a read only memory (ROM) in which fixed instructions are stored.
- a file storage subsystem can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges.
- the modules implementing the functionality of certain implementations can be stored by file storage subsystem in the storage subsystem, or in other machines accessible by the processor.
- Bus subsystem provides a mechanism for letting the various components and subsystems of computer system communicate with each other as intended. Although bus subsystem is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses.
- Computer system itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system depicted in FIG. 25 is intended only as a specific example for purposes of illustrating the preferred embodiments of the present invention. Many other configurations of computer system are possible having more or less components than the computer system depicted in FIG. 25.
- In the plot of FIG. 22, "Spatial Attention" and "Adaptive Attention" denote the proposed spatial attention model and adaptive attention model, respectively. The COCO categories are ranked based on the alignment results of the adaptive attention model, which cover 93.8% and 94.0% of total matched regions for spatial attention and adaptive attention, respectively.
Abstract
The technology disclosed presents a novel spatial attention model that uses current hidden state information of a decoder long short-term memory (LSTM) to guide attention and to extract spatial image features for use in image captioning. The technology disclosed also presents a novel adaptive attention model for image captioning that mixes visual information from a convolutional neural network (CNN) and linguistic information from an LSTM. At each timestep, the adaptive attention model automatically decides how heavily to rely on the image, as opposed to the linguistic model, to emit the next caption word. The technology disclosed further adds a new auxiliary sentinel gate to an LSTM architecture and produces a sentinel LSTM (Sn-LSTM). The sentinel gate produces a visual sentinel at each timestep, which is an additional representation, derived from the LSTM's memory, of long and short term visual and linguistic information.
Description
ADAPTIVE ATTENTION MODEL FOR IMAGE CAPTIONING
CROSS REFERENCE TO OTHER APPLICATIONS
[0001] This application claims the benefit of US Provisional Patent Application No.
62/424,353, entitled "KNOWING WHEN TO LOOK: ADAPTIVE ATTENTION VIA A VISUAL SENTINEL FOR IMAGE CAPTIONING" (Atty. Docket No. SALE 1184- 1/1950PROV), filed on November 18, 2016. The priority provisional application is hereby incorporated by reference for all purposes;
[0002] This application claims the benefit of US Nonprovisional Patent Application No. 15/817,153, entitled "SPATIAL ATTENTION MODEL FOR IMAGE CAPTIONING" (Atty. Docket No. SALE 1184-2/1950US1), filed on November 17, 2017. The priority nonprovisional application is hereby incorporated by reference for all purposes;
[0003] This application claims the benefit of US Nonprovisional Patent Application No. 15/817,161, entitled "ADAPTIVE ATTENTION MODEL FOR IMAGE CAPTIONING" (Atty. Docket No. SALE 1184-3/1950US2), filed on November 17, 2017. The priority nonprovisional application is hereby incorporated by reference for all purposes;
[0004] This application claims the benefit of US Nonprovisional Patent Application No. 15/817,165, entitled "SENTINEL LONG SHORT-TERM MEMORY (Sn-LSTM)" (Atty.
Docket No. SALE 1184-4/1950US3), filed on November 18, 2017. The priority nonprovisional application is hereby incorporated by reference for all purposes;
[0005] This application incorporates by reference for all purposes US Nonprovisional Patent Application No. 15/421,016, entitled "POINTER SENTINEL MIXTURE MODELS" (Atty. Docket No. SALE 1174-4/1863US), filed on January 31, 2017;
[0006] This application incorporates by reference for all purposes US Provisional Patent Application No. 62/417,334, entitled "QUASI-RECURRENT NEURAL NETWORK" (Atty. Docket No. SALE 1174-3/1863PROV3), filed on November 4, 2016;
[0007] This application incorporates by reference for all purposes US Nonprovisional Patent Application No. 15/420,710, entitled "QUASI-RECURRENT NEURAL NETWORK" (Atty. Docket No. SALE 1180-3/1946US), filed on January 31, 2017; and
[0008] This application incorporates by reference for all purposes US Provisional Patent Application No. 62/418,075, entitled "QUASI-RECURRENT NEURAL NETWORK" (Atty. Docket No. SALE 1180-2/1946PROV2), filed on November 4, 2016.
FIELD OF THE TECHNOLOGY DISCLOSED
[0009] The technology disclosed relates to artificial intelligence type computers and digital data processing systems and corresponding data processing methods and products for emulation
of intelligence (i.e., knowledge based systems, reasoning systems, and knowledge acquisition systems); and including systems for reasoning with uncertainty (e.g., fuzzy logic systems), adaptive systems, machine learning systems, and artificial neural networks. The technology disclosed generally relates to a novel visual attention-based encoder-decoder image captioning model. One aspect of the technology disclosed relates to a novel spatial attention model for extracting spatial image features during image captioning. The spatial attention model uses current hidden state information of a decoder long short-term memory (LSTM) to guide attention, rather than using a previous hidden state or a previously emitted word. Another aspect of the technology disclosed relates to a novel adaptive attention model for image captioning that mixes visual information from a convolutional neural network (CNN) and linguistic information from an LSTM. At each timestep, the adaptive attention model automatically decides how heavily to rely on the image, as opposed to the linguistic model, to emit the next caption word. Yet another aspect of the technology disclosed relates to adding a new auxiliary sentinel gate to an LSTM architecture and producing a sentinel LSTM (Sn-LSTM). The sentinel gate produces a visual sentinel at each timestep, which is an additional representation, derived from the LSTM's memory, of long and short term visual and linguistic information.
BACKGROUND
[0010] The subject matter discussed in this section should not be assumed to be prior art merely as a result of its mention in this section. Similarly, a problem mentioned in this section or associated with the subject matter provided as background should not be assumed to have been previously recognized in the prior art. The subject matter in this section merely represents different approaches, which in and of themselves can also correspond to implementations of the claimed technology.
[0011] Image captioning is drawing increasing interest in computer vision and machine learning. Basically, it requires machines to automatically describe the content of an image using a natural language sentence. While this task seems obvious for human-beings, it is complicated for machines since it requires the language model to capture various semantic features within an image, such as objects' motions and actions. Another challenge for image captioning, especially for generative models, is that the generated output should be human-like natural sentences.
[0012] Recent successes of deep neural networks in machine translation have catalyzed the adoption of neural networks in solving image captioning problems. The idea originates from the encoder-decoder architecture in neural machine translation, where a convolutional neural network (CNN) is adopted to encode the input image into feature vectors, and a sequence
modeling approach (e.g., long short-term memory (LSTM)) decodes the feature vectors into a sequence of words.
[0013] Most recent work in image captioning relies on this structure, and leverages image guidance, attributes, region attention, or text attention as the attention guide. FIG. 2A shows an attention leading decoder that uses previous hidden state information to guide attention and generate an image caption (prior art).
[0014] Therefore, an opportunity arises to improve the performance of attention-based image captioning models.
[0015] Automatically generating captions for images has emerged as a prominent interdisciplinary research problem in both academia and industry. It can aid visually impaired users, and make it easy for users to organize and navigate through large amounts of typically unstructured visual data. In order to generate high quality captions, an image captioning model needs to incorporate fine-grained visual clues from the image. Recently, visual attention-based neural encoder-decoder models have been explored, where the attention mechanism typically produces a spatial map highlighting image regions relevant to each generated word.
[0016] Most attention models for image captioning and visual question answering attend to the image at every timestep, irrespective of which word is going to be emitted next. However, not all words in the caption have corresponding visual signals. Consider the example in FIG. 16 that shows an image and its generated caption "a white bird perched on top of a red stop sign". The words "a" and "of do not have corresponding canonical visual signals. Moreover, linguistic correlations make the visual signal unnecessary when generating words like "on" and "top" following "perched", and "sign" following "a red stop". Furthermore, training with non-visual words can lead to worse performance in generating captions because gradients from non-visual words could mislead and diminish the overall effectiveness of the visual signal in guiding the caption generation process.
[0017] Therefore, an opportunity arises to determine the importance that should be given to the target image during caption generation by an attention-based visual neural encoder-decoder model.
[0018] Deep neural networks (DNNs) have been successfully applied to many areas, including speech and vision. On natural language processing tasks, recurrent neural networks (RNNs) are widely used because of their ability to memorize long-term dependency. A problem of training deep networks, including RNNs, is gradient diminishing and explosion. This problem is apparent when training an RNN. A long short-term memory (LSTM) neural network is an extension of an RNN that solves this problem. In LSTM, a memory cell has linear dependence of its current activity and its past activity. A forget gate is used to modulate the information flow
between the past and the current activities. LSTMs also have input and output gates to modulate its input and output.
[0019] The generation of an output word in an LSTM depends on the input at the current timestep and the previous hidden state. However, LSTMs have been configured to condition their output on auxiliary inputs, in addition to the current input and the previous hidden state. For example, in image captioning models, LSTMs incorporate external visual information provided by image features to influence linguistic choices at different stages. As image caption generators, LSTMs take as input not only the most recently emitted caption word and the previous hidden state, but also regional features of the image being captioned (usually derived from the activation values of a hidden layer in a convolutional neural network (CNN)). The LSTMs are then trained to vectorize the image-caption mixture in such a way that this vector can be used to predict the next caption word.
[0020] Other image captioning models use external semantic information extracted from the image as an auxiliary input to each LSTM gate. Yet other text summarization and question answering models exist in which a textual encoding of a document or a question produced by a first LSTM is provided as an auxiliary input to a second LSTM.
[0021] The auxiliary input carries auxiliary information, which can be visual or textual. It can be generated externally by another LSTM, or derived externally from a hidden state of another LSTM. It can also be provided by an external source such as a CNN, a multilayer perceptron, an attention network, or another LSTM. The auxiliary information can be fed to the LSTM just once at the initial timestep or fed successively at each timestep.
[0022] However, feeding uncontrolled auxiliary information to the LSTM can yield inferior results because the LSTM can exploit noise from the auxiliary information and overfit more easily. To address this problem, we introduce an additional control gate into the LSTM that gates and guides the use of auxiliary information for next output generation.
[0023] Therefore, an opportunity arises to extend the LSTM architecture to include an auxiliary sentinel gate that determines the importance that should be given to auxiliary information stored in the LSTM for next output generation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] In the drawings, like reference characters generally refer to like parts throughout the different views. Also, the drawings are not necessarily to scale, with an emphasis instead generally being placed upon illustrating the principles of the technology disclosed. In the following description, various implementations of the technology disclosed are described with reference to the following drawings, in which:
[0025] FIG. 1 illustrates an encoder that processes an image through a convolutional neural network (abbreviated CNN) and produces image features for regions of the image.
[0026] FIG.2A shows an attention leading decoder that uses previous hidden state information to guide attention and generate an image caption (prior art).
[0027] FIG.2B shows the disclosed attention lagging decoder which uses current hidden state information to guide attention and generate an image caption.
[0028] FIG.3A depicts a global image feature generator that generates a global image feature for an image by combining image features produced by the CNN encoder of FIG. 1.
[0029] FIG.3B is a word embedder that vectorizes words in a high-dimensional embedding space.
[0030] FIG.3C is an input preparer that prepares and provides input to a decoder.
[0031] FIG.4 depicts one implementation of modules of an attender that is part of the spatial attention model disclosed in FIG. 6.
[0032] FIG. 5 shows one implementation of modules of an emitter that is used in various aspects of the technology disclosed. Emitter comprises a feed-forward neural network (also referred to herein as multilayer perceptron (MLP)), a vocabulary softmax (also referred to herein as vocabulary probability mass producer), and a word embedder (also referred to herein as embedder).
[0033] FIG. 6 illustrates the disclosed spatial attention model for image captioning rolled across multiple timesteps. The attention lagging decoder of FIG.2B is embodied in and implemented by the spatial attention model.
[0034] FIG. 7 depicts one implementation of image captioning using spatial attention applied by the spatial attention model of FIG. 6.
[0035] FIG. 8 illustrates one implementation of the disclosed sentinel LSTM (Sn-LSTM) that comprises an auxiliary sentinel gate which produces a sentinel state.
[0036] FIG. 9 shows one implementation of modules of a recurrent neural network
(abbreviated RNN) that implements the Sn-LSTM of FIG. 8.
[0037] FIG. 10 depicts the disclosed adaptive attention model for image captioning that automatically decides how heavily to rely on visual information, as opposed to linguistic information, to emit a next caption word. The sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
[0038] FIG. 11 depicts one implementation of modules of an adaptive attender that is part of the adaptive attention model disclosed in FIG. 12. The adaptive attender comprises a spatial attender, an extractor, a sentinel gate mass determiner, a sentinel gate mass softmax, and a mixer (also referred to herein as an adaptive context vector producer or an adaptive context producer).
The spatial attender in turn comprises an adaptive comparator, an adaptive attender softmax, and an adaptive convex combination accumulator.
[0039] FIG. 12 shows the disclosed adaptive attention model for image captioning rolled across multiple timesteps. The sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
[0040] FIG. 13 illustrates one implementation of image captioning using adaptive attention applied by the adaptive attention model of FIG. 12.
[0041] FIG. 14 is one implementation of the disclosed visually hermetic decoder that processes purely linguistic information and produces captions for an image.
[0042] FIG. 15 shows a spatial attention model that uses the visually hermetic decoder of
FIG. 14 for image captioning. In FIG. 15, the spatial attention model is rolled across multiple timesteps.
[0043] FIG. 16 illustrates one example of image captioning using the technology disclosed.
[0044] FIG. 17 shows visualization of some example image captions and image/spatial attention maps generated using the technology disclosed.
[0045] FIG. 18 depicts visualization of some example image captions, word-wise visual grounding probabilities, and corresponding image/spatial attention maps generated using the technology disclosed.
[0046] FIG. 19 illustrates visualization of some other example image captions, word-wise visual grounding probabilities, and corresponding image spatial attention maps generated using the technology disclosed.
[0047] FIG.20 is an example rank-probability plot that illustrates performance of the technology disclosed on the COCO (common objects in context) dataset.
[0048] FIG. 21 is another example rank-probability plot that illustrates performance of the technology disclosed on the Flickr30k dataset.
[0049] FIG.22 is an example graph that shows localization accuracy of the technology disclosed on the COCO dataset. The blue colored bars show localization accuracy of the spatial attention model and the red colored bars show localization accuracy of the adaptive attention model.
[0050] FIG. 23 is a table that shows performance of the technology disclosed on the Flickr30k and COCO datasets based on various natural language processing metrics, including BLEU (bilingual evaluation understudy), METEOR (metric for evaluation of translation with explicit ordering), CIDEr (consensus-based image description evaluation), ROUGE-L (recall-oriented understudy for gisting evaluation-longest common subsequence), and SPICE (semantic propositional image caption evaluation).
[0051] FIG. 24 is a leaderboard of the published state-of-the-art that shows that the technology disclosed sets the new state-of-the-art by a significant margin.
[0052] FIG.25 is a simplified block diagram of a computer system that can be used to implement the technology disclosed.
DETAILED DESCRIPTION
[0053] The following discussion is presented to enable any person skilled in the art to make and use the technology disclosed, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed implementations will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein.
[0054] What follows is a discussion of the neural encoder-decoder framework for image captioning, followed by the disclosed attention-based image captioning models.
Encoder-Decoder Model for Image Captioning
[0055] Attention-based visual neural encoder-decoder models use a convolutional neural network (CNN) to encode an input image into feature vectors and a long short-term memory network (LSTM) to decode the feature vectors into a sequence of words. The LSTM relies on an attention mechanism that produces a spatial map that highlights image regions relevant for generating words. Attention-based models leverage either previous hidden state information of the LSTM or previously emitted caption word(s) as input to the attention mechanism.
[0056] Given an image and the corresponding caption, the encoder-decoder model directly maximizes the following objective:

θ* = arg max_θ Σ_(I, y) log p(y | I; θ)    ... (1)

[0057] In the above equation (1), θ are the parameters of the model, I is the image, and y = {y_1, . . . y_T} is the corresponding caption. Using the chain rule, the log likelihood of the joint probability distribution can be decomposed into the following ordered conditionals:

log p(y) = Σ_{t=1}^{T} log p(y_t | y_1, . . . y_{t-1}, I)    ... (2)
[0058] As is evident from the above equation (2), the dependency on model parameters is dropped for convenience.
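For concreteness, the decomposition in equation (2) is what a typical training loop scores under teacher forcing: the decoder's per-timestep word distributions are evaluated at the ground-truth caption words and the log probabilities are summed. The sketch below illustrates this in PyTorch; the tensor shapes, vocabulary size, and variable names are illustrative assumptions, not part of the disclosure.

```python
import torch
import torch.nn.functional as F

def caption_log_likelihood(word_logits, target_words):
    """Sum of log p(y_t | y_1..t-1, I) over timesteps, as in equation (2).

    word_logits:  (T, vocab_size) unnormalized scores emitted by the decoder
    target_words: (T,) indices of the ground-truth caption words
    """
    log_probs = F.log_softmax(word_logits, dim=-1)            # log p(. | y_<t, I)
    picked = log_probs.gather(1, target_words.unsqueeze(1))   # log p(y_t | y_<t, I)
    return picked.sum()                                       # log p(y | I)

# Toy decoder outputs for a 12-word caption over a 10,000-word vocabulary.
logits = torch.randn(12, 10000)
targets = torch.randint(0, 10000, (12,))
loss = -caption_log_likelihood(logits, targets)  # training minimizes this
```

Training then minimizes the negative of this quantity, i.e., the standard sequence cross-entropy loss.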
[0059] In an encoder-decoder framework that uses a recurrent neural network (RNN) as the decoder, each conditional probability is modeled as:

log p(y_t | y_1, . . . y_{t-1}, I) = f(h_t, c_t)    ... (3)

[0060] In the above equation (3), f is a nonlinear function that outputs the probability of y_t, c_t is the visual context vector at time t extracted from image I, and h_t is the current hidden state of the RNN at time t.
[0061] In one implementation, the technology disclosed uses a long short-term memory network (LSTM) as the RNN. LSTMs are gated variants of a vanilla RNN and have
demonstrated state-of-the-art performance on a variety of sequence modeling tasks. Current hidden state h_t of the LSTM is modeled as:

h_t = LSTM(x_t, h_{t-1}, m_{t-1})    ... (4)

[0062] In the above equation (4), x_t is the current input at time t and m_{t-1} is the previous memory cell state at time t - 1.
[0063] Context vector c_t is an important factor in the neural encoder-decoder framework because it provides visual evidence for caption generation. Different ways of modeling the context vector fall into two categories: vanilla encoder-decoder and attention-based encoder-decoder frameworks. First, in the vanilla framework, context vector c_t is only dependent on a convolutional neural network (CNN) that serves as the encoder. The input image I is fed into the CNN, which extracts the last fully connected layer as a global image feature. Across generated words, the context vector c_t remains constant and does not depend on the hidden state of the decoder.
[0064] Second, in the attention-based framework, context vector ct is dependent on both the encoder and the decoder. At time t , based on the hidden state, the decoder attends to specific regions of the image and determines context vector ct using the spatial image features from a convolution layer of a CNN. Attention models can significantly improve the performance of image captioning.
Spatial Attention Model
[0065] We disclose a novel spatial attention model for image captioning that is different from previous work in at least two aspects. First, our model uses the current hidden state information of the decoder LSTM to guide attention, instead of using the previous hidden state or a previously emitted word. Second, our model supplies the LSTM with a time-invariant global
image representation, instead of a progression by timestep of attention-variant image
representations.
[0066] The attention mechanism of our model uses current instead of prior hidden state information to guide attention, which requires a different structure and different processing steps. The current hidden state information is used to guide attention to image regions and generate, in a timestep, an attention-variant image representation. The current hidden state information is computed at each timestep by the decoder LSTM, using a current input and previous hidden state information. Information from the LSTM, the current hidden state, is fed to the attention mechanism, instead of output of the attention mechanism being fed to the LSTM.
[0067] The current input combines word(s) previously emitted with a time-invariant global image representation, which is determined from the encoder CNN's image features. The first current input word fed to decoder LSTM is a special start (<start>) token. The global image representation can be fed to the LSTM once, in a first timestep, or repeatedly at successive timesteps.
[0068] The spatial attention model determines context vector c_t that is defined as:

c_t = g(V, h_t)    ... (5)

[0069] In the above equation (5), g is the attention function which is embodied in and implemented by the attender of FIG. 4, and V = [v_1, . . . v_k], v_i ∈ R^d comprises the image features v_1, . . . v_k produced by the CNN encoder of FIG. 1. Each image feature is a d dimensional representation corresponding to a part or region of the image produced by the CNN encoder. h_t is the current hidden state of the LSTM decoder at time t, shown in FIG. 2B.
[0070] Given the image features V ∈ R^(d x k) produced by the CNN encoder and current hidden state h_t ∈ R^d of the LSTM decoder, the disclosed spatial attention model feeds them through a comparator (FIG. 4) followed by an attender softmax (FIG. 4) to generate the attention distribution over the k regions of the image:

z_t = w_h^T tanh(W_v V + (W_g h_t) 1^T)    ... (6)

α_t = softmax(z_t)    ... (7)

[0071] In the above equations (6) and (7), 1 is a unity vector with all elements set to 1, and W_v, W_g and w_h are weight parameters that are learned. z_t = [λ_1, . . . λ_k] denotes the unnormalized attention values over the image features v_1, . . . v_k in V, and α_t = [α_1, . . . α_k] denotes an attention map that comprises the attention weights (also referred to herein as the attention probability masses). As shown in FIG. 4, the comparator comprises a single layer neural network and a nonlinearity layer to determine z_t.
[0072] Based on the attention distribution, the context vector c_t is obtained by a convex combination accumulator as:

c_t = Σ_{i=1}^{k} α_ti v_ti    ... (8)

[0073] Following equation (8), c_t and h_t are combined to predict the next word y_t as in equation (3) using an emitter.
[0074] As shown in FIG.4, the attender comprises the comparator, the attender softmax (also referred to herein as attention probability mass producer), and the convex combination accumulator (also referred to herein as context vector producer or context producer).
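A minimal sketch of such an attender follows, assuming PyTorch and the reconstructed equations (6)-(8); the module and parameter names (w_v, w_g, w_h) mirror the weight symbols above and are illustrative, not prescribed by the disclosure.

```python
import torch
import torch.nn as nn

class SpatialAttender(nn.Module):
    """Comparator + attender softmax + convex combination accumulator (eqs. 6-8)."""

    def __init__(self, feat_dim, hidden_dim, k_regions):
        super().__init__()
        self.w_v = nn.Linear(feat_dim, k_regions, bias=False)    # W_v
        self.w_g = nn.Linear(hidden_dim, k_regions, bias=False)  # W_g
        self.w_h = nn.Linear(k_regions, 1, bias=False)           # w_h

    def forward(self, V, h_t):
        # V: (batch, k, feat_dim) image features; h_t: (batch, hidden_dim)
        z_t = self.w_h(
            torch.tanh(self.w_v(V) + self.w_g(h_t).unsqueeze(1))
        ).squeeze(-1)                                    # (batch, k) attention values
        alpha_t = torch.softmax(z_t, dim=-1)             # attention probability masses
        c_t = (alpha_t.unsqueeze(-1) * V).sum(dim=1)     # context vector (convex combination)
        return c_t, alpha_t
```

Given image features V of shape (batch, k, d) and the current decoder hidden state h_t, the module returns the context vector c_t and the attention map α_t.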
Encoder-CNN
[0075] FIG. 1 illustrates an encoder that processes an image through a convolutional neural network (abbreviated CNN) and produces the image features V = [v_1, . . . v_k], v_i ∈ R^d for regions of the image. In one implementation, the encoder CNN is a pretrained ResNet. In such an implementation, the image features V = [v_1, . . . v_k], v_i ∈ R^d are spatial feature outputs of the last convolutional layer of the ResNet. In one implementation, the image features V = [v_1, . . . v_k], v_i ∈ R^d have a dimension of 2048 x 7 x 7. In one implementation, the technology disclosed uses A = {a_1, . . . a_k}, a_i ∈ R^2048 to represent the spatial CNN features at each of the k grid locations. Following this, in some implementations, a global image feature generator produces a global image feature, as discussed below.
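A hedged sketch of this feature extraction with a torchvision ResNet follows; dropping the final average-pooling and fully connected layers yields the 2048 x 7 x 7 spatial map described above. The preprocessing constants, weight identifier, and file name are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Keep everything up to (and including) the last convolutional block; drop the
# average pooling and the fully connected classification layer.
resnet = models.resnet152(weights="IMAGENET1K_V1")   # pretrained=True on older torchvision
encoder = nn.Sequential(*list(resnet.children())[:-2]).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)  # (1, 3, 224, 224)
with torch.no_grad():
    feats = encoder(image)                 # (1, 2048, 7, 7) spatial CNN features
A = feats.flatten(2).transpose(1, 2)       # (1, k=49, 2048): a_1 ... a_k per grid location
```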
Attention Lagging Decoder-LSTM
Different from FIG. 2A, FIG. 2B shows the disclosed attention lagging decoder which uses current hidden state information h_t to guide attention and generate an image caption.
The attention lagging decoder uses current hidden state information h_t to analyze where to look in the image, i.e., for generating the context vector c_t. The decoder then combines both sources of information h_t and c_t to predict the next word. The generated context vector c_t embodies the residual visual information of current hidden state h_t, which diminishes the uncertainty or complements the informativeness of the current hidden state for next word prediction. Since the decoder is recurrent, LSTM-based and operates sequentially, the current hidden state h_t embodies the previous hidden state h_{t-1} and the current input x_t, which form the current visual and linguistic context. The attention lagging decoder attends to the image using this current visual and linguistic context rather than stale, prior context (FIG. 2A). In other words, the image
is attended after the current visual and linguistic context is determined by the decoder, i.e., the attention lags the decoder. This produces more accurate image captions.
Global Image Feature Generator
[0077] FIG. 3A depicts a global image feature generator that generates a global image feature for an image by combining image features produced by the CNN encoder of FIG. 1. The global image feature generator first produces a preliminary global image feature as follows:

a^g = (1/k) Σ_{i=1}^{k} a_i    ... (9)

[0078] In the above equation (9), a^g is the preliminary global image feature that is determined by averaging the image features produced by the CNN encoder. For modeling convenience, the global image feature generator uses a single layer perceptron with rectifier activation function to transform the image feature vectors into new vectors with dimension d:

v_i = ReLU(W_a a_i)    ... (10)

v^g = ReLU(W_b a^g)    ... (11)

[0079] In the above equations (10) and (11), W_a and W_b are the weight parameters and v^g is the global image feature. Global image feature v^g is time-invariant because it is not sequentially or recurrently produced, but instead determined from non-recurrent, convolved image features. The transformed spatial image features v_i form the image features V = [v_1, . . . v_k], v_i ∈ R^d.
Transformation of the image features is embodied in and implemented by the image feature rectifier of the global image feature generator, according to one implementation. Transformation of the preliminary global image feature is embodied in and implemented by the global image feature rectifier of the global image feature generator, according to one implementation.
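The following sketch implements equations (9)-(11) as a single module; the embedding dimension of 512 is an assumed hyperparameter, not specified in this passage.

```python
import torch
import torch.nn as nn

class GlobalImageFeatureGenerator(nn.Module):
    """Implements equations (9)-(11): average, then single-layer perceptrons with ReLU."""

    def __init__(self, cnn_dim=2048, embed_dim=512):
        super().__init__()
        self.w_a = nn.Linear(cnn_dim, embed_dim)   # W_a: image feature rectifier
        self.w_b = nn.Linear(cnn_dim, embed_dim)   # W_b: global image feature rectifier

    def forward(self, A):
        # A: (batch, k, cnn_dim) spatial CNN features a_1 ... a_k
        a_g = A.mean(dim=1)                        # eq. (9): preliminary global image feature
        V = torch.relu(self.w_a(A))                # eq. (10): v_i = ReLU(W_a a_i)
        v_g = torch.relu(self.w_b(a_g))            # eq. (11): v^g = ReLU(W_b a^g)
        return V, v_g
```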
Word Embedder
[0080] FIG. 3B is a word embedder that vectorizes words in a high-dimensional embedding space. The technology disclosed uses the word embedder to generate word embeddings of vocabulary words predicted by the decoder. w_t denotes the word embedding of a vocabulary word predicted by the decoder at time t, and w_{t-1} denotes the word embedding of a vocabulary word predicted by the decoder at time t - 1. In one implementation, the word embedder generates word embeddings w_{t-1} of dimensionality d using an embedding matrix E ∈ R^(d x v), where v represents the size of the vocabulary. In another implementation, the word embedder first transforms a word into a one-hot encoding and then converts it into a continuous representation using the embedding matrix E ∈ R^(d x v). In yet another implementation, the word embedder initializes word embeddings using pretrained word embedding models like GloVe and word2vec and obtains a fixed word embedding of each word in the vocabulary. In other implementations, the word embedder generates character embeddings and/or phrase embeddings.
Input Preparer
[0081] FIG. 3C is an input preparer that prepares and provides input to a decoder. At each timestep, the input preparer concatenates the word embedding vector w_{t-1} (predicted by the decoder in an immediately previous timestep) with the global image feature vector v^g. The concatenation forms the input x_t = [w_{t-1}; v^g] that is fed to the decoder at a current timestep t, where w_{t-1} denotes the most recently emitted caption word. The input preparer is also referred to herein as the concatenator.
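A minimal sketch of the word embedder and input preparer together follows, assuming PyTorch; the embedding dimension is an assumption, and pretrained vectors (e.g., GloVe) could be copied into the embedding weight if a fixed embedding is desired.

```python
import torch
import torch.nn as nn

class InputPreparer(nn.Module):
    """Concatenates the previous word's embedding with the global image feature v^g."""

    def __init__(self, vocab_size, embed_dim=512):
        super().__init__()
        self.embedder = nn.Embedding(vocab_size, embed_dim)   # word embedder

    def forward(self, prev_word_ids, v_g):
        # prev_word_ids: (batch,) indices of the most recently emitted words (or <start>)
        # v_g:           (batch, embed_dim) global image feature
        w_prev = self.embedder(prev_word_ids)                 # (batch, embed_dim)
        x_t = torch.cat([w_prev, v_g], dim=-1)                # (batch, 2 * embed_dim)
        return x_t
```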
Sentinel LSTM (Sn-LSTM)
[0082] A long short-term memory (LSTM) is a cell in a neural network that is repeatedly exercised in timesteps to produce sequential outputs from sequential inputs. The output is often referred to as a hidden state, which should not be confused with the cell's memory. Inputs are a hidden state and memory from a prior timestep and a current input. The cell has an input activation function, memory, and gates. The input activation function maps the input into a range, such as -1 to 1 for a tanh activation function. The gates determine weights applied to updating the memory and generating a hidden state output result from the memory. The gates are a forget gate, an input gate, and an output gate. The forget gate attenuates the memory. The input gate mixes activated inputs with the attenuated memory. The output gate controls hidden state output from the memory. The hidden state output can directly label an input or it can be processed by another component to emit a word or other label or generate a probability distribution over labels.
[0083] An auxiliary input can be added to the LSTM that introduces a different kind of information than the current input, in a sense orthogonal to current input. Adding such a different kind of auxiliary input can lead to overfitting and other training artifacts. The technology disclosed adds a new gate to the LSTM cell architecture that produces a second sentinel state output from the memory, in addition to the hidden state output. This sentinel state output is used to control mixing between different neural network processing models in a post-LSTM component. A visual sentinel, for instance, controls mixing between analysis of visual features from a CNN and of word sequences from a predictive language model. The new gate that produces the sentinel state output is called "auxiliary sentinel gate".
[0084] The auxiliary input contributes to both accumulated auxiliary information in the LSTM memory and to the sentinel output. The sentinel state output encodes parts of the accumulated auxiliary information that are most useful for next output prediction. The sentinel gate conditions current input, including the previous hidden state and the auxiliary information, and combines the conditioned input with the updated memory, to produce the sentinel state output. An LSTM that includes the auxiliary sentinel gate is referred to herein as a "sentinel LSTM (Sn-LSTM)".
[0085] Also, prior to being accumulated in the Sn-LSTM, the auxiliary information is often subjected to a "tanh" (hyperbolic tangent) function that produces output in the range of -1 and 1 (e.g., tanh function following the fully-connected layer of a CNN). To be consistent with the output ranges of the auxiliary information, the auxiliary sentinel gate gates the pointwise tanh of the Sn-LSTM' s memory cell. Thus, tanh is selected as the non-linearity function applied to the Sn-LSTM's memory cell because it matches the form of the stored auxiliary information.
[0086] FIG. 8 illustrates one implementation of the disclosed sentinel LSTM (Sn-LSTM) that comprises an auxiliary sentinel gate which produces a sentinel state or visual sentinel. The Sn-LSTM receives inputs at each of a plurality of timesteps. The inputs include at least an input for a current timestep x_t, a hidden state from a previous timestep h_{t-1}, and an auxiliary input for the current timestep a_t. The Sn-LSTM can run on at least one of the numerous parallel processors.
[0087] In some implementations, the auxiliary input a_t is not separately provided, but instead encoded as auxiliary information in the previous hidden state and/or the input x_t (such as the global image feature v^g).
[0088] The auxiliary input a_t can be visual input comprising image data and the input can be a text embedding of a most recently emitted word and/or character. The auxiliary input a_t can be a text encoding from another long short-term memory network (abbreviated LSTM) of an input document and the input can be a text embedding of a most recently emitted word and/or character. The auxiliary input a_t can be a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most recently emitted word and/or character. The auxiliary input a_t can be a prediction derived from a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most recently emitted word and/or character. The auxiliary input a_t can be an output of a convolutional neural network (abbreviated CNN). The auxiliary input a_t can be an output of an attention network.
[0089] The Sn-LSTM generates outputs at each of the plurality of timesteps by processing the inputs through a plurality of gates. The gates include at least an input gate, a forget gate, an output gate, and an auxiliary sentinel gate. Each of the gates can run on at least one of the numerous parallel processors.
[0090] The input gate controls how much of the current input x_t and the previous hidden state h_{t-1} will enter the current memory cell state and is represented as:

i_t = σ(W_xi x_t + W_hi h_{t-1})

[0091] The forget gate operates on the current memory cell state m_t and the previous memory cell state m_{t-1} and decides whether to erase (set to zero) or keep individual components of the memory cell and is represented as:

f_t = σ(W_xf x_t + W_hf h_{t-1})

[0092] The output gate controls how much of the current memory cell state is exposed as the current hidden state and is represented as:

o_t = σ(W_xo x_t + W_ho h_{t-1})

[0093] The Sn-LSTM can also include an activation gate (also referred to as cell update gate or input transformation gate) that transforms the current input x_t and previous hidden state h_{t-1} to be taken into account into the current memory cell state and is represented as:

g_t = tanh(W_xg x_t + W_hg h_{t-1})

[0094] The Sn-LSTM can also include a current hidden state producer that outputs the current hidden state h_t scaled by a tanh (squashed) transformation of the current memory cell state m_t and is represented as:

h_t = o_t ⊙ tanh(m_t)

[0095] In the above equation, ⊙ represents the element-wise product.

[0096] A memory cell updater (FIG. 9) updates the memory cell of the Sn-LSTM from the previous memory cell state m_{t-1} to the current memory cell state m_t as follows:

m_t = f_t ⊙ m_{t-1} + i_t ⊙ g_t
[0097] As discussed above, the auxiliary sentinel gate produces a sentinel state or visual sentinel which is a latent representation of what the Sn-LSTM decoder already knows. The Sn- LSTM decoder's memory stores both long and short term visual and linguistic information. The adaptive attention model learns to extract a new component from the Sn-LSTM that the model can fall back on when it chooses to not attend to the image. This new component is called the visual sentinel. And the gate that decides whether to attend to the image or to the visual sentinel is the auxiliary sentinel gate.
[0098] The visual and linguistic contextual information is stored in the Sn-LSTM decoder's memory cell. We use the visual sentinel vector s_t to modulate this information by:

aux_t = σ(W_x x_t + W_h h_{t-1})

s_t = aux_t ⊙ tanh(m_t)

[0099] In the above equations, W_x and W_h are weight parameters that are learned, x_t is the input to the Sn-LSTM at timestep t, and aux_t is the auxiliary sentinel gate applied to the current memory cell state m_t. ⊙ represents the element-wise product and σ is the logistic sigmoid activation.
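The sketch below renders the Sn-LSTM update using a standard LSTM cell for the input, forget, output, and activation gates, and adds the auxiliary sentinel gate of the two equations above. Any auxiliary input a_t is assumed to be folded into x_t (consistent with paragraph [0087]); this is an illustrative rendering under those assumptions, not the only possible parameterization.

```python
import torch
import torch.nn as nn

class SentinelLSTMCell(nn.Module):
    """LSTM cell extended with an auxiliary sentinel gate that emits a sentinel
    state s_t alongside the hidden state h_t and memory m_t."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTMCell(input_dim, hidden_dim)             # input/forget/output/activation gates
        self.w_x = nn.Linear(input_dim, hidden_dim, bias=False)    # W_x
        self.w_h = nn.Linear(hidden_dim, hidden_dim, bias=False)   # W_h

    def forward(self, x_t, state):
        h_prev, m_prev = state
        h_t, m_t = self.lstm(x_t, (h_prev, m_prev))                # usual LSTM update
        aux_t = torch.sigmoid(self.w_x(x_t) + self.w_h(h_prev))    # auxiliary sentinel gate
        s_t = aux_t * torch.tanh(m_t)                              # sentinel state / visual sentinel
        return h_t, m_t, s_t
```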
[00100] In an attention-based encoder-decoder text summarization model, the Sn-LSTM can be used as a decoder that receives auxiliary information from another encoder LSTM. The encoder LSTM can process an input document to produce a document encoding. The document encoding or an alternative representation of the document encoding can be fed to the Sn-LSTM as auxiliary information. Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the document encoding (or its alternative representation) are most important at a current timestep, considering a previously generated summary word and a previous hidden state. The important parts of the document encoding (or its alternative representation) can then be encoded into the sentinel state. The sentinel state can be used to generate the next summary word.
[00101] In an attention-based encoder-decoder question answering model, the Sn-LSTM can be used as a decoder that receives auxiliary information from another encoder LSTM. The encoder LSTM can process an input question to produce a question encoding. The question encoding or an alternative representation of the question encoding can be fed to the Sn-LSTM as auxiliary information. Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the question encoding (or its alternative representation) are most important at a current timestep, considering a previously generated answer word and a previous hidden state. The important parts of the question encoding (or its alternative representation) can then be encoded into the sentinel state. The sentinel state can be used to generate the next answer word.
[00102] In an attention-based encoder-decoder machine translation model, the Sn-LSTM can be used as a decoder that receives auxiliary information from another encoder LSTM. The encoder LSTM can process a source language sequence to produce a source encoding. The source encoding or an alternative representation of the source encoding can be fed to the Sn- LSTM as auxiliary information. Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the source encoding (or its alternative representation) are most important at a current timestep, considering a previously generated translated word and a previous hidden state. The important parts of the source encoding (or its alternative representation) can then be encoded into the sentinel state. The sentinel state can be used to generate the next translated word.
[00103] In an attention-based encoder-decoder video captioning model, the Sn-LSTM can be used as a decoder that receives auxiliary information from an encoder comprising a CNN and an LSTM. The encoder can process video frames of a video to produce a video encoding. The video encoding or an alternative representation of the video encoding can be fed to the Sn-LSTM as auxiliary information. Sn-LSTM can use its auxiliary sentinel gate to determine which parts of the video encoding (or its alternative representation) are most important at a current timestep, considering a previously generated caption word and a previous hidden state. The important parts of the video encoding (or its alternative representation) can then be encoded into the sentinel state. The sentinel state can be used to generate the next caption word.
[00104] In an attention-based encoder-decoder image captioning model, the Sn-LSTM can be used as a decoder that receives auxiliary information from an encoder CNN. The encoder can process an input image to produce an image encoding. The image encoding or an alternative representation of the image encoding can be fed to the Sn-LSTM as auxiliary information. Sn- LSTM can use its auxiliary sentinel gate to determine which parts of the image encoding (or its alternative representation) are most important at a current timestep, considering a previously generated caption word and a previous hidden state. The important parts of the image encoding (or its alternative representation) can then be encoded into the sentinel state. The sentinel state can be used to generate the next caption word.
Adaptive Attention Model
[00105] As discussed above, a long short-term memory (LSTM) decoder can be extended to generate image captions by attending to regions or features of a target image and conditioning word predictions on the attended image features. However, attending to the image is only half of the story; knowing when to look is the other half. That is, not all caption words correspond to visual signals; some words, such as stop words and linguistically correlated words, can be better inferred from textual context.
[00106] Existing attention-based visual neural encoder-decoder models force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as "the" and "of". Other words that seem visual can often be predicted reliably by the linguistic model, e.g., "sign" after "behind a red stop" or "phone" following "talking on a cell". If the decoder needs to generate the compound word "stop sign" as caption, then only "stop" requires access to the image and "sign" can be deduced linguistically. Our technology guides use of visual and linguistic information.
[00107] To overcome the above limitations, we disclose a novel adaptive attention model for image captioning that mixes visual information from a convolutional neural network (CNN) and linguistic information from an LSTM. At each timestep, our adaptive attention encoder-decoder framework can automatically decide how heavily to rely on the image, as opposed to the linguistic model, to emit the next caption word.
[00108] FIG. 10 depicts the disclosed adaptive attention model for image captioning that automatically decides how heavily to rely on visual information, as opposed to linguistic information, to emit a next caption word. The sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder.
[00109] As discussed above, our model adds a new auxiliary sentinel gate to the LSTM architecture. The sentinel gate produces a so-called visual sentinel/sentinel state s_t at each timestep, which is an additional representation, derived from the Sn-LSTM's memory, of long and short term visual and linguistic information. The visual sentinel s_t encodes information that can be relied on by the linguistic model without reference to the visual information from the CNN. The visual sentinel s_t is used, in combination with the current hidden state from the Sn-LSTM, to generate a sentinel gate mass/gate probability mass β_t that controls mixing of image and linguistic context.
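A sketch of the adaptive attender (spatial attender, sentinel gate mass, and mixer of FIG. 11) follows. It assumes the projected image features, hidden state, and visual sentinel share one dimensionality, and it shares w_h and W_g between the image attention values and the sentinel gate value, which is one reasonable reading of the model; the exact parameter sharing is an assumption, not dictated by this passage.

```python
import torch
import torch.nn as nn

class AdaptiveAttender(nn.Module):
    """Spatial attender + sentinel gate mass + mixer producing the adaptive
    context vector c_hat_t = beta_t * s_t + (1 - beta_t) * c_t."""

    def __init__(self, dim, k_regions):
        super().__init__()
        self.w_v = nn.Linear(dim, k_regions, bias=False)
        self.w_g = nn.Linear(dim, k_regions, bias=False)
        self.w_s = nn.Linear(dim, k_regions, bias=False)
        self.w_h = nn.Linear(k_regions, 1, bias=False)

    def forward(self, V, h_t, s_t):
        # V: (batch, k, dim) projected image features; h_t, s_t: (batch, dim)
        g = self.w_g(h_t).unsqueeze(1)                                     # (batch, 1, k)
        z_t = self.w_h(torch.tanh(self.w_v(V) + g)).squeeze(-1)           # (batch, k) attention values
        eta_t = self.w_h(torch.tanh(self.w_s(s_t).unsqueeze(1) + g)).squeeze(-1)  # (batch, 1) gate value
        alpha_hat = torch.softmax(torch.cat([z_t, eta_t], dim=-1), dim=-1)        # (batch, k + 1)
        alpha_t, beta_t = alpha_hat[:, :-1], alpha_hat[:, -1:]             # masses, sentinel gate mass
        c_t = (alpha_t.unsqueeze(-1) * V).sum(dim=1)                       # image context vector
        c_hat_t = beta_t * s_t + (1.0 - beta_t) * c_t                      # adaptive context vector
        return c_hat_t, alpha_t, beta_t
```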
[00110] For example, as illustrated in FIG. 16, our model learns to attend to the image more when generating the words "white", "bird", "red" and "stop", and relies more on the visual sentinel when generating the words "top", "of" and "sign".
Visually Hermetic Decoder
[00111] FIG. 14 is one implementation of the disclosed visually hermetic decoder that processes purely linguistic information and produces captions for an image. FIG. 15 shows a spatial attention model that uses the visually hermetic decoder of FIG. 14 for image captioning. In FIG. 15, the spatial attention model is rolled across multiple timesteps. Alternatively, a visually hermetic decoder can be used that processes purely linguistic information w_{t-1}, which is not mixed with image data during image captioning. This alternative visually hermetic decoder does not receive the global image representation as input. That is, the current input to the visually hermetic decoder is just its most recently emitted caption word and the initial input is only the <start> token. A visually hermetic decoder can be implemented as an LSTM, a gated recurrent unit (GRU), or a quasi-recurrent neural network (QRNN). Words, with this alternative decoder, are still emitted after application of the attention mechanism.
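A minimal sketch of a visually hermetic decoder as an LSTM follows; its only input is the embedding of the previously emitted word, and the attender and emitter consume its hidden state downstream. Dimensions and names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class VisuallyHermeticDecoder(nn.Module):
    """LSTM decoder whose input is only the embedding of the previously emitted
    word; it never sees the global image feature or the image context vector."""

    def __init__(self, vocab_size, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.embedder = nn.Embedding(vocab_size, embed_dim)
        self.cell = nn.LSTMCell(embed_dim, hidden_dim)

    def forward(self, prev_word_ids, state=None):
        x_t = self.embedder(prev_word_ids)      # purely linguistic input
        h_t, m_t = self.cell(x_t, state)
        return h_t, (h_t, m_t)                  # h_t feeds the attender and emitter
```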
Weakly-Supervised Localization
[00112] The technology disclosed also provides a system and method of evaluating performance of an image captioning model. The technology disclosed generates a spatial attention map of attention values for mixing image region vectors of an image using a convolutional neural network (abbreviated CNN) encoder and a long short-term memory (LSTM) decoder and produces a caption word output based on the spatial attention map. Then, the technology disclosed segments regions of the image above a threshold attention value into a segmentation map. Then, the technology disclosed projects a bounding box over the image that covers a largest connected image component in the segmentation map. Then, the technology disclosed determines an intersection over union (abbreviated IOU) of the projected bounding box and a ground truth bounding box. Then, the technology disclosed determines a localization accuracy of the spatial attention map based on the calculated IOU.
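The sketch below walks through this evaluation for a single generated word: upsample and threshold the 7 x 7 attention map, take the largest connected component of the segmentation map, fit a bounding box, and compute the IOU with the ground truth box. The nearest-neighbor upsampling, the 0.5 threshold, and normalization by the map's maximum are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def localization_iou(attention_map, gt_box, image_size, threshold=0.5):
    """Weakly-supervised localization score for one generated word.

    attention_map: (7, 7) spatial attention weights for the word
    gt_box:        ground truth box (x1, y1, x2, y2) in image coordinates
    image_size:    (width, height) of the original image
    """
    width, height = image_size
    # Upsample the attention map to image resolution and keep cells above threshold.
    scaled = attention_map / attention_map.max()
    mask = np.kron(scaled, np.ones((height // 7, width // 7))) > threshold
    # Largest connected component of the segmentation map.
    labels, num = ndimage.label(mask)
    if num == 0:
        return 0.0
    largest = labels == (np.argmax(np.bincount(labels.ravel())[1:]) + 1)
    ys, xs = np.where(largest)
    box = (xs.min(), ys.min(), xs.max(), ys.max())
    # Intersection over union with the ground truth box.
    ix1, iy1 = max(box[0], gt_box[0]), max(box[1], gt_box[1])
    ix2, iy2 = min(box[2], gt_box[2]), min(box[3], gt_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    return inter / float(area_a + area_b - inter)
```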
[00113] The technology disclosed achieves state-of-the-art performance across standard benchmarks on the COCO dataset and the Flickr30k dataset.
Particular Implementations
[00114] We describe a system and various implementations of a visual attention-based encoder-decoder image captioning model. One or more features of an implementation can be combined with the base implementation. Implementations that are not mutually exclusive are taught to be combinable. One or more features of an implementation can be combined with other implementations. This disclosure periodically reminds the user of these options. Omission from some implementations of recitations that repeat these options should not be taken as limiting the combinations taught in the preceding sections - these recitations are hereby incorporated forward by reference into each of the following implementations.
[00115] In one implementation, the technology disclosed presents a system. The system includes numerous parallel processors coupled to memory. The memory is loaded with computer instructions to generate a natural language caption for an image. The instructions, when executed on the parallel processors, implement the following actions.
[00116] Processing an image through an encoder to produce image feature vectors for regions of the image and determining a global image feature vector from the image feature vectors. The encoder can be a convolutional neural network (abbreviated CNN).
[00117] Processing words through a decoder by beginning at an initial timestep with a start- of-caption token < start > and the global image feature vector and continuing in successive timesteps using a most recently emitted caption word and the global image feature vector as input to the decoder. The decoder can be a long short-term memory network (abbreviated LSTM).
[00118] At each timestep, using at least a current hidden state of the decoder to determine unnormalized attention values for the image feature vectors and exponentially normalizing the attention values to produce attention probability masses.
[00119] Applying the attention probability masses to the image feature vectors to accumulate in an image context vector a weighted sum of the image feature vectors.
[00120] Submitting the image context vector and the current hidden state of the decoder to a feed-forward neural network and causing the feed-forward neural network to emit a next caption word. The feed-forward neural network can be a multilayer perceptron (abbreviated MLP).
[00121] Repeating the processing of words through the decoder, the using, the applying, and the submitting until the caption word emitted is an end-of-caption token < end > . The iterations are performed by a controller, shown in FIG. 25.
[00122] This system implementation and other systems disclosed optionally include one or more of the following features. System can also include features described in connection with methods disclosed. In the interest of conciseness, alternative combinations of system features are not individually enumerated. Features applicable to systems, methods, and articles of manufacture are not repeated for each statutory class set of base features. The reader will understand how features identified in this section can readily be combined with base features in other statutory classes.
[00123] The system can be a computer-implemented system. The system can be a neural network-based system.
[00124] The current hidden state of the decoder can be determined based on a current input to the decoder and a previous hidden state of the decoder.
[00125] The image context vector can be a dynamic vector that determines at each timestep an amount of spatial attention allocated to each image region, conditioned on the current hidden state of the decoder.
[00126] The system can use weakly-supervised localization to evaluate the allocated spatial attention.
[00127] The attention values for the image feature vectors can be determined by processing the image feature vectors and the current hidden state of the decoder through a single layer neural network.
[00128] The system can cause the feed-forward neural network to emit the next caption word at each timestep. In such an implementation, the feed-forward neural network can produce an output based on the image context vector and the current hidden state of the decoder and use the output to determine a normalized distribution of vocabulary probability masses over words in a vocabulary that represent a respective likelihood that a vocabulary word is the next caption word.
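As a sketch, the feed-forward emitter and vocabulary softmax can be as small as a single linear layer applied to the sum of the image context vector and the current hidden state; combining by addition and sharing one dimensionality between the two vectors are assumptions of this illustration, not requirements of the disclosure.

```python
import torch
import torch.nn as nn

class Emitter(nn.Module):
    """MLP + vocabulary softmax: maps the context vector and current hidden state
    to a normalized distribution of vocabulary probability masses."""

    def __init__(self, dim, vocab_size):
        super().__init__()
        self.mlp = nn.Linear(dim, vocab_size)     # single-layer feed-forward network

    def forward(self, c_t, h_t):
        logits = self.mlp(c_t + h_t)              # output based on context and hidden state
        return torch.softmax(logits, dim=-1)      # vocabulary probability masses
```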
[00129] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00130] In another implementation, the technology disclosed presents a system. The system includes numerous parallel processors coupled to memory. The memory is loaded with computer instructions to generate a natural language caption for an image. The instructions, when executed on the parallel processors, implement the following actions.
[00131] Using current hidden state information of an attention lagging decoder to generate an attention map for image feature vectors produced by an encoder from an image and generating an output caption word based on a weighted sum of the image feature vectors, with the weights determined from the attention map.
[00132] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00133] The system can be a computer-implemented system. The system can be a neural network-based system.
[00134] The current hidden state information can be determined based on a current input to the decoder and previous hidden state information.
[00135] The system can use weakly-supervised localization to evaluate the attention map.
[00136] The encoder can be a convolutional neural network (abbreviated CNN) and the image feature vectors can be produced by a last convolutional layer of the CNN.
[00137] The attention lagging decoder can be a long short-term memory network (abbreviated
LSTM).
[00138] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00139] In yet another implementation, the technology disclosed presents a system. The system includes numerous parallel processors coupled to memory. The memory is loaded with computer instructions to generate a natural language caption for an image. The instructions, when executed on the parallel processors, implement the following actions.
[00140] Processing an image through an encoder to produce image feature vectors for regions of the image. The encoder can be a convolutional neural network (abbreviated CNN).
[00141] Processing words through a decoder by beginning at an initial timestep with a start- of-caption token < start > and continuing in successive timesteps using a most recently emitted caption word as input to the decoder. The decoder can be a long short-term memory
network (abbreviated LSTM).
[00142] At each timestep, using at least a current hidden state of the decoder to determine, from the image feature vectors, an image context vector that determines an amount of attention allocated to regions of the image conditioned on the current hidden state of the decoder.
[00143] Not supplying the image context vector to the decoder.
[00144] Submitting the image context vector and the current hidden state of the decoder to a feed-forward neural network and causing the feed-forward neural network to emit a caption word.
[00145] Repeating the processing of words through the decoder, the using, the not supplying, and the submitting until the caption word emitted is an end-of-caption token < end > . The iterations are performed by a controller, shown in FIG. 25.
[00146] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00147] The system can be a computer-implemented system. The system can be a neural network-based system.
[00148] The system does not supply the global image feature vector to the decoder and processes words through the decoder by beginning at the initial timestep with the start-of-caption token < start > and continuing in successive timesteps using the most recently emitted caption word as input to the decoder.
[00149] The system does not supply the image feature vectors to the decoder, in some implementations.
[00150] In yet further implementation, the technology disclosed presents a system for machine generation of a natural language caption for an image. The system runs on numerous
parallel processors. The system can be a computer-implemented system. The system can be a neural network-based system.
[00151] The system comprises an attention lagging decoder. The attention lagging decoder can run on at least one of the numerous parallel processors.
[00152] The attention lagging decoder uses at least current hidden state information to generate an attention map for image feature vectors produced by an encoder from an image. The encoder can be a convolutional neural network (abbreviated CNN) and the image feature vectors can be produced by a last convolutional layer of the CNN. The attention lagging decoder can be a long short-term memory network (abbreviated LSTM).
[00153] The attention lagging decoder causes generation of an output caption word based on a weighted sum of the image feature vectors, with the weights determined from the attention map.
[00154] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00155] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00156] FIG. 6 illustrates the disclosed spatial attention model for image captioning rolled across multiple timesteps. The attention lagging decoder of FIG.2B is embodied in and implemented by the spatial attention model. The technology disclosed presents an image-to- language captioning system that implements the spatial attention model of FIG. 6 for machine generation of a natural language caption for an image. The system runs on numerous parallel processors.
[00157] The system comprises an encoder (FIG. 1) for processing an image through a convolutional neural network (abbreviated CNN) and producing image features for regions of the image. The encoder can run on at least one of the numerous parallel processors.
[00158] The system comprises a global image feature generator (FIG.3A) for generating a global image feature for the image by combining the image features. The global image feature generator can run on at least one of the numerous parallel processors.
[00159] The system comprises an input preparer (FIG. 3C) for providing input to a decoder as a combination of a start-of-caption token < start > and the global image feature at an initial decoder timestep and a combination of a most recently emitted caption word w_{t-1} and the global image feature at successive decoder timesteps. The input preparer can run on at least one of the numerous parallel processors.
[00160] The system comprises the decoder (FIG. 2B) for processing the input through a long short-term memory network (abbreviated LSTM) to generate a current decoder hidden state at each decoder timestep. The decoder can run on at least one of the numerous parallel processors.
[00161] The system comprises an attender (FIG.4) for accumulating, at each decoder timestep, an image context as a convex combination of the image features scaled by attention probability masses determined using the current decoder hidden state. The attender can run on at least one of the numerous parallel processors. FIG.4 depicts one implementation of modules of the attender that is part of the spatial attention model disclosed in FIG. 6. The attender comprises the comparator, the attender softmax (also referred to herein as attention probability mass producer), and the convex combination accumulator (also referred to herein as context vector producer or context producer).
[00162] The system comprises a feed-forward neural network (also referred to herein as multilayer perceptron (MLP)) (FIG. 5) for processing the image context and the current decoder hidden state to emit a next caption word at each decoder timestep. The feed-forward neural network can run on at least one of the numerous parallel processors.
[00163] The system comprises a controller (FIG. 25) for iterating the input preparer, the decoder, the attender, and the feed-forward neural network to generate the natural language caption for the image until the next caption word emitted is an end-of-caption token < end > . The controller can run on at least one of the numerous parallel processors.
[00164] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00165] The system can be a computer-implemented system. The system can be a neural network-based system.
[00166] The attender can further comprise an attender softmax (FIG. 4) for exponentially normalizing the attention values z_t = [λ_1, . . . λ_k] to produce the attention probability masses α_t = [α_1, . . . α_k] at each decoder timestep. The attender softmax can run on at least one of the numerous parallel processors.
[00167] The attender can further comprise a comparator (FIG. 4) for producing at each decoder timestep the attention values z_t = [λ_1, . . . λ_k] as a result of interaction between the current decoder hidden state h_t and the image features V = [v_1, . . . v_k], v_i ∈ R^d. The comparator can run on at least one of the numerous parallel processors. In some implementations, the attention values z_t = [λ_1, . . . λ_k] are determined by processing the current decoder hidden state h_t and the image features V = [v_1, . . . v_k], v_i ∈ R^d through a single layer neural network applying a weight matrix and a nonlinearity layer (FIG. 4) applying a hyperbolic tangent (tanh) squashing function (to produce an output between -1 and 1). In other implementations, the attention values z_t = [λ_1, . . . λ_k] are determined by processing the current decoder hidden state h_t and the image features V = [v_1, . . . v_k], v_i ∈ R^d through a dot producter or inner producter. In yet other implementations, the attention values z_t = [λ_1, . . . λ_k] are determined by processing the current decoder hidden state h_t and the image features V = [v_1, . . . v_k], v_i ∈ R^d through a bilinear form producter.
[00168] The decoder can further comprise at least an input gate, a forget gate, and an output gate for determining at each decoder timestep the current decoder hidden state based on a current decoder input and a previous decoder hidden state. The input gate, the forget gate, and the output gate can each run on at least one of the numerous parallel processors.
[00169] The attender can further comprise a convex combination accumulator (FIG.4) for producing the image context to identify an amount of spatial attention allocated to each image region at each decoder timestep, conditioned on the current decoder hidden state. The convex combination accumulator can run on at least one of the numerous parallel processors.
[00170] The system can further comprise a localizer (FIG. 25) for evaluating the allocated spatial attention based on weakly-supervising localization. The localizer can run on at least one of the numerous parallel processors.
[00171] The system can further comprise the feed-forward neural network (FIG. 5) for producing at each decoder timestep an output based on the image context and the current decoder hidden state.
[00172] The system can further comprise a vocabulary softmax (FIG. 5) for determining at each decoder timestep a normalized distribution of vocabulary probability masses over words in a vocabulary using the output. The vocabulary softmax can run on at least one of the numerous parallel processors. The vocabulary probability masses can identify respective likelihood that a vocabulary word is the next caption word.
[00173] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00174] FIG. 7 depicts one implementation of image captioning using spatial attention applied by the spatial attention model of FIG. 6. In one implementation, the technology disclosed presents a method that performs the image captioning of FIG. 7 for machine
generation of a natural language caption for an image. The method can be a computer- implemented method. The method can be a neural network-based method.
[00175] The method includes processing an image I through an encoder (FIG. 1) to produce image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d for regions of the image I and determining a global image feature vector v^g from the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d. The encoder can be a convolutional neural network (abbreviated CNN), as shown in FIG. 1.
[00176] The method includes processing words through a decoder (FIGs. 2B and 6) by beginning at an initial timestep with a start-of-caption token < start > and the global image feature vector v^g and continuing in successive timesteps using a most recently emitted caption word w_{t-1} and the global image feature vector v^g as input to the decoder. The decoder can be a long short-term memory network (abbreviated LSTM), as shown in FIGs. 2B and 6.
[00177] The method includes, at each timestep, using at least a current hidden state of the decoder h_t to determine unnormalized attention values z_t = [λ_1, . . . λ_k] for the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d and exponentially normalizing the attention values to produce attention probability masses α_t = [α_1, . . . α_k] that add to unity (1) (also referred to herein as the attention weights). α_t denotes an attention map that comprises the attention probability masses [α_1, . . . α_k].
[00178] The method includes applying the attention probability masses [α_1, . . . α_k] to the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d to accumulate in an image context vector c_t a weighted sum of the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d.
[00179] The method includes submitting the image context vector c_t and the current hidden state of the decoder h_t to a feed-forward neural network and causing the feed-forward neural network to emit a next caption word w_t. The feed-forward neural network can be a multilayer perceptron (abbreviated MLP).
[00180] The method includes repeating the processing of words through the decoder, the using, the applying, and the submitting until the caption word emitted is end-of-caption token
< end > . The iterations are performed by a controller, shown in FIG. 25.
[00181] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this method implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00182] Other implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
[00183] In another implementation, the technology disclosed presents a method of machine generation of a natural language caption for an image. The method can be a computer- implemented method. The method can be a neural network-based method.
[00184] As shown in FIG. 7, the method includes using current hidden state information h_t of an attention lagging decoder (FIGs. 2B and 6) to generate an attention map α_t = [α_1, . . . α_k] for image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d produced by an encoder (FIG. 1) from an image I and generating an output caption word w_t based on a weighted sum of the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d, with the weights determined from the attention map α_t = [α_1, . . . α_k].
[00185] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this method implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00186] Other implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
[00187] In yet another implementation, the technology disclosed presents a method of machine generation of a natural language caption for an image. This method uses a visually hermetic LSTM. The method can be a computer-implemented method. The method can be a neural network-based method.
[00188] The method includes processing an image I through an encoder (FIG. 1) to produce image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d for k regions of the image I. The encoder can be a convolutional neural network (abbreviated CNN).
[00189] The method includes processing words through a decoder by beginning at an initial timestep with a start-of-caption token < start > and continuing in successive timesteps using a most recently emitted caption word w_{t-1} as input to the decoder. The decoder can be a visually hermetic long short-term memory network (abbreviated LSTM), shown in FIGs. 14 and 15.
[00190] The method includes, at each timestep, using at least a current hidden state h_t of the decoder to determine, from the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d, an image context vector c_t that determines an amount of attention allocated to regions of the image conditioned on the current hidden state h_t of the decoder.
[00191] The method includes not supplying the image context vector c_t to the decoder.
[00192] The method includes submitting the image context vector c_t and the current hidden state of the decoder h_t to a feed-forward neural network and causing the feed-forward neural network to emit a caption word.
[00193] The method includes repeating the processing of words through the decoder, the using, the not supplying, and the submitting until the caption word emitted is an end-of-caption token < end >.
[00194] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this method implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00195] Other implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
[00196] FIG. 12 shows the disclosed adaptive attention model for image captioning rolled across multiple timesteps. The sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder. FIG. 13 illustrates one
implementation of image captioning using adaptive attention applied by the adaptive attention model of FIG. 12.
[00197] In one implementation, the technology disclosed presents a system that performs the image captioning of FIGs. 12 and 13. The system includes numerous parallel processors coupled to memory. The memory is loaded with computer instructions to automatically caption an image. The instructions, when executed on the parallel processors, implement the following actions.
[00198] Mixing results of an image encoder (FIG. 1) and a language decoder (FIG. 8) to emit a sequence of caption words for an input image I. The mixing is governed by a gate probability mass/sentinel gate mass β_t determined from a visual sentinel vector s_t of the language decoder and a current hidden state vector of the language decoder h_t. The image encoder can be a convolutional neural network (abbreviated CNN). The language decoder can be a sentinel long short-term memory network (abbreviated Sn-LSTM), as shown in FIGs. 8 and 9. The language decoder can be a sentinel bi-directional long short-term memory network
(abbreviated Sn-Bi-LSTM). The language decoder can be a sentinel gated recurrent unit network (abbreviated Sn-GRU). The language decoder can be a sentinel quasi-recurrent neural network (abbreviated Sn-QRNN).
[00199] Determining the results of the image encoder by processing the image I through the image encoder to produce image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d for k regions of the image I and computing a global image feature vector v^g from the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d.
[00200] Determining the results of the language decoder by processing words through the language decoder. This includes - (1) beginning at an initial timestep with a start-of-caption token < start > and the global image feature vector v^g, (2) continuing in successive timesteps using a most recently emitted caption word w_{t-1} and the global image feature vector v^g as input to the language decoder, and (3) at each timestep, generating a visual sentinel vector s_t that combines the most recently emitted caption word w_{t-1}, the global image feature vector v^g, a previous hidden state vector of the language decoder h_{t-1}, and memory contents m_t of the language decoder.
[00201] At each timestep, using at least a current hidden state vector h_t of the language decoder to determine unnormalized attention values [λ_1, . . . λ_k] for the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d and an unnormalized gate value [η_t] for the visual sentinel vector s_t.
[00202] Concatenating the unnormalized attention values [λ_1, . . . λ_k] and the unnormalized gate value [η_t] and exponentially normalizing the concatenated attention and gate values to produce a vector of attention probability masses [α_1, . . . α_k] and the gate probability mass/sentinel gate mass β_t.
[00203] Applying the attention probability masses [α_1, . . . α_k] to the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d to accumulate in an image context vector c_t a weighted sum of the image feature vectors V = [v_1, . . . v_k], v_i ∈ R^d. The generation of the image context vector c_t is embodied in and implemented by the spatial attender of the adaptive attender, shown in FIGs. 11 and 13.
[00204] Determining an adaptive context vector ĉ_t as a mix of the image context vector c_t and the visual sentinel vector s_t according to the gate probability mass/sentinel gate mass β_t. The generation of the adaptive context vector ĉ_t is embodied in and implemented by the mixer of the adaptive attender, shown in FIGs. 11 and 13.
[00205] Submitting the adaptive context vector ct and the current hidden state ht of the language decoder to a feed-forward neural network and causing the feed-forward neural network to emit a next caption word. The feed-forward neural network is embodied in and implemented by the emitter, as shown in FIG. 5.
[00206] Repeating the processing of words through the language decoder, the using, the concatenating, the applying, the determining, and the submitting until the next caption word emitted is an end-of-caption token < end > . The iterations are performed by a controller, shown in FIG. 25.
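By way of illustration only, the actions of paragraphs [00202] through [00205] can be sketched as follows. The sketch assumes PyTorch, assumes the hidden state and the image features share a common dimensionality d, and takes the unnormalized attention values and gate value as inputs (their computation is described later with reference to the adaptive comparator); the emitter weight W_p and all variable names are illustrative, not mandated by the disclosure.

```python
import torch
import torch.nn.functional as F

def adaptive_decode_step(lam, eta, V, s_t, h_t, W_p):
    """One decoding timestep of the adaptive attender (illustrative sketch).

    lam : (k,)   unnormalized attention values for the k image feature vectors
    eta : ()     unnormalized gate value for the visual sentinel (0-dim tensor)
    V   : (k, d) image feature vectors; s_t: (d,) visual sentinel; h_t: (d,) hidden state
    W_p : (vocab_size, d) emitter weights
    """
    # Concatenate and exponentially normalize: attention probability masses
    # alpha_1..alpha_k plus the sentinel gate mass beta_t as the last element.
    alpha_hat = F.softmax(torch.cat([lam, eta.view(1)]), dim=0)
    alpha, beta_t = alpha_hat[:-1], alpha_hat[-1]

    # Spatial attender: image context Ct as a weighted sum of image features.
    c_img = alpha @ V
    # Mixer: adaptive context ct = beta_t * St + (1 - beta_t) * Ct.
    c_adapt = beta_t * s_t + (1.0 - beta_t) * c_img

    # Emitter (feed-forward network): distribution over the next caption word.
    p_t = F.softmax(W_p @ (c_adapt + h_t), dim=0)
    return p_t, alpha, beta_t
```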
[00207] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00208] The system can be a computer-implemented system. The system can be a neural network-based system.
[00209] The adaptive context vector ct at timestep t can be determined as
ct = fit st + (1 - fit) ct , where ct denotes the adaptive context vector, ct denotes the image context vector, St denotes the visual sentinel vector, βt denotes the gate probability
mass/sentinel gate mass, and (1— βί) denotes visual grounding probability of the next caption word.
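By way of a numerical illustration (the value 0.9 is arbitrary and not taken from the disclosure), a sentinel gate mass of βt = 0.9 gives ct = 0.9 St + 0.1 Ct, so the next word is predicted almost entirely from the visual sentinel; conversely, βt = 0.1 gives ct = 0.1 St + 0.9 Ct, weighting the attended image features far more heavily.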
[00210] The visual sentinel vector St can encode visual sentinel information that includes visual context determined from the global image feature vector vg and textual context determined from previously emitted caption words.
[00211] The gate probability mass/sentinel gate mass βt being unity can result in the adaptive context vector ct being equal to the visual sentinel vector St. In such an implementation, the next caption word Wt is emitted only in dependence upon the visual sentinel information.
[00212] The image context vector Ct can encode spatial image information conditioned on the current hidden state vector ht of the language decoder.
[00213] The gate probability mass/sentinel gate mass βt being zero can result in the adaptive context vector ct being equal to the image context vector Ct. In such an implementation, the next caption word Wt is emitted only in dependence upon the spatial image information.
[00214] The gate probability mass/sentinel gate mass βt can be a scalar value between unity and zero that diminishes when the next caption word Wt is a visual word and enhances when the next caption word Wt is a non-visual word or linguistically correlated to the previously emitted caption word Wt-1.
[00215] The system can further comprise a trainer (FIG. 25), which in turn further comprises a preventer (FIG. 25). The preventer prevents, during training, backpropagation of gradients from the language decoder to the image encoder when the next caption word is a non-visual word or linguistically correlated to the previously emitted caption word. The trainer and the preventer can each run on at least one of the numerous parallel processors.
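One way such a preventer could be realized is sketched below; this is an assumption-laden illustration, not the disclosed implementation. It detaches the image context from the computation graph whenever the sentinel gate mass βt exceeds a hypothetical threshold, so that no gradient flows from the language decoder back into the CNN encoder for that timestep (PyTorch is assumed).

```python
import torch

def mix_with_gradient_blocking(beta_t, s_t, c_img, beta_threshold=0.5):
    """Illustrative preventer: when beta_t is high the next word is treated as
    non-visual or linguistically correlated, so the image context is detached
    and backpropagation into the image encoder is blocked for this timestep.
    beta_threshold is a hypothetical hyperparameter."""
    if beta_t.item() > beta_threshold:
        c_img = c_img.detach()   # no gradients flow decoder -> encoder through c_img
    return beta_t * s_t + (1.0 - beta_t) * c_img
```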
[00216] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00217] In one implementation, the technology disclosed presents a method of automatic image captioning. The method can be a computer-implemented method. The method can be a neural network-based method.
[00218] The method includes mixing results of an image encoder (FIG. 1) and a language decoder (FIGs. 8 and 9) to emit a sequence of caption words for an input image I. The mixing is embodied in and implemented by the mixer of the adaptive attender of FIG. 11. The mixing is governed by a gate probability mass (also referred to herein as the sentinel gate mass) determined from a visual sentinel vector of the language decoder and a current hidden state vector of the language decoder. The image encoder can be a convolutional neural network (abbreviated CNN). The language decoder can be a sentinel long short-term memory network (abbreviated Sn-LSTM). The language decoder can be a sentinel bi-directional long short-term memory network (abbreviated Sn-Bi-LSTM). The language decoder can be a sentinel gated recurrent unit network (abbreviated Sn-GRU). The language decoder can be a sentinel quasi-recurrent neural network (abbreviated Sn-QRNN).
[00219] The method includes determining the results of the image encoder by processing the image through the image encoder to produce image feature vectors for regions of the image and computing a global image feature vector from the image feature vectors.
[00220] The method includes determining the results of the language decoder by processing words through the language decoder. This includes - (1) beginning at an initial timestep with a
start-of-caption token < start > and the global image feature vector, (2) continuing in successive timesteps using a most recently emitted caption word wt-1 and the global image feature vector as input to the language decoder, and (3) at each timestep, generating a visual sentinel vector that combines the most recently emitted caption word wt-1, the global image feature vector, a previous hidden state vector of the language decoder, and memory contents of the language decoder.
[00221] The method includes, at each timestep, using at least a current hidden state vector of the language decoder to determine unnormalized attention values for the image feature vectors and an unnormalized gate value for the visual sentinel vector.
[00222] The method includes concatenating the unnormalized attention values and the unnormalized gate value and exponentially normalizing the concatenated attention and gate values to produce a vector of attention probability masses and the gate probability mass/sentinel gate mass.
[00223] The method includes applying the attention probability masses to the image feature vectors to accumulate in an image context vector Ct a weighted sum of the image feature vectors.
[00224] The method includes determining an adaptive context vector ct as a mix of the image context vector Ct and the visual sentinel vector St according to the gate probability mass/sentinel gate mass βt.
[00225] The method includes submitting the adaptive context vector ct and the current hidden state ht of the language decoder to a feed-forward neural network (abbreviated MLP) and causing the feed-forward neural network to emit a next caption word Wt.
[00226] The method includes repeating the processing of words through the language decoder, the using, the concatenating, the applying, the determining, and the submitting until the next caption word emitted is an end-of-caption token < end > . The iterations are performed by a controller, shown in FIG. 25.
[00227] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this method implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00228] Other implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above. Yet another implementation may include a system including memory and one or more processors operable to execute instructions, stored in the memory, to perform the method described above.
[00229] In another implementation, the technology disclosed presents an automated image captioning system. The system runs on numerous parallel processors.
[00230] The system comprises a convolutional neural network (abbreviated CNN) encoder (FIG. 11). The CNN encoder can run on at least one of the numerous parallel processors. The CNN encoder processes an input image through one or more convolutional layers to generate image features by image regions that represent the image.
[00231] The system comprises a sentinel long short-term memory network (abbreviated Sn-LSTM) decoder (FIG. 8). The Sn-LSTM decoder can run on at least one of the numerous parallel processors. The Sn-LSTM decoder processes a previously emitted caption word combined with the image features to emit a sequence of caption words over successive timesteps.
[00232] The system comprises an adaptive attender (FIG. 11). The adaptive attender can run on at least one of the numerous parallel processors. At each timestep, the adaptive attender spatially attends to the image features and produces an image context conditioned on a current hidden state of the Sn-LSTM decoder. Then, at each timestep, the adaptive attender extracts, from the Sn-LSTM decoder, a visual sentinel that includes visual context determined from previously processed image features and textual context determined from previously emitted caption words. Then, at each timestep, the adaptive attender mixes the image context Ct and the visual sentinel St for next caption word Wt emittance. The mixing is governed by a sentinel gate mass βt determined from the visual sentinel St and the current hidden state ht of the Sn-LSTM decoder.
[00233] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00234] The system can be a computer-implemented system. The system can be a neural network-based system.
[00235] The adaptive attender (FIG. 11) enhances attention directed to the image context when a next caption word is a visual word, as shown in FIGs. 16, 18, and 19. The adaptive attender (FIG. 11) enhances attention directed to the visual sentinel when a next caption word is a non-visual word or linguistically correlated to the previously emitted caption word, as shown in FIGs. 16, 18, and 19.
[00236] The system can further comprise a trainer, which in turn further comprises a preventer. The preventer prevents, during training, backpropagation of gradients from the Sn-LSTM decoder to the CNN encoder when a next caption word is a non-visual word or
linguistically correlated to the previously emitted caption word. The trainer and the preventer can each run on at least one of the numerous parallel processors.
[00237] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00238] In yet another implementation, the technology disclosed presents an automated image captioning system. The system runs on numerous parallel processors. The system can be a computer-implemented system. The system can be a neural network-based system.
[00239] The system comprises an image encoder (FIG. 1). The image encoder can run on at least one of the numerous parallel processors. The image encoder processes an input image through a convolutional neural network (abbreviated CNN) to generate an image representation.
[00240] The system comprises a language decoder (FIG. 8). The language decoder can run on at least one of the numerous parallel processors. The language decoder processes a previously emitted caption word combined with the image representation through a recurrent neural network (abbreviated RNN) to emit a sequence of caption words.
[00241] The system comprises an adaptive attender (FIG. 11). The adaptive attender can run on at least one of the numerous parallel processors. The adaptive attender enhances attention directed to the image representation when a next caption word is a visual word. The adaptive attender enhances attention directed to memory contents of the language decoder when the next caption word is a non-visual word or linguistically correlated to the previously emitted caption word.
[00242] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00243] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00244] In yet a further implementation, the technology disclosed presents an automated image captioning system. The system runs on numerous parallel processors. The system can be a computer-implemented system. The system can be a neural network-based system.
[00245] The system comprises an image encoder (FIG. 1). The image encoder can run on at least one of the numerous parallel processors. The image encoder processes an input image through a convolutional neural network (abbreviated CNN) to generate an image representation.
[00246] The system comprises a language decoder (FIG. 8). The language decoder can run on at least one of the numerous parallel processors. The language decoder processes a previously emitted caption word combined with the image representation through a recurrent neural network (abbreviated RNN) to emit a sequence of caption words.
[00247] The system comprises a sentinel gate mass/gate probability mass βt. The sentinel gate mass can run on at least one of the numerous parallel processors. The sentinel gate mass controls accumulation of the image representation and memory contents of the language decoder for next caption word emittance. The sentinel gate mass is determined from a visual sentinel of the language decoder and a current hidden state of the language decoder.
[00248] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00249] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00250] In one further implementation, the technology disclosed presents a system that automates a task. The system runs on numerous parallel processors. The system can be a computer-implemented system. The system can be a neural network-based system.
[00251] The system comprises an encoder. The encoder can run on at least one of the numerous parallel processors. The encoder processes an input through at least one neural network to generate an encoded representation.
[00252] The system comprises a decoder. The decoder can run on at least one of the numerous parallel processors. The decoder processes a previously emitted output combined with the encoded representation through at least one neural network to emit a sequence of outputs.
[00253] The system comprises an adaptive attender. The adaptive attender can run on at least one of the numerous parallel processors. The adaptive attender uses a sentinel gate mass to mix the encoded representation and memory contents of the decoder for emitting a next output. The sentinel gate mass is determined from the memory contents of the decoder and a current hidden state of the decoder. The sentinel gate mass can run on at least one of the numerous parallel processors.
[00254] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00255] In one implementation, when the task is text summarization, the system comprises a first recurrent neural network (abbreviated RNN) as the encoder that processes an input document to generate a document encoding and a second RNN as the decoder that uses the document encoding to emit a sequence of summary words.
[00256] In one other implementation, when the task is question answering, the system comprises a first RNN as the encoder that processes an input question to generate a question encoding and a second RNN as the decoder that uses the question encoding to emit a sequence of answer words.
[00257] In another implementation, when the task is machine translation, the system comprises a first RNN as the encoder that processes a source language sequence to generate a source encoding and a second RNN as the decoder that uses the source encoding to emit a target language sequence of translated words.
[00258] In yet another implementation, when the task is video captioning, the system comprises a combination of a convolutional neural network (abbreviated CNN) and a first RNN as the encoder that processes video frames to generate a video encoding and a second RNN as the decoder that uses the video encoding to emit a sequence of caption words.
[00259] In yet a further implementation, when the task is image captioning, the system comprises a CNN as the encoder that processes an input image to generate an image encoding and an RNN as the decoder that uses the image encoding to emit a sequence of caption words.
[00260] The system can determine an alternative representation of the input from the encoded representation. The system can then use the alternative representation, instead of the encoded representation, for processing by the decoder and mixing by the adaptive attender.
[00261] The alternative representation can be a weighted summary of the encoded representation conditioned on the current hidden state of the decoder.
[00262] The alternative representation can be an averaged summary of the encoded representation.
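As an illustration of paragraphs [00260] through [00262], the alternative representation could be computed as sketched below; the bilinear scoring matrix W_a and the function name are assumptions, and PyTorch is assumed.

```python
import torch
import torch.nn.functional as F

def alternative_representation(encoded, h_t=None, W_a=None):
    """Build an alternative representation of the encoder output.

    encoded : (n, d) encoded representation (n items, e.g. image regions or tokens)
    h_t     : (d,)   current hidden state of the decoder
    W_a     : (d, d) hypothetical attention weights

    If h_t and W_a are provided, return a weighted summary of the encoded
    representation conditioned on the decoder state; otherwise return a
    simple averaged summary.
    """
    if h_t is None or W_a is None:
        return encoded.mean(dim=0)          # averaged summary
    scores = encoded @ (W_a @ h_t)          # (n,) relevance of each encoded item
    weights = F.softmax(scores, dim=0)
    return weights @ encoded                # (d,) weighted summary
```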
[00263] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00264] In one other implementation, the technology disclosed presents a system for machine generation of a natural language caption for an input image I. The system runs on numerous parallel processors. The system can be a computer-implemented system. The system can be a neural network-based system.
[00265] FIG. 10 depicts the disclosed adaptive attention model for image captioning that automatically decides how heavily to rely on visual information, as opposed to linguistic
information, to emit a next caption word. The sentinel LSTM (Sn-LSTM) of FIG. 8 is embodied in and implemented by the adaptive attention model as a decoder. FIG. 11 depicts one implementation of modules of an adaptive attender that is part of the adaptive attention model disclosed in FIG. 12. The adaptive attender comprises a spatial attender, an extractor, a sentinel gate mass determiner, a sentinel gate mass softmax, and a mixer (also referred to herein as an adaptive context vector producer or an adaptive context producer). The spatial attender in turn comprises an adaptive comparator, an adaptive attender softmax, and an adaptive convex combination accumulator.
[00266] The system comprises a convolutional neural network (abbreviated CNN) encoder (FIG. 1) for processing the input image through one or more convolutional layers to generate image features V = [v1, . . . vk], vi ∈ ℝ^d by k image regions that represent the image I. The
CNN encoder can run on at least one of the numerous parallel processors.
[00267] The system comprises a sentinel long short-term memory network (abbreviated Sn-LSTM) decoder (FIG. 8) for processing a previously emitted caption word wt-1 combined with the image features to produce a current hidden state ht of the Sn-LSTM decoder at each decoder timestep. The Sn-LSTM decoder can run on at least one of the numerous parallel processors.
[00268] The system comprises an adaptive attender, shown in FIG. 11. The adaptive attender can run on at least one of the numerous parallel processors. The adaptive attender further comprises a spatial attender (FIGs. 11 and 13) for spatially attending to the image features V = [v1, . . . vk], vi ∈ ℝ^d at each decoder timestep to produce an image context Ct conditioned on the current hidden state ht of the Sn-LSTM decoder. The adaptive attender further comprises an extractor (FIGs. 11 and 13) for extracting, from the Sn-LSTM decoder, a visual sentinel St at each decoder timestep. The visual sentinel St includes visual context determined from previously processed image features and textual context determined from previously emitted caption words. The adaptive attender further comprises a mixer (FIGs. 11 and 13) for mixing the image context Ct and the visual sentinel St to produce an adaptive context ct at each decoder timestep. The mixing is governed by a sentinel gate mass βt determined from the visual sentinel St and the current hidden state ht of the Sn-LSTM decoder. The spatial attender, the extractor, and the mixer can each run on at least one of the numerous parallel processors.
[00269] The system comprises an emitter (FIGs. 5 and 13) for generating the natural language caption for the input image I based on the adaptive contexts ct produced over successive decoder timesteps by the mixer. The emitter can run on at least one of the numerous parallel processors.
[00270] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00271] The Sn-LSTM decoder can further comprise an auxiliary sentinel gate (FIG. 8) for producing the visual sentinel St at each decoder timestep. The auxiliary sentinel gate can run on at least one of the numerous parallel processors.
[00272] The adaptive attender can further comprise a sentinel gate mass softmax (FIGs. 11 and 13) for exponentially normalizing attention values [λ1, . . . λk] of the image features and a gate value [ηt] of the visual sentinel to produce an adaptive sequence of attention probability masses [α1, . . . αk] and the sentinel gate mass βt at each decoder timestep. The sentinel gate mass softmax can run on at least one of the numerous parallel processors.
[00273] The adaptive sequence at timestep t can be determined as softmax ([λ1, . . . λk ; ηt]), where ηt = wh^T tanh (Ws St + Wg ht) and the sentinel gate mass βt is the last element of the resulting sequence.
[00274] In the equation above, [;] denotes concatenation, and wh, Ws, and Wg are weight parameters.
[00275] The probability over a vocabulary of possible words at time t can be determined by the vocabulary softmax of the emitter (FIG. 5) as follows:
pt = softmax (Wp (ct + ht))
[00276] In the above equation, Wp is the weight parameter that is learnt, ct is the adaptive context vector, and ht is the current hidden state of the language decoder.
[00277] The adaptive attender can further comprise a sentinel gate mass determiner (FIGs. 11 and 13) for producing at each decoder timestep the sentinel gate mass βt as a result of interaction between the current decoder hidden state ht and the visual sentinel St . The sentinel gate mass determiner can run on at least one of the numerous parallel processors.
[00278] The spatial attender can further comprise an adaptive comparator (FIGs. 11 and 13) for producing at each decoder timestep the attention values [λ1, . . . λk] as a result of interaction between the current decoder hidden state ht and the image features V = [v1, . . . vk], vi ∈ ℝ^d. The adaptive comparator can run on at least one of the numerous parallel processors. In some implementations, the attention values [λ1, . . . λk] and the gate value [ηt] are determined by processing the current decoder hidden state ht, the image features V, and the visual sentinel vector St through a single layer neural network applying a weight matrix and a nonlinearity layer applying a hyperbolic tangent (tanh) squashing function (to produce an output between -1 and 1). In other implementations, the attention and gate values are determined by a dot producter. In yet other implementations, the attention and gate values are determined by processing the current decoder hidden state ht, the image features V, and the visual sentinel vector St through a bilinear form producter.
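By way of illustration only, the single layer neural network variant can be sketched as follows; the weight names Wv, Wg, Ws and the projection vector wh are assumptions (the disclosure does not fix a parameterization), dimensions are simplified so that the hidden state and the image features share size d, and PyTorch is assumed.

```python
import torch

def attention_and_gate_values(V, s_t, h_t, W_v, W_g, W_s, w_h):
    """Adaptive comparator sketch: unnormalized attention values for the k image
    regions and the unnormalized gate value for the visual sentinel.

    V: (k, d) image features; s_t: (d,) visual sentinel; h_t: (d,) decoder state
    W_v, W_g, W_s: (d, d) weight matrices; w_h: (d,) projection vector
    """
    # Compare the current hidden state with every image region through a single
    # layer followed by a tanh squashing to (-1, 1), then project to scalars.
    lam = torch.tanh(V @ W_v + h_t @ W_g) @ w_h        # (k,) attention values
    # The same comparison against the visual sentinel yields the gate value.
    eta = torch.tanh(s_t @ W_s + h_t @ W_g) @ w_h      # scalar gate value
    return lam, eta
```

The outputs correspond to [λ1, . . . λk] and [ηt] and can be fed to the sentinel gate mass softmax described in paragraph [00272].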
[00279] The spatial attender can further comprise an adaptive attender softmax (FIGs. 11 and 13) for exponentially normalizing the attention values for the image features to produce the attention probability masses at each decoder timestep. The adaptive attender softmax can run on at least one of the numerous parallel processors.
[00280] The spatial attender can further comprise an adaptive convex combination accumulator (also referred to herein as a mixer or adaptive context producer or adaptive context vector producer) (FIGs. 11 and 13) for accumulating, at each decoder timestep, the image context as a convex combination of the image features scaled by attention probability masses determined using the current decoder hidden state. The adaptive convex combination accumulator can run on at least one of the numerous parallel processors.
[00281] The system can further comprise a trainer (FIG. 25). The trainer in turn further comprises a preventer for preventing backpropagation of gradients from the Sn-LSTM decoder to the CNN encoder when a next caption word is a non-visual word or linguistically correlated to a previously emitted caption word. The trainer and the preventer can each run on at least one of the numerous parallel processors.
[00282] The adaptive attender further comprises the sentinel gate mass/gate probability mass βt for enhancing attention directed to the image context when a next caption word is a visual word. The adaptive attender further comprises the sentinel gate mass/gate probability mass βt for enhancing attention directed to the visual sentinel when a next caption word is a non-visual word or linguistically correlated to the previously emitted caption word. The sentinel gate mass can run on at least one of the numerous parallel processors.
[00283] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00284] In one implementation, the technology disclosed presents a recurrent neural network system (abbreviated RNN). The RNN runs on numerous parallel processors. The RNN can be a computer-implemented system.
[00285] The RNN comprises a sentinel long short-term memory network (abbreviated Sn-LSTM) that receives inputs at each of a plurality of timesteps. The inputs include at least an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep. The Sn-LSTM can run on at least one of the numerous parallel processors.
[00286] The RNN generates outputs at each of the plurality of timesteps by processing the inputs through gates of the Sn-LSTM. The gates include at least an input gate, a forget gate, an output gate, and an auxiliary sentinel gate. Each of the gates can run on at least one of the numerous parallel processors.
[00287] The RNN stores in a memory cell of the Sn-LSTM auxiliary information accumulated over time from (1) processing of the inputs by the input gate, the forget gate, and the output gate and (2) updating of the memory cell with gate outputs produced by the input gate, the forget gate, and the output gate. The memory cell can be maintained and persisted in a database (FIG 9).
[00288] The auxiliary sentinel gate modulates the stored auxiliary information from the memory cell for next prediction. The modulation is conditioned on the input for the current timestep, the hidden state from the previous timestep, and the auxiliary input for the current timestep.
[00289] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00290] The auxiliary input can be visual input comprising image data and the input can be a text embedding of a most recently emitted word and/or character. The auxiliary input can be a text encoding from another long short-term memory network (abbreviated LSTM) of an input document and the input can be a text embedding of a most recently emitted word and/or character. The auxiliary input can be a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most recently emitted word and/or character. The auxiliary input can be a prediction derived from a hidden state vector from another LSTM that encodes sequential data and the input can be a text embedding of a most
recently emitted word and/or character. The auxiliary input can be an output of a convolutional neural network (abbreviated CNN). The auxiliary input can be an output of an attention network.
[00291] The prediction can be a classification label embedding.
[00292] The Sn-LSTM can be further configured to receive multiple auxiliary inputs at a timestep, with at least one auxiliary input comprising concatenated vectors.
[00293] The auxiliary input can be received only at an initial timestep.
[00294] The auxiliary sentinel gate can produce a sentinel state at each timestep as an indicator of the modulated auxiliary information.
[00295] The outputs can comprise at least a hidden state for the current timestep and a sentinel state for the current timestep.
[00296] The RNN can be further configured to use at least the hidden state for the current timestep and the sentinel state for the current timestep for making the next prediction.
[00297] The inputs can further include a bias input and a previous state of the memory cell.
[00298] The Sn-LSTM can further include an input activation function.
[00299] The auxiliary sentinel gate can gate a pointwise hyperbolic tangent (abbreviated tanh) of the memory cell.
[00300] The auxiliary sentinel gate at the current timestep t can be defined as auxt = σ (Wx xt + Wh ht-1), where Wx and Wh are weight parameters to be learned, xt is the input for the current timestep, ht-1 is the hidden state from the previous timestep, auxt is the auxiliary sentinel gate applied on the memory cell mt, ⊙ represents element-wise product, and σ denotes logistic sigmoid activation.
[00301] The sentinel state/visual sentinel at the current timestep t is defined as st = auxt ⊙ tanh (mt), where st is the sentinel state, auxt is the auxiliary sentinel gate applied on the memory cell mt, ⊙ represents element-wise product, and tanh denotes hyperbolic tangent activation.
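By way of illustration only, an Sn-LSTM cell implementing these two equations can be sketched as below; it wraps a standard LSTM cell rather than reproducing the exact gate wiring of FIGs. 8 and 9, the layer names are assumptions, and PyTorch is assumed. For image captioning, the input xt would be the concatenation of the word embedding of wt-1 and the auxiliary input (for example, the global image feature vg).

```python
import torch
import torch.nn as nn

class SentinelLSTMCell(nn.Module):
    """Standard LSTM cell augmented with an auxiliary sentinel gate (a sketch
    of the Sn-LSTM; not the exact architecture of FIGs. 8 and 9)."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        self.W_x = nn.Linear(input_size, hidden_size, bias=False)
        self.W_h = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x_t, state):
        h_prev, m_prev = state
        # Ordinary LSTM update: input, forget, and output gates produce the new
        # hidden state h_t and memory cell m_t.
        h_t, m_t = self.lstm(x_t, (h_prev, m_prev))
        # Auxiliary sentinel gate: aux_t = sigmoid(W_x x_t + W_h h_{t-1}).
        aux_t = torch.sigmoid(self.W_x(x_t) + self.W_h(h_prev))
        # Sentinel state: s_t = aux_t (element-wise) tanh(m_t).
        s_t = aux_t * torch.tanh(m_t)
        return h_t, m_t, s_t
```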
[00302] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00303] In another implementation, the technology disclosed presents a sentinel long short-term memory network (abbreviated Sn-LSTM) that processes auxiliary input combined with input and previous hidden state. The Sn-LSTM runs on numerous parallel processors. The Sn-LSTM can be a computer-implemented system.
[00304] The Sn-LSTM comprises an auxiliary sentinel gate that applies on a memory cell of the Sn-LSTM and modulates use of auxiliary information during next prediction. The auxiliary information is accumulated over time in the memory cell at least from the processing of the
auxiliary input combined with the input and the previous hidden state. The auxiliary sentinel gate can run on at least one of the numerous parallel processors. The memory cell can be maintained and persisted in a database (FIG 9).
[00305] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00306] The auxiliary sentinel gate can produce a sentinel state at each timestep as an indicator of the modulated auxiliary information, conditioned on an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep.
[00307] The auxiliary sentinel gate can gate a pointwise hyperbolic tangent (abbreviated tanh) of the memory cell.
[00308] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00309] In yet another implementation, the technology disclosed presents a method of extending a long short-term memory network (abbreviated LSTM). The method can be a computer-implemented method. The method can be a neural network-based method.
[00310] The method includes extending a long short-term memory network (abbreviated LSTM) to include an auxiliary sentinel gate. The auxiliary sentinel gate applies on a memory cell of the LSTM and modulates use of auxiliary information during next prediction. The auxiliary information is accumulated over time in the memory cell at least from the processing of auxiliary input combined with current input and previous hidden state.
[00311] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this method implementation. As indicated above, all the other features are not repeated here and should be considered repeated by reference.
[00312] The auxiliary sentinel gate can produce a sentinel state at each timestep as an indicator of the modulated auxiliary information, conditioned on an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep.
[00313] The auxiliary sentinel gate can gate a pointwise hyperbolic tangent (abbreviated tanh) of the memory cell.
[00314] Other implementations may include a non-transitory computer readable storage medium (CRM) storing instructions executable by a processor to perform the method described above. Yet another implementation may include a system including memory and one or more
processors operable to execute instructions, stored in the memory, to perform the method described above.
[00315] In one further implementation, the technology disclosed presents a recurrent neural network system (abbreviated RNN) for machine generation of a natural language caption for an image. The RNN runs on numerous parallel processors. The RNN can be a computer- implemented system.
[00316] FIG.9 shows one implementation of modules of a recurrent neural network
(abbreviated RNN) that implements the Sn-LSTM of FIG. 8.
[00317] The RNN comprises an input provider (FIG. 9) for providing a plurality of inputs to a sentinel long short-term memory network (abbreviated Sn-LSTM) over successive timesteps. The inputs include at least an input for a current timestep, a hidden state from a previous timestep, and an auxiliary input for the current timestep. The input provider can run on at least one of the numerous parallel processors.
[00318] The RNN comprises a gate processor (FIG. 9) for processing the inputs through each gate in a plurality of gates of the Sn-LSTM. The gates include at least an input gate (FIGs. 8 and 9), a forget gate (FIGs. 8 and 9), an output gate (FIGs. 8 and 9), and an auxiliary sentinel gate (FIGs. 8 and 9). The gate processor can run on at least one of the numerous parallel processors. Each of the gates can run on at least one of the numerous parallel processors.
[00319] The RNN comprises a memory cell (FIG. 9) of the Sn-LSTM for storing auxiliary information accumulated over time from processing of the inputs by the gate processor. The memory cell can be maintained and persisted in a database (FIG 9).
[00320] The RNN comprises a memory cell updater (FIG. 9) for updating the memory cell with gate outputs produced by the input gate (FIGs. 8 and 9), the forget gate (FIGs. 8 and 9), and the output gate (FIGs. 8 and 9). The memory cell updater can run on at least one of the numerous parallel processors.
[00321] The RNN comprises the auxiliary sentinel gate (FIGs. 8 and 9) for modulating the stored auxiliary information from the memory cell to produce a sentinel state at each timestep. The modulation is conditioned on the input for the current timestep, the hidden state from the previous timestep, and the auxiliary input for the current timestep.
[00322] The RNN comprises an emitter (FIG. 5) for generating the natural language caption for the image based on the sentinel states produced over successive timesteps by the auxiliary sentinel gate. The emitter can run on at least one of the numerous parallel processors.
[00323] Each of the features discussed in this particular implementation section for other system and method implementations apply equally to this system implementation. As indicated
above, all the other features are not repeated here and should be considered repeated by reference.
[00324] The auxiliary sentinel gate can further comprise an auxiliary nonlinearity layer (FIG. 9) for squashing results of processing the inputs within a predetermined range. The auxiliary nonlinearity layer can run on at least one of the numerous parallel processors.
[00325] The Sn-LSTM can further comprise a memory nonlinearity layer (FIG. 9) for applying a nonlinearity to contents of the memory cell. The memory nonlinearity layer can run on at least one of the numerous parallel processors.
[00326] The Sn-LSTM can further comprise a sentinel state producer (FIG. 9) for combining the squashed results from the auxiliary sentinel gate with the nonlinearized contents of the memory cell to produce the sentinel state. The sentinel state producer can run on at least one of the numerous parallel processors.
[00327] The input provider (FIG. 9) can provide the auxiliary input that is visual input comprising image data and the input is a text embedding of a most recently emitted word and/or character. The input provider (FIG. 9) can provide the auxiliary input that is a text encoding from another long short-term memory network (abbreviated LSTM) of an input document and the input is a text embedding of a most recently emitted word and/or character. The input provider (FIG. 9) can provide the auxiliary input that is a hidden state from another LSTM that encodes sequential data and the input is a text embedding of a most recently emitted word and/or character. The input provider (FIG. 9) can provide the auxiliary input that is a prediction derived from a hidden state from another LSTM that encodes sequential data and the input is a text embedding of a most recently emitted word and/or character. The input provider (FIG. 9) can provide the auxiliary input that is an output of a convolutional neural network (abbreviated CNN). The input provider (FIG. 9) can provide the auxiliary input that is an output of an attention network.
[00328] The input provider (FIG. 9) can further provide multiple auxiliary inputs to the Sn-LSTM at a timestep, with at least one auxiliary input further comprising concatenated features.
[00329] The Sn-LSTM can further comprise an activation gate (FIG. 9).
[00330] Other implementations may include a non-transitory computer readable storage medium storing instructions executable by a processor to perform actions of the system described above.
[00331] This application uses the phrases "visual sentinel", "sentinel state", "visual sentinel vector", and "sentinel state vector" interchangeably. A visual sentinel vector can represent, identify, and/or embody a visual sentinel. A sentinel state vector can represent, identify, and/or embody a sentinel state. This application uses the phrases "sentinel gate" and "auxiliary sentinel gate" interchangeably.
[00332] This application uses the phrases "hidden state", "hidden state vector", and "hidden state information" interchangeably. A hidden state vector can represent, identify, and/or embody a hidden state. A hidden state vector can represent, identify, and/or embody hidden state information.
[00333] This application uses the word "input", the phrase "current input", and the phrase "input vector" interchangeably. An input vector can represent, identify, and/or embody an input. An input vector can represent, identify, and/or embody a current input.
[00334] This application uses the words "time" and "timestep" interchangeably.
[00335] This application uses the phrases "memory cell state", "memory cell vector", and "memory cell state vector" interchangeably. A memory cell vector can represent, identify, and/or embody a memory cell state. A memory cell state vector can represent, identify, and/or embody a memory cell state.
[00336] This application uses the phrases "image features", "spatial image features", and "image feature vectors" interchangeably. An image feature vector can represent, identify, and/or embody an image feature. An image feature vector can represent, identify, and/or embody a spatial image feature.
[00337] This application uses the phrases "spatial attention map", "image attention map", and "attention map" interchangeably.
[00338] This application uses the phrases "global image feature" and "global image feature vector" interchangeably. A global image feature vector can represent, identify, and/or embody a global image feature.
[00339] This application uses the phrases "word embedding" and "word embedding vector" interchangeably. A word embedding vector can represent, identify, and/or embody a word embedding.
[00340] This application uses the phrases "image context", "image context vector", and "context vector" interchangeably. An image context vector can represent, identify, and/or embody an image context. A context vector can represent, identify, and/or embody an image context.
[00341] This application uses the phrases "adaptive image context", "adaptive image context vector", and "adaptive context vector" interchangeably. An adaptive image context vector can represent, identify, and/or embody an adaptive image context. An adaptive context vector can represent, identify, and/or embody an adaptive image context.
[00342] This application uses the phrases "gate probability mass" and "sentinel gate mass" interchangeably.
Results
[00343] FIG. 17 illustrates some example captions and spatial attention maps for specific words in the captions. It can be seen that our model learns alignments that correspond with human intuition. Even in the examples in which incorrect captions were generated, the model looked at reasonable regions in the image.
[00344] FIG. 18 shows visualization of some example image captions, word-wise visual grounding probabilities, and corresponding image/spatial attention maps generated by our model. The model successfully learns how heavily to attend to the image and adapts the attention accordingly. For example, for non-visual words such as "of" and "a" the model attends less to the images. For visual words like "red", "rose", "doughnuts", "woman", and "snowboard" our model assigns high visual grounding probabilities (over 0.9). Note that the same word can be assigned different visual grounding probabilities when generated in different contexts. For example, the word "a" typically has a high visual grounding probability at the beginning of a sentence, since without any language context, the model needs the visual information to determine plurality (or not). On the other hand, the visual grounding probability of "a" in the phrase "on a table" is much lower, since it is unlikely for something to be on more than one table.
[00345] FIG. 19 presents similar results as shown in FIG. 18 on another set of example image captions, word-wise visual grounding probabilities, and corresponding image/spatial attention maps generated using the technology disclosed.
[00346] FIGs. 20 and 21 are example rank-probability plots that illustrate performance of our model on the COCO (common objects in context) and Flickr30k datasets respectively. It can be seen that our model attends to the image more when generating object words like "dishes", "people", "cat", "boat"; attribute words like "giant", "metal", "yellow", and number words like "three". When the word is non-visual, our model learns to not attend to the image, such as for "the", "of", "to" etc. For more abstract words such as "crossing", "during" etc., our model attends less than for the visual words and more than for the non-visual words. The model does not rely on any syntactic features or external knowledge. It discovers these trends automatically through learning.
[00347] FIG. 22 is an example graph that shows localization accuracy over the generated caption for top 45 most frequent COCO object categories. The blue colored bars show localization accuracy of the spatial attention model and the red colored bars show localization
accuracy of the adaptive attention model. FIG. 22 shows that both models perform well on categories such as "cat", "bed", "bus", and "truck". On smaller objects, such as "sink", "surfboard", "clock", and "frisbee", both models do not perform well. This is because the spatial attention maps are directly rescaled from a 7x7 feature map, which loses considerable spatial information and detail.
[00348] FIG. 23 is a table that shows performance of the technology disclosed on the Flickr30k and COCO datasets based on various natural language processing metrics, including BLEU (bilingual evaluation understudy), METEOR (metric for evaluation of translation with explicit ordering), CIDEr (consensus-based image description evaluation), ROUGE-L (recall-oriented understudy for gisting evaluation-longest common subsequence), and SPICE (semantic propositional image caption evaluation). The table in FIG. 23 shows that our adaptive attention model significantly outperforms our spatial attention model. The CIDEr score of our adaptive attention model is 0.531 versus 0.493 for the spatial attention model on the Flickr30k dataset. Similarly, the CIDEr scores of the adaptive attention model and the spatial attention model on the COCO dataset are 1.085 and 1.029 respectively.
[00349] We compare our model to state-of-the-art systems on the COCO evaluation server as shown in a leaderboard of the published state-of-the-art in FIG. 24. It can be seen from the leaderboard that our approach achieves the best performance on all metrics among the published systems, hence setting a new state-of-the-art by a significant margin.
Computer System
[00350] FIG. 25 is a simplified block diagram of a computer system that can be used to implement the technology disclosed. Computer system includes at least one central processing unit (CPU) that communicates with a number of peripheral devices via bus subsystem. These peripheral devices can include a storage subsystem including, for example, memory devices and a file storage subsystem, user interface input devices, user interface output devices, and a network interface subsystem. The input and output devices allow user interaction with computer system. Network interface subsystem provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems.
[00351] In one implementation, at least the spatial attention model, the controller, the localizer (FIG.25), the trainer (which comprises the preventer), the adaptive attention model, and the sentinel LSTM (Sn-LSTM) are communicably linked to the storage subsystem and to the user interface input devices.
[00352] User interface input devices can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the
display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system.
[00353] User interface output devices can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem can include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem can also provide a non-visual display such as audio output devices. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system to the user or to another machine or computer system.
[00354] Storage subsystem stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by deep learning processors.
[00355] Deep learning processors can be graphics processing units (GPUs) or field-programmable gate arrays (FPGAs). Deep learning processors can be hosted by a deep learning cloud platform such as Google Cloud Platform™, Xilinx™, and Cirrascale™. Examples of deep learning processors include Google's Tensor Processing Unit (TPU)™, rackmount solutions like GX4 Rackmount Series™, GX8 Rackmount Series™, NVIDIA DGX-1™, Microsoft's Stratix V FPGA™, Graphcore's Intelligent Processor Unit (IPU)™, Qualcomm's Zeroth Platform™ with Snapdragon processors™, NVIDIA's Volta™, NVIDIA's DRIVE PX™, NVIDIA's JETSON TX1/TX2 MODULE™, Intel's Nirvana™, Movidius VPU™, Fujitsu DPI™, ARM's
DynamicIQ™, IBM TrueNorth™, and others.
[00356] Memory subsystem used in the storage subsystem can include a number of memories including a main random access memory (RAM) for storage of instructions and data during program execution and a read only memory (ROM) in which fixed instructions are stored. A file storage subsystem can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations can be stored by file storage subsystem in the storage subsystem, or in other machines accessible by the processor.
[00357] Bus subsystem provides a mechanism for letting the various components and subsystems of computer system communicate with each other as intended. Although bus subsystem is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses.
[00358] Computer system itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of the computer system depicted in FIG. 25 is intended only as a specific example for purposes of illustrating the preferred embodiments of the present invention. Many other configurations of computer system are possible having more or less components than the computer system depicted in FIG. 25.
[00359] The preceding description is presented to enable the making and use of the technology disclosed. Various modifications to the disclosed implementations will be apparent, and the general principles defined herein may be applied to other implementations and applications without departing from the spirit and scope of the technology disclosed. Thus, the technology disclosed is not intended to be limited to the implementations shown, but is to be accorded the widest scope consistent with the principles and features disclosed herein. The scope of the technology disclosed is defined by the appended claims.
Abstract
Attention-based neural encoder-decoder frameworks have been widely adopted for image captioning. Most methods force visual attention to be active for every generated word. However, the decoder likely requires little to no visual information from the image to predict non-visual words such as "the" and "of". Other words that may seem visual can often be predicted reliably just from the language model, e.g., "phone" following "talking on a cell". In this paper, we propose a novel adaptive attention model with a visual sentinel. At each time step, our model decides whether to attend to the image (and if so, to which regions) or to the visual sentinel. The model decides whether to attend to the image and where, in order to extract meaningful information for sequential word generation. We test our method on the COCO image captioning 2015 challenge dataset and Flickr30k. Our approach sets the new state-of-the-art by a significant margin.

[Figure 1: Our model learns an adaptive attention model that automatically determines when to look (sentinel gate) and where to look (spatial attention) for word generation, which are explained in sections 2.2, 2.3 & 5.4.]

1. Introduction

Automatically generating captions for images has emerged as a prominent interdisciplinary research problem in both academia and industry [8, 11, 23, 27, 30]. It can aid visually impaired users, and make it easy for users to organize and navigate through large amounts of typically unstructured visual data. In order to generate high quality captions, the model needs to incorporate fine-grained visual clues from the image. Recently, visual attention-based neural encoder-decoder models [30, 11, 32] have been explored, where the attention mechanism typically produces a spatial map highlighting image regions relevant to each generated word.

Most attention models for image captioning and visual question answering attend to the image at every time step, irrespective of which word is going to be emitted next [31, 29, 17]. However, not all words in the caption have corresponding visual signals. Consider the example in Fig. 1 that shows an image and its generated caption "A white bird perched on top of a red stop sign". The words "a" and "of" do not have corresponding canonical visual signals. Moreover, language correlations make the visual signal unnecessary when generating words like "on" and "top" following "perched", and "sign" following "a red stop". In fact, gradients from non-visual words could mislead and diminish the overall effectiveness of the visual signal in guiding the caption generation process.

In this paper, we introduce an adaptive attention encoder-decoder framework which can automatically decide when to rely on visual signals and when to just rely on the language model. Of course, when relying on visual signals, the model also decides where - which image region - it should attend to.

[Footnote: Jiasen worked on this project during his internship at Salesforce Research. Equal contribution.]
5.4. Adaptive Attention Analysis

In this section, we analyze the adaptive attention generated by our methods. We visualize the sentinel gate to understand "when" our model attends to the image as a caption is generated. We also perform a weakly-supervised localization on COCO categories using the generated attention maps. This can help us to get an intuition of "where" our model attends, and whether it attends to the correct regions.

5.4.1 Learning "when" to attend

In order to assess whether our model learns to separate visual words in captions from non-visual words, we visualize the visual grounding probability. For each word in the vocabulary, we average the visual grounding probability over all the generated captions containing that word. Fig. 6 shows the rank-probability plot on COCO and Flickr30k. We find that our model attends to the image more when generating object words like "dishes", "people", "cat", "boat"; attribute words like "giant", "metal", "yellow"; and number words like "three". When the word is non-visual, our model learns to not attend to the image, such as for "the", "of", "to" etc. For more abstract notions such as "crossing", "during" etc., our model learns to attend less than for the visual words and more than for the non-visual words. Note that our model does not rely on any syntactic features or external knowledge. It discovers these trends automatically.

probabilities (over 0.9). Note that the same word may be assigned different visual grounding probabilities when generated in different contexts. For example, the word "a" usually has a high visual grounding probability at the beginning of a sentence, since without any language context the model needs the visual information to determine plurality (or not). On the other hand, the visual grounding probability of "a" in the phrase "on a table" is much lower, since it is unlikely for something to be on more than one table.

Our model cannot distinguish words that are truly non-visual from words that are technically visual but have a high correlation with other words, for which it chooses not to rely on the visual signal. For example, words such as "phone" get a relatively low visual grounding probability in our model. This is because it has a large language correlation with the word "cell". We can also observe some interesting trends in what the model learns on different datasets. For example, when generating "UNK" words, our model learns to attend less to the image on COCO, but more on Flickr30k. The same word in different forms can also have different visual grounding probabilities. For example, "crossing", "cross" and "crossed" are cognate words with similar meanings, yet the visual grounding probabilities learnt by our model vary widely: the model attends to the image most when generating "crossing", less for "cross", and least for "crossed".

5.4.2 Learning "where" to attend

We now assess whether our model attends to the correct spatial image regions. We perform weakly-supervised localization [22, 36] using the generated attention maps. To the best of our knowledge, no previous work has used weakly-supervised localization to evaluate spatial attention for image captioning. Given the word w_t and attention map a_t, we first segment the regions of the image with attention values larger than th (after the map is normalized so that its largest value is 1), where th is a per-class threshold estimated using the COCO validation split. Then we take the bounding box that covers the largest connected component in the segmentation map. We use the intersection over union (IOU) of the generated and ground truth bounding boxes as the localization accuracy.

For each of the COCO object categories, we do a word-by-word match to align the generated words with the ground truth bounding boxes. For object categories with multiple words, such as "teddy bear", we take the maximum IOU score over the multiple words as the localization accuracy. We are able to align 5981 and 5924 regions for captions generated by the spatial and adaptive attention models, respectively. The average localization accuracy is 0.362 for our spatial attention model and 0.373 for our adaptive attention model. This demonstrates that, as a byproduct, knowing when to attend also helps where to attend.

Figure 7: Localization accuracy over generated captions for the top 45 most frequent COCO object categories. "Spatial Attention" and "Adaptive Attention" are our proposed spatial attention model and adaptive attention model, respectively. The COCO categories are ranked based on the alignment results of our adaptive attention model, which cover 93.8% and 94.0% of the total matched regions for spatial attention and adaptive attention, respectively.

Fig. 7 shows the localization accuracy over the generated captions for the top 45 most frequent COCO object categories. We can see that our spatial attention and adaptive attention models share similar trends. Both models perform well on categories such as "cat", "bed", "bus" and "truck". On smaller objects, such as "sink", "surfboard", "clock" and "frisbee", both models perform relatively poorly. This is because our spatial attention maps are directly rescaled from a coarse 7 x 7 feature map, which loses a lot of spatial resolution and detail. Using a larger feature map may improve the performance.
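The localization protocol described above reduces to a few array operations. The following sketch is a minimal NumPy/SciPy rendering of those steps for a single word, assuming the attention map has already been upsampled to image resolution and a single ground-truth box is given; the default threshold and all names are illustrative assumptions, not the exact implementation used in the experiments.

```python
import numpy as np
from scipy import ndimage

def localization_iou(attn_map, gt_box, th=0.5):
    """Weakly-supervised localization score for one word (illustrative).

    attn_map: 2-D attention map already upsampled to image size.
    gt_box:   ground-truth box as (x0, y0, x1, y1) in pixels.
    th:       per-class threshold (illustrative default).
    """
    # Normalize so the largest attention value is 1, then threshold.
    mask = attn_map / (attn_map.max() + 1e-8) > th

    # Keep only the largest connected component of the thresholded map.
    labels, num = ndimage.label(mask)
    if num == 0:
        return 0.0
    sizes = ndimage.sum(mask, labels, range(1, num + 1))
    largest = labels == (np.argmax(sizes) + 1)

    # Tight bounding box around that component.
    ys, xs = np.where(largest)
    px0, py0, px1, py1 = xs.min(), ys.min(), xs.max(), ys.max()

    # Intersection over union with the ground-truth box.
    gx0, gy0, gx1, gy1 = gt_box
    ix0, iy0 = max(px0, gx0), max(py0, gy0)
    ix1, iy1 = min(px1, gx1), min(py1, gy1)
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = ((px1 - px0) * (py1 - py0)
             + (gx1 - gx0) * (gy1 - gy0) - inter)
    return inter / union if union > 0 else 0.0
```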
Claims
1. An image-to-language captioning system, running on numerous parallel processors, for machine generation of a natural language caption for an input image, the system comprising: a convolutional neural network (abbreviated CNN) encoder for processing the input image through one or more convolutional layers to generate image features by image regions that represent the image;
a sentinel long short-term memory network (abbreviated Sn-LSTM) decoder for processing a previously emitted caption word combined with the image features to produce a current hidden state of the Sn-LSTM decoder at each decoder timestep;
an adaptive attender further comprising
a spatial attender for spatially attending to the image features at each decoder timestep to produce an image context conditioned on the current hidden state of the Sn-LSTM decoder,
an extractor for extracting, from the Sn-LSTM decoder, a visual sentinel at each decoder timestep, wherein the visual sentinel includes visual context determined from previously processed image features and textual context determined from previously emitted caption words, and
a mixer for mixing the image context and the visual sentinel to produce an adaptive context at each decoder timestep, with the mixing governed by a sentinel gate mass determined from the visual sentinel and the current hidden state of the Sn-LSTM decoder; and
an emitter for generating the natural language caption for the input image based on the adaptive contexts produced over successive decoder timesteps by the mixer.
2. The system of claim 1, wherein the Sn-LSTM decoder further comprises an auxiliary sentinel gate for producing the visual sentinel at each decoder timestep.
3. The system of any of claims 1-2, wherein the adaptive attender further comprises a sentinel gate mass softmax for exponentially normalizing attention values of the image features and a gate value of the visual sentinel to produce an adaptive sequence of attention probability masses and the sentinel gate mass at each decoder timestep.
4. The system of any of claims 1-3, wherein the adaptive attender further comprises a sentinel gate mass determiner for producing at each decoder timestep the sentinel gate mass as a result of interaction between the current decoder hidden state and the visual sentinel.
5. The system of any of claims 1-4, wherein the spatial attender further comprises an adaptive comparator for producing at each decoder timestep the attention values as a result of interaction between the current decoder hidden state and the image features.
6. The system of any of claims 1-5, wherein the spatial attender further comprises an adaptive attender softmax for exponentially normalizing the attention values for the image features to produce the attention probability masses at each decoder timestep.
7. The system of any of claims 1-6, wherein the spatial attender further comprises an adaptive convex combination accumulator for accumulating, at each decoder timestep, the image context as a convex combination of the image features scaled by attention probability masses determined using the current decoder hidden state.
8. The system of any of claims 1-7, further comprising a trainer which further comprises a preventer for preventing backpropagation of gradients from the Sn-LSTM decoder to the CNN encoder when a next caption word is a non-visual word or linguistically correlated to a previously emitted caption word.
9. The system of any of claims 1-8, wherein the adaptive attender further comprises the sentinel gate mass for enhancing attention directed to the image context when a next caption word is a visual word.
10. The system of any of claims 1-9, wherein the adaptive attender further comprises the sentinel gate mass for enhancing attention directed to the visual sentinel when a next caption word is a non-visual word or linguistically correlated to the previously emitted caption word.
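Claims 1-10, read together, describe a single decoder timestep: attention values over image regions and a gate value for the visual sentinel are computed from the current hidden state, jointly softmax-normalized, and used to mix the image context with the sentinel. The NumPy sketch below is one possible rendering of that timestep under assumed dimensions; the weight matrices, their shapes, and the tanh-based comparator follow the adaptive-attention formulation, but all names are illustrative and the Sn-LSTM update itself is taken as given.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_attention_step(V, h_t, s_t, W_v, W_g, W_s, w_h):
    """One adaptive-attention decoder timestep (illustrative shapes).

    V:   image features, one row per image region, shape (k, d).
    h_t: current hidden state of the decoder, shape (d,).
    s_t: visual sentinel extracted from the decoder, shape (d,).
    W_v, W_g, W_s: projection matrices of shape (d, a); w_h: shape (a,).
    Returns the adaptive context vector and the sentinel gate mass.
    """
    # Attention values for each image region, conditioned on h_t
    # (adaptive comparator of claim 5).
    z = np.tanh(V @ W_v + h_t @ W_g) @ w_h            # shape (k,)

    # Gate value for the visual sentinel from s_t and h_t (claim 4).
    z_s = np.tanh(s_t @ W_s + h_t @ W_g) @ w_h        # scalar

    # Exponentially normalize the concatenated values (claims 3 and 6):
    # the first k masses attend to image regions, the last one is the
    # sentinel gate mass.
    alpha_hat = softmax(np.concatenate([z, [z_s]]))
    alpha, beta = alpha_hat[:-1], alpha_hat[-1]

    # Image context as a convex combination of image features (claim 7).
    c_t = alpha @ V                                    # shape (d,)

    # Mixer of claim 1: adaptive context governed by the gate mass.
    c_hat = beta * s_t + (1.0 - beta) * c_t
    return c_hat, beta
```

In this reading, claims 9 and 10 correspond to small and large values of the returned gate mass: a large mass shifts the adaptive context toward the visual sentinel (claim 10), while a small one shifts it toward the attended image features (claim 9).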
11. An image-to-language captioning system, running on numerous parallel processors, for machine generation of a natural language caption for an image, the system comprising: an image encoder for processing an input image through a convolutional neural network (abbreviated CNN) to generate an image representation;
a language decoder for processing a previously emitted caption word combined with the image representation through a recurrent neural network (abbreviated RNN) to emit a sequence of caption words; and
an adaptive attender
for enhancing attention directed to the image representation when a next caption word is a visual word, and
for enhancing attention directed to memory contents of the language decoder when the next caption word is a non-visual word or linguistically correlated to the previously emitted caption word.
12. An image-to-language captioning system, running on numerous parallel processors, for machine generation of a natural language caption for an image, the system comprising: an image encoder for processing an input image through a convolutional neural network (abbreviated CNN) to generate an image representation;
a language decoder for processing a previously emitted caption word combined with the image representation through a recurrent neural network (abbreviated RNN) to emit a sequence of caption words; and
a sentinel gate mass for controlling accumulation of the image representation and memory contents of the language decoder for next caption word emittance, wherein the sentinel gate mass is determined from a visual sentinel of the language decoder and a current hidden state of the language decoder.
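One way to write the control recited in claims 11 and 12 is as a gated convex mixture, with the sentinel gate mass produced by the same softmax that normalizes the spatial attention values. The notation below follows the adaptive-attention sketch given after claim 10 and is illustrative rather than claim language:

```latex
\hat{\alpha}_t = \operatorname{softmax}\!\left( \left[ z_t ;\; w_h^{\top} \tanh\!\left( W_s s_t + W_g h_t \right) \right] \right), \qquad
\beta_t = \hat{\alpha}_t[k+1], \qquad
\hat{c}_t = \beta_t \, s_t + (1 - \beta_t) \, c_t
```

where z_t holds the k attention values over image regions, s_t is the visual sentinel, h_t the current decoder hidden state, c_t the attended image representation, and beta_t (the last entry of the normalized vector) the sentinel gate mass.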
13. An image-to-language captioning system, running on numerous parallel processors, for machine generation of a natural language caption for an image, the system comprising: an encoder for processing an input through at least one neural network to generate an encoded representation;
a decoder for processing a previously emitted output combined with the encoded representation through at least one neural network to emit a sequence of outputs; and
an adaptive attender for using a sentinel gate mass to mix the encoded representation and memory contents of the decoder for emitting a next output, with the sentinel gate mass determined from the memory contents of the decoder and a current hidden state of the decoder.
14. The system of claim 13, wherein the task is text summarization, further configured to comprise: a first recurrent neural network (abbreviated RNN) as the encoder that processes an input document to generate a document encoding; and
a second RNN as the decoder that uses the document encoding to emit a sequence of summary words.
15. The system of any of claims 13-14, wherein the task is question answering, further configured to comprise: a first RNN as the encoder that processes an input question to generate a question encoding; and
a second RNN as the decoder that uses the question encoding to emit a sequence of answer words.
16. The system of any of claims 13-15, wherein the task is machine translation, further configured to comprise: a first RNN as the encoder that processes a source language sequence to generate a source encoding; and
a second RNN as the decoder that uses the source encoding to emit a target language sequence of translated words.
17. The system of any of claims 13-16, wherein the task is video captioning, further configured to comprise: a combination of a convolutional neural network (abbreviated CNN) and a first RNN as the encoder that processes video frames to generate a video encoding; and
a second RNN as the decoder that uses the video encoding to emit a sequence of caption words.
18. The system of any of claims 13-17, wherein the task is image captioning, further configured to comprise: a CNN as the encoder that processes an input image to generate an image encoding; and an RNN as the decoder that uses the image encoding to emit a sequence of caption words.
19. The system of any of claims 13-18, further configured to: determine an alternative representation of the input from the encoded representation; and use the alternative representation instead of the encoded representation for processing by the decoder and mixing by the adaptive attender.
20. The system of any of claims 13-19, wherein the alternative representation is a weighted summary of the encoded representation conditioned on the current hidden state of the decoder.
21. The system of any of claims 13-20, wherein the alternative representation is an averaged
summary of the encoded representation.
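Claims 19-21 let the decoder and adaptive attender consume an alternative representation of the encoded input: either a weighted summary conditioned on the current decoder hidden state or a plain average. A minimal sketch of both options follows, with illustrative shapes and weight names and a generic attention-style weighting assumed for the conditioned case.

```python
import numpy as np

def averaged_summary(E):
    """Averaged summary of encoded representation E, shape (n, d) -> (d,)."""
    return E.mean(axis=0)

def weighted_summary(E, h_t, W_e, W_h, w):
    """Weighted summary of E conditioned on decoder hidden state h_t.

    E: (n, d) encoded vectors; h_t: (d,); W_e, W_h: (d, a); w: (a,).
    Illustrative attention-style weighting, not claim language.
    """
    scores = np.tanh(E @ W_e + h_t @ W_h) @ w      # shape (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ E                              # shape (d,)
```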
22. A method of automatic image captioning, the method including: mixing results of an image encoder and a language decoder to emit a sequence of caption words for an input image, with the mixing governed by a gate probability mass determined from a visual sentinel vector of the language decoder and a current hidden state vector of the language decoder;
determining the results of the image encoder by processing the image through the image encoder to produce image feature vectors for regions of the image and computing a global image feature vector from the image feature vectors;
determining the results of the language decoder by processing words through the language decoder, including
beginning at an initial timestep with a start-of-caption token and the global image feature vector,
continuing in successive timesteps using a most recently emitted caption word and the global image feature vector as input to the language decoder, and at each timestep, generating a visual sentinel vector that combines the most recently emitted caption word, the global image feature vector, a previous hidden state vector of the language decoder, and memory contents of the language decoder;
at each timestep, using at least a current hidden state vector of the language decoder to determine unnormalized attention values for the image feature vectors and an unnormalized gate value for the visual sentinel vector;
concatenating the unnormalized attention values and the unnormalized gate value and exponentially normalizing the concatenated attention and gate values to produce a vector of attention probability masses and the gate probability mass;
applying the attention probability masses to the image feature vectors to accumulate in an image context vector a weighted sum of the image feature vectors;
determining an adaptive context vector as a mix of the image context vector and the visual sentinel vector according to the gate probability mass;
submitting the adaptive context vector and the current hidden state of the language decoder to a feed-forward neural network and causing the feed-forward neural network to emit a next
caption word; and
repeating the processing of words through the language decoder, the using, the concatenating, the applying, the determining, and the submitting until the next caption word emitted is an end-of-caption token.
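Claim 22 recites the complete decoding loop. The sketch below strings those steps together: a start-of-caption token and a global image feature seed the language decoder, each timestep yields a hidden state and a visual sentinel, an adaptive context is formed from the image features and the sentinel, and a feed-forward emitter produces the next word until an end-of-caption token appears. The decoder cell, embedding, emitter, and attention are passed in as callables because their internals are not fixed by the claim; all interfaces here are assumptions for illustration.

```python
import numpy as np

def caption_image(image_feats, decoder_step, attend_and_mix, emit_word,
                  embed, start_id, end_id, max_len=20):
    """Decoding loop of the method claim (illustrative interfaces).

    image_feats:    (k, d) image feature vectors for k regions.
    decoder_step:   callable(x_t, state) -> (h_t, s_t, state); the language
                    decoder returning hidden state and visual sentinel.
    attend_and_mix: callable(V, h_t, s_t) -> adaptive context vector.
    emit_word:      callable(c_hat, h_t) -> word id (feed-forward emitter).
    embed:          callable(word_id) -> word embedding vector.
    """
    v_global = image_feats.mean(axis=0)        # global image feature vector
    word, state, caption = start_id, None, []

    for _ in range(max_len):
        # Input is the most recently emitted word plus the global image feature.
        x_t = np.concatenate([embed(word), v_global])
        h_t, s_t, state = decoder_step(x_t, state)

        # Adaptive context from image features and the visual sentinel.
        c_hat = attend_and_mix(image_feats, h_t, s_t)

        # Feed-forward emitter produces the next caption word.
        word = emit_word(c_hat, h_t)
        if word == end_id:
            break
        caption.append(word)
    return caption
```

The attend_and_mix hook can be instantiated with the adaptive-attention sketch given after claim 10, discarding its returned gate mass.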
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201662424353P | 2016-11-18 | 2016-11-18 | |
US62/424,353 | 2016-11-18 | ||
US15/817,161 | 2017-11-17 | ||
US15/817,153 | 2017-11-17 | ||
US15/817,161 US10565305B2 (en) | 2016-11-18 | 2017-11-17 | Adaptive attention model for image captioning |
US15/817,153 US10558750B2 (en) | 2016-11-18 | 2017-11-17 | Spatial attention model for image captioning |
US15/817,165 US10565306B2 (en) | 2016-11-18 | 2017-11-18 | Sentinel gate for modulating auxiliary information in a long short-term memory (LSTM) neural network |
US15/817,165 | 2017-11-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018094295A1 true WO2018094295A1 (en) | 2018-05-24 |
Family
ID=60629823
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2017/062433 WO2018094294A1 (en) | 2016-11-18 | 2017-11-18 | Spatial attention model for image captioning |
PCT/US2017/062435 WO2018094296A1 (en) | 2016-11-18 | 2017-11-18 | Sentinel long short-term memory |
PCT/US2017/062434 WO2018094295A1 (en) | 2016-11-18 | 2017-11-18 | Adaptive attention model for image captioning |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2017/062433 WO2018094294A1 (en) | 2016-11-18 | 2017-11-18 | Spatial attention model for image captioning |
PCT/US2017/062435 WO2018094296A1 (en) | 2016-11-18 | 2017-11-18 | Sentinel long short-term memory |
Country Status (1)
Country | Link |
---|---|
WO (3) | WO2018094294A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109086779A (en) * | 2018-07-28 | 2018-12-25 | 天津大学 | A kind of attention target identification method based on convolutional neural networks |
KR20190039817A (en) * | 2016-09-26 | 2019-04-15 | 구글 엘엘씨 | Neural Machine Translation System |
CN112529857A (en) * | 2020-12-03 | 2021-03-19 | 重庆邮电大学 | Ultrasonic image diagnosis report generation method based on target detection and strategy gradient |
US20210383194A1 (en) * | 2020-06-08 | 2021-12-09 | International Business Machines Corporation | Using negative evidence to predict event datasets |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10558750B2 (en) | 2016-11-18 | 2020-02-11 | Salesforce.Com, Inc. | Spatial attention model for image captioning |
CN108898639A (en) * | 2018-05-30 | 2018-11-27 | 湖北工业大学 | A kind of Image Description Methods and system |
CN109034373B (en) * | 2018-07-02 | 2021-12-21 | 鼎视智慧(北京)科技有限公司 | Parallel processor and processing method of convolutional neural network |
JP6695947B2 (en) * | 2018-09-21 | 2020-05-20 | ソニーセミコンダクタソリューションズ株式会社 | Solid-state imaging system, image processing method and program |
CN109376246B (en) * | 2018-11-07 | 2022-07-08 | 中山大学 | Sentence classification method based on convolutional neural network and local attention mechanism |
CN111753822B (en) | 2019-03-29 | 2024-05-24 | 北京市商汤科技开发有限公司 | Text recognition method and device, electronic equipment and storage medium |
CN110175979B (en) * | 2019-04-08 | 2021-07-27 | 杭州电子科技大学 | Lung nodule classification method based on cooperative attention mechanism |
CN110163299B (en) * | 2019-05-31 | 2022-09-06 | 合肥工业大学 | Visual question-answering method based on bottom-up attention mechanism and memory network |
CN112307769B (en) * | 2019-07-29 | 2024-03-15 | 武汉Tcl集团工业研究院有限公司 | Natural language model generation method and computer equipment |
US20220046206A1 (en) * | 2020-08-04 | 2022-02-10 | Vingroup Joint Stock Company | Image caption apparatus |
CN112052906B (en) * | 2020-09-14 | 2024-02-02 | 南京大学 | Image description optimization method based on pointer network |
CN112580777A (en) * | 2020-11-11 | 2021-03-30 | 暨南大学 | Attention mechanism-based deep neural network plug-in and image identification method |
CN112528989B (en) * | 2020-12-01 | 2022-10-18 | 重庆邮电大学 | Description generation method for semantic fine granularity of image |
CN112927255B (en) * | 2021-02-22 | 2022-06-21 | 武汉科技大学 | Three-dimensional liver image semantic segmentation method based on context attention strategy |
CN114782702A (en) * | 2022-03-23 | 2022-07-22 | 成都瑞数猛兽科技有限公司 | Image semantic understanding algorithm based on three-layer LSTM (least Square TM) push network |
CN115658865A (en) * | 2022-10-26 | 2023-01-31 | 茅台学院 | Picture question-answering method based on attention pre-training |
CN115544259B (en) * | 2022-11-29 | 2023-02-17 | 城云科技(中国)有限公司 | Long text classification preprocessing model and construction method, device and application thereof |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016077797A1 (en) * | 2014-11-14 | 2016-05-19 | Google Inc. | Generating natural language descriptions of images |
- 2017
  - 2017-11-18 WO PCT/US2017/062433 patent/WO2018094294A1/en active Search and Examination
  - 2017-11-18 WO PCT/US2017/062435 patent/WO2018094296A1/en active Application Filing
  - 2017-11-18 WO PCT/US2017/062434 patent/WO2018094295A1/en active Application Filing
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016077797A1 (en) * | 2014-11-14 | 2016-05-19 | Google Inc. | Generating natural language descriptions of images |
Non-Patent Citations (5)
Title |
---|
KELVIN XU ET AL: "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 10 February 2015 (2015-02-10), XP080677655 * |
LONG CHEN ET AL: "SCA-CNN: Spatial and Channel-wise Attention in Convolutional Networks for Image Captioning", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 17 November 2016 (2016-11-17), XP080732428 * |
LU JIASEN ET AL: "Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning", IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION. PROCEEDINGS, IEEE COMPUTER SOCIETY, US, 21 July 2017 (2017-07-21), pages 3242 - 3250, XP033249671, ISSN: 1063-6919, [retrieved on 20171106], DOI: 10.1109/CVPR.2017.345 * |
RYAN KIROS ET AL: "Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models", 10 November 2014 (2014-11-10), pages 1 - 13, XP055246385, Retrieved from the Internet <URL:http://arxiv.org/pdf/1411.2539v1.pdf> [retrieved on 20160201] * |
STEPHEN MERITY ET AL: "Pointer Sentinel Mixture Models", 26 September 2016 (2016-09-26), XP055450460, Retrieved from the Internet <URL:https://arxiv.org/pdf/1609.07843.pdf> [retrieved on 20180213] * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20190039817A (en) * | 2016-09-26 | 2019-04-15 | 구글 엘엘씨 | Neural Machine Translation System |
KR102323548B1 (en) | 2016-09-26 | 2021-11-08 | 구글 엘엘씨 | neural machine translation system |
CN109086779A (en) * | 2018-07-28 | 2018-12-25 | 天津大学 | A kind of attention target identification method based on convolutional neural networks |
CN109086779B (en) * | 2018-07-28 | 2021-11-09 | 天津大学 | Attention target identification method based on convolutional neural network |
US20210383194A1 (en) * | 2020-06-08 | 2021-12-09 | International Business Machines Corporation | Using negative evidence to predict event datasets |
CN112529857A (en) * | 2020-12-03 | 2021-03-19 | 重庆邮电大学 | Ultrasonic image diagnosis report generation method based on target detection and strategy gradient |
CN112529857B (en) * | 2020-12-03 | 2022-08-23 | 重庆邮电大学 | Ultrasonic image diagnosis report generation method based on target detection and strategy gradient |
Also Published As
Publication number | Publication date |
---|---|
WO2018094296A1 (en) | 2018-05-24 |
WO2018094294A1 (en) | 2018-05-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10846478B2 (en) | | Spatial attention model for image captioning |
WO2018094295A1 (en) | | Adaptive attention model for image captioning |
JP6972265B2 (en) | | Pointer sentinel mixed architecture |
JP6873236B2 (en) | | Dynamic mutual attention network for question answering |
Lu et al. | | Knowing when to look: Adaptive attention via a visual sentinel for image captioning |
US20190279075A1 (en) | | Multi-modal image translation using neural networks |
CN113268609B (en) | | Knowledge graph-based dialogue content recommendation method, device, equipment and medium |
US20200242736A1 (en) | | Method for few-shot unsupervised image-to-image translation |
US11016740B2 (en) | | Systems and methods for virtual programming by artificial intelligence |
CA3066337A1 (en) | | Method of and server for training a machine learning algorithm for estimating uncertainty of a sequence of models |
US11837000B1 (en) | | OCR using 3-dimensional interpolation |
US20230124177A1 (en) | | System and method for training a sparse neural network whilst maintaining sparsity |
KR102386373B1 (en) | | Information processing apparatus, information processing method, and information processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| DPE1 | Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101) | |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17821751; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 17821751; Country of ref document: EP; Kind code of ref document: A1 |