US7783459B2 - Analog system for computing sparse codes - Google Patents


Info

Publication number: US7783459B2
Application number: US12/035,424
Other versions: US20080270055A1
Authority: US (United States)
Prior art keywords: nodes, node, signal, coefficients, sparsely
Legal status: Active, expires (status is assumed, not a legal conclusion)
Inventors: Christopher John Rozell, Don Herrick Johnson, Richard Gordon Baraniuk, Bruno A. Olshausen, Robert Lowell Ortman
Current Assignee: University of California; William Marsh Rice University
Original Assignee: William Marsh Rice University (application filed by William Marsh Rice University)
Assignments: to THE REGENTS OF THE UNIVERSITY OF CALIFORNIA (assignor: Bruno Adolphus Olshausen); to WILLIAM MARSH RICE UNIVERSITY (assignors: Richard Gordon Baraniuk, Don Herrick Johnson, Robert Lowell Ortman, Christopher John Rozell); confirmatory license to the Secretary of the Navy (assignor: Rice University)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06G — ANALOGUE COMPUTERS
    • G06G7/00 — Devices in which the computing operation is performed by varying electric or magnetic quantities
    • G06G7/12 — Arrangements for performing computing operations, e.g. operational amplifiers
    • G06G7/26 — Arbitrary function generators

Definitions

  • FIG. 10(a) illustrates an example time-series coefficient for the HLCA and MP (top and bottom, respectively) encodings for the test video sequence.
  • HLCA clusters non-zero entries together into longer runs, while MP switches more often between states.
  • FIG. 10(b) illustrates the empirical conditional entropy of the coefficient states (−, 0, +) during the test video sequence.
  • FIG. 10(c) illustrates the conditional entropy calculated analytically while varying P(+|+); the tendency of a system to group non-zero states together is the most important factor in determining the entropy.
  • FIG. 11 is a diagram illustrating an LCA network for compressive sensing reconstruction and the nonlinear transformation applied to the state variable in each node in accordance with a preferred embodiment of the present invention.
  • in the LCA network as described above, the network finds the best sparse approximation for the data vector x: the network inputs equal the data x directly, and the interconnection strengths are given by the inner products ⟨φ_i, φ_l⟩. The LCA structure was modified as indicated so that it could solve the compressive sensing reconstruction problem.
  • the present invention is an analog system that compresses data before digitization, thereby saving time and energy that would otherwise be wasted. More specifically, the present invention is a parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements. Such a system could perform data compression before digitization, avoiding the resource waste common in digital systems.
  • a technique referred to as compressive sensing permits a signal to be captured directly in a compressed form rather than recording raw samples in the classical sense. With compressive sensing, only about 5-10% of the original number of measurements need to be made from the original analog image to retain a reasonable quality image. In compressive sensing, however, reconstruction involves solving an optimal sparse approximation problem which requires enormous calculations and memory.
  • the present invention employs a locally competitive algorithm (“LCA”) that stylizes interacting neuron-like nodes to solve the sparse approximation problem.
  • the present invention uses thresholding functions to induce local (possibly one-way) inhibitory competitions among units, thus constituting a locally competitive algorithm (LCA).
  • the LCA can be implemented as a circuit and can be shown to minimize a weighted combination of the mean-squared error in describing the data and a cost function on neural activity. It demonstrates sparsity levels comparable to existing sparse coding algorithms, but in contrast to greedy algorithms that iteratively select the single best element, the circuit allows continual interaction among many units, allowing the system to reach more optimal solutions.
  • the LCA coefficients for video sequences demonstrate inertial properties that are both qualitatively and quantitatively more regular, i.e., smoother and more predictable, than the coefficients produced by greedy algorithms.
  • the LCAs associate each node with an element of a dictionary, φ_m ∈ D.
  • the collection of nodes evolves according to fixed dynamics (described below) and settles on a collective output {a_m(t)}, corresponding to the short-term average firing rate of the nodes.
  • positive and negative coefficients are allowed, but rectified systems could use two physical units to implement one LCA node.
  • Each node's sub-threshold value is represented by a time-varying internal state u_m(t).
  • the nodes best matching the stimulus will have internal state variables that charge at the fastest rates and become active soonest.
  • active nodes inhibit other nodes with an inhibition signal proportional to both their activity level and the similarity of the nodes' receptive fields.
  • LCA node dynamics are expressed by the non-linear ordinary differential equation (ODE)
  • $$\dot{u}_m(t) = \frac{1}{\tau}\Big[\, b_m(t) - u_m(t) - \sum_{n \neq m} G_{m,n}\, a_n(t) \Big], \qquad (4)$$ where b_m(t) = ⟨φ_m, s(t)⟩ is the driving input to node m and G_{m,n} = ⟨φ_m, φ_n⟩ is the weight of the inhibition received from node n.
  • FIG. 1(a) shows an LCA node circuit schematic and FIG. 1(b) is a system diagram illustrating the lateral inhibition.
  • the node 100 has a source of electrical energy 110, a low pass averaging circuit 120 comprised of a resistor and a capacitor, and a thresholding element 130.
  • while the source of electrical energy 110 is shown in FIG. 1(a) as a voltage source, other arrangements such as a current source may be used in the present invention, and such alternatives will be readily apparent to those of ordinary skill in the art.
  • while the low pass averaging circuit 120 is shown as a simple resistor and capacitor arrangement in FIG. 1(a), other arrangements may be used with the present invention and will be readily apparent to those of ordinary skill in the art.
  • An embodiment of the source of electrical energy 110 is shown in greater detail in FIG. 1(c).
  • the source 110 is not a “source” in the sense that it generates electrical energy; rather, it uses received signals to produce or “compute” the output provided to the low pass averaging circuit 120 and the thresholding element 130. More specifically, the source 110 receives a projection ⟨φ_n, s(t)⟩ from a projection system 200 (shown in FIG. 1(b)) and an output a_n(t) from each other node 100.
  • the source 110 in each node 100 has a weighting element 112 corresponding to the output received from each other node for weighting that output.
  • the source 110 outputs the difference
  • $$V_m(t) = \langle \phi_m,\, s(t) \rangle - \sum_{n \neq m} T_\lambda\big(u_n(t)\big)\, \langle \phi_m, \phi_n \rangle.$$
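In vector form, the computation inside each source 110 is just two matrix products, which is convenient for checking the architecture numerically. The following is a minimal numpy sketch (the function and variable names are ours, and a soft threshold stands in for whatever the thresholding element 130 implements):

```python
import numpy as np

def node_drive(Phi, s, u, lam):
    """V_m = <phi_m, s> - sum_{n != m} T_lam(u_n) <phi_m, phi_n>, for all m.

    Phi: dictionary with unit-norm columns, shape (N, M)
    s:   input signal, shape (N,)
    u:   internal states of all nodes, shape (M,)
    """
    a = np.where(np.abs(u) > lam, u - lam * np.sign(u), 0.0)  # T_lam (soft)
    G = Phi.T @ Phi - np.eye(Phi.shape[1])  # zero diagonal: only n != m terms
    return Phi.T @ s - G @ a
```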
  • a preferred embodiment of the analog system for compressing signals of the present invention has a projection system 200 for projecting a received signal s(t) onto the dictionary elements, producing projections ⟨φ_n, s(t)⟩ that are then provided to a plurality of nodes 100.
  • each node 100 receives a projection ⟨φ_n, s(t)⟩ and also the output a_n(t) of each other node.
  • while FIGS. 1(a)-(c) show the output of each node being passed directly back to each other node, with the weighting of such outputs performed inside the receiving node, each node could instead calculate the weighting of its own output a_n(t) and pass its own weighted output a_n(t)⟨φ_n, φ_m⟩ to the other nodes.
  • the LCA architecture described by equation (5) solves a family of sparse approximation problems corresponding to different sparsity measures. Specifically, LCAs descend an energy function combining the reconstruction MSE and a sparsity-inducing cost penalty C(•), of the form $$E(t) = \frac{1}{2}\big\|s(t) - \hat{s}(t)\big\|_2^2 + \lambda \sum_m C\big(a_m(t)\big),$$ where $\hat{s}(t) = \sum_m a_m(t)\,\phi_m$ is the reconstruction of the signal.
  • the specific form of the cost function C(•) is determined by the form of the thresholding activation function T_λ(•); for a given threshold function, the corresponding cost function is specified up to an additive constant.
  • Thresholding functions limit the lateral inhibition by allowing only “strong” units to suppress other units and forcing most coefficients to be identically zero.
  • thresholding functions T_λ(•) have two distinct behaviors over their range: they are essentially linear with unit slope above the threshold λ, and essentially zero below threshold.
  • $$T_{(\alpha,\gamma,\lambda)}(u_m) = \frac{u_m - \alpha\lambda}{1 + e^{-\gamma(u_m - \lambda)}}, \qquad (8)$$ where γ is a parameter controlling the speed of the threshold transition and α ∈ [0,1] indicates what fraction of an additive adjustment is made for values above threshold.
  • An example sigmoidal thresholding function is shown in FIG. 2(a).
  • An LCA can optimize a variety of different sparsity measures depending on the choice of thresholding function.
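Because the choice of T_λ(•) determines which sparsity measure is optimized, it is useful to see the whole family in one function. Below is a minimal numpy sketch of the sigmoidal threshold of equation (8), applied symmetrically to positive and negative values; the function name and default parameters are our own choices, and the γ→∞ limits reproduce the ideal hard (α=0, HLCA) and soft (α=1, SLCA) thresholds of FIG. 2.

```python
import numpy as np

def T(u, lam=1.0, gamma=np.inf, alpha=0.0):
    """Sigmoidal threshold T_(alpha,gamma,lambda) of eq. (8), elementwise."""
    u = np.asarray(u, dtype=float)
    mag, sgn = np.abs(u), np.sign(u)
    if np.isinf(gamma):
        # Ideal limits: zero below threshold; above it, u itself (hard,
        # alpha=0) or u shrunk toward zero by lam (soft, alpha=1).
        return sgn * np.where(mag > lam, mag - alpha * lam, 0.0)
    return sgn * (mag - alpha * lam) / (1.0 + np.exp(-gamma * (mag - lam)))
```

For example, T(u, lam=0.5, gamma=np.inf, alpha=1.0) is exactly the soft threshold used by the SLCA, while alpha=0.0 gives the hard threshold used by the HLCA.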
  • the SLCA is simply another solution method for the general BPDN problem described above.
  • SLCA and BPDN solvers will find different sets of coefficients, as illustrated in FIG. 3 .
  • the connection between soft-thresholding and BPDN is well-known in the case of orthonormal dictionaries (Chen et al., 2001), and recent results have given some justification for using soft-thresholding in overcomplete dictionaries.
  • the SLCA provides another formal connection between the soft-thresholding function and the l_1 cost function.
  • because BPDN uses the l_1 norm as its sparsity penalty, we often expect many of the resulting coefficients to be identically zero (especially when M > N). However, most numerical methods (including direct gradient descent and interior point solvers) will drive coefficients toward zero but will never make them identically zero. While an ad hoc threshold could be applied to the results of a BPDN solver, the SLCA has the advantage of incorporating a natural thresholding function that keeps coefficients identically zero during the computation unless they become active. In other words, while BPDN solvers often start with many non-zero coefficients and try to force coefficients down, the SLCA starts with all coefficients equal to zero and only lets a few grow. This advantage is especially important for systems that must expend energy for non-zero values throughout the entire computation.
  • the HLCA also has connections to known sparse approximation principles. If node m is fully charged, the inhibition signal it sends to other nodes is exactly the same as the update step when the m-th node is chosen in the MP algorithm. However, due to the continuous competition between nodes before they are fully charged, the HLCA is not equivalent to MP in general.
  • here e_m is the canonical basis element (i.e., it contains a single 1 in the m-th location) and k is a constant chosen to make the vectors have unit norm.
  • the input signal is sparsely represented in the first K dictionary elements.
  • the first MP iteration chooses φ_M, introducing a residual with decaying terms. Even though s has an exact representation in K elements, MP iterates forever trying to atone for this bad initial choice.
  • the HLCA initially activates the M-th node but uses the collective inhibition from nodes 1, …, K to suppress this node and calculate the optimal set of coefficients. While this pathological example is unlikely to occur exactly in natural signals, it is often used as a criticism of greedy methods to demonstrate their shortsightedness. We mention it here to demonstrate the flexibility of LCAs and their differences from pure greedy algorithms.
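This example can be reproduced in a few lines. The sketch below builds a hypothetical instance of such a dictionary (the sizes N, K and the decay rate are our own illustrative choices), runs plain MP on an exactly K-sparse input, and shows that MP's first selection is the “extra” atom and that it still has not recovered the K-term representation after 100 iterations:

```python
import numpy as np

# N canonical basis elements plus one "extra" unit-norm atom whose
# entries decay geometrically, so it overlaps every other element.
N, K, beta = 30, 5, 0.9
extra = beta ** np.arange(1, N + 1)
extra /= np.linalg.norm(extra)             # the constant k: unit norm
Phi = np.column_stack([np.eye(N), extra])  # M = N + 1 atoms; extra at index N

s = np.zeros(N)
s[:K] = 1.0 / np.sqrt(K)                   # exactly K-sparse input signal

print("MP's first pick is the extra atom:",
      int(np.argmax(np.abs(Phi.T @ s))) == N)

r, coeffs = s.copy(), np.zeros(N + 1)      # plain matching pursuit
for _ in range(100):
    proj = Phi.T @ r
    m = int(np.argmax(np.abs(proj)))
    coeffs[m] += proj[m]
    r -= proj[m] * Phi[:, m]

print("optimal support size:", K)
print("non-zero MP coefficients:", np.count_nonzero(np.abs(coeffs) > 1e-6))
```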
  • LCAs must exhibit several critical properties: the dynamical system must remain stable under normal operating conditions, the system must produce sparse coefficients that represent the stimulus with low error, and coefficient sequences must exhibit regularity in response to time-varying inputs. We show that LCAs exhibit good characteristics in each of these three areas. We focus our analysis on the HLCA both because it yields the most interesting results and because it is notationally the cleanest to discuss. In general, the analysis principles we use will apply to all LCAs through straightforward (though perhaps laborious) extensions.
  • Let M_{u(t)} = {m : |u_m(t)| > λ} denote the set of nodes that are active at time t.
  • the stability criteria are likely to be met under normal operating conditions for two reasons. First, small subsets of dictionary elements are unlikely to be linearly dependent unless the dictionary is designed with this property. Second, sparse coding systems are actively trying to select dictionary subsets so that they can use many fewer coefficients than the dimension of the signal space.
  • the most powerful algorithm produces the lowest reconstruction MSE for a given sparsity.
  • when the sparsity measure is the l_1 norm, the problem is convex and the SLCA produces solutions with equivalent l_1-sparsity to interior point BPDN solvers (demonstrated in FIG. 5).
  • the HLCA is appealing because of its l_0-like sparsity penalty, but this objective function is not convex and the HLCA may find a local minimum.
  • $$\langle \phi_m,\, s(t) - \hat{s}(t) \rangle = \begin{cases} u_m(t) & \text{if } |u_m| \le \lambda \\ 0 & \text{if } |u_m| > \lambda. \end{cases}$$
  • here α_min is the minimum eigenvalue of Φ̃ᵀΦ̃, the Gram matrix of the currently active dictionary elements.
  • the HLCA may use the m-th node or a collection of other nodes to represent s, but it cannot use a combination of both. This result extends intuitively beyond one-sparse signals: each component in an optimal decomposition is represented by either the optimal node or another collection of nodes, but not both. While not necessarily finding the optimal representation, the system does not needlessly employ both the optimal and extraneous nodes.
  • the LCAs achieve a combination of error and sparsity comparable with known methods.
  • we used a dictionary consisting of the bandpass band of a steerable pyramid with one level and four orientation bands (i.e., the dictionary is approximately four times overcomplete).
  • Image patches (32×32) were selected at random from a standard set of test images. The selected image patches were decomposed using the steerable pyramid and reconstructed using just the bandpass band. The bandpass image patches were also normalized to have unit energy.
  • We simulated the system using a simple Euler's method approach, i.e., a first-order finite difference approximation of the ODE.
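The Euler update itself is a single line per step. Below is a minimal numpy sketch of this simulation for both variants, assuming a dictionary Phi with unit-norm columns; the parameter values and the synthetic test signal are illustrative choices of ours, not those of the reported experiments.

```python
import numpy as np

def run_lca(s, Phi, lam=0.1, tau=0.01, dt=0.001, n_steps=500, soft=True):
    """Euler (first-order finite difference) simulation of ODE (4).
    soft=True gives the SLCA; soft=False gives the HLCA."""
    M = Phi.shape[1]
    b = Phi.T @ s                           # driving inputs b_m = <phi_m, s>
    G = Phi.T @ Phi - np.eye(M)             # lateral inhibition weights
    u = np.zeros(M)                         # internal states u_m
    for _ in range(n_steps):
        shrunk = u - lam * np.sign(u) if soft else u
        a = np.where(np.abs(u) > lam, shrunk, 0.0)  # a_m = T_lam(u_m)
        u += (dt / tau) * (b - u - G @ a)   # Euler step of equation (4)
    shrunk = u - lam * np.sign(u) if soft else u
    return np.where(np.abs(u) > lam, shrunk, 0.0)

# Synthetic example (not the steerable-pyramid experiment):
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary elements
s = Phi[:, :4] @ np.array([1.0, -0.8, 0.6, 0.5])  # 4-sparse input
print("active coefficients:", np.count_nonzero(run_lca(s, Phi)))
```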
  • FIG. 6 shows the time evolution of the reconstruction MSE and l_0 sparsity for SLCA and HLCA responding to an individual image.
  • FIG. 7 shows the mean steady-state tradeoff between l_0 sparsity and MSE.
  • BPDNthr: a standard BPDN interior point solver followed by thresholding to enforce l_0 sparsity.
  • SLCAthr: SLCA with the same threshold applied.
  • $$\dot{u}_m(t) = \frac{1}{\tau} \begin{cases} \langle \phi_m,\, s(t) - \hat{s}(t) \rangle - u_m(t) & \text{when } |u_m(t)| \le \lambda \\ \langle \phi_m,\, s(t) - \hat{s}(t) \rangle - \lambda\,\operatorname{sign}(u_m(t)) & \text{when } |u_m(t)| > \lambda. \end{cases}$$
  • a new residual signal drives the coefficient higher but suffers an additive penalty.
  • This property induces a “king of the hill” effect: when a new residual appears, active nodes move virtually unimpeded to represent it while inactive nodes are penalized until they reach threshold. This inertia encourages inactive nodes to remain inactive unless the active nodes cannot adequately represent the new input.
  • the HLCA uses approximately the same number of active coefficients as MP but chooses coefficients that more efficiently represent the video sequence.
  • the HLCA is significantly more likely to re-use active coefficient locations from the previous frame without making significant sacrifices in the sparsity of the solution. This difference is highlighted by the ratio of the number of changing coefficients to the number of active coefficients.
  • MP has a ratio of 1.7, meaning that MP is finding almost an entirely new set of active coefficient locations for each frame.
  • the HLCA has a ratio of 0.5, meaning that it is changing approximately 25% of its coefficient locations at each frame.
  • SLCA and BPDNthr have approximately the same performance, with regularity falling between HLCA and MP. Though the two systems can calculate different coefficients, the convexity of the energy function appears to be limiting the coefficient choices enough so that SLCA cannot smooth the coefficient time series substantially more than BPDNthr.
  • the simulation results indicate that the HLCA is producing time series coefficients that are much more regular than MP.
  • This regularity is visualized in FIG. 10 by looking at the time-series of example HLCA and MP coefficients. Note that though the two coding schemes produce roughly the same number of non-zero entries, the HLCA does much better than MP at clustering the values into consecutive runs of positive or negative values. This type of smoothness better reflects the regularity in the natural video sequence input. We can quantify this increased regularity by examining the Markov state transitions. Specifically, each coefficient time-series is a Markov chain with three possible states at frame n:
  • $$\sigma_m(n) = \begin{cases} - & \text{if } u_m(n) < -\lambda \\ 0 & \text{if } -\lambda \le u_m(n) \le \lambda \\ + & \text{if } u_m(n) > \lambda. \end{cases}$$
  • FIG. 9 shows the marginal probabilities P(•) of the states and the conditional transition probabilities P(•|•).
  • the HLCA and MP are equally likely to have non-zero states, but the HLCA is over five times more likely than MP to have a positive coefficient stay positive (P(+|+)).
  • conditional entropy indicates how much uncertainty there is about the status of the current coefficients given the coefficients from the previous frame.
  • the conditional entropy for MP is almost double the entropy for the HLCA, while SLCA is again similar to BPDNthr.
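The conditional entropy used in this comparison can be estimated directly from a coefficient's sampled state sequence. The helpers below are our own sketch (not the patent's code): the first maps a time series u_m(n) to the three states, and the second computes the empirical H(σ(n) | σ(n−1)) in bits.

```python
import numpy as np

def states(u, lam):
    """Map a coefficient time series to states -,0,+ coded as 0,1,2."""
    u = np.asarray(u)
    return 1 + (u > lam).astype(int) - (u < -lam).astype(int)

def conditional_entropy(seq, n_states=3):
    """Empirical H(X_n | X_{n-1}) in bits for an integer state sequence."""
    joint = np.zeros((n_states, n_states))
    for prev, cur in zip(seq[:-1], seq[1:]):
        joint[prev, cur] += 1.0
    joint /= joint.sum()                          # P(X_{n-1}, X_n)
    prev_marg = joint.sum(axis=1, keepdims=True)  # P(X_{n-1})
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(joint > 0, joint * np.log2(joint / prev_marg), 0.0)
    return -terms.sum()
```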
  • the principal contributing factor to the conditional entropy appears to be the probability that a non-zero node remains in the same state (i.e., P(+|+) and P(−|−)): FIG. 10 shows that the change in conditional entropy is almost linear as P(+|+) = P(−|−) is varied.
  • the substantial decrease in the conditional entropy for the HLCA compared to MP quantifies the increased regularity in time-series coefficients due to the inertial properties of the LCAs.
  • the HLCA in particular encourages coefficients to maintain their present state (i.e., active or inactive) when possible.
  • Sparse approximation is an important paradigm in modern sensing and signal processing, though mechanisms to calculate these codes using parallel analog computational elements instead of digital computers have remained unknown.
  • a locally competitive algorithm that solves a series of sparse approximation problems (including BPDN as a special case).
  • These LCAs can be implemented using a parallel network of simple elements that match well with parallel analog computational architectures, including analog circuits and sensory cortical areas such as V1. Though these LCA systems are non-linear, we have shown that they are well-behaved under nominal operating conditions.
  • although LCA systems are not generally guaranteed to find a globally optimal solution to their energy function, we have proven that the systems will be efficient in a meaningful sense.
  • the SLCA system produces coefficients with sparsity levels comparable to BPDN solvers, but uses a natural physical implementation that is more energy efficient (i.e., it uses fewer non-zero inhibition signals between nodes).
  • the HLCA produces coefficients with almost identical sparsity as MP. This is significant because greedy methods such as MP are widely used in signal processing practice because of their efficiency, but HLCA offers a much more natural parallel implementation.
  • LCAs are particularly appropriate for time-varying data such as video sequences.
  • the LCA ODE not only encourages sparsity but also introduces an inertia into the coefficient time-series that we have quantified using both raw counts of changing coefficient location and through the conditional entropy of the coefficient states.
  • the LCAs produce smoother coefficient sequences that better reflect the structure of the time-varying stimulus. This property could prove valuable for higher levels of analysis that are trying to interpret the sensory scene from a set of sparse coefficients.
  • by using simple computational primitives, LCAs also have the benefit of being implementable in analog hardware.
  • An imaging system using VLSI to implement LCAs as a data collection front end has the potential to be extremely fast and energy efficient. Instead of digitizing all of the sensed data and using digital hardware to run a compression algorithm, analog processing would compress the data into sparse coefficients before digitization. In this system, time and energy resources would only be spent digitizing coefficients that are a critical component in the signal representation.
  • since the LCA network represents an analog way of finding sparse representations, it can be modified for the compressive sensing reconstruction problem.
  • the compressed input signal is received by a projection system 300 which passes projection vectors to a plurality of nodes 400 .
  • the compressive sensing reconstruction problem amounts to a constrained optimization problem that can be recast with Lagrange multipliers into an unconstrained optimization problem.
  • the network's inputs must now equal Θᵀy, and the inner products ⟨θ_i, θ_l⟩ determine the connection strengths between nodes i and l.
  • the diagonal values of ΘᵀΘ are not all equal to 1, as would be required to find the optimal solution without error. As a result, the dynamics are modified to accommodate this, using a diagonal matrix D whose entries equal those of the diagonal of ΘᵀΘ.
  • Test Θ matrices were derived from the product of a randomly generated Φ measurement matrix (Gaussian independent and identically distributed random variables with zero mean) and a Ψ basis matrix (2D Haar or Daubechies-4 wavelets).
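A rough numpy sketch of the modified network follows. It reflects one plausible reading of the construction described above: the inputs are Θᵀy, the interconnections are inner products of Θ's columns, and the diagonal of ΘᵀΘ (the matrix D) is used here to normalize the columns before wiring the network. The problem sizes, the sparsity level, and the identity matrix standing in for the wavelet basis Ψ are all our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 64, 256                         # measurements, signal dimension
Phi = rng.standard_normal((n, N)) / np.sqrt(n)  # random zero-mean Phi
Psi = np.eye(N)                        # stand-in for a Haar/Daubechies basis
Theta = Phi @ Psi                      # effective dictionary Theta = Phi Psi

x = np.zeros(N)
support = rng.choice(N, 6, replace=False)
x[support] = rng.standard_normal(6)    # sparse coefficients in Psi
y = Theta @ x                          # compressed measurements

# The diagonal of Theta^T Theta is generally not all ones, so use it
# (the matrix D) to normalize the columns before forming the weights.
d = np.sqrt(np.diag(Theta.T @ Theta))
Theta_n = Theta / d

b = Theta_n.T @ y                      # network inputs, Theta^T y (scaled)
G = Theta_n.T @ Theta_n - np.eye(N)    # connection strengths <theta_i, theta_l>
u, lam = np.zeros(N), 0.1
for _ in range(1000):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft threshold
    u += 0.1 * (b - u - G @ a)                         # LCA dynamics
a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0) / d  # undo column scaling

print("true support     :", sorted(support.tolist()))
print("recovered support:", np.flatnonzero(np.abs(a) > 0.05).tolist())
```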


Abstract

A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on principles of thresholding and local competition and solves a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.

Description

CROSS-REFERENCE TO RELATED APPLICATIONS
The present application claims the benefit of the filing date of U.S. Provisional Patent Application Ser. No. 60/902,673, entitled “System Using Locally Competitive Algorithms for Sparse Approximation” and filed on Feb. 21, 2007 by inventors Christopher John Rozell, Bruno Adolphus Olshausen, Don Herrick Johnson and Richard Gordon Baraniuk.
The aforementioned provisional patent application is hereby incorporated by reference in its entirety.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
The present invention was made with government support under the following government grants or contracts: Office of Naval Research Grant Nos. N00014-06-1-0769, N00014-06-1-0829 and N00014-02-1-0353, U.S. Department of Energy Grant No. DE-FC02-01ER25462, and National Science Foundation Grant Nos. ANI-0099148 and IIS-0625717. The government has certain rights in the invention.
BACKGROUND OF THE INVENTION
1. Field of the Invention
The present invention relates to a system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images.
2. Brief Description of the Related Art
Natural signals can be well-approximated by a small subset of elements from an overcomplete dictionary. The process of choosing a good subset of dictionary elements along with the corresponding coefficients to represent a signal is known as sparse approximation. Sparse approximation is a difficult non-convex optimization problem that is at the center of much research in mathematics and signal processing. Existing sparse approximation algorithms suffer from one or more of the following drawbacks: 1) they are not implementable in parallel computational architectures; 2) they have difficulty producing exactly sparse coefficients in finite time; 3) they produce coefficients for time-varying stimuli that contain inefficient fluctuations, making the stimulus content more difficult to interpret; or 4) they only use a heuristic approximation when minimizing a desired objective function.
Given an N-dimensional stimulus s ∈ ℝᴺ, we seek a representation in terms of a dictionary D composed of M vectors {φ_m} that span the space ℝᴺ. Define the l_p norm of the vector x to be ∥x∥_p = (Σ_m |x_m|^p)^{1/p} and the inner product (projection) between x and y to be ⟨x, y⟩ = Σ_m x_m y_m. Without loss of generality, assume the dictionary vectors are unit-norm, ∥φ_m∥₂ = 1. When the dictionary is overcomplete (M > N), there are an infinite number of ways to choose coefficients {a_m} such that
$$s = \sum_{m=1}^{M} a_m \phi_m.$$
In optimal sparse approximation, we seek the coefficients having the fewest number of non-zero entries by solving the minimization problem
$$\min_a \|a\|_0 \quad \text{subject to} \quad s = \sum_{m=1}^{M} a_m \phi_m, \qquad (1)$$
where the l_0 “norm” denotes the number of non-zero elements of a = [a_1, a_2, …, a_M]. While this is clearly not a norm in the mathematical sense, the term is used here because it is prevalent in the literature. Unfortunately, this combinatorial optimization problem is NP-hard.
In the signal processing community, two approaches are typically used on digital computers to find acceptable suboptimal solutions to this intractable problem. The first general approach substitutes an alternate sparsity measure to convexify the l0 norm. One well-known example is Basis Pursuit (BP) (Chen et al., 2001), which replaces the l0 norm with the l1 norm
$$\min_a \|a\|_1 \quad \text{subject to} \quad s = \sum_{m=1}^{M} a_m \phi_m. \qquad (2)$$
Despite this substitution, BP has the same solution as the optimal sparse approximation problem (Donoho and Elad, 2003) if the signal is sparse compared to the nearest pair of dictionary elements
$$\left(\text{e.g., } \|a\|_0 < \min_{m \neq n} \tfrac{1}{2}\big[1 + 1/\lvert\langle \phi_m, \phi_n \rangle\rvert\big]\right).$$
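This recovery condition is straightforward to evaluate for a concrete dictionary. The helper below is our own sketch (assuming unit-norm dictionary columns); it computes the right-hand side of the condition from the dictionary's mutual coherence:

```python
import numpy as np

def bp_equivalence_bound(Phi):
    """min over m != n of (1 + 1/|<phi_m, phi_n>|) / 2, for unit-norm columns."""
    G = np.abs(Phi.T @ Phi)
    np.fill_diagonal(G, 0.0)
    mu = G.max()             # mutual coherence: worst-case pairwise overlap
    return 0.5 * (1.0 + 1.0 / mu)
```

Any signal with an exact representation using fewer non-zero coefficients than this bound is recovered exactly by BP.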
In practice, the presence of signal noise often leads to using a modified approach called Basis Pursuit De-Noising (BPDN) (Chen et al., 2001) that makes a tradeoff between reconstruction mean-squared error (MSE) and sparsity in an unconstrained optimization problem:
$$\min_a \left( \Big\| s - \sum_{m=1}^{M} a_m \phi_m \Big\|_2^2 + \lambda \|a\|_1 \right), \qquad (3)$$
where λ is a tradeoff parameter. BPDN provides the l1-sparsest approximation for a given reconstruction quality. There are many algorithms that can be used to solve the BPDN optimization problem, with interior point-type methods being the most common choice.
The second general approach employed by signal processing researchers uses iterative greedy algorithms to constructively build up a signal representation (Tropp, 2004). The canonical example of a greedy algorithm is known in the signal processing community as Matching Pursuit (MP) (Mallat and Zhang, 1993). The MP algorithm is initialized with a residual r_0 = s. At the k-th iteration, MP finds the index of the single dictionary element best approximating the current residual signal, θ_k = arg max_m |⟨r_{k−1}, φ_m⟩|. The coefficient d_k = ⟨r_{k−1}, φ_{θ_k}⟩ and index θ_k are recorded as part of the reconstruction, and the residual is updated, r_k = r_{k−1} − φ_{θ_k} d_k. After K iterations, the signal approximation using MP is given by
$$\hat{s} = \sum_{k=1}^{K} \phi_{\theta_k} d_k.$$
Though they may not be optimal in general, greedy algorithms often efficiently find good sparse signal representations in practice.
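For reference, the MP iteration described above fits in a few lines of numpy; the function and argument names here are ours:

```python
import numpy as np

def matching_pursuit(s, Phi, K=50):
    """Greedy MP: K iterations over a dictionary with unit-norm columns."""
    r = s.astype(float).copy()              # r_0 = s
    a = np.zeros(Phi.shape[1])
    for _ in range(K):
        proj = Phi.T @ r
        m = int(np.argmax(np.abs(proj)))    # theta_k = argmax_m |<r_{k-1}, phi_m>|
        a[m] += proj[m]                     # record coefficient d_k
        r -= proj[m] * Phi[:, m]            # r_k = r_{k-1} - phi_{theta_k} d_k
    return a                                # reconstruction: s_hat = Phi @ a
```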
Recent research has found compelling evidence that the properties of V1 population responses to natural stimuli may be the result of a sparse approximation. For example, it has been shown that V1 receptive fields are consistent with optimizing the coefficient sparsity when encoding natural images. Additionally, V1 recordings in response to natural scene stimuli show activity levels (corresponding to the coefficients {am}) becoming sparser as neighboring units are also stimulated. These populations are typically very overcomplete, allowing great flexibility in the representation of a stimulus. Using this flexibility to pursue sparse codes might offer many advantages to sensory neural systems, including enhancing the performance of subsequent processing stages, increasing the storage capacity in associative memories, and increasing the energy efficiency of the system.
However, existing sparse approximation algorithms do not have implementations that correspond both naturally and efficiently to parallel computational architectures such as those seen in neural populations or in analog hardware. For convex relaxation approaches, a network implementation of BPDN can be constructed, following the common practice of using dynamical systems to implement direct gradient descent optimization. Unfortunately, this implementation has two major drawbacks. First, it lacks a natural mathematical mechanism to make small coefficients identically zero. While the true BPDN solution would have many coefficients that are exactly zero, direct gradient methods to find an approximate solution in finite time produce coefficients that merely have small magnitudes. Ad hoc thresholding can be used on the results to produce zero-valued coefficients, but such methods lack theoretical justification and can be difficult to use without oracle knowledge of the best threshold value. Second, this implementation requires persistent (two-way) signaling between all units with overlapping receptive fields (e.g., even a node with a nearly zero value would have to continue sending inhibition signals to all similar nodes). In greedy algorithm approaches, spiking neural circuits can be constructed to implement MP. Unfortunately, this type of circuit implementation relies on a temporal code that requires tightly coupled and precise elements to both encode and decode.
Beyond implementation considerations, existing sparse approximation algorithms also do not consider the time-varying signals common in nature. A time-varying input signal s(t) is represented with a set of time-varying coefficients {am(t)}. While temporal coefficient changes are necessary to encode stimulus changes, the most useful encoding would use coefficient changes that reflect the character of the stimulus. In particular, sparse coefficients should have smooth temporal variations in response to smooth changes in the image. However, most sparse approximation schemes have a single goal: select the smallest number of coefficients to represent a fixed signal. This single-minded approach can produce coefficient sequences for time-varying stimuli that are erratic, with drastic changes not only in the values of the coefficients but also in the selection of which coefficients are used. These erratic temporal codes are inefficient because they introduce uncertainty about which coefficients are coding the most significant stimulus changes, thereby complicating the process of understanding the changing stimulus content.
There are several sparse approximation methods that do not fit into the two primary approaches of pure greedy algorithms or convex relaxation. Methods such as Sparse Bayesian Learning, FOCUSS, modifications of greedy algorithms that select multiple coefficients on each iteration, and MP extensions that perform an orthogonalization at each step involve computations that would be very difficult to implement in a parallel, distributed architecture. For FOCUSS, there also exists a dynamical system implementation that uses parallel computation to implement a competition strategy among the nodes (strong nodes are encouraged to grow while weak nodes are penalized); however, it does not lend itself to forming smooth time-varying representations because coefficients cannot be reactivated once they go to zero.
There are also several sparse approximation methods built on a parallel computational framework that are related to our LCAs. These algorithms typically start with many super-threshold coefficients and iteratively try to prune the representation through a thresholding procedure, rather than charging up from zero as in our LCAs. In addition, most of these algorithms are not explicitly connected to the optimization of a specific objective function.
SUMMARY OF THE INVENTION
In a preferred embodiment, the present invention is a parallel dynamical system based on the principles of thresholding and local competition that solves a family of sparse approximation problems corresponding to various sparsity metrics. In our Locally Competitive Algorithms (LCAs), nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary. Our continuous-time LCA is described by the dynamics of a system of nonlinear ordinary differential equations (ODEs) that govern the internal state and external communication of units in a parallel computational environment. These systems use computational primitives that correspond to simple analog elements (e.g., resistors, capacitors, amplifiers), making them realistic for parallel implementations. These systems could be physically implemented in a variety of substrates, including analog hardware elements, organic tissue (e.g., neural networks) or in nanophotonic systems. Each LCA corresponds to an optimal sparse approximation problem that minimizes an energy function combining reconstruction mean-squared error (MSE) and a sparsity-inducing cost function.
In another embodiment, the present invention is a neural architecture for locally competitive algorithms (“LCAs”) that correspond to a broad class of sparse approximation problems and possess three properties critical for a neurally plausible sparse coding system. First, the LCA dynamical system is stable, guaranteeing that a physical implementation is well-behaved. Next, the LCAs perform their primary task well, finding codes for fixed images that have sparsity comparable to the most popular centralized algorithms. Finally, the LCAs display inertia, coding video sequences with a coefficient time series that is significantly smoother in time than the coefficients produced by other algorithms. This increased coefficient regularity better reflects the smooth nature of natural input signals, making the coefficients much more predictable and making it easier for higher-level structures to identify and understand the changing content in the time-varying stimulus.
The parallel analog architecture described by our LCAs could greatly benefit the many modern signal processing applications that rely on sparse approximation. While the principles we describe apply to many signal modalities, we will focus on the visual system and the representation of video sequences.
In a preferred embodiment, the present invention is an analog system for sparsely approximating a signal. The system comprises a matching system for calculating and outputting matching signals representative of how well-matched said signal is to a plurality of dictionary elements and a plurality of nodes. Each node receives one of said matching signals from said matching system. Each node comprises a source of an internal state signal and a thresholding element. The internal state signal in each node is calculated as a function of said matching signal received at said node and weighted outputs of all other nodes. The source of an internal state signal may comprise a low pass averaging system. The matching system may comprise a projection system for projecting a signal vector onto the plurality of dictionary elements. Each node may further comprise a plurality of weighting elements, each weighting element receiving an output from another one of the plurality of nodes and providing the weighted outputs to the source of an internal state signal. The internal state signal may be derived from the matching signal less a sum of weighted outputs from the other nodes. Alternatively, each node may further comprise a plurality of weighting elements for receiving an output of the thresholding element and providing a plurality of weighted outputs. The inputted signal may be a video signal or other type of signal. The source of an internal state signal may be a voltage source, current source or other source of electrical energy. The low pass averaging system may comprise a low pass averaging circuit such as a resistor and capacitor or any other type of low pass averaging circuit.
In another embodiment, the present invention is a parallel dynamical system for computing sparse representations of data. The system comprises a projection system for projecting the data onto projection vectors and a plurality of nodes. Each node receives one of the projection vectors from the projection system. Each node comprises a source of electrical energy, a low pass averaging circuit and a thresholding element. The source of electrical energy in each node comprises a projection vector received at the node less weighted outputs of all other nodes. Each node further comprises a plurality of weighting elements, each weighting element receiving an output from another one of the plurality of nodes and providing the weighted output to the source of electrical energy. Other arrangements of the weighting elements may be used with the present invention and with this embodiment.
In still another embodiment, the present invention is a parallel dynamical system for computing sparse representations of data. The system comprises a plurality of nodes, each node being active or inactive. Each node comprises a leaky integrator element, wherein inputs to the leaky integrator element cause an activation potential to charge up, and a thresholding element for receiving the activation potential and for producing an output coefficient. The output coefficient is the result of an activation function applied to the activation potential and parameterized by a system threshold. Active nodes inhibit other nodes with inhibition signals proportional to both level of activity of the active nodes and a similarity of receptive fields of the active nodes.
Still other aspects, features, and advantages of the present invention are readily apparent from the following detailed description, simply by illustrating preferable embodiments and implementations. The present invention is also capable of other and different embodiments, and its several details can be modified in various obvious respects, all without departing from the spirit and scope of the present invention. Accordingly, the drawings and descriptions are to be regarded as illustrative in nature, and not as restrictive. Additional objects and advantages of the invention will be set forth in part in the description which follows and in part will be obvious from the description, or may be learned by practice of the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
For a more complete understanding of the present invention and the advantages thereof, reference is now made to the following description and the accompanying drawings, in which:
FIG. 1(a) illustrates LCA nodes in accordance with a preferred embodiment of the present invention behaving as leaky integrators, charging with a speed that depends on how well the input matches the associated dictionary element and on the inhibition received from other nodes.
FIG. 1(b) is a diagram of a system in accordance with a preferred embodiment of the present invention showing the inhibition signals being sent between nodes. In this case, only node 2 is shown as being active (i.e., having a coefficient above threshold) and inhibiting its neighbors. Since the neighbors are inactive, the inhibition is one-way.
FIG. 1(c) illustrates a source of electrical energy in a node in accordance with a preferred embodiment of the present invention.
FIGS. 2(a)-(f) illustrate the relationship between the threshold function T_(α, γ, λ)(•) and the sparsity cost function C(•). Only the positive half of the symmetric threshold and cost functions are plotted. FIG. 2(a) illustrates a sigmoidal threshold function. FIG. 2(b) illustrates a cost function for γ=5, α=0 and λ=1. FIG. 2(c) illustrates the ideal hard thresholding function (γ=∞, α=0, λ=1) and FIG. 2(d) illustrates the corresponding cost function. The dashed line shows the limit, but coefficients produced by the ideal thresholding function cannot take values in this range. FIG. 2(e) illustrates the ideal soft thresholding function (γ=∞, α=1, λ=1) and FIG. 2(f) illustrates the corresponding cost function.
FIGS. 3( a) and (b) respectively illustrate the top 200 coefficients from a BPDN solver sorted by magnitude and the same coefficients sorted according to the magnitude ordering of the SLCA coefficients. While a gross decreasing trend is noticeable, the largest SLCA coefficients are not in the same locations as the largest BPDN coefficients. While the solutions have equivalent energy functions, the two sets of coefficients differ significantly.
FIG. 4( a) illustrates an example of a dictionary having one “extra” element that comprises decaying combinations of all other dictionary elements.
FIG. 4( b) illustrates an input vector having a sparse representation in just a few dictionary elements.
FIG. 4( c) illustrates MP initially choosing an "extra" dictionary element, preventing it from finding the optimally sparse representation (coefficients shown after 100 iterations).
FIG. 4( d) illustrates that, in contrast, the HLCA system finds the optimally sparse coefficients.
FIG. 4( e) illustrates how the time-dynamics of the HLCA system illustrate its advantage. The “extra” dictionary element is the first node to activate, followed shortly by the nodes corresponding to the optimal coefficients. The collective inhibition of the optimal nodes causes the “extra” node to die away.
FIG. 5 illustrates SLCA and BPDN coefficients for a series of standard test images. Each line on the plot indicates the tradeoff between MSE and l1 coefficient norm as λ is varied. The results for SLCA and BPDN overlap exactly, illustrating that the systems are finding equivalent minima of the energy function.
FIGS. 6( a)-(d) illustrate the time response of the HLCA and SLCA (τ=10 ms) for a single fixed image patch. FIG. 6( a) shows the MSE decay and FIG. 6( b) shows the l0 sparsity for HLCA. FIG. 6( c) illustrates the MSE decay and FIG. 6( d) illustrates the l0 sparsity for SLCA. The error converges within 1-2 time constants and the sparsity often approximately converges within 3-4 time constants. In some cases sparsity is reduced with a longer running time.
FIG. 7 illustrates the mean tradeoff between MSE and l0-sparsity for normalized (32×32) patches from a standard set of test images. For a given MSE range, the mean (a) and standard deviation (b) of the l0 sparsity are plotted.
FIGS. 8( a)-(d) show the HLCA and SLCA systems simulated on 200 frames of the "foreman" test video sequence. For comparison, MP coefficients and thresholded BPDN coefficients are also shown. Average values for each system are notated in the legend. FIG. 8( a) shows per-frame MSE for each coding scheme, designed to be approximately equal. FIG. 8( b) shows the number of active coefficients in each frame. FIG. 8( c) shows the number of changing coefficient locations for each frame, including the number of inactive nodes becoming active and the number of active nodes becoming inactive. FIG. 8( d) shows the ratio of changing coefficients to active coefficients. A ratio near 2 (such as with MP) means that almost 100% of the coefficient locations are new at each frame. A ratio near 0.5 (such as with HLCA) means that approximately 25% of the coefficients are new at each frame.
FIG. 9( a) shows the marginal probabilities denoting the fraction of the time coefficients spent in the three states: negative, zero and positive (−, 0, and +).
FIG. 9( b) shows the transition probabilities denoting the probability of a node in one state transitioning to another state on the next frame. For example, P (0|+) is the probability that a node with an active positive coefficient will be inactive (i.e., zero) in the next frame.
FIG. 10( a) illustrates an example time-series coefficient for the HLCA and MP (top and bottom, respectively) encodings for the test video sequence. HLCA clusters non-zero entries together into longer runs while MP switches more often between states.
FIG. 10( b) illustrates the empirical conditional entropy of the coefficient states (−,0,+) during the test video sequence.
FIG. 10( c) illustrates the conditional entropy calculated analytically while varying P(+|+) and equalizing all other transition probabilities to the values seen in HLCA and MP. The tendency of a system to group non-zero states together is the most important factor in determining the entropy.
FIG. 11 is a diagram illustrating an LCA network for compressive sensing reconstruction and the nonlinear transformation applied to the state variable in each node in accordance with a preferred embodiment of the present invention. Originally, the LCA network found the best sparse approximation for the data vector x. In this case, the network input m equaled the data x directly and the interconnection strengths were given by ⟨Ψi, Ψl⟩. The LCA structure was modified as indicated so that it could solve the compressive sensing reconstruction problem.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Digital systems waste time and energy digitizing information that eventually is thrown away during compression. In contrast, the present invention is an analog system that compresses data before digitization, thereby saving time and energy that would have been wasted. More specifically, the present invention is a parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements. Such a system could be envisioned to perform data compression before digitization, reversing the resource wasting common in digital systems.
A technique referred to as compressive sensing permits a signal to be captured directly in a compressed form rather than recording raw samples in the classical sense. With compressive sensing, only about 5-10% of the original number of measurements need to be made from the original analog image to retain a reasonable quality image. In compressive sensing, however, reconstruction involves solving an optimal sparse approximation problem which requires enormous calculations and memory. The present invention employs a locally competitive algorithm (“LCA”) that stylizes interacting neuron-like nodes to solve the sparse approximation problem.
More specifically, the present invention uses thresholding functions to induce local (possibly one-way) inhibitory competitions among units, thus constituting a locally competitive algorithm (LCA). The LCA can be implemented as a circuit and can be shown to minimize a weighted combination of the mean-squared error in describing the data and a cost function on neural activity. It demonstrates sparsity levels comparable to existing sparse coding algorithms, but in contrast to greedy algorithms that iteratively select the single best element, the circuit allows continual interaction among many units, allowing the system to reach more optimal solutions. Additionally, the LCA coefficients for video sequences demonstrate inertial properties that are both qualitatively and quantitatively more regular, i.e., smoother and more predictable, than the coefficients produced by greedy algorithms.
The LCAs associate each node with an element of a dictionary φm ∈ D. When the system is presented with an input image s(t), the collection of nodes evolve according to fixed dynamics (described below) and settle on a collective output {am(t)}, corresponding to the short-term average firing rate of the nodes. For analytical simplicity, positive and negative coefficients are allowed, but rectified systems could use two physical units to implement one LCA node. The goal is to define the LCA system dynamics so that few coefficients have non-zero values while approximately reconstructing the input, ŝ(t)=Σmam(t)φm≈s(t).
The LCA dynamics follow several properties observed in neural systems: inputs cause the membrane potential to "charge up" like a leaky integrator; membrane potentials over a threshold produce "action potentials" for extracellular signaling; and these super-threshold responses inhibit neighboring nodes through lateral connections. Each node's sub-threshold value is represented by a time-varying internal state um(t). The node's excitatory input current is proportional to how well the image matches the node's receptive field, bm(t) = ⟨φm, s(t)⟩. When the internal state um of a node becomes significantly large, the node becomes "active" and produces an output signal am used to represent the stimulus and inhibit other nodes. This output coefficient is the result of an activation function applied to the membrane potential, am = Tλ(um), parameterized by the system threshold λ. Though similar activation functions have traditionally taken a sigmoidal form, we consider activation functions that operate as thresholding devices (e.g., essentially zero for values below λ and essentially linear for values above λ).
The nodes best matching the stimulus will have internal state variables that charge at the fastest rates and become active soonest. To induce the competition that allows these nodes to suppress weaker nodes, we have active nodes inhibit other nodes with an inhibition signal proportional to both their activity level and the similarity of the nodes' receptive fields. Specifically, the inhibition signal from the active node m to any other node n is proportional to amGm,n, where Gm,n = ⟨φm, φn⟩. The possibility of unidirectional inhibition gives strong nodes a chance to prevent weaker nodes from becoming active and initiating counter-inhibition, thus making the search for a sparse solution more energy efficient. Note that unlike the direct gradient descent methods described above, which require two-way inhibition signals from all nodes that overlap (i.e., have Gm,n ≠ 0), LCAs only require one-way inhibition from a small selection of nodes (i.e., only the active nodes). Putting all of the above components together, LCA node dynamics are expressed by the non-linear ordinary differential equation (ODE)
$$\dot{u}_m(t) = \frac{1}{\tau}\left[ b_m(t) - u_m(t) - \sum_{n \neq m} G_{m,n}\, a_n(t) \right]. \qquad (4)$$
This ODE is essentially the same form as the well-known continuous Hopfield network. FIG. 1( a) shows an LCA node circuit schematic and FIG. 1( b) is a system diagram illustrating the lateral inhibition. As shown in FIG. 1( a), the node 100 has a source of electrical energy 110, a low pass averaging circuit 120 comprised of a resistor and a capacitor, and a thresholding element 130. While the source of electrical energy 110 is shown in FIG. 1( a) as a voltage source, other arrangements such as a current source may be used in the present invention, and such alternatives will be readily apparent to those of ordinary skill in the art. Likewise, while the low pass averaging circuit 120 is shown as a simple resistor and capacitor arrangement in FIG. 1( a), other arrangements may be used with the present invention and will be readily apparent to those of ordinary skill in the art.
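As a concrete numerical illustration of the leaky-integrator behavior in equation (4), the following short sketch (our own, in Python; the drive, inhibition level, and time constants are illustrative assumptions, not values from the patent) integrates a single node with forward Euler steps:

```python
tau, dt = 0.01, 0.001   # assumed 10 ms time constant, 1 ms Euler step
b_m = 1.0               # excitatory drive <phi_m, s(t)> (held fixed here)
inhib = 0.4             # assumed total inhibition sum_{n != m} G_{m,n} a_n(t)
u_m = 0.0
for _ in range(50):
    # Forward-Euler step of equation (4): leaky integration toward b_m - inhib.
    u_m += (dt / tau) * (b_m - u_m - inhib)
print(f"u_m after 50 ms: {u_m:.3f} (steady state {b_m - inhib:.3f})")
```

Without inhibition the state would settle at b_m; the inhibition term simply lowers the resting point, which is the charging behavior depicted in FIG. 1( a).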
An embodiment of the source of electrical energy 110 is shown in greater detail in FIG. 1( c). As can be seen from FIG. 1( c), the source 110 is not a "source" in the sense that it generates electrical energy, but rather, it uses received signals to produce or "compute" the output provided to the low pass averaging circuit 120 and the thresholding element 130. More specifically, the source 110 receives a projection vector ⟨φn, s(t)⟩ from a projection system 200 (shown in FIG. 1( b)) and an output an(t) from each other node 100. In the embodiment shown in FIG. 1( c), the source 110 in each node 100 has a weighting element 112 corresponding to the output received from each other node for weighting that output. The source 110 outputs the difference
$$V_m(t) = \langle \phi_m, s(t) \rangle - \sum_{n \neq m} T_\lambda(u_n(t))\, \langle \phi_m, \phi_n \rangle.$$
As illustrated in FIG. 1( b), a preferred embodiment of the analog system for compressing signals of the present invention has a projection system 200 for projecting a received signal s(t) onto a plurality of projection vectors ⟨φn, s(t)⟩ that are then provided to a plurality of nodes 100. In the embodiment shown in FIG. 1( b), each node 100 receives a projection vector ⟨φn, s(t)⟩ and also the output an(t) of each other node.
Other arrangements of the system of the present invention may be used and will be apparent to those of skill in the art. For example, while the embodiment of FIGS. 1( a)-(c) shows the output of each node being passed directly back to each other node and the weighting of such outputs being performed inside the receiving node, each node could instead calculate the weighting of its own output an(t) and then pass its own weighted output an(t)⟨φn, φm⟩ to the other nodes.
To express the system of coupled non-linear ODEs that govern the whole dynamic system, we represent the internal state variables in the vector u(t)=[u1(t), . . . , uM(t)]ᵀ, the active coefficients in the vector a(t)=[a1(t), . . . , aM(t)]ᵀ=Tλ(u(t)), the dictionary elements in the columns of the (N×M) matrix Φ=[φ1, . . . , φM], and the driving inputs in the vector b(t)=[b1(t), . . . , bM(t)]ᵀ=Φᵀs(t). The function Tλ(•) performs element-by-element thresholding on vector inputs. The stimulus approximation is ŝ(t)=Φa(t), and the full dynamic system equation is
$$\dot{u}(t) = f(u(t)) = \frac{1}{\tau}\left[ b(t) - u(t) - (\Phi^T \Phi - I)\, a(t) \right], \qquad a(t) = T_\lambda(u(t)). \qquad (5)$$
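A minimal sketch of equation (5) in Python/NumPy follows, using forward-Euler integration and the ideal soft threshold (the SLCA case). The dictionary, step sizes, and threshold value are illustrative assumptions, not the patent's reference implementation:

```python
import numpy as np

def soft_threshold(u, lam):
    # Ideal soft threshold T_(1,inf,lam): zero below lam, shrunk by lam above.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(s, Phi, lam=0.1, tau=0.01, dt=0.001, n_steps=1000):
    """Integrate u' = (1/tau)[b - u - (Phi^T Phi - I) a], a = T_lam(u)."""
    M = Phi.shape[1]
    G = Phi.T @ Phi - np.eye(M)     # lateral inhibition weights <phi_m, phi_n>, m != n
    b = Phi.T @ s                   # driving inputs <phi_m, s>
    u = np.zeros(M)
    for _ in range(n_steps):
        a = soft_threshold(u, lam)
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

# Toy usage: a random overcomplete dictionary with unit-norm columns.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
s = Phi[:, :5] @ rng.standard_normal(5)          # input sparse in 5 elements
a = lca(s, Phi)
print(np.count_nonzero(a), "active; MSE:", np.mean((s - Phi @ a) ** 2))
```

Swapping `soft_threshold` for a hard threshold yields the HLCA variant discussed below.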
The LCA architecture described by equation (5) solves a family of sparse approximation problems corresponding to different sparsity measures. Specifically, LCAs descend an energy function combining the reconstruction MSE and a sparsity-inducing cost penalty C(•),
$$E(t) = \frac{1}{2} \left\| s(t) - \hat{s}(t) \right\|^2 + \lambda \sum_m C\big(a_m(t)\big). \qquad (6)$$
The specific form of the cost function C(•) is determined by the form of the thresholding activation function Tλ(•). For a given threshold function, the cost function is specified (up to a constant) by
$$\lambda \frac{\partial C(a_m)}{\partial a_m} = u_m - a_m = u_m - T_\lambda(u_m). \qquad (7)$$
This correspondence between the thresholding function and the cost function can be seen by computing the derivative of E with respect to the active coefficients, {am}. If equation (7) holds, then letting the internal states {um} evolve according to
$$\dot{u}_m \propto -\frac{\partial E}{\partial a_m}$$
yields the equation for the internal state dynamics above in equation (4). Note that although the dynamics are specified through a gradient approach, the system is not performing direct gradient descent
(e.g., $\dot{u}_m \propto -\partial E / \partial u_m$).
As long as am and um are related by a monotonically increasing function, the {am} will also descend the energy function E.
We focus specifically on the cost functions associated with thresholding activation functions. Thresholding functions limit the lateral inhibition by allowing only “strong” units to suppress other units and forcing most coefficients to be identically zero. For our purposes, thresholding functions Tλ(•) have two distinct behaviors over their range: they are essentially linear with unit slope above threshold λ, and essentially zero below threshold. Among many reasonable choices for thresholding functions, we use a smooth sigmoidal function
$$T_{(\alpha, \gamma, \lambda)}(u_m) = \frac{u_m - \alpha\lambda}{1 + e^{-\gamma(u_m - \lambda)}}, \qquad (8)$$
where γ is a parameter controlling the speed of the threshold transition and α ∈ [0,1] indicates what fraction of an additive adjustment is made for values above threshold. An example sigmoidal thresholding function is shown in FIG. 2 a. We are particularly interested in the limit of this thresholding function as γ→∞, a piecewise linear function we denote as the ideal thresholding function. In the signal processing literature, T(0,∞,λ)(•)=limγ→∞T(0,γ,λ)(•) is known as a "hard" thresholding function and T(1,∞,λ)(•)=limγ→∞T(1,γ,λ)(•) is known as a "soft" thresholding function.
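The following sketch collects these activation functions: the sigmoid of equation (8) and its ideal hard and soft limits. As written, the sigmoid follows the positive half plotted in FIG. 2, with the full function taken to be odd-symmetric; parameter values are illustrative assumptions:

```python
import numpy as np

def T_sigmoid(u, alpha, gamma, lam):
    # Smooth sigmoidal threshold of equation (8) (positive half; the full
    # symmetric function applies this to |u| with the sign restored).
    return (u - alpha * lam) / (1.0 + np.exp(-gamma * (u - lam)))

def T_hard(u, lam):
    # Ideal hard threshold T_(0,inf,lam): identity above lam, zero below.
    return np.where(np.abs(u) > lam, u, 0.0)

def T_soft(u, lam):
    # Ideal soft threshold T_(1,inf,lam): magnitudes shrunk by lam.
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.linspace(0.0, 3.0, 7)
print(T_sigmoid(u, alpha=0.0, gamma=5.0, lam=1.0))
print(T_hard(u, 1.0))
print(T_soft(u, 1.0))
```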
Combining (7) and (8), we can integrate numerically to determine the cost function corresponding to the T(α,γ,λ)(•) shown in FIG. 2 b. For the ideal threshold functions we derive a corresponding ideal cost function,
$$C_{(\alpha, \infty, \lambda)}(a_m) = \frac{(1-\alpha)\,\lambda}{2} + \alpha\, |a_m|. \qquad (9)$$
Note that unless α=1 the ideal cost function has a gap because active coefficients cannot take all possible values, |am|∉[0,(1−α)λ] (i.e., the ideal thresholding function is not technically invertible).
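A short sketch of the numerical integration just described: trace a = T(u) over a grid of u and accumulate λ dC = (u − a) da per equation (7). The parameter values are assumptions; as γ grows, the recovered cost should approach the ideal cost of equation (9):

```python
import numpy as np

alpha, gamma, lam = 0.0, 5.0, 1.0
u = np.linspace(0.0, 5.0, 2001)
a = (u - alpha * lam) / (1.0 + np.exp(-gamma * (u - lam)))   # T_(alpha,gamma,lam)(u)
dC = ((u[:-1] - a[:-1]) / lam) * np.diff(a)                  # lambda C'(a) = u - a
C = np.concatenate(([0.0], np.cumsum(dC)))                   # C as a function of a
# For alpha = 0, the ideal (gamma -> infinity) cost is the constant lam/2.
print(C[-1], "vs ideal", (1 - alpha) * lam / 2)
```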
As shown above, an LCA can optimize a variety of different sparsity measures depending on the choice of thresholding function. One special case is the soft thresholding function, corresponding to α=1 and shown graphically in FIGS. 2 e and 2 f. The soft-thresholding locally competitive algorithm (SLCA) applies the l1 norm as a cost function on the active coefficients, C(1,∞,λ)(am)=|am|. Thus, the SLCA is simply another solution method for the general BPDN problem described above. Despite minimizing the same convex energy function, SLCA and BPDN solvers will find different sets of coefficients, as illustrated in FIG. 3. The connection between soft-thresholding and BPDN is well-known in the case of orthonormal dictionaries (Chen et al., 2001), and recent results have given some justification for using soft-thresholding in overcomplete dictionaries. The SLCA provides another formal connection between the soft-thresholding function and the l1 cost function.
Though BPDN uses the l1-norm as its sparsity penalty, we often expect many of the resulting coefficients to be identically zero (especially when M>N). However, most numerical methods (including direct gradient descent and interior point solvers) will drive coefficients toward zero but will never make them identically zero. While an ad hoc threshold could be applied to the results of a BPDN solver, the SLCA has the advantage of incorporating a natural thresholding function that keeps coefficients identically zero during the computation unless they become active. In other words, while BPDN solvers often start with many non-zero coefficients and try to force coefficients down, the SLCA starts with all coefficients equal to zero and only lets a few grow up. This advantage is especially important for systems that must expend energy for non-zero values throughout the entire computation.
Another important special case is the hard thresholding function, corresponding to α=0 and shown graphically in FIGS. 2 c and 2 d. Using the relationship in (7), we see that this hard-thresholding locally competitive algorithm (HLCA) applies an l0-like cost function by using a constant penalty regardless of magnitude,
$$C_{(0, \infty, \lambda)}(a_m) = \frac{\lambda}{2}\, I\big(|a_m| \geq \lambda\big),$$
where I(•) is the indicator function evaluating to 1 if the argument is true and 0 if the argument is false. As with the SLCA, the HLCA also has connections to known sparse approximation principles. If node m is fully charged, the inhibition signal it sends to other nodes would be exactly the same as the update step when the mth node is chosen in the MP algorithm. However, due to the continuous competition between nodes before they are fully charged, the HLCA is not equivalent to MP in general.
As a demonstration of the power of competitive algorithms over greedy algorithms, consider a canonical example used to illustrate the shortcomings of greedy algorithms. For this example, specify a positive integer K<N and construct a dictionary D with M=N+1 elements to have the following form:
$$\phi_m = \begin{cases} e_m & \text{if } m \leq N \\[6pt] \displaystyle\sum_{n=1}^{K} \kappa\, e_n + \sum_{n=K+1}^{N} \frac{\kappa}{n-K}\, e_n & \text{if } m = N+1, \end{cases}$$
where em is the canonical basis element (i.e., it contains a single 1 in the mth location) and κ is a constant making the vectors have unit norm. In words, the dictionary includes the canonical basis along with one "extra" element that is a decaying combination of all other elements (illustrated in FIG. 4, with N=20 and K=5). The input signal is sparsely represented in the first K dictionary elements,
$$s = \sum_{m=1}^{K} \frac{1}{\sqrt{K}}\, \phi_m.$$
The first MP iteration chooses φM, introducing a residual with decaying terms. Even though s has an exact representation in K elements, MP iterates forever trying to atone for this bad initial choice. In contrast, the HLCA initially activates the Mth node but uses the collective inhibition from nodes 1, . . . , K to suppress this node and calculate the optimal set of coefficients. While this pathological example is unlikely to exactly occur in natural signals, it is often used as a criticism of greedy methods to demonstrate their shortsightedness. We mention it here to demonstrate the flexibility of LCAs and their differences from pure greedy algorithms.
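A sketch of this construction (N = 20, K = 5; our own NumPy rendering) makes the greedy failure mode easy to reproduce. The first matching-pursuit selection is the "extra" element, since its inner product with s exceeds that of any canonical element:

```python
import numpy as np

N, K = 20, 5
extra = np.concatenate([np.ones(K), 1.0 / np.arange(1, N - K + 1)])
extra /= np.linalg.norm(extra)                 # kappa normalizes to unit length
Phi = np.column_stack([np.eye(N), extra])      # M = N + 1 dictionary elements
s = Phi[:, :K] @ (np.ones(K) / np.sqrt(K))     # exactly K-sparse input

# First MP step: argmax of |<phi_m, s>| picks the extra element (index N).
print(np.argmax(np.abs(Phi.T @ s)))
```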
To be a reasonable candidate for physical implementation, LCAs must exhibit several critical properties: the dynamic systems must remain stable under normal operating conditions, the system must produce sparse coefficients that represent the stimulus with low error, and coefficient sequences must exhibit regularity in response to time-varying inputs. We show that LCAs exhibit good characteristics in each of these three areas. We focus our analysis on the HLCA both because it yields the most interesting results and because it is notationally the cleanest to discuss. In general, the analysis principles we use will apply to all LCAs through straightforward (though perhaps laborious) extensions.
Any proposed physical system must remain well-behaved under normal conditions. While linear systems theory has an intuitive notion of stability that is easily testable, no such unifying concept of stability exists for non-linear systems. Instead, non-linear systems are characterized in a variety of ways, including their behavior near an equilibrium point u* where f(u*)=0 and their input-output relationship.
The various stability analyses below depend on common criteria. Define Mu(t) ⊆ {1, . . . , M} as the set of nodes that are above threshold in the internal state vector u(t), Mu(t) = {m : |um(t)| ≧ λ}. We say that the LCA meets the stability criteria if for all time t the set of active vectors {φm}, m ∈ Mu(t), is linearly independent. It makes some intuitive sense that this condition is important to an LCA: if a collection of linearly dependent nodes are active simultaneously, the nodes could have growing coefficients but no net effect on the reconstruction.
The stability criteria are likely to be met under normal operating conditions for two reasons. First, small subsets of dictionary elements are unlikely to be linearly dependent unless the dictionary is designed with this property. Second, sparse coding systems are actively trying to select dictionary subsets so that they can use many fewer coefficients than the dimension of the signal space, |Mu(t)| << N << M. While the LCA lateral inhibition signals discourage linearly dependent sets from activating, the stability criteria could be violated when a collection of nodes becomes active too quickly, before inhibition can take effect. In practice, this situation could occur when the threshold is too low compared to the system time constant.
In an LCA presented with a static input, we look to the steady-state response (where $\dot{u}(t) = 0$) to determine the coefficients. The character of the equilibrium points u* (f(u*) = 0) and the system's behavior in a neighborhood around an equilibrium point provide one way to ensure that a system is well behaved. Consider the ball around an equilibrium point Bε(u*) = {u : ∥u − u*∥ ≤ ε}. Nonlinear systems analysis typically asks an intuitive question: if the system is perturbed within this ball, does it then run away, stay where it is, or get attracted back? Specifically, a system is said to be locally asymptotically stable at an equilibrium point u* if one can specify an ε > 0 such that
$$u(0) \in B_\varepsilon(u^*) \implies \lim_{t \to \infty} u(t) = u^*.$$
Previous research has used the tools of Lyapunov functions to study a Hopfield network similar to the LCA architecture. However, all of these analyses make assumptions that do not encompass the ideal thresholding functions used in the LCAs (e.g., they assume activation functions that are continuously differentiable and/or monotone increasing). As long as the stability criteria are met, the HLCA:
    • has a finite number of equilibrium points;
    • has equilibrium points that are almost certainly isolated (no two equilibrium points are arbitrarily close together); and
    • is almost certainly locally asymptotically stable for every equilibrium point.
The conditions that hold "almost certainly" are true as long as none of the equilibria have components identically equal to the threshold (|u*m| ≠ λ, ∀m), which holds with overwhelming probability. With a finite number of isolated equilibria, we can be confident that the HLCA steady-state response is a distinct set of coefficients representing the stimulus. Asymptotic stability also implies a notion of robustness, guaranteeing that the system will remain well-behaved even under perturbations.
In physical systems it is important that the energy of both internal and external signals remain bounded for bounded inputs. One intuitive approach to ensuring output stability is to examine the energy function E. For non-decreasing threshold functions, the energy function is non-increasing
$$\left( \frac{d}{dt} E(t) \leq 0 \right)$$
for fixed inputs. While this is encouraging, it does not guarantee input-output stability. To appreciate this effect, note that the HLCA cost function is constant for nodes above threshold—nothing explicitly keeps a node from growing without bound once it is active.
While there is no universal input-output stability test for general non-linear systems, we observe that the LCA system equation is linear and fixed until a unit crosses threshold. A branch of control theory specifically addresses these switched systems. Results from this field indicate that input-output stability can be guaranteed if the individual linear subsystems are stable, and the system doesn't switch “too fast” between these subsystems. The HLCA linear subsystems are individually stable if and only if the stability criteria are met. Therefore, the HLCA is input-output stable as long as nodes are limited in how fast they can change states. The infinitely fast switching condition is avoided in practice either by the physical principles of the system implementation or through an explicit hysteresis in the thresholding function.
Viewing the sparse approximation problem through the lens of rate-distortion theory, the most powerful algorithm produces the lowest reconstruction MSE for a given sparsity. When the sparsity measure is the l1 norm, the problem is convex and the SLCA produces solutions with equivalent l1-sparsity to interior point BPDN solvers (demonstrated in FIG. 5). Despite the analytic appeal of the l1 norm as a sparsity measure, many systems concerned with energy minimization (including neural systems) likely have an interest in minimizing the l0 norm of the coefficients. The HLCA is appealing because of its l0-like sparsity penalty, but this objective function is not convex and the HLCA may find a local minimum. We will show that while the HLCA cannot guarantee the l0-sparsest solution, it produces coefficients that demonstrate sparsity comparable to MP for natural images. Insight about the HLCA reconstruction fidelity comes from rewriting the LCA system equation
$$\dot{u}(t) = \frac{1}{\tau}\left[ \Phi^T \big( s(t) - \hat{s}(t) \big) - u(t) + T_{(\alpha, \infty, \lambda)}(u(t)) \right]. \qquad (10)$$
For a constant input, HLCA equilibrium points ($\dot{u}(t) = 0$) occur when the residual error is orthogonal to active nodes and balanced with the internal state variables of inactive nodes,
$$\langle \phi_m, s(t) - \hat{s}(t) \rangle = \begin{cases} u_m(t) & \text{if } |u_m| \leq \lambda \\ 0 & \text{if } |u_m| > \lambda. \end{cases}$$
Therefore, when HLCA converges the coefficients will perfectly reconstruct the component of the input signal that projects onto the subspace spanned by the final set of active nodes. Using standard results from frame theory (Christensen, 2002), we can bound the HLCA reconstruction MSE in terms of the set of inactive nodes
$$\left\| s(t) - \hat{s}(t) \right\|^2 \leq \frac{1}{\eta_{\min}} \sum_{m \notin \mathcal{M}_{u(t)}} \left| \langle \phi_m, s(t) - \hat{s}(t) \rangle \right|^2 \leq \frac{\big( M - |\mathcal{M}_{u(t)}| \big)\, \lambda^2}{\eta_{\min}},$$
where ηmin is the minimum eigenvalue of ΦΦᵀ.
Though the HLCA is not guaranteed to find the globally optimal l0-sparsest solution, we must ensure that it does not produce unreasonably non-sparse solutions. While the system nonlinearity makes it impossible to analytically determine the LCA steady-state coefficients, it is possible to rule out some coefficients as not being possible. For example, let M ⊆ {1, . . . , M} be an arbitrary set of active coefficients. Using linear systems theory we can calculate the steady-state response ũᴹ assuming that M stays fixed. If |ũmᴹ| < λ for any m ∈ M or if |ũmᴹ| > λ for any m ∉ M, then M cannot describe the set of active nodes in the steady-state response and we call it inconsistent. When the stability criteria are met, the following statement is true for the HLCA: if s=φm, then any set of active coefficients M with m ∈ M and |M| > 1 is inconsistent. In other words, the HLCA may use the mth node or a collection of other nodes to represent s, but it cannot use a combination of both. This result extends intuitively beyond one-sparse signals: each component in an optimal decomposition is represented by either the optimal node or another collection of nodes, but not both. While not necessarily finding the optimal representation, the system does not needlessly employ both the optimal and extraneous nodes.
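The consistency test sketched below follows this reasoning: fix a candidate active set, solve the resulting linear steady state (hard threshold, so a_m = u_m on the active set), and reject the set if any assumed-active node falls below threshold or any assumed-inactive node rises above it. The helper and test values are our own illustrations:

```python
import numpy as np

def is_consistent(Phi, s, active, lam):
    active = np.asarray(active)
    Pa = Phi[:, active]
    # Active steady state: Pa^T Pa u = Pa^T s (residual orthogonal to active nodes).
    u_active = np.linalg.solve(Pa.T @ Pa, Pa.T @ s)
    residual = s - Pa @ u_active
    # Inactive steady state: u_n = <phi_n, s - s_hat>.
    inactive = np.setdiff1d(np.arange(Phi.shape[1]), active)
    u_inactive = Phi[:, inactive].T @ residual
    return bool(np.all(np.abs(u_active) >= lam) and np.all(np.abs(u_inactive) < lam))

rng = np.random.default_rng(1)
Phi = rng.standard_normal((8, 16))
Phi /= np.linalg.norm(Phi, axis=0)
s = 2.0 * Phi[:, 3]                              # s is a multiple of phi_3
print(is_consistent(Phi, s, [3], lam=0.5))       # matched node alone: consistent
print(is_consistent(Phi, s, [3, 7], lam=0.5))    # matched plus extraneous: inconsistent
```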
It can also be verified numerically that the LCAs achieve a combination of error and sparsity comparable with known methods. For example, we employed a dictionary consisting of the bandpass band of a steerable pyramid with one level and four orientation bands (i.e., the dictionary is approximately four times overcomplete). Image patches (32×32) were selected at random from a standard set of test images. The selected image patches were decomposed using the steerable pyramid and reconstructed using just the bandpass band. The bandpass image patches were also normalized to have unit energy. Each image patch was used as the fixed input to the LCA system equation (5) using either a soft or hard thresholding function (with variable threshold values) and with a biologically plausible membrane time constant of τ=10 ms. We simulated the system using a simple Euler's method approach (i.e., first order finite difference approximation) with a time step of 1 ms.
FIG. 6 shows the time evolution of the reconstruction MSE and l0 sparsity for SLCA and HLCA responding to an individual image, and FIG. 7 shows the mean steady-state tradeoff between l0 sparsity and MSE. For comparison, we also plotted the results obtained from using MP, a standard BPDN interior point solver followed by thresholding to enforce l0 sparsity (denoted "BPDNthr"), and SLCA with the same threshold applied (denoted "SLCAthr"). Most importantly, note that the HLCA and MP are almost identical in their sparsity-MSE tradeoff. Though the connections between the HLCA and MP were pointed out above, these are very different systems and there is no reason to expect them to produce the same coefficients. Additionally, note that the SLCA produces coefficients that are nearly as l0-sparse as can be achieved by thresholding the results of a BPDN solver, even though the SLCA keeps most coefficients zero throughout the calculation.
Systems sensing the natural world are faced with constantly changing stimuli due to both external movement and internal factors (e.g., sensor movement, etc.). As discussed above, sparse codes with temporal variations that also reflect the smooth nature of the changing signal would be easier for higher level systems to understand and interpret. However, approximation methods that only optimize sparsity at each time step (especially greedy algorithms) can produce “brittle” representations that change dramatically with relatively small stimulus changes. In contrast, LCAs naturally produce smoothly changing outputs in response to smoothly changing time-varying inputs. Assuming that the system time constant τ is faster than the temporal changes in the stimulus, the LCA will evolve to capture the stimulus change and converge to a new sparse representation. While local minima in an energy function are typically problematic, the LCAs can use these local minima to find coefficients that are “close” to their previous coefficients even if they are not optimally sparse. While permitting suboptimal coefficient sparsity, this property allows the LCA to exhibit inertia that smoothes the coefficient sequences. The inertia property exhibited in LCAs can be seen by focusing on a single node in the system equation (10):
$$\dot{u}_m(t) = \frac{1}{\tau} \begin{cases} \langle \phi_m, s(t) - \hat{s}(t) \rangle - u_m(t) & \text{when } |u_m(t)| < \lambda \\[4pt] \langle \phi_m, s(t) - \hat{s}(t) \rangle - \alpha\lambda & \text{when } |u_m(t)| \geq \lambda. \end{cases}$$
A new residual signal drives the coefficient higher but suffers an additive penalty. Inactive coefficients suffer an increasing penalty as they get closer to threshold while active coefficients only suffer a constant penalty αλ that can be very small (e.g., the HLCA has αλ=0). This property induces a “king of the hill” effect: when a new residual appears, active nodes move virtually unimpeded to represent it while inactive nodes are penalized until they reach threshold. This inertia encourages inactive nodes to remain inactive unless the active nodes cannot adequately represent the new input.
To illustrate this inertia, we applied the LCAs to a sequence of 144×144 pixel, bandpass filtered, normalized frames from the standard "foreman" test video sequence. The LCA input is switched to the next video frame every (simulated) 1/30 seconds. The results are shown in FIG. 8, along with comparisons to MP and BPDN applied independently on each frame. The changing coefficient locations are nodes that either became active or inactive at each frame. Mathematically, the number of changing coefficients at frame n is |Mu(n−1) ⊕ Mu(n)|, where ⊕ is the "exclusive OR" operator and u(n) are the internal state variables at the end of the simulation for frame n.
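This bookkeeping reduces to an element-wise XOR of consecutive active sets, as in the following small sketch (the array values are made up for illustration):

```python
import numpy as np

def changed_locations(u_prev, u_curr, lam):
    # Active sets at the end of consecutive frames; XOR marks nodes that
    # either became active or became inactive.
    active_prev = np.abs(u_prev) > lam
    active_curr = np.abs(u_curr) > lam
    changed = np.logical_xor(active_prev, active_curr)
    return int(changed.sum()), changed.sum() / max(int(active_curr.sum()), 1)

u_prev = np.array([0.0, 1.2, -1.5, 0.3, 0.0])
u_curr = np.array([1.1, 1.3,  0.0, 0.2, 0.0])
print(changed_locations(u_prev, u_curr, lam=1.0))   # (2, 1.0)
```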
This simulation highlights that the HLCA uses approximately the same number of active coefficients as MP but chooses coefficients that more efficiently represent the video sequence. The HLCA is significantly more likely to re-use active coefficient locations from the previous frame without making significant sacrifices in the sparsity of the solution. This difference is highlighted by the ratio of the number of changing coefficients to the number of active coefficients, |Mu(n−1) ⊕ Mu(n)|/|Mu(n)|. MP has a ratio of 1.7, meaning that MP finds an almost entirely new set of active coefficient locations for each frame. The HLCA has a ratio of 0.5, meaning that it changes approximately 25% of its coefficient locations at each frame. SLCA and BPDNthr have approximately the same performance, with regularity falling between HLCA and MP. Though the two systems can calculate different coefficients, the convexity of the energy function appears to be limiting the coefficient choices enough so that SLCA cannot smooth the coefficient time series substantially more than BPDNthr.
The simulation results indicate that the HLCA produces time-series coefficients that are much more regular than MP's. This regularity is visualized in FIG. 10 by looking at the time series of example HLCA and MP coefficients. Note that though the two coding schemes produce roughly the same number of non-zero entries, the HLCA does much better than MP at clustering the values into consecutive runs of positive or negative values. This type of smoothness better reflects the regularity in the natural video sequence input. We can quantify this increased regularity by examining the Markov state transitions. Specifically, each coefficient time series is a Markov chain with three possible states at frame n:
$$\sigma_m(n) = \begin{cases} - & \text{if } u_m(n) < -\lambda \\ 0 & \text{if } -\lambda \leq u_m(n) \leq \lambda \\ + & \text{if } u_m(n) > \lambda. \end{cases}$$
FIG. 9 shows the marginal probabilities P(•) of the states and the conditional probabilities P(•|•) of moving to a state given the previous state. The HLCA and MP are equally likely to have non-zero states, but the HLCA is over five times more likely than MP to have a positive coefficient stay positive (P(+|+)). Also, though the absolute probabilities are small, MP is roughly two orders of magnitude more likely to have a coefficient swing from positive to negative (P(−|+)) and vice-versa (P(+|−)). To quantify the regularity of the active coefficient locations we calculate the entropy of the coefficient states at frame n conditioned on the coefficient states at frame (n−1):
$$H\big(\sigma_m(n) \,\big|\, \sigma_m(n-1)\big) = -\sum_{j \in \{-,0,+\}} P(j) \sum_{i \in \{-,0,+\}} P(i \mid j) \log_2 P(i \mid j), \qquad (11)$$
plotted in FIG. 10. This conditional entropy indicates how much uncertainty there is about the status of the current coefficients given the coefficients from the previous frame. Note that the conditional entropy for MP is almost double the entropy for the HLCA, while SLCA is again similar to BPDNthr. The principal contributing factor to the conditional entropy appears to be the probability that a non-zero node remains in the same state (i.e., P(+|+) and P(−|−)). To illustrate, FIG. 10 shows that the change in conditional entropy is almost linear with varying P(+|+) (assuming P(−|−)=P(+|+) and all other transition probabilities are kept fixed).
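A sketch of the empirical computation behind equation (11), applied to a single coefficient's time series (the toy trajectory below is our own stand-in for a real coefficient sequence):

```python
import numpy as np

def conditional_entropy(u_series, lam):
    # Map each frame to a state in {-1, 0, +1} as defined above.
    states = np.sign(u_series) * (np.abs(u_series) > lam)
    H = 0.0
    for j in (-1, 0, 1):                      # previous-frame state
        prev = states[:-1] == j
        P_j = prev.mean()
        if P_j == 0:
            continue
        for i in (-1, 0, 1):                  # current-frame state
            P_ij = np.mean(states[1:][prev] == i)
            if P_ij > 0:
                H -= P_j * P_ij * np.log2(P_ij)
    return H

rng = np.random.default_rng(2)
u_series = 0.2 * np.cumsum(rng.standard_normal(500))   # smooth toy trajectory
print(conditional_entropy(u_series, lam=1.0))
```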
The substantial decrease in the conditional entropy for the HLCA compared to MP quantifies the increased regularity in time-series coefficients due to the inertial properties of the LCAs. The HLCA in particular encourages coefficients to maintain their present state (i.e., active or inactive) whenever possible.
Sparse approximation is an important paradigm in modern sensing and signal processing, though mechanisms to calculate these codes using parallel analog computational elements instead of digital computers have remained unknown. The present invention provides a locally competitive algorithm that solves a series of sparse approximation problems (including BPDN as a special case). These LCAs can be implemented using a parallel network of simple elements that match well with parallel analog computational architectures, including analog circuits and sensory cortical areas such as V1. Though these LCA systems are non-linear, we have shown that they are well-behaved under nominal operating conditions.
While the LCA systems (other than SLCA) are not generally guaranteed to find a globally optimal solution to their energy function, we have proven that the systems will be efficient in a meaningful sense. The SLCA system produces coefficients with sparsity levels comparable to BPDN solvers, but uses a natural physical implementation that is more energy efficient (i.e., it uses fewer non-zero inhibition signals between nodes). Perhaps most interestingly, the HLCA produces coefficients with almost identical sparsity as MP. This is significant because greedy methods such as MP are widely used in signal processing practice because of their efficiency, but HLCA offers a much more natural parallel implementation.
LCAs are particularly appropriate for time-varying data such as video sequences. The LCA ODE not only encourages sparsity but also introduces an inertia into the coefficient time-series that we have quantified using both raw counts of changing coefficient location and through the conditional entropy of the coefficient states. By allowing suboptimal sparsity in exchange for more regularity in the set of active coefficients, the LCAs produce smoother coefficient sequences that better reflect the structure of the time-varying stimulus. This property could prove valuable for higher levels of analysis that are trying to interpret the sensory scene from a set of sparse coefficients.
By using simple computational primitives, LCAs also have the benefit of being implementable in analog hardware. An imaging system using VLSI to implement LCAs as a data collection front end has the potential to be extremely fast and energy efficient. Instead of digitizing all of the sensed data and using digital hardware to run a compression algorithm, analog processing would compress the data into sparse coefficients before digitization. In this system, time and energy resources would only be spent digitizing coefficients that are a critical component in the signal representation.
Since the LCA network represents an analog way of finding sparse representations, it can be modified for the compressive sensing reconstruction problem. As shown in FIG. 11, the compressed input signal is received by a projection system 300 which passes projection vectors to a plurality of nodes 400. Mathematically, the compressive sensing reconstruction problem amounts to a constrained optimization problem that can be recast with Lagrange multipliers into an unconstrained optimization problem.
$$\min_s \left( \left\| y - \Theta s \right\|_2^2 + \lambda \left\| s \right\|_1 \right)$$
The network's inputs must now equal Θᵀy, and the inner products ⟨θi, θl⟩ determine the connection strengths between nodes i and l. When presented with the input m = Θᵀy, the LCA dynamically evolves to a steady state, producing as a result the set of sparse output coefficients. Unlike in the original applications of the LCA, in compressively sensed image reconstruction the diagonal values of ΘᵀΘ are not all equal to 1, which is required to find the optimal solution without error. As a result, the following equation accommodates this problem.
$$\dot{u}(t) = \frac{1}{\tau}\left[ m - D\, u(t) - (\Theta^T \Theta - D)\, \hat{s}(t) \right] \qquad (12)$$
Here, D is a diagonal matrix whose entries equal those of the diagonal of ΘᵀΘ. Considering the above-threshold components and taking into account the thresholding function, we have shown that the steady-state solution becomes
$$\hat{s} = s - \lambda D\, (\Theta^T \Theta)^{-1} \mathbf{1} \qquad (13)$$
Thus, the term λD(ΘᵀΘ)⁻¹1 represents the error. Note that this bias could be removed because it is data-independent. However, calculating it would require inverting a large matrix. Our goal is to reduce and control it as much as possible by other means.
The state equation (12) was simulated in Matlab using the ode45 ordinary differential equation solver over a simulated time of 1.0 second with a time constant τ of 0.1. Test Θ matrices were derived from the product of a randomly generated Φ matrix (Gaussian independent and identically distributed random variables with zero mean) and a Ψ basis matrix (2D Haar or Daubechies 4 wavelets). Test measurement vectors, denoted by y, were initially calculated by randomly choosing a sparse set of numerical values of predetermined sparsity K and then computing y=Θs. The nonzero elements of s were in known positions and of known values, permitting the ŝ (reconstructed) coefficients determined by the LCA network to be compared against the original s vector. Other measurement vectors were computed by taking natural images, computing the wavelet transform (known to sparsify natural images), removing a set number of small-magnitude coefficients to achieve a target sparsity, and then computing the inverse wavelet transform. A soft threshold function was used for Tλ(u(t)) with an initial threshold of one that was optimized for the target sparsity and PSNR of the result (see FIG. 11). Experimental simulations of the LCA typically produce all or nearly all nonzero coefficients in the correct locations, but error in the actual coefficient values averages 5-10%, increasing with higher K. For example, for K=1, error is usually close to 1%, but for K=10, error averages 8%.
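A rough Python/NumPy analogue of the simulation described above (forward Euler in place of ode45, a random Gaussian Θ in place of the wavelet construction, and assumed parameter values throughout; a sketch, not the reported experiment):

```python
import numpy as np

def cs_lca(y, Theta, lam=0.05, tau=0.1, dt=0.005, n_steps=2000):
    """Integrate equation (12) with a soft threshold to reconstruct sparse s."""
    m_in = Theta.T @ y                        # network input m = Theta^T y
    GtG = Theta.T @ Theta
    D = np.diag(np.diag(GtG))                 # diagonal of Theta^T Theta
    u = np.zeros(Theta.shape[1])
    for _ in range(n_steps):
        s_hat = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # T_lam(u)
        u += (dt / tau) * (m_in - D @ u - (GtG - D) @ s_hat)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(3)
n, n_meas, K = 128, 40, 5
Theta = rng.standard_normal((n_meas, n)) / np.sqrt(n_meas)
s = np.zeros(n)
s[rng.choice(n, K, replace=False)] = rng.standard_normal(K)
y = Theta @ s                                 # compressed measurements
s_hat = cs_lca(y, Theta)
# The lambda-dependent bias of equation (13) keeps this from being exact.
print("relative error:", np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```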
Two issues arise in using the LCA network for compressive sensing reconstruction. The essential error that it introduces has already been mentioned. A second issue is the network's connectivity: all nodes must be connected to all others, what we call the "many wires" problem. A key insight into mitigating both of these problems revolves around choosing Θ directly to have several desired properties. Because the basis matrix Ψ is predetermined, the relationship Θ=ΦΨ shows that choosing Θ amounts to choosing the measurement matrix Φ, which must have specific properties to enable compressive sensing. Because Ψ is an orthogonal matrix, Φ=ΘΨᵀ, an easily computed quantity. We propose choosing this matrix according to desirable properties we wish to impose on Θ. One very effective choice is a random binary Θ (values −1 and 1) with all elements subsequently divided by √N so that the diagonal matrix D=I. The compressive sensing constraints are not violated because an orthogonal transformation of a matrix of white noise is also white noise. Since D=I, equation (13) becomes
$$\hat{s} = s - \lambda\, (\Theta^T \Theta)^{-1} \mathbf{1} \qquad (14)$$
Using the binary Θ in our simulations, error was typically 5-10%. Although the reconstruction error is approximately the same as before, the D term's influence has been eliminated, simplifying the error expression and permitting greater control. As for the "many wires" problem, we want to choose the matrix Θ to have as many orthogonal columns as possible. In this way, the connection strength between the nodes corresponding to these columns will be zero.
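A sketch of this choice follows; the scaling divides by the square root of the number of measurement rows so that every column of Θ has exactly unit norm (the √N of the text, under its dimension convention), which is the condition D = I. The dimensions are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_meas, n_coef = 32, 64
Theta = rng.choice([-1.0, 1.0], size=(n_meas, n_coef)) / np.sqrt(n_meas)
# Each column has unit norm, so D = diag(Theta^T Theta) = I and
# equation (13) reduces to equation (14).
print(np.allclose(np.diag(Theta.T @ Theta), 1.0))   # True
```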
The foregoing description of the preferred embodiment of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. The embodiment was chosen and described in order to explain the principles of the invention and its practical application to enable one skilled in the art to utilize the invention in various embodiments as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto, and their equivalents. The entirety of each of the aforementioned documents is incorporated by reference herein.

Claims (14)

1. An analog system for sparsely approximating a signal comprising:
a matching system for calculating and outputting matching signals representative of how well-matched said signal is to a plurality of dictionary elements; and
a plurality of nodes, each node receiving one of said matching signals from said matching system, wherein each node comprises:
a source of an internal state signal; and
a thresholding element;
wherein said internal state signal in each node is calculated as a function of said matching signal received at said node and weighted outputs of all other nodes.
2. An analog system for sparsely approximating a signal according to claim 1, wherein said matching system comprises a projection system for projecting a signal vector onto said plurality of dictionary elements.
3. An analog system for sparsely approximating a signal according to claim 1 wherein said source of an internal state signal comprises a low pass averaging system.
4. An analog system for sparsely approximating a signal according to claim 1 wherein each node further comprises a plurality of weighting elements for receiving an output of said thresholding element and providing a plurality of weighted outputs.
5. An analog system for sparsely approximating a signal according to claim 1 wherein each node further comprises a plurality of weighting elements, each weighting element receiving an output from another one of said plurality of nodes and providing a weighted output to said source of an internal state signal.
6. An analog system for sparsely approximating a signal according to claim 1 wherein said internal state signal is derived from said matching signal less a sum of weighted outputs from said other nodes.
7. An analog system for sparsely approximating a signal according to claim 1 wherein said signal comprises a video signal.
8. An analog system for sparsely approximating a signal according to claim 1 wherein said source of an internal state signal comprises a voltage source.
9. An analog system for sparsely approximating a signal according to claim 3 wherein said low pass averaging system comprises a low pass averaging circuit.
10. An analog system for sparsely approximating a signal according to claim 9 wherein said low pass averaging circuit comprises a resistor and a capacitor.
11. An analog system for sparsely approximating a signal according to claim 1 wherein said source of an internal state signal comprises a current source.
12. A parallel dynamical system for computing sparse representations of data comprising:
a projection system for projecting said data onto projection vectors; and
a plurality of nodes, each node receiving one of said projection vectors from said projection system, wherein each node comprises:
a source of electrical energy;
a low pass averaging circuit; and
a thresholding element;
wherein said source of electrical energy in each node comprises a projection vector received at said node less weighted outputs of all other nodes.
13. A parallel dynamical system for computing sparse representations of data according to claim 12 wherein each node further comprises a plurality of weighting elements, each weighting element receiving an output from another one of said plurality of nodes and providing said weighted output to said source of electrical energy.
14. A parallel dynamical system for computing sparse representations of data comprising:
a plurality of nodes, each node being active or inactive and each said node comprising:
a leaky integrator element, wherein inputs to said leaky integrator element cause an activation potential to charge up; and
a thresholding element for receiving said activation potential and for producing an output coefficient, said output coefficient being the result of an activation function applied to said activation potential and parameterized by a system threshold;
wherein active nodes inhibit other nodes with inhibition signals proportional to both level of activity of said active nodes and a similarity of receptive fields of said active nodes.
US12/035,424 2007-02-21 2008-02-21 Analog system for computing sparse codes Active 2028-12-11 US7783459B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/035,424 US7783459B2 (en) 2007-02-21 2008-02-21 Analog system for computing sparse codes

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US90267307P 2007-02-21 2007-02-21
US12/035,424 US7783459B2 (en) 2007-02-21 2008-02-21 Analog system for computing sparse codes

Publications (2)

Publication Number Publication Date
US20080270055A1 US20080270055A1 (en) 2008-10-30
US7783459B2 true US7783459B2 (en) 2010-08-24

Family

ID=39888011

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/035,424 Active 2028-12-11 US7783459B2 (en) 2007-02-21 2008-02-21 Analog system for computing sparse codes

Country Status (1)

Country Link
US (1) US7783459B2 (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100077016A1 (en) * 2008-09-24 2010-03-25 Eduardo Perez Estimating a Signal Based on Samples Derived from Random Projections
EP2424117A2 (en) 2010-08-24 2012-02-29 Her Majesty the Queen in Right of Canada, as represented by the Minister of Industry, through The Communications Research Centre Canada Sparse data compression
US20120259590A1 (en) * 2011-04-11 2012-10-11 Jong Chul Ye Method and apparatus for compressed sensing with joint sparsity
US20130151924A1 (en) * 2011-12-08 2013-06-13 Harris Corporation, Corporation Of The State Of Delaware Data system for interfacing with a remote data storage facility using compressive sensing and associated methods
US8538329B2 (en) 2011-12-08 2013-09-17 Harris Corporation System with sub-nyquist signal acquisition and transmission and associated methods
US8570406B2 (en) 2010-08-11 2013-10-29 Inview Technology Corporation Low-pass filtering of compressive imaging measurements to infer light level variation
US8605786B2 (en) * 2007-09-04 2013-12-10 The Regents Of The University Of California Hierarchical motion vector processing method, software and devices
US8760542B2 (en) * 2010-08-11 2014-06-24 Inview Technology Corporation Compensation of compressive imaging measurements based on measurements from power meter
US9137528B1 (en) 2012-09-28 2015-09-15 Google Inc. Synchronizing sparse coding dictionaries for use in communication sessions
US9167274B1 (en) 2012-03-21 2015-10-20 Google Inc. Generating synchronized dictionaries for sparse coding
US9633306B2 (en) * 2015-05-07 2017-04-25 Siemens Healthcare Gmbh Method and system for approximating deep neural networks for anatomical object detection
US9823350B2 (en) 2014-07-31 2017-11-21 Raytheon Company Linear mode computational sensing LADAR
CN107667381A (en) * 2015-06-04 2018-02-06 西门子保健有限责任公司 The medical model represented using non-linear and non-negative sparse is classified
US10049683B2 (en) 2013-10-21 2018-08-14 Dolby International Ab Audio encoder and decoder
CN108924148A (en) * 2018-07-18 2018-11-30 中南大学 A kind of source signal collaborative compression perception data restoration methods
CN106993240B (en) * 2017-03-14 2020-10-16 天津大学 Multi-video abstraction method based on sparse coding

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8437564B2 (en) * 2007-08-07 2013-05-07 Ntt Docomo, Inc. Image and video compression using sparse orthonormal transforms
US8787186B2 (en) 2008-12-12 2014-07-22 Blackberry Limited Mobility in a distributed antenna system
US8693597B2 (en) * 2008-12-12 2014-04-08 Blackberry Limited Sensor-based wireless communication systems using compressive sampling
US20100290395A1 (en) * 2009-04-15 2010-11-18 Research In Motion Ltd Sensor-based wireless communication systems using compressive sampling
US8644244B2 (en) * 2008-12-12 2014-02-04 Research In Motion Limited Sensor-based wireless communication systems using compressive sampling
CA2751717C (en) * 2008-12-22 2015-10-06 Schlumberger Canada Limited Automatic dispersion extraction of multiple time overlapped acoustic signals
US8768095B2 (en) * 2009-08-14 2014-07-01 General Electric Company System and method for processing data signals
US8484024B2 (en) 2011-02-24 2013-07-09 Nuance Communications, Inc. Phonetic features for speech recognition
GB201114255D0 (en) * 2011-08-18 2011-10-05 Univ Antwerp Smart sampling and sparse reconstruction
CN105338357B (en) * 2015-09-29 2018-08-24 湖北工业大学 A kind of distributed video compressed sensing decoding method
WO2019074744A1 (en) * 2017-10-09 2019-04-18 Fovia Inc. Bit prediction method and system using a statistical model
CN110766139A (en) * 2019-10-21 2020-02-07 哈尔滨工业大学 Composite neural network model and modeling method thereof

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5303058A (en) * 1990-10-22 1994-04-12 Fujitsu Limited Data processing apparatus for compressing and reconstructing image data
US6298166B1 (en) * 1998-03-30 2001-10-02 Seiko Epson Corporation Image transformations in the compressed domain
US20010028683A1 (en) * 1999-12-28 2001-10-11 Vincent Bottreau Video encoding method based on the matching pursuit algorithm
US20030058943A1 (en) * 2001-07-18 2003-03-27 Tru Video Corporation Dictionary generation method for video and image compression
US20040268334A1 (en) * 2003-06-30 2004-12-30 Kalyan Muthukumar System and method for software-pipelining of loops with sparse matrix routines
US20060029279A1 (en) * 2004-08-09 2006-02-09 Donoho David L Method and apparatus for compressed sensing
US20070019723A1 (en) * 2003-08-12 2007-01-25 Koninklijke Philips Electronics N.V. Video encoding and decoding methods and corresponding devices
US20070027656A1 (en) * 2005-05-10 2007-02-01 Baraniuk Richard G Method and Apparatus for Distributed Compressed Sensing
US7345603B1 (en) * 2006-11-07 2008-03-18 L3 Communications Integrated Systems, L.P. Method and apparatus for compressed sensing using analog projection
US20080168324A1 (en) * 2005-01-23 2008-07-10 Zte Corporation Basic Matrix Based on Irregular Ldpc, Codec and Generation Method Thereof
US7450032B1 (en) * 2005-12-27 2008-11-11 At & T Intellectual Property Ii, L.P. System and method for encoding a signal using compressed sensor measurements
US7584396B1 (en) * 2005-12-27 2009-09-01 At&T Corp. System and method for decoding a signal using compressed sensor measurements

Patent Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5303058A (en) * 1990-10-22 1994-04-12 Fujitsu Limited Data processing apparatus for compressing and reconstructing image data
US6298166B1 (en) * 1998-03-30 2001-10-02 Seiko Epson Corporation Image transformations in the compressed domain
US20010028683A1 (en) * 1999-12-28 2001-10-11 Vincent Bottreau Video encoding method based on the matching pursuit algorithm
US20030058943A1 (en) * 2001-07-18 2003-03-27 Tru Video Corporation Dictionary generation method for video and image compression
US7003039B2 (en) * 2001-07-18 2006-02-21 Avideh Zakhor Dictionary generation method for video and image compression
US20040268334A1 (en) * 2003-06-30 2004-12-30 Kalyan Muthukumar System and method for software-pipelining of loops with sparse matrix routines
US20070019723A1 (en) * 2003-08-12 2007-01-25 Koninklijke Philips Electronics N.V. Video encoding and decoding methods and corresponding devices
US20060029279A1 (en) * 2004-08-09 2006-02-09 Donoho David L Method and apparatus for compressed sensing
US7646924B2 (en) * 2004-08-09 2010-01-12 David Leigh Donoho Method and apparatus for compressed sensing
US20080168324A1 (en) * 2005-01-23 2008-07-10 Zte Corporation Basic Matrix Based on Irregular LDPC, Codec and Generation Method Thereof
US20080129560A1 (en) * 2005-05-10 2008-06-05 Baraniuk Richard G Method and Apparatus for Distributed Compressed Sensing
US7271747B2 (en) * 2005-05-10 2007-09-18 Rice University Method and apparatus for distributed compressed sensing
US20070027656A1 (en) * 2005-05-10 2007-02-01 Baraniuk Richard G Method and Apparatus for Distributed Compressed Sensing
US7450032B1 (en) * 2005-12-27 2008-11-11 AT&T Intellectual Property II, L.P. System and method for encoding a signal using compressed sensor measurements
US7584396B1 (en) * 2005-12-27 2009-09-01 At&T Corp. System and method for decoding a signal using compressed sensor measurements
US7345603B1 (en) * 2006-11-07 2008-03-18 L3 Communications Integrated Systems, L.P. Method and apparatus for compressed sensing using analog projection

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8605786B2 (en) * 2007-09-04 2013-12-10 The Regents Of The University Of California Hierarchical motion vector processing method, software and devices
US8239436B2 (en) 2008-09-24 2012-08-07 National Instruments Corporation Estimating a signal based on samples derived from dot products and random projections
US20100077016A1 (en) * 2008-09-24 2010-03-25 Eduardo Perez Estimating a Signal Based on Samples Derived from Random Projections
US8885073B2 (en) 2010-08-11 2014-11-11 Inview Technology Corporation Dedicated power meter to measure background light level in compressive imaging system
US8760542B2 (en) * 2010-08-11 2014-06-24 Inview Technology Corporation Compensation of compressive imaging measurements based on measurements from power meter
US8570406B2 (en) 2010-08-11 2013-10-29 Inview Technology Corporation Low-pass filtering of compressive imaging measurements to infer light level variation
US8711015B2 (en) 2010-08-24 2014-04-29 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada Sparse data compression
EP2424117A2 (en) 2010-08-24 2012-02-29 Her Majesty the Queen in Right of Canada, as represented by the Minister of Industry, through The Communications Research Centre Canada Sparse data compression
US20120259590A1 (en) * 2011-04-11 2012-10-11 Jong Chul Ye Method and apparatus for compressed sensing with joint sparsity
US8516340B2 (en) * 2011-12-08 2013-08-20 Harris Corporation Data system for interfacing with a remote data storage facility using compressive sensing and associated methods
US8538329B2 (en) 2011-12-08 2013-09-17 Harris Corporation System with sub-nyquist signal acquisition and transmission and associated methods
US20130151924A1 (en) * 2011-12-08 2013-06-13 Harris Corporation, Corporation Of The State Of Delaware Data system for interfacing with a remote data storage facility using compressive sensing and associated methods
US9167274B1 (en) 2012-03-21 2015-10-20 Google Inc. Generating synchronized dictionaries for sparse coding
US9137528B1 (en) 2012-09-28 2015-09-15 Google Inc. Synchronizing sparse coding dictionaries for use in communication sessions
US10049683B2 (en) 2013-10-21 2018-08-14 Dolby International Ab Audio encoder and decoder
US9823350B2 (en) 2014-07-31 2017-11-21 Raytheon Company Linear mode computational sensing LADAR
US9633306B2 (en) * 2015-05-07 2017-04-25 Siemens Healthcare Gmbh Method and system for approximating deep neural networks for anatomical object detection
CN107667381A (en) * 2015-06-04 2018-02-06 Siemens Healthcare GmbH Medical pattern classification using non-linear and non-negative sparse representations
CN107667381B (en) * 2015-06-04 2022-02-11 Siemens Healthcare GmbH Medical pattern classification using non-linear and non-negative sparse representations
CN106993240B (en) * 2017-03-14 2020-10-16 Tianjin University Multi-video abstraction method based on sparse coding
CN108924148A (en) * 2018-07-18 2018-11-30 Central South University Multi-source signal collaborative compressed sensing data recovery method
CN108924148B (en) * 2018-07-18 2020-07-24 Central South University Multi-source signal collaborative compressed sensing data recovery method

Also Published As

Publication number Publication date
US20080270055A1 (en) 2008-10-30

Similar Documents

Publication Publication Date Title
US7783459B2 (en) Analog system for computing sparse codes
Rozell et al. Sparse coding via thresholding and local competition in neural circuits
Ballé et al. Nonlinear transform coding
Ballé et al. Variational image compression with a scale hyperprior
Iliadis et al. Deepbinarymask: Learning a binary mask for video compressive sensing
Ballé et al. Integer networks for data compression with latent-variable models
Skatchkovsky et al. End-to-end learning of neuromorphic wireless systems for low-power edge artificial intelligence
Murray et al. Learning sparse overcomplete codes for images
WO2023207836A1 (en) Image encoding method and apparatus, and image decompression method and apparatus
Li et al. Fixed-sign binary neural network: An efficient design of neural network for Internet-of-Things devices
CN117501696A (en) Parallel context modeling using information shared between partitions
CN116309135A (en) Diffusion model processing method and device and picture processing method and device
CN115361559A (en) Image encoding method, image decoding method, image encoding device, image decoding device, and storage medium
Golts et al. Image compression optimized for 3D reconstruction by utilizing deep neural networks
Tartaglione et al. HEMP: high-order entropy minimization for neural network compression
EP4449313A1 (en) Operation of a neural network with clipped input data
WO2023177318A1 (en) Neural network with approximated activation function
Mahmoud et al. Learning algorithm effect on multilayer feed forward artificial neural network performance in image coding
Chang et al. Informative sensing
Rozell et al. Neurally plausible sparse coding via thresholding and local competition
US20240348783A1 (en) Methods and apparatus for approximating a cumulative distribution function for use in entropy coding or decoding data
Jalali Toward Theoretically Founded Learning-Based Compressed Sensing
US20230262237A1 (en) System and methods for video analysis
Xu Learning in Compressed Domains

Legal Events

Date Code Title Description
AS Assignment

Owner name: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:OLSHAUSEN, BRUNO ADOLPHUS;REEL/FRAME:021626/0548

Effective date: 20080401

Owner name: WILLIAM MARSH RICE UNIVERSITY, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROZELL, CHRISTOPHER JOHN;JOHNSON, DON HERRICK;BARANIUK, RICHARD GORDON;AND OTHERS;REEL/FRAME:021626/0687;SIGNING DATES FROM 20080401 TO 20080404

STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: PAT HOLDER NO LONGER CLAIMS SMALL ENTITY STATUS, ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: STOL); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

AS Assignment

Owner name: NAVY, SECRETARY OF THE UNITED STATES OF AMERICA, VIRGINIA

Free format text: CONFIRMATORY LICENSE;ASSIGNOR:RICE UNIVERSITY;REEL/FRAME:042221/0020

Effective date: 20120326

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552)

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: 11.5 YR SURCHARGE- LATE PMT W/IN 6 MO, LARGE ENTITY (ORIGINAL EVENT CODE: M1556); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 12TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1553); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 12