WO2015048232A1 - Systems, devices, and methods for classification and sensor identification using enhanced sparsity - Google Patents


Info

Publication number
WO2015048232A1
Authority
WO
WIPO (PCT)
Prior art keywords
determining
significant
optimal sparse
set information
circuitry
Prior art date
Application number
PCT/US2014/057370
Other languages
French (fr)
Inventor
Bingni W. BRUNTON
Steven L. BRUNTON
J. Nathan KUTZ
Joshua L. PROCTOR
Original Assignee
Tokitae Llc
University Of Washington Through Its Center For Commercialization
Priority date
Filing date
Publication date
Application filed by Tokitae Llc and University Of Washington Through Its Center For Commercialization
Publication of WO2015048232A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/211Selection of the most significant subset of features
    • G06F18/2115Selection of the most significant subset of features by evaluating different subsets according to an optimisation criterion, e.g. class separability, forward selection or backward elimination

Definitions

  • the present disclosure is directed to, among other things, an object classification system.
  • the object classification system includes a significant pixel identification module including circuitry for determining significant pixels from one or more reference images.
  • the object classification system includes a significant pixel identification module including circuitry for generating optimal sparse pixel set information.
  • the object classification system includes an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information.
  • the object classification system includes an object classification module including circuitry for classifying at least one object in the first image based on the comparison.
  • the present disclosure is directed to, among other things, an object classification system including circuitry for determining an optimal sparse pixel set from one or more reference images.
  • the object classification system includes circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image.
  • the present disclosure is directed to, among other things, a method for classifying images.
  • the method for classifying images includes determining optimal sparse pixel set information from one or more reference images.
  • the method for classifying images includes comparing one or more parameters associated with the optimal sparse pixel set information to one or more pixels associated with an image.
  • the method for classifying images includes classifying at least one object in an image based on the comparison.
  • the present disclosure is directed to, among other things, an object classification system including a significant pixel determination module, an optimal sparse pixel set information generation module, and an object classification module.
  • the present disclosure is directed to, among other things, an object classification system including a significant pixel identification module and an object classification module.
  • the object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to optimal sparse pixel set information, and classifying at least one object in the first image based on the comparison.
  • the present disclosure is directed to, among other things, a system including a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network.
  • the system includes an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors.
  • the system includes a target decision task module including circuitry for classifying information associated with a target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • system for classifying grid behavior includes circuitry for determining significant sensors from a sensor network.
  • system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information.
  • system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
  • the system for classifying grid behavior includes circuitry for classifying information associated with a target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • the present disclosure is directed to, among other things, a method including determining significant sensors from a sensor network.
  • the method includes generating optimal sparse sensor set information.
  • the method includes generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
  • the method includes classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • the present disclosure is directed to, among other things, a system including a significant sensors determination from a sensor network module.
  • the system includes an optimal sparse sensor set information generation module.
  • the system includes a target decision task classification module.
  • the present disclosure is directed to, among other things, a system including a significant sensors identification module and an optimal sparse sensor module.
  • the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors.
  • the system includes a target decision task module.
  • Figure 1 is a schematic diagram of an extension of image classification procedure to a class of datasets.
  • Figure 2 depicts a visualization of the Fisherface and how sparse sensors approximate its dominant features.
  • Figure 3 shows a cross-validation of classification accuracy between images of cats and dogs.
  • Figure 4 illustrates mean sensor locations averaged over 400 random learning iterations.
  • Figure 5 shows classification accuracy of sensors.
  • Figure 6a depicts classification of 3 faces as coupling weight varies.
  • Figure 6b illustrates classification of 4 faces as coupling weight varies.
  • Figure 6c shows classification of 5 faces as coupling weight varies.
  • Figure 7 depicts increasing the coupling weight.
  • Figure 8 illustrates mean sensor locations to discriminate between 3 faces.
  • Figure 9 is a schematic diagram of a system according to one embodiment.
  • Figure 10a depicts examples of cat and dog images used in the dataset.
  • Figure 10b illustrates the four PCA modes with the largest singular values of the full image dataset.
  • Figure 11a shows examples of images from the Yale Faces Database B.
  • Figure 11b depicts the six PCA modes with the largest singular values of the full image dataset.
  • Figure 12 outlines exemplary sensor networks, pseudo-static sensor networks, grid systems, or the like that may benefit from the technologies and methodologies described herein.
  • Figure 13 shows a flow diagram of a method for classifying images according to one embodiment.
  • Figure 14 shows a flow diagram of a method according to one embodiment.
  • the compressive sensing strategy relies on the measurements being incoherent with respect to the known basis, so that measurement vectors are uncorrelated with basis directions. Incoherence holds between many pairs of bases, such as between delta functions and the Fourier basis. Therefore, it is possible to reconstruct images, which are sparse in the Fourier domain, from single-pixel measurements, which may be viewed as discrete spatial delta functions.
  • a random Gaussian or Bernoulli matrix is incoherent with respect to any arbitrary basis with high probability (see: E. J. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52(2):489-509, 2006; D. L. Donoho, Compressed sensing. IEEE Transactions on Information Theory, 52(4): 1289-1306, 2006, which are each incorporated by reference herein).
  • Compressive sensing is able to reconstruct a signal using surprisingly few measurements, but assigning a signal to one of a few categories may be accomplished with orders-of-magnitude fewer measurements (see for instance J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210-227, 2009, which is incorporated by reference herein).
  • PCA principal components analysis
  • K. Pearson, On lines and planes of closest fit to systems of points in space, Philosophical Magazine, 2(7-12):559-572, 1901; and R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley-Interscience, 2000, which are each incorporated by reference herein
  • SVD singular value decomposition
  • LDA Linear discriminant analysis
  • classification is particularly efficient because of 1) the use of a tailored low-dimensional feature basis, and 2) the simplicity of deciding on a category rather than reconstructing exact details (J. N. Kutz, Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data. Oxford University Press, 2013, which is incorporated by reference herein).
  • Methods and systems described herein include a novel framework to harness the enhanced sparsity in image recognition to classify images based on very few pixel measurements. Its distinct contribution consists of optimally selecting, within a large set of measurement locations, a smaller subset of key locations that serve the classification. With the systems and methods, classification using very few learned pixel sensors performs comparably to classification using the full image. Further, the algorithm has a parameter to tune the trade-off between fewer sensors and accuracy.
  • Figs. 2 and 7 illustrate the sensor locations produced by the algorithm.
  • the principle of enhanced sparsity may also have relevance for biological organisms, which often need to make decisions based on very limited sensory information. Specifically, organisms interact with high-dimensional physical systems in nature but must rely on information gathered through a handful of sensory organs.
  • the methods and systems including the sparse sensor algorithm provide one approach to answering the question: given a fixed budget of sensors, where should they be placed to optimally inform decision-making?
  • Methods and systems include 1) designing optimal sensor locations that take advantage of enhanced sparsity in classification, and 2) designing sensors from
  • Fig. 1 illustrates a schematic of the classification procedure based on full (top), randomly subsampled (middle), and sparsely sensed (bottom) data.
  • Data measurements (left) are projected onto a PCA feature space (middle).
  • LDA defines the projection wᵀ from PCA space to ℝ^(c-1) (right), where classification occurs.
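The full-image pipeline of Fig. 1 (measurements projected onto a PCA feature basis, then through an LDA direction for classification) can be sketched as follows. This is an illustrative two-class example with synthetic data and dimensions, not the patent's datasets:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 100, 40, 4                       # pixels per image, images per class, PCA rank
modes = rng.normal(size=(n, r))            # shared "eigenimage" directions (synthetic)
X1 = modes @ (rng.normal(size=(r, m)) + 2.0)   # class-1 images as columns
X2 = modes @ (rng.normal(size=(r, m)) - 2.0)   # class-2 images as columns
X = np.hstack([X1, X2])

# project onto the PCA feature basis (left singular vectors of the centered data)
xbar = X.mean(axis=1, keepdims=True)
Psi, _, _ = np.linalg.svd(X - xbar, full_matrices=False)
Psi_r = Psi[:, :r]
A = Psi_r.T @ (X - xbar)                   # r x 2m matrix of features
A1, A2 = A[:, :m], A[:, m:]

# two-class LDA: w maximizes between-class over within-class variance
mu1, mu2 = A1.mean(axis=1), A2.mean(axis=1)
Sw = (m - 1) * (np.cov(A1) + np.cov(A2))   # within-class scatter matrix
w = np.linalg.solve(Sw, mu1 - mu2)         # LDA direction in feature space

# classify by thresholding at the midpoint of the projected class means
thresh = 0.5 * (w @ mu1 + w @ mu2)
acc1 = np.mean(w @ A1 > thresh)
acc2 = np.mean(w @ A2 < thresh)
print(acc1, acc2)                          # per-class training accuracy
```

With well-separated synthetic classes, both accuracies come out near 1.0; the same projections are reused below when measurements become sparse.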
  • the Φ matrix is a random projection that sub-samples the full image; the Φ̂ matrix is a learned projection that samples the full image at specific sensors as described below.
  • Sparse sensors can also be learned from the randomly sub-sampled image, resulting in Φ̂₂Φ, which is not the same as, but may approximate, Φ̂.
  • Compressive sensing theory states that if the information of a signal x ∈ ℝⁿ is k-sparse in a transformed basis Ψ (all but k coefficients are zero), it is possible to reconstruct the signal from far fewer than n measurements.
  • the signal x has coefficients a in basis Ψ, so that x = Ψa.
  • Eq. (2) may be solved by minimizing the ℓ₁ norm (E. J. Candès, J. Romberg, and T. Tao, cited above).
  • sparse random matrices where many of the entries of the measurement vectors are zero, also allow reconstruction (as reviewed in A. C. Gilbert and P. Indyk. Sparse recovery using sparse matrices.
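The ℓ₁ (basis pursuit) reconstruction described above can be posed as a linear program by splitting the coefficient vector into non-negative parts. The sketch below recovers a k-sparse coefficient vector from incoherent Gaussian measurements; the dimensions and the SciPy solver are illustrative choices, not from the patent:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, p = 64, 3, 24                  # basis size, sparsity, number of measurements
a = np.zeros(n)
a[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)  # k-sparse coefficients
Theta = rng.normal(size=(p, n)) / np.sqrt(p)   # incoherent Gaussian measurement matrix
y = Theta @ a                                   # p << n measurements

# min ||a||_1  s.t.  Theta a = y, as an LP with a = u - v, u, v >= 0
cost = np.ones(2 * n)
A_eq = np.hstack([Theta, -Theta])
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
a_hat = res.x[:n] - res.x[n:]
print(np.linalg.norm(a_hat - a))     # recovery error (near zero when p is large enough)
```

The point made in the surrounding text is that this reconstruction step, while remarkable, is far more expensive in measurements than the classification task that follows.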
  • JL Johnson-Lindenstrauss
  • δᵢ(a) is a vector the same size as a whose only non-zero entries are the entries in a associated with training images in class i.
  • y is assigned a category based on the class whose approximation ŷᵢ minimizes the residual between y and ŷᵢ.
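The class-wise residual test above can be sketched as follows. Note one deliberate simplification, flagged here because it departs from the SRC framework: the coefficients a are obtained by least squares rather than by ℓ₁ minimization, which keeps the sketch short; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, c = 50, 10, 3                       # pixels, training images per class, classes
means = 3.0 * rng.normal(size=(c, n))     # hypothetical per-class mean images
# Theta: training images as columns, grouped by class
Theta = np.hstack([means[i][:, None] + rng.normal(size=(n, m)) for i in range(c)])

def classify(y, Theta, m, c):
    # least-squares stand-in for the coefficients a (SRC uses l1 minimization here)
    a, *_ = np.linalg.lstsq(Theta, y, rcond=None)
    residuals = []
    for i in range(c):
        delta_i = np.zeros_like(a)                         # delta_i(a): keep only the
        delta_i[i * m:(i + 1) * m] = a[i * m:(i + 1) * m]  # class-i coefficients
        residuals.append(np.linalg.norm(y - Theta @ delta_i))
    return int(np.argmin(residuals))       # class whose approximation fits best

y = means[1] + rng.normal(size=n)          # a new image drawn from class 1
print(classify(y, Theta, m, c))
```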
  • the SRC framework exploits enhanced sparsity for classification, making use of the sparse subspace structure of face images. It has been demonstrated to work
  • the columns of Ψ are the left singular vectors of X; they span the columns of X and are often referred to as the principal components or features of the data set.
  • the columns of the matrix Ψ are eigenvectors of XXᵀ.
  • the SVD provides a systematic approach to transform data into a lower- dimensional representation.
  • the singular values Σ of Eq. (4), when ordered by magnitude, will contain many entries that are exactly zero, providing evidence for an exact low-rank representation.
  • the singular values often exhibit a power-law decrease, where the low-rank representation of the data is approximate.
  • the power-law decrease of the singular values provides the opportunity for a heuristic (at least quantitatively informed) decision about the number of information-rich dimensions to retain for the reduced-order subspace. Taking the r columns of Ψ corresponding to the r largest singular values as basis vectors, a truncated basis Ψᵣ can be formed. We may use this basis to project data from the full measurement space into the reduced r-dimensional PCA space (known as feature space).
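The truncation heuristic can be sketched as follows, with a synthetic data matrix whose singular values drop sharply after rank 3; the 99% energy threshold is an illustrative choice, not a value from the patent:

```python
import numpy as np

rng = np.random.default_rng(3)
# data that is approximately rank 3: low-rank signal plus small noise
U = np.linalg.qr(rng.normal(size=(60, 3)))[0]
X = 5.0 * (U @ rng.normal(size=(3, 40))) + 0.01 * rng.normal(size=(60, 40))

Psi, S, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(S**2) / np.sum(S**2)          # cumulative captured variance
r = int(np.searchsorted(energy, 0.99)) + 1       # smallest r capturing 99% of the energy
Psi_r = Psi[:, :r]                               # truncated basis
A = Psi_r.T @ X                                  # data in the r-dimensional feature space
print(r, A.shape)
```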
  • LDA linear discriminant analysis
  • LDA attempts to find a set of directions in feature space w ∈ ℝ^(r×(c-1)) where the between-class variance is maximized and the within-class variance is minimized.
  • S_W is the within-class scatter matrix
  • LDA may also be performed on the full data X.
  • w are eigenvectors corresponding to the nonzero eigenvalues of S_W⁻¹S_B, where S_B is the between-class scatter matrix. It should be noted that the number of non-trivial LDA directions must be less than or equal to the number of categories minus one (c - 1), and that at least r + c samples are needed to guarantee S_W is not singular (C. M. Bishop, Pattern Recognition and Machine Learning, Springer, New York, 2006, which is incorporated by reference herein).
  • the top "full image" row of Fig. 1 illustrates the PCA-LDA process for the classification task.
  • the LDA projection is wᵀ: ℝʳ → ℝ, and we may apply a threshold value that separates the two categories.
  • the threshold is the midpoint between the means of the two classes, μ₁ and μ₂.
  • NC nearest centroid
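For more than two categories, the nearest-centroid (NC) rule assigns each point in decision space to the closest class mean. A minimal sketch with synthetic decision-space data (the centroids and dimensions are illustrative, not from the patent):

```python
import numpy as np

rng = np.random.default_rng(4)
c, m = 3, 30                                    # classes, samples per class
centroids = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])  # true class means
eta = np.vstack([mu + rng.normal(size=(m, 2)) for mu in centroids])  # projected samples
labels = np.repeat(np.arange(c), m)

# learn centroids from the projected training data, then classify by nearest centroid
learned = np.array([eta[labels == i].mean(axis=0) for i in range(c)])
preds = np.array([int(np.argmin(np.linalg.norm(learned - p, axis=1))) for p in eta])
print((preds == labels).mean())                 # training accuracy
```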
  • the discrimination vector w encodes the direction in the Ψᵣ feature space that is most informative in discriminating between categories of observations given by X.
  • We seek a measurement vector s that satisfies Ψᵣᵀ s = w.
  • FIG. 2 depicts a visualization of these vectors for the dog/cat face recognition problem discussed below.
  • Figure 2 (a) shows the magnitude of X along with the 20-pixel measurement vector s obtained by ℓ₁ minimization of Eq. (7).
  • the image of X (grey) and the sparse pixels (red) are shown in Fig. 2 (b).
  • As shown in Fig. 2 (a), it is not possible to obtain sparse pixel measurements by thresholding X; rather, s is a sparse image that exactly projects to w in Ψᵣ space.
  • a q × n projection matrix Φ̂ maps x to the sparse measurements x̂.
  • the number of sensors q is at most r, although in practice q is usually equal to r.
  • the rows of Φ̂ are the rows of the n × n identity matrix corresponding to the non-zero elements of s.
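The two-category sensor-selection step can be sketched end to end: minimize the ℓ₁ norm of s subject to s projecting exactly to the discrimination vector in feature space, then assemble the point-measurement matrix from the non-zero entries of s. The basis, discrimination vector, LP formulation, and SciPy solver here are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
n, r = 40, 5                                     # pixels, feature-space dimension
Psi_r = np.linalg.qr(rng.normal(size=(n, r)))[0] # truncated PCA basis (synthetic)
w = rng.normal(size=r)                           # discrimination vector (synthetic)

# min ||s||_1  s.t.  Psi_r.T @ s = w, as an LP with s = u - v, u, v >= 0
cost = np.ones(2 * n)
A_eq = np.hstack([Psi_r.T, -Psi_r.T])
res = linprog(cost, A_eq=A_eq, b_eq=w, bounds=[(0, None)] * (2 * n))
s = res.x[:n] - res.x[n:]

sensors = np.flatnonzero(np.abs(s) > 1e-8)       # learned pixel/sensor locations
Phi_hat = np.eye(n)[sensors]                     # rows of the identity matrix at sensors
x = rng.normal(size=n)                           # a full image
x_hat = Phi_hat @ x                              # q point measurements of x
print(len(sensors), x_hat.shape)
```

Because the LP has only r equality constraints, its vertex solution has at most r non-zero entries, matching the statement that the number of sensors q is at most r.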
  • Φ̂ and Φ̂₂Φ are both r × n matrices, where typically r ≪ p ≪ n.
  • The measurement matrix can be either Φ̂ or Φ̂₂Φ, as illustrated in Fig. 1. Possible methods to project these learned sparse measurements to classification space are described below.
  • ‖M‖_F is the Frobenius norm
  • ε is a small error tolerance (ε ≈ 10⁻¹⁰ for the examples below).
  • the value of the coupling weight λ determines the number of non-zero rows of s, so that the number of sensors q identified satisfies r ≤ q ≤ r(c - 1).
  • S-OMP is a greedy algorithm, so instead of a coupling weight parameter λ, one would decide on a stopping criterion (for example, the desired number of iterations/sensors).
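A generic simultaneous orthogonal matching pursuit (not the patent's exact procedure) can serve as the greedy alternative: each candidate sensor location is one atom, and atoms are added until the sensor budget, the stopping criterion, is exhausted. The dictionary and targets below are synthetic stand-ins for Ψᵣᵀ and the multi-class discrimination vectors:

```python
import numpy as np

def somp(D, Y, n_atoms):
    """Greedy S-OMP: pick columns of D that jointly explain every column of Y."""
    residual, support = Y.copy(), []
    for _ in range(n_atoms):                        # stopping criterion: atom budget
        scores = np.abs(D.T @ residual).sum(axis=1) # joint correlation across targets
        scores[support] = -np.inf                   # never re-pick an atom
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ coef         # re-fit, then update the residual
    return sorted(support)

rng = np.random.default_rng(6)
r, n, c = 6, 30, 4
Psi_rT = rng.normal(size=(r, n))     # one atom (column) per candidate sensor location
W = rng.normal(size=(r, c - 1))      # multi-class discrimination vectors as targets
sensors = somp(Psi_rT, W, n_atoms=r)
print(sensors)                       # selected sensor indices
```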
  • the projections from the high-dimensional space into feature space (Ψᵣᵀ) and then into decision space (wᵀ) are computed as a one-time upfront cost. It is then possible to re-use these projections to obtain an induced projection into the decision space starting only from learned sparse measurements. The alternative is to recompute the discrimination vectors on the sparse measurement data X̂.
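The re-use of the one-time projections can be sketched as follows: estimate the PCA coefficients from the sparse measurements alone, then pass them through the precomputed decision projection. The basis, sensor locations, and the pseudoinverse as the induced map are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n, r, q = 50, 4, 4
Psi_r = np.linalg.qr(rng.normal(size=(n, r)))[0]   # one-time cost: feature basis
w = rng.normal(size=r)                              # one-time cost: decision direction
sensors = rng.choice(n, size=q, replace=False)      # learned sensor locations
Phi_hat = np.eye(n)[sensors]                        # q x n point-measurement matrix

# induced projection: q sparse measurements -> r estimated PCA coefficients
M = np.linalg.pinv(Phi_hat @ Psi_r)

x = Psi_r @ rng.normal(size=r)                      # a signal in the feature subspace
eta_full = w @ (Psi_r.T @ x)                        # decision value from all n pixels
eta_sparse = w @ (M @ (Phi_hat @ x))                # decision value from q pixels only
print(eta_full, eta_sparse)
```

For a signal lying in the feature subspace, the two decision values agree; no discrimination vectors need to be recomputed on the sparse data.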
  • Figure 9 shows a system in which one or more methodologies or technologies can be implemented such as, for example, determining optimal sparse set information, determining significant sensors from a sensor network, determining significant pixels, classifying information associated with a target decision task, classifying imaged objects, refining a large set of measurement locations to learn a much smaller subset of key locations that best serve a classification task, or the like.
  • Figure 9 illustrates the extension of the image classification procedure to a broader class of datasets. In some embodiments, single-pixel measurements become individual sensors in a sensor network.
  • the system includes an algorithm that refines a large set of measurement locations to learn a much smaller subset of key locations to best serve a classification task.
  • the algorithm exploits enhanced sparsity for classification, when reconstruction may be bypassed.
  • Enhanced sparsity provides an orders-of-magnitude reduction in the number of measurements required when compared with standard compressive sensing strategies. Measurements are projected directly into decision space; reconstruction from so few measurements is neither possible nor needed.
  • Organisms interact with the external world through motor outputs, which are often discrete trajectories at specific moments in time, so that the transformation from sensory inputs to motor outputs can be thought of as a classification task. For example, a fly has no need to reconstruct the full flow velocity field around its body; it has only to decide what to do in response to a gust. Sensory organs and data processing by the nervous system can be expensive; therefore, it is advantageous to place a smaller number of sensors at key locations on the body.
  • One such application, shown in Fig. 9, is the detection of disease spread in epidemiological monitoring.
  • Figure 12 illustrates applications for some embodiments of the methods and systems described herein.
  • the systems and methods are applicable for use with grid technologies.
  • methods and systems described herein can be applied to epidemiological monitoring, power electrical grid systems, internet-related systems, and grids relating to the transport of goods and services.
  • the systems and methods are applicable for use with medical diagnostics.
  • methods and systems described herein can be used as part of medical diagnostic systems such as those used in genetics, epigenetics, metabolites and natural sensing for
  • the systems and methods are applicable for use with engineering diagnostics.
  • methods and systems described herein can be used in process control for manufacturing and acoustic tone engineering diagnostics.
  • the systems and methods described herein are applicable for use with other technologies.
  • methods and systems described herein can be used with RGB- D images and questionnaire refinement.
  • the system includes an object classification system.
  • the object classification system includes one or more modules.
  • the object classification system includes a significant sensor module operable to determine significant sensor information associated with a sensor network.
  • the object classification system includes a significant pixel identification module.
  • a module includes, among other things, one or more computing devices such as a processor (e.g., a microprocessor), a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like, or any combinations thereof, and can include discrete digital or analog circuit elements or electronics, or combinations thereof.
  • a module includes one or more ASICs having a plurality of predefined logic components.
  • a module includes one or more FPGAs, each having a plurality of programmable logic components.
  • a module includes one or more components operably coupled (e.g. , communicatively, electromagnetically, magnetically, ultrasonically, optically, inductively, electrically, capacitively coupled, or the like) to each other.
  • operably coupled e.g. , communicatively, electromagnetically, magnetically, ultrasonically, optically, inductively, electrically, capacitively coupled, or the like.
  • a module includes one or more remotely located components.
  • remotely located components are operably coupled, for example, via wireless communication.
  • remotely located components are operably coupled, for example, via one or more receivers, transmitters, transceivers, antennas, or the like.
  • the drive control module includes a module having one or more routines, components, data structures, interfaces, and the like.
  • a module includes memory that, for example, stores
  • a control module includes memory that stores optimal sparse pixel set information, near optimal sparse pixel set information, reference image information, training image information, protocol information, sensor set information, optimal sparse sensor set information, near optimal sparse sensor set information, optimal sparse mote location information, etc.
  • memory include volatile memory (e.g., Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or the like), non- volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM), or the like), persistent memory, or the like.
  • RAM Random Access Memory
  • DRAM Dynamic Random Access Memory
  • ROM Read-Only Memory
  • EEPROM Electrically Erasable Programmable Read-Only Memory
  • CD-ROM Compact Disc Read-Only Memory
  • the memory is coupled to, for example, one or more computing devices by one or more instructions, information, or power buses.
  • the significant pixel identification module includes memory that, for example, stores optimal sparse pixel set information, near optimal sparse pixel set information, reference image information, training image information, protocol information, or the like.
  • a module includes one or more computer-readable media drives, interface sockets, Universal Serial Bus (USB) ports, memory card slots, or the like, and one or more input/output components such as, for example, a graphical user interface, a display, a keyboard, a keypad, a trackball, a joystick, a touch-screen, a mouse, a switch, a dial, or the like, and any other peripheral device.
  • a module includes one or more user input/output components, user interfaces, or the like, that are operably coupled to at least one computing device configured to control (electrical,
  • a module includes a computer-readable media drive or memory slot that is configured to accept signal-bearing medium (e.g., computer-readable memory media, computer-readable recording media, or the like).
  • signal-bearing medium e.g., computer-readable memory media, computer-readable recording media, or the like.
  • a program for causing a system to execute any of the disclosed methods can be stored on, for example, a computer-readable recording medium (CRMM), a signal-bearing medium, or the like.
  • CRMM computer-readable recording medium
  • Non-limiting examples of signal-bearing media include a recordable type medium such as a magnetic tape, floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), Blu-Ray Disc, a digital tape, a computer memory, or the like, as well as transmission type medium such as a digital or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., receiver, transmitter, transceiver, transmission logic, reception logic, etc.)).
  • a recordable type medium such as a magnetic tape, floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), Blu-Ray Disc, a digital tape, a computer memory, or the like
  • transmission type medium such as a digital or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g
  • signal-bearing media include, but are not limited to, DVD-ROM, DVD-RAM, DVD+RW, DVD-RW, DVD-R, DVD+R, CD-ROM, Super Audio CD, CD-R, CD+R, CD+RW, CD-RW, Video Compact Discs, Super Video Discs, flash memory, magnetic tape, magneto-optic disk, MINIDISC, non-volatile memory card, EEPROM, optical disk, optical storage, RAM, ROM, system memory, web server, or the like.
  • the significant pixel identification module includes circuitry for determining significant pixels from one or more reference images.
  • the significant pixel identification module includes circuitry for determining significant pixels from one or more training images.
  • the significant pixel identification module includes circuitry for generating optimal sparse pixel set information. In an embodiment, the significant pixel identification module includes circuitry for generating near optimal sparse pixel set information. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on one or more learning protocols.
  • the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a modal decomposition protocol. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a convex optimization protocol. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on an ℓ₁-minimization protocol. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling protocol using sparse random matrices. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling protocol using dense random matrices.
  • the significant pixel identification module includes circuitry for determining one or more discrimination vectors associated with the significant pixels based on a discrimination analysis protocol.
  • the object classification system includes an object classification module.
  • the object classification system includes an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information.
  • the object classification module includes circuitry for classifying at least one object in the first image based on the comparison.
  • the object classification module includes circuitry for classifying the at least one object in the first image based on comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and the one or more discrimination vectors.
  • the object classification module includes circuitry for determining reference data clusters associated with the significant pixels in a sparse sensor space.
  • the object classification system includes circuitry for classifying an object in at least one image.
  • the object classification system includes circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image.
  • the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and generating classification information associated with at least one object in the first image based on a learning protocol.
  • an object classification system includes circuitry for determining an optimal sparse pixel set.
  • the object classification system includes circuitry for determining an optimal sparse pixel set from one or more reference images.
  • the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more discrimination vectors from the one or more reference images.
  • the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more sparse pixel locations.
  • the circuitry for determining the optimal sparse pixel set includes circuitry for generating a feature space transformation from the one or more reference images.
  • the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more weighting factors associated with the optimal sparse pixel set.
  • the object classification system includes circuitry for classifying an object.
  • the object classification system includes circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image.
  • the circuitry for classifying the object in at least one image includes circuitry for generating one or more classification categories associated with the object.
  • an object classification system includes a significant pixel determination from one or more reference images module.
  • the object classification system includes an optimal sparse pixel set information generation module.
  • the object classification system includes an object classification module.
  • the object classification module includes one or more first image pixels comparison to optimal sparse pixel set information, the one or more first image pixels indicative of at least one object in the first image.
  • the object classification module includes a first image object classification module configured to classify at least one object in the first image based on a comparison of one or more pixels of the first image to optimal sparse pixel set information.
  • an object classification system includes a significant pixel identification module and an object classification module.
  • the significant pixel identification module includes circuitry for determining significant pixels from one or more reference images, and generating optimal sparse pixel set information.
  • the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information.
  • the object classification module includes circuitry for classifying at least one object in the first image based on the comparison.
  • a system includes a significant sensors identification module.
  • the system includes a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network.
  • the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network.
  • the significant sensors identification module includes circuitry for determining nearly optimal sparse sensor location information associated with the sensor network.
  • the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network based on at least one ℓ₁-minimization protocol.
  • the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network.
  • the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network, and the optimal sparse sensor module includes circuitry for generating optimal sparse mote set information from the significant motes responsive to one or more target decision task inputs.
  • the significant sensors identification module includes circuitry for determining optimal sparse mote location information associated with a mote network.
  • the system includes an optimal sparse sensor module.
  • the system includes an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors.
  • the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
  • the system includes a target decision task module.
  • the system includes a target decision task module including circuitry for classifying information associated with the target decision task.
  • the system includes a target decision task module including circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • the system for classifying grid behavior includes circuitry for determining significant sensors from a sensor network.
  • the system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information from the significant sensors.
  • the system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
  • the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more discrimination vectors associated with the sensor network.
  • the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more sparse sensor locations associated with the sensor network.
  • the system for classifying grid behavior includes circuitry for classifying information associated with the target decision task.
  • the system for classifying grid behavior includes circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification categories associated with the target decision task.
  • the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification parameters associated with the target decision task.
  • a system includes one or more modules.
  • the system includes a significant sensors determination from a sensor network module.
  • the system includes an optimal sparse sensor set information generation module.
  • the optimal sparse sensor set information generation module is configured to generate optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
  • the system includes a target decision task classification module.
  • the target decision task classification module is configured to classify information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • a system includes a significant sensors identification module and an optimal sparse sensor module.
  • the significant sensors identification module includes circuitry for determining significant sensors associated with a sensor network.
  • the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors.
  • the system includes a target decision task module.
  • the target decision task module includes circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • Figure 13 shows a method 300 for classifying images.
  • the method 300 includes determining optimal sparse pixel set information from one or more reference images.
  • determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one recovery protocol.
  • determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one ℓ₁-minimization protocol.
  • determining the optimal sparse pixel set information includes determining optimal sparse pixel location information.
  • determining the optimal sparse pixel set information includes determining optimal sparse pixel weighting factor information.
  • determining the optimal sparse pixel set information includes determining enhanced sparsity classification information associated with the one or more reference images.
  • determining the optimal sparse pixel set information includes applying a coupling weight protocol.
  • determining the optimal sparse pixel set information includes generating discrimination vector information associated with the one or more reference images.
  • the method 300 includes comparing one or more parameters associated with the optimal sparse pixel set information to one or more pixels associated with an image.
  • the method 300 includes classifying at least one object in the image based on the comparison.
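The comparison-and-classification steps of method 300 can be sketched as a nearest-centroid decision made only at the learned sparse pixel locations. Everything below (the names, the centroid rule, the Euclidean metric) is an illustrative assumption rather than the disclosure's required implementation.

```python
import numpy as np

def classify_in_sparse_pixel_space(image, sensor_idx, class_centroids):
    """Classify an image by comparing only its sparse pixel set
    against per-class centroids measured at the same pixels.

    image           : 1-D array, flattened image
    sensor_idx      : indices of the learned sparse pixel locations
    class_centroids : dict mapping label -> flattened mean reference image
    """
    measurement = image[sensor_idx]
    best_label, best_dist = None, np.inf
    for label, centroid in class_centroids.items():
        d = np.linalg.norm(measurement - centroid[sensor_idx])
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

Only `len(sensor_idx)` pixel values of the new image are ever read, which is the point of the sparse pixel set.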
  • Figure 14 shows a method 400.
  • the method 400 includes determining significant sensors from a sensor network.
  • determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity protocol.
  • determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity for classification protocol.
  • determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space.
  • determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space based on an ℓ₁-minimization protocol.
  • determining significant sensors from the sensor network includes determining nearly optimal sparse sensor location information.
  • determining the significant sensors from the sensor network includes determining significant motes from a sensor network, and generating the optimal sparse sensor set information includes generating optimal sparse mote set information from the significant motes.
  • the method 400 includes generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
  • the method 400 includes classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • classifying the information associated with the target decision task includes generating categorical decision information about a state of the sensor network.
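The sparse-representation step described above (determining a sparse representation of discriminant vectors in a feature space via ℓ₁-minimization) can be posed as a standard linear program: minimize ‖s‖₁ subject to Ψᵀs = w, with the split s = u − v, u, v ≥ 0. The sketch below uses SciPy's `linprog`; the feature basis Ψ (e.g., PCA modes) and the discriminant w are assumed given, and all names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_sensors(Psi, w, tol=1e-8):
    """Find a sparse sensor vector s with Psi.T @ s = w and minimal
    l1 norm, via the standard split s = u - v with u, v >= 0.

    Psi : (n, r) feature basis (n sensors/pixels, r << n modes)
    w   : (r,) discriminant vector expressed in the feature space
    Returns (s, sensor_indices).
    """
    n, r = Psi.shape
    c = np.ones(2 * n)                   # minimize sum(u) + sum(v) = ||s||_1
    A_eq = np.hstack([Psi.T, -Psi.T])    # enforce Psi.T @ (u - v) = w
    res = linprog(c, A_eq=A_eq, b_eq=w,
                  bounds=[(0, None)] * (2 * n), method="highs")
    s = res.x[:n] - res.x[n:]
    return s, np.flatnonzero(np.abs(s) > tol)
```

Because the optimum of the linear program sits at a vertex, the solution generically has at most r nonzero entries: those entries are the candidate sensor locations.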
  • the logical operations/functions described herein are a distillation of machine specifications or other physical mechanisms specified by the operations/functions such that the otherwise inscrutable machine specifications may be comprehensible to the human mind.
  • the distillation also allows one of skill in the art to adapt the operational/functional description of the technology across many different specific vendors' hardware configurations or platforms, without being limited to specific vendors' hardware configurations or platforms.
  • VHDL (Very high speed Hardware Description Language)
  • software is a shorthand for a massively complex interchaining/specification of ordered-matter elements.
  • ordered-matter elements may refer to physical components of computation, such as assemblies of electronic logic gates, molecular computing logic constituents, quantum computing mechanisms, etc.
  • a high-level programming language is a programming language with strong abstraction, e.g., multiple levels of abstraction, from the details of the sequential organizations, states, inputs, outputs, etc., of the machines that a high-level programming language actually specifies.
  • See, e.g., Wikipedia, High-level programming language, available at the website en.wikipedia.org/wiki/High-level_programming_language (as of June 5, 2012, 21:00 GMT).
  • high-level programming languages resemble or even share symbols with natural languages. See, e.g., Wikipedia, Natural language, available at the website
  • the fact that high-level programming languages use strong abstraction to facilitate human understanding should not be taken as an indication that what is expressed is an abstract idea.
  • since a high-level programming language is the tool used to implement a technical disclosure in the form of functions/operations, it can be understood that, far from being abstract, imprecise, "fuzzy," or "mental" in any significant semantic sense, such a tool is instead a near incomprehensibly precise sequential specification of specific computational machines, the parts of which are built up by activating/selecting such parts from typically more general computational machines over time (e.g., clocked time).
  • This fact is sometimes obscured by the superficial similarities between high-level programming languages and natural languages. These superficial similarities also may cause a glossing over of the fact that high-level programming language implementations ultimately perform valuable work by creating/controlling many different computational machines.
  • the hardware used in the computational machines typically consists of some type of ordered matter (e.g., traditional electronic devices (e.g., transistors), deoxyribonucleic acid (DNA), quantum devices, mechanical switches, optics, fluidics, pneumatics, optical devices (e.g., optical interference devices), molecules, etc.).
  • Logic gates are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to change physical state in order to create a physical reality of Boolean logic.
  • Logic gates may be arranged to form logic circuits, which are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to create a physical reality of certain logical functions.
  • Types of logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), computer memory devices, etc., each type of which may be combined to form yet other types of physical devices, such as a central processing unit (CPU)— the best known of which is the microprocessor.
  • a modern microprocessor will often contain more than one hundred million logic gates in its many logic circuits (and often more than a billion transistors). See, e.g., Wikipedia, Logic gates, available at the website en.wikipedia.org/wiki/Logic_gates (as of June 5, 2012, 21:03 GMT).
  • the logic circuits forming the microprocessor are arranged to provide a microarchitecture that carries out the instructions defined by that microprocessor's Instruction Set Architecture.
  • the Instruction Set Architecture is the part of the microprocessor architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external Input/Output. See, e.g., Wikipedia, Computer architecture, available at the website en.wikipedia.org/wiki/Computer_architecture (as of June 5, 2012, 21:03 GMT).
  • the Instruction Set Architecture includes a specification of the machine language that can be used by programmers to use/control the microprocessor. Since the machine language instructions are such that they may be executed directly by the microprocessor, typically they consist of strings of binary digits, or bits. For example, a typical machine language instruction might be many bits long (e.g., 32, 64, or 128 bit strings are currently common). A typical machine language instruction might take the form of such a long string of ones and zeros.
  • the binary number "1" (e.g., logical "1") in a machine language instruction specifies around +5 volts applied to a specific "wire” (e.g., metallic traces on a printed circuit board) and the binary number "0" (e.g., logical "0") in a machine language instruction specifies around -5 volts applied to a specific "wire.”
  • machine language instructions also select out and activate specific groupings of logic gates from the millions of logic gates of the more general machine.
  • machine language instruction programs even though written as a string of zeros and ones, specify many, many constructed physical machines or physical machine states.
  • Machine language is typically incomprehensible to most humans (e.g., the above example was just ONE instruction, and some personal computers execute more than two billion instructions every second). See, e.g., Wikipedia, Instructions per second, available at the website en.wikipedia.org/wiki/Instructions_per_second (as of June 5, 2012, 21:04 GMT).
  • a compiler is a device that takes a statement that is more comprehensible to a human than either machine or assembly language, such as "add 2 + 2 and output the result," and translates that human understandable statement into a complicated, tedious, and immense machine language code (e.g., millions of 32, 64, or 128 bit length strings).
  • Compilers thus translate high-level programming language into machine language.
  • This compiled machine language as described above, is then used as the technical specification which sequentially constructs and causes the interoperation of many different computational machines such that humanly useful, tangible, and concrete work is done.
  • such machine language- the compiled version of the higher-level language- functions as a technical specification which selects out hardware logic gates, specifies voltage levels, voltage transition timings, etc., such that the humanly useful work is accomplished by the hardware.
  • any such operational/functional technical descriptions may be understood as operations made into physical reality by (a) one or more interchained physical machines, (b) interchained logic gates configured to create one or more physical machine(s) representative of sequential/combinatorial logic(s), (c) interchained ordered matter making up logic gates (e.g., interchained electronic devices (e.g., transistors), DNA, quantum devices, mechanical switches, optics, fluidics, pneumatics, molecules, etc.) that create physical reality representative of logic(s), or (d) virtually any combination of the foregoing.
  • logic gates e.g., interchained electronic devices (e.g., transistors), DNA, quantum devices, mechanical switches, optics, fluidics, pneumatics, molecules, etc.
  • any physical object which has a stable, measurable, and changeable state may be used to construct a machine based on the above technical description. Charles Babbage, for example, constructed the first computer out of wood, powered by cranking a handle.
  • the logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements, in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations.
  • the logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one of skill in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.
  • An information processing system generally includes one or more of a system unit housing, a video display device, memory, such as volatile or non- volatile memory, processors such as microprocessors or digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices (e.g., a touch pad, a touch screen, an antenna, etc.), or control systems including feedback loops and control motors (e.g., feedback for detecting position or velocity, control motors for moving or adjusting components or quantities).
  • An information processing system can be implemented utilizing suitable commercially available components, such as those typically found in data computing/communication or network computing/communication systems.
  • an implementer may opt for a mainly hardware or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation that is implemented in one or more machines or articles of manufacture; or, yet again alternatively, the implementer may opt for some combination of hardware, software, firmware, etc. in one or more machines or articles of manufacture.
  • any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary.
  • implementations will typically employ optically-oriented hardware, software, firmware, etc., in one or more machines or articles of manufacture.
  • any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably coupleable," to each other to achieve the desired functionality.
  • operably coupleable include, but are not limited to, physically mateable, physically interacting components, wirelessly interactable, wirelessly interacting components, logically interacting, logically interactable components, etc.
  • one or more components may be referred to herein as "configured to," "configurable to," "operable/operative to," "adapted/adaptable," "able to," "conformable/conformed to," etc.
  • ASICs (Application Specific Integrated Circuits)
  • FPGAs (Field Programmable Gate Arrays)
  • DSPs (digital signal processors)
  • Non-limiting examples of a signal-bearing medium include the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).
  • Classifiers built on learned sensors approached the performance of classifiers built on projections to principal components of the full images. These optimal sensor locations could be learned approximately even from already randomly subsampled images. In either case, the learned sensors performed significantly better than the matched number of randomly chosen sensors. Further, the ensemble of learned sensor locations clustered around coherent features of the images. It is possible to think of this ensemble as a pixel mask for the faces in the training set, which may be of use when applied to engineered or biological systems where the important features may not be salient by inspection.
  • Organisms interact with the external world with motor outputs, which are often discrete trajectories at specific moments in time, so that the transformation from sensory inputs to motor outputs can be thought of as a classification task. For example, a fly has no need to reconstruct the full flow velocity field around its body— it has only to decide what to do in response to a gust. Sensory organs and data processing by the nervous system can be expensive; therefore, it is advantageous to place a smaller number of sensors at key locations on the body.
  • Figure 3 depicts a cross-validation of classification accuracy between images of cats and dogs.
  • Panel (a) compares using r learned sensors/features (solid red and green lines) against using r random pixel sensors (dashed blue line) and projections onto the first r principal components of the full image (solid blue line). Each data point summarizes 400 random iterations. At each iteration, a different 90% subsample was used to train the classifier, whose accuracy was assessed on the remaining 10% of images. Error bars are standard deviations.
  • Panel (b) shows a summary of mean cross-validated accuracy varying p, the number of pixels used in the random subsample, and r, the number of learned sensors/features.
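The cross-validation procedure described for Figure 3 (repeated random 90/10 splits, accuracy assessed on the held-out images) can be sketched as follows. The nearest-centroid classifier and all names are illustrative assumptions; the `pick_sensors` callable stands in for whichever sensor-learning or random-sampling strategy is being compared.

```python
import numpy as np

def cross_validate(X, y, pick_sensors, n_iter=50, holdout=0.1, seed=0):
    """Repeated random-subsampling validation: each iteration trains
    on (1 - holdout) of the images and scores a nearest-centroid
    classifier, restricted to the chosen pixels, on the rest.

    X            : (n_samples, n_pixels) flattened images
    y            : (n_samples,) labels
    pick_sensors : callable(X_train, y_train) -> pixel indices
    Returns (mean accuracy, std of accuracy) over iterations.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    accs = []
    for _ in range(n_iter):
        perm = rng.permutation(n)
        cut = int(n * (1 - holdout))
        tr, te = perm[:cut], perm[cut:]
        idx = pick_sensors(X[tr], y[tr])
        # per-class centroids measured only at the chosen pixels
        cents = {c: X[tr][y[tr] == c][:, idx].mean(axis=0)
                 for c in np.unique(y[tr])}
        preds = [min(cents, key=lambda c: np.linalg.norm(X[t, idx] - cents[c]))
                 for t in te]
        accs.append(np.mean(np.array(preds) == y[te]))
    return float(np.mean(accs)), float(np.std(accs))
```

Passing a random-index `pick_sensors` versus a learned-sensor `pick_sensors` reproduces the kind of comparison plotted in Figure 3(a).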
  • Figure 4 shows the mean sensor locations averaged over 400 random learning iterations (top row), compared to images of the mean cat and the mean dog (bottom row).
  • r = 15 sensor locations are learned at each iteration from a random 50% of images designated as the training set, and the colormap from black to white represents the probability that a sensor is at that location.
  • Panel (a) shows sensors learned with no subsampling; panel (b) shows sensors learned from 400 randomly sampled pixels; and panel (c) shows sensors learned from 400 random projections.
  • the bottom row shows, in panel (d), the centroid of the raw data, which was subtracted from each image to obtain X.
  • Panels (e) and (f) show the average cat and the average dog after mean subtraction. Comparing the top row to the difference between (e) and (f), it is apparent that sensors cluster around the forehead, eyes, mouth, and the tops of ears.
  • the sparse sensors represent pixels that are key to the decision, then they should be clustered at locations that are maximally informative about the difference between cats and dogs. Indeed, an average of sensor locations learned over 400 random iterations shows clustering around the animals' mouth, forehead, eyes, and tops of ears (Fig. 4 (a)). When sparse sensors were learned from an already subsampled dataset, a qualitatively similar distribution of sensor locations was found, where each cluster of sensors showed a slightly larger spatial variance (Fig. 4 (b)). Such an ensemble of sensor placements can be thought of as a mask for facial features particularly relevant for the classification.
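The ensemble-of-sensor-locations idea above (averaging learned locations over many random training subsamples to form a pixel mask) can be sketched as a simple accumulation of counts. The names and the 50% subsample convention are illustrative.

```python
import numpy as np

def sensor_location_map(learn_sensors, X, y, shape, n_iter=400, frac=0.5, seed=0):
    """Average learned sensor locations over many random training
    subsamples to build a probability map (a 'pixel mask') of where
    sensors tend to cluster.

    learn_sensors : callable(X_train, y_train) -> flat pixel indices
    X, y          : flattened images and labels
    shape         : (rows, cols) of the original images
    Returns an array of shape `shape` with per-pixel frequencies in [0, 1].
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(np.prod(shape))
    n = len(y)
    for _ in range(n_iter):
        sub = rng.choice(n, size=int(frac * n), replace=False)
        counts[learn_sensors(X[sub], y[sub])] += 1
    return (counts / n_iter).reshape(shape)
```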
  • Figure 5 shows that the classification accuracy of sensors learned from random projections (black dashed line) is the same as that of sensors learned from single pixels (red solid line). The experiments performed were identical to those shown in Fig. 3. Random projections were ensembles of Bernoulli random variables with mean 0.5.
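The random projections used for the Figure 5 comparison (ensembles of Bernoulli random variables with mean 0.5) can be generated as follows; the names are illustrative.

```python
import numpy as np

def bernoulli_projections(X, p, seed=0):
    """Subsample data by random projection rather than single pixels:
    each of the p measurements is a Bernoulli(0.5) mixture of all pixels.

    X : (n_samples, n_pixels) flattened images
    p : number of random projection measurements
    Returns (X_projected, Phi) with X_projected = X @ Phi.T.
    """
    rng = np.random.default_rng(seed)
    # entries drawn uniformly from {0, 1}, i.e. Bernoulli with mean 0.5
    Phi = rng.integers(0, 2, size=(p, X.shape[1])).astype(float)
    return X @ Phi.T, Phi
```

The projected data can then be fed to the same sensor-learning pipeline in place of raw pixels.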
  • Figure 11 (b) shows the first six eigenfaces corresponding to the three example sets of faces. Comparing the faces dataset with the cat/dog dataset above, there is significantly less variability within each category in facial features, although large portions of the faces were effectively occluded by lack of illumination.
  • Figure 6 depicts classification of 3, 4, and 5 faces as the coupling weight λ varies between 0 and 100. Number of sensors and the cross-validated performance are shown, comparing sensors learned with no subsampling (solid green lines), sensors learned from 1/10 of the pixels (solid red lines), and random sensors of the same number (dashed blue lines) against the accuracy obtained by using r PCA features of the full images (blue square). Each instance of the classifier was trained on a random 75% of the images and evaluated on the remaining 25%; error bars are standard deviations.
  • Figure 7 shows that increasing the coupling weight λ results in fewer sensor locations that capture approximately the same features.
  • Figure 7 illustrates an example of the effect of increasing λ on the number and locations of sensors identified by the solution to the optimization in Eq. (9).
  • r(c - 1) = 20 total sensors are found.
  • It can be seen in Fig. 7 (a) that certain pairs of sensor locations are in close proximity (red boxes) and likely carry information about the same non-local facial feature. As λ increases and the total number of sensors is penalized, these sensor pairs appear to collapse onto single sensors in Fig. 7 (b).
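The coupling-weight behavior discussed above can be imitated with a row-wise group penalty: as λ grows, whole sensor locations (rows of the solution matrix, shared across all class discriminants) are driven to zero together, so nearby sensor pairs collapse onto fewer locations. The proximal-gradient sketch below is a generic group-sparse relaxation of that idea, not a verbatim transcription of the disclosure's Eq. (9); all names are illustrative.

```python
import numpy as np

def group_sparse_sensors(Psi, W, lam=0.1, n_iter=500):
    """Minimize 0.5*||Psi.T @ S - W||_F^2 + lam * sum_i ||S[i, :]||_2
    by proximal gradient descent; the row-wise l2 penalty couples the
    per-class columns so that a larger lam yields fewer active sensors.

    Psi : (n, r) feature basis;  W : (r, k) discriminant vectors.
    Returns (S, active_row_indices).
    """
    n, _ = Psi.shape
    S = np.zeros((n, W.shape[1]))
    L = np.linalg.norm(Psi, 2) ** 2        # Lipschitz constant of the gradient
    t = 1.0 / L
    for _ in range(n_iter):
        grad = Psi @ (Psi.T @ S - W)       # gradient of the smooth term
        S = S - t * grad
        norms = np.linalg.norm(S, axis=1, keepdims=True)
        # row-wise group soft-threshold: small rows vanish entirely
        S *= np.maximum(0.0, 1.0 - lam * t / np.maximum(norms, 1e-12))
    return S, np.flatnonzero(np.linalg.norm(S, axis=1) > 1e-6)
```

Sweeping `lam` upward shrinks the active set, qualitatively matching the Figure 7 behavior of sensor pairs merging as the total number of sensors is penalized.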
  • Figure 8 illustrates mean sensor locations (top left) to discriminate between 3 faces
  • the mean sensor locations map was shrunk by a factor of 4 (using a cubic kernel) to emphasize the sensors converging around the eyes, nose, corner of mouth, and arches of eyebrows.
  • an object classification system includes: a significant pixel identification module including circuitry for determining significant pixels from one or more reference images, and generating optimal sparse pixel set information; and an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information; and classifying at least one object in the first image based on the comparison.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating a near optimal sparse pixel set information.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on one or more learning protocols.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a modal decomposition protocol.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a convex optimization protocol.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on an ℓ₁-minimization protocol.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a sparse random matrices protocol.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a dense random matrices protocol.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for determining one or more discrimination vectors associated with the significant pixels based on a discrimination analysis protocol.
  • Some embodiments include an object classification system such as in paragraph 9, wherein the object classification module includes circuitry for classifying the at least one object in the first image based on comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and the one or more discrimination vectors.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the object classification module includes circuitry for determining reference data clusters associated with the significant pixels in a sparse sensor space.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and generating classification information associated with at least one object in the first image based on a learning protocol.
  • Some embodiments include an object classification system such as in paragraph 1, wherein the one or more reference images includes one or more training images.
  • an object classification system includes: circuitry for determining an optimal sparse pixel set from one or more reference images; and circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image.
  • Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more discrimination vectors from the one or more reference images.
  • Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more sparse pixel locations.
  • Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for generating a feature space transformation from the one or more reference images.
  • Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more weighting factors associated with the optimal sparse pixel set.
  • Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for classifying the object in at least one image includes circuitry for generating one or more classification categories associated with the object.
  • a method for classifying images includes: determining optimal sparse pixel set information from one or more reference images; comparing one or more parameters associated with the optimal sparse pixel set information to one or more pixels associated with an image; and classifying at least one object in the image based on the comparison.
  • Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one recovery protocol.
  • Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one ℓ1-minimization protocol.
  • Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining optimal sparse pixel location information.
  • Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining optimal sparse pixel weighting factor information.
  • Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining enhanced sparsity classification information associated with the one or more reference images.
  • Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes applying a coupling weight protocol.
  • Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes generating discrimination vector information associated with the one or more reference images.
  • an object classification system includes: a significant pixel determination from one or more reference images module; an optimal sparse pixel set information generation module; and an object classification module.
  • Some embodiments include an object classification system such as in paragraph 28, wherein the object classification module includes: one or more first image pixels comparison to optimal sparse pixel set information, the one or more first image pixels indicative of at least one object in the first image; and a first image object classification module configured to classify at least one object in the first image based on a comparison of one or more pixels of the first image to optimal sparse pixel set information.
  • an object classification system includes: a significant pixel identification module; and an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information; and classifying at least one object in the first image based on the comparison.
  • Some embodiments include an object classification system such as in paragraph 30, wherein the significant pixel identification module includes circuitry for determining significant pixels from one or more reference images, and generating optimal sparse pixel set information.
  • Some embodiments include an object classification system such as in paragraph 30, wherein the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information.
  • Some embodiments include an object classification system such as in paragraph 32, wherein the object classification module includes circuitry for classifying at least one object in the first image based on the comparison.
  • a system includes: a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network; an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors; and a target decision task module including circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network.
  • Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining nearly optimal sparse sensor location information associated with the sensor network.
  • Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network based on at least one minimization protocol.
  • Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network.
  • Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network, and wherein the optimal sparse sensor module includes circuitry for generating optimal sparse mote set information from the significant motes responsive to one or more target decision task inputs.
  • Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse mote location information associated with a mote network.
  • a system for classifying grid behavior includes: circuitry for determining significant sensors from a sensor network; circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs; and circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more discrimination vectors associated with the sensor network.
  • Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more sparse sensor locations associated with the sensor network.
  • Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification categories associated with the target decision task.
  • Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification parameters associated with the target decision task.
  • a method includes: determining significant sensors from a sensor network; generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs; and classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity protocol.
  • Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity for classification protocol.
  • Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space.
  • Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space based on an
  • Some embodiments include a method as in paragraph 47, wherein determining
  • Some embodiments include a method as in paragraph 47, wherein classifying the information associated with the target decision task includes generating categorical decision information about a state of the sensor network.
  • Some embodiments include a method as in paragraph 47, wherein determining the significant sensors from the sensor network includes determining significant motes from a sensor network; and wherein generating the optimal sparse sensor set information includes generating optimal sparse mote set information from the significant motes.
  • a system includes: a significant sensors determination from a sensor network module; an optimal sparse sensor set information generation module; and a target decision task classification module.
  • Some embodiments include a system such as in paragraph 55, wherein the optimal sparse sensor set information generation module is configured to generate optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
  • Some embodiments include a system such as in paragraph 55, wherein the target decision task classification module is configured to classify information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • a system includes: a significant sensors identification module; and an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors.
  • Some embodiments include a system such as in paragraph 58, further including a target decision task module.
  • Some embodiments include a system such as in paragraph 59, wherein the target decision task module includes circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
  • Some embodiments include a system such as in paragraph 58, wherein the significant sensors identification module includes circuitry for determining significant sensors associated with a sensor network.
  • Some embodiments include a system such as in paragraph 58, wherein the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors.

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The goal of compressive sensing is efficient reconstruction of data from sparse measurements, often leading to a categorical decision. If only classification is required, reconstruction can be circumvented and the measurements needed are sparser still by orders of magnitude. "Enhanced sparsity" is used herein as the reduction in the number of measurements required for classification over reconstruction. Systems and methods exploit such enhanced sparsity and learn spatial sensor locations that optimally inform a categorical decision. Systems and methods include algorithms that solve an l1 minimization to find the fewest entries of the full measurement vector that exactly reconstruct the discriminant vector in feature space. Once the sensor locations have been identified from the training data, subsequent test samples are classified with remarkable efficiency, with performance comparable to that obtained by discrimination using the full image. Sensor locations may be learned from full images, or from a random subsample of pixels. For classification between more than two categories, we introduce a coupling parameter whose value tunes the number of sensors selected, trading accuracy for economy. Example datasets include those from image recognition using PCA for feature extraction and LDA for discrimination; however, the system and method can be broadly applied to non-image data types and are easily adapted to work with other methods for feature extraction and discrimination.

Description

Systems, Devices, and Methods for Classification and Sensor Identification Using Enhanced Sparsity
SUMMARY
In an aspect, the present disclosure is directed to, among other things, an object classification system. In an embodiment, the object classification system includes a significant pixel identification module including circuitry for determining significant pixels from one or more reference images. In an embodiment, the object classification system includes a significant pixel identification module including circuitry for generating optimal sparse pixel set information. In an embodiment, the object classification system includes an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information. In an embodiment, the object classification system includes an object classification module including circuitry for classifying at least one object in the first image based on the comparison.
In an aspect, the present disclosure is directed to, among other things, an object classification system including circuitry for determining an optimal sparse pixel set from one or more reference images. In an embodiment, the object classification system includes circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image.
In an aspect, the present disclosure is directed to, among other things, a method for classifying images. In an embodiment, the method for classifying images includes determining optimal sparse pixel set information from one or more reference images. In an embodiment, the method for classifying images includes comparing one or more parameters associated with the optimal sparse pixel set information to one or more pixels associated with an image. In an embodiment, the method for classifying images includes classifying at least one object in an image based on the comparison.
In an aspect, the present disclosure is directed to, among other things, an object classification system including a significant pixel determination module, an optimal sparse pixel set information generation module, and an object classification module.
In an aspect, the present disclosure is directed to, among other things, an object classification system including a significant pixel identification module and an object classification module. In an embodiment, the object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to optimal sparse pixel set information, and classifying at least one object in the first image based on the comparison.
In an aspect, the present disclosure is directed to, among other things, a system including a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network. In an embodiment, the system includes an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors. In an embodiment, the system includes a target decision task module including circuitry for classifying information associated with a target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
In an aspect, the present disclosure is directed to, among other things, a system for classifying grid behavior. In an embodiment, the system for classifying grid behavior includes circuitry for determining significant sensors from a sensor network. In an embodiment, the system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information. In an embodiment, the system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs. In an embodiment, the system for classifying grid behavior includes circuitry for classifying information associated with a target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
In an aspect, the present disclosure is directed to, among other things, a method including determining significant sensors from a sensor network. In an embodiment, the method includes generating optimal sparse sensor set information. In an embodiment, the method includes generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs. In an embodiment, the method includes classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information. In an aspect, the present disclosure is directed to, among other things, a system including a significant sensors determination from a sensor network module. In an embodiment, the system includes an optimal sparse sensor set information generation module. In an embodiment, the system includes a target decision task classification module.
In an aspect, the present disclosure is directed to, among other things, a system including a significant sensors identification module and an optimal sparse sensor module. In an embodiment, the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors. In an embodiment, the system includes a target decision task module.
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
BRIEF DESCRIPTION OF THE FIGURES
Figure 1 is a schematic diagram of an extension of the image classification procedure to a class of datasets.
Figure 2 depicts a visualization of the Fisherface and how sparse sensors approximate its dominant features.
Figure 3 shows a cross-validation of classification accuracy between images of cats and dogs.
Figure 4 illustrates mean sensor locations averaged over 400 random learning iterations.
Figure 5 shows classification accuracy of sensors.
Figure 6a depicts classification of 3 faces as coupling weight varies.
Figure 6b illustrates classification of 4 faces as coupling weight varies.
Figure 6c shows classification of 5 faces as coupling weight varies.
Figure 7 depicts the effect of increasing the coupling weight.
Figure 8 illustrates mean sensor locations to discriminate between 3 faces.
Figure 9 is a schematic diagram of a system according to one embodiment.
Figure 10a depicts examples of cats and dogs images used in the dataset.
Figure 10b illustrates the four PCA modes with the largest singular values of the full image dataset.
Figure 11a shows examples of images from the Yale Faces Database B.
Figure 11b depicts the six PCA modes with the largest singular values of the full image dataset.
Figure 12 outlines exemplary sensor networks, pseudo-static sensor networks, grid systems, or the like that may benefit from the technologies and methodologies described herein.
Figure 13 shows a flow diagram of a method for classifying images according to one embodiment.
Figure 14 shows a flow diagram of a method according to one embodiment.
DETAILED DESCRIPTION
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.
I. Introduction
Classification and detection based on measurements is a crucial capability for intelligent systems, both biological and engineered. It is desirable to perform such categorical tasks with limited or incomplete information, as the full state of the system may be inaccessible or expensive to measure. Fortunately, even when the full measurement space is quite high-dimensional, the desired information extracted from the signal is often inherently low-dimensional. For example, images may have a large number of pixels, but it is observed that natural images occupy a minuscule fraction of pixel space. Such an inherent sparsity in an appropriate transformed basis is the foundation of image compression (see M. Elad, Sparse and redundant representations: from theory to applications in signal and image processing, Springer, New York, 2010, which is incorporated by reference herein); natural images may be compressed in generic Fourier or wavelet bases. The theory of compressive sensing (see: E. J. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52(2):489-509, 2006; D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, 52(4):1289-1306, 2006; R. G. Baraniuk, Compressive sensing, IEEE Signal Processing Magazine, 24(4):118-120, 2007; and J. A. Tropp and A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Transactions on Information Theory, 53(12):4655-4666, 2007, which are each incorporated by reference herein) takes further advantage of a signal's compressibility and demonstrates how an n-dimensional signal that is k-sparse in a transformed basis may be reconstructed exactly from O(k log(n/k)) measurements. Here, k-sparse means all but k coefficients are zero, and we assume k ≪ n. The signal may be reconstructed from the known basis as the ℓ0-sparse set of coefficients that best recover the measurements. The implementation of compressive sensing rests on the fact that the ℓ0-sparse solution to an underdetermined linear system may almost certainly be found by relaxation to a convex ℓ1 minimization (see: E. J. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52(2):489-509, 2006; D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, 52(4):1289-1306, 2006, which are each incorporated by reference herein), or alternatively, by a greedy algorithm (see J. A. Tropp and A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Transactions on Information Theory, 53(12):4655-4666, 2007, which is incorporated by reference herein).
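The ℓ1 relaxation described above can be illustrated numerically. The following sketch (Python with NumPy/SciPy; it is illustrative only and not part of the original disclosure) recovers a k-sparse signal from m random Gaussian measurements by solving the basis-pursuit problem min ||x||_1 subject to Ax = b, posed as a linear program via the standard variable split x = u − v with u, v ≥ 0:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, k, m = 100, 3, 30          # signal length, sparsity, measurements (comfortably > O(k log(n/k)))

# A k-sparse ground-truth signal.
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

# Random Gaussian measurement matrix and the compressed measurements.
A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# Basis pursuit: min ||x||_1 s.t. A x = b, as an LP with x = u - v and u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_rec = res.x[:n] - res.x[n:]

print("max reconstruction error:", np.max(np.abs(x_rec - x_true)))
```

With these dimensions the ℓ1 solution coincides with the ℓ0 solution with overwhelming probability, so the reconstruction error is at the solver's numerical tolerance.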
The compressive sensing strategy relies on the measurements being incoherent with respect to the known basis, so that measurement vectors are uncorrelated with basis directions. Incoherence holds between many pairs of bases, such as between delta functions and the Fourier basis. Therefore, it is possible to reconstruct images, which are sparse in the Fourier domain, from single-pixel measurements, which may be viewed as discrete spatial delta functions. Surprisingly, a random Gaussian or Bernoulli matrix is incoherent with respect to any arbitrary basis with high probability (see: E. J. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52(2):489-509, 2006; D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, 52(4):1289-1306, 2006, which are each incorporated by reference herein).
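The delta/Fourier incoherence noted above can be checked directly. This short NumPy sketch (illustrative only, not from the disclosure) computes the mutual coherence μ(Φ, Ψ) = √n · max_{i,j} |⟨φ_i, ψ_j⟩| between the pixel (delta) basis and the orthonormal Fourier basis; μ = 1 is the minimum possible value, i.e. maximal incoherence:

```python
import numpy as np

n = 64
# Columns of F are the orthonormal discrete Fourier basis vectors.
F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)
# The delta basis is the identity, so inner products <delta_i, f_j> are just the entries of F,
# each of magnitude 1/sqrt(n).
mu = np.sqrt(n) * np.abs(F).max()
print("mutual coherence:", mu)   # equals 1, the minimum possible
```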
Compressive sensing is able to reconstruct a signal using surprisingly few measurements, but assigning a signal to one of a few categories may be accomplished with orders-of-magnitude fewer measurements (see for instance J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 31(2):210-227, 2009, which is incorporated by reference herein). Typical natural images with n pixels can be recovered with significant savings, usually from n/10 to n/3 measurements (see J. Romberg, Imaging via compressive sampling, IEEE Signal Processing Magazine, 25(2):14-20, 2008, which is incorporated by reference herein). For a 1-megapixel image, this means solving an ℓ1 minimization with 100,000 constraints in a basis with one million vectors, a non-trivial computational task. In contrast, in the systems and methods herein, to classify a natural image and reduce it to its semantic content, only tens of measurements may be required.
Classification is often performed in a tailored low-dimensional basis extracted as hierarchical features of the data. One of the most ubiquitous methods in dimensionality reduction is principal components analysis (PCA) (see: K. Pearson, On lines and planes of closest fit to systems of points in space, Philosophical Magazine, 2(7-12):559-572, 1901; and R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley-Interscience, 2000, which are each incorporated by reference herein), which may be computed by a singular value decomposition (SVD) to yield an ordered orthonormal basis. Linear discriminant analysis (LDA) (see: C. M. Bishop, Pattern recognition and machine learning, Springer, New York, 2006; R. A. Fisher, The use of multiple measurements in taxonomic problems, Annals of Eugenics, 7(2):179-188, 1936; and C. R. Rao, The utilization of multiple measurements in problems of biological classification, Journal of the Royal Statistical Society Series B (Methodological), 10(2):159-203, 1948, which are each incorporated by reference herein) is commonly applied in conjunction with PCA for discrimination tasks. Indeed, PCA-LDA is one of the classic approaches used to introduce machine learning methodologies (R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley-Interscience, 2000, which is incorporated by reference herein). High-variance PCA modes can account for common features shared across categories, which are not useful for discrimination. LDA produces a basis built with respect to the category geometry to maximize between-class variance while minimizing within-class variance. PCA and LDA have both been used extensively for facial recognition (see for instance: L. Sirovich and M. Kirby, A low-dimensional procedure for the characterization of human faces, Journal of the Optical Society of America A, 4(3):519-524, 1987; M. Turk and A. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3(1):71-86, 1991; D. L. Swets and J. Weng, Using discriminant eigenfeatures for image retrieval, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 18(8):831-836, 1996; P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 19(7):711-720, 1997; H. Yu and J. Yang, A direct LDA algorithm for high-dimensional data, with application to face recognition, Pattern Recognition, 34:2067-2070, 2001; and A. M. Martinez and A. C. Kak, PCA versus LDA, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 23(2):228-233, 2001, which are each incorporated by reference herein) and are a common benchmark against which novel classification techniques are compared.
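The PCA-LDA pipeline described above can be sketched concretely. In the following Python/NumPy illustration (the data are synthetic stand-ins, not images from the disclosure), PCA features come from the SVD of the mean-centered data, and a two-class LDA direction w = S_w^{-1}(μ_1 − μ_0) is computed in that feature space:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pix, n_per_class, r = 256, 40, 10        # "pixels" per sample, samples per class, PCA rank

# Synthetic two-class data: unit-variance noise plus a class offset along a hidden direction.
d = rng.standard_normal(n_pix)
d *= 6.0 / np.linalg.norm(d)
X = np.vstack([rng.standard_normal((n_per_class, n_pix)),
               rng.standard_normal((n_per_class, n_pix)) + d])
y = np.repeat([0, 1], n_per_class)

# PCA via SVD of the mean-centered data; rows of Vt are the principal directions.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
F = (X - mean) @ Vt[:r].T                  # r-dimensional feature-space coordinates

# Two-class LDA in feature space: w = Sw^{-1} (mu1 - mu0).
mu0, mu1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
Sw = np.cov(F[y == 0], rowvar=False) + np.cov(F[y == 1], rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify by thresholding the projection at the midpoint of the projected class means.
threshold = 0.5 * (mu0 + mu1) @ w
pred = (F @ w > threshold).astype(int)
print("training accuracy:", np.mean(pred == y))
```

With the classes well separated along the hidden direction, the projection onto w discriminates nearly perfectly, mirroring the role of the Fisherface direction in the face-recognition examples.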
Compared to signal reconstruction, classification is particularly efficient because of 1) the use of a tailored low-dimensional feature basis, and 2) the simplicity of deciding on a category rather than reconstructing exact details (J. N. Kutz, Data-Driven Modeling & Scientific Computation: Methods for Complex Systems & Big Data. Oxford University Press, 2013, which is incorporated by reference herein).
A. Previous work on enhanced sparsity for classification
Combining ideas from compressive sensing with classification, there has been significant work relevant to exploring enhanced sparsity for classification. For example, the sparse representation algorithm has been applied to biometric recognition problems and demonstrated to be surprisingly robust (see J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 31(2):210-227, 2009; and J. K. Pillai, V. M. Patel, R. Chellappa, and N. K. Ratha, Secure and robust iris recognition using random projections and sparse representations, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(9):1877-1893, 2011, which are each incorporated by reference herein). Sparse approximation (see: J. A. Tropp, Greed is good: Algorithmic results for sparse approximation, IEEE Transactions on Information Theory, 50(10):2231-2242, 2004; and J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 31(2):210-227, 2009, which are each incorporated by reference herein) applied to semantic hierarchies (see M. Marszalek and C. Schmid, Semantic hierarchies for visual object recognition, in Computer Vision and Pattern Recognition, 2007, CVPR '07, IEEE Conference on, pages 1-7, IEEE, 2007, which is incorporated by reference herein) has been shown to be efficient in categorizing between large numbers of classes (see B. Kim, J. Y. Park, A. Mohan, A. C. Gilbert, and S. Savarese, Hierarchical classification of images by sparse approximation, in BMVC, pages 1-11, Citeseer, 2011, which is incorporated by reference herein). In another line of work, ideas from compressive sensing have been applied to dimensionality reduction tools to develop a theory of sketched SVD based on randomly projected data (see: J. E. Fowler, Compressive-projection principal component analysis, IEEE Transactions on Image Processing, 18(10):2230-2242, 2009; H. Qi and S. M. Hughes, Invariance of principal components under low-dimensional random projection of the data, IEEE International Conference on Image Processing, October 2012; and A. C. Gilbert, J. Y. Park, and M. B. Wakin, Sketched SVD: Recovering spectral features from compressive measurements, ArXiv e-prints, 2012, which are each incorporated by reference herein).
Of particular relevance to our work is the sparse representation for classification (SRC) algorithm as originally proposed for face recognition by Wright et al. (see J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 31(2):210-227, 2009, which is incorporated by reference herein). In SRC, ℓ1 minimization is used to find the sparsest representation of a test image in a dictionary composed of the training images. Each test image is then classified based on the category of dictionary elements whose sparse coefficients best reconstruct the test image, minimizing the residual in an ℓ2 sense. Notably, in this framework, the classification is robust even using dictionaries of tens or hundreds of random projection measurements, due to the enhanced sparsity of classification.
These existing algorithms have all relied on random measurements, typically using dense or sparse random measurement matrices (A. C. Gilbert and P. Indyk, Sparse recovery using sparse matrices, Proceedings of the IEEE, 98(6):937-947, 2010, which is incorporated by reference herein). It remains unexplored how a refinement process may select an optimal subset of measurements to accomplish the classification, further enhancing sparsity. Moreover, acquiring randomly projected measurements may be impractical where many individual point measurements are prohibitively expensive (e.g., ocean or atmospheric sensors). How can classification be accomplished with very few point measurements, and would the locations of those measurements illuminate coherent features of the data?
B. Contributions and perspectives herein
Methods and systems described herein include a novel framework to harness the enhanced sparsity in image recognition to classify images based on very few pixel measurements. Its distinct contribution consists of optimally selecting, within a large set of measurement locations, a smaller subset of key locations that serve the classification. Classification using very few learned pixel sensors with the systems and methods performs comparably with using the full image. Further, the algorithm has a parameter to tune the trade-off between fewer sensors and accuracy.
Datasets may be vast, and data acquisition can be limited by bandwidth on data streams. Therefore, we also develop an intermediate technique, related to the sketched SVD (see: J. E. Fowler, Compressive-projection principal component analysis, IEEE Transactions on Image Processing, 18(10):2230-2242, 2009; A. C. Gilbert and P. Indyk, Sparse recovery using sparse matrices, Proceedings of the IEEE, 98(6):937-947, 2010; and A. C. Gilbert, J. Y. Park, and M. B. Wakin, Sketched SVD: Recovering spectral features from compressive measurements, ArXiv e-prints, 2012, which are each incorporated by reference herein), to start instead with a subsample of the original data. Starting with 10% of the original data, methods and systems described herein are still able to find nearly optimal sparse sensor locations.
In generic applications, this entire theory is identical if single pixels are replaced with random projection measurements. In the case of images, the use of single pixel measurements has a number of practical advantages, since pixels are the basic unit of measurement in images. Note that sampling with single-pixel measurements is effective when the image features are non-localized, so that measurements and basis are incoherent. Importantly, sparse sensor pixels identified by our algorithm cluster near coherent features in the images.
Our framework for sparse classification has two characteristics that distinguish it significantly from previous work: 1) instead of random measurements, spatial sensor locations are specifically selected; and 2) ℓ1 minimization is applied once during sensor learning, so that classification of new images involves only dot products with a very small measurement vector. A schematic representation of the methods and systems is found in Fig. 1 and associated text. Equations (7) and (9) comprise the primary theoretical contributions; Figs. 2 and 7 illustrate the sensor locations produced by the algorithm.
An engineering perspective of probing complex systems with underlying low- dimensional structure is taken herein, with the explicit goal of forming some categorical decision about the state of the system. The optimally sparse spatial sensors framework is particularly well suited for engineering applications, as an upfront investment in the learning phase allows remarkably efficient performance for all subsequent queries.
Abstractions of this method to more general data sets are discussed in more detail herein.
Without being bound by theory, the principle of enhanced sparsity may also have relevance for biological organisms, which often need to make decisions based on very limited sensory information. Specifically, organisms interact with high-dimensional physical systems in nature but must rely on information gathered through a handful of sensory organs. The methods and systems including the sparse sensor algorithm provide one approach to answering the question: given a fixed budget of sensors, where should they be placed to optimally inform decision-making?
C. Organization
The section below provides an overview of the well-known techniques on which our contributions were built and establishes the notation used for the remainder herein. Summarized are compressive sensing, the PCA-LDA method for categorization, and how random projections may be used to form the sketched SVD. Following that, our algorithm for determining optimally sparse sensor locations by using ℓ1 minimization and a dictionary of learned features is described. Sensor learning was applied to two image discrimination tasks based on real-world data; the results of these experiments are presented in the Examples, and the methods are discussed more generally herein.
II. Framework
Methods and systems include 1) the design of optimal sensor locations that take advantage of enhanced sparsity in classification, and 2) the design of sensors from substantially subsampled data. To put our contributions in context, below is a review of compressive sensing, which describes the theory of ℓ1 reconstruction, sparse random projection matrices, and the sketched SVD. Methods and systems make use of two well-established methods for systematically producing low-rank representations of a high-dimensional data set: principal components analysis (PCA) and linear discriminant analysis (LDA). Examples below demonstrate our algorithm by applying PCA in combination with LDA for face recognition. In general, the method can be implemented with many other dimensionality-reduction and discrimination algorithms, as drawn schematically in the top of Fig. 1.
Fig. 1 illustrates a schematic of the classification procedure based on full (top), randomly subsampled (middle), and sparsely sensed (bottom) data. Data measurements (left) are projected onto a PCA feature space (middle). LDA defines the projection wᵀ from PCA space to ℝ^{c−1} (right), where classification occurs. The Φ matrix is a random projection that sub-samples the full image; the Φ1 matrix is a learned projection that samples the full image at specific sensors as described below. Sparse sensors can also be learned from the randomly sub-sampled image, resulting in Φ2Φ, which is not the same as, but may approximate, Φ1. The number of sensors q is at most q̂, which is bounded by r ≤ q̂ ≤ r(c − 1); for two-way classification (c = 2), q̂ = r.
A. Compressive sensing and sparse representation
Compressive sensing theory states that if the information of a signal x ∈ ℝⁿ is k-sparse in a transformed basis Ψ (all but k coefficients are zero), it is possible to reconstruct the signal from very limited measurements, O(k log(n/k)) (see: E. J. Candes, Compressive sensing, Proceedings of the International Congress of Mathematics, 2006; E. J. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52(2):489-509, 2006; E. J. Candes and M. B. Wakin, An introduction to compressive sampling, IEEE Signal Processing Magazine, pages 21-30, 2008; and E. J. Candes, Y. Eldar, D. Needell, and P. Randall, Compressed sensing with coherent and redundant dictionaries, Applied and Computational Harmonic Analysis, 31:59-73, 2010, which are each incorporated by reference herein). This technique has widespread applications, including the fields of medical imaging (M. Lustig, D. L. Donoho, J. M. Santos, and J. M. Pauly, Compressed sensing MRI: A look at how CS can improve on current imaging techniques, IEEE Signal Processing Magazine, 25(2):72-82, 2008, which is incorporated by reference herein), neuroscience, and engineering (S. Ganguli and H. Sompolinsky, Compressed sensing, sparsity, and dimensionality in neuronal information processing and data analysis, Annual Review of Neuroscience, 35:485-508, 2012, which is incorporated by reference herein).
The standard framing of the compressive sensing problem is as follows. Suppose the measurement vector x̂ ∈ ℝᵖ is related to the desired signal x ∈ ℝⁿ by Φ, a known p × n measurement matrix: x̂ = Φx. Compressive sensing seeks to recover x from x̂ and applies to the underdetermined case where p ≪ n.
The signal x has coefficients a in the basis Ψ, so that

x̂ = Φx = ΦΨa = Θa.    (1)
If x is sufficiently sparse in Ψ and the matrix Θ obeys the restricted isometry principle (RIP) (E. J. Candes and T. Tao, Decoding by linear programming, IEEE Transactions on Information Theory, 51(12):4203-4215, 2005, which is incorporated by reference herein), the search for a (and thus the reconstruction of x) is possible.
We would like to solve for the sparsest a,

a = argmin_{a'} ‖a'‖₀, subject to x̂ = Θa',    (2)
but this involves an intractable combinatorial search. It has been shown that under certain conditions (related to the RIP), Eq. (2) may be solved by minimizing the ℓ1 norm (E. J. Candes, J. Romberg, and T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Transactions on Information Theory, 52(2):489-509, 2006; and D. L. Donoho, For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution, Communications on Pure and Applied Mathematics, 59(6):797-829, 2006, which are each incorporated by reference herein),

a = argmin_{a'} ‖a'‖₁, subject to x̂ = Θa'.    (3)
Relaxation to a convex ℓ1 minimization bypasses the combinatorially difficult problem. Solutions to Eq. (3) may be found through standard convex optimization routines, or by greedy algorithms such as orthogonal matching pursuit (J. A. Tropp, A. C. Gilbert, and M. J. Strauss, Algorithms for simultaneous sparse approximation, part I: Greedy pursuit, Signal Processing, 86(3):572-588, 2006; and J. A. Tropp and A. C. Gilbert, Signal recovery from random measurements via orthogonal matching pursuit, IEEE Transactions on Information Theory, 53(12):4655-4666, 2007, which are each incorporated by reference herein).
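As a concrete illustration of the greedy alternative to Eq. (3), orthogonal matching pursuit can be sketched in a few lines. This is a minimal sketch on synthetic data; the matrix sizes, sparsity level, and the `omp` helper are illustrative assumptions, not part of the original disclosure:

```python
import numpy as np

def omp(Theta, x_hat, k):
    """Greedy orthogonal matching pursuit: build a k-sparse a with Theta @ a ~ x_hat."""
    residual = x_hat.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        scores = np.abs(Theta.T @ residual)
        scores[support] = 0.0                # never reselect a column
        support.append(int(np.argmax(scores)))
        # least-squares fit on the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(Theta[:, support], x_hat, rcond=None)
        residual = x_hat - Theta[:, support] @ coef
    a = np.zeros(Theta.shape[1])
    a[support] = coef
    return a

# toy problem: p = 40 random measurements of a 3-sparse signal in R^100
rng = np.random.default_rng(0)
p, n, k = 40, 100, 3
Theta = rng.standard_normal((p, n)) / np.sqrt(p)
a_true = np.zeros(n)
a_true[[5, 42, 77]] = [1.0, -2.0, 0.5]
a_rec = omp(Theta, Theta @ a_true, k)
print(np.allclose(a_rec, a_true, atol=1e-6))
```

In the noiseless, well-conditioned regime above, the greedy method recovers the sparse coefficients exactly; a convex ℓ1 solver applied to the same problem would return the same answer.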
An important aspect of compressive sensing is the choice of the measurement matrix Φ. To achieve exact reconstruction, Φ must be incoherent with respect to Ψ (therefore satisfying the RIP). Many authors describe the properties of random matrix ensembles that fulfill these conditions with high probability; they include Gaussian, Bernoulli, and random partial Fourier matrices (E. J. Candes, Compressive sensing, Proceedings of the International Congress of Mathematics, 2006; E. J. Candes and T. Tao, Near optimal signal recovery from random projections: Universal encoding strategies?, IEEE Transactions on Information Theory, 52(12):5406-5425, 2006; and D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory, 52(4):1289-1306, 2006, which are each incorporated by reference herein). In addition, sparse random matrices, where many of the entries of the measurement vectors are zero, also allow reconstruction (as reviewed in A. C. Gilbert and P. Indyk, Sparse recovery using sparse matrices, Proceedings of the IEEE, 98(6):937-947, 2010, which is incorporated by reference herein).
In addition to the random measurement framework for reconstruction, we are concerned with the related question of the spectral properties of a data matrix under random projection. Consider a data matrix X = [x₁ x₂ ... x_m] ∈ ℝ^{n×m}. Let

X̂ = ΦX,

where Φ ∈ ℝ^{p×n} and p ≪ n, as above. Under what conditions are the spectral properties (i.e., the singular values and vectors) of X and X̂ approximately equal? Gilbert et al. (see A. C. Gilbert, J. Y. Park, and M. B. Wakin, Sketched SVD: Recovering spectral features from compressive measurements, ArXiv e-prints, 2012, which is incorporated by reference herein) found that spectral properties are approximately equal when the random
measurement matrix Φ satisfies the distributed Johnson-Lindenstrauss (JL) property. This property regards the preservation of relative distances between points after random projection to a new space. Since the classification task depends on a geometric separation of data points between classes, the JL property is integral to the success of this model reduction step (dashed arrow between the full and subsampled PCA feature spaces in Fig. 1). A related line of work on sparse representation for classification (SRC) (J. Wright,
A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse
representation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI),
31(2):210-227, 2009, which is incorporated by reference herein) proposes to leverage sparsity promotion by ℓ1 minimization for image recognition. In this formulation of the classification problem, each training image makes up a column of the dictionary Θ, and the test image y is represented as a linear combination of the dictionary elements weighted by coefficients a. Thus posed, the solution to the following equation is the sparsest representation of the test image given the dictionary,
a = argmin_{a'} ‖a'‖₁, subject to y = Θa'.

Note that the above equation is identical to Eq. (3).
It is then possible to construct a sparse approximation of y using only the coefficients in a associated with the ith class,

ŷᵢ = Θδᵢ(a),

where δᵢ(a) is a vector the same size as a whose only non-zero entries are the entries in a associated with training images in class i. Finally, y is assigned a category based on the class whose approximation minimizes the residual between y and ŷᵢ:

category(y) = argmin_i ‖y − Θδᵢ(a)‖₂.
The SRC framework exploits enhanced sparsity for classification, making use of the sparse subspace structure of face images. It has been demonstrated to work exceptionally well for many possible dictionaries Θ, including common features such as eigenfaces and Fisherfaces, and uncommon features such as downsampled faces and random projections.
It is worth noting that the performance of SRC for face recognition is achieved at the cost of one ℓ1 minimization for each test image. This is distinct from the methods and systems including the algorithm discussed herein, which uses ℓ1 minimization to identify sparse sensor locations in the training phase only.
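The SRC decision rule (mask the coefficients by class, then pick the class whose coefficients best reconstruct the test sample) can be sketched as follows. This is a minimal sketch on synthetic data: for brevity the sparse coding step is replaced by a least-squares solve rather than a true ℓ1 minimization, and the dictionary, labels, and class geometry are illustrative assumptions:

```python
import numpy as np

def src_classify(Theta, labels, y, a):
    """Assign y to the class whose dictionary coefficients best reconstruct it."""
    classes = np.unique(labels)
    residuals = []
    for c in classes:
        a_c = np.where(labels == c, a, 0.0)      # delta_i(a): zero out other classes
        residuals.append(np.linalg.norm(y - Theta @ a_c))
    return int(classes[np.argmin(residuals)])

rng = np.random.default_rng(1)
n, per_class = 30, 8
u, v = np.eye(n)[0], np.eye(n)[1]                # two well-separated class directions
Theta = np.column_stack(
    [u + 0.05 * rng.standard_normal(n) for _ in range(per_class)]
    + [v + 0.05 * rng.standard_normal(n) for _ in range(per_class)]
)
labels = np.array([0] * per_class + [1] * per_class)

y = u + 0.05 * rng.standard_normal(n)            # a new "image" from class 0
a, *_ = np.linalg.lstsq(Theta, y, rcond=None)    # stand-in for the l1 coding step
print(src_classify(Theta, labels, y, a))
```

Because the class-0 columns reconstruct y far better than the class-1 columns, the residual rule assigns y to class 0; with a true ℓ1 solver the coefficient vector a would additionally be sparse.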
B. PCA, LDA, and classification
Two main dimensionality reduction techniques are used to demonstrate the pixel refinement algorithm discussed below. Consider a set of observations xᵢ, i = 1, . . . , m, where xᵢ ∈ ℝⁿ; also suppose that m ≪ n. Let us construct a data matrix X with columns xᵢ, as represented schematically in Fig. 1; we assume that the rows of X have been mean subtracted. PCA consists of performing a singular value decomposition (SVD) defined by the following modal decomposition:
X = ΨΣV*.    (4)
The columns of Ψ are the left singular vectors of X; they span the columns of X and are often referred to as the principal components or features of the data set. The matrix Σ is diagonal, and its entries σᵢᵢ are the singular values of X; there are only m nonzero singular values, and we may write Σ = [Σ_{m×m} 0]ᵀ.
The columns of the matrix Ψ are eigenvectors of XXᵀ:

XXᵀ Ψ = Ψ Λ,  where Λ = [ Σ²_{m×m} 0 ; 0 0 ].
However, this is an extremely expensive eigendecomposition, since the dimension of XXᵀ is n × n. In implementation, it is more practical to use the method of snapshots (L. Sirovich, Turbulence and the dynamics of coherent structures, parts I-III, Q. Appl. Math., XLV(3):561-590, 1987, which is incorporated by reference herein), whereby we solve the m × m eigendecomposition for V:

XᵀX V = V Σ²_{m×m}.
We may then obtain the m non-zero PCA modes by taking a linear combination of the columns of X:

Ψ_m = X V Σ⁻¹_{m×m}.
The SVD provides a systematic approach to transform data into a lower-dimensional representation. For data sets that are inherently low-rank, the singular values Σ of Eq. (4), when ordered by magnitude, will contain many entries that are exactly zero, providing evidence for an exact low-rank representation. With most realistic applications and data sets, though, the singular values often exhibit a power-law decrease, and the low-rank representation of the data is approximate. This power-law decrease provides the opportunity for a heuristic (at least quantitatively informed) decision about the number of information-rich dimensions to retain for the reduced-order subspace. Taking the r columns of Ψ corresponding to the r largest singular values as basis vectors, a truncated basis Ψᵣ can be formed. We may use this basis to project data from the full measurement space into the reduced r-dimensional PCA space (known as feature space):

Ψᵣᵀ : ℝⁿ → ℝʳ,  x ↦ a.    (5)

Thus, the principal components Ψᵣ minimize the ℓ2 projection error, ‖x − Ψᵣ Ψᵣᵀ x‖₂.
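The truncation of Eq. (4) and the projection of Eq. (5) can be illustrated in a few lines of NumPy; the data sizes and the synthetic low-rank matrix below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 50, 20, 5
# synthetic rank-r data matrix X (one observation per column), rows mean-subtracted
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
X = X - X.mean(axis=1, keepdims=True)

Psi, sigma, Vt = np.linalg.svd(X, full_matrices=False)   # Eq. (4): X = Psi Sigma V*
Psi_r = Psi[:, :r]                                        # truncated feature basis
A = Psi_r.T @ X                                           # Eq. (5): x -> a, feature space

# for exactly rank-r data the projection error ||x - Psi_r Psi_r^T x||_2 vanishes
print(np.allclose(Psi_r @ A, X, atol=1e-8))

# method of snapshots: eigendecomposition of the small m x m matrix X^T X
evals = np.linalg.eigvalsh(X.T @ X)[::-1]
print(np.allclose(np.sqrt(np.clip(evals[:r], 0, None)), sigma[:r], atol=1e-6))
```

The second check confirms that the m × m snapshot eigendecomposition recovers the same leading singular values as the full SVD, which is the point of the method of snapshots when n ≫ m.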
For categorical decisions, we apply the well-known classifier linear discriminant analysis (LDA) in r-dimensional feature space. Despite the plethora of classifier techniques, we chose LDA for its simplicity and well-established behavior. We note once again that PCA and LDA serve only to illustrate the concept proposed in Fig. 1.
We start by noting that each observation xᵢ (and thus each aᵢ = Ψᵣᵀ xᵢ) belongs to one of c distinct categories Cⱼ, where j = 1, . . . , c. LDA attempts to find a set of directions in feature space w ∈ ℝ^{r×(c−1)} along which the between-class variance is maximized and the within-class variance is minimized. Specifically,

w = argmax_{w'} (w'ᵀ S_B w') / (w'ᵀ S_W w').    (6)
Here S_W is the within-class scatter matrix,

S_W = Σ_{j=1}^{c} Σ_{i∈Cⱼ} (aᵢ − μⱼ)(aᵢ − μⱼ)ᵀ,
where μⱼ is the mean of class j. S_B is the between-class scatter matrix,

S_B = Σ_{j=1}^{c} Nⱼ (μⱼ − μ)(μⱼ − μ)ᵀ,
where Nⱼ is the number of observations in class j and μ is the mean of all observations in A = Ψᵣᵀ X (in this case μ = 0). LDA may also be performed on the full data X.
It follows that w are the eigenvectors corresponding to the nonzero eigenvalues of S_W⁻¹ S_B. It should be noted that the number of non-trivial LDA directions must be less than or equal to the number of categories minus one (c − 1), and that at least r + c samples are needed to guarantee that S_W is not singular (C. M. Bishop, Pattern recognition and machine learning, Springer, New York, 2006, which is incorporated by reference herein).
In schematic form, the top "full image" row of Fig. 1 illustrates the PCA-LDA process for the classification task. In the case of c = 2 categories, the LDA projection is wᵀ : ℝʳ → ℝ, and we may apply a threshold value that separates the two categories. Assuming the two categories have equal covariances, we pick the threshold to be the midpoint between the means of the two classes, wᵀμ₁ and wᵀμ₂.
For decisions between c > 2 categories, LDA produces a projection wᵀ : ℝʳ → ℝ^{c−1}.
We use a nearest centroid (NC) method for classification, in which a new measurement xᵢ is assigned to the category j for which the distance between wᵀaᵢ and wᵀμⱼ is minimal. Many other classifier choices are possible in the reduced ℝ^{c−1} space, including nearest neighbor (NN) (R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification, Wiley-Interscience, 2000, which is incorporated by reference herein), nearest subspace (NS) (K. Lee, J. Ho, and D. J. Kriegman, Acquiring linear subspaces for face recognition under variable lighting, IEEE Trans Pattern Anal Mach Intell, 27(5):684-98, May 2005, which is incorporated by reference herein), support vector machines (SVM) (A. Munoz and J. M. Moguerza, Estimation of high-density regions using one-class neighbor machines, IEEE Trans Pattern Anal Mach Intell, 28(3):476-80, Mar 2006, which is incorporated by reference herein), and sparse representation (SRC) (J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, Robust face recognition via sparse representation, IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 31(2):210-227, 2009, which is incorporated by reference herein). While it is possible these classifiers may improve the accuracy of the decision, attention is restricted to NC since the focus is not on the specific decision algorithm, but rather on the method of learning sensor locations.
III. Obtaining Sparse Sensor Locations
The method and system to learn sparse sensor locations is described below, building upon the theory described above. We begin by addressing classification between c = 2 categories, reducing a full measurement vector to a small handful of key measurements. Also described below is an intermediate technique that starts instead with a random subsample of the full data. Next, an extension of the algorithm to classification between c > 2 categories is developed. Also introduced is a coupling weight λ, which allows us to decrease the number of sensors required at the cost of slightly lower classification accuracy. Finally, demonstrated below is the possibility of transforming the full-dimensional feature/discrimination projection into an approximate projection from the learned sparse measurements to the decision space. Alternatively, it is possible to recompute a new projection to decision space directly from the data matrix comprised of sparse measurements, as in the last row of Fig. 1. In the following, we refer to features Ψᵣ (e.g., PCA modes) and discrimination vectors w (e.g., LDA directions) to emphasize the generality of the method and system.
A. Deciding between two categories
Let us first consider learning sparse sensors for a classification problem between c = 2 categories. The discrimination vector w encodes the direction in Ψᵣ that is most informative in discriminating between the categories of observations given by X. We seek a measurement vector s that satisfies Ψᵣᵀ s = w. In particular, we seek the sparse solution s, finding the measurements that best reconstruct w; our approach is to solve for

s = argmin_{s'} ‖s'‖₁, subject to Ψᵣᵀ s' = w.    (7)

The equations for s are underconstrained, so we may use ℓ1 minimization to find the sparse solution with at most r nonzero elements. It is important to remember that s ∈ ℝⁿ and can be visualized as an image where most of the pixels are zero. The r non-zero elements of s represent locations of sensors that best recapture the discriminant projection vector Ψᵣ w.
To motivate this approach, consider that we have constructed a projection Ψᵣᵀ onto a set of features and a projection wᵀ from feature space to classification space; we may simplify these into a single projection:

η = wᵀ Ψᵣᵀ x_test = ⟨X, x_test⟩,

where X = Ψᵣ w. Therefore, Eq. (7) finds the sparse measurement vector s that projects to the same feature space coordinates as X:

Ψᵣᵀ s = Ψᵣᵀ X = Ψᵣᵀ Ψᵣ w = I_{r×r} w = w.
Finally, we see that

s = X + ξ,    (8)

where ξ ∈ ker(Ψᵣᵀ). That is to say, the sparse measurement vector s is the same as the vector X (e.g., a Fisherface) plus some residual vector that does not contain any dominant features (i.e., is not in the span of the dominant features).
Figure 2 depicts a visualization of the Fisherface X = Ψᵣ w and how sparse sensors, as described herein, approximate its dominant features: (a) X (black) and the sparse approximation s (red); (b) an alternative visualization of X (grey) and the locations of the sparse sensors (non-zero elements of s, in red). See below for specific details of the cat/dog image recognition.
To compare X and s, Fig. 2 depicts a visualization of these vectors for the dog/cat face recognition problem discussed below. Figure 2(a) shows the magnitude of X along with the 20-pixel measurement vector s obtained by ℓ1 minimization of Eq. (7). The image of X (grey) and the sparse pixels (red) are shown in Fig. 2(b). Importantly, it is not possible to obtain sparse pixel measurements by thresholding X (Fig. 2(a)); rather, s is a sparse image that exactly projects to w in Ψᵣ space.
To implement this optimal sub-sampling at the learned sensors, we construct a q × n projection matrix Φ1 that maps x ↦ x̂. The number of sensors q is at most r, although in practice q is usually equal to r. The rows of Φ1 are the rows of the n × n identity matrix corresponding to the non-zero elements of s.
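Eq. (7) is a standard basis-pursuit problem and can be posed as a linear program by splitting s into positive and negative parts. The sketch below uses scipy.optimize.linprog; the sizes, the random orthonormal feature basis, and the random discrimination vector are illustrative assumptions rather than quantities learned from image data:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)
n, r = 100, 5
Psi_r = np.linalg.qr(rng.standard_normal((n, r)))[0]   # stand-in feature basis (n x r)
w = rng.standard_normal(r)                             # stand-in discrimination vector

# Eq. (7): minimize ||s||_1 subject to Psi_r^T s = w.
# Write s = u - v with u, v >= 0, so that ||s||_1 = 1^T (u + v).
c = np.ones(2 * n)
A_eq = np.hstack([Psi_r.T, -Psi_r.T])
res = linprog(c, A_eq=A_eq, b_eq=w, bounds=(0, None))
s = res.x[:n] - res.x[n:]

sensors = np.flatnonzero(np.abs(s) > 1e-8)   # non-zero elements of s = sensor rows of Phi_1
print(len(sensors) <= r, np.allclose(Psi_r.T @ s, w, atol=1e-6))
```

Because the linear program has only r equality constraints, a vertex solution has at most r non-zero entries, which is exactly the "at most r sensors" property described above; the rows of the identity matrix indexed by `sensors` form the matrix Φ1.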
D. Learning from randomly subsampled data
The identical approach applies when starting from the subsampled data X̂ (the middle "sub-sampled" row of Fig. 1). We now solve for s that best reconstructs the vector w in a space defined by the columns of Ψ̂ᵣ. Φ is then refined to construct Φ2, whose rows are the r rows of the p × p identity matrix corresponding to the non-zero elements of s.
Note that Φ1 and Φ2Φ are both r × n matrices, where typically r ≪ p ≪ n. Although the two sub-sampling matrices are not the same, we show below that they are quite similar when p/n ≥ 0.1.
Finally, having learned sparse sensor locations, we project the data into ℝʳ,

X̂ = ΦX,

where Φ can be either Φ1 or Φ2Φ, as illustrated in Fig. 1. Possible methods to project these learned sparse measurements to classification space are described below.
E. Deciding between more than two categories
The simplest extension to classification between c > 2 categories can be implemented by considering the projection wᵀ : ℝʳ → ℝ^{c−1} to decision space, followed by independently solving Eq. (7) for each column of w. However, this approach scales badly with c; in general, discriminating c categories of data by projection into r-dimensional feature space results in at most q = r(c − 1) learned sensor locations.
An alternative formulation of the convex optimization problem solves for the columns of s ∈ ℝ^{n×(c−1)} simultaneously; each column of s (an image) projects to a column of w (a discrimination vector) in feature space. We introduce a norm that penalizes the total number of non-zero rows of s (pixel measurements) used to reconstruct the c − 1 columns of w. Specifically,

s = argmin_{s'} { ‖s'‖₁ + λ ‖s'‖_{2,1} },  subject to ‖Ψᵣᵀ s' − w‖_F ≤ ε,    (9)

where ‖s'‖_{2,1} = Σᵢ ‖s'_{i,·}‖₂ sums the ℓ2 norms of the rows of s' (penalizing the number of non-zero rows), ‖·‖_F denotes the Frobenius norm, and ε is a small error tolerance (ε ≈ 10⁻¹⁰ for examples as below).
Once again, we have an underdetermined system of equations for s. The value of the coupling weight λ determines the number of non-zero rows of s, so that the number of sensors q identified is at most q̂, where r ≤ q̂ ≤ r(c − 1).
In the limit where λ = 0, the solution to Eq. (9) is the same as that obtained by the uncoupled, independent approach. In general, solving Eq. (9) with λ = 0 would result in at most r(c − 1) sensor locations. As λ becomes larger, the coupling between the columns of s becomes stronger, and the same pixel location can be shared between columns of s to approximately reconstruct the columns of w. In other words, the same pixel measurement is re-used to capture multiple linear discriminant projection vectors. In the limit λ → ∞, a minimum of r sensors are found as non-zero rows of s.
The optimization problem as formulated in Eq. (9) is closely related to the one solved by Simultaneous Orthogonal Matching Pursuit (S-OMP; see J. A. Tropp, A. C. Gilbert, and M. J. Strauss, Algorithms for simultaneous sparse approximation, part I: Greedy pursuit, Signal Processing, 86(3):572-588, 2006; and J. A. Tropp, Algorithms for simultaneous sparse approximation, part II: Convex relaxation, Signal Processing, 86(3):589-602, 2006, which are each incorporated by reference herein). S-OMP is a greedy algorithm, so instead of a coupling weight parameter λ, one would decide on a stopping criterion (for example, the desired number of iterations/sensors).
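A greedy S-OMP-style selection of shared sensor rows can be sketched as follows; here the dictionary of candidate sensors is Ψᵣᵀ (one column per pixel) and the targets are the c − 1 columns of w. The sizes and random matrices are illustrative assumptions, and this greedy sketch is a stand-in for the convex formulation of Eq. (9), not the formulation itself:

```python
import numpy as np

def somp(D, W, num_sensors):
    """Simultaneous OMP: greedily pick columns of D (candidate sensors) that
    jointly approximate every column of the target matrix W."""
    residual = W.copy()
    support = []
    for _ in range(num_sensors):
        # couple the targets: score each sensor by its total correlation
        scores = np.abs(D.T @ residual).sum(axis=1)
        scores[support] = 0.0                          # never reselect a sensor
        support.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(D[:, support], W, rcond=None)
        residual = W - D[:, support] @ coef
    return support, coef

rng = np.random.default_rng(5)
r, n, c = 4, 60, 3
D = rng.standard_normal((r, n))        # stand-in for Psi_r^T
W = rng.standard_normal((r, c - 1))    # stand-in for the c-1 discrimination vectors
support, coef = somp(D, W, num_sensors=r)
print(len(support), np.allclose(D[:, support] @ coef, W, atol=1e-6))
```

Selecting r shared sensors suffices to reconstruct all c − 1 discrimination vectors exactly in this generic example, mirroring the λ → ∞ limit of Eq. (9), in which a minimum of r shared rows is found.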
Following the same approach as in deciding between two categories, we implement sub-sampling at learned sensors to construct a projection matrix Φ.
F. Projecting sparse measurements to classification space
In the above procedure, the projections from the high-dimensional space into feature space (Ψᵣᵀ) and then into decision space (wᵀ) are computed as a one-time upfront cost. It is then possible to re-use these projections to obtain an induced projection into the decision space starting only from the learned sparse measurements. The alternative is to recompute the discrimination vectors on the sparse measurement data X̂.
Given a new image x, we obtain a vector η of decision space coordinates as follows:

η = wᵀ Ψᵣᵀ x.    (10)

We then substitute w = Ψᵣᵀ s and x ≈ Φᵀ x̂, where x̂ = Φx, into Eq. (10):

η̂ = sᵀ Ψᵣ Ψᵣᵀ Φᵀ x̂,    (11)

where η̂ ≈ η. Define z as the measurement projection of s, so that z = Φs and s ≈ Φᵀ z. Substituting into Eq. (11) yields:

η̂ = zᵀ Φ Ψᵣ Ψᵣᵀ Φᵀ x̂ = zᵀ T x̂.
The matrix T = Φ Ψᵣ Ψᵣᵀ Φᵀ defines a new inner product on the space of sparse measurements that preserves the geometry of decision space. To compute T efficiently, compute Φ Ψᵣ and Ψᵣᵀ Φᵀ separately and then multiply the results.
Re-computing the discrimination projection on the sparse data X̂ = ΦX is typically inexpensive because of the small number of rows. Additionally, for multi-way classification, this usually leads to better performance, as is discussed in the Examples.
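The algebra of the induced projection can be checked numerically: for any measurement vector supported only on the sensor locations, projecting through T from sensor space agrees exactly with Eq. (11). The sizes and the randomly chosen sensor set below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r, q = 80, 4, 4
Psi_r = np.linalg.qr(rng.standard_normal((n, r)))[0]    # stand-in feature basis

sensors = rng.choice(n, size=q, replace=False)           # stand-in sensor locations
Phi = np.eye(n)[sensors]                                 # q x n sub-sampling matrix

# T = Phi Psi_r Psi_r^T Phi^T, computed as (Phi Psi_r)(Psi_r^T Phi^T)
T = (Phi @ Psi_r) @ (Psi_r.T @ Phi.T)

x = rng.standard_normal(n)
x_hat = Phi @ x                                          # sparse measurements of x

z = rng.standard_normal(q)                               # z = Phi s for s supported on sensors
s = Phi.T @ z

eta_T = z @ T @ x_hat                                    # decision coordinate via T
eta_11 = s @ Psi_r @ (Psi_r.T @ (Phi.T @ x_hat))         # Eq. (11) with x ~ Phi^T x_hat
print(np.isclose(eta_T, eta_11))
```

The identity holds exactly because s = Φᵀz implies zᵀ(ΦΨᵣ)(ΨᵣᵀΦᵀ)x̂ = sᵀΨᵣΨᵣᵀΦᵀx̂; the approximation enters only through the substitution x ≈ Φᵀx̂ in Eq. (11) itself.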
Figure 9 shows a system in which one or more methodologies or technologies can be implemented, such as, for example, determining optimal sparse set information, determining significant sensors from a sensor network, determining significant pixel identification information, classifying information associated with a target decision task, classifying imaged objects, refining a large set of measurement locations to learn a much smaller subset of key locations that best serve a classification task, or the like. Figure 9 illustrates the extension of the image classification procedure to a broader class of datasets. In this case, single-pixel measurements become individual sensors in a sensor network.
The system includes an algorithm that refines a large set of measurement locations to learn a much smaller subset of key locations that best serve a classification task. The algorithm exploits enhanced sparsity for classification, where reconstruction may be bypassed. Enhanced sparsity provides an orders-of-magnitude reduction in the number of measurements required when compared with standard compressive sensing strategies. Measurements are projected directly into decision space; reconstruction from so few measurements is neither possible nor needed.
Our algorithm leverages the sparsity-promoting ℓ1 minimization as phrased in Equations (7) or (9). These convex optimization problems solve for a sparse set of sensor locations that closely approximates the linear discrimination vector in PCA space. The approach may be generalized to a variety of other dimensionality reduction and discrimination algorithms. In addition, once a set of sensors has been obtained, a number of more sophisticated classifier algorithms may be applied to these measurements to improve the classification accuracy. These steps are shown in panels (d) and (e) of Fig. 9. The approach was demonstrated on two examples of image recognition (see Examples). Classifiers built on learned sensors approached the performance of classifiers built on projections onto principal components of the full images. These optimal sensor locations could be learned approximately even from already randomly subsampled images. In either case, the learned sensors performed significantly better than a matched number of randomly chosen sensors. Further, the ensemble of learned sensor locations clustered around coherent features of the images. It is possible to think of this ensemble as a pixel mask for the faces in the training set, which may be of use when applied to engineered or biological systems where the important features may not be salient by inspection.
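The ℓ1 minimization of the text can be posed as a standard linear program by splitting the unknown into positive and negative parts. The sketch below, assuming SciPy, uses random stand-ins for the PCA basis Ψr and discrimination vector w; the function name and tolerance are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

def sparse_sensor_locations(Psi_r, w, tol=1e-8):
    """Solve min ||s||_1 subject to Psi_r^T s = w as a linear program.

    Splitting s = u - v with u, v >= 0 makes ||s||_1 = sum(u + v) linear.
    """
    n = Psi_r.shape[0]
    c = np.ones(2 * n)                         # objective: sum(u) + sum(v)
    A_eq = np.hstack([Psi_r.T, -Psi_r.T])      # Psi_r^T (u - v) = w
    res = linprog(c, A_eq=A_eq, b_eq=w, bounds=[(0, None)] * (2 * n))
    s = res.x[:n] - res.x[n:]
    return s, np.flatnonzero(np.abs(s) > tol)  # weights and sensor indices

# Random stand-ins: 50 candidate locations, 5 PCA modes
rng = np.random.default_rng(1)
Psi_r, _ = np.linalg.qr(rng.standard_normal((50, 5)))
w = rng.standard_normal(5)
s, sensors = sparse_sensor_locations(Psi_r, w)
```

Because the linear program has only r equality constraints, a vertex solution has at most r nonzero entries, so at most r of the 50 candidate locations are selected as sensors.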
It is plausible that biological organisms may exploit enhanced sparsity for classification. Organisms interact with the external world with motor outputs, which are often discrete trajectories at specific moments in time, so that the transformation from sensory inputs to motor outputs can be thought of as a classification task. For example, a fly has no need to reconstruct the full flow velocity field around its body; it has only to decide what to do in response to a gust. Sensory organs and data processing by the nervous system can be expensive; therefore, it is advantageous to place a smaller number of sensors at key locations on the body.
Although introduced for image discrimination, the system and algorithm may be applied to a variety of non-image data types with more general sensor networks; this is shown schematically in Fig. 9. One such application is the detection of disease spread in epidemiological monitoring. We also envision these methods applying to mobile sensor networks in oceanographic and atmospheric sampling, as well as to detect and monitor various network behaviors in the electrical grid, internet packet routing, and
transportation.
Figure 12 illustrates applications for some embodiments of the methods and systems described herein. In some embodiments, the systems and methods are applicable for use with grid technologies. For example, methods and systems described herein can be applied to epidemiological monitoring, power electrical grid systems, internet-related systems, and grids relating to the transport of goods and services. In some embodiments, the systems and methods are applicable for use with medical diagnostics. For example, methods and systems described herein can be used as part of medical diagnostic systems such as those used in genetics, epigenetics, metabolites and natural sensing for
prosthetics. In some embodiments, the systems and methods are applicable for use with engineering diagnostics. For example, methods and systems described herein can be used in process control for manufacturing and acoustic tone engineering diagnostics. In some embodiments, the systems and methods described herein are applicable for use with other technologies. For example, methods and systems described herein can be used with RGB-D images and questionnaire refinement.
In an embodiment, the system includes an object classification system. In an embodiment, the object classification system includes one or more modules. For example, in an embodiment, the object classification system includes a significant sensor module operable to determine significant sensor information associated with a sensor network. In an embodiment, the object classification system includes a significant pixel identification module.
In an embodiment, a module includes, among other things, one or more computing devices such as a processor (e.g., a microprocessor), a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), or the like, or any combinations thereof, and can include discrete digital or analog circuit elements or electronics, or combinations thereof. In an embodiment, a module includes one or more ASICs having a plurality of predefined logic components. In an embodiment, a module includes one or more FPGAs, each having a plurality of programmable logic components.
In an embodiment, a module includes one or more components operably coupled (e.g., communicatively, electromagnetically, magnetically, ultrasonically, optically, inductively, electrically, capacitively coupled, or the like) to each other. In an embodiment, a module includes one or more remotely located components. In an embodiment, remotely located components are operably coupled, for example, via wireless communication. In an embodiment, remotely located components are operably coupled, for example, via one or more receivers, transmitters, transceivers, antennas, or the like. In an embodiment, a module includes one or more routines, components, data structures, interfaces, and the like.
In an embodiment, a module includes memory that, for example, stores
instructions or information. For example, in an embodiment, a control module includes memory that stores optimal sparse pixel set information, near optimal sparse pixel set information, reference image information, training image information, protocol information, sensor set information, optimal sparse sensor set information, near optimal sparse sensor set information, optimal sparse mote location information, etc. Non-limiting examples of memory include volatile memory (e.g., Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or the like), non-volatile memory (e.g., Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM), or the like), persistent memory, or the like. Further non-limiting examples of memory include Erasable
Programmable Read-Only Memory (EPROM), flash memory, or the like. In an embodiment, the memory is coupled to, for example, one or more computing devices by one or more instructions, information, or power buses. For example, in an embodiment, the significant pixel identification module includes memory that, for example, stores optimal sparse pixel set information, near optimal sparse pixel set information, reference image information, training image information, protocol information, or the like.
In an embodiment, a module includes one or more computer-readable media drives, interface sockets, Universal Serial Bus (USB) ports, memory card slots, or the like, and one or more input/output components such as, for example, a graphical user interface, a display, a keyboard, a keypad, a trackball, a joystick, a touch-screen, a mouse, a switch, a dial, or the like, and any other peripheral device. In an embodiment, a module includes one or more user input/output components, user interfaces, or the like, that are operably coupled to at least one computing device configured to control (electrical,
electromechanical, software-implemented, firmware-implemented, or other control, or combinations thereof) at least one parameter associated with, for example, controlling one or more of activating, operating, or the like, the object classification system.
In an embodiment, a module includes a computer-readable media drive or memory slot that is configured to accept a signal-bearing medium (e.g., computer-readable memory media, computer-readable recording media, or the like). In an embodiment, a program for causing a system to execute any of the disclosed methods can be stored on, for example, a computer-readable recording medium (CRMM), a signal-bearing medium, or the like. Non-limiting examples of signal-bearing media include a recordable type medium such as a magnetic tape, floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), Blu-Ray Disc, a digital tape, a computer memory, or the like, as well as a transmission type medium such as a digital or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, or a wireless communication link (e.g., receiver, transmitter, transceiver, transmission logic, reception logic, etc.)). Further non-limiting examples of signal-bearing media include, but are not limited to, DVD-ROM, DVD-RAM, DVD+RW, DVD-RW, DVD-R, DVD+R, CD-ROM, Super Audio CD, CD-R, CD+R, CD+RW, CD-RW, Video Compact Discs, Super Video Discs, flash memory, magnetic tape, magneto-optic disk, MINIDISC, non-volatile memory card, EEPROM, optical disk, optical storage, RAM, ROM, system memory, web server, or the like.
In an embodiment, the significant pixel identification module includes circuitry for determining significant pixels from one or more reference images. For example, in an embodiment, the significant pixel identification module includes circuitry for determining significant pixels from one or more training images.
In an embodiment, the significant pixel identification module includes circuitry for generating optimal sparse pixel set information. In an embodiment, the significant pixel identification module includes circuitry for generating near optimal sparse pixel set information. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on one or more learning protocols.
In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a modal decomposition protocol. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a convex optimization protocol. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on an ℓ1-minimization protocol. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a sparse random matrices protocol. In an embodiment, the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a dense random matrices protocol.
In an embodiment, the significant pixel identification module includes circuitry for determining one or more discrimination vectors associated with the significant pixels based on a discrimination analysis protocol.
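A discrimination analysis protocol of the kind referenced above can be sketched as a two-class Fisher linear discriminant computed on data already projected into PCA space. The classes, dimensions, and function name below are illustrative stand-ins, not the disclosure's implementation:

```python
import numpy as np

def lda_direction(Xa, Xb, reg=1e-6):
    """Two-class Fisher discrimination vector for row-sample matrices Xa, Xb."""
    mu_a, mu_b = Xa.mean(axis=0), Xb.mean(axis=0)
    # within-class scatter, regularized so the solve is well posed
    Sw = np.cov(Xa, rowvar=False) + np.cov(Xb, rowvar=False)
    Sw += reg * np.eye(Xa.shape[1])
    w = np.linalg.solve(Sw, mu_b - mu_a)       # w ~ Sw^-1 (mu_b - mu_a)
    return w / np.linalg.norm(w)

# Random stand-ins: two classes already projected onto r = 8 PCA modes
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 8)) + 2.0         # class a
B = rng.standard_normal((40, 8)) - 2.0         # class b
w = lda_direction(A, B)
```

Projections of the two classes onto w separate along the discrimination direction, which is the vector the sparse pixel selection then approximates.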
In an embodiment, the object classification system includes an object classification module. For example, in an embodiment, the object classification system includes an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information. In an embodiment, the object classification module includes circuitry for classifying at least one object in the first image based on the comparison. In an embodiment, the object classification module includes circuitry for classifying the at least one object in the first image based on comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and the one or more discrimination vectors. In an embodiment, the object classification module includes circuitry for determining reference data clusters associated with the significant pixels in a sparse sensor space.
In an embodiment, the object classification system includes circuitry for classifying an object in at least one image. For example, in an embodiment, the object classification system includes circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image. In an embodiment, the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and generating classification information associated with at least one object in the first image based on a learning protocol.
In an embodiment, an object classification system includes circuitry for determining an optimal sparse pixel set. For example, in an embodiment, the object classification system includes circuitry for determining an optimal sparse pixel set from one or more reference images. In an embodiment, the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more discrimination vectors from the one or more reference images. In an embodiment, the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more sparse pixel locations. In an embodiment, the circuitry for determining the optimal sparse pixel set includes circuitry for generating a feature space transformation from the one or more reference images. In an embodiment, the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more weighting factors associated with the optimal sparse pixel set.
In an embodiment, the object classification system includes circuitry for classifying an object. For example, in an embodiment, the object classification system includes circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image. In an embodiment, the circuitry for classifying the object in at least one image includes circuitry for generating one or more classification categories associated with the object.
In an embodiment, an object classification system includes a significant pixel determination from one or more reference images module. In an embodiment, the object classification system includes an optimal sparse pixel set information generation module.
In an embodiment, the object classification system includes an object classification module. In an embodiment, the object classification module includes a comparison module configured to compare one or more first image pixels to optimal sparse pixel set information, the one or more first image pixels indicative of at least one object in the first image. In an embodiment, the object classification module includes a first image object classification module configured to classify at least one object in the first image based on a comparison of one or more pixels of the first image to optimal sparse pixel set information.
In an embodiment, an object classification system includes a significant pixel identification module and an object classification module. In an embodiment, the significant pixel identification module includes circuitry for determining significant pixels from one or more reference images, and generating optimal sparse pixel set information. In an embodiment, the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information. In an embodiment, the object classification module includes circuitry for classifying at least one object in the first image based on the comparison.
In an embodiment, a system includes a significant sensors identification module. For example, in an embodiment, the system includes a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network. In an embodiment, the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network. In an embodiment, the significant sensors identification module includes circuitry for determining nearly optimal sparse sensor location information associated with the sensor network. In an embodiment, the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network based on at least one ℓ1-minimization protocol.
In an embodiment, the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network. In an embodiment, the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network, and wherein the optimal sparse sensor module includes circuitry for generating optimal sparse mote set information from the significant motes responsive to one or more target decision task inputs. In an embodiment, the significant sensors identification module includes circuitry for determining optimal sparse mote location information associated with a mote network.
In an embodiment, the system includes an optimal sparse sensor module. For example, in an embodiment, the system includes an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors. In an embodiment, the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
In an embodiment, the system includes a target decision task module. For example, in an embodiment, the system includes a target decision task module including circuitry for classifying information associated with the target decision task. In an embodiment, the system includes a target decision task module including circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information. In an embodiment, the system for classifying grid behavior includes circuitry for determining significant sensors from a sensor network.
In an embodiment, the system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information from the significant sensors. For example, in an embodiment, the system for classifying grid behavior includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs. In an embodiment, the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more discrimination vectors associated with the sensor network. In an embodiment, the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more sparse sensor locations associated with the sensor network.
In an embodiment, the system for classifying grid behavior includes circuitry for classifying information associated with the target decision task. For example, in an embodiment, the system for classifying grid behavior includes circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information. In an embodiment, the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification categories associated with the target decision task. In an embodiment, the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification parameters associated with the target decision task.
In an embodiment, a system includes one or more modules. For example, in an embodiment, the system includes a significant sensors determination from a sensor network module. In an embodiment, the system includes an optimal sparse sensor set information generation module. In an embodiment, the optimal sparse sensor set information generation module is configured to generate optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs. In an embodiment, the system includes a target decision task classification module. In an embodiment, the target decision task classification module is configured to classify information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information. In an embodiment, a system includes a significant sensors identification module and an optimal sparse sensor module. In an embodiment, the significant sensors identification module includes circuitry for determining significant sensors associated with a sensor network. In an embodiment, the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors. In an embodiment, the system includes a target decision task module. In an embodiment, the target decision task module includes circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
Figure 13 shows a method 300 for classifying images. At 310, the method 300 includes determining optimal sparse pixel set information from one or more reference images. At 312, determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one recovery protocol. At 314, determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one ℓ1-minimization protocol. At 316, determining the optimal sparse pixel set information includes determining optimal sparse pixel location information. At 318, determining the optimal sparse pixel set information includes determining optimal sparse pixel weighting factor information. At 320, determining the optimal sparse pixel set information includes determining enhanced sparsity classification information associated with the one or more reference images. At 322, determining the optimal sparse pixel set information includes applying a coupling weight protocol.
At 330, determining the optimal sparse pixel set information includes generating discrimination vector information associated with the one or more reference images. At 340, the method 300 includes comparing one or more parameters associated with the optimal sparse pixel set information to one or more pixels associated with an image. At 350, the method 300 includes classifying at least one object in the image based on the comparison.
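The steps of method 300 can be sketched end to end in NumPy. The synthetic two-class images, the pixel count, and the helper name are all illustrative, and the largest-magnitude entries of Ψr w stand in here for the ℓ1-minimization solution described in the text:

```python
import numpy as np

rng = np.random.default_rng(3)
n, r, k = 64, 4, 6                 # pixels per image, PCA modes, pixels kept

# Step 310: reference images for two classes (a bright vs. dark patch)
A = rng.standard_normal((30, n)); A[:, :8] += 3.0
B = rng.standard_normal((30, n)); B[:, :8] -= 3.0
X = np.vstack([A, B])

# Feature space: leading PCA modes of the centered training set
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Psi_r = Vt[:r].T

# Step 330: discrimination vector between the class means in PCA space
w = Psi_r.T @ (B.mean(axis=0) - A.mean(axis=0))

# Steps 316/318: sparse pixel locations and weighting factors (largest
# entries of Psi_r w stand in for the l1-minimization solution)
s = Psi_r @ w
pixels = np.argsort(-np.abs(s))[:k]    # optimal sparse pixel locations
weights = s[pixels]                    # pixel weighting factors

# Steps 340/350: compare and classify a new image from k pixels alone
def classify(image):
    return int(image[pixels] @ weights > 0)   # 1 leans toward class B
```

The classifier touches only k of the n pixels of a new image, which is the measurement reduction the method is designed to exploit.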
Figure 14 shows a method 400. At 410, the method 400 includes determining significant sensors from a sensor network. At 412, determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity protocol. At 414, determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity for classification protocol. At 416, determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space. At 418, determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space based on an ℓ1-minimization protocol. At 420, determining significant sensors from the sensor network includes determining nearly optimal sparse sensor locations information. At 422, determining the significant sensors from the sensor network includes determining significant motes from a sensor network, and generating the optimal sparse sensor set information includes generating optimal sparse mote set information from the significant motes.
At 430, the method 400 includes generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs. At 440, the method 400 includes classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information. At 442, classifying the information associated with the target decision task includes generating categorical decision information about a state of the sensor network.
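A non-image variant of method 400 can be sketched for a hypothetical sensor network with two behaviors. The network states, the mean-difference sensor selection (a stand-in for the ℓ1-based selection of the text), and the nearest-centroid decision rule are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n_nodes, p = 200, 5                   # candidate sensors vs. sensors kept

# Hypothetical network states: two behaviors with different nodal signatures
normal = rng.standard_normal((50, n_nodes))
fault = rng.standard_normal((50, n_nodes)); fault[:, 10:15] += 4.0

# Step 410: significant sensors = nodes where the class means differ most
# (a stand-in for the l1-based selection)
diff = fault.mean(axis=0) - normal.mean(axis=0)
sensors = np.argsort(-np.abs(diff))[:p]

# Step 430: centroids of each behavior in the sparse measurement space
c_normal = normal[:, sensors].mean(axis=0)
c_fault = fault[:, sensors].mean(axis=0)

# Step 440/442: categorical decision about the state of the network
def classify_state(x):
    z = x[sensors]                    # measure only the significant sensors
    d_n, d_f = np.linalg.norm(z - c_normal), np.linalg.norm(z - c_fault)
    return "fault" if d_f < d_n else "normal"
```

Only p of the n_nodes sensors are ever read at decision time, corresponding to the categorical decision about the state of the sensor network made from sparse measurements.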
It is noted that Figures 13 and 14 denote "start" and "end" positions. However, nothing herein should be construed to indicate that these are limiting, and it is contemplated that other or additional steps or functions can occur before or after those described in Figures 13 and 14.
The claims, description, and drawings of this application may describe one or more of the instant technologies in operational/functional language, for example as a set of operations to be performed by a computer. Such operational/functional description in most instances can be implemented by specifically-configured hardware (e.g., because a general purpose computer in effect becomes a special purpose computer once it is programmed to perform particular functions pursuant to instructions from program software).
Importantly, although the operational/functional descriptions described herein are understandable by the human mind, they are not abstract ideas of the operations/functions divorced from computational implementation of those operations/functions. Rather, the operations/functions represent a specification for the massively complex computational machines or other means. As discussed in detail below, the operational/functional language must be read in its proper technological context, i.e., as concrete specifications for physical implementations.
The logical operations/functions described herein are a distillation of machine specifications or other physical mechanisms specified by the operations/functions such that the otherwise inscrutable machine specifications may be comprehensible to the human mind. The distillation also allows one of skill in the art to adapt the operational/functional description of the technology across many different specific vendors' hardware configurations or platforms, without being limited to specific vendors' hardware configurations or platforms.
Some of the present technical description (e.g., detailed description, drawings, claims, etc.) may be set forth in terms of logical operations/functions. As described in more detail in the following paragraphs, these logical operations/functions are not representations of abstract ideas, but rather representative of static or sequenced specifications of various hardware elements. Differently stated, unless context dictates otherwise, the logical operations/functions are representative of static or sequenced specifications of various hardware elements. This is true because tools available to implement technical disclosures set forth in operational/functional formats—tools in the form of a high-level programming language (e.g., C, Java, Visual Basic, etc.), or tools in the form of Very High Speed Integrated Circuit Hardware Description Language ("VHDL," which is a language that uses text to describe logic circuits)—are generators of static or sequenced specifications of various hardware configurations. This fact is sometimes obscured by the broad term "software," but, as shown by the following explanation, what is termed "software" is a shorthand for a massively complex interchanging/specification of ordered-matter elements. The term "ordered-matter elements" may refer to physical components of computation, such as assemblies of electronic logic gates, molecular computing logic constituents, quantum computing mechanisms, etc.
For example, a high-level programming language is a programming language with strong abstraction, e.g., multiple levels of abstraction, from the details of the sequential organizations, states, inputs, outputs, etc., of the machines that a high-level programming language actually specifies. See, e.g., Wikipedia, High-level programming language, available at the website en.wikipedia.org/wiki/High-level_programming_language (as of June 5, 2012, 21:00 GMT). In order to facilitate human comprehension, in many instances, high-level programming languages resemble or even share symbols with natural languages. See, e.g., Wikipedia, Natural language, available at the website en.wikipedia.org/wiki/Natural_language (as of June 5, 2012, 21:00 GMT).
It has been argued that because high-level programming languages use strong abstraction (e.g., that they may resemble or share symbols with natural languages), they are therefore a "purely mental construct" (e.g., that "software"—a computer program or computer programming—is somehow an ineffable mental construct, because at a high level of abstraction, it can be conceived and understood in the human mind). This argument has been used to characterize technical description in the form of
functions/operations as somehow "abstract ideas." In fact, in technological arts (e.g., the information and communication technologies) this is not true.
The fact that high-level programming languages use strong abstraction to facilitate human understanding should not be taken as an indication that what is expressed is an abstract idea. In an embodiment, if a high-level programming language is the tool used to implement a technical disclosure in the form of functions/operations, it can be understood that, far from being abstract, imprecise, "fuzzy," or "mental" in any significant semantic sense, such a tool is instead a near incomprehensibly precise sequential specification of specific computational machines—the parts of which are built up by activating/selecting such parts from typically more general computational machines over time (e.g., clocked time). This fact is sometimes obscured by the superficial similarities between high-level programming languages and natural languages. These superficial similarities also may cause a glossing over of the fact that high-level programming language implementations ultimately perform valuable work by creating/controlling many different computational machines.
The many different computational machines that a high-level programming language specifies are almost unimaginably complex. At base, the hardware used in the computational machines typically consists of some type of ordered matter (e.g., traditional electronic devices (e.g., transistors), deoxyribonucleic acid (DNA), quantum devices, mechanical switches, optics, fluidics, pneumatics, optical devices (e.g., optical interference devices), molecules, etc.) that is arranged to form logic gates. Logic gates are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to change physical state in order to create a physical reality of Boolean logic.
Logic gates may be arranged to form logic circuits, which are typically physical devices that may be electrically, mechanically, chemically, or otherwise driven to create a physical reality of certain logical functions. Types of logic circuits include such devices as multiplexers, registers, arithmetic logic units (ALUs), computer memory devices, etc., each type of which may be combined to form yet other types of physical devices, such as a central processing unit (CPU)— the best known of which is the microprocessor. A modern microprocessor will often contain more than one hundred million logic gates in its many logic circuits (and often more than a billion transistors). See, e.g., Wikipedia, Logic gates, available at the website en.wikipedia.org/wiki/Logic_gates (as of June 5, 2012, 21:03 GMT).
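As a software illustration of how logic gates compose into logic circuits, the sketch below models idealized Boolean gates and chains them into half- and full adders. This is a conceptual model only, not a description of any particular device technology.

```python
# Idealized one-bit Boolean gates operating on 0/1 integers.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    """Combine two gates into a circuit: return (sum, carry) for one-bit inputs."""
    return XOR(a, b), AND(a, b)

def full_adder(a, b, cin):
    """Chain two half-adders and an OR gate to add three one-bit inputs."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, OR(c1, c2)
```

Chaining such circuits further (e.g., rippling full adders) yields the arithmetic units mentioned above.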
The logic circuits forming the microprocessor are arranged to provide a
microarchitecture that will carry out the instructions defined by that microprocessor's defined Instruction Set Architecture. The Instruction Set Architecture is the part of the microprocessor architecture related to programming, including the native data types, instructions, registers, addressing modes, memory architecture, interrupt and exception handling, and external Input/Output. See, e.g., Wikipedia, Computer architecture, available at the website en.wikipedia.org/wiki/Computer_architecture (as of June 5, 2012, 21:03 GMT).
The Instruction Set Architecture includes a specification of the machine language that can be used by programmers to use/control the microprocessor. Since the machine language instructions are such that they may be executed directly by the microprocessor, typically they consist of strings of binary digits, or bits. For example, a typical machine language instruction might be many bits long (e.g., 32, 64, or 128 bit strings are currently common). A typical machine language instruction might take the form
"11110000101011110000111100111111" (a 32 bit instruction).
It is significant here that, although the machine language instructions are written as sequences of binary digits, in actuality those binary digits specify physical reality. For example, if certain semiconductors are used to make the operations of Boolean logic a physical reality, the apparently mathematical bits "1" and "0" in a machine language instruction actually constitute a shorthand that specifies the application of specific voltages to specific wires. For example, in some semiconductor technologies, the binary number "1" (e.g., logical "1") in a machine language instruction specifies around +5 volts applied to a specific "wire" (e.g., metallic traces on a printed circuit board) and the binary number "0" (e.g., logical "0") in a machine language instruction specifies around -5 volts applied to a specific "wire." In addition to specifying voltages of the machines' configuration, such machine language instructions also select out and activate specific groupings of logic gates from the millions of logic gates of the more general machine. Thus, far from abstract mathematical expressions, machine language instruction programs, even though written as a string of zeros and ones, specify many, many constructed physical machines or physical machine states.
Machine language is typically incomprehensible to most humans (e.g., the above example was just ONE instruction, and some personal computers execute more than two billion instructions every second). See, e.g., Wikipedia, Instructions per second, available at the website en.wikipedia.org/wiki/Instructions_per_second (as of June 5, 2012, 21:04 GMT).
Thus, programs written in machine language— which may be tens of millions of machine language instructions long— are incomprehensible. In view of this, early assembly languages were developed that used mnemonic codes to refer to machine language instructions, rather than using the machine language instructions' numeric values directly (e.g., for performing a multiplication operation, programmers coded the abbreviation "mult," which represents the binary number "011000" in MIPS machine code). While assembly languages were initially a great aid to humans controlling the microprocessors to perform work, in time the complexity of the work that needed to be done by the humans outstripped the ability of humans to control the microprocessors using merely assembly languages.
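The MIPS example above can be made concrete with a short sketch that encodes the R-type "mult" instruction as a 32-bit string. The field layout (opcode, rs, rt, rd, shamt, funct) follows the standard MIPS R-type format; the register numbers passed in are the caller's choice.

```python
def encode_mult(rs, rt):
    """Encode the MIPS R-type 'mult rs, rt' instruction as a 32-bit string.

    R-type layout: opcode(6) | rs(5) | rt(5) | rd(5) | shamt(5) | funct(6).
    For 'mult', the opcode, rd, and shamt fields are zero and funct is 0b011000.
    """
    opcode, rd, shamt, funct = 0, 0, 0, 0b011000
    word = (opcode << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct
    return format(word, "032b")
```

The resulting string of ones and zeroes is exactly the kind of bit pattern that, per the discussion above, specifies physical voltages and gate selections in the hardware.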
At this point, it was noted that the same tasks needed to be done over and over, and the machine language necessary to do those repetitive tasks was the same. In view of this, compilers were created. A compiler is a device that takes a statement that is more comprehensible to a human than either machine or assembly language, such as "add 2 + 2 and output the result," and translates that human understandable statement into a complicated, tedious, and immense machine language code (e.g., millions of 32, 64, or 128 bit length strings). Compilers thus translate high-level programming language into machine language. This compiled machine language, as described above, is then used as the technical specification which sequentially constructs and causes the interoperation of many different computational machines such that humanly useful, tangible, and concrete work is done. For example, as indicated above, such machine language (the compiled version of the higher-level language) functions as a technical specification which selects out hardware logic gates, specifies voltage levels, voltage transition timings, etc., such that the humanly useful work is accomplished by the hardware.
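CPython's built-in compiler offers a readily observable instance of this translation: a human-readable statement is compiled into lower-level instructions (here, bytecode for a virtual machine rather than native machine language, but the principle is the same).

```python
import dis

# Compile the human-readable statement "2 + 2" into low-level instructions.
code = compile("2 + 2", "<example>", "eval")

# Disassemble the compiled code object to show its instruction sequence,
# the bytecode analogue of the machine language discussed above.
dis.dis(code)

# Executing the compiled instructions performs the useful work.
result = eval(code)
```

The disassembly printed by `dis.dis` is the intermediate, humanly-opaque representation that the comprehensible source statement was translated into.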
Thus, a functional/operational technical description, when viewed by one of skill in the art, is far from an abstract idea. Rather, such a functional/operational technical description, when understood through the tools available in the art such as those just described, is instead understood to be a humanly understandable representation of a hardware specification, the complexity and specificity of which far exceeds the
comprehension of most any one human. Accordingly, any such operational/functional technical descriptions may be understood as operations made into physical reality by (a) one or more interchained physical machines, (b) interchained logic gates configured to create one or more physical machine(s) representative of sequential/combinatorial logic(s), (c) interchained ordered matter making up logic gates (e.g., interchained electronic devices (e.g., transistors), DNA, quantum devices, mechanical switches, optics, fluidics, pneumatics, molecules, etc.) that create physical reality representative of logic(s), or (d) virtually any combination of the foregoing. Indeed, any physical object which has a stable, measurable, and changeable state may be used to construct a machine based on the above technical description. Charles Babbage, for example, constructed the first computer out of wood, powered by cranking a handle.
Thus, far from being understood as an abstract idea, it can be recognized that a functional/operational technical description is a humanly-understandable representation of one or more almost unimaginably complex and time-sequenced hardware instantiations. The fact that functional/operational technical descriptions might lend themselves readily to high-level computing languages (or high-level block diagrams for that matter) that share some words, structures, phrases, etc. with natural language simply cannot be taken as an indication that such functional/operational technical descriptions are abstract ideas, or mere expressions of abstract ideas. In fact, as outlined herein, in the technological arts this is simply not true. When viewed through the tools available to those of skill in the art, such functional/operational technical descriptions are seen as specifying hardware configurations of almost unimaginable complexity.
As outlined above, the reason for the use of functional/operational technical descriptions is at least twofold. First, the use of functional/operational technical descriptions allows near-infinitely complex machines and machine operations arising from interchained hardware elements to be described in a manner that the human mind can process (e.g., by mimicking natural language and logical narrative flow). Second, the use of functional/operational technical descriptions assists the person of skill in the art in understanding the described subject matter by providing a description that is more or less independent of any specific vendor's piece(s) of hardware.
The use of functional/operational technical descriptions assists the person of skill in the art in understanding the described subject matter since, as is evident from the above discussion, one could easily, although not quickly, transcribe the technical descriptions set forth in this document as trillions of ones and zeroes, billions of single lines of assembly-level machine code, millions of logic gates, thousands of gate arrays, or any number of intermediate levels of abstractions. However, if any such low-level technical descriptions were to replace the present technical description, a person of skill in the art could encounter undue difficulty in implementing the disclosure, because such a low-level technical description would likely add complexity without a corresponding benefit (e.g., by describing the subject matter utilizing the conventions of one or more vendor-specific pieces of hardware). Thus, the use of functional/operational technical descriptions assists those of skill in the art by separating the technical descriptions from the conventions of any vendor-specific piece of hardware.
In view of the foregoing, the logical operations/functions set forth in the present technical description are representative of static or sequenced specifications of various ordered-matter elements, in order that such specifications may be comprehensible to the human mind and adaptable to create many various hardware configurations. The logical operations/functions disclosed herein should be treated as such, and should not be disparagingly characterized as abstract ideas merely because the specifications they represent are presented in a manner that one of skill in the art can readily understand and apply in a manner independent of a specific vendor's hardware implementation.
At least a portion of the devices or processes described herein can be integrated into an information processing system. An information processing system generally includes one or more of a system unit housing, a video display device, memory, such as volatile or non- volatile memory, processors such as microprocessors or digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices (e.g., a touch pad, a touch screen, an antenna, etc.), or control systems including feedback loops and control motors (e.g., feedback for detecting position or velocity, control motors for moving or adjusting components or quantities). An information processing system can be implemented utilizing suitable commercially available components, such as those typically found in data computing/communication or network computing/communication systems.
The state of the art has progressed to the point where there is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. There are various vehicles by which processes, systems, or other technologies described herein can be effected (e.g., hardware, software, firmware, etc., in one or more machines or articles of manufacture), and the preferred vehicle will vary with the context in which the processes, systems, other technologies, etc., are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation that is implemented in one or more machines or articles of manufacture; or, yet again alternatively, the implementer may opt for some combination of hardware, software, firmware, etc. in one or more machines or articles of manufacture. Hence, there are several possible vehicles by which the processes, devices, other technologies, etc., described herein may be effected, none of which is inherently superior to the others in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. In an embodiment, optical aspects of implementations will typically employ optically-oriented hardware, software, firmware, etc., in one or more machines or articles of manufacture.
The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely examples, and that in fact, many other architectures can be implemented that achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively "associated" such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as "associated with" each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being "operably connected," or "operably coupled," to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being "operably coupleable," to each other to achieve the desired functionality. Specific examples of operably coupleable include, but are not limited to, physically mateable, physically interacting components, wirelessly interactable, wirelessly interacting components, logically interacting, logically interactable components, etc.
In an embodiment, one or more components may be referred to herein as
"configured to," "configurable to," "operable/operative to," "adapted/adaptable," "able to," "conformable/conformed to," etc. Such terms (e.g., "configured to") can generally encompass active-state components, or inactive-state components, or standby-state components, unless context requires otherwise.
The foregoing detailed description has set forth various embodiments of the devices or processes via the use of block diagrams, flowcharts, or examples. Insofar as such block diagrams, flowcharts, or examples contain one or more functions or operations, it will be understood by the reader that each function or operation within such block diagrams, flowcharts, or examples can be implemented, individually or collectively, by a wide range of hardware, software, firmware in one or more machines or articles of manufacture, or virtually any combination thereof. Further, the use of "Start," "End," or "Stop" blocks in the block diagrams is not intended to indicate a limitation on the beginning or end of any functions in the diagram. Such flowcharts or diagrams may be incorporated into other flowcharts or diagrams where additional functions are performed before or after the functions shown in the diagrams of this application. In an embodiment, several portions of the subject matter described herein are implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and designing the circuitry or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure.
In addition, the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal-bearing medium used to actually carry out the distribution. Non-limiting examples of a signal-bearing medium include the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Video Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital or an analog
communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link (e.g., transmitter, receiver, transmission logic, reception logic, etc.), etc.).
While particular aspects of the present subject matter described herein have been shown and described, it will be apparent to the reader that, based upon the teachings herein, changes and modifications can be made without departing from the subject matter described herein and its broader aspects and, therefore, the appended claims are to encompass within their scope all such changes and modifications as are within the true spirit and scope of the subject matter described herein. In general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as "open" terms (e.g., the term "including" should be interpreted as "including but not limited to," the term "having" should be interpreted as "having at least," the term "includes" should be interpreted as "includes but is not limited to," etc.). Further, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases "at least one" and "one or more" to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim recitation to claims containing only one such recitation, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an" (e.g., "a" and/or "an" should typically be interpreted to mean "at least one" or "one or more"); the same holds true for the use of definite articles used to introduce claim recitations.
In addition, even if a specific number of an introduced claim recitation is explicitly recited, such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of "two recitations," without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense of the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to "at least one of A, B, or C, etc." is used, in general such a construction is intended in the sense of the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). Typically a disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms unless context dictates otherwise. For example, the phrase "A or B" will be typically understood to include the possibilities of "A" or "B" or "A and B."
With respect to the appended claims, the operations recited therein generally may be performed in any order. Also, although various operational flows are presented in a sequence(s), it should be understood that the various operations may be performed in orders other than those that are illustrated, or may be performed concurrently. Examples of such alternate orderings include overlapping, interleaved, interrupted, reordered, incremental, preparatory, supplemental, simultaneous, reverse, or other variant orderings, unless context dictates otherwise. Furthermore, terms like "responsive to," "related to," or other past-tense adjectives are generally not intended to exclude such variants, unless context dictates otherwise.
We demonstrated our approach on two examples of image recognition. Classifiers built on learned sensors approached the performance of classifiers built on projections to principal components of the full images. These optimal sensor locations could be learned approximately even from already randomly subsampled images. In either case, the learned sensors performed significantly better than the matched number of randomly chosen sensors. Further, the ensemble of learned sensor locations clustered around coherent features of the images. It is possible to think of this ensemble as a pixel mask for the faces in the training set, which may be of use when applied to engineered or biological systems where the important features may not be salient by inspection.
It is plausible that biological organisms may exploit enhanced sparsity for classification. Organisms interact with the external world with motor outputs, which are often discrete trajectories at specific moments in time, so that the transformation from sensory inputs to motor outputs can be thought of as a classification task. For example, a fly has no need to reconstruct the full flow velocity field around its body— it has only to decide what to do in response to a gust. Sensory organs and data processing by the nervous system can be expensive; therefore, it is advantageous to place a smaller number of sensors at key locations on the body.
Examples
We apply the algorithm for learning sparse sensor locations, as described above, to two examples of face recognition based on publicly available image datasets. Both data sets were presented and described previously (B. W. Brunton, S. L. Brunton, J. L. Proctor, J. N. Kutz, Optimal Sensor Placement and Enhanced Sparsity for Classification, arXiv:1310.2417 [cs.CV], which is incorporated by reference herein). In each experiment, we demonstrate that a classifier built on a few optimally placed sparse sensors performs comparably to a classifier built on PCA features of the full image. Moreover, approximately optimal sparse sensor locations may be learned from pixels randomly subsampled from 10% of the full image. Ensembles of the learned sensors cluster at locations consistent with the coherent features in the faces.
A. Experiment 1 - Cats and Dogs
Our first example seeks to classify images of cats versus images of dogs. Each image belongs to one of two distinct species, and there is considerable variability among individuals of the same species (Fig. 10). Notably, members of each species take on a large range of colorations in fur, markings, and ear postures. We used 121 images each of cats and dogs; images have n = 64x64 = 4096 pixels and are stacked into columns of X. Figure 10 (b) shows the first four PCA modes of X.
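The data setup described above can be sketched as follows. Synthetic random data stands in for the actual cat/dog images, and the PCA modes are computed as left singular vectors of the mean-subtracted data matrix X.

```python
import numpy as np

# Synthetic stand-in data: the experiment used 121 cat and 121 dog images
# of n = 64x64 = 4096 pixels each.
rng = np.random.default_rng(0)
n_pixels, n_images = 4096, 242
images = rng.standard_normal((n_images, n_pixels))

# Subtract the mean image and stack each image as a column of X.
mean_image = images.mean(axis=0)
X = (images - mean_image).T

# PCA modes of X are its left singular vectors; Fig. 10 (b) shows the first four.
U, sigma, Vt = np.linalg.svd(X, full_matrices=False)
pca_modes = U[:, :4]

# Projecting onto the first r modes gives the PCA features used for classification.
features = U[:, :15].T @ X
```

On real images, the leading columns of `U`, reshaped to 64x64, would display the eigen-pet modes of Fig. 10 (b).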
We compared classification accuracy using learned measurements as well as using the principal components of the full image and using random pixel measurements. The accuracy for all of these methods improved with larger numbers of sensors/features; r learned sensors performed almost as well as r principal components and consistently better than r random pixel sensors. Figure 3 shows the results of experiments where classifiers were trained on a random 90% of the images and assessed on the remaining 10% of images. The mean and standard deviation of the cross-validated accuracy over 400 random iterations of training/test images are plotted. Classifiers built on less than 1% of the total pixels (green and red lines in Fig. 3 (a)) performed nearly as well as projections to principal components of the full image.
Figure 3 depicts a cross-validation of classification accuracy between images of cats and dogs. Panel (a) compares using r learned sensors/features (solid red and green lines) against using r random pixel sensors (dashed blue line) and projections onto the first r principal components of the full image (solid blue line). Each data point summarizes 400 random iterations. At each iteration, a different 90% subsample was used to train the classifier, whose accuracy was assessed on the remaining 10% of images. Error bars are standard deviations. Panel (b) shows a summary of mean cross-validated accuracy varying p, the number of pixels used in the random subsample, and r, the number of
features/sensors used to construct the classifier.
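The cross-validation protocol can be sketched as below. Synthetic two-class data stands in for the images, a nearest-centroid rule stands in for the LDA classifier, and random pixels stand in for the learned sensors; only the protocol (repeated random 90/10 splits, accuracy scored on the held-out images) mirrors the experiment.

```python
import numpy as np

# Synthetic, well-separated two-class data standing in for cat/dog images.
rng = np.random.default_rng(1)
n_pixels, n_per_class, r = 200, 60, 15
cats = rng.standard_normal((n_per_class, n_pixels)) + 1.0
dogs = rng.standard_normal((n_per_class, n_pixels)) - 1.0
X = np.vstack([cats, dogs])
y = np.array([0] * n_per_class + [1] * n_per_class)

def nearest_centroid_accuracy(features, y, train, test):
    """Nearest-centroid stand-in for the LDA classifier."""
    c0 = features[train][y[train] == 0].mean(axis=0)
    c1 = features[train][y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(features[test] - c1, axis=1)
            < np.linalg.norm(features[test] - c0, axis=1)).astype(int)
    return (pred == y[test]).mean()

accs = []
for _ in range(100):  # the experiment used 400 random iterations
    perm = rng.permutation(len(y))
    n_train = int(0.9 * len(y))            # train on a random 90% of images
    train, test = perm[:n_train], perm[n_train:]
    pixels = rng.choice(n_pixels, size=r, replace=False)  # r pixel sensors
    accs.append(nearest_centroid_accuracy(X[:, pixels], y, train, test))

mean_acc = np.mean(accs)
std_acc = np.std(accs)
```

Repeating the loop with learned sensors and with PCA features, then comparing the resulting means and standard deviations, would reproduce the comparison plotted in Fig. 3 (a).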
Accuracy reached a plateau of around 80% at around r = 15 features/sensors (Fig. 3 (b)), suggesting the discriminant vector between cats and dogs may be over-fit when more than 15 features were used. Figure 3 (b) shows mean classification performance using r sensors learned from p subsampled measurements and r features, where p and r are varied systematically. Sparse sensors learned from p = 400 pixels did almost as well as those learned from p = n = 4096 pixels. In other words, we were able to learn a nearly optimally sparse set of sensors starting from an already massively undersampled set of pixels.
The performance of all of these approaches is limited by the large variability in appearances within the categories of cats and dogs. In fact, the images that are most often misclassified include animals that are predominantly one color (black or white) and dogs with pointy as opposed to droopy ears. In other words, some dog images may not lie close to the bulk of the other dog images in feature space, so that a linear classifier is not able to capture the complex geometry separating the two categories.
It is possible to build classifiers from random pixels that perform significantly better than chance. The improvement in accuracy of using computed features or learned sensors over random pixels became smaller as r increased (Fig. 3 (a)). The perhaps surprising performance of random pixels is consistent with the notion of enhanced sparsity and Wright et al.'s (J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 31(2):210-227, 2009, which is incorporated by reference herein) observation that using random features, as long as there are enough of them, serves classification as well as using engineered features.
Figure 4 shows the mean sensor locations averaged over 400 random learning iterations (top row), compared to images of the mean cat and the mean dog (bottom row). For the top row, r = 15 sensor locations are learned at each iteration from a random 50% of images designated as the training set, and the colormap from black to white represents the probability that a sensor is at that location. In other words, we are visualizing the distribution of sensor locations, obtained by summing the rows of the corresponding measurement matrix. Panel (a) shows sensors learned with no subsampling; panel (b) shows sensors learned from 400 randomly sampled pixels; panel (c) shows sensors learned from 400 random projections. In the bottom row, panel (d) shows the centroid of the raw data, which was subtracted from each image to obtain X. Panels (e) and (f) show the average cat and the average dog after mean subtraction. Comparing the top row to the difference between (e) and (f), it is apparent that sensors cluster around the forehead, eyes, mouth, and the tops of ears.
If the sparse sensors represent pixels that are key to the decision, then they should be clustered at locations that are maximally informative about the difference between cats and dogs. Indeed, an average of sensor locations learned over 400 random iterations shows clustering around the animals' mouth, forehead, eyes, and tops of ears (Fig. 4 (a)). When sparse sensors were learned from an already subsampled dataset, a qualitatively similar distribution of sensor locations was found, where each cluster of sensors showed a slightly larger spatial variance (Fig. 4 (b)). Such an ensemble of sensor placements can be thought of as a mask for facial features particularly relevant for the classification.
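The ensemble visualization of Fig. 4 can be sketched as follows. Sensor locations are drawn at random here as a stand-in for the per-iteration learning step; in the experiment they would come from the sensor learning algorithm at each of the 400 iterations.

```python
import numpy as np

# Accumulate learned sensor locations over many iterations into a
# probability map over the 64x64 pixel grid.
rng = np.random.default_rng(2)
n_pixels, r, n_iters = 4096, 15, 400

counts = np.zeros(n_pixels)
for _ in range(n_iters):
    # Stand-in for the learning step: r sensor locations per iteration.
    sensors = rng.choice(n_pixels, size=r, replace=False)
    counts[sensors] += 1

# Probability that a sensor lands at each pixel, displayed as an image
# with a black-to-white colormap as in the top row of Fig. 4.
prob_map = counts / n_iters
prob_image = prob_map.reshape(64, 64)
```

With learned (rather than random) sensors, the bright pixels of `prob_image` would cluster around the informative facial features.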
Our algorithm applies equally well when the subsampled dataset (middle row in Fig. 1) is not a set of random pixels but a set of random projections. Figure 5 shows that classifiers built on r sensors learned from p = 400 pixels or p = 400 random projections perform identically. Random projections used to produce this result consisted of ensembles of Bernoulli random variables with mean of 0.5. Even so, Fig. 4 (c) makes clear that using random projections instead of pixels is a poor engineering choice. The ensemble of sensors computed from random projections does not illuminate coherent facial features.
Figure 5 shows that the classification accuracy of sensors learned from random projections (black dashed line) is the same as that of sensors learned from single pixels (red solid line). The experiments performed were identical to those shown in Fig. 3. Random projections were ensembles of Bernoulli random variables with mean of 0.5.
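Generating and applying such Bernoulli random projections might look like the following sketch, with synthetic data standing in for the image matrix X and p = 400 projections of n = 4096-pixel images.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_images, p = 4096, 242, 400

# Synthetic stand-in for the (mean-subtracted) image data matrix X.
X = rng.standard_normal((n_pixels, n_images))

# Each row of Phi is an ensemble of Bernoulli random variables with mean 0.5.
Phi = rng.binomial(1, 0.5, size=(p, n_pixels)).astype(float)

# Y holds p random projections of each image, used in place of p random pixels.
Y = Phi @ X
```

The sensor learning step would then be run on `Y` exactly as it is run on randomly subsampled pixels.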
B. Experiment 2 - Yale B Faces
We extended sparse sensor learning to classification between more than two categories, applying our approach to human face recognition. We used the extended Yale Face Database B (A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(6):643-660, 2001, and K. Lee, J. Ho, and D. J Kriegman. Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans Pattern Anal Mach Intell, 27(5):684-98, May 2005, which are each incorporated by reference herein), which contains images of individual faces captured under various lighting conditions (example images in Fig. 11 (a)). Images have been previously aligned and cropped; each has n = 192x168 = 32,256 pixels. Figure 11 (b) shows the first six eigenfaces corresponding to the three example sets of faces. Comparing the faces dataset with the cat/dog dataset above, there is significantly less variability within each category in facial features, although large portions of the faces were effectively occluded by lack of illumination.
Figure 6 depicts classification of 3, 4, and 5 faces as the coupling weight λ varies between 0 and 100. The number of sensors and the cross-validated performance are shown, comparing sensors learned with no subsampling (solid green lines), sensors learned from 1/10 of the pixels (solid red lines), and random sensors of the same number (dashed blue lines) against the accuracy obtained using r PCA features of the full images (blue square). Each instance of the classifier was trained on a random 75% of the images and evaluated on the remaining 25%; error bars are standard deviations.
We used our sensor learning approach to learn sensor locations that categorize c = 3, 4, and 5 faces and compared the classification accuracy of learned sensors to projections onto PCA features of the full image and to random sensors of the same number. Figure 6 shows that, as the coupling weight λ is increased, the number of learned sensors decreases. When λ = 0, a maximum of r(c - 1) pixel locations were found. In the limit λ→∞, the number of sensors approached r, the number of PCA features included in the discrimination. By adjusting the magnitude of λ, we gain control over the number of sensor locations used in the classification. When individual sensors are expensive, it may be desirable to use fewer sensors at the cost of slightly lower performance.
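The trade-off controlled by the sparsity penalty can be illustrated with a simple relaxed sketch. The program below solves an unconstrained ℓ1-regularized least-squares surrogate by iterative soft-thresholding (ISTA) and counts the nonzero sensor rows; it is not the exact constrained convex program with coupling weight λ described in the text, and all names and parameter values here are illustrative.

```python
import numpy as np

def sparse_sensor_rows(Psi_r, w, lam, iters=500):
    """Relaxed sketch: minimize 0.5*||Psi_r.T @ s - w||_F^2 + lam*||s||_1
    by ISTA, then report the rows (pixel locations) with nonzero weight.

    Psi_r: n x r feature basis; w: r x (c-1) discriminant vectors.
    """
    n, r = Psi_r.shape
    s = np.zeros((n, w.shape[1]))
    step = 1.0 / np.linalg.norm(Psi_r, 2) ** 2  # 1/L for the smooth term
    for _ in range(iters):
        grad = Psi_r @ (Psi_r.T @ s - w)
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold
    return np.flatnonzero(np.linalg.norm(s, axis=1) > 1e-6)  # sensor rows

rng = np.random.default_rng(0)
Psi_r = np.linalg.qr(rng.standard_normal((200, 10)))[0]  # orthonormal feature basis
w = rng.standard_normal((10, 2))                          # c - 1 = 2 discriminant vectors
few = sparse_sensor_rows(Psi_r, w, lam=0.3)
many = sparse_sensor_rows(Psi_r, w, lam=0.01)
assert len(few) <= len(many)  # a stronger penalty selects fewer sensors
```

Increasing the penalty plays the role described in the text: fewer sensor locations survive, at the cost of a less exact reconstruction of the discriminant vectors.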
The cross-validated accuracies shown in the bottom row of Fig. 6 are results obtained by re-computing the LDA projection wᵀ to decision space after learning the sensor locations. This is typically not an expensive computation, as the number of rows in the measurement vectors is small. Further, an LDA classifier tailor-made for the specific learned pixels can out-perform the PCA approach when the number of sensors exceeds the number of PCA features. In contrast, the induced projection method described above cannot perform better than the PCA-LDA approach. For classification between c > 2 categories, especially for larger coupling weights λ, re-computing the LDA projection usually leads to more accurate results.
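Re-fitting the LDA projection on the learned measurements can be sketched with plain Fisher discriminant analysis. This is an illustrative numpy implementation under assumed shapes, not the authors' code.

```python
import numpy as np

def refit_lda(measurements, labels, n_components):
    """Re-fit an LDA projection directly on the q measured values
    (e.g., the values at the learned pixels), as described for the
    cross-validated results.

    measurements: m x q array (one row per image, q learned sensors)
    labels: length-m integer class labels
    """
    classes = np.unique(labels)
    mu = measurements.mean(axis=0)
    q = measurements.shape[1]
    Sw = np.zeros((q, q))  # within-class scatter
    Sb = np.zeros((q, q))  # between-class scatter
    for c in classes:
        Xc = measurements[labels == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    # Leading eigenvectors of pinv(Sw) @ Sb span the decision space.
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:n_components]].real  # q x n_components projection

# Toy example: c = 3 classes measured at q = 5 sensors
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((30, 5)) + 4 * k for k in range(3)])
y = np.repeat([0, 1, 2], 30)
W = refit_lda(X, y, n_components=2)  # c - 1 = 2 discriminant directions
assert W.shape == (5, 2)
```

Because q (the number of learned sensors) is small, this re-fit is cheap, which is the point made in the text.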
Interestingly, sensors learned from randomly subsampled pixels (10% of the original pixels) performed exactly as well as sensors learned from the full image (red and green lines in Fig. 6). Obtaining sparse sensors from already subsampled data presents significant savings in the learning procedure, both in the SVD to extract a feature space and in the convex optimization to solve for a sparse s.
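Learning from already-subsampled data, as in the 10%-of-pixels experiment, amounts to extracting the feature basis from a random subset of rows. A hedged sketch of that step (function name and shapes are assumptions):

```python
import numpy as np

def subsampled_feature_basis(X, keep_frac, r, seed=0):
    """Learn an r-dimensional PCA feature basis from a random subsample
    of pixels (rows of X), so both the SVD and the subsequent convex
    optimization operate on far fewer rows. Illustrative sketch only.

    X: n_pixels x m_images data matrix (columns assumed mean-subtracted).
    Returns the kept pixel indices and the n_kept x r basis Psi_r.
    """
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    idx = rng.choice(n, size=int(keep_frac * n), replace=False)
    U, _, _ = np.linalg.svd(X[idx], full_matrices=False)
    return idx, U[:, :r]

X = np.random.default_rng(1).standard_normal((1000, 40))
idx, Psi_r = subsampled_feature_basis(X, keep_frac=0.1, r=10)
assert Psi_r.shape == (100, 10)
```

The sparse sensor locations learned in this reduced space index into `idx`, i.e., into the retained pixels of the original image.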
Figure 7 shows that increasing the coupling weight λ results in fewer sensor locations that capture approximately the same features. In both panels, the underlying image is a visualization of the decision vector x = Ψᵣw for an example of categorizing c = 3 faces using r = 10 features. Panel (a) shows the locations of sparse sensors that independently reconstruct the c - 1 = 2 columns of w (yellow and green dots, 20 total). The sensors boxed in red are aggregated in panel (b), where the coupling weight λ brought the total number of sensors down to 15.
Figure 7 illustrates the effect of increasing λ on the number and locations of sensors identified by the solution to the optimization in Eq. (9). For this example, c = 3 faces were in the training set, so w and s each have c - 1 = 2 columns. When λ = 0, the columns of w (linear discriminant vectors in the r = 10 dimensional PCA feature space) are treated independently, and r(c - 1) = 20 total sensors are found. Notice, however, in Fig. 7 (a) that certain pairs of sensor locations are in close proximity (red boxes) and likely carry information about the same non-local facial feature. As λ increases and the total number of sensors is penalized, these sensor pairs appear to collapse onto single sensors in Fig. 7 (b). Comparing the two sets of sensor locations, we can see that, in addition to aggregated sensors, some sensors have remained the same (for example, the tops of each eyebrow), some sensors have disappeared (on the cheek, at bottom right), and some entirely new sensors have appeared (lower right corner of eye).
Figure 8 illustrates mean sensor locations (top left) to discriminate between 3 faces (mean faces b-d), averaged over 400 learning iterations in which a random 75% subset of the images was used as the training set. The mean sensor location map was shrunk by a factor of 4 (using a cubic kernel) to emphasize the sensors converging around the eyes, nose, corners of the mouth, and arches of the eyebrows.
Sparse sensors for face recognition cluster at major facial features: the eyes, the nose, and the corners of the mouth (Fig. 8). The nose is particularly prominent in these masks, consistent with the fact that, due to the eccentricity of illumination, the nose is the only facial feature reliably visible in all images. Interestingly, this data-driven algorithm identifies the same features that are favored by humans. As first noted by Yarbus (A.L. Yarbus. Eye Movements and Vision. Translated by B. Haigh, edited by L.A. Riggs. Plenum Press, New York, 1967, which is incorporated by reference herein), humans examining an image of a face spend a preponderance of time fixating on the eyes, the nose, and the mouth.
Aspects of the subject matter described herein are set out in the following numbered clauses:
1. In some embodiments, an object classification system includes: a significant pixel identification module including circuitry for determining significant pixels from one or more reference images, and generating optimal sparse pixel set information; and an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information; and classifying at least one object in the first image based on the comparison.
2. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating near-optimal sparse pixel set information.
3. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on one or more learning protocols.

4. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a modal decomposition protocol.
5. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a convex optimization protocol.
6. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on an ℓ1-minimization protocol.
7. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a sparse random matrices protocol.
8. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a dense random matrices protocol.
9. Some embodiments include an object classification system such as in paragraph 1, wherein the significant pixel identification module includes circuitry for determining one or more discrimination vectors associated with the significant pixels based on a discrimination analysis protocol.
10. Some embodiments include an object classification system such as in paragraph 9, wherein the object classification module includes circuitry for classifying the at least one object in the first image based on comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and the one or more discrimination vectors.
11. Some embodiments include an object classification system such as in paragraph 1, wherein the object classification module includes circuitry for determining reference data clusters associated with the significant pixels in a sparse sensor space.
12. Some embodiments include an object classification system such as in paragraph 1, wherein the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and generating classification information associated with at least one object in the first image based on a learning protocol.
13. Some embodiments include an object classification system such as in paragraph 1, wherein the one or more reference images includes one or more training images.
14. In some embodiments, an object classification system includes: circuitry for determining an optimal sparse pixel set from one or more reference images; and circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image.
15. Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more discrimination vectors from the one or more reference images.
16. Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more sparse pixel locations.
17. Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for generating a feature space transformation from the one or more reference images.

18. Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more weighting factors associated with the optimal sparse pixel set.
19. Some embodiments include an object classification system such as in paragraph 14, wherein the circuitry for classifying the object in at least one image includes circuitry for generating one or more classification categories associated with the object.
20. In some embodiments, a method for classifying images includes: determining optimal sparse pixel set information from one or more reference images; comparing one or more parameters associated with the optimal sparse pixel set information to one or more pixels associated with an image; and classifying at least one object in the image based on the comparison.
21. Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one recovery protocol.

22. Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one ℓ1-minimization protocol.
23. Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining optimal sparse pixel location information.
24. Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining optimal sparse pixel weighting factor information.
25. Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes determining enhanced sparsity classification information associated with the one or more reference images.
26. Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes applying a coupling weight protocol.
27. Some embodiments include a method for classifying images such as in paragraph 20, wherein determining the optimal sparse pixel set information includes generating discrimination vector information associated with the one or more reference images.

28. In some embodiments, an object classification system includes: a significant pixel determination from one or more reference images module; an optimal sparse pixel set information generation module; and an object classification module.
29. Some embodiments include an object classification system such as in paragraph 28, wherein the object classification module includes: a comparison of one or more first image pixels to optimal sparse pixel set information, the one or more first image pixels indicative of at least one object in the first image; and a first image object classification module configured to classify at least one object in the first image based on a comparison of one or more pixels of the first image to optimal sparse pixel set information.
30. In some embodiments, an object classification system includes: a significant pixel identification module; and an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information; and classifying at least one object in the first image based on the comparison.
31. Some embodiments include an object classification system such as in paragraph 30, wherein the significant pixel identification module includes circuitry for determining significant pixels from one or more reference images, and generating optimal sparse pixel set information.

32. Some embodiments include an object classification system such as in paragraph 30, wherein the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information.

33. Some embodiments include an object classification system such as in paragraph 32, wherein the object classification module includes circuitry for classifying at least one object in the first image based on the comparison.
34. In some embodiments, a system includes: a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network; an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors; and a target decision task module including circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.

35. Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network.
36. Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining nearly optimal sparse sensor location information associated with the sensor network.

37. Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network based on at least one ℓ1-minimization protocol.

38. Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network.

39. Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network, and wherein the optimal sparse sensor module includes circuitry for generating optimal sparse mote set information from the significant motes responsive to one or more target decision task inputs.

40. Some embodiments include a system such as in paragraph 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse mote location information associated with a mote network.

41. Some embodiments include a system such as in paragraph 34, wherein the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.

42. In some embodiments, a system for classifying grid behavior includes: circuitry for determining significant sensors from a sensor network; circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs; and circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.

43. Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more discrimination vectors associated with the sensor network.

44. Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more sparse sensor locations associated with the sensor network.

45. Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification categories associated with the target decision task.

46. Some embodiments include a system for classifying grid behavior such as in paragraph 42, wherein the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification parameters associated with the target decision task.
47. In some embodiments, a method includes: determining significant sensors from a sensor network; generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs; and classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
48. Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity protocol.

49. Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity for classification protocol.

50. Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space.

51. Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space based on an ℓ1-minimization protocol.

52. Some embodiments include a method as in paragraph 47, wherein determining significant sensors from the sensor network includes determining nearly optimal sparse sensor location information.

53. Some embodiments include a method as in paragraph 47, wherein classifying the information associated with the target decision task includes generating categorical decision information about a state of the sensor network.
54. Some embodiments include a method as in paragraph 47, wherein determining the significant sensors from the sensor network includes determining significant motes from a sensor network; and wherein generating the optimal sparse sensor set information includes generating optimal sparse mote set information from the significant motes.
55. In some embodiments, a system includes: a significant sensors determination from a sensor network module; an optimal sparse sensor set information generation module; and a target decision task classification module.
56. Some embodiments include a system such as in paragraph 55, wherein the optimal sparse sensor set information generation module is configured to generate optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
57. Some embodiments include a system such as in paragraph 55, wherein the target decision task classification module is configured to classify information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
58. In some embodiments, a system includes: a significant sensors identification module; and an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors.
59. Some embodiments include a system such as in paragraph 58, further including a target decision task module.
60. Some embodiments include a system such as in paragraph 59, wherein the target decision task module includes circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
61. Some embodiments include a system such as in paragraph 58, wherein the significant sensors identification module includes circuitry for determining significant sensors associated with a sensor network.
62. Some embodiments include a system such as in paragraph 58, wherein the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors.
All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in any Application Data Sheet, are incorporated herein by reference, to the extent not inconsistent herewith.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

Claims

What is claimed is:
1. An object classification system, comprising:
a significant pixel identification module including circuitry for determining
significant pixels from one or more reference images, and generating optimal sparse pixel set information; and
an object classification module including circuitry for
comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information; and
classifying at least one object in the first image based on the comparison.
2. The object classification system of claim 1, wherein the significant pixel identification module includes circuitry for generating near-optimal sparse pixel set information.
3. The object classification system of claim 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on one or more learning protocols.
4. The object classification system of claim 1, wherein the significant pixel
identification module includes circuitry for generating the optimal sparse pixel set information based on a modal decomposition protocol.
5. The object classification system of claim 1, wherein the significant pixel
identification module includes circuitry for generating the optimal sparse pixel set information based on a convex optimization protocol.
6. The object classification system of claim 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on an ℓ1-minimization protocol.
7. The object classification system of claim 1, wherein the significant pixel
identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a sparse random matrices protocol.
8. The object classification system of claim 1, wherein the significant pixel identification module includes circuitry for generating the optimal sparse pixel set information based on a subsampling using a dense random matrices protocol.

9. The object classification system of claim 1, wherein the significant pixel identification module includes circuitry for determining one or more discrimination vectors associated with the significant pixels based on a discrimination analysis protocol.

10. The object classification system of claim 9, wherein the object classification module includes circuitry for classifying the at least one object in the first image based on comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and the one or more discrimination vectors.

11. The object classification system of claim 1, wherein the object classification module includes circuitry for determining reference data clusters associated with the significant pixels in a sparse sensor space.

12. The object classification system of claim 1, wherein the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information and generating classification information associated with at least one object in the first image based on a learning protocol.

13. The object classification system of claim 1, wherein the one or more reference images includes one or more training images.

14. An object classification system, comprising:
circuitry for determining an optimal sparse pixel set from one or more reference images; and
circuitry for classifying an object in at least one image based on a comparison of one or more parameters associated with the optimal sparse pixel set to one or more pixels associated with the at least one image.
15. The object classification system of claim 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more discrimination vectors from the one or more reference images.
16. The object classification system of claim 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more sparse pixel locations.
17. The object classification system of claim 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for generating a feature space transformation from the one or more reference images.
18. The object classification system of claim 14, wherein the circuitry for determining the optimal sparse pixel set includes circuitry for determining one or more weighting factors associated with the optimal sparse pixel set.
19. The object classification system of claim 14, wherein the circuitry for classifying the object in at least one image includes circuitry for generating one or more classification categories associated with the object.
20. A method for classifying images, comprising:
determining optimal sparse pixel set information from one or more reference images;
comparing one or more parameters associated with the optimal sparse pixel set information to one or more pixels associated with an image; and classifying at least one object in the image based on the comparison.
21. The method for classifying images of claim 20, wherein determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one recovery protocol.
22. The method for classifying images of claim 20, wherein determining the optimal sparse pixel set information includes determining the optimal sparse pixel set information based on at least one ℓ1-minimization protocol.

23. The method for classifying images of claim 20, wherein determining the optimal sparse pixel set information includes determining optimal sparse pixel location information.
24. The method for classifying images of claim 20, wherein determining the optimal sparse pixel set information includes determining optimal sparse pixel weighting factor information.

25. The method for classifying images of claim 20, wherein determining the optimal sparse pixel set information includes determining enhanced sparsity classification information associated with the one or more reference images.

26. The method for classifying images of claim 20, wherein determining the optimal sparse pixel set information includes applying a coupling weight protocol.

27. The method for classifying images of claim 20, wherein determining the optimal sparse pixel set information includes generating discrimination vector information associated with the one or more reference images.

28. An object classification system, comprising:
a significant pixel determination from one or more reference images module; an optimal sparse pixel set information generation module; and
an object classification module.
29. The object classification system of claim 28, wherein the object classification module includes
one or more first image pixels comparison to optimal sparse pixel set information, the one or more first image pixels indicative of at least one object in the first image; and
a first image object classification module configured to classify at least one object in the first image based on a comparison of one or more pixels of the first image to optimal sparse pixel set information.
30. An object classification system, comprising:
a significant pixel identification module; and
an object classification module including circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information; and
classifying at least one object in the first image based on the comparison.
31. The object classification system of claim 30, wherein the significant pixel identification module includes circuitry for determining significant pixels from one or more reference images, and generating optimal sparse pixel set information.

32. The object classification system of claim 30, wherein the object classification module includes circuitry for comparing one or more pixels of a first image indicative of one or more objects imaged in the first image to the optimal sparse pixel set information.

33. The object classification system of claim 32, wherein the object classification module includes circuitry for classifying at least one object in the first image based on the comparison.

34. A system, comprising:
a significant sensors identification module including circuitry for determining significant sensors associated with a sensor network;
an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors; and
a target decision task module including circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
The system of claim 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network.
The system of claim 34, wherein the significant sensors identification module includes circuitry for determining nearly optimal sparse sensor location information associated with the sensor network.

The system of claim 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse sensor location information associated with the sensor network based on at least one ℓ1-minimization protocol.
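The ℓ1-minimization determination of sparse sensor locations recited above can be sketched as the standard basis-pursuit linear program, minimizing ‖s‖₁ subject to Θs = w. SciPy's `linprog` solver, the split s = u − v, and the toy Θ and w below are illustrative assumptions, not the claimed protocol:

```python
import numpy as np
from scipy.optimize import linprog

def l1_sparse_sensors(Theta, w, tol=1e-8):
    """Basis pursuit: min ||s||_1 subject to Theta @ s = w, via the
    linear-programming reformulation s = u - v with u, v >= 0."""
    m, n = Theta.shape
    c = np.ones(2 * n)                 # sum(u) + sum(v) equals ||s||_1 at optimum
    A_eq = np.hstack([Theta, -Theta])  # Theta @ (u - v) == w
    res = linprog(c, A_eq=A_eq, b_eq=w, bounds=[(0, None)] * (2 * n))
    s = res.x[:n] - res.x[n:]
    return s, np.flatnonzero(np.abs(s) > tol)  # candidate sensor locations

# underdetermined toy problem: 3 measurements, 8 candidate sensor locations
rng = np.random.default_rng(1)
Theta = rng.standard_normal((3, 8))
s_true = np.zeros(8)
s_true[[2, 5]] = [1.0, -2.0]
w = Theta @ s_true

s, locations = l1_sparse_sensors(Theta, w)
print(locations)  # a small set of candidate sensor indices
```

Because the ℓ1 objective promotes sparsity, the recovered s concentrates its weight on a few components; their indices serve as the sparse sensor locations.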
The system of claim 34, wherein the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network.
The system of claim 34, wherein the significant sensors identification module includes circuitry for determining significant motes associated with a sensor network, and wherein the optimal sparse sensor module includes circuitry for generating optimal sparse mote set information from the significant motes responsive to one or more target decision task inputs.
The system of claim 34, wherein the significant sensors identification module includes circuitry for determining optimal sparse mote location information associated with a mote network.
The system of claim 34, wherein the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
A system for classifying grid behavior, comprising:
circuitry for determining significant sensors from a sensor network;
circuitry for generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs; and circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
The system for classifying grid behavior of claim 42, wherein the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more discrimination vectors associated with the sensor network.

The system for classifying grid behavior of claim 42, wherein the circuitry for generating the optimal sparse sensor set information includes circuitry for generating one or more sparse sensor locations associated with the sensor network.
The system for classifying grid behavior of claim 42, wherein the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification categories associated with the target decision task.
The system for classifying grid behavior of claim 42, wherein the circuitry for classifying information associated with the target decision task includes circuitry for generating one or more classification parameters associated with the target decision task.
A method, comprising:
determining significant sensors from a sensor network;
generating optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs; and
classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
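The three steps of the method above — determining significant sensors, generating an optimal sparse sensor set, and classifying against it — can be sketched end to end. The between-class mean-separation ranking and nearest-centroid rule below are illustrative stand-ins for the claimed determination and classification, and all names and toy data are assumptions:

```python
import numpy as np

def determine_significant(X, y, k=3):
    """Rank sensors by between-class mean separation and keep the top k
    (an illustrative significance criterion)."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    return np.argsort(np.abs(mu0 - mu1))[::-1][:k]

def classify(sample, sparse_idx, mu0, mu1):
    """Assign the class whose centroid is nearer on the sparse sensor set."""
    d0 = np.linalg.norm(sample[sparse_idx] - mu0[sparse_idx])
    d1 = np.linalg.norm(sample[sparse_idx] - mu1[sparse_idx])
    return int(d1 < d0)

# toy sensor network: 20 sensors, two states differing at sensors 4 and 9
rng = np.random.default_rng(3)
X0 = rng.standard_normal((50, 20)); X0[:, [4, 9]] += 2.0
X1 = rng.standard_normal((50, 20)); X1[:, [4, 9]] -= 2.0
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

sparse_set = determine_significant(X, y)       # toy optimal sparse sensor set
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
sample = np.zeros(20)
sample[[4, 9]] = 2.0
print(classify(sample, sparse_set, mu0, mu1))  # prints 0 (the X0-like state)
```

The decision task here reads only the three ranked sensors, yet still separates the two network states because those sensors carry the discriminating signal.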
The method of claim 47, wherein determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity protocol.

The method of claim 47, wherein determining significant sensors from the sensor network includes determining one or more spatial sensor locations optimally informative to a categorical discrimination based on an enhanced sparsity for classification protocol.
The method of claim 47, wherein determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space.

The method of claim 47, wherein determining significant sensors from the sensor network includes determining sparse representation information of discriminant vectors in a feature space based on an ℓ1-minimization protocol.
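A sparse representation of a discriminant vector in a feature space can be sketched as follows: project training data onto a PCA feature basis, compute an LDA discriminant direction in that basis, then seek a sparse sensor-space vector whose projection matches it. The Lasso is used here as a convex surrogate for the ℓ1-constrained problem, and the bases, solver, and toy data are all illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
# two classes of 64-sensor snapshots, separated at sensor 10
X0 = rng.standard_normal((40, 64)); X0[:, 10] += 3.0
X1 = rng.standard_normal((40, 64)); X1[:, 10] -= 3.0
X = np.vstack([X0, X1])
y = np.array([0] * 40 + [1] * 40)

Psi = PCA(n_components=8).fit(X).components_  # feature basis, shape (8, 64)
# discriminant vector computed in the 8-dimensional feature space
w = LinearDiscriminantAnalysis().fit(X @ Psi.T, y).coef_.ravel()

# sparse s with Psi @ s ~= w: Lasso surrogate for min ||s||_1 s.t. Psi @ s = w
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=50000).fit(Psi, w)
significant = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)
print(significant)  # candidate significant sensor locations
```

The nonzero entries of the sparse representation mark sensor locations that, taken alone, approximately reproduce the discriminant direction in the feature space.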
The method of claim 47, wherein determining significant sensors from the sensor network includes determining nearly optimal sparse sensor locations information.
The method of claim 47, wherein classifying the information associated with the target decision task includes generating categorical decision information about a state of the sensor network.
The method of claim 47, wherein determining the significant sensors from the sensor network includes determining significant motes from a sensor network; and wherein generating the optimal sparse sensor set information includes generating optimal sparse mote set information from the significant motes.
A system, comprising:
a significant sensors determination from a sensor network module;
an optimal sparse sensor set information generation module; and
a target decision task classification module.
The system of claim 55, wherein the optimal sparse sensor set information generation module is configured to generate optimal sparse sensor set information from the significant sensors responsive to one or more target decision task inputs.
The system of claim 55, wherein the target decision task classification module is configured to classify information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
A system, comprising:
a significant sensors identification module; and
an optimal sparse sensor module including circuitry for generating optimal sparse sensor set information from the significant sensors.
The system of claim 58, further comprising: a target decision task module.
The system of claim 59, wherein the target decision task module includes circuitry for classifying information associated with the target decision task based on a comparison of the one or more target decision task inputs and one or more parameters associated with the optimal sparse sensor set information.
The system of claim 58, wherein the significant sensors identification module includes circuitry for determining significant sensors associated with a sensor network.
The system of claim 58, wherein the optimal sparse sensor module includes circuitry for generating optimal sparse sensor set information from the significant sensors.
PCT/US2014/057370 2013-09-26 2014-09-25 Systems, devices, and methods for classification and sensor identification using enhanced sparsity WO2015048232A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361882897P 2013-09-26 2013-09-26
US61/882,897 2013-09-26

Publications (1)

Publication Number Publication Date
WO2015048232A1 true WO2015048232A1 (en) 2015-04-02

Family

ID=52744444

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/057370 WO2015048232A1 (en) 2013-09-26 2014-09-25 Systems, devices, and methods for classification and sensor identification using enhanced sparsity

Country Status (2)

Country Link
TW (1) TW201528156A (en)
WO (1) WO2015048232A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI610267B (en) * 2016-08-03 2018-01-01 國立臺灣大學 Compressive sensing system based on personalized basis and method thereof

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002183206A (en) * 2000-12-15 2002-06-28 Mitsubishi Electric Corp Method and device for retrieving similar object
US20100189354A1 (en) * 2009-01-28 2010-07-29 Xerox Corporation Modeling images as sets of weighted features
US20100215254A1 (en) * 2009-02-25 2010-08-26 Toyota Motor Engineering & Manufacturing North America Self-Learning Object Detection and Classification Systems and Methods
US20110286628A1 (en) * 2010-05-14 2011-11-24 Goncalves Luis F Systems and methods for object recognition using a large database
WO2013029674A1 (en) * 2011-08-31 2013-03-07 Metaio Gmbh Method of matching image features with reference features


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899883A (en) * 2015-05-29 2015-09-09 北京航空航天大学 Indoor object cube detection method for depth image scene
CN106373162A (en) * 2015-07-22 2017-02-01 南京大学 Salient object detection method based on saliency fusion and propagation
CN105631896A (en) * 2015-12-18 2016-06-01 武汉大学 Hybrid classifier decision-based compressed sensing tracking method
US9928408B2 (en) 2016-06-17 2018-03-27 International Business Machines Corporation Signal processing
CN109543667A (en) * 2018-11-14 2019-03-29 北京工业大学 A kind of text recognition method based on attention mechanism
CN109543667B (en) * 2018-11-14 2023-05-23 北京工业大学 Text recognition method based on attention mechanism
CN109635872B (en) * 2018-12-17 2020-08-04 上海观安信息技术股份有限公司 Identity recognition method, electronic device and computer program product
CN109635872A (en) * 2018-12-17 2019-04-16 上海观安信息技术股份有限公司 Personal identification method, electronic equipment and computer program product
CN111553888A (en) * 2020-04-15 2020-08-18 成都飞机工业(集团)有限责任公司 Titanium alloy forging microstructure image identification method based on machine learning
CN111881941A (en) * 2020-07-02 2020-11-03 中国空间技术研究院 Intelligent image classification method and system based on compressed sensing domain
CN111881941B (en) * 2020-07-02 2024-03-29 中国空间技术研究院 Image intelligent classification method and system based on compressed sensing domain
CN112257739A (en) * 2020-08-29 2021-01-22 北京邮电大学 Sparse representation classification method based on disturbance compressed sensing
CN112257739B (en) * 2020-08-29 2023-12-22 北京邮电大学 Sparse representation classification method based on disturbance compressed sensing
CN117393153A (en) * 2023-12-11 2024-01-12 中国人民解放军总医院 Shock real-time risk early warning and monitoring method and system based on medical internet of things time sequence data and deep learning algorithm
CN117393153B (en) * 2023-12-11 2024-03-08 中国人民解放军总医院 Shock real-time risk early warning and monitoring method and system based on medical internet of things time sequence data and deep learning algorithm

Also Published As

Publication number Publication date
TW201528156A (en) 2015-07-16

Similar Documents

Publication Publication Date Title
WO2015048232A1 (en) Systems, devices, and methods for classification and sensor identification using enhanced sparsity
Luo et al. Adaptive unsupervised feature selection with structure regularization
Brunton et al. Sparse sensor placement optimization for classification
Rida et al. A comprehensive overview of feature representation for biometric recognition
Lu et al. A survey of multilinear subspace learning for tensor data
Ma et al. Sparse representation for face recognition based on discriminative low-rank dictionary learning
Lai et al. Sparse alignment for robust tensor learning
Brunton et al. Optimal sensor placement and enhanced sparsity for classification
Ghojogh et al. Locally linear embedding and its variants: Tutorial and survey
Shao et al. Regularized max-min linear discriminant analysis
Wang et al. Hypergraph canonical correlation analysis for multi-label classification
Tian et al. Task dependent deep LDA pruning of neural networks
Guan et al. Sparse representation based discriminative canonical correlation analysis for face recognition
Benuwa et al. Kernel based locality–sensitive discriminative sparse representation for face recognition
Bouveyron et al. Probabilistic Fisher discriminant analysis: A robust and flexible alternative to Fisher discriminant analysis
Zhang et al. Efficient discriminative learning of parametric nearest neighbor classifiers
Hou et al. Feature fusion using multiple component analysis
Chen et al. Structural max-margin discriminant analysis for feature extraction
Zhang et al. A linear discriminant analysis method based on mutual information maximization
Yan et al. Facial Kinship Verification
Scott II Block-level discrete cosine transform coefficients for autonomic face recognition
Tian et al. Learning iterative quantization binary codes for face recognition
Shafiee et al. Cluster-based multi-task sparse representation for efficient face recognition
Dahmouni et al. Multi-classifiers face recognition system using lbpp face representation
Karimi Exploring new forms of random projections for prediction and dimensionality reduction in big-data regimes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14846858

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14846858

Country of ref document: EP

Kind code of ref document: A1