US20230297808A1 - Generating and identifying functional subnetworks within structural networks - Google Patents

Generating and identifying functional subnetworks within structural networks

Info

Publication number
US20230297808A1
Authority
US
United States
Prior art keywords
neural network
functional
input
functional activity
structural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US18/188,888
Inventor
Michael Wolfgang Reimann
Max Christian Nolte
Henry Markram
Kathryn Hess Bellwald
Ran LEVI
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecole Polytechnique Federale de Lausanne EPFL
Original Assignee
Ecole Polytechnique Federale de Lausanne EPFL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ecole Polytechnique Federale de Lausanne EPFL filed Critical Ecole Polytechnique Federale de Lausanne EPFL
Priority to US18/188,888 priority Critical patent/US20230297808A1/en
Publication of US20230297808A1 publication Critical patent/US20230297808A1/en
Assigned to ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL) reassignment ECOLE POLYTECHNIQUE FEDERALE DE LAUSANNE (EPFL) ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BELLWALD, KATHRYN HESS, LEVI, RAN, NOLTE, MAX CHRISTIAN, MARKRAM, HENRY, REIMANN, MICHAEL WOLFGANG
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks

Definitions

  • Process 100 also includes determining associated directed flag complexes at 110 .
  • Directed flag complexes are oriented simplicial complexes that encode the connectivity and the direction of the underlying directed graph.
  • each directed n-clique in the underlying graph corresponds to an oriented (n−1)-simplex in the flag complex, and the faces of a simplex correspond to the directed subcliques of its associated directed clique.
  • Associated directed flag complexes can be determined for structural links (e.g., the existence of directional links), for functional links (e.g., directed activity along links), or both.
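To make the clique-to-simplex correspondence concrete, the following is a minimal sketch (not the patent's implementation) that counts the oriented simplices of the directed flag complex of a binary adjacency matrix. It assumes a NumPy 0/1 matrix adj with adj[i, j] == 1 for a directed link i → j and a zero diagonal; the function name is illustrative.

```python
import numpy as np

def directed_simplex_counts(adj, max_dim=3):
    """Count oriented simplices of the directed flag complex.

    A directed (n+1)-clique v0, v1, ..., vn (an edge vi -> vj for
    every i < j) corresponds to an oriented n-simplex.
    """
    n = adj.shape[0]
    counts = {0: int(n)}  # every vertex is a 0-simplex
    frontier = [(i, j) for i in range(n) for j in range(n) if adj[i, j]]
    if frontier:
        counts[1] = len(frontier)  # directed edges are 1-simplices
    for dim in range(2, max_dim + 1):
        extended = []
        for simplex in frontier:
            # w extends the simplex iff every vertex already in it points to w
            targets = np.flatnonzero(np.all(adj[list(simplex), :] == 1, axis=0))
            extended.extend(simplex + (int(w),) for w in targets)
        if not extended:
            break
        counts[dim] = len(extended)
        frontier = extended
    return counts
```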
  • FIG. 2 is a schematic representation of different oriented simplices 205, 210, 215, 220.
  • Simplex 205 has dimension 0, simplex 210 has dimension 1, simplex 215 has dimension 2, and simplex 220 has dimension 3.
  • Simplices 205, 210, 215, 220 are oriented in that each has a fixed orientation (that is, there is a linear ordering of its nodes).
  • the orientation can embody the structure of the links, the function of the links, or both.
  • the nodes of the neural network device are represented as dots whereas the links between the nodes are represented as lines connecting these dots.
  • the lines include arrows that denote either the direction of structural link or the direction of activity along the link.
  • all links herein are illustrated as unidirectional links, although this is not necessarily the case.
  • the nodes and links in a neural network device can be treated as vertices and edges in topological methods.
  • the network or subnetwork of nodes and links can be treated as a graph or a subgraph in topological methods. For this reason, the terms are used interchangeably herein.
  • FIG. 3 is a schematic representation of an example directed graph 305 and its associated flag complex 310 .
  • a “directed graph” consists of a pair of finite sets (V, E) and a function τ: E → V × V.
  • the elements of the set V are the “vertices” of the graph, the elements of E are its “edges,” and the function τ associates with each edge an ordered pair of vertices.
  • the function τ is required to satisfy two conditions: if τ(e) = (v1, v2), then v1 ≠ v2, so that the graph contains no loops; and τ is injective, so that two distinct edges cannot connect the same ordered pair of vertices.
  • a vertex v ∈ V is said to be a “sink” if τ1(e) ≠ v for all e ∈ E, i.e., if no edge originates at v.
  • a vertex v ∈ V is said to be a “source” if τ2(e) ≠ v for all e ∈ E, i.e., if no edge terminates at v.
  • Two graphs G and G′ are “isomorphic” if there is a morphism of graphs (α, β): G → G′ such that both α and β are bijections, which can be called an “isomorphism of directed graphs.”
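These definitions translate directly into code. In the sketch below, the map τ is represented as a dictionary from edge identifiers to ordered vertex pairs; all names are illustrative assumptions.

```python
from typing import Dict, Tuple

Vertex = int
Pair = Tuple[Vertex, Vertex]  # tau(e) = (tau_1(e), tau_2(e))

def is_sink(v: Vertex, tau: Dict[int, Pair]) -> bool:
    """v is a sink if tau_1(e) != v for every edge e (no edge leaves v)."""
    return all(t1 != v for (t1, _t2) in tau.values())

def is_source(v: Vertex, tau: Dict[int, Pair]) -> bool:
    """v is a source if tau_2(e) != v for every edge e (no edge enters v)."""
    return all(t2 != v for (_t1, t2) in tau.values())
```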
  • An “abstract oriented simplicial complex” is a collection S of finite, ordered sets with the property that if σ ∈ S, then every subset of σ is also a member of S.
  • a “subcomplex” of an abstract oriented simplicial complex is a sub-collection S′ ⁇ S that is itself an abstract oriented simplicial complex.
  • abstract oriented simplicial complexes are referred to herein as “simplicial complexes.”
  • a simplicial complex is said to be “finite” if it has only finitely many simplices. If σ ∈ S, we define the “dimension” of σ, denoted dim(σ), to be one less than its cardinality, i.e., dim(σ) = |σ| − 1.
  • a simplicial complex gives rise to a topological space by means of the construction known as “geometric realization.”
  • geometric realization associates a point (a standard geometric 0-simplex) with each 0-simplex, a line segment (a standard geometric 1-simplex) with each 1-simplex, a filled-in triangle (a standard geometric 2-simplex) with each 2-simplex, and so on, all glued together along common faces.
  • the intersection of two simplices in S, neither of which is a face of the other, is a proper subset, and hence a face, of both of them.
  • an n-simplex is nothing but an (n+1)-clique, canonically realized as a geometric object.
  • An n-simplex is said to be “oriented” if there is a linear ordering on its vertices.
  • the corresponding (n+1)-clique is said to be a “directed (n+1)-clique.”
  • the collection S^(n) = S_n ∪ . . . ∪ S_0, which is called the “n-skeleton” of S, is a subcomplex of S (here S_k denotes the set of k-simplices of S).
  • if S is n-dimensional and k ≤ n, then the collection S_k ∪ . . . ∪ S_n is not a subcomplex of S because it is not closed under taking subsets. However, if one adds to that collection all the faces of all simplices in S_k ∪ . . . ∪ S_n, one obtains a subcomplex of S called the “k-coskeleton” of S, which we will denote by S_(k).
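One concrete reading of these definitions, as a sketch under the assumption that an oriented simplex is represented as a tuple of vertices: a complex is then a set of such tuples closed under taking (ordered) subsets.

```python
from itertools import combinations

def closure(generators):
    """Close a set of oriented simplices (vertex tuples) under subsets,
    yielding an abstract oriented simplicial complex."""
    S = set()
    for sigma in generators:
        for r in range(1, len(sigma) + 1):
            S.update(combinations(sigma, r))  # subtuples keep the ordering
    return S

def n_skeleton(S, n):
    """Simplices of dimension <= n, where dim(sigma) = len(sigma) - 1."""
    return {sigma for sigma in S if len(sigma) - 1 <= n}
```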
  • Directed graphs such as directed graph 305 give rise to abstract oriented simplicial complexes.
  • Let (V, E, τ) be a directed graph.
  • the (j, k)-coefficient of the structural adjacency matrix is a binary “1” if and only if there is a directed connection in the neural network from the node/vertex with GID j to the node/vertex with GID k.
  • the adjacency matrix can thus be referred to as the “structural matrix” of the neural network and the directed flag complex can be referred to as a “neocortical microcircuit complex” or “N-complex.”
  • Process 100 also includes determining a parameter of a neural network device based on the relevant matrix and/or the N-complex using topological methods at 115 . There are several different parameters that can be determined.
  • the simplices in the neural network in each dimension can simply be counted. Such simplex counts can indicate the degree of “structuring” or “ordering” of the nodes and connections (or the activity) within the neural network device.
  • the Euler characteristic of all N-complexes can be computed.
  • the Betti numbers of a simplicial complex can be computed. In particular, the n-th Betti number, βn, counts the number of chains of simplices intersecting along faces to create an “n-dimensional hole” in the complex, which requires a certain degree of organization among the simplices.
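For example, once simplices have been counted per dimension, the Euler characteristic is the alternating sum of the counts. A minimal sketch, reusing the dimension-to-count dictionary produced by the hypothetical directed_simplex_counts helper sketched earlier:

```python
def euler_characteristic(simplex_counts):
    """EC = sum over n of (-1)^n * (number of n-simplices)."""
    return sum((-1) ** n * c for n, c in simplex_counts.items())

# For counts {0: 4, 1: 5, 2: 1}: EC = 4 - 5 + 1 = 0.
```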
  • FIG. 4 is a flowchart of a process 400 for distinguishing—using topological methods—functional responses to different input patterns fed into a neural network.
  • Process 400 includes receiving, at the neural network, one or more input patterns at 405 .
  • the one or more input patterns can correspond to a known input.
  • process 400 can be part of the training of a neural network, the design of a process for reading the processed output of a neural network, the testing of a neural network, and/or the analysis of an operational neural network.
  • known input patterns can be used to confirm that the functional response of the neural network is appropriate.
  • the one or more input patterns can correspond to one or more unknown inputs.
  • process 400 can be part of the operation of a trained neural network and the functional response of the neural network can represent the processed output of the neural network.
  • Process 400 also includes dividing the activity in the neural network into time bins at 410.
  • a time bin is a duration of time.
  • the total functional activity in a neural network responsive to an input pattern (e.g., signal transmission along edges and/or nodes) can be divided among such time bins.
  • the duration of the time bins can be chosen based on the extent to which activity in each bin is distinguishable when different input patterns are received.
  • Such an ex post analysis can be used, e.g., when designing a process for reading the processed output of a neural network using input patterns that correspond to known inputs.
  • the process for reading the output of a neural network can be adapted to the observed activity responsive to different input patterns. This can be done, e.g., to ensure that the process for reading the neural network appropriately captures the processing results when input patterns that correspond to unknown inputs are received.
  • the duration of the time bins can be constant.
  • the duration of the time bins can be 5 ms.
  • the time bins can commence after the input pattern is received at the neural network. Such a delay can allow the input pattern to propagate to relevant portions of the neural network device before meaningful processing results are expected. Subsequent time bins can be defined with respect to an end of a preceding time bin or with respect to the time after the input pattern is received.
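A sketch of this binning, assuming spike times in milliseconds and the 5 ms bins mentioned above; the onset handling and parameter names are illustrative:

```python
import numpy as np

def bin_spike_times(spike_times_ms, onset_ms, bin_ms=5.0, n_bins=50):
    """Map each spike to a time-bin index; bins start once the input
    pattern has been received, so earlier spikes are discarded."""
    t = np.asarray(spike_times_ms, dtype=float) - onset_ms
    keep = (t >= 0.0) & (t < n_bins * bin_ms)
    return np.floor(t[keep] / bin_ms).astype(int)
```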
  • Process 400 also includes recording a functional connectivity matrix for each time bin at 415 .
  • a functional connectivity matrix is a measure of the response of each edge or link in a neural network device to a given input pattern during a time bin.
  • a functional connectivity matrix is a binary matrix where active and inactive edges/links are denoted (e.g., with a binary “1” and “0”, respectively).
  • a functional connectivity matrix is thus akin to a structural adjacency matrix except that the functional connectivity matrix captures activity.
  • an edge or connection can be denoted as “active” when a signal is transmitted along the edge or connection from a transmitting node to a receiving node during the relevant time bin and when the receiving node or vertex responds to the transmitted signal by subsequently transmitting a second signal.
  • the responsive second signal need not be transmitted within the same time bin as the received signal. Rather, the responsive second signal can be transmitted within some duration after the received signal, e.g., in a subsequent time bin.
  • responsive second signals can be identified by identifying signals that are transmitted by the receiving node within a fixed duration after the receiving node receives the first signal. For example, the duration can be about 1.5 times as long as the duration of a time bin, or about 7.5 ms in neural network devices that are modeled after biological neural networks.
  • the responsive second signal need not be responsive solely to single signal received by the node. Rather, multiple signals can be received by the receiving node (e.g., along multiple edges or connections).
  • the receiving node or vertex can “respond” to all or a portion of the received signals in accordance with the particularities of the information processing performed by that node.
  • the receiving node or vertex can, e.g., weight and sum multiple input signals, pass the sum through one or more non-linear activation functions, and output one or more output signals, e.g., as an accumulator, e.g., in accordance with an integrate-and-fire model.
  • the transmitted signals are spikes, and each (j, k)-coefficient in a functional connectivity matrix is denoted as “active” if and only if three conditions are satisfied, where s_i^j denotes the time of the i-th spike of node j: a structural connection runs from node j to node k; node j spikes during the relevant time bin; and node k subsequently spikes within the response duration after that spike.
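The three numbered conditions themselves are not reproduced in this text, so the sketch below encodes the prose description above, as an assumption: the (j, k)-entry is marked active if a structural link j → k exists, node j spikes during the bin, and node k spikes within the response window (about 1.5 bins, e.g., 7.5 ms) afterwards. All names are illustrative.

```python
import numpy as np

def functional_connectivity(adj, spikes, bin_start, bin_ms=5.0, window_ms=7.5):
    """One time bin's binary functional connectivity matrix.

    adj is the structural adjacency matrix; spikes[j] is the sorted
    list of spike times (ms) of node j.
    """
    active = np.zeros_like(adj)
    for j in range(adj.shape[0]):
        s_j = [s for s in spikes[j] if bin_start <= s < bin_start + bin_ms]
        if not s_j:
            continue                      # node j did not transmit this bin
        for k in np.flatnonzero(adj[j]):  # structural targets of j
            if any(s < t <= s + window_ms for s in s_j for t in spikes[k]):
                active[j, k] = 1          # k responded within the window
    return active
```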
  • Process 400 also includes characterizing one or more parameters of the activity recorded in the functional connectivity matrix using topological methods at 420 .
  • the activity recorded in the functional connectivity matrix can be characterized using one or more of the approaches used in process 100 (FIG. 1), substituting the functional connectivity matrix for the structural adjacency matrix.
  • a characterization of a topological parameter of the neural network device can be determined for different functional connectivity matrices from different time bins. In effect, the structuring or ordering of the activity in the neural network can be determined at different times.
  • Process 400 also includes distinguishing the functional response of the neural network device to a received input pattern from other functional responses of the neural network device to other received input patterns based on the characterized topological parameters at 425 .
  • input patterns that correspond to known inputs can be used in a variety of contexts. For example, during testing of a neural network, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can indicate whether the neural network device is functioning properly. As another example, during training of a neural network, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can provide an indication that training is complete. As another example, in analyzing an operational neural network, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can provide an indication of the processing performed by the neural network device.
  • when the input patterns correspond to unknown inputs, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can be used to read the output of the neural network device.
  • Distinguishing functional responses to different input patterns fed into a neural network using topological methods can also be performed in a variety of other contexts. For example, changes in functional responses over time can be used to identify the results of training and/or the structures associated with training.

Abstract

In one aspect, a method includes generating a functional subgraph of a network from a structural graph of the network. The structural graph comprises a set of vertices and structural connections between the vertices. Generating the functional subgraph includes identifying a directed functional edge of the functional subgraph based on the presence of a structural connection and directional communication of information across the same structural connection.

Description

    RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 15/864,146, now allowed, filed 8 Jan. 2018, which claims the benefit of U.S. Provisional Application 62/443,071 filed 6 Jan. 2017, the entire contents of which are incorporated herein by reference.
  • TECHNICAL FIELD
  • This invention relates to neural networks, and more particularly to methods and tools for generating and identifying functional subnetworks within structural networks.
  • BACKGROUND
  • In simple neural networks that have a limited number of vertices and structural connections between the vertices, the function of the neural network can be understood by a determined analysis of the topology and the weights in the neural networks.
  • In neural networks of increased complexity, such analyses become untenable. Even if the complete network topology is known, human oversight is lost and the function of subnetworks is inscrutable.
  • A neural network device is a device that mimics the information encoding and other processing capabilities of networks of biological neurons using a system of interconnected nodes. A neural network device can be implemented in hardware, in software, or in combinations thereof.
  • A neural network device includes a plurality of nodes that are interconnected by a plurality of structural links. Nodes are discrete information processing constructs that are analogous to neurons in biological networks. Nodes generally process one or more input signals received over one or more of the links to produce one or more output signals that are output over one or more of the links. For example, in some implementations, nodes can be artificial neurons that weight and sum multiple input signals, pass the sum through one or more non-linear activation functions, and output one or more output signals. In some implementations, nodes can operate as accumulators, e.g., in accordance with an integrate-and-fire model.
  • Structural links are connections that are capable of transmitting signals between nodes. In some implementations, structural links are bidirectional links that convey a signal from a first node to a second node in the same manner as a signal is conveyed from the second to the first. However, this is not necessarily the case. For example, in a neural network, some or all of the structural links can be unidirectional links that convey a signal from a first node to a second node without conveying signals from the second to the first. As another example, in some implementations, structural links can have diverse properties other than or in addition to directionality. For example, in some implementations, different structural links can carry signals of different magnitudes, resulting in different strengths of interconnection between the respective nodes. As another example, different structural links can carry different types of signal (e.g., inhibitory and/or excitatory signals). Indeed, in some implementations, structural links can be modelled on the links between soma in biological systems and reflect at least a portion of the enormous morphological, chemical, and other diversity of such links.
  • SUMMARY
  • Methods and tools for generating and characterizing functional subnetworks of a neural network are described. The tools include a definition of the functional edges that, at a particular period of time, constitute a functional subnetwork of a neural network. The definition is directional in that it specifies a direction of information propagation between vertices. The subnetworks are temporal constructs in that different functional subnetworks exist at different times during the operation of the neural network. In effect, the definition is a composite definition in that it defines both the structural characteristics and the functional characteristics of subnetworks within a neural network at different periods in time.
  • In a first aspect, a method includes providing a plurality of structural connections between vertices in a neural network, assigning a direction of information flow to respective ones of the structural connections, generating a first functional subgraph of the neural network, and generating a second functional subgraph of the neural network. The first functional subgraph includes a first proper subset of the structural connections, wherein vertices connected by the first proper subset of the structural connections are active during a first period of time. The second functional subgraph includes a second proper subset of the structural connections, wherein vertices connected by the second proper subset are active during a second period of time.
  • In a second aspect, a method includes generating a functional subgraph of a network from a structural graph of the network, wherein the structural graph comprises a set of vertices and structural connections between the vertices. Generating the functional subgraph comprises identifying a directed functional edge of the functional subgraph based on the presence of a structural connection and directional communication of information across the same structural connection.
  • In a third aspect, a method is suitable for characterizing a neural network that comprises a set of vertices and structural connections between the vertices. The method includes defining a functional subgraph in the neural network using a definition to identify a plurality of functional edges and analyzing the functional subgraph using one or more topological analyses. The definition can include a definition of a class of structural connection between two vertices in the set of vertices, and a definition of a directional response to a first input elicitable from the two vertices that satisfy the definition of the structural connection class.
  • In a fourth aspect, a method is suitable for distinguishing between inputs to a neural network that comprises a set of vertices and structural connections between the vertices. The method includes inputting a first input to the neural network; characterizing the response of the neural network to the first input using the method of the third aspect; inputting a second input to the neural network; characterizing the response of the neural network to the second input using the method of the third aspect; and distinguishing between the first input and the second input based on results of the respective topological analyses.
  • In a fifth aspect, for a network that comprises a set of vertices and structural connections between the vertices, a method includes demarking a proper subset of the vertices and the functional edges between the vertices in the subset as a directed subgraph, wherein the functional edges are defined based on a presence of structural connections between the vertices in the subset and directional information communication across the structural connections.
  • In a sixth aspect, a method of manufacturing a neural network includes any one of the first aspect, the second aspect, the third aspect, the fourth aspect, or the fifth aspect.
  • In a seventh aspect, a method of analyzing performance of a neural network includes any one of the first aspect, the second aspect, the third aspect, the fourth aspect, or the fifth aspect.
  • In an eighth aspect, the second aspect can be used to generate the first functional subgraph and the second functional subgraph of the first aspect.
  • Each one of the first aspect, the second aspect, the third aspect, the fourth aspect, or the fifth aspect can include one or more of the following features. Only some of the vertices connected by the second proper subset can be included in the first proper subset. The activity during the first period of time and the activity during the second period of time are both responsive to a same input. A plurality of structural connections can be provided by identifying the plurality of structural connections in a pre-existing neural network.
  • The direction of information flow can be assigned by determining the direction of information in the pre-existing neural network and assigning the direction in accordance with the determined direction. First and the second functional subgraphs of the neural network can be generated by inputting an input into the pre-existing neural network and identifying the first and the second functional subgraph based on the response of the pre-existing neural network to the input.
  • A first functional subgraph of the neural network can be generated by weighting the structural connections of the first proper subset to achieve a desired activity during the first period of time. A first functional subgraph of the neural network can be generated by training the first proper subset of the structural connections. A plurality of structural connections can be provided by adding structural connections between at least two functional subgraphs of the neural network.
  • A structural graph can include undirected structural connections between the vertices. A directed functional edge can be identified by requiring a direct structural connection between two vertices and a directional response to a first input elicitable from the two vertices. A directed functional edge can be identified by identifying a directional communication of information by requiring a response to a first input by a first vertex and a subsequent response by a second vertex. A response to the first input can be required to have occurred during a first time period defined with respect to the first input. The first time period can, e.g., be defined as occurring after an initial propagation of the first input through the functional subgraph. The subsequent response can be required to have occurred during a second time period defined with respect to either the first time period or the response to the first input. The second time period can begin immediately after the first time period ends. The second time period can overlap with the first time period.
  • The directed functional edge of the functional subgraph can also be identified based on the presence of a second structural connection and directional communication of information across the second structural connection. The directed functional edge can be identified by identifying a plurality of directed functional edges and classifying the plurality of directed functional edges as the functional subgraph.
  • A definition of the class of a structural connection can require a direct structural connection between two of the vertices, e.g., an undirected structural connection. A definition of a directional response can require a response to the first input by a first vertex of the two vertices, and a subsequent response by a second vertex of the two vertices. The response to the first input can be required to have occurred during a first time period defined with respect to the first input. For example, the first time period can be defined as occurring after an initial propagation of the first input through the functional subgraph. The subsequent response can be required to have occurred during a second time period defined with respect to either the first time period or the response to the first input. For example, the second time period can begin immediately after the first time period ends, or the second time period can overlap with the first time period.
  • A definition of a functional edge can include a definition of a second structural connection between two vertices in the set of vertices. A definition of a functional edge can include a definition of a second directional response to the first input elicitable from the two vertices that satisfy the definition of the second structural connection.
  • A functional subgraph can be analyzed by determining a simplex count based on the defined functional edges. A simplex count can be determined by determining at least one of a 1-dimensional simplex count and a 2-dimensional simplex count. A functional subgraph can be analyzed by determining Betti numbers for a network that consists of the defined functional edges. Betti numbers can be determined by determining at least one of the first Betti number and the second Betti number. A functional subgraph can be analyzed by determining an Euler characteristic for a network that consists of the defined functional edges.
  • A first input and second input can be distinguished based on the respective topological analyses by classifying results of the respective topological analyses, e.g., using a probabilistic classifier. The probabilistic classifier can be a Bayesian classifier.
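As one concrete, assumed reading of this step: topological metrics from repeated presentations can be stacked into feature vectors and given to an off-the-shelf Bayesian classifier. The feature values below are illustrative placeholders, not measured data.

```python
from sklearn.naive_bayes import GaussianNB

# Rows: [1-simplex count, 2-simplex count, Euler characteristic] per trial.
X = [[812, 94, 719], [805, 90, 716], [411, 23, 389], [402, 25, 378]]
y = ["input_A", "input_A", "input_B", "input_B"]

clf = GaussianNB().fit(X, y)
print(clf.predict([[810, 92, 715]]))  # expected: ['input_A']
```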
  • An input to the network can be classified based on a topological analysis of the directed subgraph. For example, a classifier, e.g., a probabilistic classifier, can be applied to one or more topological metrics of the directed subgraph. The directional information communication across the structural connections can include information communication from a first vertex of the set to a second vertex of the set.
  • The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the invention will be apparent from the description and drawings, and from the claims.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flowchart of a process for characterizing parameters of a neural network device using topological methods.
  • FIG. 2 is a schematic representation of different oriented simplices.
  • FIG. 3 is a schematic representation of an example directed graph and its associated flag complex.
  • FIG. 4 is a flowchart of a process for distinguishing functional responses to different input patterns fed into a neural network.
  • DETAILED DESCRIPTION
  • Network topology is the arrangement of the various elements of a network, including neural network devices. In neural network devices, the elements include nodes and links between the nodes. In some neural network devices, the nodes can be modeled in one or more ways after biological neurons. Further, the links between the nodes can be modeled in one or more ways after the connections between biological neurons, including synapses, dendrites, and axons.
  • Even though the nodes and links of a neural network device can have a variety of different characteristics, their network topology can be characterized using various topological methods. Topological characterizations of networks tend to focus on the connections between elements rather than the exact shape of the objects involved.
  • The topological methods described herein can be used to characterize structural connections between nodes in a neural network, functional connections between nodes in a neural network, or both. In a neural network, a structural connection between nodes may provide a link between two nodes over which a signal can be transmitted. In contrast, a functional connection reflects the actual transmission of information from one node to the other over the structural connection, i.e., is part of the “functioning” of the neural network. For example, the signal transmission can be part of active information processing and/or information storage by the neural network. In a particular example, a functional connection between two nodes may arise in response to an input and may indicate active participation of those nodes in processing the information content of the input.
  • Structural characterizations of neural network devices can be used, e.g., in the construction and/or reconstruction of neural networks. Reconstruction of a neural network can include, e.g., copying or mimicking at least some of the structure of a first neural network in a second neural network. For example, a simpler second neural network can recreate a portion of the structure of a more complex first neural network. In some implementations, the first neural network can be a biological neural network and the second neural network can be an artificial neural network, although this is not necessarily the case.
  • In some implementations, the neural network devices need not be mere reconstructions. Rather, neural network devices can also be constructed or, in effect, “manufactured” ab initio. In some implementations, the characterizations provided by topological methods can provide general characteristics of desirable structure in neural network devices. For example, the topological characterizations may define a desired level of “structuring” or “ordering” of the neural network device.
  • By way of example, topological characterizations can be used to construct and reconstruct neural networks in which the distribution of directed cliques (directed all-to-all connected subsets) of neurons by size differs significantly from both that in Erdos-Renyi random graphs with the same number of vertices and the same average connection probability and that in more sophisticated random graphs, constructed either by taking into account distance-dependent probabilities varying within and between cortical layers or morphological types of neurons, or according to Peters' Rule. In particular, the neural network devices can include highly prominent motifs of directed cliques of up to eight neurons. For example, in neural networks with approximately 3×10e4 vertices and 8×10e6 edges, the neural networks can incorporate approximately 10e8 3-cliques and 4-cliques, approximately 10e7 5-cliques, approximately 10e5 6-cliques, and approximately 10e3 7-cliques.
  • As another example, topological methods can be used to construct or reconstruct neural networks in which the Euler characteristic (EC) of the neural networks can be a value on the order of 10e7, indicating a preponderance of directed cliques consisting of an odd number of neurons.
  • As another example, topological methods can be used to construct or reconstruct neural networks in which the homological dimension of the neural networks is 5, which compares to a homological dimension of at most 4 for random graphs and hence indicates that the neural networks possess a higher degree of organizational complexity than random graphs. The homological dimension of a neural network is the maximum n such that βn is not equal to zero, wherein β0, β1, β2, . . . are Betti numbers that provide a measure of the higher-order organizational complexity of the network by detecting “cyclic” chains of intersecting directed cliques.
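Given a list of Betti numbers, the homological dimension as defined here reduces to one line; a sketch with an illustrative name:

```python
def homological_dimension(betti):
    """Largest n with betti[n] != 0, for betti = [beta_0, beta_1, ...]."""
    return max((n for n, b in enumerate(betti) if b != 0), default=0)

# homological_dimension([1, 0, 3, 7, 0, 2]) == 5
```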
  • The topological methods described herein can also use defined functional patterns of communication across multiple links to characterize the processing activity within neural network devices. The presence of the functional patterns indicates “structuring” or “ordering” of the information flow along the nodes and links of the neural network device under particular circumstances. Such functional characterizations thus characterize neural network devices in ways that are lost when mere structural information is considered.
  • In some implementations, characterizations of the patterns of communication across multiple links in a neural network device can be used in the construction and/or reconstruction of neural networks. Reconstruction of a neural network can include, e.g., copying or mimicking at least some of the function of a first neural network in a second neural network. For example, a simpler second neural network can recreate a portion of the functioning of a more complex first neural network. In some implementations, the first neural network can be a biological neural network and the second neural network can be an artificial neural network, although this is not necessarily the case.
  • In some implementations, neural network devices that are characterized based on defined patterns of communication across multiple links need not be mere reconstructions. Rather, the function of neural network devices can be constructed or, in effect, “manufactured” ab initio. In some implementations, the characterizations provided by topological methods can provide general characteristics of the functional behavior of a desirable neural network device. This can be beneficial to, e.g., reduce training time or even provide a partially- or fully-functional neural network device “out of the box.” In other words, the topological characterizations may define a desired level of “structuring” or “ordering” of the information flow within a functioning neural network device. In some implementations, functional sub-networks can be assembled like components, e.g., by adding structural links between different functional sub-networks to achieve desired processing results.
  • For example, in some implementations, the topological characterizations may define particular functional responses to particular input patterns. For example, a given stimulus can be applied repeatedly to a neural network device and the responsive activity within the neural network device can be measured. The activity can be binned into timesteps and characterized using topological techniques. For example, the topological techniques can be akin to those used to characterize the structure of a neural network device. In some implementations, a transmission-response graph can be generated for each timestep to represent the activity in the neural network device in a manner suited for analysis using topological methods. In particular, in a transmission-response graph, the vertices are the nodes of the neural network device and the edges are the links between the nodes that are active during the timestep. The activity can be, e.g., a signal transmission along a link that leads to firing of the connected node. In some cases, the duration of the timesteps and the precise rule for formation of the transmission-response graph for each timestep can be biologically motivated.
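  • Purely by way of illustration, transmission-response graphs might be assembled from recorded spike times as in the following sketch; the data layout (spike times per node, a structural edge list) and the 5 ms bins and 7.5 ms response window echo the example durations given elsewhere in this description and are assumptions, not a prescribed implementation:

      def transmission_response_graphs(spikes, struct_edges, t_end_ms,
                                       bin_ms=5.0, window_ms=7.5):
          """Return one set of active edges per time bin.

          spikes: dict mapping node id -> sorted list of spike times (ms)
          struct_edges: iterable of (j, k) pairs, structural links j -> k
          An edge (j, k) is active in bin n if j spikes during bin n and
          k spikes within window_ms after that spike of j.
          """
          n_bins = int(t_end_ms // bin_ms)
          graphs = [set() for _ in range(n_bins)]
          for j, k in struct_edges:
              targets = spikes.get(k, [])
              for t_j in spikes.get(j, []):
                  b = int(t_j // bin_ms)
                  if b >= n_bins:
                      continue
                  if any(t_j < t_k <= t_j + window_ms for t_k in targets):
                      graphs[b].add((j, k))
          return graphs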
  • FIG. 1 is a flowchart of a process 100 for characterizing parameters of a neural network device using topological methods. In the flowchart, the topological parameters characterized using process 100 are structural and characterize nodes and their links, i.e., without encompassing the activity in the neural network and its function. Such structural parameters can be used, e.g., to construct or reconstruct a neural network device.
  • As discussed further below, in other implementations, functional rather than structural topological parameters can be characterized. For example, rather than determining a structural adjacency matrix, a functional connectivity matrix can be determined. Nevertheless, topological techniques can be applied to characterize the functional activity of nodes and their links. Such functional parameters can be used, e.g., to construct or reconstruct a neural network device that has desirable processing activity. As another example, such functional parameters can be used to distinguish different inputs into the neural network device.
  • In the illustrated implementation, process 100 includes computing binary adjacency matrices for a neural network device at 105. An adjacency matrix is a square matrix that can be used to represent a finite graph, e.g., such as a finite graph that itself represents a neural network device. The entries in an adjacency matrix—generally, a binary bit (i.e., a “1” or a “0”)—indicate whether pairs of nodes are structurally linked or not in the graph. Because each entry only requires one bit, a neural network device can be represented in a very compact way.
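  • A minimal sketch of computing such a binary adjacency matrix, assuming nodes are indexed 0 . . . n−1 and directed links are given as (source, target) pairs:

      import numpy as np

      def structural_adjacency(n_nodes, edges):
          # A[j, k] == 1 iff there is a directed structural link from node j to node k
          A = np.zeros((n_nodes, n_nodes), dtype=np.uint8)
          for j, k in edges:
              A[j, k] = 1
          return A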
  • Process 100 also includes determining associated directed flag complexes at 110. Directed flag complexes are oriented simplicial complexes that encode the connectivity and the edge directions of the underlying directed graph. In particular, each directed n-clique in the underlying graph corresponds to an oriented (n−1)-simplex in the flag complex, and the faces of a simplex correspond to the directed subcliques of its associated directed clique. Associated directed flag complexes can be determined for structural links (e.g., the existence of directional links), for functional links (e.g., directed activity along links), or both.
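  • One way to enumerate the simplices of a directed flag complex is to grow ordered tuples one vertex at a time; the sketch below assumes the binary adjacency matrix of the previous example and a graph with no self-loops or duplicate edges, per the conditions on τ given below. It is exponential in the worst case, so it is intended only for small illustrative graphs:

      import numpy as np

      def directed_flag_complex(A, max_dim=None):
          """List the simplices of the directed flag complex of the digraph
          with binary adjacency matrix A. An n-simplex is an ordered tuple
          (v0, ..., vn) with an edge vi -> vj for every i < j; the returned
          list holds the n-simplices at index n."""
          n = A.shape[0]
          levels = [[(v,) for v in range(n)]]  # 0-simplices: the vertices
          while levels[-1] and (max_dim is None or len(levels) <= max_dim):
              nxt = []
              for sigma in levels[-1]:
                  # extend by any vertex that receives an edge from every member
                  mask = np.ones(n, dtype=bool)
                  for v in sigma:
                      mask &= A[v, :].astype(bool)
                  nxt.extend(sigma + (int(w),) for w in np.nonzero(mask)[0])
              if not nxt:
                  break
              levels.append(nxt)
          return levels

    The simplex counts per dimension are then simply [len(level) for level in levels].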
  • FIG. 2 is a schematic representation of different oriented simplices 205, 210, 215, 220. Simplex 205 has dimension 0, simplex 210 has dimension 1, simplex 215 has dimension 2, and simplex 220 has dimension 3. Simplices 205, 210, 215, 220 are oriented simplices in that they have a fixed orientation (that is, there is a linear ordering of their nodes). The orientation can embody the structure of the links, the function of the links, or both.
  • In the figures, the nodes of the neural network device are represented as dots whereas the links between the nodes are represented as lines connecting these dots. The lines include arrows that denote either the direction of structural link or the direction of activity along the link. For the sake of convenience, all links herein are illustrated as unidirectional links, although this is not necessarily the case.
  • The nodes and links in a neural network device can be treated as vertices and edges in topological methods. The network or subnetwork of nodes and links can be treated as a graph or a subgraph in topological methods. For this reason, the terms are used interchangeably herein.
  • FIG. 3 is a schematic representation of an example directed graph 305 and its associated flag complex 310. In further detail, a “directed graph” G consists of a pair of finite sets (V, E) and a function τ: E→V×V. The elements of the set V are the “nodes” or “vertices” of G, the elements of E are the “edges” of G, and the function τ associates with each edge an ordered pair of vertices. The “direction” of a connection or edge e with τ(e)=(v1, v2) is taken to be from τ1(e)=v1, the source node or vertex, to τ2(e)=v2, the target node or vertex.
  • The function τ is required to satisfy the following two conditions.
      • (1) For each e∈E, if τ(e)=(v1, v2), then v1≠v2, i.e., there are no loops in the graph.
      • (2) The function τ is injective, i.e., for any pair of vertices (v1, v2), there is at most one edge directed from v1 to v2.
  • A vertex v∈G is said to be a “sink” if τ1(e)≠v for all e∈E. A vertex v∈G is said to be a “source” if τ2(e)≠v for all e∈E.
  • A “morphism of directed graphs” from a directed graph G=(V, E, τ) to a directed graph G′=(V′, E′, τ′) consists of a pair of set maps α: V→V′ and β: E→E′ such that β takes an edge in G with source v1 and target v2 to an edge in G′ with source α(v1) and target α(v2), i.e., τ′∘β=(α, α)∘τ. Two graphs G and G′ are “isomorphic” if there is a morphism of graphs (α, β): G→G′ such that both α and β are bijections; such a morphism can be called an “isomorphism of directed graphs.”
  • A “path” in a directed graph G consists of a sequence of edges (e1, . . . , en) such that for all 1≤k<n, the target of ek is the source of ek+1, i.e., τ2(ek)=τ1(ek+1). The “length” of the path (e1, . . . , en) is n, i.e., the number of edges of which the path is composed. If, in addition, the target of en is the source of e1, i.e., τ2(en)=τ1(e1), then (e1, . . . , en) is an “oriented cycle.”
  • An “abstract oriented simplicial complex” is a collection S of finite, ordered sets with the property that if σ∈S, then every subset τ of σ is also a member of S. A “subcomplex” of an abstract oriented simplicial complex is a sub-collection S′⊆S that is itself an abstract oriented simplicial complex. For the sake of convenience, abstract oriented simplicial complexes are referred to herein as “simplicial complexes.”
  • The elements of a simplicial complex S are called its “simplices.” A simplicial complex is said to be “finite” if it has only finitely many simplices. If σ∈S, we define the “dimension” of σ, denoted dim(σ), to be |σ|−1, i.e., the cardinality of the set σ minus one. If σ is a simplex of dimension n, then we refer to σ as an n-simplex of S. The set of all n-simplices of S is denoted Sn. A simplex τ is said to be a face of σ if τ is a subset of σ of a strictly smaller cardinality. A “front face” of an n-simplex σ=(v0, . . . , vn) is a face τ=(v0, . . . , vm) for some m<n. Similarly, a “back face” of σ is a face τ′=(vi, . . . , vn) for some 0<i<n. If σ=(v0, . . . , vn)∈Sn, then the ith face of σ is the (n−1)-simplex σi obtained from σ by removing the node or vertex vi.
  • A simplicial complex gives rise to a topological space by means of the construction known as “geometric realization.” In brief, one associates a point (a standard geometric 0-simplex) with each 0-simplex, a line segment (a standard geometric 1-simplex) with each 1-simplex, a filled-in triangle (a standard geometric 2-simplex) with each 2-simplex, etc., glued together along common faces. The intersection of two simplices in S, neither of which is a face of the other, is a proper subset, and hence a face, of both of them. In the geometric realization this means that the geometric simplices that realize the abstract simplices intersect on common faces, and hence give rise to a well-defined geometric object. A geometric n-simplex is nothing but an (n+1)-clique, canonically realized as a geometric object. An n-simplex is said to be “oriented” if there is a linear ordering on its vertices. In this case the corresponding (n+1)-clique is said to be a “directed (n+1)-clique.”
  • If S is a simplicial complex, then the union S(n)=Sn∪ . . . ∪S0, which is called the “n-skeleton” of S, is a subcomplex of S. We say that S is “n-dimensional” if S=S(n), and n is minimal with this property. If S is n-dimensional, and k≤n, then the collection Sk∪ . . . ∪Sn is not a subcomplex of S because it is not closed under taking subsets. However if one adds to that collection all the faces of all simplices in Sk∪ . . . ∪Sn, one obtains a subcomplex of S called the “k-coskeleton” of S, which we will denote by S(k).
  • Directed graphs such as directed graph 305 give rise to abstract oriented simplicial complexes. Let G=(V, E, τ) be a directed graph. The “directed flag complex” associated with G is the abstract simplicial complex S=S(G), with S0=V and whose n-simplices Sn for n≥1 are the (n+1)-tuples (v0, . . . , vn) of vertices such that for each 0≤i<j≤n, there is an edge (or connection) in G from vi to vj. Notice that because of the assumptions on τ, an n-simplex in S is characterised by the ordered sequence (v0, . . . , vn), but not by the underlying set of nodes or vertices. For instance, (v1, v2, v3) and (v2, v1, v3) are distinct 2-simplices with the same set of vertices.
  • Returning to process 100 and FIG. 1, for each node in the neural network device, there is a vertex in the underlying directed graph that is labelled with the unique global identification number (GID). The (j, k)-coefficient of the structural adjacency matrix is a binary “1” if and only if there is a directed connection in the neural network from the node/vertex with GID j to the node/vertex with GID k. The adjacency matrix can thus be referred to as the “structural matrix” of the neural network, and the directed flag complex can be referred to as a “neocortical microcircuit complex” or “N-complex.”
  • Process 100 also includes determining a parameter of a neural network device based on the relevant matrix and/or the N-complex using topological methods at 115. There are several different parameters that can be determined.
  • For example, the simplices in the neural network in each dimension can simply be counted. Such simplex counts can indicate the degree of “structuring” or “ordering” of the nodes and connections (or the activity) within the neural network device. As another example, the Euler characteristic of all N-complexes can be computed. As yet another example, the Betti numbers of a simplicial complex can be computed. In particular, the n-th Betti number, βn, counts the number of chains of simplices intersecting along faces to create an “n-dimensional hole” in the complex, which requires a certain degree of organization among the simplices.
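  • Given per-dimension simplex counts, the Euler characteristic reduces to an alternating sum, as in this sketch; the Betti-number computation itself is more involved and is typically delegated to specialized homology software, so it is not sketched here:

      def euler_characteristic(simplex_counts):
          # chi = sum over dimensions n of (-1)^n * (number of n-simplices)
          return sum((-1) ** n * count for n, count in enumerate(simplex_counts))

      # e.g., reusing the directed_flag_complex sketch above:
      # counts = [len(level) for level in directed_flag_complex(A)]
      # chi = euler_characteristic(counts)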
  • FIG. 4 is a flowchart of a process 400 for distinguishing—using topological methods—functional responses to different input patterns fed into a neural network.
  • Process 400 includes receiving, at the neural network, one or more input patterns at 405. In some implementations, the one or more input patterns can correspond to a known input. For example, process 400 can be part of the training of a neural network, the design of a process for reading the processed output of a neural network, the testing of a neural network, and/or the analysis of an operational neural network. In these cases, known input patterns can be used to confirm that the functional response of the neural network is appropriate. In other implementations, the one or more input patterns can correspond to an unknown input(s). For example, process 400 can be part of the operation of a trained neural network and the functional response of the neural network can represent the processed output of the neural network.
  • Process 400 also includes dividing the activity in the neural network into time bins at 410. A time bin is a duration of time. The total functional activity in a neural network responsive to an input pattern (e.g., signal transmission along edges and/or nodes) can be subdivided according to the time in which the activity is observed.
  • In some implementations, the duration of the time bins can be chosen based on the extent to which activity in each bin is distinguishable when different input patterns are received. Such an ex post analysis can be used, e.g., when designing a process for reading the processed output of a neural network using input patterns that correspond to known inputs. In other words, the process for reading the output of a neural network can be adapted to the observed activity responsive to different input patterns. This can be done, e.g., to ensure that the process for reading the neural network appropriately captures the processing results when input patterns that correspond to unknown inputs are received.
  • In some implementations, the duration of the time bins can be constant. For example, in neural network devices that are modeled after biological neural networks, the duration of the time bins can be 5 ms.
  • In some implementations, the time bins can commence after the input pattern is received at the neural network. Such a delay can allow the input pattern to propagate to relevant portions of the neural network device before meaningful processing results are expected. Subsequent time bins can be defined with respect to the end of a preceding time bin or with respect to the time after the input pattern is received.
  • Process 400 also includes recording a functional connectivity matrix for each time bin at 415. A functional connectivity matrix is a measure of the response of each edge or link in a neural network device to a given input pattern during a time bin. In some implementations, a functional connectivity matrix is a binary matrix where active and inactive edges/links are denoted (e.g., with a binary “1” and “0”, respectively). A functional connectivity matrix is thus akin to a structural adjacency matrix except that the functional connectivity matrix captures activity.
  • In some implementations, an edge or connection can be denoted as “active” when a signal is transmitted along the edge or connection from a transmitting node to a receiving node during the relevant time bin and when the receiving node or vertex responds to the transmitted signal by subsequently transmitting a second signal.
  • In general, the responsive second signal need not be transmitted within the same time bin as the received signal. Rather, the responsive second signal can be transmitted within some duration after the received signal, e.g., in a subsequent time bin. In some implementations, responsive second signals can be identified by identifying signals that are transmitted by the receiving node within a fixed duration after the receiving node receives the first signal. For example, the duration can be about 1.5 times as long as the duration of a time bin, or about 7.5 ms in neural network devices that are modeled after biological neural networks.
  • In general, the responsive second signal need not be responsive solely to a single signal received by the node. Rather, multiple signals can be received by the receiving node (e.g., along multiple edges or connections). The receiving node or vertex can “respond” to all or a portion of the received signals in accordance with the particularities of the information processing performed by that node. In other words, as discussed above, the receiving node or vertex can, e.g., weight and sum multiple input signals, pass the sum through one or more non-linear activation functions, and output one or more output signals, e.g., operating as an accumulator in accordance with an integrate-and-fire model.
  • In some implementations, the transmitted signals are spikes, and each (j, k)-coefficient in a functional connectivity matrix is denoted as “active” if and only if the following three conditions are satisfied (a code sketch illustrating these conditions follows the list), where s_i^j denotes the time of the i-th spike of node j:
      • (1) The (j, k)-coefficient of the structural matrix is 1, i.e., there is a structural connection from the node or vertex j to the node or vertex k;
      • (2) the node or vertex with GID j spikes in the n-th time bin; and
      • (3) the node or vertex with GID k spikes within an interval after the spike of the node or vertex with GID j.
        In effect, it is assumed that spiking of node or vertex k is influenced by the spiking of node or vertex j.
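  • A sketch of how the three conditions above might be combined into a functional connectivity matrix for time bin n; the 5 ms bins and 7.5 ms window echo the example durations given above, and the spike-time layout matches the earlier sketches, so all of these are illustrative assumptions:

      import numpy as np

      def functional_connectivity_matrix(A, spikes, n, bin_ms=5.0, window_ms=7.5):
          # F[j, k] = 1 iff: (1) A[j, k] == 1, (2) node j spikes during bin n,
          # and (3) node k spikes within window_ms after that spike of j.
          F = np.zeros_like(A)
          t0, t1 = n * bin_ms, (n + 1) * bin_ms
          for j, k in zip(*np.nonzero(A)):               # condition (1)
              spikes_k = spikes.get(int(k), [])
              for t_j in spikes.get(int(j), []):
                  if not (t0 <= t_j < t1):               # condition (2)
                      continue
                  if any(t_j < t_k <= t_j + window_ms for t_k in spikes_k):
                      F[j, k] = 1                        # condition (3)
                      break
          return F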
  • Process 400 also includes characterizing one or more parameters of the activity recorded in the functional connectivity matrix using topological methods at 420. For example, the activity recorded in the functional connectivity matrix can be characterized using one or more of the approaches used in process 100 (FIG. 1), substituting the functional connectivity matrix for the structural adjacency matrix. For example, a characterization of a topological parameter of the neural network device can be determined for different functional connectivity matrices from different time bins. In effect, the structuring or ordering of the activity in the neural network can be determined at different times.
  • Process 400 also includes distinguishing the functional response of the neural network device to a received input pattern from other functional responses of the neural network device to other received input patterns based on the characterized topological parameters at 425. As discussed above, in some instances, input patterns that correspond to known inputs can be used in a variety of contexts. For example, during testing of a neural network, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can indicate whether the neural network device is functioning properly. As another example, during training of a neural network, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can provide an indication that training is complete. As another example, in analyzing an operational neural network, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can provide an indication of the processing performed by the neural network device.
  • In instances wherein the input patterns can correspond to unknown inputs, distinguishing the functional response of the neural network device to a received input pattern from functional responses of the neural network device to other patterns can be used to read the output of the neural network device.
  • Distinguishing functional responses to different input patterns fed into a neural network using topological methods can also be performed in a variety of other contexts. For example, changes in functional responses over time can be used to identify the results of training and/or the structures associated with training.
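  • As one concrete, purely illustrative way of distinguishing responses: summarize each response as a vector of per-bin simplex counts and compare the vectors by distance. The feature choice, the fixed maximum dimension, and the nearest-reference rule below are all assumptions, and directed_flag_complex refers to the earlier sketch:

      import numpy as np

      def response_signature(functional_matrices, max_dim=3):
          """Concatenate per-bin simplex counts (dims 0..max_dim) into one vector."""
          features = []
          for F in functional_matrices:
              levels = directed_flag_complex(F, max_dim=max_dim)  # sketch above
              counts = [len(level) for level in levels]
              counts += [0] * (max_dim + 1 - len(counts))  # pad missing dimensions
              features.extend(counts)
          return np.asarray(features, dtype=float)

      def nearest_known_input(signature, known):
          """known: dict mapping input label -> reference signature vector."""
          return min(known, key=lambda label: np.linalg.norm(signature - known[label]))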
  • A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. Accordingly, other embodiments are within the scope of the following claims.

Claims (16)

1-20. (canceled)
21. A computer-implemented method, comprising:
receiving, at a neural network, a first input;
dividing functional activity in the neural network that is responsive to the first input into first time bins;
recording, for each of the first time bins, a first measure of the functional activity in the neural network during that first time bin that is responsive to the first input;
characterizing one or more parameters of the recorded first measure of the functional activity using topological methods;
receiving, at the neural network, a second input;
dividing functional activity in the neural network that is responsive to the second input into second time bins;
recording, for each of the second time bins, a second measure of the functional activity in the neural network during that second time bin that is responsive to the second input;
characterizing one or more parameters of the recorded second measure of the functional activity using topological methods; and
distinguishing the functional response of the neural network to the first input from the functional response of the neural network to the second input based on the characterized topological parameters.
22. The method of claim 21, wherein the topological methods comprise determining associated directed flag complexes.
23. The method of claim 21, wherein the functional activity in the neural network that is responsive to the first and second inputs includes signal transmission along edges of the neural network.
24. The method of claim 21, wherein the duration of the time bins is constant.
25. The method of claim 21, wherein the measure of the functional activity is a functional connectivity matrix.
26. The method of claim 21, wherein the first input and the second input are known inputs.
27. The method of claim 21, wherein the method further comprises determining that the neural network is functioning properly or trained based on the distinguishing of the functional response to the first input from the functional response to the second input.
28. A computer-implemented method, comprising:
receiving, at a first neural network, an input;
dividing functional activity in the first neural network that is responsive to the input into time bins;
recording, for each of the time bins, a measure of the functional activity in the first neural network during that time bin that is responsive to the input;
characterizing one or more parameters of the recorded measure of the functional activity using topological methods; and
reconstructing at least some of the functioning of the first neural network in a second neural network using the characterizations provided by the topological methods.
29. The method of claim 28, wherein the second neural network is simpler than the first neural network.
30. The method of claim 28, wherein the functioning of the first neural network is reconstructed in the second neural network without training the second neural network.
31. The method of claim 28, wherein the topological methods comprise determining associated directed flag complexes.
32. The method of claim 28, wherein the functional activity in the neural network that is responsive to the input includes signal transmission along edges of the neural network.
33. The method of claim 28, wherein the duration of the time bins is constant.
34. The method of claim 28, wherein the measure of the functional activity is a functional connectivity matrix.
35. The method of claim 28, wherein the method further comprises determining that the second neural network is functioning properly or trained based on a functional response of the second neural network to the input.